
    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    June 07, 2024

    Podcast Summary

    • Canada AI Research: Canada's strong scientific research funding has contributed to the country's early success in AI, and a $2.4 billion compute strategy aims to keep talent at home by increasing accessibility to computing power.

      Canada has a rich history in AI research, with many foundational discoveries coming from Canadian universities and scientists. Prime Minister Justin Trudeau, a long-time science fiction fan and former engineering student, has been thinking about the intersection of technology and society for much of his life. He believes that Canada's strong scientific research funding has played a significant role in the country's early success in AI. To keep this talent at home, the Canadian government recently announced a $2.4 billion compute strategy to increase accessibility to computing power for businesses, researchers, and academics.

    • Canada's AI advantage: Canada invests in AI research and infrastructure; offers lower temperatures, clean energy, a stable political climate, and diverse talent; and focuses on retaining top talent through better access to capital and a welcoming environment for skilled immigrants, while addressing potential labor market impacts.

      Canada offers several advantages when it comes to data center investment, including lower temperatures, access to clean and affordable green electricity, a stable political climate, and a diverse talent pool. However, retaining top tech talent remains a challenge, as many are lured away by higher salaries and opportunities in the US. To address this, Canada is focusing on providing better access to capital and creating a welcoming environment for immigrants with tech skills. The country's investment of $2.4 billion in AI research and infrastructure is part of an effort to secure Canada's AI advantage, which includes leveraging the country's expertise in cybersecurity and diversity. Despite concerns about liberal democracies being outmatched by countries with vast resources, Trudeau believes that the creativity and dynamism of an ecosystem are essential to driving AI innovation. The goal is to ensure that AI benefits everyone, rather than exacerbating wealth inequality. Trudeau also acknowledges the potential impact of AI on labor markets, and his government has allocated $50 million to support workers affected by the technology. Overall, the vision is for AI to create opportunities for people and improve lives, rather than leading to mass unemployment.

    • AI ethics: AI should be used to enhance human capabilities, but ethical considerations and potential risks must be addressed in areas like drafting, education, and AI development.

      Technology, including AI, should be embraced as a tool to enhance human capabilities, but it's crucial to consider the ethical implications and potential risks. The use of AI in drafting, for instance, can help writers focus on content and their unique strengths, but it also raises concerns about productivity expectations, power imbalances, and overloading employees. In education, while technology like AI can make learning more accessible and efficient, it's essential to maintain a balance and foster critical thinking skills. In the debate around existential risk and AI, it's important to be responsible and proactive in managing the technology while also recognizing its potential benefits. The conversation around regulating tech companies, such as taxing their digital ad revenue or requiring compensation for AI's use of news content, is part of an ongoing effort to maintain a strong democracy and support local journalism.

    • Digital well-being of children: Governments can pressure platforms to prioritize issues like journalistic integrity, protection of free speech, and protection against hate speech to safeguard children's digital well-being, while citizens should be empowered to be discerning, fact-check, and support local journalism.

      There's a need for platforms to take on greater responsibility for the well-being of users, especially children, in the digital world. While governments may not have the tools to regulate online content effectively, they can put pressure on platforms to prioritize issues like journalistic integrity, protection of free speech, and protection against hate speech. The concerns around TikTok, including data security and its impact on children, are distinct issues that should not be conflated. Instead, we should focus on empowering citizens to be discerning and to fact-check, and on supporting local journalism to cover races and issues that aren't in the spotlight. AI can be a useful tool, but it's important to scrutinize its recommendations and ensure they're based on facts. The rise of deep fakes and synthetic media underscores the urgency of these issues, and the need for a collective effort to combat misinformation and disinformation.

    • AI and National Security: Governments and the private sector have crucial roles in AI development and regulation for national security and public safety. OpenAI faces concerns about safety and transparency, and governments must stay informed and use AI responsibly to counteract potential threats.

      Both governments and the private sector have crucial roles to play in the development and regulation of AI technologies for national security and public safety. Prime Minister Trudeau emphasized the importance of governments staying informed about advanced AI technologies and using them responsibly to counteract potential threats. Meanwhile, a group of current and former OpenAI employees, led by Daniel Kokotajlo, has raised concerns about safety and transparency within the company and is advocating for stronger whistleblower protections and the ability to report concerns to external authorities. OpenAI has responded by creating a new safety committee and maintaining its commitment to creating safe and capable AI systems. The power dynamics within OpenAI and the potential conflicts of interest with Sam Altman's investments are also important considerations in the ongoing debate about the ethical and safe development of AI.

    • AI safety regulation: Former OpenAI researcher Daniel Kokotajlo raises concerns about potential safety issues being deprioritized in the rapidly advancing AI industry, citing Microsoft's deployment of a powerful AI model without safety board approval as a worrying sign for self-regulation.

      As we approach the development of generally intelligent systems, the importance of trust and safety in the companies and individuals leading this field cannot be overstated. Daniel Kokotajlo, a former researcher at OpenAI, shares concerns about the potential for safety to be deprioritized as AI progresses exponentially. He recounts an incident in which Microsoft deployed a powerful AI model without approval from the safety board, which he sees as a worrying sign for self-regulation in the industry. These issues are becoming increasingly significant as the world approaches the point where AI progress is a major concern for everyone. It's essential to pay attention to the people and companies leading this field and ensure they prioritize safety and good governance.

    • OpenAI safety concerns: Dan expressed disappointment over OpenAI's handling of safety and transparency in GPT-4 development, emphasizing the need for external oversight and accountability due to concerns over rapid scaling and lack of testing.

      The discussion highlights concerns about OpenAI's handling of safety and transparency in their development of advanced AI models, specifically GPT-4. Dan expresses disappointment over the company's seemingly reckless approach, lack of testing, and rapid scaling of capabilities. He emphasizes the need for external oversight and accountability. The conversation also touches upon the hardware overhang argument and the potential consequences of accelerated chip production. Dan's decision to refuse to sign the off-boarding paperwork, potentially losing $1.7 million, underscores his commitment to speaking out about these issues. The board crisis and Sam Altman's reinstatement further intensified Dan's concerns and ultimately led him to question whether OpenAI was the right place for him.

    • AI Ethics and Transparency: Forming a group to propose policies promoting transparency and open criticism within AI labs, including an anonymous reporting hotline, a culture of open criticism, and the ability to discuss confidential information in the context of safety concerns.

      Ethical concerns and transparency are crucial in the development and deployment of artificial intelligence (AI). The speaker, a former OpenAI employee, shared his experience of being asked to sign a non-disparagement agreement and refusing to do so due to ethical concerns. The agreement would have prevented him from criticizing the company in the future. After the Vox report on this issue, OpenAI responded, stating it had never enforced such agreements and was embarrassed about their existence. The speaker then formed a group of concerned employees to propose policies promoting transparency and open criticism within AI labs. These proposals include an anonymous reporting hotline, a culture of open criticism, and the ability for employees to discuss confidential information in the context of raising safety concerns. While declining to reveal confidential information that could back up some of his claims, the speaker emphasizes the importance of these policies. The group's strategy is to build momentum for these policies through public discourse and advocacy rather than waiting for a catastrophic event.

    • AI safety narrative: The imminent danger of AGI and its potential harm to humanity needs a more persuasive narrative to resonate with the public, even as some AI researchers predict it could become a reality within a few years.

      The safety concerns surrounding artificial intelligence (AI) and its potential impact on humanity need a more persuasive narrative to resonate with the public. Daniel Kokotajlo, a former OpenAI researcher, believes that the capabilities of publicly available AI models and the progress made over the last few years are enough to suggest that AGI could be a reality within the next few years. However, his predictions and the high probability he assigns to AGI causing harm to humanity have led some to dismiss his views. Kokotajlo admits that his perspective is more pessimistic than most, but he believes that the imminent danger of AGI warrants open discussion with the public. He has received mixed reactions since speaking out, with some praising his bravery and others feeling betrayed. He hopes to step out of the media spotlight and focus on his research, but he remains committed to advocating for open dialogue about the potential risks of AGI.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholder vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.