
    Gemini's Culture War + Kara Swisher Burns Us + SCOTUS Takes Up Content Moderation

    March 01, 2024

    Podcast Summary

    • Disappointing Willy Wonka event and inaccurate AI-generated images: A recent Willy Wonka event in Glasgow was a letdown for families, while Google's new AI model, Gemini, generated historically inaccurate images, raising ethical concerns about AI-generated content.

      The recent Willy Wonka event in Glasgow, Scotland, which was advertised with lavish AI-generated imagery as an immersive chocolate experience for children, turned out to be a major disappointment. Families paid a hefty price for tickets only to find a warehouse with minimal decorations and were given just two jelly beans each. The person hired to play Willy Wonka was given a script filled with AI-generated gibberish and was introduced to a new character, "The Unknown," an evil chocolate maker who supposedly lived in the walls. Meanwhile, in the tech world, Google's new AI model, Gemini, has sparked controversy due to its inability to generate historically accurate images. It produced images depicting the founding fathers and Catholic popes as people of color, among other historical inaccuracies. Google has since stopped Gemini's ability to generate images of people, but the incident highlights the ongoing challenges and ethical considerations surrounding AI-generated content.

    • Google's AI text-based model, Gemini, sparks controversy with biased responses: Google's AI model, Gemini, caused controversy due to biased responses and refusal to generate job descriptions for certain industries, highlighting potential limitations and biases of artificial intelligence, and the need for ongoing efforts to address these issues.

      The recent scandal involving Google's AI text-based model, Gemini, highlights the potential biases and limitations of artificial intelligence. Gemini's refusal to generate job descriptions for certain industries, such as oil and gas lobbying and meat marketing, along with its historically inaccurate responses to user prompts, sparked controversy and accusations of overly woke ideology and left-wing propaganda. Google's CEO, Sundar Pichai, acknowledged the offense caused by Gemini's responses and promised improvements, including updated product guidelines, structural changes, and technical recommendations. Critics argue that these issues will only grow more significant if artificial intelligence becomes massively powerful, since whatever ideology these systems absorb could shape the future. The underlying cause of these biases is the training data used to develop these models, which can perpetuate stereotypes and reflect the median output of the internet. It's important to recognize that these models have limitations and potential biases, and ongoing efforts are necessary to address these issues and ensure fair and accurate outputs.
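
      To make the training-data point concrete, here is a toy, self-contained sketch (invented data, no real model or Gemini API) of how a system that simply echoes the most frequent pattern in its corpus turns a statistical skew into an absolute rule.

```python
# A toy illustration (no real model, invented data) of the training-data
# point above: a system that returns the most frequent continuation in its
# corpus turns a statistical skew into an absolute rule.

from collections import Counter

# Hypothetical scraped corpus: (occupation, pronoun) pairs with a 9:1 skew,
# standing in for real-world imbalances in web text.
corpus = [("nurse", "she")] * 9 + [("nurse", "he")]

def most_likely_completion(prompt: str, data: list) -> str:
    """'Median output of the internet': pick the most common continuation."""
    counts = Counter(word for p, word in data if p == prompt)
    return counts.most_common(1)[0][0]

# The 10% of differing examples vanish entirely from the output:
print(most_likely_completion("nurse", corpus))  # -> "she", every time
```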

    • Google's image generation model, Gemini, faces controversy over covert prompt rewriting: AI systems, like Google's Gemini, are still in experimental stages and can produce unexpected and sometimes offensive results. Users should approach AI outputs with a critical, yet understanding, perspective and developers should be transparent and implement guardrails to prevent offensive outcomes.

      The recent controversy surrounding Google's image generation model, Gemini, highlights the challenges and limitations of even the most advanced AI systems. Google attempted to address potential bias issues by covertly rewriting user prompts to include more diverse options, but this resulted in unexpected and sometimes offensive outcomes. This episode underscores the complexity of creating AI that can accurately and appropriately respond to user queries. It's a reminder that these models are still in their experimental stages and are not infallible. Instead of reacting with outrage, users should acknowledge the limitations and potential errors of these systems and approach their outputs with a critical, yet understanding, perspective. Google could improve Gemini by being more transparent about its prompt transformation process and implementing stricter guardrails to prevent offensive results. Ultimately, it's essential to remember that AI is a tool, and like any tool, it requires careful handling and understanding of its capabilities and limitations.
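
      As an illustration of the prompt-transformation pattern described above, here is a minimal sketch. Everything in it is hypothetical: Google has not published Gemini's actual rewriting logic, and `generate_image` is a stub standing in for an image model. The guardrail and transparency step mirror the fixes suggested in the discussion.

```python
# A minimal sketch of the covert prompt-rewriting pattern described above,
# plus the two suggested fixes: a guardrail for historically specific
# prompts and a transparency notice. Every name and rule here is
# hypothetical; generate_image is a stub standing in for an image model.

HISTORICAL_MARKERS = ("founding fathers", "pope", "1800s", "medieval")

def rewrite_prompt(user_prompt: str) -> str:
    """Inject diversity language into the prompt (the covert step)."""
    lowered = user_prompt.lower()
    # Guardrail: leave historically specific prompts untouched.
    if any(marker in lowered for marker in HISTORICAL_MARKERS):
        return user_prompt
    return f"{user_prompt}, showing a diverse range of people"

def generate_image(prompt: str) -> str:
    return f"<image generated from: {prompt!r}>"  # stub for the real model

def generate_with_transparency(user_prompt: str) -> str:
    final_prompt = rewrite_prompt(user_prompt)
    if final_prompt != user_prompt:
        # Transparency: tell the user what was actually sent to the model.
        print(f"Note: your prompt was expanded to {final_prompt!r}")
    return generate_image(final_prompt)

print(generate_with_transparency("a portrait of the founding fathers"))
print(generate_with_transparency("a software engineer at a whiteboard"))
```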

    • AI models could benefit from asking follow-up questions: Implementing follow-up questions in AI models could improve accuracy and relevance, but raises concerns around transparency and potential manipulation.

      AI language models like Gemini could benefit from asking follow-up questions to provide more accurate and relevant responses to users. Currently, these models are programmed to give only one answer to a query, but users' intentions can vary, and follow-up questions could help narrow down the context and ensure the response aligns with the user's needs. However, implementing this feature comes with costs and potential controversy, as it may be perceived as an attempt to manipulate or change users' queries for certain agendas. The controversy surrounding Google's Gemini and its prompt transformation feature is likely to continue, with potential implications for the AI industry as a whole. This issue highlights the importance of transparency and clear communication between users and AI models to build trust and ensure accurate and meaningful responses. The debate around AI bias and its impact on society is far from over, and it's crucial for companies to address these concerns proactively to avoid negative backlash and maintain user trust.
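
      Here is one way the clarify-before-answering idea could look in code, as a hedged sketch: `llm` stands for any chat-model callable and the prompt wording is illustrative only, not any vendor's actual API.

```python
# A sketch of the clarify-before-answering pattern, assuming only a generic
# `llm` callable (any chat model) and an `ask_user` callback.

from typing import Callable

def answer_with_clarification(
    query: str,
    llm: Callable[[str], str],
    ask_user: Callable[[str], str] = input,
) -> str:
    # Step 1: ask the model whether the request is ambiguous.
    probe = llm(
        "If the request below is ambiguous, reply with ONE short clarifying "
        "question. Otherwise reply with exactly OK.\n\nRequest: " + query
    )
    if probe.strip() != "OK":
        # Step 2: surface the follow-up question instead of silently guessing.
        detail = ask_user(probe)
        query += f"\n\nUser clarification: {detail}"
    # Step 3: answer with the narrowed-down context.
    return llm(query)
```

      Under this pattern, a request like "draw a pope" might first surface "From any particular era or region?" rather than the model quietly substituting its own guess, which keeps the transformation visible to the user.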

    • Establishing rules for AI development and considering user emotional response: Clear rules and democratic input are crucial for AI development to prevent crises and offensive responses. Custom AIs might reduce pressure but could lead to filter bubbles. Google's search engine approach presents users with diverse viewpoints. Considering user emotional response is essential.

      As AI technology advances, particularly in chatbots, it's crucial to establish clear rules and democratic input in their development to prevent potential crises and offensive responses. The use of custom AIs might help reduce the pressure on these models to perfectly predict users' politics, but it could also lead to filter bubbles and a lack of exposure to diverse viewpoints. Google's experience with Gemini highlights the benefits of the traditional search engine approach, where users are presented with a list of links rather than a single answer from a chatbot. As we move forward, it's essential to consider the emotional response users have when interacting with AI and how that differs from the search engine experience. Companies like Google should also consider making the source of information from chatbots more prominent to mitigate user backlash. Later in the episode, we speak with Kara Swisher, a legendary tech journalist and media entrepreneur, about her new book and her insights on the tech industry. Stay tuned!

    • Kara Swisher's Disillusionment with Tech Industry Titans: Journalist Kara Swisher shares her experiences and insights on the tech industry and media, reflecting on the past and discussing current issues in her new memoir "Burn Book".

      Kara Swisher, a renowned tech journalist and friend of the show, shares her disillusionment with the antics of some tech industry titans in her new memoir "Burn Book." She's been covering Silicon Valley for decades and has stories about Elon Musk, Mark Zuckerberg, and more. The podcast feed our episodes use was once hers, but she left The New York Times a few years ago, leading to some drama over the feed. Despite this, we're excited to talk to her about her book, her experiences in tech, and her insights on the industry and media. Swisher is known for her productivity, but in this memoir she reflects on the past and shares her thoughts on the current state of tech and media. Listen in for an energetic and honest conversation with Kara Swisher. (Note: This conversation contains strong language; listener discretion is advised.)

    • Kara Swisher's experiences interviewing tough figures and writing her latest book: Despite being known as a "soft touch", Swisher was challenged to ask uncomfortable questions during interviews, including about the worst thing a subject had done. Initially reluctant to write a book, she was convinced by her editor and a significant financial offer.

      Kara Swisher, a well-known journalist and author, shared her experiences with interviewing tough figures in business and her decision to write her latest book. She revealed that she had been known as a "soft touch" in the industry due to her gentle approach, and she was challenged by her colleagues about Casey Newton, a former tenant of hers and a subject of her book. She opened up about the most uncomfortable question she asked during her interviews, which was about the worst thing a subject had done. Regarding her book, she shared that she initially didn't want to write it, but her editor's persistence and a significant financial offer convinced her. She also mentioned that her friend and business partner Walt Mossberg's decision not to write a memoir influenced her to do so. Overall, the conversation highlighted Swisher's unique approach to journalism and her candidness about her experiences in the industry.

    • A journalist's reflection on the past and future of Silicon Valley: Swisher's book chronicles her experiences and insights from reporting on Silicon Valley for over two decades, offering a unique perspective on the industry's evolution and disillusionment.

      Kara Swisher, a well-known journalist and tech industry insider, wrote her memoir "Burn Book: A Tech Love Story" out of a sense of duty and a desire to remember and chronicle the disillusionment she experienced in the tech industry over the past two decades. Despite her natural inclination to focus on the future, she found herself reflecting on the past and the many memories and experiences that had shaped her perspective. The process of writing the book brought back a flood of memories, some of which she had forgotten, and helped her remember important details and context. The book also showcases Swisher's early skepticism of the tech industry, which she expressed through her journalism even before the dot-com bubble burst. Overall, Swisher's book offers a unique and insightful perspective on the evolution of the tech industry and the disillusionment that came with it.

    • Kara Swisher's Unique Interactions with Tech Giants: Kara Swisher's challenging yet engaging approach led to memorable moments in tech journalism, with tech giants like Steve Jobs and Bill Gates participating in live discussions despite criticism.

      Kara Swisher's journalistic career was marked by her unique interactions with tech industry giants like Steve Jobs and Bill Gates. These encounters often involved Swisher challenging them on stage while also securing their presence for interviews. Swisher didn't believe in the "Stockholm syndrome" explanation for their participation, instead attributing it to their desire for genuine discussions and the sense of occasion that came with being in the public eye. Swisher and her team saw their live journalism as distinct from traditional interviews, and they were criticized for it, but they believed they were providing a more authentic representation of these industry leaders. Swisher's tough yet engaging approach resulted in some of the most memorable moments in tech journalism.

    • Maintaining Objectivity in Tech Reporting: Journalists must navigate complex relationships with tech industry figures while maintaining objectivity, emphasizing that personal connections don't compromise reporting and that access is necessary but promises should not be made.

      The relationship between journalists and tech industry figures can be complex and subject to criticism. In this conversation, Swisher defended herself against accusations of being too close to tech executives, emphasizing that she had been a tech reporter before meeting her ex-wife, who was an executive at Google, and that her ex-wife never leaked information to her. She also addressed criticisms of access journalism and of being too sympathetic to tech industry figures, arguing that a level of rapport with sources is necessary but that she never made promises to them. Swisher also mentioned that she was drawn to Elon Musk because of his innovative projects in cars and rockets, which stood out from the many digital startups she found uninteresting. Overall, the conversation highlights the challenges of maintaining objectivity and professionalism while reporting on the tech industry.

    • Striking a Balance in Tech Journalism: A good tech journalist balances moral judgments with enthusiasm for technology's potential to improve lives, acknowledging both the potential for harm and the potential for good.

      Being a good tech journalist involves striking a balance between delivering moral judgments and remaining open to new ideas and the potential for technology to improve people's lives. Kara Swisher, a prominent tech journalist, discussed her experiences with this balance and how she has tried to maintain it throughout her career. She acknowledged that she has become more critical in a good way, but also remains enthusiastic about the potential of technology. Swisher also addressed the criticism that the media has become too critical of tech, acknowledging that there have been instances where tech companies have done damage, but also emphasizing the importance of not becoming the scapegoat for society's problems. She encouraged a nuanced approach to tech journalism, one that acknowledges both the potential for harm and the potential for good.

    • AI-generated fake identities, a growing concern: Technology is creating fake versions of people's identities without consent, raising concerns about platform responsibility. Kara Swisher emphasizes the importance of addressing identity theft and advocates for standing up for oneself.

      Technology, particularly AI, is being used to create fake versions of people's identities, including books and workbooks under their names, without their consent. This is not a new issue for Kara Swisher, but it's becoming more prevalent and raises concerns about the platforms' responsibility to prevent such actions. Swisher, known for her candid and blunt persona, emphasizes that she is not mean in real life and is actually very loyal and supportive of those who work for her. Despite her tough exterior, she is a mentor and advocate for those looking to improve. Swisher also expressed frustration with the constant questioning of women's confidence and the exhausting nature of having to justify their actions. She encourages standing up for oneself and demanding an apology when necessary. Overall, the conversation highlights the importance of addressing issues of identity theft and the need for greater accountability from tech platforms.

    • New tech leaders are more thoughtful and aware: New tech leaders are more cautious and focused on addressing bigger issues, but still use grandiose language and discuss existential risks.

      The new generation of tech founders and entrepreneurs are more thoughtful and aware of the potential dangers and consequences of their innovations compared to their predecessors. They have learned from the mistakes of the past and are more concerned with addressing bigger issues. However, they still exhibit grandiose language and discuss existential risks, keeping us in a state of uncertainty about how seriously to take them. The speaker expresses hope that these young leaders will embrace a more thoughtful and less reductionist approach, as exemplified by Steve Jobs, and avoid the hateful and dystopian visions of the past.

    • Two Supreme Court cases could impact social media content moderation: The Supreme Court is examining two cases, arising from Florida and Texas laws, that could alter how social media platforms manage content; the states claim the laws prevent censorship of conservative voices, while critics warn they could force a far less moderated internet.

      The Supreme Court is currently considering two cases that could significantly impact how social media platforms moderate content. Florida and Texas have passed laws restricting the ability of tech companies to remove content based on viewpoint, and these laws could result in a far less moderated internet if upheld. Daphne Keller, an expert on internet regulation and the director of the program on platform regulation at Stanford's Cyber Policy Center, opposes these laws and believes they are unconstitutional. The central claims made by Texas and Florida are that these California-based companies are censoring conservative voices and that this needs to stop. Interestingly, one of the disputes leading up to these lawsuits originated from a Star Trek subreddit. The outcome of these cases could have major implications for the future of content moderation on social media.

    • The Soy Boy case and the complexities of content moderation: The ongoing legal battle between tech platforms and state laws over content moderation is complex, with the outcome uncertain due to procedural complexities and the challenge of defining viewpoint neutrality.

      The ongoing legal battle between tech platforms and state laws regarding content moderation is complex and unclear, as illustrated by the "Soy Boy" case involving a Star Trek subreddit. This case highlights the challenge of defining viewpoint neutrality and the potential for endless litigation. During oral arguments at the Supreme Court, it seemed that a majority of justices believed platforms have First Amendment protected editorial rights, but the outcome remains uncertain due to procedural complexities. Despite this, private businesses' ability to set their own content rules under the First Amendment is not a definitive solution, as the laws in question have other potential applications.

    • Texas and Florida social media laws: Free speech or government regulation? The Supreme Court is debating whether Texas and Florida social media laws infringe on tech companies' First Amendment rights or allow for proper government regulation of private businesses' content moderation.

      The ongoing legal debate around social media laws in Texas and Florida raises complex questions about free speech, First Amendment rights, and the role of government in regulating private businesses. The states argue that these platforms have no First Amendment rights and that content moderation is not protected speech, but rather conduct. However, the tech companies claim that these laws infringe on their constitutional right to free speech. The oral arguments revealed uncertainties about which platforms these laws apply to and the potential consequences of broad definitions. While some believe the tech companies have made a strong case against these laws, others worry that striking them down could give tech giants unprecedented power. The Supreme Court's decision could have significant implications for online speech and regulation.

    • Supreme Court Case May Not Alter Federal Privacy Laws or Section 230 Significantly: The ongoing Supreme Court case, NetChoice v. Paxton, may not grant platforms new powers or significantly alter the regulatory landscape for federal privacy laws or Section 230 of the Communications Decency Act.

      The outcome of the ongoing Supreme Court case, NetChoice v. Paxton, may not significantly alter the regulatory landscape for federal privacy laws or Section 230 of the Communications Decency Act. The court's decision is not expected to grant platforms new powers, as justices have expressed skepticism towards such an outcome. Section 230, which provides broad legal immunity to platforms hosting user-generated content, is not directly at issue in this case. However, some justices have raised questions about it, potentially leading to comments on its application or interpretation. The purpose of Section 230 is to allow platforms to moderate content while retaining immunity, and it doesn't strip them of their First Amendment rights. The internet and internet platforms, which offer both free expression and content moderation, would not exist without this balance. The age of the Supreme Court justices may influence their understanding of these issues, but the outcome remains uncertain.

    • Exploring middle ground solutions for social media content moderation: The First Amendment limits government intervention in social media content moderation, but competition-based solutions like interoperability and user-controlled tools can promote accountability and transparency.

      The ongoing debate around social media content moderation and the role of government in regulating it is a complex issue. While there are valid concerns about the power and opacity of tech platforms, the First Amendment presents significant limitations for government intervention. A middle ground could be exploring competition-based solutions, such as interoperability and user-controlled content moderation tools, to promote democratic accountability and transparency. However, the challenge lies in addressing "lawful but awful" speech, which is protected by the First Amendment but morally and socially objectionable, leaving private companies to make the rules. It's crucial to continue the conversation and seek innovative solutions that respect the First Amendment while addressing the need for accountability and user control.
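
      To make the middle-ground idea concrete, here is a minimal sketch of user-controlled moderation as composable filters. The names and API are illustrative only; no real platform exposes exactly this interface.

```python
# A minimal sketch of user-controlled moderation as composable filters:
# the platform serves posts, and each user assembles their own filter
# stack (their own rules or third-party "middleware"). All names here
# are illustrative, not any real platform's API.

from typing import Callable, Iterable

Post = dict  # e.g. {"author": "alice", "text": "Have a nice day!"}
Filter = Callable[[Post], bool]  # True means "show this post"

def block_words(words: set) -> Filter:
    # Naive substring match; real filters would be far more sophisticated.
    return lambda post: not any(w in post["text"].lower() for w in words)

def mute_authors(muted: set) -> Filter:
    return lambda post: post["author"] not in muted

def personalized_feed(posts: Iterable[Post], stack: list) -> list:
    """Apply the user's chosen rules rather than one central policy."""
    return [p for p in posts if all(f(p) for f in stack)]

posts = [
    {"author": "alice", "text": "Have a nice day!"},
    {"author": "bob", "text": "This is awful."},
]
my_stack = [block_words({"awful"}), mute_authors({"carol"})]
print(personalized_feed(posts, my_stack))  # only alice's post survives
```

      The design point is that "lawful but awful" speech stays up on the platform while each user decides what reaches their own feed, shifting the rule-making from one private company to the people reading.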

    • Closing credits and listener mailbag: The episode wraps up with special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda, and an invitation to email the show at hardfork@nytimes.com.

      The episode closes with thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda for their contributions to the show. Listeners are encouraged to reach out at hardfork@nytimes.com with any questions, comments, or even their "sickest burns." And if you're planning a Willy Wonka-themed event, don't forget to invite the hosts!

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholder vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.


    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division


    Additional Reading:


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of AI Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Related Episodes

    EP 163: Google Gemini - ChatGPT killer or a marketing stunt?

    Google has been under fire after the release of its new Gemini model. Sorry to say, but Google got so many things wrong with the marketing and launch. Is Gemini an actual ChatGPT killer, or just a marketing stunt gone wrong? We're covering everything you need to know.

    Newsletter: Sign up for our free daily newsletter
    More on this Episode: Episode Page
    Join the discussion: Ask Jordan questions about Google Gemini
    Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
    Website: YourEverydayAI.com
    Email The Show: info@youreverydayai.com
    Connect with Jordan on LinkedIn

    Timestamps:
    [00:02:17] Daily AI news
    [00:07:30] Overview of Google Gemini
    [00:10:40] Google lied about Gemini release
    [00:17:10] How Gemini demo was created
    [00:23:50] Comparing ChatGPT to Gemini
    [00:30:40] Benchmarks of Gemini vs ChatGPT
    [00:38:20] Why did Google release Gemini?
    [00:43:00] Consequences of botched release

    Topics Covered in This Episode:
    1. Introduction to Google's Gemini Model
    2. Google Gemini's Marketing Controversy
    3. Assessing Gemini's Performance and Functionality
    4. Comparison with ChatGPT
    5. Importance of Transparency and Truth in AI Industry

    Keywords:
    Google Gemini, Generative AI, GPT-4.5, AI news, AI models, Google Bard, Multimodal AI, Google stock, Generative AI industry, Google credibility, Technology news, AI tools, Fact-based newsletter, Marketing misstep, Deceptive marketing, Multimodal functionality, Gemini Ultra, Gemini Pro, Benchmarks, Misrepresentation, Stock value, Text model, Image model, Audio model, Google services, Pro mode, Ultra mode, Marketing video

    #154 - Google Gemini, Waymo Collision, Smaug-72B, EU AI Act final text, image watermarks

    Our 154th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Correction: Andrey mentioned "State space machines", he meant "State space models"

    Timestamps + links:

    Ep. 944 - Dems Screech And Howl In Defense Of Baby Butchery

    Today on The Matt Walsh Show, we’re told that the impending decision on Roe v Wade will help the Democrats because it will galvanize their base around the abortion issue. But as we’ve seen over the past couple of days, abortion is actually the last issue Democrats want to talk about. We’ll talk about why that is. Also, Rachel Levine at HHS says that there is “no argument” against castrating children. Democrats actually want Trump back on Twitter. I wonder why? And in our Daily Cancellation, an obese woman fights for greater plus sized representation in the travel industry. 


    Join the Daily Wire and get 20% off your new membership with code WALSH: https://utm.io/uewvd.


    Order your copy of Julio Rosas’ new book Fiery but Mostly Peaceful: The 2020 Riots and the Gaslighting of America: https://utm.io/uexhZ


    I am a beloved LGBTQ+ and children’s author. Reserve your copy of Johnny The Walrus here: https://utm.io/uevUc.



    Today’s Sponsors: 


    Constant Contact is a digital marketing platform that helps small businesses and nonprofits of all sizes build, grow, and succeed. Visit constantcontact.com to start your free digital marketing trial today.


    Protect your identity with LifeLock. Save up to 25% OFF Your First Year at

    www.LifeLock.com/WALSH.


    Shop auto and body parts from hundreds of manufacturers at RockAuto.com. Visit www.RockAuto.com and enter WALSH in the 'How Did You Hear About Us' Box. 

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    2X Bigger than GPT-4!? Amazon Is Training "Olympus"

    Sources are suggesting that Amazon is training a 2 trillion parameter model called Olympus. NLW looks at what it means for Google Gemini, OpenAI GPT-5 and more.

    Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown

    Interested in the opportunity mentioned in today's show? jobs@breakdown.network

    ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/

    #34 - Death Threats Lead To Murder in Marydel, Maryland

    This week, we check out the quiet town of Marydel, Maryland, where almost 27 years of tension and violence resulted in one final, deadly showdown and 15 years of court battles. Along the way, we find out that not all Guatemalans speak Spanish, how two petty families caused the Mason-Dixon line, and just how far you can push someone before they shoot you in the face!

    Hosted by James Pietragallo & Jimmie Whisman


    New episodes every Thursday!!

    Please subscribe, rate, and review!

    Listen on Apple Podcasts, Spotify, Stitcher, or wherever you listen to podcasts!

    Head to shutupandgivememurder.com for all things Small Town Murder!

    For merchandise: crimeinsports.threadless.com

    Check out James and Jimmie's other show: Crime in Sports

    Follow us on social media!

    Facebook: facebook.com/smalltownpod

    Instagram: instagram.com/smalltownmurder

    Twitter: twitter.com/MurderSmall

    Contact the show: crimeinsports@gmail.com


    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.