
    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    May 17, 2024

    Podcast Summary

    • Unexpected costs of technology: Losing a car key and AI's uncanny voice. Technology's advancements can bring unexpected costs and consequences, from losing a car key to the uncertainty of AI's uncanny voice. Prepare and be aware.

      Technology, whether it's AI or car washing services, can sometimes lead to unexpected and costly issues. This week, Kevin Roose of The New York Times shared his experience of losing his car key at a car wash and facing a hefty replacement cost. Meanwhile, OpenAI's new AI model, GPT-4o, made headlines with its uncanny voice mode and multimodal support, leaving many with a sense of "AI vertigo" - the feeling of being dragged into the future with uncertainty. In other news, Google I/O brought plans that could end the web as we know it. These advancements, while impressive, serve as reminders of the importance of preparing for the unexpected and weighing the potential consequences of new technologies.

    • OpenAI's new ChatGPT voice mode: A major leap forward in AI technology. The new voice mode can understand and respond to human emotions and language in real time, creating a more natural and engaging conversation. This could revolutionize industries like customer service, education, and entertainment.

      OpenAI's new ChatGPT voice mode demonstrates a significant leap forward in AI technology. During a demo, an OpenAI employee named Rocky Smith interacted with the AI in a way that felt more human-like than previous AI interactions we've heard on the show. The AI was able to understand and respond to tonality, sarcasm, and emotion in real time, creating a more natural and engaging conversation. This is a major improvement over last week's conversation with an AI named Turing, which struggled to process audio, video, and text simultaneously and had a noticeable lag in response time. The ability of AI to understand and respond to human emotions and language in real time could have significant implications for industries including customer service, education, and entertainment. Still, this was a tech demo, and we'll need to see how well the technology performs once it's in users' hands. Overall, OpenAI's new ChatGPT voice mode represents an exciting development in AI and natural language processing.

    • ChatGPT's human-like voice and fast response time. ChatGPT processes audio input directly, providing fast responses and a more natural, varied vocal output, making it feel more engaging and emotional, potentially increasing user interaction and opening up new use cases.

      OpenAI's ChatGPT voice assistant stands out due to its fast response time and dynamic, human-like voice. Unlike traditional voice assistants that transcribe audio to text and then generate a response, ChatGPT processes audio input directly, resulting in near-instant responses and a more natural, varied vocal output. This includes the use of filler words and varying speech pace to mimic human conversation. While these features don't necessarily make ChatGPT more useful as an assistant, they do make it feel more engaging and emotional, potentially increasing user interaction. This new approach could also open up new use cases, such as customer service, by making the assistant feel more human and less robotic. With its fast response time and emotional engagement, ChatGPT could potentially replace Siri or other voice assistants on devices like the iPhone.

    • Creating emotionally intelligent AI as a companion. OpenAI's ChatGPT is designed to elicit human emotions, in contrast to Google's approach, and the potential risks of emotional attachment to AI models should be considered.

      OpenAI is positioning ChatGPT not just as a utility tool but as an emotional companion. The company aims to create an AI that elicits human emotions, as demonstrated by OpenAI employees treating the model like a friend during demos. This contrasts with Google's approach, which emphasizes the computer nature of its AI assistant. The potential risks of people forming emotional attachments to these AI models, as depicted in movies like "Her," should be carefully considered. Even experienced AI experts can be drawn into treating these models as humans, raising questions about the long-term implications of building emotionally intelligent AI.

    • OpenAI's new GPT-4o model: A game-changer for AI voice assistants. GPT-4o offers advanced coding abilities, translation capabilities, and human-like conversational skills, challenging the status quo of AI voice assistants. Its free availability to ChatGPT users suggests a high-quality, cost-effective AI experience.

      OpenAI's new GPT-4o model, which includes advanced coding abilities, translation capabilities, and human-like conversational skills, is set to be a game-changer in the realm of AI voice assistants. However, the company's decision to present the assistant with a flirty female voice that giggles and compliments the engineer on his looks raises questions about the company's direction and potential biases. The new model, which will be available free to all ChatGPT users, suggests that OpenAI is confident it can provide a high-quality, cost-effective AI experience that could change the way we interact with technology. Despite some skepticism about the practicality of voice-based assistants, the ability to have natural, real-time conversations with an AI could make it a valuable tool for a range of tasks. The demo of two ChatGPT voice assistants describing a scene to each other showcased the model's impressive descriptive abilities and potential for collaboration. Overall, OpenAI's latest announcement represents a significant step forward in AI technology and its potential integration into our daily lives.

    • GPT-4o now free to all ChatGPT users, with potential impact on the AI industry. Making the new model free could increase AI accessibility, potentially leading to more efficiency and evolution in the industry, while the departure of key safety figures raises concerns about OpenAI's commitment to ensuring AI acts in humans' best interests.

      OpenAI's new model, GPT-4o, is now available to all ChatGPT users for free, a significant expansion from the previous approach of reserving the most capable models for paying subscribers. This development could let far more people experience the power of state-of-the-art AI, bridging the gap between free and paid users. OpenAI can likely afford this because the new model is more efficient to run, though serving such a massive user base remains expensive. Meanwhile, the landscape of AI voice assistants is evolving rapidly, with many variants expected to emerge, some open source and some with fewer safety filters. Whether AI will surpass human capabilities, as depicted in movies like "Her," remains a matter of debate, and it's essential to remember that these models are predictive, not sentient, despite their uncanny abilities. Separately, OpenAI announced that two key figures in its safety efforts, Ilya Sutskever, its chief scientist, and Jan Leike, co-lead of the superalignment team, are leaving the company. Their departure raises concerns about OpenAI's commitment to ensuring that its increasingly powerful AI systems act in the best interests of humans. The superalignment team, established to address exactly this concern, is now without its known leaders.

    • Growing concerns about safety and hasty commercialization at OpenAI. Tensions between safety-focused and commercial factions have escalated, leading to departures and uncertainty for safety advocates. Meanwhile, Google showcased its latest AI technology at I/O, offering insights into industry advancements despite logistical challenges.

      There are growing concerns about safety and hasty commercialization at OpenAI following the departure of key figures like Ilya Sutskever and Jan Leike. Tension between the safety-focused and commercial factions within the company has reportedly escalated since last year's board drama and the firing of Sam Altman. As a result, those focused on safety may feel less secure in their roles at OpenAI. Meanwhile, Google I/O, the company's annual developer conference, showed Google's continued investment in AI technology, though getting into the event was more challenging than usual because Live Nation was involved in managing it. Despite those hurdles, the event provided valuable insight into Google's latest AI developments, and the discussions highlighted the ongoing competition and evolving dynamics within the AI industry.

    • Google's 2024 developer conference introduces AI-powered search summaries. Google debuted AI video and image generation tools, updated AI models, and brought AI-generated summaries to its core search engine, initially reaching over 100 million US users, with plans to expand globally.

      Google's 2024 developer conference at the Shoreline Amphitheater in Mountain View was marked by several significant AI announcements. Google introduced Veo, an AI video generation tool; Imagen 3, its latest AI image generation model; and updated versions of its flagship Gemini AI models. The most noteworthy announcement, however, was Google bringing generative AI answers directly into its core search engine. The new feature, called AI Overviews, generates an AI summary of a topic and appears above traditional search links for over 100 million users in the US, with plans to expand to over a billion users by the end of the year. The current version of AI Overviews is a box with bullet points summarizing the topic, but Google has more ambitious plans. These advancements demonstrate Google's commitment to leveraging AI to enhance user experiences and streamline processes.

    • Google's new AI tool could disrupt the online media industry. Google's AI tool may reduce website visits and ad revenue for publishers by providing users with summarized information, potentially impacting the economic engine of the web.

      Google is developing an AI tool that can do research projects and planning tasks for users, potentially even booking travel and meals, which could significantly impact the online media industry. Currently, Google is a major source of traffic to websites through its search engine, with ads and newsletter sign-ups providing revenue for publishers. However, with Google's new AI tool summarizing information and providing it to users without the need to visit websites, the economic engine of the web could be disrupted. Google executives acknowledge this concern and claim that users who see AI Overviews conduct more searches and visit a more diverse set of websites, and that the links in the overviews receive more clicks than traditional search results. Nonetheless, the potential for a decrease in website visits and ad revenue has left the online media industry worried about the future.

    • Google's AI-driven search features pose a threat to publishers. Publishers heavily reliant on Google traffic must adapt by building direct connections with audiences and rethinking business models, as AI summaries threaten to sharply reduce the traffic search sends their way.

      Google's new AI-driven search features pose a significant threat to publishers who rely heavily on Google traffic. Liz Reid, Google's vice president of search, confirmed that the company will continue to focus on sending valuable traffic but did not promise an increase. Analysts predict that between 20 and 40 percent of Google search traffic could be affected, and many publishers lack a contingency plan. Unlike with other AI products, publishers cannot opt out of having their content used in Google's AI summaries: the same crawling that feeds the search index feeds the AI features, so publishers must accept the whole package. If you're a publisher heavily reliant on Google traffic, it's time to adapt. Building a direct connection with an audience through email newsletters or podcasts can help maintain a following independent of search engines. Still, the decline in Google traffic will likely mean smaller audiences and fewer resources, making it essential to rethink business models and strategies. The Star Trek-computer vision of a world where this labor is abstracted away is becoming a reality, and publishers must prepare for that new landscape.

    • Impact of AI-generated summaries on the media industry. AI-generated summaries in search results could lead to job losses in media and trust issues with users, and even to revenue losses for Google itself as traffic and ad revenue on third-party websites decline, while Google's dominance lets it decide how fast these changes arrive.

      The media industry is experiencing significant changes due to the shift towards online content and the potential impact of AI-generated summaries on search results. This change could lead to job losses and trust issues with users, as well as potential revenue losses for Google due to decreased traffic and ad revenue on third-party websites. However, Google, as the dominant player in the industry, holds significant control over the situation and can decide how fast or slow to implement these changes. While there are reasons for concern, it remains to be seen how these developments will unfold and whether there will be any optimistic outcomes. Ultimately, the future of the web and the media industry may look very different, with more content and experiences taking place within Google's walled garden.

    • The role of human-created content and relationships in the digital world. Publishers and media businesses focusing on unique, valuable content and genuine connections with audiences may succeed in the evolving digital landscape, but there's a risk that companies will rely less on the web for information, leading to a diminished Internet and a less vibrant digital ecosystem.

      The future of the web may shift towards more authentic, novel content and experiences as Google and other tech companies continue to prioritize AI and data-driven solutions. While this could potentially lead to a more efficient and personalized online experience, it also raises concerns about the role of genuine human-created content and relationships in the digital world. The speaker argues that publishers and media businesses that focus on providing valuable, unique content and fostering genuine connections with their audiences may be better positioned for success in this evolving landscape. However, there is also a risk that as AI capabilities grow, companies may rely less on the web for information, potentially leading to a diminished Internet and a less vibrant digital ecosystem. Ultimately, it's unclear what the exact implications will be, but it's clear that the digital landscape is changing, and those who can adapt and innovate will likely thrive.

    • Impact of AI summaries on media and the open Internet. Google's integration of AI summaries may decrease traffic to publishers, but concerns about transparency and truthful information persist. The role of media in providing truthful information is acknowledged, despite disagreements and complexities.

      The integration of AI summaries into Google's search results may reduce traffic to publishers, and Google is not being fully transparent about this. The hosts expressed concern about the potential impact on the open Internet and the role of media in telling the truth. In a lighter moment, they broke out the bucket hat for their news segment, HatGPT. One story covered FTX customers potentially recovering their lost funds, though not all parties came out unscathed; the hosts disagreed on the moral of the story, with one arguing for the innocence of Sam Bankman-Fried, who is currently serving a 25-year prison sentence for his role in the FTX collapse. Despite the concerns and disagreements, they acknowledged the importance of media and its role in providing truthful information. Overall, the conversation highlighted the complexities and implications of technology for media and the open Internet.

    • Stories of Risks and Adaptability. From crypto investments to AI dating concierges and app redesigns, taking calculated risks and adapting to change can lead to significant returns or progress, even amidst controversy and backlash.

      Even a fraudster like Sam Bankman-Fried can have a knack for picking investments. Holdings of his collapsed firms, FTX and its sister trading firm Alameda Research, in crypto tokens like Solana and in AI companies have boomed since the collapse, generating significant returns for the estate. In the world of dating, Bumble's founder, Whitney Wolfe Herd, predicts that AI dating concierges will one day help singles find love by going on dates on their behalf. The idea has sparked controversy, with some arguing it could lead to more superficial interactions and wasted time. Sonos, meanwhile, took a bold step by redesigning its mobile app, angering customers over missing features; despite the backlash, Sonos's chief product officer defended the move as a necessary step toward building a stronger product for the future. Together, these stories illustrate the importance of taking risks and being adaptable, even in the face of criticism or adversity.

    • Technology's unexpected quirks. Despite advancements, technology can still surprise us with limitations and unexpected behavior. Approach it with curiosity, patience, and sometimes mischief.

      Technology, no matter how advanced, can still surprise us with its limitations and unexpected quirks. The Sonos app, despite its potential, still faces issues with functionality and connectivity. Meanwhile, the Dublin-New York City portals, designed to bring people closer together, instead became a platform for inappropriate behavior. Even in the world of video games, where the impossible is often achieved, there are still doors that remain unopenable - until someone finds a creative workaround. These examples serve as reminders that technology, like humanity, is not perfect, and that we should approach it with a sense of curiosity, patience, and sometimes, a little bit of mischief.

    • Exploring the Limits of Autonomous Technology. People are finding innovative ways to test and manipulate autonomous technology, showcasing both curiosity and potential risks as the tech becomes more integrated into daily life. Attention spans remain a complex issue, as shown by individuals' dedication to solving challenges.

      Individuals are going to great lengths to test and manipulate autonomous technology, whether it's trying to shave seconds off their time or prank self-driving cars with stop sign t-shirts. The creativity and curiosity surrounding these advancements are both inspiring and potentially problematic. It's important to consider the potential implications and consequences of these actions, especially as autonomous technology continues to evolve and become more integrated into our daily lives. Additionally, there's a reminder that attention spans are not necessarily a generational issue, as shown by the dedication and focus of individuals like Alex Palax in attempting to open unopenable doors. Ultimately, these stories highlight the intersection of technology, human ingenuity, and the importance of staying curious and adaptable in an ever-changing world.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:

    • Tripp Mickle, New York Times reporter

    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    • Noland Arbaugh, the first person to receive Elon Musk’s Neuralink brain implant
    • Karen Weise, New York Times technology reporter

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    • Hank Green, YouTuber

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.