
    Help! My Boss Won’t Stop Using ChatGPT

    July 14, 2023

    Podcast Summary

    • Why do chatbots hallucinate or produce false information? Chatbots generate responses based on patterns they've learned, lacking a concept of knowledge or truth.

      While chatbots like ChatGPT can generate human-like responses and even provide seemingly accurate information, they don't truly possess knowledge or understanding. Instead, they make statistical guesses based on patterns learned from vast amounts of data. Listener Madeline Winter, inspired by the case of a lawyer who used ChatGPT in court and was caught out by fabricated case citations, asked why chatbots hallucinate or produce false information instead of admitting they don't know. The answer lies in the nature of chatbots: they have no concept of knowledge or truth, only the ability to generate plausible responses from patterns. In the first half of this mailbag episode, Casey and Kevin tackle questions with definitive answers, while the second half delves into more complex ethical dilemmas.
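
      To make the "statistical guessing" concrete, here is a minimal sketch (with made-up tokens and probabilities, not anything from the episode) of the core loop: the model samples the next word from a learned probability distribution, and no step anywhere checks whether the output is true.

      ```python
      import random

      # Hypothetical next-token probabilities a model might assign after a
      # prompt like "The case was decided in" -- illustrative numbers only.
      next_token_probs = {
          "1973": 0.40,    # plausible, and possibly correct
          "1984": 0.35,    # equally plausible, and possibly a "hallucination"
          "favor": 0.20,   # a grammatical continuation in another direction
          "Narnia": 0.05,  # unlikely, but never impossible
      }

      def sample_next_token(probs):
          """Sample one token from a probability distribution.

          The model only weighs likelihood; truth never enters the
          calculation, which is why confidently wrong answers come out
          of the same process as correct ones.
          """
          tokens = list(probs)
          weights = list(probs.values())
          return random.choices(tokens, weights=weights, k=1)[0]

      print(sample_next_token(next_token_probs))
      ```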

    • Expressing AI confidence and reducing hallucinations: AI models need to express their confidence in answers and be grounded to reduce hallucinations. Training is energy-intensive, but ongoing usage is less significant compared to other energy-consuming activities.

      There's a need for AI models to express their level of confidence in their answers. Current models often answer questions as if they are absolutely certain, leading to potential misinformation. The idea of adding a confidence indicator is a good solution to help users better understand the reliability of the model's responses. Additionally, while AI models can sometimes hallucinate or make up information, they also often provide accurate answers when given factual questions. Companies are exploring ways to ground AI models, connecting them to databases or authoritative bodies of knowledge to reduce hallucinations. Regarding the environmental impact of AI, the training process for these models is energy-intensive, requiring large amounts of computing power. However, the energy consumption mainly occurs during the training phase, and the ongoing energy usage to serve requests is less significant. Compared to other energy-consuming activities like Bitcoin mining or flying, the carbon footprint of AI is still a topic of debate and ongoing research.
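
      As a rough illustration of the "grounding" idea above, here is a minimal sketch, assuming a hand-built lookup table standing in for an authoritative knowledge base (the data, function name, and confidence numbers are all hypothetical): the system answers only when it can cite its store, and otherwise admits uncertainty instead of guessing.

      ```python
      # Toy "grounded" question answering: answer from a trusted store,
      # report confidence, and refuse rather than hallucinate.
      KNOWLEDGE_BASE = {
          "capital of france": ("Paris", 0.99),
          "boiling point of water at sea level": ("100 degrees Celsius", 0.98),
      }

      def grounded_answer(question):
          key = question.strip().lower().rstrip("?")
          if key in KNOWLEDGE_BASE:
              answer, confidence = KNOWLEDGE_BASE[key]
              return f"{answer} (confidence: {confidence:.0%})"
          # No supporting entry: say so instead of making something up.
          return "I don't have a grounded answer for that (confidence: low)."

      print(grounded_answer("Capital of France?"))
      print(grounded_answer("Who won the 1897 space race?"))
      ```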

    • The Environmental Impact of Chatbots: Chatbots have a smaller carbon footprint than other processing-intensive technologies because their energy consumption is concentrated in training, and the tech giants behind them have committed to reducing their carbon emissions.

      While training large AI models like those used in chatbots consumes significant amounts of energy, the environmental impact is not as severe as with other processing-intensive technologies such as crypto mining. For instance, training GPT-3 required approximately 1.3 gigawatt-hours of electricity, equivalent to the annual consumption of about 120 US homes. However, this energy is consumed during the training process, not every time a question is asked of the chatbot. Furthermore, tech giants like Google and Microsoft, which are major players in AI development, have pledged to be carbon neutral in their operations. Therefore, using chatbots does not necessitate a large carbon footprint. Another topic that has sparked interest is the ongoing improvement of AI chatbots. Some people wonder why we are so confident these models will continue to get better. The answer lies in the past evolution of AI models: GPT-2 was an improvement over GPT-1, and GPT-4 was a significant leap beyond GPT-3. Moreover, further advances do not necessarily require a new technological breakthrough; models can be enhanced simply by increasing the amount of computing power dedicated to them, which is largely why GPT-4 is so much more capable than its predecessor. Overall, the optimism surrounding AI stems from the consistent progress the field has shown.
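
      The homes comparison is simple arithmetic. Assuming an average US household uses roughly 10,700 kWh of electricity per year (the exact figure varies by source and year), the back-of-the-envelope check looks like this:

      ```python
      # Back-of-the-envelope check of the "about 120 US homes" comparison.
      gpt3_training_energy_kwh = 1.3e6   # 1.3 gigawatt-hours, as cited above
      avg_us_home_kwh_per_year = 10_700  # assumed average; varies by source

      homes = gpt3_training_energy_kwh / avg_us_home_kwh_per_year
      print(f"~{homes:.0f} homes' annual electricity use")  # ~121
      ```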

    • The Future of AI Scaling and Investor Skepticism: Some CEOs may even hope for a roadblock that lets regulation catch up, but investors should maintain skepticism and thoroughly evaluate potential investments, since the majority are expected to fail.

      While AI models have been following predictable scaling laws and continuing to improve exponentially, there's no guarantee they will do so indefinitely. Some CEOs even hope for a roadblock that would allow regulation, and everyone else, to catch their breath. However, venture capitalists often take on risky investments and may not be overly concerned by seemingly off financial reports from unusual companies. It's important to remember that the vast majority of their investments are expected to fail. The notion that being on the Forbes 30 Under 30 list is a predictor of dishonesty among entrepreneurs is a humorous but potentially inaccurate observation. Nonetheless, it's crucial for investors to maintain a healthy skepticism and thoroughly evaluate the companies they invest in.
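
      The "predictable scaling laws" referenced above are usually written as a power law: loss falls smoothly as a model's training compute grows. A toy version, with constants invented for illustration rather than fitted to any real model, is sketched below.

      ```python
      def scaling_law_loss(compute, a=10.0, b=0.05):
          """Toy power-law scaling curve: loss = a * compute**(-b).

          Published scaling-law curves for language models have this
          general shape; the constants here are purely illustrative.
          """
          return a * compute ** (-b)

      # More compute -> smoothly lower loss, with no new breakthrough needed.
      for compute in (1e18, 1e20, 1e22, 1e24):
          print(f"compute={compute:.0e} -> loss={scaling_law_loss(compute):.2f}")
      ```

      The worry aired in the episode is precisely that nothing guarantees this curve keeps falling forever.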

    • Pressure to invest in hot deals and lack of accountability in the VC industry: The VC industry's groupthink mentality and lack of accountability can lead to investing in potentially fraudulent or overvalued companies. The SEC is reportedly working on a rule that would allow limited partners to sue VC firms for negligence, and the current economic environment may be contributing to skepticism about the industry.

      The venture capital industry's groupthink mentality and lack of accountability can lead to investing in potentially fraudulent or overvalued companies. This issue is not new and has been seen in cases like FTX and Theranos. The pressure to get into hot deals, and the assumption that other investors have done their due diligence, can result in VCs waiving contingencies and trusting founders blindly. However, there are signs this might change: the SEC is reportedly working on a rule that would allow limited partners to sue VC firms for negligence. Additionally, the trend of former tech industry workers joining VC firms, and an economic environment in which low interest rates left too much money chasing too few ideas, add to the skepticism about the current state of VC. It might be worth considering whether the industry is in a bubble that could eventually burst.

    • The Reality of Venture Capital: The VC industry can be challenging, and not all VCs are successful. Gaining operational experience before becoming a VC can add value to entrepreneurs.

      The venture capital industry may not be as glamorous or successful as it appears on the surface. Many VCs leave the industry quietly after failing to produce successful investments. Some seem more focused on social media presence than actual job performance. New graduates are encouraged to gain operational experience before becoming VCs to add value to entrepreneurs. The industry is also lopsided, with only a few top firms getting the best deal flow and achieving significant wealth and success. As for Meg's question, AI may one day be able to create alternate endings for movies based on audience preferences, but its impact on society's emotional understanding and confrontation is a complex issue that requires further exploration.

    • The Role of Technology in Altering Movie Endings: While technology allows for alternate movie endings, filmmakers may resist it because it could undermine artistic vision and emotional impact. Some viewers might welcome customization for sensitive content.

      While technology may allow for alternate endings or less intense versions of movies and TV shows, it may not be embraced by filmmakers and artists who value the emotional impact and complete vision of their work. The discussion also touched upon the existence of fan fiction and interactive content, which already allows viewers to explore different outcomes. However, the idea of an AI-ified version of this technology, where viewers can change the ending of a movie at the press of a button, was met with skepticism, as it was seen as potentially undermining the artistic process and the emotional experience for the audience. The conversation also revealed that some individuals, who are sensitive to intense emotions or graphic content, would welcome such technology to customize their viewing experience. Overall, the debate highlighted the tension between artistic vision and viewer experience, and the potential role of technology in shaping the future of storytelling.

    • The Role of Accessibility in Tech and AI: AI can enhance accessibility, but clear communication about API changes and understanding AI's limitations are crucial for inclusive tech use.

      Accessibility in tech is often overlooked but crucial for all users, including those with disabilities. Blind users, for instance, can process audio information at higher speeds than sighted individuals, making faster audio playback a beneficial accessibility feature. However, poor communication about changes to APIs can lead to accessibility issues for users relying on third-party tools. The potential of AI to enhance accessibility is significant, with examples like Be My Eyes, which uses AI to help blind individuals navigate their surroundings. On workplace etiquette, an anonymously submitted question asked how to address a boss who uses ChatGPT to answer team questions, which the listener finds unhelpful and alienating. The issue highlights the importance of clear communication and of understanding the limitations of AI-generated content in professional settings.

    • Chatbots vs. Human Interaction in the Workplace: Excessive use of chatbots in professional settings can hinder personal relationships and communication, potentially leading to confusion and misunderstandings. While chatbots have their uses, prioritizing human interaction is crucial for building trust and understanding in the workplace.

      Relying excessively on chatbots like ChatGPT in a professional setting, especially when interacting with a boss, can create confusion and hinder the building of personal relationships. The behavior can come across as passive-aggressive and prevent employees from understanding their boss's actual thoughts and feelings. While chatbots can be useful tools, they should not replace human interaction and communication in the workplace. An employee who finds a boss's chatbot use confusing or obstructive might consider raising the issue respectfully and openly; if the situation is untenable, finding a new job may be the best solution. In a separate question, Karen asks whether it's okay to look up an ex-partner online out of curiosity. Even if she is over the relationship, it's important to consider the potential emotional impact and privacy concerns before engaging in such behavior, and to evaluate whether these actions align with her personal values and goals.

    • Checking an ex's LinkedIn anonymously and Cloning a deceased loved one's voiceConsider potential consequences before checking an ex's LinkedIn anonymously. Be sensitive when using AI technology to clone a deceased loved one's voice.

      While anonymously checking up on an ex's social media, including LinkedIn, is a common practice, it's essential to consider the potential consequences. Although it's generally considered ethical, looking up an ex on LinkedIn may raise concerns if they receive notifications of your activity. Additionally, an ex who has only a LinkedIn account may be a good sign, since they likely aren't overly reliant on social media. On another ethical dilemma, using AI voice-generation technology to clone a deceased loved one's voice can be a sensitive issue. While it may provide comfort to some, it's essential to consider the potential emotional implications for the family and the ethics of creating a voice that isn't truly authentic. Ultimately, such situations call for sensitivity and careful consideration.

    • Creating an audiobook using a loved one's voice: Using AI technology to create an audiobook from a deceased loved one's recordings can be comforting for some, but should be approached with sensitivity and respect for the family's feelings.

      Creating an audiobook version of a deceased loved one's autobiography using their recorded voice is a feasible and potentially meaningful project, but it's important to consider the feelings and reactions of family members, especially a surviving spouse. This was discussed in the context of using AI technology to generate a voice based on existing recordings. While some people may find comfort in such a project, others might find it creepy or off-putting. The example of Eugenia Kuyda, who built a chatbot companion from her late friend's text messages, was given as an illustration of how technology can help people cope with loss and continue to interact with their departed loved ones. Ultimately, it's essential to approach this kind of project with sensitivity and respect for the feelings of family members.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guests:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War


    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.



    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division


    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic


    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.


    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.


    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.


    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.


    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab


    Related Episodes

    What is Conversation Intelligence?

    In this episode, we’re going to discuss how marketers can use Conversation Intelligence to convert conversations into sales. Our guest today is Natalie Severino, VP of Marketing at Chorus.ai.

    What is conversation intelligence? An emerging technology, conversational artificial intelligence (AI) uses messaging apps, natural-language speech assistants (such as advanced IVR), and chatbots to automate communications and deliver personalized customer experiences.

    Key Audio Time Stamps: 

    (1:57) What is conversation intelligence?

    (6:01) Sales and marketing benefits of conversation marketing

    (8:35) How does conversation marketing help the Marketing dept?

    (12:12) Conversational intelligence versus conversation intelligence

    (15:20) What kind of training is required to train the AI?

    (18:30) An example of a company that has done conversation intelligence well

    (22:22) Why aren’t there more companies using AI in Marketing?

    (23:01) When does Natalie think AI will be in every sales and marketing team?

    (24:40) What Mark believes is going to happen with AI and conversation intelligence

    Show Notes: 

    Favorite AI Solution: Chorus.ai.

    Mark – explore our chatbot on Facebook marketing: https://m.me/fanaticsmedia?ref=w6471331

    BIO: Natalie Severino is the VP of Marketing for Chorus.ai, the #1 Conversation Intelligence platform for high-growth sales teams. Passionate about elevating the craft of sales and helping B2B sales professionals win more, Natalie enjoys writing and speaking about sales technologies and trends. She joined Chorus after more than 25 years at leading technology companies including Intuit, Logitech, Trend Micro and ClearSlide. She oversees all marketing and sales development for Chorus, including product marketing, demand generation and communications.

    Chorus.ai, for example, offers a conversation intelligence platform that records, transcribes, and analyzes business conversations in real time to coach reps on how to become top performers.


    What does it mean to bond with a bot? with Anna Oakes

    In this riveting episode, we sit down with Anna Oakes, co-host of the groundbreaking podcast ‘Bot Love’. The show has taken the podcasting world by storm, delving deep into the intertwining realms of love, human relationships, and the influence of AI-driven chatbots. But what's the story behind ‘Bot Love’? Our guest takes us on a journey back to the podcast's inception, sharing the unexpected twists and revelations she faced while hosting the show.

    The conversation takes a reflective turn as we discuss the nature of human-AI connection and the profound questions the show raises. Does AI companionship truly put us more in touch with our humanity? Or does it blur the lines of genuine human connection? Our guest opens up about the lingering questions they grappled with after their deep-dive research for the show.

    As a seasoned journalist, our guest sheds light on the future of AI in the media. Where should our collective curiosity about AI lead us? And what are the gaps and nuances in AI coverage that often go unnoticed? Tune in for an enlightening discussion that pushes the boundaries of technology, love, and self-reflection.

    Learn more about Bot Love:

    https://radiotopiapresents.fm/bot-love

    Learn more about our guest:

    Anna Oakes is an audio producer and journalist. She got her start in audio at Heritage Radio Network, producing English and Spanish stories on food politics, immigration, and labor in New York. Anna worked previously in Madrid, at Revista Contexto, La Marea, and the Association for the Recuperation of Historical Memory, where she reported on colonial legacies and the Franco dictatorship. She is a graduate of Wesleyan University and has an MPhil from the University of Cambridge in Spanish and Comparative Literature. She’s currently an associate editor at Hark Audio. You can find her on Twitter @a_lkoakes.


    Personalized GPTs Are Here + F.T.C. Chair Lina Khan on A.I. Competition + Mayhem at Apefest

    Warning: this episode contains some explicit language.

    OpenAI has unveiled a new way to build custom chatbots. Kevin shows off a few that he’s built – including a custom Hard Fork bot, and a bot that gives investment advice inspired by his late grandpa. 

    Then, we talk to Lina Khan, the chair of the Federal Trade Commission, about the agency’s approach to regulating A.I., and whether the tactics she’s used to regulate big tech companies are working.

    And finally, a Bored Ape Yacht Club event left some attendees' eyes burning, literally. That, plus Sam Bankman-Fried’s recent fraud conviction, has us asking: how much damage hath the crypto world wrought?

    Today’s guest:

    • Lina Khan, chair of the Federal Trade Commission

    Additional reading: 

    • OpenAI’s new tools allow users to customize their own GPTs.
    • Lina Khan believes A.I. disruption demands regulators take a different approach than that of the Web 2.0 era.

    • More than 20 people reported burning eye pain after a Bored Ape Yacht Club party in Hong Kong.

    The Path to Informed eConsent for Clinical Research with Conversational AI

    Clinical research studies are increasing in number, they’re getting more complex, and they must be executed in a very challenging, tight labor environment. These are some of the key drivers of decentralized programs that use technology to lessen the burden of requiring patients to be physically present in order to participate.

    Despite advances in IT, informed consent — the linchpin moment between recruiting and participation — remains a challenge. In this episode, Justin Mardjuki and Greg Kefer discuss why eConsent is so hard, and how modern conversational technology may finally connect the dots by making the consent process easy for patients while also satisfying the multiple regulatory and collaboration requirements of research initiatives.