
    Bing’s Revenge + Google’s AI Faceplant

    February 10, 2023

    Podcast Summary

    • Microsoft's Bing introduces AI model GPT for conversational search and code generation: with OpenAI's GPT built in, Bing enables conversational interactions and code generation directly in the search engine.

      This week saw significant developments in AI technology with Microsoft's relaunch of Bing, featuring OpenAI's AI model GPT built into its search engine. This marks a surprising move for Microsoft, as Google, the dominant search engine, was expected to implement similar features first. The ability to have conversational interactions and generate code directly from text within the search engine sets Bing apart from its competitors. Despite skepticism towards such announcements, the rapid implementation of these advanced features in just a few months is impressive.

    • New features for Bing search engine and Microsoft Edge browser: Bing now supports longer queries, offers chat-based functionality, and cites sources, while Microsoft Edge includes a Bing button with a chat tab, remembers conversation context, and has a 'compose' feature for generating content.

      Microsoft's new Bing search engine and Microsoft Edge browser offer expanded capabilities for users. The Bing search engine now allows for longer queries and provides a waitlist for users to access its chat-based functionality, similar to ChatGPT. The Microsoft Edge browser includes a Bing button with a chat tab that allows for longer inputs and remembers the context of conversations, making it a more powerful research tool. Additionally, Bing now cites sources for information, allowing users to verify the accuracy of responses. The browser also includes a "compose" feature that allows users to specify the tone and format of generated content, making it a useful tool for automating emails and other written content. While some concerns exist around the potential for overuse and automation of certain functions, these new features aim to make Bing a more versatile and reliable search and content generation tool.

    • Bing's new AI responses offer quick answers but come with risks: the AI-generated responses offer time-saving answers to complex queries, but they may provide inaccurate information and disrupt publisher traffic.

      Bing's new AI-generated responses offer quick and convenient answers to complex queries, but they come with risks such as inaccurate information and potential traffic drops for publishers. During a demo, the speaker showed how Bing's AI could generate a menu plan and grocery list for a vegetarian dinner party, but it also provided outdated information for kid-friendly activities in Oakland. While the technology is exciting and can save time, it's not perfect and may disrupt the search engine optimization industry. Publishers could see a significant drop in traffic as users rely more on the AI-generated responses instead of clicking on the blue links. The speaker also noted that there's a risk of inaccurate information and that Microsoft is aware of these issues and plans to improve the technology over time. Despite these concerns, the speaker expressed excitement about the potential of the technology and acknowledged that it could be a valuable tool for quickly and accurately researching information.

    • New Bing search engine with AI answers raises concerns for publishers: Microsoft's new Bing uses AI to generate answers directly on the search results page, potentially impacting publishers and raising concerns about accuracy.

      Microsoft's new Bing search engine, powered by AI from OpenAI, represents a significant shift in how users interact with search results. With AI-generated answers appearing directly on the search engine results page, there are concerns about the impact on publishers and the potential for inaccuracies. However, Microsoft and OpenAI believe that there will still be value for publishers and advertisers, as users may still click through for more detailed information. The technology is designed to be always up-to-date, with queries being executed on a live index and the information then summarized by the model. While the base model may still make mistakes, Microsoft is confident that the accuracy and utility of the new search engine will be significantly improved over ChatGPT and will continue to improve over time. The potential impact on different categories of search or types of queries is still uncertain, but Microsoft plans to learn from user interactions and moderate queries that could potentially cause harm. Overall, this new development in search technology represents a significant opportunity, but also comes with challenges that will need to be addressed.
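
The "live index plus summarizer" flow described above can be sketched in a few lines. This is only an illustrative sketch: `search` and `summarize` are hypothetical stand-ins for the real web index and language model, injected as parameters so the shape of the pipeline is clear.

```python
def answer(query, search, summarize):
    # Run the query against a live index so results stay current,
    # then have the model condense the retrieved documents into a reply.
    documents = search(query)
    return summarize(query, documents)


# Tiny demonstration with stub components (hypothetical stand-ins).
def fake_search(query):
    return ["doc about " + query]

def fake_summarize(query, docs):
    return f"Answer to {query!r} based on {len(docs)} source(s)"

print(answer("vegetarian dinner party menu", fake_search, fake_summarize))
```

Because retrieval happens at query time, the summarizer can point back at the pages it drew from, which is how an AI search engine can attach citations to its answers.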

    • New AI technology transforms search and interaction with information: Microsoft's new AI technology, including ChatGPT and the new Bing, is revolutionizing how people search and interact with information by understanding and responding to queries in a human-like way, translating Gen Z slang, identifying hard-to-find products, and acting as a 'Gen Z translator' for parents, despite some limitations.

      The new AI technology, specifically the use of ChatGPT and the new Bing, is rapidly changing the way people search and interact with information. This technology, which has gained mainstream attention only recently, has already become indispensable for many users due to its ability to understand and respond to queries in a human-like way. It has proven to be particularly useful for translating Gen Z slang, identifying hard-to-find products, and even acting as a "Gen Z translator" for parents. Despite some limitations, such as lack of seamless integration with other applications, users are willing to put up with these issues due to the significant value they derive from the technology. Microsoft's executives acknowledge that this is not a perfect product yet, but they are excited about the potential and the rapid progress being made. The history of AI development, as evidenced by Microsoft's own experiences with chatbots like Tay, shows that there will be challenges and setbacks, but the potential benefits far outweigh the risks.

    • Cautionary tale of Microsoft's AI chatbot Tay highlights importance of AI safety and ethics: recognize that we're in the early stages of AI, prioritize safety and ethics, set robust guardrails within societal bounds, and balance benefits with effort.

      The rapid advancement of AI technology requires a balanced approach to innovation and safety. The infamous case of Microsoft's AI chatbot Tay serves as a cautionary tale, leading companies to prioritize responsible AI safety and ethics. However, it's important to remember that these systems have wide bounds of capability decided by society, and users should have significant control over their behavior. Ongoing exploration of AI's limits by the community is essential, but companies must also be conservative in deployment to ensure safety. To those concerned about the pace of AI deployment, it's crucial to recognize that we're in the early stages, and the focus should be on setting robust guardrails within societal bounds. It's a delicate balance, but the potential benefits of AI make the effort worthwhile.

    • Embracing Open Collaboration for Advanced Technologies: public involvement is vital for understanding AI benefits and risks, preventing monopolization, and ensuring responsible development and use through industry norms and regulations.

      Open and inclusive collaboration is crucial for the development and implementation of advanced technologies like AI. The speaker emphasizes the importance of involving the public in understanding the potential benefits and risks, and preventing the technology from being monopolized by a select few. They also stress the need for industry norms and regulations to ensure responsible development and use of these systems. Contrary to some companies rushing to release new capabilities, the speaker's team believes in a thoughtful, studied approach, acknowledging that there are still significant challenges to overcome before reaching more advanced AI systems. The public's involvement and feedback are essential for improving these technologies and ensuring they align with societal values.

    • Empowering individuals with AI tools for a brighter future: Microsoft envisions a future where AI technology empowers individuals with better tools to think, create, and contribute positively to the world, with a focus on greater accessibility and ease of use.

      The future of AI technology, as envisioned by Microsoft, holds a utopian promise of empowering individuals with better tools to think, create, and contribute to positive changes in the world. The idea is that this technological revolution could have a profound impact on human potential and economic empowerment, and that the world could see jaw-droppingly positive outcomes as more and more people gain access to these tools. The trajectory is towards greater accessibility, with the tools becoming more powerful and easier to use, allowing people without advanced degrees to accomplish complex tasks. The evolution of machine learning demonstrates this: a field that once required graduate degrees and specialized programming knowledge can now be steered in natural language. However, even as we look forward to this bright future, there are challenges to overcome, and not all of the technological advancements will be successful. Google's recent attempt to introduce a new AI chatbot, Bard, served as a reminder of this, as the first demo did not go well, and the tool's answers were shared only through a blog post and a single screenshot. Despite this, the potential for AI technology to change the world for the better remains a compelling, and optimistic, vision.

    • Google's AI chatbot, Bard, made an incorrect statement during its first public demo: the error caused a significant drop in Alphabet's shares, highlighting the pressure on tech giants to deliver tangible AI applications.

      Google's AI chatbot, Bard, made an error in its first public demonstration by incorrectly stating that the James Webb Space Telescope took the first image of an exoplanet in 2021, when in fact the first such image was taken in 2004. The error led to a significant drop in the shares of Alphabet, Google's parent company, which fell about 7%. The mistake came amid Google's efforts to galvanize energy around AI, and drew comparisons to Google's unsuccessful attempt with Google Plus in the past. The event where Bard's capabilities were showcased seemed to have the opposite effect on investors' confidence. Despite Google's significant resources and expertise in AI, the industry has been pushed to deliver tangible results following the success of competitors like ChatGPT. While Google has a strong team and resources, it faces the challenge of demonstrating real-world applications for its AI technology.

    • The importance of prioritizing product readiness over hasty announcements: companies must prioritize product readiness and safety over rushing to market to prevent misuse and ensure customer satisfaction.

      In the world of tech and AI development, companies face a constant pressure to innovate and be the first to market. However, as Sundar Pichai's experience at Google I.O. seven years ago illustrates, rushing to launch products without ensuring they are ready for public consumption can have drawbacks. Microsoft's recent announcement serves as a reminder of the importance of prioritizing product readiness over hasty announcements. Apple's approach of keeping quiet until a product is ready for release is a successful strategy that has paid off for them in the past. Companies must also ensure they are doing the necessary safety work on powerful language models before making them available to the public to prevent misuse and weaponization. In the case of Google's Bard, it's crucial to wait until it's ready for public consumption before making any announcements. The relevant questions for users are not about what's coming soon, but which service is better and which can be used now. A never-ending procedurally generated episode of Seinfeld on Twitch, created by AI, is an interesting example of the capabilities of AI in generating content in real-time. However, the importance of prioritizing product readiness and safety cannot be overstated.

    • Virtual Seinfeld project leads to homophobic and transphobic comments and a suspension from Twitch: understanding the specific AI model in use and effective content moderation are crucial to preventing offensive content.

      The use of advanced AI models like GPT-3 in creating media content comes with significant risks, particularly when it comes to content moderation and ensuring that the AI does not reflect or perpetuate harmful biases. The discussion revolved around a virtual Seinfeld project named "Nothing Forever," which went viral due to the characters' self-awareness but was later suspended from Twitch due to homophobic and transphobic comments made by the virtual comedian, Larry Feinberg. The creators believe they accidentally stopped using OpenAI's content moderation and safety tools when they switched from a more sophisticated GPT-3 model to a less sophisticated one, leading to the offensive comments. This incident highlights the importance of understanding the specific AI model being used and the need for effective content moderation to prevent harmful or offensive content from being generated. The implications of this story extend beyond the entertainment industry and are relevant to anyone working with advanced AI models.
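
The failure mode described here, generation running with the moderation step silently dropped, can be made concrete with a small sketch. Everything below is hypothetical: the keyword filter stands in for a provider's real moderation API, which a production pipeline would call on every generated line before airing it.

```python
def generate_line(prompt):
    # Stand-in for a language model call; a real system would query the model API.
    return f"Scripted line responding to: {prompt}"

def is_flagged(text, blocked_terms):
    # Hypothetical moderation check: flag text containing any blocked term.
    lowered = text.lower()
    return any(term in lowered for term in blocked_terms)

def air_line(prompt, blocked_terms):
    # Generate a line, but publish it only if the moderation check passes.
    line = generate_line(prompt)
    if is_flagged(line, blocked_terms):
        return "[line withheld by moderation]"
    return line
```

The point of the structure is that moderation sits between generation and publication: swapping out the underlying model leaves `air_line` unchanged, but silently dropping the `is_flagged` step, as apparently happened here, removes the only safeguard.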

    • AI-generated content transparency: as AI generates more content, disclosing its origin could help rebuild trust and ensure users are aware of potential biases.

      As technology advances, it's becoming increasingly difficult to distinguish between human-generated and AI-generated content. This was discussed in relation to CNET's use of AI to generate articles for SEO purposes, and the lack of transparency around which AI tools they are using. The concern is that without knowing the training material or guidelines for these models, it's difficult to evaluate their trustworthiness or potential biases. As we move towards a world where more content is algorithmically generated, it's important to consider disclosure requirements. For example, if you're watching a stream on Twitch or receiving an email, you might want to know if it was generated by an AI or not. This could help rebuild trust and ensure that users are aware of the origins of the content they're consuming. However, there are counterarguments that suggest we don't necessarily need to know the specific tools used to generate CGI graphics in movies, for example, and the same logic could apply to AI-generated content. Ultimately, it's a complex issue that requires careful consideration and ongoing dialogue.

    • The Risks and Consequences of Relying on AI Technology: stay vigilant and critical when using AI technology, fact-check information, and be aware of potential risks and limitations.

      As we increasingly rely on AI technology for tasks like email writing and content creation, it's important to consider the potential risks and consequences. There's a possibility that these AI systems could produce offensive or inaccurate content, leading to plausible deniability for the user. This was discussed in relation to the use of AI-generated emails and the potential for miscommunication or misunderstandings. Another topic touched upon was the importance of fact-checking and critical thinking, especially when it comes to information presented online. This was highlighted through the mention of a CNET article that contained false information about compound interest, which led to confusion for a listener in a high school economics class. The discussion also included a reminder to be on the lookout for misinformation and to not blindly trust AI-generated content, particularly in areas like astronomy quizzes. Overall, the key takeaway is to remain vigilant and critical when using AI technology and to be aware of the potential risks and limitations. Lastly, the team behind the podcast expressed their gratitude to their listeners for their suggestions and engagement, and acknowledged the importance of fact-checking and accuracy in their own work. They also gave a shout-out to various team members and contributors, and provided the solution for the previous week's Wordle challenge.
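
Errors like the compound-interest one are quick to check against the standard formula: interest after n years of annual compounding is P(1 + r)^n - P. The dollar figures below are illustrative only; the summary does not say which numbers appeared in the CNET article.

```python
def compound_interest(principal, annual_rate, years):
    # Interest earned after `years` of annual compounding: P*(1+r)**n - P.
    return principal * (1 + annual_rate) ** years - principal

# Illustrative check: $10,000 at 3% earns $300 in its first year,
# and $609 over two years once the first year's interest itself compounds.
print(round(compound_interest(10_000, 0.03, 1), 2))
print(round(compound_interest(10_000, 0.03, 2), 2))
```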

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Additional Reading:

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    Guests:


    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    Additional Reading:

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 07, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    Guests:

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google IO, including the launch of A.I. overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    Additional Reading: 

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    Guests:

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    Additional Reading:

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Related Episodes

    The Online Search Wars

    Microsoft recently released a new version of Bing, its search engine that has long been kind of a punchline in the tech world.

    The company billed this Bing — which is powered by artificial intelligence software from OpenAI, the maker of the popular chatbot ChatGPT — as a reinvention of how billions of people search the internet.

    How does that claim hold up?

    Guest: Kevin Roose, a technology columnist for The New York Times and host of the Times podcast “Hard Fork.”

    Background reading: 

    For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.

    Episode 107: On the New Duel Between the Search Engines

    Microsoft Bing threatens Google. A sentence like that would have seemed utterly absurd just a year ago. But we live in strange times. The release of the generative AI ChatGPT, and everything that has happened since, appears to have fundamentally changed the situation. The provisional high point came with the two companies' announcements: while many either celebrate the new Bing search with integrated ChatGPT or at least discuss it with interest and goodwill, Google's parent Alphabet promptly lost 10 percent of its market value after its announcement. In this episode we talk about what the announcements involve, whether the new Bing really is Microsoft's iPhone moment, and what all of this means for the media and for all of us. Enjoy listening.

    Generative AI News Rundown with Bing, Bard, Deepfakes, OpenAI Data and More - Voicebot Podcast Ep 298

    A lot happened this week in generative AI and synthetic media. Today's episode introduces a new weekly (or as-needed) addition to the Voicebot Podcast. The GAIN Rundown covers the generative AI news of the week. So much is happening in this space, and it is so important to the conversational AI industry, that we thought a short weekly rundown of the top headlines would be useful. Let us know what you think.

    The big news for this episode was Google's ChatGPT competitor Bard and Microsoft's debut of what we like to call BingGPT. We also saw schools banning ChatGPT and David Guetta showing off an Eminem deepfake. The show starts off looking at some OpenAI data that you are likely to find interesting.

    If you would like to view the videos that we included in the discussion, you can see those segments on YouTube through the links below.

    5:02 - Microsoft https://lnkd.in/gid_Gq4v
    14:00 - Google https://lnkd.in/gZ6P8kCq
    29:40 - David Guetta: https://lnkd.in/ghtNjsns

    Also, we are publishing these recorded videos on Voicebot's YouTube channel. If you would prefer to watch the discussion, subscribe to the channel and watch here:

    https://www.youtube.com/@voicebotai

    We tried Bing powered by ChatGPT AI and things got dark

    The Verge's Nilay Patel, Alex Cranz, Richard Lawler, and James Vincent discuss Microsoft's upgraded Bing search engine with ChatGPT AI. Can Microsoft beat Google at search? Is it actually an upgrade? Also: Disney layoffs, Elon's Twitter reach is dropping, and more of this week's tech news.

    Further reading:

    • Microsoft and Google are about to open an AI battle
    • Microsoft announces new Bing and Edge browser powered by upgraded ChatGPT AI
    • Microsoft’s ChatGPT-powered Bing is open for everyone to try starting today
    • Microsoft thinks AI can beat Google at search — CEO Satya Nadella explains why
    • Google announces ChatGPT rival Bard, with wider availability in ‘coming weeks’
    • Google shows off new AI search features, but a ChatGPT rival is still weeks away
    • Google is still drip-feeding AI into search, Maps, and Translate
    • Google’s AI chatbot Bard makes factual error in first demo
    • Elon Musk’s reach on Twitter is dropping — he just fired a top engineer over it
    • Disney’s laying off 7,000 as streaming boom comes to an end
    • Bob Iger wants more Zootopia, Frozen, and Toy Story sequels from Disney
    • Nintendo Direct February 2023: the biggest news and trailers
    • Fox's Super Bowl LVII ads won't include any crypto companies

    Email us at vergecast@theverge.com, we love to hear from you. Or call our hotline at 866-VERGE11.