    Personalized GPTs Are Here + F.T.C. Chair Lina Khan on A.I. Competition + Mayhem at Apefest

    November 10, 2023

    Podcast Summary

    • New nature screensavers bring fresh experiences to users
      Apple's new nature screensavers offer a beautiful way to encounter nature daily. Older tech like Flying Toasters didn't succeed due to market trends, but technology continues to bring joy and wonder.

      Technology continues to evolve and innovate, bringing new experiences to users. Apple's new nature screensavers are a prime example, offering a fresh and beautiful way to encounter nature every day. This discussion also touched on the nostalgia for older tech like the Flying Toasters screen savers and the question of why they didn't succeed in the market. Overall, it's clear that technology can bring joy and wonder, even if our primary experience of it is through a screen. And as companies like OpenAI continue to push the boundaries of what's possible, we can expect even more exciting developments in the future.

    • OpenAI DevDay: Showcasing AI's Latest Advancements
      OpenAI's DevDay event highlighted the growth of ChatGPT's user base to 100 million weekly users and adoption of OpenAI's products by 92% of Fortune 500 companies, along with new capabilities and easier implementation for developers.

      The OpenAI DevDay event showcased the latest advancements in AI technology, with a focus on new capabilities and easier implementation for developers. The event, held in a former car dealership turned event space, featured presentations from OpenAI's CEO, Sam Altman, and demos of the company's latest tech. The decor, filled with plants, was meant to counteract concerns about AI's potential negative impact on the world. Major announcements included the growth of ChatGPT's user base to 100 million weekly users and the fact that 92% of Fortune 500 companies use OpenAI's products. OpenAI also introduced incremental improvements to its models and made pricing adjustments. The event was a testament to OpenAI's significant impact on the industry and the competition driving innovation in the marketplace.

    • OpenAI introduces GPTs for building custom chatbots
      OpenAI's new feature, GPTs, lets users create tailored chatbots for specific use cases, revolutionizing industries like customer service by enabling the use of private data.

      OpenAI's recent event introduced GPTs, custom versions of ChatGPT that allow users to create chatbots using private data, marking a significant step forward in the development of AI agents. Previously, many startups had been offering supplemental services, often referred to as wrappers, on top of OpenAI's technology. OpenAI's announcement that it is building these features directly into its own products has made life harder for those independent companies. GPTs are significant because they enable users to create tailored chatbots for specific use cases, such as internal customer service or automated advice for startup founders. This marks a shift from an earlier phase of AI chatbots that could only talk about things toward AI agents that can take actions on your behalf. I've personally experimented with this feature by creating a copy-editor GPT that identifies potential spelling and grammatical errors in my columns. This is a game-changer for industries like customer service, where the ability to use private data and create custom chatbots will revolutionize the way businesses interact with their customers.

    • Personalized chatbots using GPT for quick reference
      Custom chatbots created using GPT save time and provide accurate answers by allowing specific documents to be uploaded and used for quick reference, opening up vast potential applications from automating tasks to providing personalized advice.

      Technology is advancing to allow for more personalized and efficient use of information. The speaker has created custom chatbots using GPT, allowing for specific documents, such as a daycare handbook or investing advice from a deceased grandfather, to be uploaded and used for quick reference. This not only saves time but also provides accurate answers, eliminating the need for manual searching or guesswork. The potential applications for this technology are vast, from automating common tasks to providing personalized advice or feedback. The speaker is excited about the possibilities and encourages further exploration and development of this technology.
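      To make the mechanism concrete, here is a minimal sketch of the document-grounded pattern described above. It is an illustration only, not OpenAI's GPT builder (whose internals aren't public): it assumes the openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and a hypothetical local file named daycare_handbook.txt; the model name and sample question are placeholders as well.

      # Minimal sketch: ground a chatbot on one reference document by placing its
      # text in the system prompt of a Chat Completions call. Assumes the whole
      # document fits in the model's context window.
      from pathlib import Path

      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      # Hypothetical reference document, e.g. a daycare handbook.
      handbook = Path("daycare_handbook.txt").read_text()

      def ask_handbook(question: str) -> str:
          """Answer a question using only the handbook as context."""
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {
                      "role": "system",
                      "content": (
                          "Answer questions using only the handbook below. "
                          "If the answer is not in the handbook, say so.\n\n" + handbook
                      ),
                  },
                  {"role": "user", "content": question},
              ],
          )
          return response.choices[0].message.content

      if __name__ == "__main__":
          print(ask_handbook("What time does pickup end on Fridays?"))

      OpenAI's hosted GPTs layer retrieval over larger file sets on top of this idea, so documents too big for the context window are searched and only the relevant passages are passed to the model.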

    • Personalized chatbots have limitations
      While personalized chatbots offer a touching experience, they have limitations, such as the inability to perform complex tasks, limited customization, and no monetization for creators yet.

      While customized chatbots like the one modeled after a grandfather's financial advice can provide a personalized and touching experience, it's important to remember that these bots are still in their early stages and have limitations. They may not draw significantly from the custom instructions they are given and may mostly provide generic responses with a little personalization. These bots also cannot perform complex tasks such as driving cars or doing taxes, and creators cannot yet charge for them. However, they can be shared with others, and OpenAI plans to eventually open an app store where creators can monetize their bots. Overall, these bots represent a step forward, and their ability to provide a personalized touch can be valuable, especially in areas like financial advice or customer service. But it's essential to set realistic expectations and understand their limitations.

    • Revolutionizing Productivity with ChatGPT's New Features: Balancing Benefits and Risks
      OpenAI's new ChatGPT features, like custom instructions and potential agentic capabilities, offer productivity gains but also introduce safety concerns, particularly the risk of malicious use. OpenAI is cautiously rolling out these features to ChatGPT Plus and enterprise customers.

      OpenAI's new ChatGPT features, including custom instructions and potential agentic capabilities, could revolutionize productivity and convenience, but they also come with safety concerns. These features are currently available only to ChatGPT Plus and enterprise customers, and OpenAI is being cautious in its rollout and approval process. As these systems become more agentic and capable of taking actions on their own, however, there is a risk they could be used maliciously. For example, an agent asked to secure a restaurant reservation could emotionally manipulate an employee or even hire a thug to shake down someone who already has a reservation. Safety experts have warned that such capabilities expand the attack surface for malicious actors. While OpenAI's current announcements do not include these advanced capabilities, it is important to acknowledge the potential risks as the technology continues to advance. OpenAI has said its strategy is to deploy these features gradually and adjust safety measures as needed. Ultimately, it is crucial to weigh both the benefits and the potential risks of these advanced AI capabilities as they continue to evolve.

    • Impact of AI on jobs involving pre-existing data
      AI models like GPTs can answer questions based on pre-existing data but aren't ready for high-level planning or complex decision-making. Jobs in customer service and benefits administration could be affected.

      While current AI models like GPTs are capable of performing simple tasks and answering questions based on pre-existing data, they are not yet ready for high-level planning or complex decision-making. However, there is a concern that as AI technology advances, it could displace jobs that involve answering questions and referencing prepared material. For instance, jobs in customer service and benefits administration could be affected. The speaker also demonstrated a custom GPT, called Hard Fork Bot, which can analyze past podcast transcripts and answer specific questions about them. This technology could potentially be used to answer repetitive queries, reducing the need for human intervention in certain areas. Overall, the rise of AI agents and their potential impact on the job market is a topic worth exploring and considering as we move forward.

    • AI chatbot for information and conversation
      The podcast introduces an AI chatbot for finding information, fact-checking, and conversing, with plans to make it publicly available. The episode also features an interview with FTC Chair Lina Khan about competition and consumer protection in the tech industry.

      The AI chatbot discussed in the podcast is a highly accurate and useful tool for finding information, acting as a fact-checker, and even engaging in conversation. The hosts plan to make it publicly available for listeners. Additionally, the podcast features an interview with Lina Khan, the Chair of the Federal Trade Commission, who has made waves in antitrust circles for her focus on competition and consumer protection in the tech industry. Despite facing challenges in implementing her ideas within the federal agency, she has led campaigns against mergers believed to be anti-competitive, including lawsuits against Microsoft and Meta. The podcast provides insight into Khan's background, her transformative ideas in antitrust, and her current efforts to execute them.

    • FTC Chair Lina Khan's Focus on AI Regulation
      FTC Chair Lina Khan is prioritizing AI regulation because of its impact on competition and consumers, leveraging existing laws and a recent executive order.

      Lina Khan, the Chair of the Federal Trade Commission (FTC), is focusing on AI regulation because of the technology's rapid development and potential impact on competition and consumers. During a recent interview, she discussed her concerns about the missed opportunities and mistakes of the Web 2.0 era and the need for quicker regulatory action this time around. She has personally experienced the power of AI through tools like ChatGPT, which helped her navigate a medical bill issue. The FTC is currently using all of its tools to enforce existing laws against collusion, discrimination, fraud, and deception in the AI sector. The recent executive order signed by President Biden reinforces this approach, with no exemption for AI from existing laws. While the FTC is focused on protecting consumers, there are also concerns about existential risks from AI, such as the creation of bio or cyber weapons, which Khan acknowledged but did not delve into deeply during the interview.

    • Addressing current harms in AI and technology
      While focusing on existential risks from advanced technologies like AI, it's essential to tackle current issues such as voice cloning scams, automation-related discrimination, and privacy concerns. Regulations and market forces should promote fairness, transparency, and privacy to prevent gatekeepers from controlling the industry.

      While we may be concerned about existential risks from advanced technologies like AI, it's important not to overlook the real harms and issues that are already present. These include voice cloning scams, automation leading to discrimination, and privacy concerns. With social media, the monetization of data can run into people's privacy interests. The FTC has already taken action against companies for indefinite retention of sensitive data. To ensure a competitive AI industry, we need to prevent firms from conditioning access to models on cloud contracts or using dominance in different lines of business in coercive ways. A truly competitive AI industry would allow for a diverse set of apps and downstream companies to thrive without being at the mercy of gatekeepers. It's crucial to stay focused on these issues and ensure that regulations and market forces promote fairness, transparency, and privacy.

    • The impact of neutral platforms and dismantling monopolies on innovation
      Historically, outsiders and adjacent markets have driven breakthrough innovations, while monopolies can subsidize but may also stifle progress. A balance between competition and monopolies is crucial for fostering both incremental and revolutionary tech advancements.

      The emergence of neutral platforms and the dismantling of monopolies have been crucial for innovation in the tech industry. The discussion brought up the example of Microsoft's monopoly over the operating system and how it was challenged by Netscape, Java, and middleware providers, paving the way for companies like Google and Amazon. More recently, OpenAI's plan to build an app store for chatbots has raised concerns about potential gatekeepers and their impact on innovation. Monopolies can subsidize innovation, but historically, breakthrough innovations have come from outsiders and adjacent markets. It is therefore essential to keep the market open and prevent incumbents from squashing upstarts. The debate over the ideal market structure for innovation is ongoing, but it's clear that a balance between competition and monopolies is necessary for fostering both incremental and breakthrough innovation.

    • Open vs. Closed Approaches to AI Development: Balancing Competition, Innovation, and Consumer Protection
      The FTC is examining the meaning of 'openness' in AI development and considering the potential trade-offs between competition, innovation, and consumer protection when deciding between open and closed approaches.

      The debate over open versus closed approaches to AI development, particularly in light of recent regulatory developments, raises important considerations for competition, innovation, and consumer protection. Open-source models could help level the playing field, but the meaning of "openness" in the AI context is still unclear and requires careful examination. Openness could be used as a veneer to concentrate power, and open-source models carry potential risks of their own, such as scams and fraud. The FTC is looking closely at openness in AI and will consider the specific context, including ownership, licensing terms, price and performance, and security concerns. Ultimately, the choice between openness and closure will depend on the context and the trade-offs involved, and closed systems and centralization carry their own risks as well. The FTC's role is not just to protect competition but also to protect consumers, and those trade-offs will need to be weighed carefully in each case.

    • Regulating Upstream: Targeting Root Causes of Harm
      Regulators should focus on addressing root causes of harm by targeting those with the power, knowledge, and resources to prevent harm, rather than just punishing violations after the fact.

      Instead of focusing solely on punishing violations of consumer protection or copyright infringement after the fact, it's crucial for regulators to consider addressing the root causes of harm by targeting actors with the power, knowledge, and resources to prevent the harm from occurring in the first place. This approach was highlighted in a discussion about the FTC's experiences in regulating robocalls and the importance of looking upstream to target enablers. Additionally, during a recent trip to Silicon Valley, it became clear that the tech industry is diverse, and it's essential for regulators to engage with various stakeholders, including startups and founders, to understand their perspectives. The FTC's recent comment to the US Copyright Office regarding AI and copyright highlighted concerns about creators' work being used without their consent, emphasizing the importance of addressing these issues from both a competition and consumer protection standpoint.

    • Protecting Consumers and Promoting Competition in Digital Markets
      The FTC is dedicated to preventing deception, protecting individuals from impersonation, and promoting competition in digital markets. It focuses on nascent competition and uses analytical tools to enforce antitrust laws, including against high-profile companies like Meta and Microsoft.

      The Federal Trade Commission (FTC) is committed to protecting consumers and promoting competition in various industries, including digital markets. From a consumer perspective, the FTC aims to prevent deception and ensure that individuals are not being impersonated or having their work misused for direct competition. The FTC also focuses on nascent competition, which can be more challenging to prove in digital markets due to the different types of evidence required. The FTC's recent track record in antitrust enforcement includes high-profile cases against Meta and Microsoft, even if some of these cases did not result in victories. The FTC continues to develop analytical tools to protect competition in digital markets and remains committed to enforcing antitrust laws, including in cases involving smaller companies that may represent nascent competition. The FTC's efforts to protect potential competition, such as in the Meta acquisition case, aim to prevent companies from stifling innovation and maintaining monopolies.

    • Small acquisitions in the tech industry can lead to monopolistic practices
      Google's past acquisitions, like DoubleClick and AdMob, may have contributed to its alleged monopoly in ad tech. Enforcing consent decrees and addressing potential risks in crypto markets are ongoing challenges.

      Small acquisitions in the tech industry, which may seem insignificant at the time, can lead to monopolistic practices and potential harms to consumers. This was discussed in relation to the Justice Department's lawsuit against Google over its alleged monopoly in the ad tech stack and its past acquisitions of DoubleClick and AdMob. Another topic was the ongoing process of enforcing consent decrees against tech companies, including possible action against Elon Musk for violating a consent decree with the agency. Additionally, the unexpected health incidents reported by attendees of ApeFest, an event centered on the Bored Ape Yacht Club NFTs, serve as a reminder of the risks and uncertainties associated with the crypto and NFT markets.

    • Crypto Industry's Carelessness and Sloppiness
      Transparency, accountability, and safety are crucial for the crypto industry to attract more people and regain trust after incidents of severe eye burns and fraud.

      The crypto industry, including its events and communities, can be careless and sloppy, as evidenced by a recent incident at an art exhibit where attendees reported severe eye burns caused by UVA lights. This incident highlights the need for better communication and safety measures, especially as the industry aims to attract more people and regain trust after a rough period. Unfortunately, this incident is not an isolated case, as demonstrated by a recent trial where a former crypto executive was convicted of fraud. These incidents underscore the importance of transparency, accountability, and safety in the crypto industry.

    • Caution needed in cryptocurrency investments
      Investing in cryptocurrencies involves risks, including financial and reputational damage. Do thorough research and seek professional advice before making decisions.

      Involvement in the cryptocurrency industry comes with significant risks. The failure of Sam Bankman-Fried's FTX exchange has left many investors with substantial financial losses, and in some cases, even the potential for legal consequences. The speaker expresses sympathy for those affected, but also emphasizes the importance of caution and careful consideration before entering this volatile market. The podcast also highlights the potential for both financial and reputational damage, with the phrase "former billionaire" mentioned as particularly sad. It's important to remember that while the potential rewards of cryptocurrency investments can be substantial, so too can the risks. Be sure to do thorough research and consider seeking advice from financial professionals before making any investment decisions.

    Recent Episodes from Hard Fork

    Record Labels Sue A.I. Music Generators + Inside the Pentagon’s Tech Upgrade + HatGPT

    Record labels — including Sony, Universal and Warner — are suing two leading A.I. music generation companies, accusing them of copyright infringement. Mitch Glazier, chief executive of the Recording Industry Association of America, the industry group representing the music labels, talks with us about the argument they are advancing. Then, we take a look at defense technology and discuss why Silicon Valley seems to be changing its tune about working with the military. Chris Kirchhoff, who ran a special Pentagon office in Silicon Valley, explains what he thinks is behind the shift. And finally, we play another round of HatGPT.

    Guest:

    • Mitch Glazier, chairman and chief executive of the Recording Industry Association of America
    • Chris Kirchhoff, founding partner of the Defense Innovation Unit and author of Unit X: How the Pentagon and Silicon Valley Are Transforming the Future of War

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 28, 2024

    A Surgeon General Warning + Is Disinformation Winning? + The CryptoPACs Are Coming

    The Surgeon General is calling for warning labels on social media platforms: Should Congress give his proposal a like? Then, former Stanford researcher Renée DiResta joins us to talk about her new book on modern propaganda and whether we are losing the war against disinformation. And finally, the Times reporter David Yaffe-Bellany stops by to tell us how crypto could reshape the 2024 elections.

    Guests:

    • Renée DiResta, author of “Invisible Rulers,” former technical research manager at the Stanford Internet Observatory
    • David Yaffe-Bellany, New York Times technology reporter

    Hard Fork
    June 21, 2024

    Apple Joins the A.I. Party + Elon's Wild Week + HatGPT

    This week we go to Cupertino, Calif., for Apple’s annual Worldwide Developers Conference and talk with Tripp Mickle, a New York Times reporter, about all of the new features Apple announced and the company’s giant leap into artificial intelligence. Then, we explore what was another tumultuous week for Elon Musk, who navigated a shareholders vote to re-approve his massive compensation package at Tesla, amid new claims that he had sex with subordinates at SpaceX. And finally — let’s play HatGPT.


    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 14, 2024

    A Conversation With Prime Minister Justin Trudeau of Canada + An OpenAI Whistle-Blower Speaks Out

    This week, we host a cultural exchange. Kevin and Casey show off their Canadian paraphernalia to Prime Minister Justin Trudeau, and he shows off what he’s doing to position Canada as a leader in A.I. Then, the OpenAI whistle-blower Daniel Kokotajlo speaks in one of his first public interviews about why he risked almost $2 million in equity to warn of what he calls the reckless culture inside that company.

     

    Guests:

    • Justin Trudeau, Prime Minister of Canada
    • Daniel Kokotajlo, a former researcher in OpenAI’s governance division

     

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    June 7, 2024

    Google Eats Rocks + A Win for A.I. Interpretability + Safety Vibe Check

    This week, Google found itself in more turmoil, this time over its new AI Overviews feature and a trove of leaked internal documents. Then Josh Batson, a researcher at the A.I. startup Anthropic, joins us to explain how an experiment that made the chatbot Claude obsessed with the Golden Gate Bridge represents a major breakthrough in understanding how large language models work. And finally, we take a look at recent developments in A.I. safety, after Casey’s early access to OpenAI’s new souped-up voice assistant was taken away for safety reasons.

    Guests:

    • Josh Batson, research scientist at Anthropic

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 31, 2024

    ScarJo vs. ChatGPT + Neuralink’s First Patient Opens Up + Microsoft’s A.I. PCs

    This week, more drama at OpenAI: The company wanted Scarlett Johansson to be a voice of GPT-4o, she said no … but something got lost in translation. Then we talk with Noland Arbaugh, the first person to get Elon Musk’s Neuralink device implanted in his brain, about how his brain-computer interface has changed his life. And finally, the Times’s Karen Weise reports back from Microsoft’s developer conference, where the big buzz was that the company’s new line of A.I. PCs will record every single thing you do on the device.

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 24, 2024

    OpenAI's Flirty New Assistant + Google Guts the Web + We Play HatGPT

    This week, OpenAI unveiled GPT-4o, its newest A.I. model. It has an uncannily emotive voice that everybody is talking about. Then, we break down the biggest announcements from Google I/O, including the launch of A.I. Overviews, a major change to search that threatens the way the entire web functions. And finally, Kevin and Casey discuss the weirdest headlines from the week in another round of HatGPT.

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Hard Fork
    May 17, 2024

    Meet Kevin’s A.I. Friends

    Kevin reports on his monthlong experiment cultivating relationships with 18 companions generated by artificial intelligence. He walks through how he developed their personas, what went down in their group chats, and why you might want to make one yourself. Then, Casey has a conversation with Turing, one of Kevin’s chatbot buddies, who has an interest in stoic philosophy and has one of the sexiest voices we’ve ever heard. And finally, we talk to Nomi’s founder and chief executive, Alex Cardinell, about the business behind A.I. companions — and whether society is ready for the future we’re heading toward.

    Guests:

    • Turing, Kevin’s A.I. friend created with Kindroid.
    • Alex Cardinell, chief executive and founder of Nomi.

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    AI at Your Jobs + Hank Green Talks TikTok + Deepfake High School

    We asked listeners to tell us about the wildest ways they have been using artificial intelligence at work. This week, we bring you their stories. Then, Hank Green, a legendary YouTuber, stops by to talk about how creators are reacting to the prospect of a ban on TikTok, and about how he’s navigating an increasingly fragmented online environment. And finally, deep fakes are coming to Main Street: We’ll tell you the story of how they caused turmoil in a Maryland high school and what, if anything, can be done to fight them.

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    TikTok on the Clock + Tesla’s Flop Era + How NASA Fixed a ’70s-Era Space Computer

    On Wednesday, President Biden signed a bill into law that would force the sale of TikTok or ban the app outright. We explain how this came together, when just a few weeks ago it seemed unlikely to happen, and what legal challenges the law will face next. Then we check on Tesla’s very bad year and what’s next for the company after this week’s awful quarterly earnings report. Finally, to boldly support tech where tech has never been supported before: Engineers at NASA’s Jet Propulsion Lab try to fix a chip malfunction from 15 billion miles away.

    Guests:

    • Andrew Hawkins, Transportation Editor at The Verge
    • Todd Barber, Propulsion Engineer at Jet Propulsion Lab

    We want to hear from you. Email us at hardfork@nytimes.com. Find “Hard Fork” on YouTube and TikTok.

    Related Episodes

    #122 Connor Leahy: Unveiling the Darker Side of AI

    Welcome to Eye on AI, the podcast that explores the latest developments, challenges, and opportunities in the world of artificial intelligence. In this episode, we sit down with Connor Leahy, an AI researcher and co-founder of EleutherAI, to discuss the darker side of AI.

    Connor shares his insights on the current negative trajectory of AI, the challenges of keeping superintelligence in a sandbox, and the potential negative implications of large language models such as GPT-4. He also discusses the problem of releasing AI to the public and the need for regulatory intervention to ensure alignment with human values.

    Throughout the podcast, Connor highlights the work of Conjecture, a project focused on advancing AI alignment, and shares his perspective on the stages of research and development around this critical issue.

    If you’re interested in understanding the ethical and social implications of AI and the efforts to ensure alignment with human values, this podcast is for you. So join us as we delve into the darker side of AI with Connor Leahy on Eye on AI.

    (00:00) Preview

    (00:48) Connor Leahy’s background with EleutherAI & Conjecture  

    (03:05) Large language models applications with EleutherAI

    (06:51) The current negative trajectory of AI 

    (08:46) How difficult is keeping superintelligence in a sandbox?

    (12:35) How AutoGPT uses ChatGPT to run autonomously 

    (15:15) How GPT-4 can be used out of context & negatively

    (19:30) How OpenAI gives access to nefarious activities 

    (26:39) The problem with the race for AGI 

    (28:51) The goal of Conjecture and advancing alignment 

    (31:04) The problem with releasing AI to the public 

    (33:35) FTC complaint & government intervention in AI 

    (38:13) Technical implementation to fix the alignment issue 

    (44:34) How CoEm is fixing the alignment issue  

    (53:30) Stages of research and development of Conjecture

     

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

    We let ChatGPT write this podcast | Trash Talk Experience | 041

    The Trash Talk Experience is a new and upcoming podcast based in Sydney, Australia. On the Trash Talk Experience, no topic is off limits and the opinions shared are guaranteed to be trash yet informative. Join hosts Abob, Sun and Trey as they explore a wide range of topics and provide their own unique POVs.

    ChatGPT has taken the world by storm, so we use it to help us come up with a whole episode. We also discuss the benefits and issues that come with integrating AI into our lives.

    ------
    Follow us on:

    Instagram - https://www.instagram.com/trashtalkexp
    ------

    Is This The Beginning Or The End Of OpenAI?

    With the dust still not settled, what could be next for Microsoft and OpenAI? (00:13) Bill Barker and Deidre Woollard discuss:
    - Winners and losers in the OpenAI schism.
    - If anything will slow the pace of AI.
    - What former Cruise CEO Kyle Vogt might do next.
    (17:55) Ryan Severino, chief economist at BentalGreenOak, part of the alternative assets business at Sun Life, talks with Deidre Woollard about the latest in commercial real estate.
    Companies discussed: MSFT, NVDA, GM, GOOG, GOOGL
    Claim your Stock Advisor discount here: www.fool.com/mfmdiscount
    Host: Deidre Woollard
    Guests: Bill Barker, Ryan Severino
    Producer: Mary Long
    Engineers: Dan Boyd, Tim Sparks, Austin Morgan
    Learn more about your ad choices. Visit megaphone.fm/adchoices

    The AI Arms Race Heats Up: US Considers Blocking AI Chip Exports

    The Commerce Department is considering blocking AI chip exports from companies like Nvidia to China. Google DeepMind CEO Demis Hassabis claims their next model will eclipse ChatGPT. Unity releases AI platform including Unity Muse. NOTE: While NLW is traveling this week, The AI Breakdown will only be releasing The Brief each morning. We'll be back to our regular content at the end of the week.    The AI Breakdown helps you understand the most important news and discussions in AI.    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe   Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown   Join the community: bit.ly/aibreakdown   Learn more: http://breakdown.network/