Podcast Summary
AI gaining attention in Washington D.C. with bipartisan working group formed: A bipartisan working group was formed in the House Financial Services Committee to explore AI's impact on the financial services and housing industries. OpenAI CEO Sam Altman met with House Speaker Mike Johnson to discuss AI safety and risks. A new AI device, the Rabbit R1, sold 10,000 units in a day, highlighting AI's growing role across industries.
The use of artificial intelligence (AI) is gaining increasing attention in Washington D.C., as evidenced by the recent formation of a bipartisan working group in the House Financial Services Committee. The group, which will explore the impact of AI on the financial services and housing industries, is led by Republican French Hill and Democrat Stephen Lynch. Its formation underscores AI's rise as a political issue, though it remains to be seen whether any progress will be made during an election year. Meanwhile, OpenAI CEO Sam Altman met with House Speaker Mike Johnson to discuss AI safety and risks. Johnson's office stated that they discussed the potential benefits and risks of AI and other technologies, and the Speaker believes Congress should encourage innovation while staying mindful of potential risks. Altman added that they discussed finding a balance between the tremendous upside of AI and mitigating its risks. Another indication of AI's growing importance came from the rapid sell-out of the first manufacturing run of a new AI device, the Rabbit R1, which sold 10,000 units in a single day. Overall, these developments underscore the growing role of AI across industries and the need for policymakers to address its potential risks and benefits.
Hardware is a better strategy for startups than apps: Building a hardware product offers unique features, an advanced user experience, and a lower barrier to entry, whereas apps are hard to maintain and struggle to build customer loyalty amid constant updates and platform-compatibility demands.
Rabbit's CEO, Jesse Lyu, believes that building a hardware product like the Rabbit R1 is a better strategy for startups than developing an app. According to Lyu, apps are easy to build but difficult to maintain and to build customer loyalty around, given the need for constant feature matching and platform compatibility. Additionally, apps cannot offer the advanced features and user experience that Rabbit OS, which he describes as a generation ahead of current app-based operating systems, can provide. By focusing on hardware, Rabbit can create a high-quality product that offers unique features and lowers the barrier to entry for early adopters. While the Rabbit R1 doesn't aim to replace smartphones, it offers a more intuitive user experience for certain tasks and allows users to teach it new functions. Ultimately, Lyu argues that tech progresses through revolution rather than incremental improvement, and that the Rabbit team represents the future of technology by pushing boundaries and innovating in the hardware space.
AI in Motorsports: Successes and Challenges: AI is transforming industries, but one implementation in motorsports drew backlash: fans objected to an AI influencer representing the female fan base of a male-dominated sport. Retailers, meanwhile, are successfully using AI to enhance online shopping experiences.
While the use of AI in various industries continues to grow, it comes with its challenges. In the world of motorsports, Mahindra Racing's attempt to create an AI influencer, Ava Beyond Reality, faced significant backlash from fans due to the male-dominated nature of the industry. The failure of this experiment raises questions about the willingness of racing organizations to hire real women instead of AI influencers to represent their female fan base. On the other hand, AI-generated scams have become increasingly common, with fake Taylor Swift giveaways being a recent example. Retailers, however, are finding success in integrating generative AI tools into their workflows to improve online shopping experiences, such as AI-powered chatbots and checkout mechanisms. Overall, while AI offers new possibilities, it's important to be aware of its limitations and potential risks.
Most GPTs have little usage, but top ones have substantial conversations: Research shows that the majority of custom GPTs see minimal usage, while the top ones account for a substantial number of conversations. Developers can differentiate their GPTs by adding APIs for feedback and direct messaging.
While a significant number of custom GPTs exist, the majority of them see very little usage. According to research by Gary Song, about two-thirds of the publicly accessible GPTs had fewer than 10 conversations; the top GPTs, however, had a substantial number. Gary is skeptical of reading too much into this trend, suggesting that a GPT's value may lie not in its public usage but in its private use for automating workflows. He also notes that the quality of GPTs appears to have declined since December 2023, with fewer GPTs having custom functionality. His advice is for developers to add an API for feedback and direct messaging within their GPTs, differentiating themselves from the majority of GPTs that have no custom functionality. Overall, the research indicates room for improvement in the GPT ecosystem, particularly around function invocation and the development of more sophisticated applications beyond simple utility tools.
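The feedback API Gary suggests could be as small as a single HTTP endpoint that a GPT's custom action posts to. A minimal sketch using only the Python standard library is shown below; the `/feedback` path and the payload fields (`gpt_id`, `rating`) are illustrative assumptions, not anything OpenAI prescribes.

```python
# Minimal sketch of a feedback endpoint a custom GPT action could call.
# The route and payload shape are hypothetical examples.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

FEEDBACK = []  # in-memory store; a real service would persist this


class FeedbackHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/feedback":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        entry = json.loads(self.rfile.read(length))
        FEEDBACK.append(entry)
        body = json.dumps({"status": "received"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence default request logging
        pass


# To run standalone: HTTPServer(("127.0.0.1", 8080), FeedbackHandler).serve_forever()
```

From the GPT's side, the action would simply POST JSON feedback to this endpoint; the developer then has a direct channel to users that most GPTs lack.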
Discussing the importance of complex actions and custom data sources for GPT models: OpenAI is adding features like reviews to help users determine usefulness of GPTs, but concerns about cloning and mimicry persist. Focus is on creating more custom actions to enhance GPT capabilities, with better protections for instructions and documents in development.
A key takeaway from the OpenAI AMA discussion is the importance of more complex actions and custom data sources for GPTs. OpenAI's engineering team confirmed that they will be adding features like reviews to help users determine the usefulness of different GPTs. However, concerns were raised about the potential for cloning and mimicry of GPTs, with some users asking about protections for their instructions and documents. OpenAI acknowledged these concerns but suggested that custom actions that run on one's own machines are more defensible than prompts and documents, which can be disassembled and copied. Some users also asked about legitimate ways to build on top of existing GPT designs. OpenAI is working on better protections for instructions and documents, but the focus is on creating more custom actions to enhance GPT capabilities. The addition of reviews will give users valuable information for deciding which GPTs to use. Overall, the discussion highlights ongoing efforts to improve and protect the use of GPTs while encouraging more complex and customized applications.
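Custom GPT actions are declared with an OpenAPI schema describing the developer's own endpoints, which is why they are more defensible than prompts: the logic lives on the developer's server, not in the copyable GPT configuration. Below is a sketch of such a schema as a Python dict; the endpoint, `operationId`, and fields (`lookup_order`, `order_id`) are hypothetical examples.

```python
# Sketch of an OpenAPI schema defining a custom GPT action.
# The server URL, path, and operation are illustrative, not a real API.
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Order Lookup Action", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],
    "paths": {
        "/orders/{order_id}": {
            "get": {
                "operationId": "lookup_order",
                "summary": "Fetch the status of an order by ID",
                "parameters": [
                    {
                        "name": "order_id",
                        "in": "path",
                        "required": True,
                        "schema": {"type": "string"},
                    }
                ],
                "responses": {"200": {"description": "Order status payload"}},
            }
        }
    },
}
```

The model reads the `operationId` and `summary` to decide when to call the action, while the actual request handling stays on the developer's machine.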
Exploring ways to connect and expand GPTs: Developers are investigating remixing and interconnectivity between different GPTs, as well as the potential for embedding them outside of the platform, to enhance capabilities and utility.
Developers are actively exploring ways to connect and expand the capabilities of GPTs, including the potential for remixing and interconnectivity between different models, as well as the possibility of embedding GPTs outside of the platform. During a recent discussion, Eldon Tyrell asked about the possibility of building off of others' GPTs and refining models further. TurtleSoupy acknowledged the potential of remixing but mentioned privacy concerns and the decision to be conservative for now. Kate Yanchenka raised the idea of having a GPT call other GPTs for greater utility, which ComQuad Express acknowledged as a pain point and something they're trying to make headway on. The OpenAI team also confirmed the possibility of bringing GPTs out of the platform in the future. John Ride asked about the inner workings of GPTs, and TurtleSoupy explained that they consist of specialized knowledge, capabilities, and instructions. The team is working on building deeper into these areas, specifically around how functions are invoked, to help the ecosystem thrive. Overall, the discussion highlighted the continuous exploration of GPTs and their potential for interconnectivity and expansion.
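The function-invocation mechanism TurtleSoupy alludes to follows a common pattern: the model emits a structured call (a function name plus JSON-encoded arguments), and the host application routes it to a registered function. A minimal sketch of that routing layer, with a stubbed-out example tool, assuming nothing about OpenAI's internals:

```python
# Sketch of a tool-dispatch layer: the model emits {"name": ..., "arguments": ...}
# and the host routes it to a registered Python function. The registry and the
# get_weather tool are illustrative stand-ins.
import json

REGISTRY = {}


def tool(fn):
    """Register a plain Python function as an invokable tool."""
    REGISTRY[fn.__name__] = fn
    return fn


@tool
def get_weather(city: str) -> str:
    # Stub; a real tool would call an actual weather API.
    return f"Sunny in {city}"


def dispatch(tool_call: dict) -> str:
    """Route a model-emitted call to the matching registered function."""
    fn = REGISTRY[tool_call["name"]]
    kwargs = json.loads(tool_call["arguments"])
    return fn(**kwargs)


# A model might emit a call like this:
result = dispatch({"name": "get_weather", "arguments": '{"city": "Paris"}'})
```

A GPT calling another GPT, as Kate Yanchenka proposed, would amount to registering one model's invocation endpoint as a tool in another's registry.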
OpenAI's GPT models expansion plans: OpenAI is actively working to expand GPT models' capabilities beyond prompt-driven systems, aiming for agentic actions and instructions. The GPT store is an experiment for discovering use cases and rewarding effective models, part of a larger plan to push AI technology's boundaries.
The takeaway from this AMA with TurtleSoupy from OpenAI is that they are actively working to expand the capabilities of GPTs beyond the current prompt-driven systems. The long-term vision is to make these models more agentic, capable of more sophisticated actions and instructions. The GPT store is seen as a way to discover use cases and reward those who build effective models, even if the models' current capacity is limited. This is part of a larger plan to push the boundaries of AI technology, and the store may serve more as a grand experiment for OpenAI's future plans than as a utility for customers in the short term. In essence, the AMA highlighted that the current state of GPTs is just the beginning, and OpenAI is committed to advancing the technology further.