Logo
    Search

    #117 - Google’s Bard Rush, BloombergGPT, ChatGPT King, Balenciaga Harry Potter

    enApril 07, 2023

    Podcast Summary

    • Google shifts focus from Google Assistant to Bard AIGoogle is transforming Google Assistant into an expert system or AI-first chatbot, potentially competing with ChatGPT and Microsoft's investment in OpenAI

      There's been significant internal changes at Google regarding their AI projects, specifically with the shift of focus from Google Assistant to Bard AI. This shift was highlighted by an internal memo that revealed senior executives being moved from Google Assistant to Bard, suggesting a potential transformation of Google Assistant into an expert system with pre-programmed responses or an AI-first chatbot similar to ChatGPT. The memo also indicated Google's concern about competition from OpenAI's ChatGPT and Microsoft's investment in OpenAI. These personnel transfers serve as reliable, unhidable information, providing insight into Google's strategic direction in the AI field.

    • Google's race to integrate advanced AI systems poses new safety challengesGoogle's integration of advanced AI systems like Bard with older technology poses new safety challenges, as seen with Google Assistant, and the historical safety measures implemented by DeepMind may no longer be sufficient in the era of open AI systems.

      The race to develop and release advanced AI systems, as seen with Google's collaboration with DeepMind and the impact of ChatGPT's public release, is leading to a potential "race to the bottom" on safety measures. Historically, DeepMind, a Google subsidiary focused on AI development, had implemented safety measures to maintain control and independence in the Google acquisition. However, with the emergence of open AI systems, these measures may no longer be sufficient. Google Assistant, Google's Siri equivalent, is an example of older technology that relies on database querying and parsing, and its integration with more advanced AI systems like Bard poses new challenges and risks. This shift from expert systems to AI-first systems also increases the potential impact on real-world behavior. The collaboration between Google and DeepMind on the Gemini project is a sign of the significance and urgency of this development for Google. The parallel between DeepMind and OpenAI, which diverged in their approaches to AI development over time, further highlights the stakes. It remains to be seen how these companies will navigate the balance between innovation and safety in the rapidly evolving AI landscape.

    • Philosophical differences in AI safetyDeepMind and OpenAI hold contrasting views on AI safety, with OpenAI advocating for public exploration and DeepMind favoring caution. Google's ownership and Microsoft's partnership impact their priorities, with DeepMind focusing on research and safety, and OpenAI on scaling capabilities.

      While two major AI research labs, DeepMind and OpenAI, share a consensus on the potential risks of advanced AI systems, they hold vastly different philosophies regarding how to mitigate those risks. OpenAI advocates for releasing AI systems to the public for exploration and learning from real-world interactions, while DeepMind favors a more cautious approach, considering the possibility of a sudden phase transition leading to inherent danger. DeepMind's ownership by Google and OpenAI's partnership with Microsoft influence their respective priorities, with DeepMind focusing on internal cost savings and publishing extensive research on AI safety, while OpenAI has shifted its focus towards scaling capabilities and less transparent reporting. The blurring lines between safety and capability research and concerns over sharing safety knowledge pose challenges in this rapidly evolving field. Additionally, as AI becomes a commercial product, better AI safety can serve as a competitive advantage, leading to a paradoxical situation where companies may not openly share their safety research.

    • Robotics Investments Reach $1 Billion in 2023Robotics investments totaled $620 million in February 2023, bringing the yearly total to over $1 billion. Hardware-focused investors are attracted to robotics due to its longer lifespan and potential for revenue and profit, despite the longer timeline compared to software-oriented businesses.

      Despite the recent hype around language models and AI, robotics continues to be a significant area of investment. In February 2023, robotics investments totaled $620 million, with sectors including drones, data analytics, construction, and healthcare. This brings the total for the first few months of 2023 to over $1 billion. Although there can be noise in the industry, the investment activity is a healthy sign and counters the narrative that language models are taking all the attention away from robotics. The challenge for software-oriented products is their potential short lifespan due to rapid technological advancements. However, robotics offers a moat, making it a potentially attractive area for investors, especially those with a focus on hardware industries. Although there's less risk in robotics, it will take longer to achieve revenue and profit compared to software-oriented businesses. As some venture capital firms specialize in hardware industries, we can expect to continue seeing more and more robotics in the future.

    • AI and robotics reshape industries, requiring human intuition and US investmentAI integrates human skills in robotics, US leads robotics investments, Bing's chatbot displays ads in text responses

      The integration of artificial intelligence (AI) and robotics in various industries continues to evolve, with unexpected jobs and regions remaining relevant. For instance, Machino Labs' development of robotic systems for prototyping work showcases the need for human intuition and hand skills in the face of automation. Additionally, the US leading robotics investments may come as a surprise, considering China's manufacturing prowess. Regarding advertising, Microsoft's Bing chatbot now displays ads in a new, less intrusive way. Instead of traditional clickable ads, Bing integrates ad citations into text responses, providing answers and promoting brands simultaneously. This marks another fundamental change in online advertising, requiring careful consideration of user experience and interface design. Overall, these developments underscore the importance of staying informed about advancements in AI, robotics, and advertising to maintain a comprehensive understanding of the evolving tech landscape.

    • Over 35% of Y Combinator's winter class of 2023 focus on AI technologyY Combinator's winter class of 2023 sees over a third of startups utilizing AI technology, reflecting the growing impact and utility of these tools in various industries

      The integration of AI in startups is becoming increasingly prevalent and significant, as evidenced by the fact that over 35% of startups in Y Combinator's winter class of 2023 are focused on AI technology. This trend is not a result of hype alone, but rather a reflection of the genuine impact and utility these tools provide. The article highlights various AI-focused companies, including those in the field of generative AI, which can build useful applications quickly in areas such as finance, compliance, and domain-specific knowledge. The partnership between Y Combinator and OpenAI, and Y Combinator's history of backing solid, infrastructure-level startups, also plays a role in the prevalence of AI companies in the accelerator. The use of chatbots as a marketing tool raises questions about accuracy and trust, but it remains to be seen how this will be addressed in the coming months. Overall, the integration of AI in startups is a notable trend that is here to stay.

    • AI startups thrive at Y Combinator demo dayMicrosoft and Google compete in AI space, driving growth for AI startups like Replet, which gained 20M users and uses Google's heavy-duty AI models

      The use of AI is becoming increasingly prevalent and essential for various applications and industries, leading to rapid growth for AI-focused startups. This trend was evident at Y Combinator's demo day, where several AI companies reported impressive growth rates. Replet, a Y Combinator alumnus, is one such startup, which has gained 20 million users and is now in competition with Microsoft's GitHub and its co-pilot feature. Google is supporting Replet by providing access to heavy-duty AI models like Palm and Lambda. This competition between Microsoft and Google in the AI space is a good sign, leading to faster development and innovation for the benefit of users. The hype around AI is fueling this growth, making it an exciting time for both investors and startups in the AI sector.

    • Microsoft's acquisition of GitHub and partnership with Replet signal the strategic importance of AI in various industriesMicrosoft's acquisition of GitHub and partnership with Replet highlight the growing importance of AI in industries like finance and the potential for substantial revenue generation, while OpenAI's shift towards commercialization showcases the potential for significant profits in the industry.

      The landscape of AI is evolving rapidly, with companies like Microsoft, Google, and OpenAI making significant moves to capitalize on the growing demand for AI technologies. Microsoft's acquisition of GitHub and partnership with Replet highlights the strategic importance of these companies in the AI ecosystem, while OpenAI's shift towards commercialization showcases the potential for substantial revenue generation in the industry. Microsoft's acquisition of GitHub and partnership with Replet raises questions about the strategic risks Replet is taking by increasing its reliance on Google products and potential data sharing. This partnership differs from an acquisition in that Replet retains some level of independence, but the implications of this agreement are still uncertain. OpenAI's transformation into a for-profit company is another significant development, with the company now generating hundreds of millions of dollars in revenue through the sale of APIs and employing sales staff. This revenue allows OpenAI to self-finance its next generation of AI systems and may indicate that the company is approaching "economic take-off." These developments underscore the growing importance of AI in various industries, particularly finance, and the potential for substantial revenue generation. However, the implications of these moves for the broader AI ecosystem and the potential risks and benefits for individual companies remain to be seen.

    • Innovation in AI no longer lies in algorithms, but in dataLarge datasets drive innovation in AI, with GPTs demonstrating this in finance and other domains, while standardization and focus on safety, interpretability, and user interface become key areas of research.

      The use of generative pre-trained transformers (GPTs) in various domains, such as finance with Bloomberg GPT, demonstrates the power of large, curated datasets in driving innovation. The innovation no longer lies in the algorithms themselves, but in the data they are trained on. This is a shift from the past when machine learning papers were exciting due to algorithm breakthroughs. GPTs are able to perform well in both general purpose reasoning and specific domains, like finance, without losing their edge. This middle ground strategy is a departure from the traditional approach of creating specific models for each domain, like MedGBT. The emerging template for engineering systems is standardized, and the focus is now on deploying these systems and understanding their safety, interpretability, and user interface. This shift towards research on these aspects could be a positive development for academia, as it moves away from a focus on just improving performance benchmarks.

    • Transparency and User Experience in Large-Scale AI ResearchAs AI models reach astronomical price ranges, safety, interpretability, and user experience become crucial. Industry leaders like Bloomberg are setting the bar high with transparent reports, potentially attracting talent and instilling confidence in their technology.

      The intersection of academia and large-scale AI research is becoming more challenging as models reach the $200 million price range. Safety and interpretability are increasingly important, and user experience is gaining relevance. Bloomberg's recent detailed paper release on their AI model, GPD4, is a notable example of transparency in the industry, which may be a recruitment strategy or a sign of confidence in their superior technology. The 2023 AI index report from Stanford's Human AI Institute highlights the growing industry-led development of AI, as well as the increasing number of AI incidents and controversies. The report emphasizes the urgent need for policy responses to keep up with the rapidly evolving technology landscape.

    • AI and robotics legislation on the risePolicymakers are struggling to keep up with the rapid advancements in AI and robotics, resulting in an increase in AI-related bills and legal cases.

      As technology advances, particularly in the realm of AI and robotics, policymakers are struggling to keep up. The number of AI-related bills proposed in the United States has seen a significant increase in the past five to six years, with nine of them passing in 2022. This trend is reflected in the increasing number of AI-related legal cases, which reached over 100 in 2022. The rapid advancement of technology, such as the ability to train quadruped robots to perform manipulation tasks in addition to walking, is pushing the boundaries of what was previously thought possible. This research represents a middle ground between traditional coding and AI training, with humans identifying distinct sub-problems for the AI to solve. However, it is likely that these sub-tasks will eventually be subsumed into one single, large network. This interconnectedness of advancements in AI, robotics, and policy is leading to a future where these technologies will be integrated into a hierarchical system. The ease of access to information and the readability of reports on these advancements make it essential for policymakers to stay informed and adapt to these changes.

    • AI Advancements in Video GenerationNew AI models could generate high-resolution videos indistinguishable from reality in a few years, but ethical concerns and limitations persist.

      We are on the brink of a new era in AI technology, with advancements in video generation expected to follow text and image generation. The New York Times reported on this trend, mentioning the new "runway model" that could produce high-resolution videos indistinguishable from reality in a few years. However, the quality of video generation is currently not as advanced as images, and there are ethical concerns regarding the potential misuse of AI in elections. A study by students at BYU found that GPT-3, an AI model, could accurately predict survey responses based on demographics, raising questions about the extent of AI's ability to mimic human behavior and reasoning. This discovery has philosophical implications, as it challenges our understanding of human rationality and political decision-making. Overall, these developments highlight the need for continued ethical discussions and safeguards as AI technology advances.

    • AI models using other AI models for tasksA new paradigm in AI technology sees models using other models to complete tasks, raising questions about system architecture and societal implications.

      We are witnessing an evolution in AI technology, where models are now using other models to complete tasks. This was exemplified in a recent paper from Hugging Face, which demonstrated a language model like ChatGPT deciding which AI models from the Hugging Face repository to use to answer a user query. This development raises questions about the architectural evolution of AI, with potential variations ranging from a system of interconnected models to one large, all-encompassing model. The paper comes at a time when the use of AI models is on everyone's mind, following the launch of OpenAI's ChatGPT API just two weeks prior. This new paradigm also brings up intriguing design questions regarding fault tracing and responsibility in these complex systems. From a societal perspective, the first story discussed the background of Sam Altman, the CEO of OpenAI, and the history of OpenAI and its flagship product, ChatGPT. The article provided valuable context for those unfamiliar with these developments. The second story highlighted the potential concerns and implications of these advanced AI models, with the CEO of OpenAI, Chad GPT King, expressing his confidence in the technology while acknowledging the validity of others' concerns. Overall, these developments underscore the importance of ongoing dialogue and careful consideration of the societal and ethical implications of AI technology.

    • Sam Altman's Unique Leadership in AISam Altman, OpenAI's leader, holds no equity, prioritizing mission over personal gain. Sam's dual beliefs about AI's impact and childhood experiences shape his perspective. EU's AI Act aims to be the most comprehensive regulation, setting new standards.

      Sam Altman, the co-founder of OpenAI, holds no equity in the company he leads, reflecting his deep commitment to the mission rather than personal financial gain. The article also explores Altman's duality, as he holds seemingly contradictory beliefs about AI's potential impact on the world. The piece also delves into Altman's childhood experiences and reveals his unique perspective, drawing comparisons between OpenAI and historical projects of grand ambition. The European Union's Artificial Intelligence Act, another topic discussed, aims to be the most comprehensive AI regulation to date, surpassing efforts by other countries. These topics offer intriguing insights into the world of AI and its leaders.

    • EU's New AI Legislation: Defining Terms and Implementing RegulationsThe EU is working on new AI legislation to categorize and regulate different applications based on risk level, potentially influencing global policies. However, defining AI and measuring risk levels may prove challenging.

      The European Union is working on passing legislation, known as the Act on Artificial Intelligence (AI), which aims to categorize different applications of AI based on their risk level and impose corresponding regulations. This legislation is expected to have a significant impact on the tech industry, as the EU's regulations often influence global policies due to the "Brussels Effect." However, defining the terms and implementing the regulations may prove challenging, as there are debates over what constitutes AI, how to measure risk, and how to categorize different applications. Despite these challenges, many experts believe that some form of regulation is necessary to prevent potential negative consequences of unchecked AI development. The US, which currently has limited AI regulations, may follow the EU's lead in establishing a regulatory framework for AI.

    • Unintended consequences and potential abuse in text-to-image AI servicesThe lack of regulation and clear policies in the AI industry, particularly in text-to-image services, can lead to unintended consequences and potential abuse. Companies must consider the philosophical implications of their technology and the potential impact on society as they grow.

      The lack of regulation and clear policies in the rapidly growing AI industry, particularly in text-to-image services, can lead to unintended consequences and potential abuse. This was highlighted in a Washington Post article about Stability AI, a small company that makes one of the leading text-to-image services, which has had to discontinue its free tier due to abuse and viral images generating controversy. The company's founder mentioned minimizing drama and avoiding political satire in China as reasons for certain restrictions. However, the article also noted that the company's small size and lack of experience in dealing with the political realities of being a large company with a global footprint may have contributed to these issues. The FTC is now reviewing competition in AI, acknowledging the importance of addressing these concerns as the industry continues to grow at an unprecedented rate. It's crucial for companies to consider the philosophical implications of their technology and the potential impact on society as they grow and make decisions regarding what is allowed and not allowed.

    • Balancing Accessibility and Risks in AIWhile promoting competition and democratization of AI is beneficial, it's crucial to address potential risks, including catastrophic accidents and alignment, and prevent proliferation and address security concerns.

      While the promotion of competition and democratization of AI is beneficial for accessibility and cost reduction, it may not be the best scenario when considering potential risks, especially those related to catastrophic accidents and alignment. The FTC's perspective is understandable, as they prioritize consumer protection and affordability. However, there are other dimensions to this issue, such as the need for preventing proliferation and addressing security concerns. These different perspectives may lead to conflicts between various departments and agencies. Additionally, the growing capabilities and proliferation of AI models, along with the decreasing cost to manipulate them, introduce new challenges. For instance, chatbots can be easily manipulated through jailbreaking, phishing, hidden prompt injection, and data poisoning. These vulnerabilities can lead to malicious use and accident risks. It's essential to recognize these dependencies and consider the potential consequences as we continue to build our brave new world with increasingly advanced AI systems.

    • AI's integration into society: benefits and concernsAI models like Chad GPT are transforming education and communication, but also raise concerns about phishing and security. Over half of K-12 teachers use AI daily, and its implications are vast and far-reaching.

      The integration of AI models like Chad GPT into various aspects of society, such as education and communication, is leading to significant changes and raising important questions. The potential impact on phishing and education is a major concern, with the possibility of foreign scammers using advanced AI models to deceive people and cause harm. However, there are also potential benefits, such as personalized education and making teachers' lives easier. The rapid advancement of AI technology means that we must be prepared to adapt and evolve in response, and the new normal may involve constantly learning and adapting to new tools. According to a survey, more than half of K-12 teachers have used Chad GPT, and many use it daily. Chinese creators are even using AI to generate retro-urban photography. The implications of these developments are vast and far-reaching, and it's clear that we must approach them with a flexible and open-minded attitude. The dust may not have time to settle before the next big thing comes along.

    • Data's influence on creative process: AI-generated art and entertainmentArtists are using AI to blend human creativity and machine learning, generating unique images and videos. This raises questions about the boundary between human and machine-generated content.

      Data is increasingly influencing the creative process, from art to fashion, as seen in the examples discussed. In the first instance, artists are using AI models to generate images based on historical data, creating a unique blend of human creativity and machine learning. This was evident in the use of Mid-Journey to help a street photographer recreate memories of his childhood in China. In another instance, a YouTuber used AI to create a deep fake video of Harry Potter characters in a Balenciaga fashion show, demonstrating the potential for AI to generate new and entertaining content by combining existing ideas. The uncanny realism of these creations raises questions about the boundary between human and machine-generated art, and whether the novelty lies in the idea or the execution. Overall, these examples highlight the growing role of data and AI in shaping creative output.

    • Support the podcast by sharing and leaving a reviewSharing and leaving a review on Apple Podcasts helps spread the word and support the creators

      Supporting the podcast you enjoy is simple and impactful. Sharing it with your network, whether that's friends, family, or social media, can help spread the word and reach new listeners. Leaving a review on Apple Podcasts is another way to provide valuable feedback and help the creators improve. By engaging in these actions, you're contributing to the podcast's growth and success. Remember, every share and review counts, so don't hesitate to take a moment to support the content you appreciate. And, of course, be sure to tune in next week for another exciting episode.

    Recent Episodes from Last Week in AI

    #171 - - Apple Intelligence, Dream Machine, SSI Inc

    #171 - - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    enJune 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    enJune 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 168th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    enJune 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    enMay 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    enMay 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    enMay 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    enMay 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    enApril 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apology for this one coming out a few days late, got delayed in editing it -Andrey

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    enApril 24, 2024

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    #162 - Udio Song AI, TPU v5, Mixtral 8x22, Mixture-of-Depths, Musicians sign open letter

    Our 162nd episode with a summary and discussion of last week's big AI news!

    Read out our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    enApril 15, 2024

    Related Episodes

    Hinton and Bengio on Managing AI Risks

    Hinton and Bengio on Managing AI Risks
    A group of scientists, academics and researchers has released a new framework on managing AI risks. NLW explores whether we're moving to more specific policy proposals. Today's Sponsors: Listen to the chart-topping podcast 'web3 with a16z crypto' wherever you get your podcasts or here: https://link.chtbl.com/xz5kFVEK?sid=AIBreakdown  ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/

    #158 Connor Leahy: The Unspoken Risks of Centralizing AI Power

    #158 Connor Leahy: The Unspoken Risks of Centralizing AI Power

    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

    Download NetSuite’s popular KPI Checklist, designed to give you consistently excellent performance - absolutely free at NetSuite.com/EYEONAI

     

    On episode 158 of Eye on AI, host Craig Smith dives deep into the world of AI safety, governance, and open-source dilemmas with Connor Leahy, CEO of Conjecture, an AI company specializing in AI safety.

    Connor, known for his pioneering work in open-source large language models, shares his views on the monopolization of AI technology and the risks of keeping such powerful technology in the hands of a few.

    The episode starts with a discussion on the dangers of centralizing AI power, reflecting on OpenAI's situation and the broader implications for AI governance. Connor draws parallels with historical examples, emphasizing the need for widespread governance and responsible AI development. He highlights the importance of creating AI architectures that are understandable and controllable, discussing the challenges in ensuring AI safety in a rapidly evolving field.

    We also explore the complexities of AI ethics, touching upon the necessity of policy and regulation in shaping AI's future. We discuss the potential of AI systems, the importance of public understanding and involvement in AI governance, and the role of governments in regulating AI development.

    The episode concludes with a thought-provoking reflection on the future of AI and its impact on society, economy, and politics. Connor urges the need for careful consideration and action in the face of AI's unprecedented capabilities, advocating for a more cautious approach to AI development.

    Remember to leave a 5-star rating on Spotify and a review on Apple Podcasts if you enjoyed this podcast.

     

    Stay Updated:  

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

     

    (00:00) Preview

    (00:25) Netsuite by Oracle

    (02:42) Introducing Connor Leahy

    (06:35) The Mayak Facility: A Historical Parallel

    (13:39) Open Source AI: Safety and Risks

    (19:31) Flaws of Self-Regulation in AI

    (24:30) Connor’s Policy Proposals for AI

    (31:02) Implementing a Kill Switch in AI Systems

    (33:39) The Role of Public Opinion and Policy in AI

    (41:00) AI Agents and the Risk of Disinformation

    (49:26) Survivorship Bias and AI Risks

    (52:43) A Hopeful Outlook on AI and Society

    (57:08) Closing Remarks and A word From Our Sponsors

     

    #118 - Anthropic vs OpenAI, AutoGPT, RL at Scale, AI Safety, Memeworthy AI Videos

    #118 - Anthropic vs OpenAI, AutoGPT, RL at Scale, AI Safety, Memeworthy AI Videos

    Our 118th episode with a summary and discussion of last week's big AI news!

    Check out Jeremie's new book Quantum Physics Made Me Do It

    Read out our text newsletter at https://lastweekin.ai/

    Stories this week:

    Episode 431- Eric Brown of Microsoft , University of Maryland Phd Candidate Adira Colton , Extended Reality, Robotics, and Martial Arts

    Episode 431- Eric Brown of Microsoft , University of Maryland Phd Candidate Adira Colton , Extended Reality, Robotics, and Martial Arts

    This is episode 431 of the Fun with the Maryland STEM Festival podcast. The podcast where you meet adults and students doing interesting and fun STEM activities in Maryland. This episode includes interviews with Eric Brown of Microsoft recorded on April 20th and University of Maryland Mechanical Engineering Student Adira Colton recorded on April 26th. Eric will talk about his working supporter customers using AR and IR. Adira discusses her NSF Fellowship and her work in robotics and additive manufacturing

    Why Everyone is Angry About the AI Safety Board

    Why Everyone is Angry About the AI Safety Board
    Explore the widespread backlash against the newly established Artificial Intelligence Safety and Security Board by the Department of Homeland Security in this episode of AI Breakdown. This board has drawn criticism from various factions within the AI community, from safety advocates to accelerationists, all of whom are concerned about the potential for regulatory capture and the board's composition. ** Join NLW's May Cohort on Superintelligent. Use code nlwmay for 25% off your first month and to join the special learning group. https://besuper.ai/ ** Consensus 2024 is happening May 29-31 in Austin, Texas. This year marks the tenth annual Consensus, making it the largest and longest-running event dedicated to all sides of crypto, blockchain and Web3. Use code AIBREAKDOWN to get 15% off your pass at https://go.coindesk.com/43SWugo  ** ABOUT THE AI BREAKDOWN The AI Breakdown helps you understand the most important news and discussions in AI.  Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown Join the community: bit.ly/aibreakdown Learn more: http://breakdown.network/