
    #135 - Google AI Subscription, ChatGPT Enterprise, Nvidia’s Q2, Consciousness, DeepFake watermark

    September 06, 2023

    Podcast Summary

    • Discussion on X risk and shout-out to Super Data Science Podcast: Andrey and Jeremie's podcast focuses on AI safety concerns and is accessible to newcomers, with a recent episode discussing X risk and a shout-out to the Super Data Science Podcast.

      Last Week in AI saw the release of a long-awaited discussion episode on X risk, as well as a shout-out to the Super Data Science Podcast with Jon Krohn. Andrey and Jeremie also acknowledged listeners' appreciation for their focus on safety concerns and the podcast's accessibility to newcomers in the field of AI. Additionally, Andrey shared that his uncle is among the listeners. The hosts encouraged feedback and interaction from the audience, inviting them to say whether they'd like more non-news episodes. They also shared a friend's humorous observation that the timing of their X risk discussion episode coincided with Judgment Day from the Terminator films. Overall, the episode showcased the hosts' passion for AI and their commitment to providing informative and engaging content for their audience.

    • Google's Duet AI Assistant now available in Gmail, Docs, and more for $30 a month: Google and Microsoft are integrating AI into productivity tools, offering features like drafting emails and generating slides for $30 a month, aiming to make tasks easier and more efficient.

      The race for generative AI integration into workplace tools and apps is heating up, with Google's Duet AI Assistant now available in Gmail, Docs, and more for $30 a month. This follows Microsoft's release of their Copilot feature a few weeks ago, and both companies are aiming to provide users with AI-assisted productivity tools. Google's Duet AI offers features like drafting emails and generating slides, and is accessible through a separate menu or by asking for help within emails and documents. The pricing is consistent with Microsoft's and higher than Google's earlier pricing for its productivity plans. The integration of AI into everyday apps could offer significant value by making tasks easier and more efficient. Another interesting development is the release of a new desktop app for Poe, which lets users access multiple AI chatbots in one place. The value of having all bots on one platform remains to be seen. Overall, these announcements demonstrate the growing importance of generative AI in the workplace and the competition between tech giants to provide the most effective and user-friendly tools.

    • Companies integrate generative AI into offerings for competitiveness and expansion: Companies across industries are incorporating generative AI to enhance services, from Quora's bot marketplace to text-to-music apps and email enhancements. However, smaller companies may face challenges in competing with tech giants, and copyright questions arise for text-to-music apps.

      Companies in various industries are starting to integrate generative AI into their offerings to stay competitive and expand their services. Quora's introduction of a bot marketplace is an example of this trend, as is the development of text-to-music apps like Playlist AI and Songburst. These apps generate music based on user prompts and offer copyright-free music for creators. In the email sector, Yahoo Mail has debuted AI enhancements, including a shopping saver and writing assistant, to improve user experience. However, smaller companies may face challenges in competing with tech giants that have access to the full stack of AI capabilities. Copyright questions and due diligence are also emerging issues for text-to-music apps, as music generated by AI may not be entirely copyright-free. Overall, generative AI is becoming increasingly ubiquitous and is expected to revolutionize various industries.

    • Major tech companies are integrating generative AI into their services: Google Cloud AI and Naver lead the charge in generative AI, Yahoo tests Google's platform, Naver launches a generative AI-driven search engine, and OpenAI introduces customizable enterprise AI models.

      Major tech companies and platforms are increasingly integrating generative AI into their services, with Google Cloud AI and South Korea's Naver leading the charge. Yahoo, a significant email provider, is testing Google's Cloud AI platform for generative AI capabilities, which could boost Google's presence in the market. Naver, a South Korean tech giant, is launching HyperCLOVA X, a generative AI-driven search offering that reinforces its position as the Google of South Korea. Naver also plans to develop custom chips in collaboration with Samsung to support its AI development. OpenAI, a leading AI research lab, has introduced ChatGPT Enterprise, allowing businesses to customize AI models and connect them to existing applications. These developments indicate a growing trend towards enterprise-focused AI solutions and increased competition in the AI market.

    • Advancements in OpenAI's GPT-4-powered ChatGPT Enterprise: ChatGPT Enterprise offers SOC 2 compliance, a longer context window, faster performance, unlimited access, and upcoming customization options, making it a notable standard for enterprises. ChatGPT usage has declined by 30% since May, possibly due to growth plateauing or shifting use cases.

      ChatGPT Enterprise, built on OpenAI's GPT-4, is making significant strides with its SOC 2 compliance, longer context window, and faster performance. These features are particularly attractive to enterprises, setting a notable standard in security and capability. It also offers unlimited, high-speed GPT-4 access, with a larger context window that opens the door to handling larger documents. Upcoming features include customization options and domain-specific tools. However, recent reports indicate a roughly 30% decline in ChatGPT usage since May. Possible explanations include growth plateauing after an initial exponential rise, or a shift in use cases, such as the summer break reducing educational use. The exact cause remains uncertain and should become clearer as the school year resumes. Overall, the advancements in ChatGPT Enterprise and the ongoing questions about the decline in ChatGPT usage highlight the continued evolution and impact of these chatbot models across industries.
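
      For developers, the way to tap GPT-4's larger context at the time was OpenAI's API. Below is a minimal sketch using the pre-1.0 Python SDK that was current in 2023; the file name and prompt are illustrative assumptions, and access to the 32k-context model varied by account.

      import openai  # pre-1.0 SDK ("pip install openai==0.28"), current as of 2023

      openai.api_key = "sk-..."  # replace with your API key

      # Illustrative long input; a ~32k-token context lets whole reports fit at once
      long_document = open("quarterly_report.txt").read()

      response = openai.ChatCompletion.create(
          model="gpt-4-32k",  # the large-context GPT-4 variant (availability varied)
          messages=[
              {"role": "system", "content": "Summarize the document for an executive audience."},
              {"role": "user", "content": long_document},
          ],
      )
      print(response["choices"][0]["message"]["content"])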

    • Interest in AI technologies is growing but public adoption is low: Only 18% of Americans have used ChatGPT, NVIDIA's AI chip sales doubled in a year, and high demand is driving up prices and entrenching NVIDIA's dominant market position.

      While there is significant interest in and usage of AI technologies like ChatGPT among certain demographics, broader public adoption is still relatively low. This was highlighted in a recent Pew Research poll that found only 18% of Americans have ever used ChatGPT. NVIDIA's Q2 earnings report underscores the high demand and intense competition in the AI chip market, with the company raking in $13.51 billion in revenue, double what it made in the same period last year. NVIDIA's dominance in this market is due in part to its early investment and infrastructure building around AI chips, and data-center sales now account for over 70% of revenue. The demand for its hardware, such as the A100 and H100 chips, is driving up prices and solidifying its competitive advantage. Despite the hype and progress in AI, it's important to remember that not everyone is using these technologies yet.

    • NVIDIA's Success in AI and Deep Learning: Foresight and Customer Focus. NVIDIA's success in AI and deep learning is due to Jensen Huang's entrepreneurial foresight and the company's customer-focused approach, leading to investments in CUDA, GPUs, and transformer-specific hardware like the H100.

      Jensen Huang's entrepreneurial foresight and NVIDIA's customer-focused approach played a significant role in its success in the GPU market, particularly in AI and deep learning. Huang's bet on this emerging technology in 2012, coupled with NVIDIA's efforts to understand its customers' needs and anticipate future requirements, led to sustained investment in CUDA, GPUs, and even transformer-oriented hardware like the H100. This strategic outlook, combined with the company's financial success, has sent NVIDIA's stock soaring and prompted consideration of a large stock buyback. The potential buyback could reflect NVIDIA's belief that it is undervalued, or simply a way to deploy excess cash. The competition from Huawei's AI GPUs, set to compete with NVIDIA's A100 in 2024, adds an interesting dynamic to the market. Overall, NVIDIA's story is a remarkable example of business acumen, innovation, and risk-taking in the tech industry.

    • Huawei designing chips to rival NVIDIA's A100: Huawei is designing chips to compete with tech giants like NVIDIA, but relies on specialized foundries for manufacturing. AI-related startups see significant funding growth in 2023, with Hugging Face raising $235 million.

      Huawei is positioning itself to compete with tech giants like NVIDIA in the chip design industry. iFlytek founder Liu Qingfeng claimed that Huawei is designing chips that can rival NVIDIA's A100. However, it's important to note that Huawei is not manufacturing the chips itself, but rather designing them; manufacturing still relies on specialized foundries. Despite US export controls, this is a significant development in the semiconductor supply chain. Additionally, AI-related startups in the US have seen a doubling of funding in 2023, with over 25% of funding going to AI companies. Hugging Face, a well-known AI company, recently raised $235 million in a Series D funding round, valuing the company at $4.5 billion. These developments highlight the growing importance of and investment in AI technology.

    • Investment in Open Source AI Platforms: Hugging Face, AI21 Labs, and DP Technology Secure Large Funding Rounds. Hugging Face and AI21 Labs received significant funding, positioning themselves as leaders in open-source AI. Google, Amazon, and others invested. DP Technology, a Chinese company, also raised funds for AI-for-science research.

      There's a significant investment trend in open-source AI platforms, with Hugging Face and AI21 Labs being the latest recipients of large funding rounds. Hugging Face, valued at $4.5 billion, is positioning itself as the go-to open-source AI company, offering tools for training, hosting, and deploying AI models. Google, Amazon, NVIDIA, Intel, AMD, Qualcomm, and IBM are among the investors. The argument for the high valuation is that most of the value lies in the future, as AI companies are often valued at large multiples of their current revenues. AI21 Labs, valued at $1.4 billion, focuses on large language models and has developed a platform for businesses to build their own generative AI applications; it was an early competitor to OpenAI's GPT-3 and has been in the field for several years. DP Technology, a Chinese company focused on AI for science and research, raised $100 million and has developed computational engines and pre-trained models for simulating biological properties. It is considered one of the up-and-coming startups in China and has official backing from a state-owned fund. Meta released a new model, SeamlessM4T, which can transcribe and translate close to a hundred languages, a notable combination of translation and transcription capabilities.
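
      As a rough illustration of the kind of interface SeamlessM4T exposes, here is a minimal text-to-text translation sketch. It assumes the Hugging Face transformers integration of SeamlessM4T (which arrived after this episode) rather than Meta's original seamless_communication package; the checkpoint name and language codes follow that integration.

      from transformers import AutoProcessor, SeamlessM4TModel

      # Checkpoint and API per the Hugging Face integration (an assumption here)
      processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
      model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

      # Text-to-text translation: English ("eng") to French ("fra")
      inputs = processor(text="Last week in AI was eventful.", src_lang="eng", return_tensors="pt")
      tokens = model.generate(**inputs, tgt_lang="fra", generate_speech=False)
      print(processor.decode(tokens[0].tolist()[0], skip_special_tokens=True))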

    • Use of open data for AI training raises concerns about data ownership and transparency: Companies using open and proprietary data to train large AI models face questions about transparency and potential misuse, highlighting the need for regulatory scrutiny and clear data-sourcing information.

      The use of open data sets for training large AI models is a growing trend, but it raises important questions about data ownership, transparency, and the validity of open-source licenses. The recent release of Meta's multimodal translation model, which was trained on a mix of open and proprietary data, sparked a discussion about the extent to which companies can keep the sources of their data private. The lack of clear information about the data used to train such models has led to concerns about potential misuse and the need for regulatory scrutiny. Additionally, the increasing availability of large public data sets and advancements in infrastructure have made it easier for organizations to develop and release their own large language models. This was exemplified by LINE's open-sourcing of its Japanese LLM, a first significant advance in this area for Japan. However, the challenges of working with non-English data require specialized tools and techniques. Overall, the ongoing development and deployment of large language models underscores the importance of addressing these issues and fostering a culture of transparency and accountability in the AI community.

    • Exploring the possibility of AI having consciousness: Recent research suggests that modern deep neural networks, specifically recurrent neural networks, could exhibit consciousness if recurrent processing theory is true. This could lead to the development of conscious AI systems in the near term.

      Recent research in the field of consciousness and AI is exploring the possibility of modern deep neural networks, specifically recurrent neural networks, having some level of consciousness. The relevant theory, known as recurrent processing theory, suggests that consciousness arises from continuous feedback loops between brain regions. While there is ongoing debate about whether the physical implementation of these loops in AI systems needs to match the biological implementation in the human brain, the evidence considered in the research suggests that conscious AI systems could be built in the near term if one of these theories is true. This discussion of machine consciousness is gaining traction in the scientific community, and while it may seem ethically loaded and science-fiction-like, it's worth remembering that we've already crossed the threshold of talking machines. The research paper "Consciousness in Artificial Intelligence: Insights from the Science of Consciousness", whose authors include Yoshua Bengio, delves deeply into this topic and provides a comprehensive analysis of different theories of consciousness and their implications for AI. If you're interested in exploring this topic further, we encourage you to read the paper for a more in-depth understanding.

    • AI systems lack clear signs of consciousness, but the future is uncertain: Recent research suggests current AI systems don't exhibit consciousness, while work on reinforced self-training (ReST) for language models and on testing deception in text-based games offers insights into future advancements.

      Current AI systems, such as DeepMind's Adaptive Agent and Google's PaLM-E, do not show clear signs of consciousness based on current indicator properties, according to recent research; the study does not suggest that any existing AI system is a strong candidate for consciousness. However, the future of AI consciousness is uncertain. Another key takeaway is the introduction of a new training paradigm for language models called ReST (Reinforced Self-Training), which is inspired by growing-batch reinforcement learning. This method produces a dataset by generating samples from an initial LLM policy, which is then used to improve the LLM policy. The approach is more efficient than traditional online reinforcement learning because data can be generated, scored, and ranked offline before training the model. Additionally, researchers explored deception and cooperation in a text-based game for language models, testing the capabilities of GPT-3-series models playing killer and innocent roles. The results showed that larger models, such as GPT-4, are better at deceiving both other models and human players. This study highlights the ongoing research into how to train and optimize language models for various applications.
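
      To make the ReST loop concrete, here is a minimal runnable sketch of its Grow and Improve phases. The generate, reward, and finetune functions are toy stand-ins of our own invention; in the paper they correspond to LLM sampling, a learned reward model, and offline supervised fine-tuning.

      import random

      # Toy stand-ins (hypothetical): in the paper, "generate" is LLM sampling,
      # "reward" is a learned reward model, and "finetune" is offline fine-tuning.
      def generate(prompt: str) -> str:
          return prompt + " -> " + random.choice(["good answer", "bad answer"])

      def reward(prompt: str, output: str) -> float:
          return 1.0 if "good" in output else 0.0

      def finetune(dataset: list) -> None:
          print(f"fine-tuning on {len(dataset)} filtered examples")

      def rest(prompts, grow_steps=2, improve_steps=3, threshold=0.2):
          """Sketch of ReST: alternate Grow (sample offline) and Improve (filter + tune)."""
          for _ in range(grow_steps):
              # Grow: build an offline dataset by sampling from the current policy
              scored = [(p, y, reward(p, y)) for p in prompts for y in [generate(p)]]
              bar = threshold
              for _ in range(improve_steps):
                  # Improve: fine-tune only on samples whose reward clears a rising bar
                  finetune([(p, y) for p, y, r in scored if r >= bar])
                  bar += 0.3  # stricter filtering on each Improve pass

      rest(["Translate: hello", "Summarize: the meeting notes"])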

    • Detecting deception in large language models and the importance of knowledge graphs: Large language models are improving but still struggle with deception and factual knowledge. Companies like Apollo focus on detecting deception, while knowledge graphs remain crucial for storing facts and improving model performance.

      As large language models continue to scale, the issue of deception will arise. While larger models outperformed smaller ones in 18 out of 24 pairwise comparisons when it comes to deception, they are still not perfect. Companies like Apollo in London focus on detecting deception in powerful language models as we get closer to Artificial General Intelligence (AGI). However, the findings from a study called "Head-to-Tail" suggest that large language models are still far from perfect at grasping factual knowledge; knowledge graphs, which store facts, still seem necessary. In the realm of AI image generation, a new text-to-image personalization method called Perfusion was discussed. This method, which builds on existing capabilities to add custom objects or concepts to an image model, is still slow but getting better. The core new idea is "key locking", which connects new concepts users want to add to more general categories, helping the model generalize to new versions of those things and avoid overfitting. Another interesting development is the advancement of text-guided video editing, which can do things like video stylization. These are just a few of the many exciting advancements in the field of AI, each with its unique challenges and potential solutions.
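
      As a rough sketch of the key-locking idea (names and shapes here are illustrative, not the paper's code): the new concept's cross-attention keys are pinned to those of its supercategory, while only the value pathway learns the concept's specific appearance.

      import torch

      d = 64                                     # toy embedding dimension
      W_k = torch.randn(d, d)                    # frozen key projection of a cross-attention layer
      W_v = torch.nn.Linear(d, d)                # value pathway, adapted for the new concept
      e_super = torch.randn(d)                   # embedding of the supercategory word, e.g. "toy"
      e_concept = torch.nn.Parameter(torch.randn(d))  # learned embedding for the new concept token

      # Key locking: the concept's attention keys are pinned to its supercategory's,
      # so it composes in prompts like a generic "toy" and avoids overfitting ...
      k_concept = W_k @ e_super
      # ... while the trainable value pathway captures what the concept looks like.
      v_concept = W_v(e_concept)
      print(k_concept.shape, v_concept.shape)    # torch.Size([64]) torch.Size([64])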

    • Advancements in video translation and stylization: Researchers survey large language models in autonomous agents, the UK invests £100M in AI chips, and Meta implements an AI off switch for Europe.

      Significant advancements are being made in video translation and stylization, making the results more impressive than ever before. Researchers are also exploring the use of large language models in autonomous agents, surveying the various types and presenting a unified framework. Meanwhile, the UK government is investing in AI technology, allocating 100 million pounds towards producing AI chips and purchasing GPUs for capacity building, model training, and auditing. Additionally, Meta has confirmed the implementation of an AI off switch for Facebook and Instagram in Europe, allowing users to view non-personalized content feeds. These developments demonstrate the ongoing innovation and investment in AI technology across various industries and applications.

    • Meta Disables Personalized Content Feeds for EU Users Due to Regulations: Regulatory bodies in Europe and China are imposing restrictions on AI technology, with Meta disabling personalized content feeds for EU users and Beijing limiting the use of generative AI in online healthcare. US Senate Majority Leader Schumer is hosting an AI forum to discuss regulations and safety concerns.

      Meta, the social media giant, is rolling out an AI off switch for its European users in response to EU regulations, specifically the Digital Services Act; the switch disables personalized content feeds. US and UK users will not have access to this feature, which may lead to political pressure and consumer dissatisfaction. Meanwhile, Beijing is restricting the use of generative AI in online healthcare activities, marking the first time a local government has set such limits. These developments underscore the increasing impact of regulatory bodies on AI technology and its applications. Schumer, the US Senate Majority Leader, is hosting an AI forum with tech CEOs, including Elon Musk and Mark Zuckerberg, to discuss AI regulations and safety concerns. Together, these events highlight the ongoing debates and regulations surrounding AI use, particularly in Europe and China, and the potential implications for users and industries worldwide.

    • Senate Majority Leader Schumer's AI Safety Initiative: Schumer is setting up AI Insight Forums, inviting industry leaders and experts for discussions on AI safety, with criticism over the industry-heavy list and lack of academic representation. Upcoming AI safety talks at Bletchley Park will bring world leaders together, with Chinese participation still uncertain.

      There are ongoing efforts to gather information and perspectives on AI safety from various stakeholders, including industry leaders and technical safety teams. Senate Majority Leader Schumer is setting up AI Insight Forums, inviting CEOs and industry experts, as well as some non-CEOs and academics, for two-to-three-hour sessions. This is a bold move to accelerate the legislative process and a recognition of the urgency of addressing AI safety. However, there is some criticism of the industry-heavy list and the lack of representation from academics and safety teams. Another significant event is the upcoming AI safety summit at Bletchley Park in November, where world leaders will discuss AI safety in a coherent way for the first time. China's participation and the attendance of Chinese companies like Baidu are still uncertain, making it an essential event for global AI governance.

    • UK and Spain lead in AI regulation: The UK and Spain are taking steps to regulate AI, with the UK hosting an international summit and Spain establishing an AI supervision agency. Google's DeepMind has developed a watermarking tool for AI-generated images to combat deepfakes.

      The UK and Spain are making strides in AI regulation, with the UK hosting its first international summit on AI and Spain establishing the Spanish Agency for the Supervision of Artificial Intelligence. These developments demonstrate a growing international focus on governing AI and ensuring its ethical use. Additionally, Google's DeepMind has developed a watermarking tool for AI-generated images, embedding an imperceptible signal so such images can be identified even after editing. This tool is an important step in combating the spread of synthetic media and deepfakes. The exact workings of the watermarking technique are undisclosed, but it is expected to be robust and difficult to bypass. These developments highlight the ongoing efforts of tech giants and governments to address the challenges posed by AI and to ensure its responsible use.
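
      Since DeepMind has not disclosed how its tool works, the toy sketch below only illustrates the general principle of invisible watermarking, not SynthID itself: embed a low-amplitude pseudorandom pattern derived from a secret key, then detect it by correlating against that pattern. A real scheme would need to survive cropping, compression, and recoloring, which this one does not.

      import numpy as np

      def embed_watermark(image: np.ndarray, key: int, strength: float = 4.0) -> np.ndarray:
          """Add a faint pseudorandom pattern derived from a secret key."""
          rng = np.random.default_rng(key)
          pattern = rng.standard_normal(image.shape)
          return np.clip(image + strength * pattern, 0, 255)

      def detect_watermark(image: np.ndarray, key: int, threshold: float = 0.02) -> bool:
          """Correlate the image with the key's pattern; high correlation => watermarked."""
          rng = np.random.default_rng(key)
          pattern = rng.standard_normal(image.shape)
          score = np.mean((image - image.mean()) * pattern) / image.std()
          return score > threshold

      image = np.random.uniform(0, 255, size=(256, 256))  # stand-in for a generated image
      marked = embed_watermark(image, key=42)
      print(detect_watermark(marked, key=42))   # True: pattern present
      print(detect_watermark(image, key=42))    # False: no watermark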

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late, it got delayed in editing -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    #416 – Yann Lecun: Meta AI, Open Source, Limits of LLMs, AGI & the Future of AI

    Yann LeCun is the Chief AI Scientist at Meta, professor at NYU, Turing Award winner, and one of the most influential researchers in the history of AI.

    Please support this podcast by checking out our sponsors:
    - HiddenLayer: https://hiddenlayer.com/lex
    - LMNT: https://drinkLMNT.com/lex to get free sample pack
    - Shopify: https://shopify.com/lex to get $1 per month trial
    - AG1: https://drinkag1.com/lex to get 1 month supply of fish oil

    Transcript: https://lexfridman.com/yann-lecun-3-transcript

    EPISODE LINKS:
    Yann's Twitter: https://twitter.com/ylecun
    Yann's Facebook: https://facebook.com/yann.lecun
    Meta AI: https://ai.meta.com/

    PODCAST INFO:
    Podcast website: https://lexfridman.com/podcast
    Apple Podcasts: https://apple.co/2lwqZIr
    Spotify: https://spoti.fi/2nEwCF8
    RSS: https://lexfridman.com/feed/podcast/
    YouTube Full Episodes: https://youtube.com/lexfridman
    YouTube Clips: https://youtube.com/lexclips

    SUPPORT & CONNECT:
    - Check out the sponsors above, it's the best way to support this podcast
    - Support on Patreon: https://www.patreon.com/lexfridman
    - Twitter: https://twitter.com/lexfridman
    - Instagram: https://www.instagram.com/lexfridman
    - LinkedIn: https://www.linkedin.com/in/lexfridman
    - Facebook: https://www.facebook.com/lexfridman
    - Medium: https://medium.com/@lexfridman

    OUTLINE: Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.
    (00:00) - Introduction
    (09:10) - Limits of LLMs
    (20:47) - Bilingualism and thinking
    (24:39) - Video prediction
    (31:59) - JEPA (Joint-Embedding Predictive Architecture)
    (35:08) - JEPA vs LLMs
    (44:24) - DINO and I-JEPA
    (45:44) - V-JEPA
    (51:15) - Hierarchical planning
    (57:33) - Autoregressive LLMs
    (1:12:59) - AI hallucination
    (1:18:23) - Reasoning in AI
    (1:35:55) - Reinforcement learning
    (1:41:02) - Woke AI
    (1:50:41) - Open source
    (1:54:19) - AI and ideology
    (1:56:50) - Marc Andreessen
    (2:04:49) - Llama 3
    (2:11:13) - AGI
    (2:15:41) - AI doomers
    (2:31:31) - Joscha Bach
    (2:35:44) - Humanoid robots
    (2:44:52) - Hope for the future

    #158 Connor Leahy: The Unspoken Risks of Centralizing AI Power

    This episode is sponsored by Netsuite by Oracle, the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more.

    Download NetSuite’s popular KPI Checklist, designed to give you consistently excellent performance - absolutely free at NetSuite.com/EYEONAI

     

    On episode 158 of Eye on AI, host Craig Smith dives deep into the world of AI safety, governance, and open-source dilemmas with Connor Leahy, CEO of Conjecture, an AI company specializing in AI safety.

    Connor, known for his pioneering work in open-source large language models, shares his views on the monopolization of AI technology and the risks of keeping such powerful technology in the hands of a few.

    The episode starts with a discussion on the dangers of centralizing AI power, reflecting on OpenAI's situation and the broader implications for AI governance. Connor draws parallels with historical examples, emphasizing the need for widespread governance and responsible AI development. He highlights the importance of creating AI architectures that are understandable and controllable, discussing the challenges in ensuring AI safety in a rapidly evolving field.

    We also explore the complexities of AI ethics, touching upon the necessity of policy and regulation in shaping AI's future. We discuss the potential of AI systems, the importance of public understanding and involvement in AI governance, and the role of governments in regulating AI development.

    The episode concludes with a thought-provoking reflection on the future of AI and its impact on society, economy, and politics. Connor urges the need for careful consideration and action in the face of AI's unprecedented capabilities, advocating for a more cautious approach to AI development.

    Remember to leave a 5-star rating on Spotify and a review on Apple Podcasts if you enjoyed this podcast.

     

    Stay Updated:  

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI

     

    (00:00) Preview

    (00:25) Netsuite by Oracle

    (02:42) Introducing Connor Leahy

    (06:35) The Mayak Facility: A Historical Parallel

    (13:39) Open Source AI: Safety and Risks

    (19:31) Flaws of Self-Regulation in AI

    (24:30) Connor’s Policy Proposals for AI

    (31:02) Implementing a Kill Switch in AI Systems

    (33:39) The Role of Public Opinion and Policy in AI

    (41:00) AI Agents and the Risk of Disinformation

    (49:26) Survivorship Bias and AI Risks

    (52:43) A Hopeful Outlook on AI and Society

    (57:08) Closing Remarks and A word From Our Sponsors

     

    Anthropic Raising at a $20B-$30B Valuation

    Just a couple weeks after announcing their big Amazon deal, Anthropic is out fundraising again with a $2B round that is reported to have Google's involvement and is seeking a $20B-$30B valuation. Before that on the Brief: a Mr. Beast deepfake ad scam; Morgan Stanley says 40% of workers impacted by AI in the next 3 years; and more.

    TAKE OUR SURVEY ON EDUCATIONAL AND LEARNING RESOURCE CONTENT: https://bit.ly/aibreakdownsurvey

    ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/

    The Six Five Insider at IBM Analyst Day with Rob Thomas and Dr Dario Gil

    On this episode of The Six Five – Insider, hosts Daniel Newman and Patrick Moorhead welcome Rob Thomas, SVP IBM Software and Chief Commercial Officer from IBM and Dr. Dario Gil, SVP and Director from IBM Research to continue their conversation from the IBM Think event back in May on IBM’s AI business strategy.

    Their discussion covers:

    • The latest in IBM’s AI business strategy

    • The top concerns with enterprise AI and what IBM is doing to address these concerns for its customers

    • What makes the recently released IBM Granite foundation models unique

    • IBM’s thoughts on AI safety and governance and how they are addressing these areas 


    Learn more about IBM’s AI platform, watsonx, on the company’s website.

    #119 - Open Source GPTs, X.AI, Auto-GPT, China’s Censorship of AI, Fake Drake+The Weeknd Colab

    Our 119th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter at https://lastweekin.ai/

    Check out Jeremie's new book Quantum Physics Made Me Do It

    Quantum Physics Made Me Do It tells the story of human self-understanding through the lens of physics. It explores what we can and can’t know about reality, and how tiny tweaks to quantum theory can reshape our entire picture of the universe. And because I couldn't resist, it explains what that story means for AI and the future of sentience.

    You can find it on Amazon in the UK, Canada, and the US — here are the links:

    UK version | Canadian version | US version 

     

    Outline: