
    advanced ai

    Explore "advanced ai" with insightful episodes like "Boston Dynamics Shows Next Frontier in Robotics", "A New AI Safety Report (and Why the Media Loves the AI Extinction Narrative)", "Sam Altman Wants to Raise $5-$7 TRILLION for AI Chip Plants", "Artificial: Episode 3, ChatGPT" and "#146 - ChatGPT’s 1 year anniversary, DeepMind GNoME, Extraction of Training Data from LLMs, AnyDream" from podcasts like "The AI Breakdown: Daily Artificial Intelligence News and Discussions", "The Journal." and "Last Week in AI", and more!

    Episodes (17)

    Boston Dynamics Shows Next Frontier in Robotics

    Boston Dynamics has unveiled a new version of their Atlas robot, showcasing advanced mobility with unconventional joint movements and capturing widespread attention. This innovation highlights the evolving capabilities of robotics, challenging traditional designs and extending potential applications. Elsewhere in the robotics industry, Figure has secured substantial investment for their A1 humanoid robot, enhancing interaction through AI, while Tesla's Optimus remains a key player awaiting updates.

    ** CHECK OUT THE JUST-LAUNCHED SUPERINTELLIGENT PLATFORM - 300+ AI video tutorials: https://besuper.ai/

    Consensus 2024 is happening May 29-31 in Austin, Texas. This year marks the tenth annual Consensus, making it the largest and longest-running event dedicated to all sides of crypto, blockchain and Web3. Use code AIBREAKDOWN to get 15% off your pass at https://go.coindesk.com/43SWugo

    ** ABOUT THE AI BREAKDOWN
    The AI Breakdown helps you understand the most important news and discussions in AI.
    Subscribe to The AI Breakdown newsletter: https://theaibreakdown.beehiiv.com/subscribe
    Subscribe to The AI Breakdown on YouTube: https://www.youtube.com/@TheAIBreakdown
    Join the community: bit.ly/aibreakdown
    Learn more: http://breakdown.network/

    A New AI Safety Report (and Why the Media Loves the AI Extinction Narrative)

    US-funded report sparks media frenzy, advocating for drastic AI measures. Unpacking the actual content versus sensational media narratives on potential AI threats, highlighting the gap between report recommendations and media portrayal.

    Sam Altman Wants to Raise $5-$7 TRILLION for AI Chip Plants

    WSJ writes about an incredibly ambitious plan to transform the landscape for compute. Additionally, the White House has started an AI Safety Consortium. Today's Sponsors: Notion - Notion AI. Knowledge, answers, ideas. One click away. - https://notion.com/aibreakdown

    Artificial: Episode 3, ChatGPT

    OpenAI launched ChatGPT with low expectations and little fanfare. But the chatbot was an instant hit and went on to become one of the fastest-growing consumer apps in tech history. ChatGPT’s surprise success gave OpenAI its first shot to make big money, and the company moved quickly to cash in — even as critics called out some very real problems with the company’s hit product.

    Further Reading:
    Outcry Against AI Companies Grows Over Who Controls Internet’s Content
    The Awkward Partnership Leading the AI Boom

    Further Listening:
    Artificial: Episode 1, The Dream
    Artificial: Episode 2, Selling Out

    Learn more about your ad choices. Visit megaphone.fm/adchoices

    #146 - ChatGPT’s 1 year anniversary, DeepMind GNoME, Extraction of Training Data from LLMs, AnyDream

    Our 146th episode with a summary and discussion of last week's big AI news!

    Note: this one is coming out a bit late, sorry! We'll have a new ep with coverage of the big news about Gemini and the EU AI Act out soon though.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    Timestamps + links:

    #144 - OpenAI CEO UN-FIRED, Cruise founders quit, Meta video editing, LLMs can lie, policy updates

    Our 144th episode with a summary and discussion of last week's big AI news, now back with the usual hosts!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai

    Timestamps + links:

    AI Ethics at Code 2023

    Platformer's Casey Newton moderates a conversation at Code 2023 on ethics in artificial intelligence, with Ajeya Cotra, Senior Program Officer at Open Philanthropy, and Helen Toner, Director of Strategy at Georgetown University’s Center for Security and Emerging Technology. The panel discusses the risks and rewards of the technology, as well as best practices and safety measures. Recorded on September 27th in Los Angeles. Learn more about your ad choices. Visit podcastchoices.com/adchoices

    From Asimov to Anthropic: The Evolution of Ethical AI

    Today we explored an exciting new approach called Constitutional AI that aims to align advanced AI systems with human ethics and values. Researchers are encoding principles like honesty, justice, and avoiding harm directly into the objectives and constraints of AI to make their behavior more beneficial. We discussed how AI safety startup Anthropic is pioneering Constitutional AI techniques in their assistant Claude to make it helpful, harmless, and honest. Constitutional frameworks provide proactive guardrails for AI rather than just optimizing for narrow goals like accuracy. This episode covered the origins, real-world applications, and connections to pioneering concepts like Asimov’s Laws of Robotics. Despite ongoing challenges, Constitutional AI demonstrates promising progress towards developing AI we can trust. Stay tuned for more episodes examining this fascinating field!


    Here you can find my free Udemy class: The Essential Guide to Claude 2.

    This podcast was generated with the help of artificial intelligence. We do fact check with human eyes, but there might still be hallucinations in the output.

    Music credit: "Modern Situations by Unicorn Heads"

    Iron Man's Tech through a Real-World Lens: Suiting Up with AI

    This time on A Beginner's Guide to AI, we explore how Tony Stark pioneers artificial intelligence through companions like JARVIS and FRIDAY. JARVIS represents a fully-realized AI system, showcasing abilities like natural language processing, adaptive learning, and general intelligence that surpass even today's most advanced AI. When JARVIS is destroyed, Stark builds the more specialized FRIDAY, who lacks JARVIS’ personality and well-rounded competence. Their contrast reveals tradeoffs between developing AI for general versus narrow purposes that researchers still grapple with today. While not yet feasible, the Iron Man films provide an imaginative glimpse into how AI could look in the future. Perhaps one day, we’ll all have loyal AI partners that transform our lives for the better.




    ---

    THE CONTENT OF THIS EPISODE

    IRON MAN'S WORLD: HOW SCI-FI PREVIEWED OUR AI-POWERED REALITY


    Dive deep into the AI-driven world of Iron Man, one of Marvel's iconic superheroes. Tony Stark, brought to life by Robert Downey Jr., has heavily relied on AI from his earliest inventions to leading drone armies. As the Marvel Cinematic Universe unfolds, Stark's evolving relationship with AI sets the stage for fascinating discussions about the real-world implications and the future potential of such AI.


    JARVIS: Tony's First AI Companion


    JARVIS, an acronym for "Just A Rather Very Intelligent System", was the inaugural AI introduced with the Iron Man suit. Besides assisting Tony in various capacities like flying and formulating battle strategies, JARVIS showcases advanced features such as natural language processing, speech recognition, and synthesis. When compared to the present-day AI assistants like Siri and Alexa, JARVIS transcends them with a more encompassing general intelligence. Fully integrated into the Iron Man suits, JARVIS is adept at operating Stark's machinery, managing pivotal servers, and even piloting the armor when the situation demands. Over time, JARVIS evolves to manifest dynamic intelligence, demonstrating the ability to adapt to varying situations and self-improve.


    JARVIS vs. FRIDAY: A Comparative Study


    After being tragically destroyed by the villainous Ultron, JARVIS was succeeded by FRIDAY. While FRIDAY presents improved security features and processing speed, she lacks the iconic personality and versatility inherent to JARVIS. This transition accentuates the inherent challenges in developing AI for specific functionalities as opposed to general capabilities. As it stands, the majority of today's AI systems are unable to mirror the fluid adaptability portrayed by fictional AIs such as JARVIS.


    Tony Stark's Perspective on AI


    Throughout the episode, we delve deep into Stark's innovative foray into the realm of AI, from the multifaceted capabilities of JARVIS to the more specialized nature of FRIDAY. The overarching narrative of the Iron Man series offers invaluable insights into the prospective future of AI and the dilemmas faced when balancing the development of specific versus general AI. One of the paramount takeaways is that Stark's brilliance doesn't solely reside in his armor but in his unmatched ability to integrate AI seamlessly. The Iron Man narrative serves as a poignant reminder of the transformative potential AI possesses, along with the accompanying responsibilities.


    Conclusion:


    The riveting journey of AI, as depicted through the lens of Iron Man, provides a tantalizing glimpse into its latent potential and inherent challenges. The trajectory of AI's future is largely undetermined, and it's imperative for us to shape it with foresight and responsibility.

    AI and Existential Risk - Overview and Discussion

    A special non-news episode in which Andrey and Jeremie discuss AI X-Risk!

    Please let us know if you'd like us to record more of this sort of thing by emailing contact@lastweekin.ai or commenting wherever you listen.

    Outline:

    (00:00) Intro
    (03:55) Topic overview
    (10:22) Definitions of terms
    (35:25) AI X-Risk scenarios
    (41:00) Pathways to Extinction
    (52:48) Relevant assumptions
    (58:45) Our positions on AI X-Risk
    (01:08:10) General Debate
    (01:31:25) Positive/Negative transfer
    (01:37:40) X-Risk within 5 years
    (01:46:50) Can we control an AGI
    (01:55:22) AI Safety Aesthetics
    (02:00:53) Recap
    (02:02:20) Outer vs inner alignment
    (02:06:45) AI safety and policy today
    (02:15:35) Outro

    Links

    Taxonomy of Pathways to Dangerous AI

    Clarifying AI X-risk

    Existential Risks and Global Governance Issues Around AI and Robotics

    Current and Near-Term AI as a Potential Existential Risk Factor

    AI x-risk, approximately ordered by embarrassment

    Classification of Global Catastrophic Risks Connected with Artificial Intelligence

    X-Risk Analysis for AI Research

    The Alignment Problem from a Deep Learning Perspective

    The Terminator Movies: From Sci-Fi Nightmare to AI Safety Blueprint

    The Terminator films have profoundly shaped how society thinks about artificial intelligence. This episode analyzes concepts like artificial general intelligence through the lens of Skynet, the malevolent AI in the movies. We explore real-world AI safety research inspired by cautionary sci-fi narratives. The episode prompts a thoughtful examination of how we can develop advanced AI that enhances humanity rather than destroying it. With ethical, responsible innovation, we can steer the future toward an AI-enabled world that benefits all.



    Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress

    Here is my conversation with Dario Amodei, CEO of Anthropic.

    Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (00:00:00) - Introduction

    (00:01:00) - Scaling

    (00:15:46) - Language

    (00:22:58) - Economic Usefulness

    (00:38:05) - Bioterrorism

    (00:43:35) - Cybersecurity

    (00:47:19) - Alignment & mechanistic interpretability

    (00:57:43) - Does alignment research require scale?

    (01:05:30) - Misuse vs misalignment

    (01:09:06) - What if AI goes well?

    (01:11:05) - China

    (01:15:11) - How to think about alignment

    (01:31:31) - Is modern security good enough?

    (01:36:09) - Inefficiencies in training

    (01:45:53) - Anthropic’s Long Term Benefit Trust

    (01:51:18) - Is Claude conscious?

    (01:56:14) - Keeping a low profile



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    A Critique of Marc Andreessen's "Why AI Will Save The World"

    Last week's Long Reads was Marc Andreessen's "Why AI Will Save the World." This week we read Dwarkesh Patel's "Contra Marc Andreessen on AI."

    Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

    For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

    We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

    If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (0:00:00) - TIME article

    (0:09:06) - Are humans aligned?

    (0:37:35) - Large language models

    (1:07:15) - Can AIs help with alignment?

    (1:30:17) - Society’s response to AI

    (1:44:42) - Predictions (or lack thereof)

    (1:56:55) - Being Eliezer

    (2:13:06) - Orthogonality

    (2:35:00) - Could alignment be easier than we think?

    (3:02:15) - What will AIs want?

    (3:43:54) - Writing fiction & whether rationality helps you win



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    DEBRIEF - We're All Gonna Die

    Debriefing the episode with Eliezer Yudkowsky. This one was so good, we had to share. The fate of humanity might depend on it.

    The Debrief Episode goes out EVERY MONDAY for Bankless Citizens. Want the Debrief Episode? Get the Premium RSS feed by Subscribing to Bankless!

    WATCH THE FULL EPISODE HERE:
    https://youtu.be/gA1sNLL6yg4 

    ------
    🚀 SUBSCRIBE TO NEWSLETTER:          https://newsletter.banklesshq.com/ 

    -----
    Not financial or tax advice. This channel is strictly educational and is not investment advice or a solicitation to buy or sell any assets or to make any financial decisions. This video is not tax advice. Talk to your accountant. Do your own research.

    Disclosure. From time to time I may add links in this newsletter to products I use. I may receive commission if you make a purchase through one of these links. Additionally, the Bankless writers hold crypto assets. See our investment disclosures here:
    https://www.bankless.com/disclosures 

    Can effective altruism be redeemed?

    Guest host Sigal Samuel talks with Holden Karnofsky about effective altruism, a movement flung into public scrutiny with the collapse of Sam Bankman-Fried and his crypto exchange, FTX. They discuss EA’s approach to charitable giving, the relationship between effective altruism and the moral philosophy of utilitarianism, and what reforms might be needed for the future of the movement.

    Note: In August 2022, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause.

    Host: Sigal Samuel (@SigalSamuel), Senior Reporter, Vox
    Guest: Holden Karnofsky, co-founder of GiveWell; CEO of Open Philanthropy

    References:
    "Effective altruism gave rise to Sam Bankman-Fried. Now it's facing a moral reckoning" by Sigal Samuel (Vox; Nov. 16, 2022)
    "The Reluctant Prophet of Effective Altruism" by Gideon Lewis-Kraus (New Yorker; Aug. 8, 2022)
    "Sam Bankman-Fried tries to explain himself" by Kelsey Piper (Vox; Nov. 16, 2022)
    "EA is about maximization, and maximization is perilous" by Holden Karnofsky (Effective Altruism Forum; Sept. 2, 2022)
    "Defending One-Dimensional Ethics" by Holden Karnofsky (Cold Takes blog; Feb. 15, 2022)
    "Future-proof ethics" by Holden Karnofsky (Cold Takes blog; Feb. 2, 2022)
    "Bayesian mindset" by Holden Karnofsky (Cold Takes blog; Dec. 21, 2021)
    "EA Structural Reform Ideas" by Carla Zoe Cremer (Nov. 12, 2022)
    "Democratising Risk: In Search of a Methodology to Study Existential Risk" by Carla Cremer and Luke Kemp (SSRN; Dec. 28, 2021)

    Enjoyed this episode? Rate The Gray Area ⭐⭐⭐⭐⭐ and leave a review on Apple Podcasts. Subscribe for free. Be the first to hear the next episode of The Gray Area. Subscribe in your favorite podcast app.

    Support The Gray Area by making a financial contribution to Vox! bit.ly/givepodcasts

    This episode was made by:
    Producer: Erikk Geannikis
    Editor: Amy Drozdowska
    Engineer: Patrick Boyd
    Editorial Director, Vox Talk: A.M. Hall

    Learn more about your ad choices. Visit podcastchoices.com/adchoices

    Annaka Harris On Consciousness

    What is consciousness? How does it arise? And why does it exist? We take ‘experience' for granted. But the very existence of consciousness raises profound questions: Why would any collection of matter in the universe be conscious? How are we able to think about this? And why should we?

    Our guide for today's philosophic and scientific exploration of these mysteries is Annaka Harris. An editor and consultant for science writers specializing in neuroscience and physics, Annaka is the author of the children's book I Wonder, a collaborator on the Mindful Games Activity Cards by Susan Kaiser Greenland, and a volunteer mindfulness teacher for the Inner Kids organization. Annaka's work has appeared in The New York Times, and all of her guided meditations and lessons for children are available on the Waking Up app, the digital meditation platform created by her husband Sam Harris — the renowned author, public intellectual, blogger, and podcast host.

    Annaka’s latest book — which recently hit the New York Times bestseller list and provides the focus for today’s conversation — is entitled Conscious: A Brief Guide to the Fundamental Mystery of the Mind. A must-read for any and all curious about one of the Universe's great mysteries, it's a brief yet mind-bending read that challenges our assumptions about the nature, origin and purpose of consciousness.

    Equal parts nerdy and fun, this is a deeply profound conversation that tackles the very nature of consciousness itself — and what it means to be a living being having ‘an experience'. We discuss how Annaka became interested in this field and the path undertaken to writing this book. Parsing instinct from scientific fact, we deconstruct our assumptions about consciousness and grapple with its essential nature — what is consciousness exactly? And where does it physically reside? We discuss meditation and artificial intelligence. We dive into plant consciousness. We explore panpsychism (a theory I quite fancy). And we muse about the role of spirituality in scientific inquiry.

    All told, this tackles the current limits of science and human understanding and leaves us wondering: is it possible to truly understand everything?

    The visually inclined can watch our entire conversation on YouTube here: bit.ly/annakaharris460 (please subscribe!)

    An intellectual delight from start to finish, I thoroughly enjoyed talking to Annaka and I sincerely hope you enjoy the listen.

    Peace + Plants,
    Rich