
    The Black Box: Even AI’s creators don’t understand it

    July 12, 2023

    Podcast Summary

    • Understanding the Challenges of Generative AI: Generative AI offers exciting possibilities but also poses significant challenges, including job displacement, ethical concerns, and a lack of understanding of its inner workings and behavior.

      While generative AI, such as ChatGPT, offers exciting possibilities in various fields, it also poses significant challenges that we are still struggling to understand. The technology, which can create everything from biblical stories to full websites, is rapidly changing industries and raising concerns about its impact on jobs and even humanity. However, researchers like Sam Bowman warn that we currently lack the ability to fully control or interpret the behavior and inner workings of these AIs. This lack of understanding could have serious implications as these technologies continue to evolve and become more sophisticated. It's important for individuals and organizations to stay informed and engaged in the ongoing conversation about the ethical and practical implications of generative AI.

    • The complexity and unexplained nature of modern AI systems: Modern AI systems are complex and lack transparency, making it challenging to understand their decision-making processes and potential risks.

      We are dealing with complex and powerful AI systems that are not fully understood by their creators. The history of AI development began with the question of whether intelligence could be replicated on a computer. Early AI programs were simple and could only perform specific tasks, but as computers became more powerful, these programs grew more capable. IBM's Deep Blue, which could play chess, was a significant achievement but was still based on pre-programmed moves and evaluations. However, the AI systems we encounter today, such as those used in autonomous vehicles or language translation, are much more complex and "unexplainable." We don't fully understand how they make decisions or process information, and this lack of transparency raises concerns about potential risks and unintended consequences. The unknowns surrounding these AI systems are a significant challenge, and understanding them is crucial for ensuring their safe and beneficial use.
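
      To make the episode's contrast concrete, the toy Python sketch below shows the shape of a Deep Blue-style engine: brute-force minimax search over a hand-written evaluation function, with every rule and heuristic fixed in advance by programmers and nothing learned. It is only an illustration, not IBM's code; the board interface here is hypothetical, and the real Deep Blue searched vastly deeper with expert-tuned evaluation terms.

      PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

      def evaluate(position):
          """Hand-coded heuristic: material balance from White's perspective."""
          score = 0
          for piece in position.pieces():  # hypothetical board API
              value = PIECE_VALUES[piece.kind]
              score += value if piece.is_white else -value
          return score

      def minimax(position, depth, maximizing):
          """Score moves to a fixed depth by brute calculation. Nothing here
          is learned: the rules and heuristics are all fixed in advance."""
          if depth == 0 or position.is_terminal():
              return evaluate(position)
          scores = [minimax(position.play(m), depth - 1, not maximizing)
                    for m in position.legal_moves()]
          return max(scores) if maximizing else min(scores)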

    • From brute calculation to learning and adapting: AlphaGo, a more sophisticated AI system, revolutionized the field by learning and adapting, unlike the earlier Deep Blue, which relied on brute calculation and pre-programmed rules.

      While early AI systems like Deep Blue could outperform human champions in specific tasks through brute calculation and pre-programmed rules, they lacked the ability to generate new ideas or adapt to unforeseen circumstances. This limitation was evident when Garry Kasparov, the chess world champion, was able to outmaneuver Deep Blue during their initial match in 1996. However, DeepMind's AlphaGo, which was designed to learn and improve over time, revolutionized the field of AI by defeating world champion Lee Sedol in the complex board game of Go in 2016. AlphaGo's success demonstrated the potential of more sophisticated AI systems that could learn and improve on their own, rather than relying on pre-programmed rules. The development of AlphaGo marked a significant milestone in the progression of AI, as it showcased the potential for machines to learn and adapt, much like the human brain.

    • AlphaGo's unexpected moves challenged human understanding: AlphaGo, an AI that taught itself to play Go, made unpredictable moves that surpassed human understanding, demonstrating the potential for AI to surpass human capabilities.

      AlphaGo, a groundbreaking artificial intelligence developed by DeepMind, taught itself to play Go at a world-class level using an artificial neural network and trial-and-error learning. This method allowed AlphaGo to develop its own capabilities, but it also made it difficult for researchers to understand exactly which features it was focusing on when making decisions. This was a significant shift from Deep Blue, which was programmed with specific rules. AlphaGo's unexpected and seemingly erratic moves, such as move 37 in its match against Lee Sedol, demonstrated the power of an AI that wasn't fully understood by its creators. This event sent shockwaves through the AI community, highlighting the potential for AI to surpass human understanding and capabilities.
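
      For a rough sense of what trial-and-error learning means here, the sketch below shows a minimal policy-gradient (REINFORCE) self-play update in PyTorch: play a game, then nudge the network toward the moves that preceded a win and away from those that preceded a loss. This illustrates the general idea only, not DeepMind's system; AlphaGo's actual pipeline also involved supervised pretraining on human games, a value network, and Monte Carlo tree search, and the Go environment `env` below is hypothetical.

      import torch

      def self_play_episode(policy_net, env):
          """Play one game, returning log-probs of chosen moves and the result."""
          log_probs, state = [], env.reset()  # `env` is a hypothetical Go environment
          done = False
          while not done:
              logits = policy_net(state)
              dist = torch.distributions.Categorical(logits=logits)
              move = dist.sample()  # explore by sampling rather than always exploiting
              log_probs.append(dist.log_prob(move))
              state, done = env.step(move.item())
          return log_probs, env.result()  # result: +1 for a win, -1 for a loss

      def reinforce_update(policy_net, optimizer, env):
          """One trial-and-error step: reinforce every move in proportion
          to the final outcome of the game it appeared in."""
          log_probs, result = self_play_episode(policy_net, env)
          loss = -result * torch.stack(log_probs).sum()
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()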

    • Three turning points in AI: Deep Blue, AlphaGo, and ChatGPT. From beating human champions at chess to generating human-like text, AI's progress over the last 30 years has been marked by significant milestones. ChatGPT, trained through autocomplete and human-rated responses, represents a new era of AI development with autonomous learning and complex responses.

      The last 30 years of AI development have seen significant advancements, marked by three major turning points. The first was Deep Blue, an AI that could beat human champions at chess. The second was AlphaGo, an AI that mastered the complex game of Go through trial and error. And the third is ChatGPT, an AI that generates human-like text based on its own connections and learning, not explicitly programmed rules or tasks. Few anticipated how rapidly AI would progress, and that progress drew more people into the field. As a result, various AI applications emerged, such as better image recognition, augmented reality, and writing tools like ChatGPT. However, ChatGPT is more than just a writing tool; it's a complex, enigmatic AI that continues to challenge our understanding. The way ChatGPT is trained sets it apart from previous AIs. It is primarily trained through autocomplete, predicting the next word from the words that came before it. OpenAI then added an extra layer by having human workers label toxic material and rate entire responses, allowing ChatGPT to produce more coherent and complex output. Unlike Deep Blue and AlphaGo, ChatGPT doesn't have explicit programming or a specific task; it learns and develops its solutions autonomously. This exploratory approach to AI development has led to impressive results, but it also raises new questions and challenges. As AI continues to evolve, it will be crucial to consider the ethical, social, and practical implications of these advanced systems.
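
      The autocomplete objective described above is standard next-token prediction. The sketch below shows the usual language-modeling loss in PyTorch; it is a generic illustration rather than OpenAI's code, `model` is a stand-in for any network that maps tokens to vocabulary scores, and the human-feedback stage (workers rating responses) is a separate fine-tuning step layered on top of a model trained this way.

      import torch.nn.functional as F

      def next_token_loss(model, token_ids):
          """Cross-entropy between each position's prediction and the token
          that actually comes next in the training text."""
          inputs, targets = token_ids[:, :-1], token_ids[:, 1:]  # shift by one position
          logits = model(inputs)  # shape: (batch, sequence, vocabulary)
          return F.cross_entropy(
              logits.reshape(-1, logits.size(-1)),  # one prediction per position
              targets.reshape(-1),                  # the real next word is the label
          )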

    • Understanding the Capabilities and Risks of Advanced AI Models: Advanced AI models like ChatGPT can generate human-like responses, but their inner workings are a mystery, and their outputs should be verified for accuracy before use in critical contexts. Potential risks include fabricated information and a lack of transparency.

      While ChatGPT and other advanced AI models like it may appear to generate human-like responses and perform complex tasks, the inner workings of these systems remain largely a mystery to researchers. These models are not deliberately engineered or designed to provide factual information, but rather rely on neural connections and patterns learned through trial and error. This trial-and-error method has led to surprising capabilities, such as generating convincing essays, writing Morse code, passing the bar exam, and even creating business strategies and websites. However, there are significant unknowns and potential risks associated with these advanced AI models. For instance, a lawyer recently used ChatGPT to generate a legal filing citing entirely fabricated cases, highlighting the need for greater transparency and understanding of these systems. Despite these concerns, the capabilities of these AI models can be uncanny and even superhuman, leading to a growing reliance on them in various fields. However, it is crucial to remember that these models are not human, and their outputs should be verified for accuracy before being used in any critical or legal context.

    • The understanding and capabilities of GPT-4 are a topic of debate: The extent of GPT-4's understanding and intelligence is uncertain, with some experts raising concerns about its unpredictability and the challenges it could pose for companies and society.

      The capabilities and understanding of AI, specifically GPT-4, are a topic of ongoing debate. Microsoft claims that GPT-4 has a basic grasp of physics and understands the meanings behind words, but some experts argue that these claims are an oversimplification and that the system has not reached human-level intelligence. The ability of GPT-4 to perform various tasks it was not explicitly designed for, such as creating business strategies or stacking objects, raises concerns about the unpredictability of future AI developments. Some researchers believe that with advancements in science, we may be able to better predict and understand the capabilities of AI in the future. For now, however, the true extent of GPT-4's understanding and intelligence remains uncertain. Sam Bowman is more focused on the practical applications of GPT-4 and the potential unpredictability of future AI developments, which could pose challenges for companies and society as a whole.

    • Understanding the Complexity of AI: The complexity of AI models and the vast number of calculations involved make explanation a daunting task, but it's crucial to understand AI's capabilities and limitations in order to navigate its implications.

      As AI becomes more powerful and integrated into our world, the lack of understanding about how it works poses a significant challenge. Researchers are pushing for more interpretability in AI, but deciphering existing systems and building explainable ones have proven extremely difficult. The complexity of these models, loosely inspired by the human brain, and the vast number of calculations involved make explanation a daunting task. Furthermore, AI's ability to generate solutions we can't explain adds an extra layer of uncertainty. Companies continue to deploy these powerful programs, and without a clear understanding of the technology's capabilities and limitations, we risk facing unintended consequences. The next decade may well be defined by our efforts to understand AI and navigate its implications. It's crucial that we get out in front of potential catastrophes rather than reacting after the fact.

    • Exploring the Risks and Ethics of Artificial Intelligence: This episode of Unexplainable delves into the potential risks and ethical considerations surrounding the development of advanced AI, featuring interviews with experts in the field and discussing the concept of 'alignment' and the challenges of ensuring AI's goals align with human values.

      The key takeaway from this episode of Unexplainable is its exploration of the potential risks and ethical considerations surrounding the development of artificial intelligence. The episode centers on Sam Bowman, a researcher at NYU whose work on AI alignment has received funding from Open Philanthropy; the show discloses that the host's brother is a board member at Open Phil and notes that this poses no conflict of interest, while still raising important questions about the role of funding sources in shaping research and the potential consequences of advanced AI. The team at Unexplainable also discusses the concept of "alignment," or ensuring that AI's goals align with human values, and the challenges of achieving this. Through interviews with experts in the field, the episode sheds light on the complexities and uncertainties surrounding AI development, emphasizing the need for ongoing research and dialogue. If you're interested in learning more about AI, its potential risks, and the efforts being made to ensure its alignment with human values, tune in to the next episode of Unexplainable.

    Recent Episodes from Unexplainable

    Embracing economic chaos

    Can a physicist predict our messy economy by building an enormous simulation of the entire world?
    Unexplainable
    July 03, 2024

    We still don’t really know how inflation works

    Inflation is one of the most significant issues shaping the 2024 election. But how much can we actually do to control it?
    Unexplainable
    June 26, 2024

    Can you put a price on nature?

    It’s hard to figure out the economic value of a wild bat or any other part of the natural world, but some scientists argue that this kind of calculation could help protect our environment.
    Unexplainable
    June 19, 2024

    The deepest spot in the ocean

    Seventy-five percent of the seafloor remains unmapped and unexplored, but the first few glimpses scientists have gotten of the ocean’s depths have completely revolutionized our understanding of the planet.
    Unexplainable
    June 12, 2024

    What’s the tallest mountain in the world?

    If you just stood up and shouted, “It’s Mount Everest, duh!” then take a seat. Not only is Everest’s official height constantly changing, but three other mountains might actually be king of the hill.
    Unexplainable
    June 05, 2024

    Did trees kill the world?

    Way back when forests first evolved on Earth … they might have triggered one of the biggest mass extinctions in the history of the planet. What can we learn from this ancient climate apocalypse?
    Unexplainable
    May 22, 2024

    Can we stop aging?

    From blood transfusions to enzyme boosters, our friends at Science Vs dive into the latest research on the search for the fountain of youth.
    Unexplainable
    May 15, 2024

    Who's the daddy? There isn't one.

    A snake. A ray. A shark. They each got pregnant with no male involved. In fact, scientists are finding more and more species that can reproduce on their own. What’s going on?

    Itch hunt

    Itch used to be understood as a mild form of pain, but scientists are learning this sense is more than just skin deep. How deep does it go?

    How did Earth get its water?

    Life as we know it needs water, but scientists can’t figure out where Earth’s water came from. Answering that question is just one piece of an even bigger mystery: “Why are we here?” (Updated from 2023)

    Related Episodes

    #130 Mathew Lodge: The Future of Large Language Models in AI

    Welcome to episode #130 of Eye on AI with Mathew Lodge. In this episode, we explore the world of reinforcement learning and code generation. Mathew Lodge, the CEO of Diffblue, shares insights into how reinforcement learning fuels generative AI.

    As we explore the intricacies of reinforcement learning, we uncover its potential in game playing and guiding us towards solutions. We shed light on the products that it powers, such as AlphaGo and AlphaDev. However, we also address the challenges of large language models and explain why they may not be the ultimate solution for code generation.

    In the last part of our conversation, we delve into the future of language models and intelligence. Mathew shares valuable insights on merging no-code and low-code solutions. We confront the skepticism of software developers towards AI for code products and the task of articulating program outcomes. Wrapping up, we reflect on the evolution of programming languages and the impact of abstraction on machine learning.

    (00:00) Preview & sponsorship
    (01:51) Reinforcement Learning and Code Generation
    (04:39) Reinforcement Learning and Improving Algorithms
    (15:32) The Challenges of Large Language Models
    (23:58) Future of Language Models and Intelligence
    (35:50) Challenges and Potential of AI-generated Code
    (48:32) Programming Language Evolution and Higher-Level Languages

    Craig Smith Twitter: https://twitter.com/craigss

    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI 


    Mo Gawdat: Ex-Google Officer Warns About the Dangers of AI, Urges All to Prepare Now! | E241

    So what do you need to know to prepare for the next 5, 10, or 25 years in a world increasingly impacted by artificial intelligence? How could AI change your business and your life irreparably? Our guest today, Mo Gawdat, an AI expert and former Chief Business Officer at Google [X], is going to break down what you need to understand about AI and how it is radically altering our workplaces, careers, and even the very fabric of our society.

    Mo Gawdat is the host of the popular podcast, Slo Mo, and the author of three best-selling books. After a 30-year career in tech, including working at Google’s “moonshot factory” of innovation, Mo has made AI and happiness his primary research focuses. Motivated by the tragic loss of his son, Ali, in 2014, Mo began pouring his findings into his international bestselling book, Solve for Happy; his mission is to help one billion people become happier. His second book, Scary Smart, provides a roadmap of how humanity can ensure a symbiotic coexistence with AI. Since the release of ChatGPT, Mo has been recognized for his early whistleblowing on AI’s unregulated development and has become one of the most globally consulted experts on the topic.

    In this episode, Hala and Mo will discuss:
    - His early days working on AI at Google
    - How AI is surpassing human intelligence
    - Why AI can have agency and free will
    - How machines already manipulate us in our daily lives
    - The boundaries that could help us contain the risks of AI
    - The Prisoner’s Dilemma of AI development
    - How AI is an arms race akin to nuclear weapons
    - Why AI will redesign the job market and the fabric of society
    - A world with a global intelligence divide like the digital divide
    - Why we are facing the end of truth
    - Why things will get worse before they get better under AI
    - What you need to know to participate in the AI revolution
    - And other topics

    Resources Mentioned:
    Mo’s Website: https://www.mogawdat.com/
    Mo’s LinkedIn: https://www.linkedin.com/in/mogawdat/
    Mo’s Twitter: https://twitter.com/mgawdat
    Mo’s Instagram: https://www.instagram.com/mo_gawdat/
    Mo’s Facebook: https://www.facebook.com/Mo.Gawdat.Official/
    Mo’s YouTube: https://www.youtube.com/@MoGawdatOfficial
    Mo’s Podcast: Slo Mo
    Mo’s book on the future of artificial intelligence, Scary Smart: https://www.amazon.com/Scary-Smart-Future-Artificial-Intelligence/dp/1529077184/

    Open The Pod Bay Doors, Sydney

    What does the advent of artificial intelligence portend for the future of humanity? Is it a tool, or a human replacement system? Today we dive deep into the philosophical queries centered on the implications of A.I. through a brand new format: an experiment in documentary-style storytelling in which we ask a big question, investigate that query with several experts, attempt to arrive at a reasoned conclusion, and hopefully entertain you along the way. My co-host for this adventure is Adam Skolnick, a veteran journalist, author of One Breath, and co-author of David Goggins’ Can’t Hurt Me and Never Finished. Adam writes about adventure sports, environmental issues, and civil rights for outlets such as The New York Times, Outside, ESPN, BBC, and Men’s Health.

    A.I.’s Original Sin

    A Times investigation shows how the country’s biggest technology companies, as they raced to build powerful new artificial intelligence systems, bent and broke the rules from the start.

    Cade Metz, a technology reporter for The Times, explains what he uncovered.

    Guest: Cade Metz, a technology reporter for The New York Times.

    For more information on today’s episode, visit nytimes.com/thedaily. Transcripts of each episode will be made available by the next workday.

    Ilya Sutskever: The Mastermind Behind GPT-4 and the Future of AI

    In this podcast episode, Ilya Sutskever, the co-founder and chief scientist at OpenAI, discusses his vision for the future of artificial intelligence (AI), including large language models like GPT-4.

    Sutskever starts by explaining the importance of AI research and how OpenAI is working to advance the field. He shares his views on the ethical considerations of AI development and the potential impact of AI on society.

    The conversation then moves on to large language models and their capabilities. Sutskever talks about the challenges of developing GPT-4 and the limitations of current models. He discusses the potential for large language models to generate text that is indistinguishable from human writing and how this technology could be used in the future.

    Sutskever also shares his views on AI-aided democracy and how AI could help solve global problems such as climate change and poverty. He emphasizes the importance of building AI systems that are transparent, ethical, and aligned with human values.

    Throughout the conversation, Sutskever provides insights into the current state of AI research, the challenges facing the field, and his vision for the future of AI. This podcast episode is a must-listen for anyone interested in the intersection of AI, language, and society.

    Timestamps:

    (00:04) Introduction of Craig Smith and Ilya Sutskever.
    (01:00) Sutskever's AI and consciousness interests.
    (02:30) Sutskever's start in machine learning with Hinton.
    (03:45) Realization about training large neural networks.
    (06:33) Convolutional neural network breakthroughs and ImageNet.
    (08:36) Predicting the next thing for unsupervised learning.
    (10:24) Development of GPT-3 and scaling in deep learning.
    (11:42) Specific scaling in deep learning and potential discovery.
    (13:01) Small changes can have big impact.
    (13:46) Limits of large language models and lack of understanding.
    (14:32) Difficulty in discussing limits of language models.
    (15:13) Statistical regularities lead to better understanding of world.
    (16:33) Limitations of language models and hope for reinforcement learning.
    (17:52) Teaching neural nets through interaction with humans.
    (21:44) Multimodal understanding not necessary for language models.
    (25:28) Autoregressive transformers and high-dimensional distributions.
    (26:02) Autoregressive transformers work well on images.
    (27:09) Pixels represented like a string of text.
    (29:40) Large generative models learn compressed representations of real-world processes.
    (31:31) Human teachers needed to guide reinforcement learning process.
    (35:10) Opportunity to teach AI models more skills with less data.
    (39:57) Desirable to have democratic process for providing information.
    (41:15) Impossible to understand everything in complicated situations.

    Craig Smith Twitter: https://twitter.com/craigss
    Eye on A.I. Twitter: https://twitter.com/EyeOn_AI