
    The Pioneers of Modern AI with Cade Metz, author of "Genius Makers"

    April 12, 2021

    Podcast Summary

    • The Rise of Neural Networks: A Story of Determination and Cultural Tolerance
      Neural networks, once a fringe idea, revolutionized AI through determination and cultural acceptance, despite facing skepticism and resistance.

      Against the odds, neural networks, an idea once considered fringe, gained traction and revolutionized the field of artificial intelligence, despite skepticism and resistance in certain cultures and academic communities. The idea, which dates back to the 1950s, was championed by individuals like Geoff Hinton, who remained committed to it despite the lack of support. Hinton's journey took him from the UK to the US, where he found a more receptive environment for his research. Yet even as his work gained recognition, his ethical choices, particularly around government funding, shaped how the field approached such questions. The rise of neural networks to prominence is a testament to the power of determination and the importance of cultural tolerance for innovative ideas.

    • The auction of Geoff Hinton's team sets the stage for the evolution of neural networks
      The book explores the impact of neural networks on technology through the stories of key figures like Geoff Hinton and the tight-knit AI research community.

      The history of artificial intelligence (AI) is filled with fascinating, recurring situations, as the book illustrates. The author chose to open the story with the auction of Geoff Hinton and his team, creators of a groundbreaking AI paper, to the highest bidder. This opening, full of tension and drama, captured the involvement of major players, including American and Chinese tech giants, and set the stage for the book's exploration of the evolution of neural networks. The surprise of China's early presence in the field adds depth to the narrative. The author's central idea was to follow the impact of neural networks on technology, and the story fell into place as key figures, including Hinton, emerged as essential contributors to the field. The tight-knit community of AI researchers and their interconnected ideas create an intriguing and illuminating thread throughout the book.

    • The Unexpected Connections and Collaborations in AI
      Unexpected collaborations and chance encounters have significantly contributed to the advancements in AI. Companies often engage in a race to adopt AI technology to avoid being left behind.

      The field of artificial intelligence has seen significant advances and transformations over the past few decades, driven by connections and collaborations between key individuals and companies. These relationships have often formed unexpectedly, as when Jeff Dean and Andrew Ng crossed paths at Google with similar ideas, or when Elon Musk mentioned DeepMind on a private jet, sparking Larry Page's interest. The small world of AI research and development has been marked by a constant arms race between companies, with the fear of being left behind driving rapid investment and adoption. Microsoft, for example, eventually embraced AI despite initial hesitation. The personalities of companies have also shaped how they integrated AI, with some finding it a natural fit and others struggling to adapt. The author interviewed numerous individuals for the book, both on and off the record, to uncover these stories and threads.

    • Verifying facts through multiple sources
      Thorough journalism requires cross-verifying facts from multiple sources to ensure accuracy and maintain narrative flow.

      Meticulous journalism involves speaking with a multitude of people and ensuring accuracy by cross-verifying key facts across sources. Discrepancies and inconsistencies are common, but handling them delicately is crucial for maintaining the flow and coherence of the narrative. Writing a book like "Genius Makers" involves deciding whom to focus on, as every individual contributes uniquely to the larger story. Surprising discoveries may emerge during the writing process, but they often fit into the larger narrative and reflect the trajectory of ongoing technological developments.

    • The Clash Between AGI Researchers and Corporate Objectives
      The pursuit of AGI raises ethical concerns, but researchers' beliefs and corporate objectives may clash, requiring ongoing dialogue and collective responsibility.

      The pursuit of artificial general intelligence (AGI) inside tech companies can lead to a clash between idealistic researchers and corporate objectives. Many researchers, despite being atheists, hold a deep belief in AGI akin to religious faith. The technology, however, can be biased and generate toxic content, posing significant ethical challenges. Companies, with an eye on their future and their bottom line, may respond to these concerns differently, creating a contentious dynamic. The recent focus on bias in AI systems, which can be endemic because the systems learn from human data, highlights this ongoing issue. While some researchers advocate for addressing these ethical concerns, others face criticism or even leave their companies. This complex issue requires ongoing dialogue and a collective effort to ensure that AI technology is developed and deployed responsibly.

    • Neural networks and large language models come with ethical considerations and potential risks, particularly around bias and toxicity in training data
      The development of advanced technologies like neural networks and large language models brings ethical concerns and potential risks, especially regarding biased data. Recognizing their dual-use nature and considering ethical implications is crucial.

      The development and deployment of advanced technologies such as neural networks and large language models come with significant ethical considerations and potential risks, particularly around bias and toxicity in training data. The analogy of a neural network as a giant lasagna highlights the difficulty of removing biased data once it has been baked into the model, and the growing size and complexity of these models only exacerbate the problem (a toy code sketch of this point follows the summary list below). While there may be potential uses for these technologies in areas like mental health counseling, the lack of guardrails and the potential for harm make this a challenging issue to navigate. As humans, we often think in absolutes, but the world and the technology are more complicated than that. It is important to recognize the dual-use nature of these technologies and to weigh the ethical implications as we continue to develop and deploy them. "Genius Makers" sheds light on these issues and serves as a reminder of the need for thoughtful consideration and regulation in the field of artificial intelligence.

    • The intersection of tech, ethics, and geopolitics in AI
      Ongoing debates and disagreements highlight the need for ethical guidelines in AI development, with concerns over military applications and data privacy. Fei-Fei Li's contributions to data collection and image recognition illustrate the importance of processing power and data in neural networks.

      The intersection of technology, ethics, and geopolitics in artificial intelligence and machine learning is a complex and contentious issue. The conversation highlighted the concerns of founders, researchers, and industry insiders about the use of AI in areas such as military applications and data privacy. The struggle to align the technology with ethical principles is ongoing, and some individuals have left the field altogether over moral objections. Fei-Fei Li, a prominent figure in the field, played a crucial role in advancing the technology through her contributions to data collection and image recognition; her story illustrates the importance of both processing power and data in the development of neural networks. The ongoing debates and disagreements underscore the need for continued dialogue and potential ethical guidelines for those working in the field.

    • Perspectives and biases in AI and tech development
      Unconscious biases can impact data collection and decision-making in AI and tech development, leading to differing viewpoints on ethical and national security issues. The underrepresentation of diverse groups in the field exacerbates these biases.

      Perspectives and biases play a significant role in artificial intelligence and technology development. The dynamics between individuals and companies can produce differing viewpoints on ethical and national security issues, and unconscious biases can shape data collection and decision-making. The case of Project Maven at Google illustrates this: some saw it as a necessary step for national security, while others viewed it as a detrimental move. The underrepresentation of diverse groups in the field exacerbates these biases. "Genius Makers" documents these issues and aims to present a range of perspectives to provide a comprehensive understanding of the field's complexities.

    • China's competitive position in AI
      China now leads in AI publications and citations, but US reliance on foreign talent complicates the competition between the two countries. Both must navigate immigration and security concerns while continuing to innovate.

      The situation in China regarding AI development is more complex and significant than often perceived. China has surpassed the US in AI publications and citations this year, indicating a competitive position in this field. However, it's essential to remember that the US relies on foreign talent, including immigrants, for innovation and growth. The complexity of this issue is further highlighted by the fact that many key players in the field are not US citizens. This race between the US and China is not over, and both countries must navigate the challenges of immigration and security concerns while continuing to advance in AI research. Overall, the conversation underscores the importance of a nuanced understanding of global technological developments and their implications.
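    To make the lasagna analogy above concrete, here is a minimal, hypothetical sketch (an illustration for this summary, not code from the book or the episode): a tiny logistic-regression model picks up a spurious, bias-carrying feature from its training data, and deleting that data afterwards leaves the trained weights untouched. All data, names, and numbers below are made up for illustration.

```python
# A toy sketch of the "lasagna" analogy: training data leaves its trace in the
# learned weights, so deleting the offending data afterwards does not
# "un-bias" an already-trained model. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: one legitimate feature x, plus a spurious feature s that
# matches the label 90% of the time purely as an artifact of "collection".
n = 1000
x = rng.normal(size=n)
y = (x > 0).astype(float)
s = np.where(rng.random(n) < 0.9, y, 1.0 - y)  # spurious, bias-carrying feature
X = np.column_stack([x, s])

def train_logreg(X, y, steps=2000, lr=0.1):
    """Plain batch-gradient-descent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * (X.T @ (p - y)) / len(y)
    return w

w = train_logreg(X, y)
print("trained weights [x, spurious]:", w)  # the spurious feature earns real weight

# Deleting the spurious column from the *dataset* now does nothing to w: its
# influence is already baked in, layer by layer. The remedy is retraining:
w_clean = train_logreg(X[:, :1], y)
print("retrained without spurious feature:", w_clean)
```

    The same logic underlies the difficulty of de-biasing large trained models: the data's influence cannot simply be subtracted back out of the weights, and retraining only grows more expensive as models scale.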

    Recent Episodes from Last Week in AI

    #172 - Claude and Gemini updates, Gemma 2, GPT-4 Critic

    Our 172nd episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Last Week in AI
    July 01, 2024

    #171 - Apple Intelligence, Dream Machine, SSI Inc

    Our 171st episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 24, 2024

    #170 - new Sora rival, OpenAI robotics, understanding GPT4, AGI by 2027?

    Our 170th episode with a summary and discussion of last week's big AI news!

    With hosts Andrey Kurenkov (https://twitter.com/andrey_kurenkov) and Jeremie Harris (https://twitter.com/jeremiecharris)

    Feel free to leave us feedback here.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 09, 2024

    #169 - Google's Search Errors, OpenAI news & DRAMA, new leaderboards

    Our 169th episode with a summary and discussion of last week's big AI news!

    Feel free to leave us feedback here: https://forms.gle/ngXvXZpNJxaAprDv6

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    June 03, 2024

    #168 - OpenAI vs Scar Jo + safety researchers, MS AI updates, cool Anthropic research

    Our 168th episode with a summary and discussion of last week's big AI news!

    With guest host Gavin Purcell from AI for Humans podcast!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + Links:

    Last Week in AI
    May 28, 2024

    #167 - GPT-4o, Project Astra, Veo, OpenAI Departures, Interview with Andrey

    Our 167th episode with a summary and discussion of last week's big AI news!

    With guest host Daliana Liu (https://www.linkedin.com/in/dalianaliu/) from The Data Scientist Show!

    And a special one-time interview with Andrey in the latter part of the podcast.

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 19, 2024

    #166 - new AI song generator, Microsoft's GPT4 efforts, AlphaFold3, xLSTM, OpenAI Model Spec

    Our 166th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 12, 2024

    #165 - Sora challenger, Astribot's S1, Med-Gemini, Refusal in LLMs

    Our 165th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    May 05, 2024

    #164 - Meta AI, Phi-3, OpenELM, Bollywood Deepfakes

    Our 164th episode with a summary and discussion of last week's big AI news!

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 30, 2024

    #163 - Llama 3, Grok-1.5 Vision, new Atlas robot, RHO-1, Medium ban

    Our 163rd episode with a summary and discussion of last week's big AI news!

    Note: apologies for this one coming out a few days late; it got delayed in editing. -Andrey

    Read our text newsletter and comment on the podcast at https://lastweekin.ai/

    Email us your questions and feedback at contact@lastweekin.ai and/or hello@gladstone.ai

    Timestamps + links:

    Last Week in AI
    April 24, 2024

    Related Episodes

    #94 – Ilya Sutskever: Deep Learning

    Ilya Sutskever is the co-founder of OpenAI, one of the most cited computer scientists in history with over 165,000 citations, and, to me, one of the most brilliant and insightful minds ever in the field of deep learning. There are very few people in this world who I would rather talk to and brainstorm with about deep learning, intelligence, and life than Ilya, on and off the mic.

    Support this podcast by signing up with these sponsors:
    ◦ Cash App – use code “LexPodcast” and download:
    ◦ Cash App (App Store): https://apple.co/2sPrUHe
    ◦ Cash App (Google Play): https://bit.ly/2MlvP5w

    EPISODE LINKS:
    ◦ Ilya's Twitter: https://twitter.com/ilyasut
    ◦ Ilya's Website: https://www.cs.toronto.edu/~ilya/

    This conversation is part of the Artificial Intelligence podcast. For more information, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    OUTLINE:
    00:00 - Introduction
    02:23 - AlexNet paper and the ImageNet moment
    08:33 - Cost functions
    13:39 - Recurrent neural networks
    16:19 - Key ideas that led to the success of deep learning
    19:57 - What's harder to solve: language or vision?
    29:35 - We're massively underestimating deep learning
    36:04 - Deep double descent
    41:20 - Backpropagation
    42:42 - Can neural networks be made to reason?
    50:35 - Long-term memory
    56:37 - Language models
    1:00:35 - GPT-2
    1:07:14 - Active learning
    1:08:52 - Staged release of AI systems
    1:13:41 - How to build AGI?
    1:25:00 - Question to AGI
    1:32:07 - Meaning of life

    Jeremiah Lowin – Machine Learning in Investing – [Invest Like the Best, EP.105]

    My guest this week is one of my best and oldest friends, Jeremiah Lowin. Jeremiah has had a fascinating career, starting with advanced work in statistics before moving into the risk management field in the hedge fund world. Through his career he has studied data, risk, statistics, and machine learning—the last of which is the topic of our conversation today. He has now left the world of finance to found a company called Prefect, a framework for building data infrastructure. Prefect was inspired by observing frictions between data scientists and data engineers, and solves these problems with a functional API for defining and executing data workflows. These problems, while wonky, are ones I can relate to from working in quantitative investing, and others out there who suffer from them will be nodding their heads. In full and fair disclosure, both my family and I are investors in Jeremiah's business. You won't have to worry about that potential conflict of interest in today's conversation, though, because our focus is on the deployment of machine learning technologies in the realm of investing. What I love about talking to Jeremiah is that he is an optimist and a skeptic. He loves working with new statistical learning technologies, but often thinks they are overhyped or entirely unsuited to the tasks they are being used for. We get into some deep detail on how tests are set up, the importance of data, and how the minimization of error is a guiding light in machine learning and perhaps all of human learning, too. Let's dive in.

    For more episodes go to InvestorFieldGuide.com/podcast. Sign up for the book club, where you'll get a full investor curriculum and then 3-4 suggestions every month, at InvestorFieldGuide.com/bookclub. Follow Patrick on Twitter at @patrick_oshag

    Show Notes:
    2:06 - (First question) – What people need to think about when considering using machine learning tools
    3:19 – Types of problems that AI is perfect for
    6:09 – Walking through an actual test and understanding the terminology
    11:52 – Data in training: training set, test set, validation set
    13:55 – The difference between machine learning and classical academic finance modelling
    16:09 – What the future of investing will look like using these technologies
    19:53 – The concept of stationarity
    21:31 – Why you shouldn't take label formation in tests for granted
    24:12 – The ability for a model to shrug
    26:13 – Hyperparameter tuning
    28:16 – Categories of types of models
    30:49 – The idea of a nearest neighbor or k-means algorithm
    34:48 – Trees as the ultimate utility player in this landscape
    38:00 – Features and data sets as the driver of edge in machine learning
    40:12 – Key considerations when working through time series
    42:05 – Pitfalls he has seen when folks try to build predictive market investing models
    44:36 – Getting started
    46:29 – Looking back at his career, the frontier vs. settled applications of machine learning he has implemented
    49:49 – Does interpretability matter in all of this?
    52:31 – How gradient descent fits into this whole picture

    What is Digital Life? with OpenAI Co-Founder & Chief Scientist Ilya Sutskever

    Each iteration of ChatGPT has demonstrated remarkable step-function capabilities. But what's next? Ilya Sutskever, Co-Founder & Chief Scientist at OpenAI, joins Sarah Guo and Elad Gil to discuss the origins of OpenAI as a capped-profit company, early emergent behaviors of GPT models, the token scarcity issue, the next frontiers of AI research, his argument for working on AI safety now, and the premise of Superalignment. Plus, how do we define digital life?

    Ilya Sutskever is Co-Founder and Chief Scientist of OpenAI. He leads research at OpenAI and is one of the architects behind the GPT models. He co-leads OpenAI's new "Superalignment" project, which aims to solve the alignment of superintelligences within four years. Prior to OpenAI, Ilya was a co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto.

    Show Links: Ilya Sutskever | LinkedIn

    Sign up for new podcasts every week. Email feedback to show@no-priors.com

    Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ilyasut

    Show Notes:
    (00:00) - Early Days of AI Research
    (06:51) - Origins of OpenAI & Capped-Profit Structure
    (13:46) - Emergent Behaviors of GPT Models
    (17:55) - Model Scale Over Time & Reliability
    (22:23) - Roles & Boundaries of Open-Source in the AI Ecosystem
    (28:22) - Comparing AI Systems to Biological & Human Intelligence
    (30:52) - Definition of Digital Life
    (32:59) - Superalignment & Creating Pro-Human AI
    (39:01) - Accelerating & Decelerating Forces

    How neural networks improve diagnostics


    Justin Anderson, CEO of Xpert AI, joins hosts Barb Darrow and Michael Hickins to talk about AI: how neural networks can improve healthcare diagnoses, how deep learning models are built and how they work, and the ethical implications of the use of AI assistants in the human brain. He also talks about how Oracle for Startups has helped get Xpert AI and other startups off the ground.


    Ep#016: Ensuring Fairness in Precision Medicine with Dr. Kadija Ferryman

    Precision medicine has the potential to transform healthcare in terms of diagnostics, treatment, and prevention of disease. But what does a future with a more personalized approach to medicine look like? Who will ultimately benefit from precision medicine?

    In this episode, we dive into these questions with Dr. Kadija Ferryman, a cultural anthropologist whose research centers on the ethical dimensions of health risk technologies, especially as they relate to racial disparities in health. Kadija is an Industry Assistant Professor at NYU’s Tandon School of Engineering. She is also an affiliate at the Data & Society Research Institute, where she led a research study on Fairness in Precision Medicine. This ground-breaking study examined the potential for biased and discriminatory outcomes in the emerging field of precision medicine.

    Together with Kadija, we talk about:

    ◦ BiDil: the first FDA-approved drug marketed to a single racial-ethnic group
    ◦ The promise and potential of precision medicine
    ◦ The 'Fairness in Precision Medicine' study
    ◦ The need for proactive ethical studies
    ◦ Bias in how we collect and examine electronic health data
    ◦ Bias in medical outcomes due to existing patterns of marginalization
    ◦ Human bias in AI and Machine Learning
    ◦ Implicit bias in healthcare professionals
    ◦ Establishing a more equitable medical future

    Get in touch with Kadija:
    ◦ Twitter: @KadijaFerryman
    ◦ Web: http://www.kadijaferryman.com
    ◦ Web (Data & Society): https://datasociety.net/people/ferryman-kadija/
    ◦ Web (CRDS): https://criticalracedigitalstudies.com


    Make sure to download the full show notes with our guest's bio, links to their most notable work, and our recommendations for further reads on the topic of the episode at pmedcast.com