
    human intelligence

    Explore "human intelligence" with insightful episodes like "Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality", "#344 – Noam Brown: AI vs Humans in Poker and Games of Strategic Negotiation", "Can we use maths to beat the robots?", "#302 – Richard Haier: IQ Tests, Human Intelligence, and Group Differences" and "177. Intimations of Creativity | Dr. Scott Barry Kaufman" from podcasts like ""Dwarkesh Podcast", "Lex Fridman Podcast", "More or Less: Behind the Stats", "Lex Fridman Podcast" and "The Jordan B. Peterson Podcast"" and more!

    Episodes (10)

    Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality


    For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

    We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

    If you want to get to the crux of the conversation, skip to 2:35:00 through 3:43:54, where we debate the main reasons I still think doom is unlikely.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

    Timestamps

    (0:00:00) - TIME article

    (0:09:06) - Are humans aligned?

    (0:37:35) - Large language models

    (1:07:15) - Can AIs help with alignment?

    (1:30:17) - Society’s response to AI

    (1:44:42) - Predictions (or lack thereof)

    (1:56:55) - Being Eliezer

    (2:13:06) - Orthogonality

    (2:35:00) - Could alignment be easier than we think?

    (3:02:15) - What will AIs want?

    (3:43:54) - Writing fiction & whether rationality helps you win



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    #344 – Noam Brown: AI vs Humans in Poker and Games of Strategic Negotiation

    Noam Brown is a research scientist at FAIR, Meta AI, and co-creator of AI systems that achieved superhuman performance in No-Limit Texas Hold'em and Diplomacy. Please support this podcast by checking out our sponsors:

    - True Classic Tees: https://trueclassictees.com/lex and use code LEX to get 25% off
    - Audible: https://audible.com/lex to get a 30-day free trial
    - InsideTracker: https://insidetracker.com/lex to get 20% off
    - ExpressVPN: https://expressvpn.com/lexpod to get 3 months free

    EPISODE LINKS:
    Noam's Twitter: https://twitter.com/polynoamial
    Noam's LinkedIn: https://www.linkedin.com/in/noam-brown-8b785b62/
    webDiplomacy: https://webdiplomacy.net/
    Noam's papers:
    - Superhuman AI for multiplayer poker: https://par.nsf.gov/servlets/purl/10119653
    - Superhuman AI for heads-up no-limit poker: https://par.nsf.gov/servlets/purl/10077416
    - Human-level play in the game of Diplomacy: https://www.science.org/doi/10.1126/science.ade9097

    PODCAST INFO:
    Podcast website: https://lexfridman.com/podcast
    Apple Podcasts: https://apple.co/2lwqZIr
    Spotify: https://spoti.fi/2nEwCF8
    RSS: https://lexfridman.com/feed/podcast/
    YouTube Full Episodes: https://youtube.com/lexfridman
    YouTube Clips: https://youtube.com/lexclips

    SUPPORT & CONNECT:
    - Check out the sponsors above; it's the best way to support this podcast
    - Support on Patreon: https://www.patreon.com/lexfridman
    - Twitter: https://twitter.com/lexfridman
    - Instagram: https://www.instagram.com/lexfridman
    - LinkedIn: https://www.linkedin.com/in/lexfridman
    - Facebook: https://www.facebook.com/lexfridman
    - Medium: https://medium.com/@lexfridman

    OUTLINE:
    Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    (00:00) - Introduction
    (05:37) - No Limit Texas Hold 'em
    (09:30) - Solving poker
    (22:40) - Poker vs Chess
    (29:18) - AI playing poker
    (1:02:46) - Heads-up vs Multi-way poker
    (1:13:37) - Greatest poker player of all time
    (1:17:10) - Diplomacy game
    (1:27:01) - AI negotiating with humans
    (2:09:26) - AI in geopolitics
    (2:14:11) - Human-like AI for games
    (2:20:12) - Ethics of AI
    (2:24:26) - AGI
    (2:28:25) - Advice to beginners

    Can we use maths to beat the robots?


    Daily advances in artificial intelligence may leave humans playing catch-up, but in at least one area, mathematics, we can still retain an edge. However, it'll require changes in how we think about and teach maths, and we may still have to leave the simple adding up to the computers. Junaid Mubeen, author of Mathematical Intelligence, tells Tim Harford what it'll take to stay ahead of the machines.

    Presenter: Tim Harford
    Producer: Jon Bithrey
    Sound Engineer: Rod Farquhar
    Production Coordinator: Jacqui Johnson
    Editor: Richard Vadon

    (Image: Digitally generated image of an artificial intelligence robot scanning data: Getty / Andriy Onufriyenko)

    #302 – Richard Haier: IQ Tests, Human Intelligence, and Group Differences

    Richard Haier is a psychologist specializing in the science of human intelligence. Please support this podcast by checking out our sponsors:

    - Calm: https://calm.com/lex to get 40% off
    - Linode: https://linode.com/lex to get $100 free credit
    - BiOptimizers: http://www.magbreakthrough.com/lex to get 10% off
    - SimpliSafe: https://simplisafe.com/lex and use code LEX
    - MasterClass: https://masterclass.com/lex to get 15% off

    EPISODE LINKS:
    Richard's Twitter: https://twitter.com/rjhaier
    Richard's Website: https://richardhaier.com/
    Documents & Articles:
    1. Child IQ and survival to 79: https://ncbi.nlm.nih.gov/pmc/articles/PMC5491698/
    2. Study of Mathematically Precocious Youth: https://my.vanderbilt.edu/smpy/files/2013/02/DoingPsychScience2006.pdf
    Books:
    1. The Neuroscience of Intelligence: https://amzn.to/3n50DcC
    2. The Book of Five Rings: https://amzn.to/3y4Xcc6
    3. The Rise and Fall of the Third Reich: https://amzn.to/3zPAW7q
    4. Flowers for Algernon: https://amzn.to/3OfRKZS
    5. The Bell Curve: https://amzn.to/3Ng4RJe
    6. The Mismeasure of Man: https://amzn.to/3N9IkxB
    7. Human Diversity: https://amzn.to/3O7Trsc
    8. Facing Reality: https://amzn.to/3bfzqkX

    PODCAST INFO:
    Podcast website: https://lexfridman.com/podcast
    Apple Podcasts: https://apple.co/2lwqZIr
    Spotify: https://spoti.fi/2nEwCF8
    RSS: https://lexfridman.com/feed/podcast/
    YouTube Full Episodes: https://youtube.com/lexfridman
    YouTube Clips: https://youtube.com/lexclips

    SUPPORT & CONNECT:
    - Check out the sponsors above; it's the best way to support this podcast
    - Support on Patreon: https://www.patreon.com/lexfridman
    - Twitter: https://twitter.com/lexfridman
    - Instagram: https://www.instagram.com/lexfridman
    - LinkedIn: https://www.linkedin.com/in/lexfridman
    - Facebook: https://www.facebook.com/lexfridman
    - Medium: https://medium.com/@lexfridman

    OUTLINE:
    Here are the timestamps for the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    (00:00) - Introduction
    (08:06) - Measuring human intelligence
    (22:34) - IQ tests
    (45:23) - College entrance exams
    (53:59) - Genetics
    (59:58) - Enhancing intelligence
    (1:07:27) - The Bell Curve
    (1:19:58) - Race differences
    (1:39:11) - Bell curve criticisms
    (1:48:21) - Intelligence and life success
    (1:57:57) - Flynn effect
    (2:02:49) - Nature vs nurture
    (2:29:42) - Testing artificial intelligence
    (2:41:46) - Advice
    (2:45:53) - Mortality

    177. Intimations of Creativity | Dr. Scott Barry Kaufman


    This episode was recorded on April 13th, 2021


    In Season 4, Episode 31 of the Jordan Peterson Podcast, Jordan is joined by Dr. Scott Barry Kaufman, a cognitive scientist exploring the limits of human potential. He hosts the very popular podcast 'The Psychology Podcast' and is an author, editor, and co-editor of nine books, including his newest, 'Transcend: The New Science of Self-Actualization'.


    Dr. Kaufman and Jordan discussed cognitive science, behavioral research, and humanism. They also touched on many topics, including IQ tests, personality traits, aggression in hierarchies, dating intelligence, self-actualization, long-form media, and much more.


    Find more from Scott Kaufman on his website, https://scottbarrykaufman.com/, in his books, and on his podcast, The Psychology Podcast.


    The Jordan B. Peterson Podcast can be found at https://www.jordanbpeterson.com/podcast/

    Uncle Bob - The Long Reach of Code


    Robert Martin (aka Uncle Bob) is a programming pioneer and the bestselling author of Clean Code. We discuss the prospect of automating programming, spotting and developing coding talent, occupational licensing, quotas, and the elusive sense of style.

    Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform.


    Listen to his fascinating talk on the future of programming: https://youtu.be/ecIWPzGEbFc 

    Read his blog about programming: http://blog.cleancoder.com/ 

    Buy his books on Amazon: https://www.amazon.com/kindle-dbs/ent... 

    Thanks for reading The Lunar Society! Subscribe to find out about future episodes!

    Timestamps

    (0:00) - Automating programming 

    (8:40) - Educating programmers (expertise, talent, university) 

    (21:45) - Spotting talent 

    (26:10) - Teaching kids 

    (29:31) - Prose and music sense in coding 

    (32:22) - Occupational licensing for programmers 

    (35:49) - Why is tech political 

    (39:28) - Quotas 

    (42:29) - Advice to 20 yr old



    Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe

    #106 – Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind

    Matt Botvinick is the Director of Neuroscience Research at DeepMind. He is a brilliant cross-disciplinary mind navigating effortlessly between cognitive psychology, computational neuroscience, and artificial intelligence. Support this podcast by supporting these sponsors:

    - The Jordan Harbinger Show: https://www.jordanharbinger.com/lex
    - Magic Spoon: https://magicspoon.com/lex and use code LEX at checkout

    If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

    OUTLINE:
    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    00:00 - Introduction
    03:29 - How much of the brain do we understand?
    14:26 - Psychology
    22:53 - The paradox of the human brain
    32:23 - Cognition is a function of the environment
    39:34 - Prefrontal cortex
    53:27 - Information processing in the brain
    1:00:11 - Meta-reinforcement learning
    1:15:18 - Dopamine
    1:19:01 - Neuroscience and AI research
    1:23:37 - Human side of AI
    1:39:56 - Dopamine and reinforcement learning
    1:53:07 - Can we create an AI that a human can love?

    Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI

    Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans.

    This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon. This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".

    Episode Links:
    AI: A Guide for Thinking Humans (book)

    Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

    00:00 - Introduction
    02:33 - The term "artificial intelligence"
    06:30 - Line between weak and strong AI
    12:46 - Why have people dreamed of creating AI?
    15:24 - Complex systems and intelligence
    18:38 - Why are we bad at predicting the future with regard to AI?
    22:05 - Are fundamental breakthroughs in AI needed?
    25:13 - Different AI communities
    31:28 - Copycat cognitive architecture
    36:51 - Concepts and analogies
    55:33 - Deep learning and the formation of concepts
    1:09:07 - Autonomous vehicles
    1:20:21 - Embodied AI and emotion
    1:25:01 - Fear of superintelligent AI
    1:36:14 - Good test for intelligence
    1:38:09 - What is complexity?
    1:43:09 - Santa Fe Institute
    1:47:34 - Douglas Hofstadter
    1:49:42 - Proudest moment

    Gary Marcus: Toward a Hybrid of Deep Learning and Symbolic AI

    Gary Marcus is a professor emeritus at NYU and the founder of Robust.AI and Geometric Intelligence, the latter a machine learning company acquired by Uber in 2016. He is the author of several books on natural and artificial intelligence, including his new book Rebooting AI: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and discussing the challenges the AI community must solve in order to achieve artificial general intelligence.

    This conversation is part of the Artificial Intelligence podcast. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on iTunes or support it on Patreon.

    Here's the outline with timestamps for this episode (on some players you can click on the timestamp to jump to that point in the episode):

    00:00 - Introduction
    01:37 - Singularity
    05:48 - Physical and psychological knowledge
    10:52 - Chess
    14:32 - Language vs physical world
    17:37 - What does AI look like 100 years from now
    21:28 - Flaws of the human mind
    25:27 - General intelligence
    28:25 - Limits of deep learning
    44:41 - Expert systems and symbol manipulation
    48:37 - Knowledge representation
    52:52 - Increasing compute power
    56:27 - How human children learn
    57:23 - Innate knowledge and learned knowledge
    1:06:43 - Good test of intelligence
    1:12:32 - Deep learning and symbol manipulation
    1:23:35 - Guitar

    Steven Pinker: AI in the Age of Reason

    Steven Pinker is a professor at Harvard and, before that, was a professor at MIT. He is the author of many books, several of which have had a big impact, for the better, on the way I see the world. In particular, The Better Angels of Our Nature and Enlightenment Now have instilled in me a sense of optimism grounded in data, science, and reason.

    The video version is available on YouTube. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.