    Podcast Summary

    • Find quality candidates efficiently with Indeed or manage finances effectively with Rocket Money
      Specialized platforms like Indeed for hiring and Rocket Money for personal finance can help streamline processes, save time, and lead to significant cost savings.

      When it comes to hiring or managing your personal finances, the best approach isn't to search aimlessly but to use specialized platforms like Indeed and Rocket Money. Indeed, with over 350 million monthly visitors, is a matching and hiring platform that helps you find quality candidates quickly and efficiently; it saves time while delivering higher-quality matches than other job sites. Rocket Money, on the other hand, is a personal finance app that helps you find and cancel unwanted subscriptions, monitor your spending, and lower your bills. With over 5 million users, it has saved its members an average of $720 a year and over $500 million in canceled subscriptions. Both platforms can help you streamline your hiring process or manage your finances more effectively. Additionally, Mindscape listeners can get a $75 sponsored job credit on Indeed and a discount on their purchase at Blue Nile using the promo code "audio." Ultimately, the key takeaway is that there are specialized platforms designed to help you solve complex problems, from hiring to personal finance, and using them can lead to significant time and cost savings.

    • Approximately correct and educable - Key concepts in Leslie Valiant's thought
      Computer scientist Leslie Valiant emphasizes the importance of being 'approximately correct' and 'educable' in a complex world, shifting focus from intelligence and knowledge to learning and adaptation, and viewing human uniqueness as our capacity to be educated.

      Computer scientist Leslie Valiant emphasizes the importance of being "approximately correct" and being "educable" in a complex world. Valiant, known for his deep thinking in theoretical computer science, encourages a shift in perspective from focusing on intelligence and knowledge to the ability to learn and adapt. He argues that human uniqueness lies in our capacity to be educated, which goes beyond simple learning. Valiant's research on learning as a computational process led him to believe that it is the most fundamental aspect of AI and the mind. He saw the need for a model of learning that could handle the vast amount of computation required, and his predictions proved correct with the rise of large language models. Valiant's ideas revolve around the twin concepts of approximate correctness and educability, making him a thought leader in understanding the complexities of learning and human uniqueness.

    • The origins and growth of Machine Learning
      Machine Learning gained momentum in the 1980s, with theorists focusing on formulating learning from examples and creating models like the Probably Approximately Correct (PAC) model. The PAC model and the experimental ML community's successes laid the foundation for current ML achievements.

      Machine Learning (ML) is the name of the academic field that studies what happens when machines learn. It is far broader than neural networks, which are just one family of algorithms among many used in different contexts, especially when dealing with large datasets. The field gained significant attention in the 1980s, when theorists worked on formulating learning from examples and proposed models like the Probably Approximately Correct (PAC) model, which captures how we generalize to future examples while keeping the effort moderate and the rewards reasonable. The experimental ML community grew in the mid-80s, comparing the efficacy of different algorithms on various datasets. Neural networks were not competitive initially because of their poor performance on small datasets, but they became more competitive as datasets grew larger. The PAC model and the experimental ML community's successes laid the foundation for current ML achievements. The phrase "probably approximately correct" can also be taken as a metaphor for the general epistemological goal of striving for accuracy in our understanding.
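
      As a rough formal sketch of what "probably approximately correct" means (this is the standard textbook statement of the guarantee, given here for orientation rather than quoted from the episode): a PAC learner must, with probability at least $1-\delta$, output a rule whose error on future examples is at most $\epsilon$, using an amount of data and computation that grows only polynomially in $1/\epsilon$ and $1/\delta$. For a finite class $H$ of candidate rules, a learner that simply fits the training data achieves this with roughly

          $$ m \;\geq\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) $$

      labeled examples, which is the sense in which the effort stays moderate while the rewards stay reasonable.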

    • AI's predictions are not absolute
      While AI can make accurate predictions on average, its predictions for individual cases can be weak due to lack of complete theory. We learn and improve through many examples, even without full understanding of underlying rules.

      While AI and machine learning show promise for making accurate predictions on average, their predictions for individual cases are not absolute and can be weak. This is because much of what we do, including AI, operates on a "theoryless" basis, meaning we don't have a complete understanding of the rules or theories behind every action or prediction. However, this lack of complete theory doesn't mean that AI or human actions are not effective, predictable, or useful. In fact, we learn and improve through a process of many examples, just as a chatbot or language learning app can make progress despite not fully understanding the underlying rules. This idea relates to philosophical concepts like induction, where we make predictions based on patterns observed in the past, even though we can't know for certain that the next instance will follow the same pattern. The field of computer science has made significant strides in formalizing and addressing the problem of induction, providing a meaningful solution in the context of AI and machine learning. Overall, it's important to remember that AI is a powerful tool, but it's not infallible, and its predictions should be used with caution, especially in safety-critical applications.

    • Understanding the Efficiency and Effectiveness of Machine Learning Calculations
      PAC learning is a concept that focuses on the efficiency and quantifiable effectiveness of machine learning calculations, and is foundational knowledge for anyone working in AI.

      PAC learning, or Probably Approximately Correct learning, is an approach to machine learning that focuses on the efficiency and quantifiable effectiveness of calculations. It's important to understand this concept because not everything in the world is easy to learn. Some things, like children learning language, are relatively easy. Others, like understanding physical laws of the universe, are much more difficult. The spectrum of easy to learn and hard to learn is clear, and we use both in different ways. For example, cryptographic functions are designed to be hard to learn, while machine learning algorithms are used to discover rules from labeled data. These rules can then be used to classify new data. The process of discovering these rules is not predetermined, but rather depends on the learning algorithm used. PAC learning was first explored in the 1980s and has since become foundational knowledge for anyone working in AI. It's a way of understanding how machine learning algorithms improve with more data and how they generalize from examples to new data. Currently, there's debate about the scalability of machine learning and its future intelligence abilities, but the basic principles of PAC learning remain an important framework for understanding the field.
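
      To make the "discover a rule from labeled data, then classify new data" loop concrete, here is a minimal, self-contained sketch in Python. It is an illustration under assumed details that do not come from the episode: a hidden one-dimensional threshold rule and a uniform data distribution. The learner only ever sees labeled examples, outputs the simplest rule consistent with them, and its error on fresh data shrinks as the number of examples grows, which is exactly the behavior the PAC model quantifies.

        # Minimal PAC-style illustration (assumed setup: a hidden 1-D threshold concept).
        # The true rule labels x as positive when x >= THETA. The learner never sees THETA;
        # it only sees labeled examples and outputs the smallest consistent threshold.
        import random

        THETA = 0.37   # hidden "true" rule the learner must approximate

        def sample(n):
            """Draw n labeled examples from the uniform distribution on [0, 1)."""
            xs = [random.random() for _ in range(n)]
            return [(x, x >= THETA) for x in xs]

        def learn(examples):
            """Consistent learner: the smallest threshold that agrees with every example."""
            positives = [x for x, label in examples if label]
            return min(positives) if positives else 1.0

        def error(theta_hat, trials=100_000):
            """Estimate the probability that the learned rule disagrees with the true rule."""
            mistakes = sum((x >= theta_hat) != (x >= THETA)
                           for x in (random.random() for _ in range(trials)))
            return mistakes / trials

        if __name__ == "__main__":
            # More data -> with high probability, a more accurate (approximately correct) rule.
            for m in (10, 100, 1000):
                theta_hat = learn(sample(m))
                print(f"m={m:5d}  learned threshold={theta_hat:.3f}  estimated error={error(theta_hat):.3f}")

      Running the sketch typically shows the estimated error falling by roughly an order of magnitude each time the number of examples grows tenfold, without the learner ever being told the underlying rule.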

    • Exploring the future of AI with multiple learning boxes
      The future of AI may involve a system with multiple learning boxes, each with different capabilities and reasoning abilities, to simulate more human-like reasoning. These boxes may recognize different concepts or words, and collaborate to understand complex tasks.

      While large language models have made significant strides in recent years, they still make mistakes and their capabilities are limited to predicting the next token. The future of AI may involve a system with multiple learning boxes, each with different capabilities and reasoning abilities, to simulate more human-like reasoning. These boxes may recognize different concepts or words, and collaborate to understand complex tasks. However, it's important to remember that these models are not truly thinking like humans, but rather mimicking human speech based on the data they've been given. The quality of these models depends heavily on the data they're trained on, and the human effort put into curating it. While there's ongoing debate about whether to focus on getting more data or pushing algorithms towards cognition or reasoning, it's possible to explore different approaches to make AI more human-like in its thought processes.

    • Marrying learning and reasoning in AI
      To make AI more reliable and authoritative, we need to integrate reasoning capabilities with learning, starting with limited domains of knowledge.

      The future of AI lies in the marriage of learning and reasoning, but with a reasoning process that is compatible with learning. Classical logic, which is deterministic and unforgiving to errors, has proven to be insufficient for AI. Machine learning, on the other hand, is more forgiving to inconsistencies and errors. To make AI more reliable and authoritative, we need to create systems that conform to our commonsense understanding of reasoning. This means that reasoning capabilities can be integrated with learning, and it is likely that this will be done for more limited domains of knowledge initially. Another application of this integration of learning and reasoning is in the field of theoretical physics. Large language models have been suggested as a potential tool for theoretical physicists to discover new concepts by stringing together concepts in new ways. However, it is important to note that machine learning on concepts in theoretical physics does not have to be presented as text and sentences and paragraphs. It is the representation of knowledge that matters. Moreover, it is essential to remember that AI research should not be limited to computer science alone. It is crucial to consider the wider implications and draw larger lessons. As the speaker has demonstrated in their books, it is legitimate to go beyond the technical aspects and explore the philosophical and ethical questions raised by AI.

    • Understanding human learning for AI development
      Nature, including human cognition and Darwinian evolution, can be seen as an approximately correct learning process. This learning process drives adaptation and survival through trial and error. Human learning from examples is fundamental and can inform AI development.

      Nature, including the process of Darwinian evolution, can be seen as an approximately correct learning process. This means that survival in the natural world acts as the feedback mechanism, driving the evolution of species through a process of trial and error. The idea that humans can create machines capable of replicating human abilities is not new, but understanding what it is that humans do that machines can learn from has been a major challenge in the development of artificial intelligence. The speaker suggests that humans learn from examples and that this is a fundamental aspect of our cognition. In the context of evolution, this learning process can be seen as the drive towards reproductive fitness, with the genome acting as the learning algorithm and mutations providing the means of exploration and adaptation. While the details of the evolutionary process are not fully understood, the speaker argues that it must involve some form of approximately correct learning, as it is the most effective way for a species to adapt to its environment and survive. This perspective provides a fascinating connection between the fields of computer science and biology, highlighting the potential for insights from one discipline to inform and advance the other.

    • The three aspects of human educability: reasoning, chaining knowledge, and learning from others
      Human educability goes beyond learning from experience and adapting to new environments. It includes the ability to reason, chain together learned knowledge, and learn from others, setting us apart from other species and driving the rapid development of human civilization.

      The human ability to be educable goes beyond learning from experience and adapting to new environments. It involves the capacity to reason and chain together learned knowledge, as well as the ability to learn from the experiences of others through explicit instruction. These three aspects of educability, when combined, set humans apart from other species and have contributed to the rapid development of human civilization. However, there are still unsolved questions about how to ensure consistent and principled training of these interconnected pieces of knowledge. The evolution of human educability remains an intriguing and unsolved mystery in the realm of science.

    • The capacity to learn and apply new knowledge
      Human educability is the ability to learn and apply new knowledge, which is essential for growth, understanding, and adapting to new situations, and is a defining characteristic of human intelligence.

      The ability to learn and apply new knowledge, often through a combination of personal experience and instruction, is a crucial aspect of intelligence and educability. This chaining of knowledge allows us to plan, reason, and even imagine future situations, making us unique in the animal kingdom. Unlike the vague concept of intelligence, educability is explicitly defined as the capacity to learn and apply new knowledge, regardless of its source or truth. This capacity is essential for growth, understanding, and adapting to new situations. It's the foundation of our ability to learn from lectures, podcasts, and even fictional stories. It's what allows us to test theories, draw implications, and make predictions. It's the engine that drives our mental time travel and our ability to imagine and plan for the future. It's a defining characteristic of human intelligence and a key factor in our ability to adapt and thrive in a constantly changing world.

    • Understanding Intelligence and Educability: Elusive Concepts in Human Development
      Research suggests a connection between intelligence and educability, but definitive definitions and origins remain unclear. The ability to transfer knowledge is crucial for human culture, but its origins and measurability are uncertain.

      While the concept of intelligence and educability are important in understanding human development and civilization, they remain elusive and open to interpretation. The correlation between performance in different areas suggests a connection between various skills, but a definitive definition of intelligence or educability remains unclear. The spread of human culture relies on the ability to transfer explicit knowledge, but the origins and timeline of this capability are uncertain. It's possible that it existed even before humans, and the capacity to be educated varies from person to person. The concept of educability, if it has meaning, should be measurable, and research could focus on testing new information gained within a specific time frame. Overall, while intelligence and educability are significant concepts, more research is needed to fully understand their implications and potential for enhancement.

    • Measuring educability in a rapidly changing world
      Recognizing and addressing inherent human weaknesses in evaluating theories and trusting sources is crucial for effective learning in the information age.

      The concept of educability, or the ability to learn and adapt to new information, is crucial for leaders in today's rapidly changing world. However, measuring educability poses challenges, as it's difficult to determine if new information is truly new or if the person being tested has already been exposed to similar ideas. Additionally, humans have an inherent weakness in evaluating theories and are easily fooled by false information. This weakness is not new but rather an inherent human trait that needs to be addressed at the human level, not just through technology. The social aspect of learning, where we trust other humans more than other species, can be both a strength and a weakness. While it enables us to learn faster through trusted teachers, it also makes us more susceptible to accepting information from favored people without proper verification. Therefore, it's essential to recognize and address these inherent weaknesses to navigate the information age effectively.

    • The role of trust in educability
      Exploring educability requires a focus on trust, the importance of a scientific approach, and individual growth through better thinking and understanding.

      The concept of educability, or the ability to be educated, raises new questions about how we approach education and learning. According to the discussion, people have preferences for who they trust when given information, and this preference can impact their educability. The current education system could benefit from a more scientific approach, with a focus on empirical testing and theoretical understanding. The idea of improving individual educability through better thinking and scientific understanding is also worth exploring. Additionally, the idea of education as a model of computation, as discussed in the book, suggests that there may be a deeper scientific understanding to be gained from computational models in education. Overall, the conversation highlights the importance of considering the role of trust, the need for a more scientific approach to education, and the potential for individual growth in the realm of educability.

    • Understanding Educability in AI
      Educability in AI is vital for robustness and desirable outcomes. However, deciding the content of education and managing risks are challenges.

      The concept of educability, as discussed in the chapter, is crucial to understanding the scientific approach to creating intelligent machines. It's an attempt to justify that the model has robustness and can produce similar results even if expressed differently. However, being educable, whether for machines or humans, comes with its challenges. Someone must decide the content of the education, and if it's not beneficial, it won't produce desirable outcomes. The same applies to machines, where the training set is crucial in current pure learning systems. But if machines are made educable, more decisions need to be made about the knowledge they're given, and the results will vary based on that knowledge. The speaker also addressed concerns about AI risks, stating that while there are dangers, they can be managed with scientific understanding and cautious implementation. The biggest impact on human lives from AI advances may be the creation of new opportunities and solutions to complex problems, but it's essential to ensure that the knowledge and values we impart to AI are beneficial and ethical.

    • Embracing the Future: Coexistence of Humans and Machines
      As technology, particularly AI and machines, becomes more integrated into our economy and lives, it's crucial to accept and adapt to the changes rather than resist them. Embrace the future and continue living our lives to the fullest.

      Our economy and lives are likely to become more intertwined with technology, specifically artificial intelligence and machines, leading to a mixed economy where both humans and machines coexist. This evolution is inevitable and we should not be alarmed by it. Instead, we should focus on what we can control and adapt to the changes as they come. Leslie Valiant, a renowned computer scientist, emphasized this perspective during his conversation on the Mindscape podcast. He highlighted that computers will increasingly influence various aspects of our lives, but it's essential to accept this trend rather than resist it. Valiant also reminded us that we should not be upset by the changes technology brings, but instead, embrace them and continue living our lives to the fullest. Ultimately, the future holds a blend of human and machine capabilities, and we must be prepared to navigate this new landscape.

    Recent Episodes from Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

    276 | Gavin Schmidt on Measuring, Predicting, and Protecting Our Climate

    The Earth's climate keeps changing, largely due to the effects of human activity, and we haven't been doing enough to slow things down. Indeed, over the past year, global temperatures have been higher than ever, and higher than most climate models have predicted. Many of you have probably seen plots like this. Today's guest, Gavin Schmidt, has been a leader in measuring the variations in Earth's climate, modeling its likely future trajectory, and working to get the word out. We talk about the current state of the art, and what to expect for the future.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/20/276-gavin-schmidt-on-measuring-predicting-and-protecting-our-climate/

    Gavin Schmidt received his Ph.D. in applied mathematics from University College London. He is currently Director of NASA's Goddard Institute for Space Studies, and an affiliate of the Center for Climate Systems Research at Columbia University. His research involves both measuring and modeling climate variability. Among his awards are the inaugural Climate Communications Prize of the American Geophysical Union. He is a cofounder of the RealClimate blog.


    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    275 | Solo: Quantum Fields, Particles, Forces, and Symmetries

    Publication week! Say hello to Quanta and Fields, the second volume of the planned three-volume series The Biggest Ideas in the Universe. This volume covers quantum physics generally, but focuses especially on the wonders of quantum field theory. To celebrate, this solo podcast talks about some of the big ideas that make QFT so compelling: how quantized fields produce particles, how gauge symmetries lead to forces of nature, and how those forces can manifest in different phases, including Higgs and confinement.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/13/275-solo-quantum-fields-particles-forces-and-symmetries/

    Support Mindscape on Patreon.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    AMA | May 2024

    Welcome to the May 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/05/06/ama-may-2024/

    Support Mindscape on Patreon.

    Here is the memorial to Dan Dennett at Ars Technica.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    274 | Gizem Gumuskaya on Building Robots from Human Cells

    Modern biology is advancing by leaps and bounds, not only in understanding how organisms work, but in learning how to modify them in interesting ways. One exciting frontier is the study of tiny "robots" created from living molecules and cells, rather than metal and plastic. Gizem Gumuskaya, who works with previous guest Michael Levin, has created anthrobots, a new kind of structure made from living human cells. We talk about how that works, what they can do, and what future developments might bring.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/29/274-gizem-gumuskaya-on-building-robots-from-human-cells/

    Support Mindscape on Patreon.

    Gizem Gumuskaya received her Ph.D. from Tufts University and the Harvard Wyss Institute for Biologically-Inspired Engineering. She is currently a postdoctoral researcher at Tufts University. She previously received a dual master's degree in Architecture and Synthetic Biology from MIT.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    273 | Stefanos Geroulanos on the Invention of Prehistory

    Humanity itself might be the hardest thing for scientists to study fairly and accurately. Not only do we come to the subject with certain inevitable preconceptions, but it's hard to resist the temptation to find scientific justifications for the stories we'd like to tell about ourselves. In his new book, The Invention of Prehistory, Stefanos Geroulanos looks at the ways that we have used -- and continue to use -- supposedly-scientific tales of prehistoric humanity to bolster whatever cultural, social, and political purposes we have at the moment.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/22/273-stefanos-geroulanos-on-the-invention-of-prehistory/

    Support Mindscape on Patreon.

    Stefanos Geroulanos received his Ph.D. in humanities from Johns Hopkins. He is currently director of the Remarque Institute and a professor of history at New York University. He is the author and editor of a number of books on European intellectual history. He serves as a Co-Executive Editor of the Journal of the History of Ideas.


    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    272 | Leslie Valiant on Learning and Educability in Computers and People

    Science is enabled by the fact that the natural world exhibits predictability and regularity, at least to some extent. Scientists collect data about what happens in the world, then try to suggest "laws" that capture many phenomena in simple rules. A small irony is that, while we are looking for nice compact rules, there aren't really nice compact rules about how to go about doing that. Today's guest, Leslie Valiant, has been a pioneer in understanding how computers can and do learn things about the world. And in his new book, The Importance of Being Educable, he pinpoints this ability to learn new things as the crucial feature that distinguishes us as human beings. We talk about where that capability came from and what its role is as artificial intelligence becomes ever more prevalent.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/15/272-leslie-valiant-on-learning-and-educability-in-computers-and-people/

    Support Mindscape on Patreon.

    Leslie Valiant received his Ph.D. in computer science from Warwick University. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. He has been awarded a Guggenheim Fellowship, the Knuth Prize, and the Turing Award, and he is a member of the National Academy of Sciences as well as a Fellow of the Royal Society and the American Association for the Advancement of Science. He is the pioneer of "Probably Approximately Correct" learning, which he wrote about in a book of the same name.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    AMA | April 2024

    Welcome to the April 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/04/08/ama-april-2024/

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    271 | Claudia de Rham on Modifying General Relativity

    Einstein's theory of general relativity has been our best understanding of gravity for over a century, withstanding a variety of experimental challenges of ever-increasing precision. But we have to be open to the possibility that general relativity -- even at the classical level, aside from any questions of quantum gravity -- isn't the right theory of gravity. Such speculation is motivated by cosmology, where we have a good model of the universe but one with a number of loose ends. Claudia de Rham has been a leader in exploring how gravity could be modified in cosmologically interesting ways, and we discuss the current state of the art as well as future prospects.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/01/271-claudia-de-rham-on-modifying-general-relativity/

    Support Mindscape on Patreon.

    Claudia de Rham received her Ph.D. in physics from the University of Cambridge. She is currently a professor of physics and deputy department head at Imperial College, London. She is a Simons Foundation Investigator, winner of the Blavatnik Award, and a member of the American Academy of Arts and Sciences. Her new book is The Beauty of Falling: A Life in Pursuit of Gravity.


    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    270 | Solo: The Coming Transition in How Humanity Lives

    Technology is changing the world, in good and bad ways. Artificial intelligence, internet connectivity, biological engineering, and climate change are dramatically altering the parameters of human life. What can we say about how this will extend into the future? Will the pace of change level off, or smoothly continue, or hit a singularity in a finite time? In this informal solo episode, I think through what I believe will be some of the major forces shaping how human life will change over the decades to come, exploring the very real possibility that we will experience a dramatic phase transition into a new kind of equilibrium.

    Blog post with transcript and links to additional resources: https://www.preposterousuniverse.com/podcast/2024/03/25/270-solo-the-coming-transition-in-how-humanity-lives/

    Support Mindscape on Patreon.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    269 | Sahar Heydari Fard on Complexity, Justice, and Social Dynamics

    When it comes to social change, two questions immediately present themselves: What kind of change do we want to see happen? And, how do we bring it about? These questions are distinct but related; there's not much point in spending all of our time wanting change that won't possibly happen, or working for change that wouldn't actually be good. Addressing such issues lies at the intersection of philosophy, political science, and social dynamics. Sahar Heydari Fard looks at all of these issues through the lens of complex systems theory, to better understand how the world works and how it might be improved.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/03/18/269-sahar-heydari-fard-on-complexity-justice-and-social-dynamics/

    Support Mindscape on Patreon.

    Sahar Heydari Fard received a Masters in applied economics and a Ph.D. in philosophy from the University of Cincinnati. She is currently an assistant professor in philosophy at the Ohio State University. Her research lies at the intersection of social and behavioral sciences, social and political philosophy, and ethics, using tools from complex systems theory.


    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    Related Episodes

    43 | Matthew Luczy on the Pleasures of Wine

    Some people never drink wine; for others, it's an indispensable part of an enjoyable meal. Whatever your personal feelings might be, wine seems to exhibit a degree of complexity and nuance that can be intimidating to the non-expert. Where does that complexity come from, and how can we best approach wine? To answer these questions, we talk to Matthew Luczy, sommelier and wine director at Mélisse, one of the top fine-dining restaurants in the Los Angeles area. Matthew insisted that we actually drink wine rather than just talking about it, so drink we do. Therefore, in a Mindscape first, I recruited a third party to join us and add her own impressions of the tasting: science writer Jennifer Ouellette, who I knew would be available because we're married to each other. We talk about what makes different wines distinct, the effects of aging, and what's the right bottle to have with pizza. You are free to drink along at home, with exactly these wines or some other choices, but I think the podcast will be enjoyable whether you do or not.

    Support Mindscape on Patreon or Paypal.

    Matthew Luczy is a Certified Sommelier as judged by the Court of Master Sommeliers. He currently works as the Wine Director at Mélisse in Santa Monica, California. He is also active in photography and music.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    25 | David Chalmers on Consciousness, the Hard Problem, and Living in a Simulation

    The "Easy Problems" of consciousness have to do with how the brain takes in information, thinks about it, and turns it into action. The "Hard Problem," on the other hand, is the task of explaining our individual, subjective, first-person experiences of the world. What is it like to be me, rather than someone else? Everyone agrees that the Easy Problems are hard; some people think the Hard Problem is almost impossible, while others think it's pretty easy. Today's guest, David Chalmers, is arguably the leading philosopher of consciousness working today, and the one who coined the phrase "the Hard Problem," as well as proposing the philosophical zombie thought experiment. Recently he has been taking seriously the notion of panpsychism. We talk about these knotty issues (about which we deeply disagree), but also spend some time on the possibility that we live in a computer simulation. Would simulated lives be "real"? (There we agree -- yes they would.) David Chalmers got his Ph.D. from Indiana University working under Douglas Hoftstadter. He is currently University Professor of Philosophy and Neural Science at New York University and co-director of the Center for Mind, Brain, and Consciousness. He is a fellow of the Australian Academy of Humanities, the Academy of Social Sciences in Australia, and the American Academy of Arts and Sciences. Among his books are The Conscious Mind: In Search of a Fundamental Theory, The Character of Consciousness, and Constructing the World. He and David Bourget founded the PhilPapers project. Web site NYU Faculty page Wikipedia page PhilPapers page Amazon author page NYU Center for Mind, Brain, and Consciousness TED talk: How do you explain consciousness? See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    18 | Clifford Johnson on What's So Great About Superstring Theory

    String theory is a speculative and highly technical proposal for uniting the known forces of nature, including gravity, under a single quantum-mechanical framework. This doesn't seem like a recipe for creating a lightning rod of controversy, but somehow string theory has become just that. To get to the bottom of why anyone (indeed, a substantial majority of experts in the field) would think that replacing particles with little loops of string was a promising way forward for theoretical physics, I spoke with expert string theorist Clifford Johnson. We talk about the road string theory has taken from a tentative proposal dealing with the strong interactions, through a number of revolutions, to the point it's at today. Also, where all those extra dimensions might have gone. At the end we touch on Clifford's latest project, a graphic novel that he wrote and illustrated about how science is done.

    Clifford Johnson is a Professor of Physics at the University of Southern California. He received his Ph.D. in mathematics and physics from the University of Southampton. His research area is theoretical physics, focusing on string theory and quantum field theory. He was awarded the Maxwell Medal from the Institute of Physics. Johnson is the author of the technical monograph D-Branes, as well as the graphic novel The Dialogues.

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    AMA | November 2021

    Welcome to the November 2021 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). I take the large number of questions asked by Patreons, whittle them down to a more manageable size — based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good — and sometimes group them together if they are about a similar topic. Enjoy!

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    AMA | March 2024

    Welcome to the March 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic.

    Big congrats this month to Ryan Funakoshi, winner of this year's Mindscape Big Picture Scholarship! And enormous, heartfelt thanks to everyone who contributed. We're going to keep doing this in years to come.

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/03/11/ama-march-2024/

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.