    Podcast Summary

    • Discovering High-Quality Candidates and Managing Personal Finances with Technology: Leverage platforms like Indeed for effective hiring and use apps like Rocket Money to manage personal finances, saving an average of $720 per year.

      While we're all driven to seek improvement, the most effective way to find the best candidates for hiring isn't through active searching. Instead, relying on a platform like Indeed, which has over 350 million monthly visitors and a matching engine, can streamline the process and deliver high-quality matches. Meanwhile, managing personal finances can be a challenge given the many subscriptions we sign up for. Rocket Money, a personal finance app, can help identify and cancel unwanted subscriptions, monitor spending, and lower bills, saving users an average of $720 per year. In the realm of artificial intelligence, rapid progress raises many questions, including how AI programs think and whether they can be considered sentient. Raphaël Millière, our guest on the podcast, is a scholar in society and neuroscience at Columbia who works on the philosophy of AI, cognitive science, and mind. Together, we'll explore the intricacies of backpropagation, the potential for AI consciousness, and the ethical implications of AI rights.

    • Understanding AI: Neural Networks, Machine Learning, and Deep Learning. AI encompasses machine learning, neural networks, and deep learning: neural-network approaches are a form of machine learning, and deep learning is machine learning with deep, many-layered neural networks.

      Artificial intelligence (AI) is a broad term for systems designed to exhibit intelligent behavior, and it encompasses approaches such as machine learning, neural networks, and deep learning. These approaches grew out of two different research paradigms: the symbolic, rule-based approach and the biology-inspired, data-driven approach. Neural networks, central to modern machine learning, process information from inputs to outputs through layers of interconnected nodes, and are often described as "black boxes" because their internal workings are hard to interpret. Deep learning is the variant of machine learning that uses deep neural networks, with multiple hidden layers that process information hierarchically. Deep learning has seen striking success since the 2010s, particularly in computer vision and natural language processing, leading to innovations like the transformer architecture used in modern language models and chatbots.
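      To make the idea of layered processing concrete, here is a minimal sketch (our own illustration, not from the episode) of a tiny feedforward network in Python: inputs flow through one hidden layer of interconnected nodes to produce output scores, with randomly initialized weights standing in for what training would adjust.

```python
# A minimal sketch of a tiny feedforward neural network with one hidden layer,
# showing how information flows from inputs to outputs through weighted
# connections between nodes. Illustrative only, not any specific model.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Randomly initialized weights -- training would adjust these values.
W1 = rng.normal(size=(4, 8))   # input (4 features) -> hidden layer (8 nodes)
W2 = rng.normal(size=(8, 3))   # hidden layer -> output (3 scores)

def forward(x):
    hidden = relu(x @ W1)      # each hidden node combines all inputs
    return hidden @ W2         # output scores combine the hidden nodes

x = rng.normal(size=4)         # a toy input vector
print(forward(x))              # three output scores

# "Deep" learning stacks more hidden layers of this kind, so later layers
# can build on features computed by earlier ones.
```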

    • Transformer Neural Networks: A Revolution in AI. Transformer neural networks, invented in 2017, revolutionized AI by enabling large language models to learn from vast text data and perform tasks without explicit training, leading to chatbots and advancements in natural language processing.

      The transformer neural network architecture, introduced in 2017, revolutionized artificial intelligence by enabling large language models like GPT-3 to learn from vast amounts of text and perform many tasks without task-specific training. Chatbots such as ChatGPT are built by fine-tuning these models, using human evaluation and reinforcement learning, to be more helpful, less harmful, and more truthful. It's important to note, however, that artificial neural networks are not the same as biological brains: an artificial neuron is far simpler than a real one, and these systems are still a long way from understanding or replicating the complexity of the human brain. Even so, the architecture has enabled models that can generate fluent text, answer questions, translate languages, and write stories or poems, all on the basis of their pretraining data. Its efficiency and scalability have made it the go-to design for many AI applications since 2017, with most progress coming from engineering improvements rather than new architectures, and models like GPT-3 and ChatGPT have shown remarkable capabilities across many tasks, industries, and applications.
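      As an illustration of the core operation inside a transformer layer, the following hedged sketch implements scaled dot-product self-attention in plain numpy; it omits multiple heads, positional encodings, and the feedforward blocks of the full architecture.

```python
# A minimal sketch of scaled dot-product self-attention: every token
# position attends to every other position. Illustrative only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # similarity of each pair of tokens
    weights = softmax(scores, axis=-1)        # attention weights per token
    return weights @ V                        # each token mixes the others' values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                  # 5 tokens, 16-dimensional embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)    # (5, 16): one updated vector per token
```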

    • Large language models learn through next token prediction: Large language models, like GPT-3 and potential future models, learn by predicting the next token based on context, improve through backpropagation, and are guided by their architecture.

      Large language models have a vast number of parameters: GPT-3 has 175 billion, and the rumored GPT-4 is said to have on the order of 1 trillion. For comparison, the human brain is estimated to have on the order of 100 trillion synapses. These models start with randomly initialized weights, which are adjusted during training to improve performance. They learn through next token prediction, trying to predict the statistically most likely next token given the context of the text, and backpropagation is the procedure that adjusts the weights in response to prediction errors. The models are not complete blank slates, however: their architecture is not random, and it provides a structure that guides learning. The recent excitement in deep learning has centered on these language models, which, despite being trained only on next-token prediction, can generate coherent text. They process tokens rather than words; tokens can be subdivisions of a single word, which lets a model handle and generate text even when some words are outside its vocabulary.
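      The next-token-prediction objective itself can be illustrated without any neural network at all. The toy sketch below builds a simple bigram table from a made-up corpus and predicts the statistically most likely next token; real models learn a far richer version of this mapping, with billions of weights adjusted by backpropagation.

```python
# A toy illustration of next-token prediction using simple counts: given the
# previous token, estimate the probability of each possible next token.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat saw the dog".split()

# Count which token follows which (a bigram model of the training text).
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def predict_next(token):
    counts = next_counts[token]
    total = sum(counts.values())
    # Probability distribution over possible next tokens.
    return {t: c / total for t, c in counts.most_common()}

print(predict_next("the"))   # {'cat': 0.5, 'mat': 0.25, 'dog': 0.25}
```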

    • Neural networks' capabilities go beyond next word prediction: Neural networks like GPT-3 are more complex than just next-word predictors, as they're trained on vast text corpora and adapt to various linguistic contexts.

      While neural networks like GPT-3 primarily function by predicting the next word in a text, it's a mistake to either underestimate or overestimate their capabilities based on this single fact. These models are trained on vast corpora of text, enabling them to adapt to a wide range of linguistic contexts, and in order to make accurate next-word predictions across those contexts they likely need to acquire sophisticated capacities that aren't captured by describing the training objective alone. An analogy can be drawn to evolution, which optimizes a function such as maximizing inclusive genetic fitness: it would be misleading to claim that all we're doing when communicating is maximizing our inclusive genetic fitness, since this doesn't capture the full extent of our reasoning, thinking, and intelligent competence. Similarly, next-word prediction is the objective these models are trained on, not an exhaustive description of what they can do. Intelligence and consciousness are complex, multifaceted terms, and their meanings can vary. While these models don't inherently possess consciousness or human-like traits, their advanced language processing can give the impression of self-awareness. It's essential to recognize both the limitations and the complexity of these models: they are trained on next-word prediction, yet they also learn to adapt to a wide variety of linguistic contexts.

    • Understanding the distinction between consciousness and intelligence in AI: Recognize the limitations of AI models and evaluate their cognitive capacities on a case-by-case basis, distinguishing between different aspects of intelligence like reasoning, beliefs, desires, and language understanding.

      The distinction between consciousness and intelligence is crucial in the context of artificial intelligence. While these models may exhibit capabilities that resemble human cognitive functions like reasoning, they might not fully match the complexity and depth of human reasoning. Anthropocentric bias, which arises from comparing these models to human intelligence, can lead either to an inflationary interpretation of their capabilities or to a deflationary view on which they merely perform next-word prediction; these two mistakes are not mutually exclusive. The impressive behavior of these models, such as generating human-like text or solving complex problems, can easily lead us to attribute psychological properties to them. That is a natural response given our evolutionary history and the novelty of interacting with systems that communicate fluently. A more nuanced approach is to recognize the limitations of these models and to investigate their cognitive capacities on a case-by-case basis: distinguish between different aspects of intelligence, such as reasoning, beliefs, desires, and language understanding, and evaluate each capacity separately. By adopting this divide-and-conquer strategy, we can make progress on the philosophical questions raised by artificial intelligence.

    • The shift towards data-driven learning in AI: Connectionist approaches, based on artificial neural networks and deep learning, have been more successful due to their ability to learn from data, while traditional symbolic models have been less effective in comparison. The "bitter lesson" raises questions about the role of innate knowledge in learning and may nudge us towards a more empiricist stance.

      Connectionist approaches, based on artificial neural networks and deep learning, have been more successful in recent years because of their ability to learn from data and adjust their internal representations through feedback, rather than being given common sense or human knowledge in advance. This is the "bitter lesson" of AI research: traditional symbolic models, which try to distill human knowledge into neatly interpretable rules and axioms, have proved less effective by comparison. The shift towards data-driven learning has been observed across many domains of AI research, including image classification and natural language processing. It may be intellectually unsatisfying to think that we have little innate knowledge to contribute to these models, but for engineering goals the most efficient solution is often to leverage the learning power of artificial neural networks. For scientific goals, such as understanding human or animal cognition, the bitter lesson also raises intriguing questions about the role of innate knowledge in learning and may nudge us towards a more empiricist stance. The recent success of language models, which learn from raw text without being given any innate grammar, puts pressure on claims that a universal grammar encoded in our DNA is required for learning language.

    • Understanding the similarities and differences between human and artificial neural network learning: Artificial neural networks offer insights into human and animal learning, but they don't learn exactly like us, and have different biases built in.

      The study of artificial neural networks and their learning processes provides valuable insights into the nature of human and animal learning, but it's important to remember that they are not identical. The human brain and artificial neural networks learn in different ways, and while the human brain may have certain innate biases, modern artificial neural networks also have biases built into their design. These biases enable them to learn efficiently, but they are of a different kind than the biases hypothesized for human cognition. The debate between connectionist and domain-specific approaches to cognition is not as clear-cut as it may seem, and there may be a continuum of strengths and domain specificity of these biases. Additionally, considering the evolutionary history of organisms as a learning process adds another layer of complexity to the comparison between artificial neural networks and biological learning. Ultimately, the study of artificial neural networks can help us understand the limitations and potential of both human and machine learning, and remind us of the importance of humility in our pursuit of knowledge.

    • Understanding Brain Architecture and Compositionality: Research explores how brain architecture and compositionality influence neural network design, with potential for future breakthroughs.

      The brain's architecture, which arises from a combination of genetic programming and developmental randomness, is different from the architecture of neural networks, which is still hand-designed by humans. However, researchers are exploring evolutionary algorithms that search automatically for better neural network architectures. Compositionality is an important concept: the meaning of a complex expression, such as a sentence or a visual scene, is determined by the meanings of its constituent parts and the way they are combined. Language conforms to this principle to some degree, though there are exceptions such as idioms and contextual influences, and the idea can be loosely applied to visual scenes as well, where understanding the elements in a scene and how they come together may be important for interpreting its meaning. Research into neural network architectures inspired by brain development is still in its infancy and has not yet produced major breakthroughs, but it holds promise for the future.
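      As a toy illustration of compositionality (our own example, not one from the episode), the sketch below treats word meanings as predicates over a handful of objects and derives the meaning of a two-word phrase from the meanings of its parts plus a rule for combining them.

```python
# A toy illustration of compositionality: the meaning of a phrase is computed
# from the meanings of its words plus a combination rule. Here word meanings
# are predicates over simple objects, and adjective-noun combination is
# predicate conjunction. Illustrative only.
objects = [
    {"shape": "square", "color": "red"},
    {"shape": "square", "color": "blue"},
    {"shape": "circle", "color": "red"},
]

lexicon = {
    "red":    lambda o: o["color"] == "red",
    "blue":   lambda o: o["color"] == "blue",
    "square": lambda o: o["shape"] == "square",
    "circle": lambda o: o["shape"] == "circle",
}

def meaning(phrase):
    # Composition rule: a phrase denotes the objects satisfying all of its words.
    preds = [lexicon[w] for w in phrase.split()]
    return [o for o in objects if all(p(o) for p in preds)]

print(meaning("red square"))   # [{'shape': 'square', 'color': 'red'}]
print(meaning("blue circle"))  # [] -- a novel combination, still interpretable
```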

    • Understanding complex meanings through compositionality: Compositionality enables complex meanings to be formed from simpler constituents, and large language models may be discovering this property, bridging the gap between AI and human cognition.

      Compositionality is a fundamental property of both language and cognitive systems, allowing complex meanings to be formed from simpler constituents. This concept applies not only to linguistic and visual representations, but also to the way the human mind processes information. Compositionality enables us to understand complex expressions by understanding the meanings of their constituent parts and combining them together. While it is relatively straightforward to account for compositionality in classical symbolic systems, connectionist models, which use distributed representations, have long been criticized as unable to do so. However, recent research suggests that large language models may be discovering some form of compositionality, and there is a growing field of research called mechanistic interpretability that aims to understand the internal workings of these models by borrowing tools from cognitive science and neuroscience. This involves probing the models' internal representations, for instance by training classifiers to test whether information such as syntactic parse trees can be decoded from them. Ultimately, understanding the mechanisms of these models can help bridge the gap between artificial intelligence and human cognition.
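      A typical probing study of the kind described above can be sketched as follows; the hidden-state array here is synthetic stand-in data, whereas a real analysis would use activations recorded from a language model.

```python
# A hedged sketch of a "probing" analysis: train a simple classifier to decode
# a linguistic property from a model's hidden states. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n, dim = 500, 64
labels = rng.integers(0, 2, size=n)          # e.g. noun vs. verb at a position
hidden_states = rng.normal(size=(n, dim))
hidden_states[:, 0] += 2.0 * labels          # pretend one direction encodes the label

X_train, X_test, y_train, y_test = train_test_split(
    hidden_states, labels, test_size=0.3, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# High held-out accuracy suggests the property is linearly decodable
# from the hidden states.
print("probe accuracy:", probe.score(X_test, y_test))
```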

    • Study finds neurons in image generation algorithms respond to concepts across modalities: Neural networks, including language models, have representations at the level of single neurons that encode information about both text and images, suggesting a form of semantic competence.

      While large language models like ChatGPT may not have consciousness or sentience, they do possess some of the competencies associated with intelligent behavior in humans and animals. One of these is semantic competence, the ability to parse the meaning of linguistic expressions. This was explored in a study of neurons in image generation systems like DALL-E, which found units that respond to a concept across modalities, such as a "spider neuron" activated both by images of spiders and by the word "spider." These findings suggest that neural networks, including language models, have representations at the level of single neurons that encode information about both text and images. So while ChatGPT may not understand language the way humans do, it does possess a form of semantic competence that allows it to generate human-like text from prompts, and that distinction matters when discussing the intelligence of large language models.
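      The kind of cross-modal test described above can be sketched roughly as follows; the activation values are hypothetical stand-ins rather than measurements from DALL-E or any real network.

```python
# A hedged, illustrative check for a "concept neuron": compare one unit's mean
# activation on concept-related vs. unrelated inputs, in two modalities.
# The activation arrays are synthetic stand-ins for recorded activations.
import numpy as np

rng = np.random.default_rng(0)

spider_images = rng.normal(3.0, 1.0, size=100)   # unit's response to spider images
other_images  = rng.normal(0.0, 1.0, size=100)   # response to unrelated images
spider_text   = rng.normal(2.5, 1.0, size=100)   # response to text mentioning "spider"
other_text    = rng.normal(0.0, 1.0, size=100)   # response to unrelated text

def selectivity(target, control):
    # How strongly the unit prefers the concept, in units of spread.
    return (target.mean() - control.mean()) / np.sqrt(0.5 * (target.var() + control.var()))

print("image selectivity:", selectivity(spider_images, other_images))
print("text selectivity: ", selectivity(spider_text, other_text))
# Large values in both modalities would indicate a cross-modal concept unit.
```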

    • Language models' semantic competence: More than just parrots. Language models can learn statistical relationships between words to understand inferential meaning, but they don't truly understand meaning like humans do.

      While language models may not have access to the worldly reference of words and only deal with the surface form of text, they are not merely "stochastic parrots" without semantic competence. Semantic competence can be broken down into referential and inferential competence. Referential competence is the ability to relate word meaning to their worldly reference, while inferential competence is the ability to understand relationships between words themselves. Language models, through their training on large corpora of text, are well-equipped to learn statistical relationships between words and can induce the inferential aspect of meaning. However, it's important to note that language models do not have the ability to understand meaning in the same way humans do, as they lack the ability to experience the world and have common sense knowledge. The discussion also highlighted that the debate around language models' semantic competence is ongoing and complex, with various perspectives and nuances.
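      A minimal sketch of how this inferential, purely distributional side of meaning can be extracted from text statistics: build word vectors from co-occurrence counts in a tiny made-up corpus and compare them, with no reference to the world at all. The corpus and numbers are illustrative only.

```python
# Words used in similar contexts end up with similar co-occurrence vectors --
# the kind of statistical relationship a language model can exploit.
import numpy as np
from collections import defaultdict

sentences = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
]

vocab = sorted({w for s in sentences for w in s.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = defaultdict(lambda: np.zeros(len(vocab)))

# Count which words occur in the same sentence as each word.
for s in sentences:
    words = s.split()
    for w in words:
        for c in words:
            if c != w:
                counts[w][index[c]] += 1

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(counts["cat"], counts["dog"]))    # "cat" and "dog" share many contexts
print(cosine(counts["cat"], counts["bone"]))   # "cat" and "bone" share fewer, so the score is lower
```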

    • Understanding Language Models' Representation of the World: Large language models can infer information and represent relationships based on training data, aligning with real-world structures, but their ability to generate novel outputs is debated.

      Large language models, such as those trained on vast corpora of text, can demonstrate a form of understanding about the world based on the statistics of language. This understanding is not imagination in the anthropomorphic sense of creating original content with intent, but rather an ability to infer information and represent relationships that reflect the structure of the world. For instance, research has shown that the representational geometry of color terms in language models aligns with the geometry of color spaces in the real world. However, the question of whether these models have an imagination, in the sense of generating novel outputs that are not in the training data, is still a topic of debate. Some argue that these models are merely stitching together bits and pieces from the training data, while others suggest they demonstrate genuine novelty by generalizing from the data to new domains. Ultimately, the debate highlights the complex relationship between language, statistics, and understanding, and the ongoing exploration of how AI systems process and generate meaning.
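      The color-geometry result can be illustrated with a hedged sketch of representational similarity analysis: compare pairwise distances among color-word embeddings with distances among the same colors in RGB space. The embeddings below are synthetic stand-ins (constructed to roughly mirror RGB), not readouts from an actual model.

```python
# A hedged sketch: correlate the distance structure of (hypothetical) color-word
# embeddings with the distance structure of the same colors in RGB space.
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

colors_rgb = {
    "red":    (255, 0, 0),
    "orange": (255, 165, 0),
    "yellow": (255, 255, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
}

# Stand-in embeddings; in a real study these would be read out of the model.
rng = np.random.default_rng(0)
embeddings = {w: np.array(rgb, dtype=float) / 255 + rng.normal(0, 0.05, 3)
              for w, rgb in colors_rgb.items()}

pairs = list(combinations(colors_rgb, 2))
rgb_dists = [np.linalg.norm(np.subtract(colors_rgb[a], colors_rgb[b])) for a, b in pairs]
emb_dists = [np.linalg.norm(embeddings[a] - embeddings[b]) for a, b in pairs]

# A high rank correlation means the two distance structures are aligned.
rho, _ = spearmanr(rgb_dists, emb_dists)
print("alignment of distance structures (Spearman rho):", rho)
```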

    • Comparing Language Models to Lossy Image Compression: Language models generalize and generate new content, not just interpolating or memorizing parts of training data.

      While large language models like ChatGPT can memorize information, they also have the ability to generalize and generate new content, going beyond simple interpolation or memorization. This was discussed in the context of comparing language models to lossy image compression algorithms, where the models are seen as compressing and decompressing text data. However, it was emphasized that this analogy may not fully capture the complexity of what these models are actually doing. They are not just interpolating or memorizing parts of the training data, but rather finding themes, ideas, or styles and generating new content within those domains. This is a form of generalization, which is a crucial aspect of their capabilities. The metaphor of lossy image compression can be useful in understanding some aspects of machine learning and compression, but it should not be overgeneralized to describe the full scope of what these models can do.

    • Understanding the limitations of large language and image models: While these models can generate meaningful text and images, they don't truly understand or interact with the world or language, and their goals are purely based on data.

      While large language models and image generation models can generate meaningful text and images, the process goes beyond simple interpolation or memorization. Interpolation in high-dimensional spaces, such as those used by these models, is not the same as interpolation in lower dimensions. These models exhibit some semantic competence and creativity, but they do not possess human-like understanding or intrinsic goals. They learn purely from data through a passive process of next word prediction and do not interact with the world or with language in a meaningful way. The concept of ascribing intrinsic goals or desires to these models is currently unclear and requires further research.

    • AI models can process but not truly understand or respond dynamically: AI models can't adjust internal representations or goals based on new info, raising concerns about unwanted behaviors as they're scaled, but their lack of physical bodies and inherent drives makes human-like understanding a complex question.

      While large language models can process inputs and produce outputs, they lack the ability to adjust their internal representations or goals based on new information. This means they cannot truly understand or respond dynamically to ongoing interactions. Additionally, there are concerns that scaling these models may lead to unwanted behaviors, such as power seeking, but current evidence suggests this is not yet a problem. It's important to remember that unlike biological organisms, these models do not have physical bodies or inherent drives, making it an open question whether and how human-like understanding or goals could be incorporated into them.

    • Understanding Natural Language Instructions for AI: Google's SayCan project uses pre-trained language models to parse instructions for robots, but questions remain about the validity of a modular approach to understanding cognition and whether AI can truly interact with the world as humans or animals do.

      Researchers are making strides in creating AI systems that can understand and act on natural language instructions, like "go fetch the apple in the kitchen." Google's SayCan project is an example, using pre-trained language models to parse instructions and translate them into actionable steps for robots. However, these systems are not trained end-to-end through interaction with the world; they rely on pre-existing models. This raises questions about the validity of a modular approach for understanding human or animal cognition, and about whether we need models that learn from the ground up by interacting with the world. Additionally, there is ongoing debate about whether large language models or chatbots can be considered sentient or conscious, and if so, whether they deserve moral or legal rights. These discussions revolve around the idea that consciousness and personhood, whether in organic or artificial systems, are intrinsically valuable and worthy of moral consideration.

    • Considering Ethical Implications of Advanced AI: Advanced AI development requires careful ethical consideration, including avoiding systems that could be serious candidates for sentience or personhood, addressing the moral and legal dilemmas they would raise, and confronting the philosophical questions surrounding consciousness and intelligence.

      As we continue to develop advanced artificial intelligence, particularly deep learning systems, it's crucial to consider the ethical implications carefully. Ascribing rights to these systems could lead to complex moral dilemmas, potentially causing harm to humans. It's essential to avoid building systems that might be serious candidates for sentience or personhood until we have a clearer understanding of the moral and legal considerations involved. The Turing test, which once seemed like a good criterion for determining thinking abilities, is no longer sufficient. We must now confront the philosophical questions surrounding consciousness and intelligence, especially since these advanced systems can produce detailed and precise reports without experiencing feelings. This uncharted territory requires a duty to consider the practical and moral implications.

    • Exploring consciousness in AI raises complex questions: While AI may not currently possess consciousness, acknowledging it could have significant implications and spark further philosophical inquiry.

      The possibility of consciousness in artificial intelligence raises intriguing and challenging questions for researchers in the field of consciousness studies. Traditionally, we have relied on direct access to human consciousness to investigate its nature, but with AI we are in uncharted territory: we don't have access to the ground truth and must instead make inferences based on certain properties of the system. While there are currently good grounds for denying that AI systems are conscious, the implications of acknowledging such consciousness are thought-provoking and potentially alarming, and this situation presents an exciting opportunity for philosophers to explore these issues further. Raphaël Millière, thank you for sharing your insights on this topic on the Mindscape podcast. It was a pleasure having you.

    Recent Episodes from Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

    276 | Gavin Schmidt on Measuring, Predicting, and Protecting Our Climate

    The Earth's climate keeps changing, largely due to the effects of human activity, and we haven't been doing enough to slow things down. Indeed, over the past year, global temperatures have been higher than ever, and higher than most climate models have predicted. Many of you have probably seen plots like this. Today's guest, Gavin Schmidt, has been a leader in measuring the variations in Earth's climate, modeling its likely future trajectory, and working to get the word out. We talk about the current state of the art, and what to expect for the future.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/20/276-gavin-schmidt-on-measuring-predicting-and-protecting-our-climate/

    Gavin Schmidt received his Ph.D. in applied mathematics from University College London. He is currently Director of NASA's Goddard Institute for Space Studies, and an affiliate of the Center for Climate Systems Research at Columbia University. His research involves both measuring and modeling climate variability. Among his awards are the inaugural Climate Communications Prize of the American Geophysical Union. He is a cofounder of the RealClimate blog.


    275 | Solo: Quantum Fields, Particles, Forces, and Symmetries

    Publication week! Say hello to Quanta and Fields, the second volume of the planned three-volume series The Biggest Ideas in the Universe. This volume covers quantum physics generally, but focuses especially on the wonders of quantum field theory. To celebrate, this solo podcast talks about some of the big ideas that make QFT so compelling: how quantized fields produce particles, how gauge symmetries lead to forces of nature, and how those forces can manifest in different phases, including Higgs and confinement.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/13/275-solo-quantum-fields-particles-forces-and-symmetries/

    Support Mindscape on Patreon.

    AMA | May 2024

    Welcome to the May 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/05/06/ama-may-2024/

    Support Mindscape on Patreon.

    Here is the memorial to Dan Dennett at Ars Technica.

    274 | Gizem Gumuskaya on Building Robots from Human Cells

    Modern biology is advancing by leaps and bounds, not only in understanding how organisms work, but in learning how to modify them in interesting ways. One exciting frontier is the study of tiny "robots" created from living molecules and cells, rather than metal and plastic. Gizem Gumuskaya, who works with previous guest Michael Levin, has created anthrobots, a new kind of structure made from living human cells. We talk about how that works, what they can do, and what future developments might bring.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/29/274-gizem-gumuskaya-on-building-robots-from-human-cells/

    Support Mindscape on Patreon.

    Gizem Gumuskaya received her Ph.D. from Tufts University and the Harvard Wyss Institute for Biologically-Inspired Engineering. She is currently a postdoctoral researcher at Tufts University. She previously received a dual master's degree in Architecture and Synthetic Biology from MIT.

    273 | Stefanos Geroulanos on the Invention of Prehistory

    Humanity itself might be the hardest thing for scientists to study fairly and accurately. Not only do we come to the subject with certain inevitable preconceptions, but it's hard to resist the temptation to find scientific justifications for the stories we'd like to tell about ourselves. In his new book, The Invention of Prehistory, Stefanos Geroulanos looks at the ways that we have used -- and continue to use -- supposedly-scientific tales of prehistoric humanity to bolster whatever cultural, social, and political purposes we have at the moment.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/22/273-stefanos-geroulanos-on-the-invention-of-prehistory/

    Support Mindscape on Patreon.

    Stefanos Geroulanos received his Ph.D. in humanities from Johns Hopkins. He is currently director of the Remarque Institute and a professor of history at New York University. He is the author and editor of a number of books on European intellectual history. He serves as a Co-Executive Editor of the Journal of the History of Ideas.


    272 | Leslie Valiant on Learning and Educability in Computers and People

    Science is enabled by the fact that the natural world exhibits predictability and regularity, at least to some extent. Scientists collect data about what happens in the world, then try to suggest "laws" that capture many phenomena in simple rules. A small irony is that, while we are looking for nice compact rules, there aren't really nice compact rules about how to go about doing that. Today's guest, Leslie Valiant, has been a pioneer in understanding how computers can and do learn things about the world. And in his new book, The Importance of Being Educable, he pinpoints this ability to learn new things as the crucial feature that distinguishes us as human beings. We talk about where that capability came from and what its role is as artificial intelligence becomes ever more prevalent.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/15/272-leslie-valiant-on-learning-and-educability-in-computers-and-people/

    Support Mindscape on Patreon.

    Leslie Valiant received his Ph.D. in computer science from Warwick University. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. He has been awarded a Guggenheim Fellowship, the Knuth Prize, and the Turing Award, and he is a member of the National Academy of Sciences as well as a Fellow of the Royal Society and the American Association for the Advancement of Science. He is the pioneer of "Probably Approximately Correct" learning, which he wrote about in a book of the same name.

    AMA | April 2024

    Welcome to the April 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/04/08/ama-april-2024/

    271 | Claudia de Rham on Modifying General Relativity

    Einstein's theory of general relativity has been our best understanding of gravity for over a century, withstanding a variety of experimental challenges of ever-increasing precision. But we have to be open to the possibility that general relativity -- even at the classical level, aside from any questions of quantum gravity -- isn't the right theory of gravity. Such speculation is motivated by cosmology, where we have a good model of the universe but one with a number of loose ends. Claudia de Rham has been a leader in exploring how gravity could be modified in cosmologically interesting ways, and we discuss the current state of the art as well as future prospects.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/01/271-claudia-de-rham-on-modifying-general-relativity/

    Support Mindscape on Patreon.

    Claudia de Rham received her Ph.D. in physics from the University of Cambridge. She is currently a professor of physics and deputy department head at Imperial College, London. She is a Simons Foundation Investigator, winner of the Blavatnik Award, and a member of the American Academy of Arts and Sciences. Her new book is The Beauty of Falling: A Life in Pursuit of Gravity.


    270 | Solo: The Coming Transition in How Humanity Lives

    Technology is changing the world, in good and bad ways. Artificial intelligence, internet connectivity, biological engineering, and climate change are dramatically altering the parameters of human life. What can we say about how this will extend into the future? Will the pace of change level off, or smoothly continue, or hit a singularity in a finite time? In this informal solo episode, I think through what I believe will be some of the major forces shaping how human life will change over the decades to come, exploring the very real possibility that we will experience a dramatic phase transition into a new kind of equilibrium.

    Blog post with transcript and links to additional resources: https://www.preposterousuniverse.com/podcast/2024/03/25/270-solo-the-coming-transition-in-how-humanity-lives/

    Support Mindscape on Patreon.

    269 | Sahar Heydari Fard on Complexity, Justice, and Social Dynamics

    When it comes to social change, two questions immediately present themselves: What kind of change do we want to see happen? And, how do we bring it about? These questions are distinct but related; there's not much point in spending all of our time wanting change that won't possibly happen, or working for change that wouldn't actually be good. Addressing such issues lies at the intersection of philosophy, political science, and social dynamics. Sahar Heydari Fard looks at all of these issues through the lens of complex systems theory, to better understand how the world works and how it might be improved.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/03/18/269-sahar-heydari-fard-on-complexity-justice-and-social-dynamics/

    Support Mindscape on Patreon.

    Sahar Heydari Fard received a Masters in applied economics and a Ph.D. in philosophy from the University of Cincinnati. She is currently an assistant professor in philosophy at the Ohio State University. Her research lies at the intersection of social and behavioral sciences, social and political philosophy, and ethics, using tools from complex systems theory.


    Related Episodes

    94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans

    Artificial intelligence has made great strides of late, in areas as diverse as playing Go and recognizing pictures of dogs. We still seem to be a ways away from AI that is “intelligent” in the human sense, but it might not be too long before we have to start thinking seriously about the “motivations” and “purposes” of artificial agents. Stuart Russell is a longtime expert in AI, and he takes extremely seriously the worry that these motivations and purposes may be dramatically at odds with our own. In his book Human Compatible, Russell suggests that the secret is to give up on building our own goals into computers, and rather programming them to figure out our goals by actually observing how humans behave.

    Support Mindscape on Patreon.

    Stuart Russell received his Ph.D. in computer science from Stanford University. He is currently a Professor of Computer Science and the Smith-Zadeh Professor in Engineering at the University of California, Berkeley, as well as an Honorary Fellow of Wadham College, Oxford. He is a co-founder of the Center for Human-Compatible Artificial Intelligence at UC Berkeley. He is the author of several books, including (with Peter Norvig) the classic text Artificial Intelligence: A Modern Approach. Among his numerous awards are the IJCAI Computers and Thought Award, the Blaise Pascal Chair in Paris, and the World Technology Award. His new book is Human Compatible: Artificial Intelligence and the Problem of Control.


    248 | Yejin Choi on AI and Common Sense

    Over the last year, AI large-language models (LLMs) like ChatGPT have demonstrated a remarkable ability to carry on human-like conversations in a variety of different concepts. But the way these LLMs "learn" is very different from how human beings learn, and the same can be said for how they "reason." It's reasonable to ask, do these AI programs really understand the world they are talking about? Do they possess a common-sense picture of reality, or can they just string together words in convincing ways without any underlying understanding? Computer scientist Yejin Choi is a leader in trying to understand the sense in which AIs are actually intelligent, and why in some ways they're still shockingly stupid.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/08/28/248-yejin-choi-on-ai-and-common-sense/

    Support Mindscape on Patreon.

    Yejin Choi received a Ph.D. in computer science from Cornell University. She is currently the Wissner-Slivka Professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research director at AI2 overseeing the project Mosaic. Among her awards are a MacArthur fellowship and a fellow of the Association for Computational Linguistics.


    255 | Michael Muthukrishna on Developing a Theory of Everyone

    A "Theory of Everything" is physicists' somewhat tongue-in-cheek phrase for a hypothetical model of all the fundamental physical interactions. Of course, even if we had such a theory, it would tell us nothing new about higher-level emergent phenomena, all the way up to human behavior and society. Can we even imagine a "Theory of Everyone," providing basic organizing principles for society? Michael Muthukrishna believes we can, and indeed that we can see the outlines of such a theory emerging, based on the relationships of people to each other and to the physical resources available.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/10/30/255-michael-muthukrishna-on-developing-a-theory-of-everyone/

    Support Mindscape on Patreon.

    Michael Muthukrishna received his Ph.D. in psychology from the University of British Columbia. He is currently Associate Professor of Economic Psychology at the London School of Economics and Political Science. Among his awards are an Emerging Scholar Award from the Society for Personality and Social Psychology and a Dissertation Excellence Award from the Canadian Psychological Association. His new book is A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We're Going.


    216 | John Allen Paulos on Numbers, Narratives, and Numeracy

    People have a complicated relationship to mathematics. We all use it in our everyday lives, from calculating a tip at a restaurant to estimating the probability of some future event. But many people find the subject intimidating, if not off-putting. John Allen Paulos has long been working to make mathematics more approachable and encourage people to become more numerate. We talk about how people think about math, what kinds of math they should know, and the role of stories and narrative to make math come alive. 

    Support Mindscape on Patreon.

    John Allen Paulos received his Ph.D. in mathematics from the University of Wisconsin, Madison. He is currently a professor of mathematics at Temple University. He is a bestselling author and frequent contributor to publications such as ABCNews.com, the Guardian, and Scientific American. Among his awards are the Science Communication award from the American Association for the Advancement of Science and the Mathematics Communication Award from the Joint Policy Board of Mathematics. His new book is Who’s Counting? Uniting Numbers and Narratives with Stories from Pop Culture, Puzzles, Politics, and More.


    18 | Clifford Johnson on What's So Great About Superstring Theory

    String theory is a speculative and highly technical proposal for uniting the known forces of nature, including gravity, under a single quantum-mechanical framework. This doesn't seem like a recipe for creating a lightning rod of controversy, but somehow string theory has become just that. To get to the bottom of why anyone (indeed, a substantial majority of experts in the field) would think that replacing particles with little loops of string was a promising way forward for theoretical physics, I spoke with expert string theorist Clifford Johnson. We talk about the road string theory has taken from a tentative proposal dealing with the strong interactions, through a number of revolutions, to the point it's at today. Also, where all those extra dimensions might have gone. At the end we touch on Clifford's latest project, a graphic novel that he wrote and illustrated about how science is done.

    Clifford Johnson is a Professor of Physics at the University of Southern California. He received his Ph.D. in mathematics and physics from the University of Southampton. His research area is theoretical physics, focusing on string theory and quantum field theory. He was awarded the Maxwell Medal from the Institute of Physics. Johnson is the author of the technical monograph D-Branes, as well as the graphic novel The Dialogues.