    258 | Solo: AI Thinks Different

    November 27, 2023
    How does Indeed streamline the hiring process?
    What are the key features of Rocket Money?
    What recent corporate event involved OpenAI's CEO?
    What limitations do AI language models like GPT have?
    How do language models handle context when answering questions?

    Podcast Summary

    • Streamline hiring with Indeed and save money with Rocket Money: Indeed's sophisticated matching engine helps employers find quality candidates efficiently, while Rocket Money identifies and cancels unwanted subscriptions, saving users an average of $720 a year.

      For hiring and finding quality candidates, using a platform like Indeed can save time and deliver the highest quality matches compared to traditional searching methods. With over 350 million monthly visitors and a sophisticated matching engine, Indeed streamlines the hiring process from scheduling and screening to messaging, helping employers connect with candidates faster. Additionally, Rocket Money, a personal finance app, can help individuals save money by identifying and canceling unwanted subscriptions, monitoring spending, and lowering bills. With over 5 million users and an average savings of $720 a year, Rocket Money is a valuable tool for managing subscriptions and reducing unnecessary expenses. Furthermore, advancements in artificial intelligence, specifically large language models, are revolutionizing technology and have the potential to significantly impact various industries. However, the involvement of money and corporate dynamics, such as the recent firing of OpenAI's CEO, Sam Altman, can complicate the landscape and create new challenges.

    • Power Struggle at OpenAI over AI Safety Concerns: OpenAI's rapid advancement in AI technology and potential safety risks led to concerns among board members and within the company, resulting in Sam Altman's firing and rehiring as CEO.

      The recent power struggle at OpenAI, which resulted in Sam Altman being fired and then rehired as CEO, may have been due to concerns over the company's rapid advancement in AI technology and potential safety risks. OpenAI was founded as a nonprofit organization with a mission to develop AI in an open and transparent way, but it later transitioned into a for-profit subsidiary to secure more resources. Some members of the board and the company reportedly became worried that the company was moving too quickly without proper safety measures in place. The consensus in the field is that large language models like ChatGPT are not yet Artificial General Intelligence (AGI), but they may be a step in that direction. OpenAI's rumored new project, Q Star, is said to be close to AGI, which has some experts concerned about the potential risks. Despite the name OpenAI, the company has been less transparent about its operations recently, adding to the speculation. It's unclear if these concerns were the reason for Altman's firing, but the incident highlights the ongoing debate about the ethical and safety implications of AI technology.

    • Bridging the gap between AI experts and philosophers: Encouraging open dialogue and a willingness to learn from experts in various fields can foster a more holistic understanding of AGI and its potential risks and benefits.

      The ongoing discourse about artificial general intelligence (AGI) requires a deeper understanding of the concepts of intelligence, thinking, values, and morality. Sean Carroll, the host of the Mindscape podcast, believes that there is a gap in the conversation between experts in computer science and those in philosophy or related fields. He emphasizes the importance of generalists, or individuals with a broad knowledge base, to contribute to the discussion. Carroll acknowledges that he is not an expert in AI development but has a background in physics, philosophy, and an interest in the subject. He encourages open dialogue and a willingness to learn and adapt opinions based on expertise. The podcast aims to bring together experts from various fields to discuss the implications of AGI and its potential impact on society. The conversation should not only focus on the technical aspects of AI but also explore the philosophical and ethical dimensions. By fostering a more holistic understanding of AGI, the discourse can better address potential risks and benefits.

    • Impact of Large Language Models on Our Lives: Large language models can generate ideas, content, and even syllabi, but their results should be fact-checked and used as a starting point, not the final word. The impact of these models is expected to be significant, potentially reaching the level of smartphones or electricity.

      Large language models (LLMs) or AI programs like ChatGPT have impressive capabilities and can generate human-like responses, even if they sometimes provide incorrect or nonexistent information. These models can be useful tools for generating ideas, creating content, and even designing syllabi, although their results should be fact-checked. The impact of LLMs and similar AI on our lives is expected to be significant, potentially reaching the level of smartphones or even electricity, but the exact extent is yet to be determined. While there are concerns about existential risks, the speaker believes that the changes will be enormous and mostly positive. It's important to remember that these models are not infallible and should be used as a starting point rather than the final word.

    • Large language models don't model the world or think like humans: Large language models are computer programs that mimic human speech and knowledge, but they don't understand or experience the world like humans or possess feelings or motivations.

      Large language models, despite their impressive abilities to mimic human speech and knowledge, do not model the world, think about it like humans, or possess feelings or motivations. They are computer programs that process information based on patterns and data, not conscious beings. This misconception arises because of their human-like abilities, leading us to attribute human qualities to them. However, these models do not have the capacity to understand or experience the world in the same way humans do. This distinction is crucial as we continue to interact and rely on these technologies. Additionally, the words we use to describe them, such as intelligence and values, can be misleading as they are borrowed from human contexts and do not perfectly apply to these models. It's essential to remember that large language models are tools, not sentient beings, and we should be impressed by their ability to mimic humanness without assuming they possess human thoughts or emotions.

    • Large language models don't model the world, they just recognize patterns: Large language models are successful due to their vast connections and data, not their ability to represent the world.

      Large language models (LLMs) do not have a model of the world inside them, despite giving human-like answers and exhibiting apparent spatial reasoning abilities. These models are not programmed or built to model the world; they are deep learning neural networks whose nodes activate based on weighted inputs from other nodes. They are trained to recognize patterns and produce correct answers based on the vast amounts of text they have been fed, but they do not physically or conceptually represent the world. The connectionist paradigm, the approach used in LLMs, has outperformed the symbolic AI approach that tries to model the world directly. LLMs have been trained on a vast share of the text ever written, and their output is further tuned based on human feedback. However, they do not undergo training to represent the world; their success comes from producing the right answer using an enormous number of connections and an enormous amount of data.
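
      To make the "deep learning network of nodes" picture concrete, here is a deliberately tiny, purely illustrative Python sketch (an assumption for illustration, not Carroll's example or a real LLM architecture) of a single artificial neuron: it combines its inputs with weights and squashes the result through a nonlinearity. Stacking and training vast numbers of such nodes is what the connectionist approach amounts to.

        import math

        def neuron(inputs, weights, bias):
            # Weighted sum of inputs, squashed through a sigmoid nonlinearity.
            total = sum(x * w for x, w in zip(inputs, weights)) + bias
            return 1.0 / (1.0 + math.exp(-total))

        # Toy example: two inputs with fixed (not learned) weights.
        activation = neuron([0.5, 0.2], [1.5, -0.7], bias=0.1)
        print(round(activation, 3))  # a single number between 0 and 1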

    • Large Language Models Don't Inherently Model the World: Large language models generate human-like responses based on patterns learned, not a model of the world.

      While large language models (LLMs) like ChatGPT can generate human-like responses and complete sentences based on given data, they do not inherently model the world. Instead, they predict the next most likely words based on patterns learned from the vast amount of text they've been trained on. The LLM's optimization target is to generate plausible-sounding sentences, not to model the world. Just because an LLM provides human-like answers doesn't mean it is using a model of the world to generate those answers. To test this, researchers try to ask questions that would require a model of the world to answer, but it's challenging to find questions that have never been asked before. The idea that LLMs might have implicitly or spontaneously developed a model of the world is intriguing, but there's currently no concrete evidence to support it. Their impressive capabilities come from a vast knowledge base, which they use to string together plausible-sounding sentences.
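
      To make the "predict the next most likely words" idea concrete, here is a minimal, purely illustrative Python sketch (an assumption for illustration, far simpler than a real LLM and not the episode's own example): a bigram model that scores possible next words by how often they followed the previous word in its training text, then picks the most plausible continuation. Nothing in it represents skillets, heat, or the world; it only records co-occurrence patterns.

        from collections import Counter, defaultdict

        training_text = "the skillet is hot . the skillet is cast iron . the pizza is hot ."
        tokens = training_text.split()

        # For each word, count which words tend to follow it in the training text.
        next_counts = defaultdict(Counter)
        for prev, nxt in zip(tokens, tokens[1:]):
            next_counts[prev][nxt] += 1

        def predict_next(word):
            # Return the statistically most common continuation seen in training.
            counts = next_counts.get(word)
            return counts.most_common(1)[0][0] if counts else None

        print(predict_next("skillet"))  # 'is'  -- the only word that ever followed it
        print(predict_next("is"))       # 'hot' -- the most frequent continuation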

    • Current AI's understanding is limited to patterns and past knowledge: AI fails to recognize the same concept disguised in different words and lacks inherent understanding of complex concepts, relying on specific keywords and details to generate responses.

      Current artificial intelligences, like ChatGPT, don't possess the ability to think independently or model the world beyond what they've been trained on. They rely on recognizing patterns in language and past human knowledge to generate responses. The speaker shared an example of the Sleeping Beauty problem, where they tried to ask ChatGPT about the concept disguised in different words, but it failed to recognize it as the same problem due to the lack of specific keywords. This demonstrates that AI's understanding is limited to the data they've been given and the patterns they've learned from it. Another example given was a question about whether one could get burned by touching a cast iron skillet that had been used to bake a pizza the previous day. The AI correctly answered no, but only because the speaker had mentioned the specific details of the situation, not because it had an inherent understanding of thermodynamics or the properties of cast iron. These examples illustrate the current limitations of AI and the importance of continuing research to develop more advanced and independent thinking machines.

    • Misunderstandings by GPT models: GPT models generate responses based on patterns and context from their training data, but don't truly understand the concepts they're discussing. They can make mistakes due to lack of context or misinterpretation of information.

      While GPT models can provide accurate and detailed responses, they don't truly understand or model the world. They rely on patterns and context from the data they've been trained on to generate their answers. In the first example, the model misunderstood the question about picking up a cast iron skillet because the word "yesterday" was not typically associated with that context in its training data. In the second example, the model incorrectly stated that the likelihood of the product of two integers being prime decreases as the numbers grow larger, even though the likelihood remains constant at zero (the product of two integers greater than one always has both of them as factors, so it can never be prime). These mistakes highlight the limitations of current language models like GPT, which don't actually understand the concepts they're discussing, but rather regurgitate information based on patterns in the data they've been trained on. Despite these limitations, it's important to remember that these models are still incredibly useful tools for generating text, answering questions, and even creating art or poetry. They just need to be used with an understanding of their capabilities and limitations.
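
      As a quick check of the point about prime products (a hypothetical sketch of my own, not from the episode), the following Python snippet samples pairs of integers greater than one and confirms that none of their products is prime, no matter how large the numbers get; any such product has both factors, so the count stays at zero.

        import random

        def is_prime(n):
            # Simple trial division; adequate for this illustration.
            if n < 2:
                return False
            i = 2
            while i * i <= n:
                if n % i == 0:
                    return False
                i += 1
            return True

        random.seed(0)
        for bound in (10, 1_000, 100_000):
            pairs = [(random.randint(2, bound), random.randint(2, bound)) for _ in range(5_000)]
            prime_products = sum(is_prime(a * b) for a, b in pairs)
            print(bound, prime_products)  # prints 0 for every bound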

    • AI struggles with complex situations involving heuristics and rules that differ from what they've been trained on: AI models like ChatGPT can process and generate information within their context, but lack the adaptability to reason about complex, real-world situations with new rules and heuristics.

      While AI models like ChatGPT can provide sophisticated answers within their given context, they struggle when it comes to reasoning about complex situations that involve heuristics or rules that differ even slightly from what they've been trained on. This was illustrated in a discussion about a modified version of chess played on a toroidal board, where pieces can move from one edge of the board to the other as if the board is looping around. Despite ChatGPT acknowledging the fascinating twist this introduces to the classic game, it failed to provide a definitive answer about which color would generally win, instead focusing on general principles that might still apply, such as the first move advantage, but without any concrete evidence or understanding of how these principles would be affected by the new rules. The output also highlighted the lack of empirical data as a limitation for the AI model. In essence, while AI models can be impressive in their ability to process and generate information within their context, they still have a long way to go in terms of adaptability and ability to reason about complex, real-world situations that involve heuristics and rules that differ from what they've been trained on.
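
      As an aside on the toroidal variant itself, the wrap-around rule is easy to state precisely even though reasoning about its strategic consequences is hard. The Python sketch below (an illustrative assumption about how such a rule could be encoded, not anything discussed in the episode) shows a move wrapping across the edges of an 8x8 board using modular arithmetic.

        BOARD_SIZE = 8

        def toroidal_step(square, step):
            # square and step are (file, rank) pairs; coordinates wrap modulo 8,
            # so a piece moving off one edge re-enters from the opposite edge.
            return ((square[0] + step[0]) % BOARD_SIZE,
                    (square[1] + step[1]) % BOARD_SIZE)

        # A rook on h1 (file 7, rank 0) stepping one square further "right"
        # wraps around to a1 (file 0, rank 0) on a toroidal board.
        print(toroidal_step((7, 0), (1, 0)))  # (0, 0)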

    • Understanding the difference between human beings and large language models: Large language models are tools for processing and generating text, not complex organisms with feelings, motivations, or goals.

      Large language models, such as GPT, don't model the world or have feelings, motivations, or goals like human beings do. They generate responses based on patterns and information they've been fed during training, without the ability to understand context in the same way humans do or experience sensations or biological needs. This distinction is crucial as we continue to explore and develop these technologies. While it's theoretically possible to create an artificial system with all the features of a human being, we haven't achieved that yet. Human beings are complex organisms that have evolved over time to maintain our internal equilibrium and adapt to our environment, which includes having feelings and motivations. Large language models, on the other hand, are tools designed to process and generate text based on input, without the ability to experience the world in the same way humans do. Understanding this difference is essential as we navigate the ethical, social, and philosophical implications of advanced AI and language models.

    • The importance of interdisciplinary collaboration and clear communication in AI research: Effective communication between computer science, philosophy, biology, neuroscience, and sociology is crucial in AI research. Clear definitions and a nuanced understanding of terms like 'intelligence' and 'values' are necessary to navigate the complexities and implications of AI development.

      The discussion around AI and its implications requires input from various experts, including computer science, philosophy, biology, neuroscience, and sociology, among others. Effective communication and understanding between these disciplines are crucial but often overlooked in the current academic structure. The speaker emphasizes that large language models (LLMs) mimicking human speech without feelings, motivations, or internal regulatory apparatus is a significant issue. This lack of emotional and motivational components is essential to understand when considering AI's behavior and potential. Moreover, the words we use to describe AIs, such as "intelligence" and "values," can be misleading. Philosophers can help clarify the meaning of these terms as they have evolved over time, ensuring a more nuanced understanding of AI's capabilities and limitations. Additionally, the speaker suggests that creating AIs with biological organism features, such as feelings and motivations, could lead to more advanced AI systems. However, it's essential to consider whether such goals align with the intended purpose of AI development. In summary, the key takeaway is that interdisciplinary collaboration, clear communication, and a nuanced understanding of the terms used in AI research are essential to navigate the complexities and implications of AI development.

    • LLMs mimic human intelligence and values but don't possess them in the same way: LLMs can sound intelligent and have a sense of values, but their abilities don't equate to possessing these qualities in the same way humans do. Understanding context is crucial to accurate communication.

      While large language models (LLMs) can mimic human intelligence and values, it's crucial to remember that they don't possess these qualities in the same way humans do. The meaning of words like intelligence and values can vary greatly depending on the context. For instance, in quantum mechanics, the term "entanglement" has a specific meaning that differs from its everyday usage. Similarly, LLMs can sound intelligent and have a sense of values because they're designed to mimic human speech, but this doesn't imply they truly understand or possess these qualities. LLMs can answer complex questions and perform tasks that would be challenging for humans, yet their ability to do so doesn't equate to having the same kind of intelligence or understanding as humans. The value alignment problem in AI circles highlights the importance of ensuring that powerful AIs share values with human beings. Philosophers and scientists often grapple with the ambiguity of language and the importance of understanding context when using words, and the same care applies when describing LLMs. In short, understanding the context and meaning of the words we use is crucial to avoiding confusion and ensuring accurate communication about what these models can and cannot do.

    • Large Language Models Lack Human Values: Large language models don't possess human values or consciousness; they are merely trained not to claim them. AI safety should focus on ensuring these systems don't directly harm human beings.

      When it comes to large language models (LLMs), it's essential to understand that their capabilities and functioning are fundamentally different from human beings. The discussion emphasized that values, as we understand them in the human context, do not apply to LLMs. Values are a construct of human beings, shaped by our evolution and motivations, and LLMs lack these underlying intuitions and inclinations. The idea that LLMs have been programmed to not claim consciousness or values is a result of their training, not an inherent quality. While ensuring AI safety is crucial, it's misleading to label it as "value alignment." This term carries implications that might not accurately represent the goals and intentions of the field. Instead, focusing on ensuring that AI does not harm human beings directly is a more accurate and clear approach.

    • Recent advancements in large language models challenge assumptions about the complexity of human thought: Large language models can mimic human responses, suggesting humans might be simpler information processors than we assumed, or mostly on autopilot, which opens new possibilities for understanding human thought and intelligence.

      The recent advancements in large language models (LLMs) are remarkable, as they can mimic human responses without actually thinking or understanding like humans do. This discovery challenges the skepticism that human thought is too complex to be replicated by AI, suggesting that human beings might be simpler information processing machines than previously thought, or that we mostly operate on autopilot, engaging only simple parts of our cognitive capacities. Despite the complexity of the human brain, LLMs have managed to produce human-like responses, indicating that we may not be as complex or unpredictable as we believe. This finding opens up new possibilities for understanding the nature of human thought and intelligence.

    • Understanding the differences between LLMs and human intelligence: LLMs have impressive capabilities but may not match human depth and creativity in areas like research, art, literature, and poetry, because they lack the embodied, biological nature of human beings.

      While large language models (LLMs) exhibit impressive capabilities, they are fundamentally different from human intelligence, which is rooted in our nature as biological organisms, with energy requirements, vulnerability to damage, feelings, and motivations. Lacking that biological grounding, LLMs may be unable to generate truly new ideas or insights. Despite their impressive performance in various tasks, such as generating recipes or sports game summaries, LLMs may not be able to match the depth and creativity of human beings in areas like scientific research, art, literature, and poetry. This is not to say that LLMs cannot improve, or that a different approach to AI couldn't achieve general intelligence, but rather to acknowledge the current limitations of this particular approach. It's essential to recognize and appreciate the distinct capabilities of both LLMs and human beings, and to continue exploring the potential of AI while remaining mindful of its limitations.

    • Focusing on real-world AI risks instead of existential ones: Addressing misinformation, bias, and errors in AI decision-making through regulation and safety measures is crucial, while the likelihood of AI surpassing human intelligence and becoming uncontrollable is highly speculative.

      While the existential risks of artificial intelligence (AI) are a valid concern, the focus on the potential for AI to surpass human intelligence and become uncontrollable is misguided. The chances of such an event are highly speculative, and the real-world risks of AI, such as misinformation, bias, and errors in decision-making, are more immediate and pressing. It's crucial to address these issues through careful regulation and safety measures. By focusing on the real-world risks and taking action to mitigate them, we can make it less likely that existential risks will materialize. Additionally, AI has the potential to help us address other existential threats, such as climate change and nuclear war. In conclusion, it's essential to approach the discussion of AI with accuracy and clarity, rather than borrowing human-centric terms and applying them thoughtlessly to AI. The next generation of thinkers will play a significant role in shaping the future of AI, and it's important to encourage their involvement in this important conversation.

    • Approaching complex topics with care and consideration: To make meaningful progress in complex topics, we need to approach them with thoughtful and responsible exploration, avoiding hasty conclusions and oversimplified answers, and engaging in open and respectful dialogue.

      The current discourse surrounding certain questions lacks depth and understanding. According to our expert, we need to approach these topics with more care and consideration to make meaningful progress. The potential for groundbreaking discoveries and advancements still exists, but it requires thoughtful and responsible exploration. Let's not rush into hasty conclusions or oversimplified answers. Instead, let's continue the conversation with a renewed sense of curiosity and a commitment to accuracy and nuance. After all, the stakes are high, and the potential rewards are great. So, let's take our time, do our research, and engage in open and respectful dialogue to ensure that we're making progress in the right direction. The future is bright, but it's up to us to make the most of it.

    Recent Episodes from Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

    288 | Max Richter on the Meaning of Classical Music Today

    It wasn't that long ago, historically speaking, that you might put on your tuxedo or floor-length evening gown to go out and hear a live opera or symphony. But today's world is faster, more technologically connected, and casual. Is there still a place for classical music in the contemporary environment? Max Richter, whose new album In a Landscape releases soon, proves that there is. We talk about what goes into making modern classical music, how musical styles evolve, and why every note should count.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/09/09/288-max-richter-on-the-meaning-of-classical-music-today/

    Max Richter trained in composition and piano at Edinburgh University, at the Royal Academy of Music, and with Luciano Berio in Florence. He was a co-founder of the ensemble Piano Circus. His first solo album, "Memoryhouse," was released in 2002. He has since released numerous solo albums, as well as extensive work on soundtracks for film and television, ballet, opera, and collaborations with visual artists.


    AMA | September 2024

    Welcome to the September 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with AMA questions and transcript: https://www.preposterousuniverse.com/podcast/2024/09/02/ama-september-2024/

    287 | Jean-Paul Faguet on Institutions and the Legacy of History

    One common feature of complex systems is sensitive dependence on initial conditions: a small change in how systems begin evolving can lead to large differences in their later behavior. In the social sphere, this is a way of saying that history matters. But it can be hard to quantify how much certain specific historical events have affected contemporary conditions, because the number of variables is so large and their impacts are so interdependent. Political economist Jean-Paul Faguet and collaborators have examined one case where we can closely measure the impact today of events from centuries ago: how Colombian communities are still affected by 16th-century encomienda, a colonial forced-labor institution. We talk about this and other examples of the legacy of history.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/08/26/287-jean-paul-faguet-on-institutions-and-the-legacy-of-history/

    Jean-Paul Faguet received a Ph.D. in Political Economy and an M.Sc. in Economics from the London School of Economics, and a Master of Public Policy from the Kennedy School of Government at Harvard. He is currently Professor of the Political Economy of Development at LSE. He serves as the Chair of the Decentralization Task Force for the Initiative for Policy Dialogue. Among his awards is the W.J.M. Mackenzie Prize for best political science book.


    286 | Blaise Agüera y Arcas on the Emergence of Replication and Computation

    Understanding how life began on Earth involves questions of chemistry, geology, planetary science, physics, and more. But the question of how random processes lead to organized, self-replicating, information-bearing systems is a more general one. That question can be addressed in an idealized world of computer code, initialized with random sequences and left to run. Starting with many such random systems, and allowing them to mutate and interact, will we end up with "lifelike," self-replicating programs? A new paper by Blaise Agüera y Arcas and collaborators suggests that the answer is yes. This raises interesting questions about whether computation is an attractor in the space of relevant dynamical processes, with implications for the origin and ubiquity of life.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/08/19/286-blaise-aguera-y-arcas-on-the-emergence-of-replication-and-computation/

    Blaise Agüera y Arcas received a B.A. in physics from Princeton University. He is currently a vice-president of engineering at Google, leader of the Cerebra team, and a member of the Paradigms of Intelligence team. He is the author of the books Ubi Sunt and Who Are We Now?, and the upcoming What Is Intelligence?


    285 | Nate Silver on Prediction, Risk, and Rationality

    Being rational necessarily involves engagement with probability. Given two possible courses of action, it can be rational to prefer the one that could possibly result in a worse outcome, if there's also a substantial probability for an even better outcome. But one's attitude toward risk -- averse, tolerant, or even seeking -- also matters. Do we work to avoid the worst possible outcome, even if there is potential for enormous reward? Nate Silver has long thought about probability and prediction, from sports to politics to professional poker. In his new book On The Edge: The Art of Risking Everything, Silver examines a set of traits characterizing people who welcome risks.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/08/12/285-nate-silver-on-prediction-risk-and-rationality/

    Nate Silver received a B.A. in economics from the University of Chicago. He worked as a baseball analyst, developing the PECOTA statistical system (Player Empirical Comparison and Optimization Test Algorithm). He later founded the FiveThirtyEight political polling analysis site. His first book, The Signal and the Noise, was awarded the Phi Beta Kappa Society Book Award in Science. He is the co-host (with Maria Konnikova) of the Risky Business podcast.


    AMA | August 2024

    Welcome to the August 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/08/05/ama-august-2024/

    Support Mindscape on Patreon: https://www.patreon.com/seanmcarroll

    284 | Doris Tsao on How the Brain Turns Vision Into the World

    The human brain does a pretty amazing job of taking in a huge amount of data from multiple sensory modalities -- vision, hearing, smell, etc. -- and constructing a coherent picture of the world, constantly being updated in real time. (Although perhaps in discrete moments, rather than continuously, as we learn in this podcast...) We're a long way from completely understanding how that works, but amazing progress has been made in identifying specific parts of the brain with specific functions in this process. Today we talk to leading neuroscientist Doris Tsao about the specific workings of vision, from how we recognize faces to how we construct a model of the world around us.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/07/29/284-doris-tsao-on-how-the-brain-turns-vision-into-the-world/

    Doris Tsao received her Ph.D. in neurobiology from Harvard University. She is currently a professor of molecular and cell biology, and a member of the Helen Wills Neuroscience Institute, at the University of California, Berkeley. Among her awards are a MacArthur Fellowship, membership in the National Academy of Sciences, the Eppendorf and Science International Prize in Neurobiology, the National Institutes of Health Director’s Pioneer Award, the Golden Brain Award from the Minerva Foundation, the Perl-UNC Neuroscience Prize, and the Kavli Prize in Neuroscience.

    283 | Daron Acemoglu on Technology, Inequality, and Power

    Change is scary. But sometimes it can all work out for the best. There's no guarantee of that, however, even when the change in question involves the introduction of a powerful new technology. Today's guest, Daron Acemoglu, is a political economist who has long thought about the relationship between economics and political institutions. In his most recent book (with Simon Johnson), Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, he looks at how technological innovations affect the economic lives of ordinary people. We talk about how such effects are often for the worse, at least to start out, until better institutions are able to eventually spread the benefits more broadly.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/07/22/283-daron-acemoglu-on-technology-inequality-and-power/

    Daron Acemoglu received a Ph.D. in economics from the London School of Economics. He is currently Institute Professor at the Massachusetts Institute of Technology. He is a fellow of the National Academy of Sciences, the American Academy of Arts and Sciences, and the Econometric Society. Among his awards are the John Bates Clark Medal and the Nemmers Prize in Economics. In 2015, he was named the most cited economist of the past 10 years.


    282 | Joel David Hamkins on Puzzles of Reality and Infinity

    The philosophy of mathematics would be so much easier if it weren't for infinity. The concept seems natural, but taking it seriously opens the door to counterintuitive results. As mathematician and philosopher Joel David Hamkins says in this conversation, when we say that the natural numbers are "0, 1, 2, 3, and so on," that "and so on" is hopelessly vague. We talk about different ways to think about the puzzles of infinity, how they might be resolved, and implications for mathematical realism.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/07/15/282-joel-david-hamkins-on-puzzles-of-reality-and-infinity/

    Support Mindscape on Patreon.

    Joel David Hamkins received his Ph.D. in mathematics from the University of California, Berkeley. He is currently the John Cardinal O'Hara Professor of Logic at the University of Notre Dame. He is a pioneer of the idea of the set theory multiverse. He is the top-rated user by reputation score on MathOverflow. He is currently working on The Book of Infinity, to be published by MIT Press.


    Ask Me Anything | July 2024

    Welcome to the July 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/07/08/ama-july-2024/

    Related Episodes

    23 | Lisa Aziz-Zadeh on Embodied Cognition, Mirror Neurons, and Empathy

    Brains are important things; they're where thinking happens. Or are they? The theory of "embodied cognition" posits that it's better to think of thinking as something that takes place in the body as a whole, not just in the cells of the brain. In some sense this is trivially true; our brains interact with the rest of our bodies, taking in signals and giving back instructions. But it seems bold to situate important elements of cognition itself in the actual non-brain parts of the body. Lisa Aziz-Zadeh is a psychologist and neuroscientist who uses imaging technologies to study how different parts of the brain and body are involved in different cognitive tasks. We talk a lot about mirror neurons, those brain cells that light up both when we perform an action ourselves and when we see someone else performing the action. Understanding how these cells work could be key to a better view of empathy and interpersonal interactions.

    Lisa Aziz-Zadeh is an Associate Professor in the Brain and Creativity Institute and the Department of Occupational Science at the University of Southern California. She received her Ph.D. in psychology from UCLA, and has also done research at the University of Parma and the University of California, Berkeley.

    17 | Annalee Newitz on Science, Fiction, Economics, and Neurosis

    The job of science fiction isn't to predict the future; it's to tell interesting stories in an imaginative setting, exploring the implications of different ways the world could be different from our actual one. Annalee Newitz has carved out a unique career as a writer and thinker, founding the visionary blog io9 and publishing nonfiction in a number of formats, and is now putting her imagination to work in the realm of fiction. Her recent novel, Autonomous, examines a future in which the right to work is not automatic, rogue drug pirates synthesize compounds to undercut Big Pharma, and sentient robots discover their sexuality. We talk about how science fiction needs more economics, how much of human behavior comes down to dealing with our neuroses, and what it's like to make the transition from writing non-fiction to fiction.

    Annalee Newitz is currently an Editor at Large at Ars Technica. She received her Ph.D. in English and American Studies from UC Berkeley. She founded and edited io9, which later merged with Gizmodo, where she also served as editor. She and Charlie Jane Anders host the podcast Our Opinions Are Correct, a bi-weekly exploration of the meaning of science fiction.

    44 | Antonio Damasio on Feelings, Thoughts, and the Evolution of Humanity

    When we talk about the mind, we are constantly talking about consciousness and cognition. Antonio Damasio wants us to talk about our feelings. But it’s not in an effort to be more touchy-feely; Damasio, one of the world’s leading neuroscientists, believes that feelings generated by the body are a crucial part of how we achieve and maintain homeostasis, which in turn is a key driver in understanding who we are. His most recent book, The Strange Order of Things: Life, Feeling, and the Making of Cultures, is an ambitious attempt to trace the role of feelings and our biological impulses in the origin of life, the nature of consciousness, and our flourishing as social, cultural beings.

    Support Mindscape on Patreon or Paypal.

    Antonio Damasio received his M.D. and Ph.D. from the University of Lisbon, Portugal. He is currently University Professor, David Dornsife Professor of Neuroscience, Professor of Psychology, Professor of Philosophy, and (along with his wife and frequent collaborator, Prof. Hannah Damasio) Director of the Brain and Creativity Institute at the University of Southern California. He is also an adjunct professor at the Salk Institute in La Jolla, California. He is a member of the American Academy of Arts and Sciences, the National Academy of Medicine, and the European Academy of Sciences and Arts. Among his numerous awards are the Grawemeyer Award, the Honda Prize, the Prince of Asturias Award in Science and Technology, and the Beaumont Medal from the American Medical Association.

    255 | Michael Muthukrishna on Developing a Theory of Everyone

    A "Theory of Everything" is physicists' somewhat tongue-in-cheek phrase for a hypothetical model of all the fundamental physical interactions. Of course, even if we had such a theory, it would tell us nothing new about higher-level emergent phenomena, all the way up to human behavior and society. Can we even imagine a "Theory of Everyone," providing basic organizing principles for society? Michael Muthukrishna believes we can, and indeed that we can see the outlines of such a theory emerging, based on the relationships of people to each other and to the physical resources available.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/10/30/255-michael-muthukrishna-on-developing-a-theory-of-everyone/

    Support Mindscape on Patreon.

    Michael Muthukrishna received his Ph.D. in psychology from the University of British Columbia. He is currently Associate Professor of Economic Psychology at the London School of Economics and Political Science. Among his awards are an Emerging Scholar Award from the Society for Personality and Social Psychology and a Dissertation Excellence Award from the Canadian Psychological Association. His new book is A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We're Going.


    216 | John Allen Paulos on Numbers, Narratives, and Numeracy

    People have a complicated relationship to mathematics. We all use it in our everyday lives, from calculating a tip at a restaurant to estimating the probability of some future event. But many people find the subject intimidating, if not off-putting. John Allen Paulos has long been working to make mathematics more approachable and encourage people to become more numerate. We talk about how people think about math, what kinds of math they should know, and the role of stories and narrative to make math come alive. 

    Support Mindscape on Patreon.

    John Allen Paulos received his Ph.D. in mathematics from the University of Wisconsin, Madison. He is currently a professor of mathematics at Temple University. He is a bestselling author and a frequent contributor to publications such as ABCNews.com, the Guardian, and Scientific American. Among his awards are the Science Communication award from the American Association for the Advancement of Science and the Mathematics Communication Award from the Joint Policy Board of Mathematics. His new book is Who’s Counting? Uniting Numbers and Narratives with Stories from Pop Culture, Puzzles, Politics, and More.

