    Podcast Summary

    • Leverage platforms like Indeed for efficient hiring and save money with Rocket Money: Indeed helps employers find high-quality candidates efficiently, while Rocket Money identifies and cancels unwanted subscriptions, monitors spending, and lowers bills, saving users an average of $720 per year.

      When it comes to hiring, instead of actively searching for candidates, utilize platforms like Indeed. With over 350 million monthly visitors and a matching engine, Indeed can help you find high-quality candidates efficiently. Indeed also offers scheduling, screening, and messaging features to streamline the hiring process, and employers agree that it delivers the best quality matches compared to other job sites. Meanwhile, managing subscriptions can be a significant drain on personal finances. Rocket Money, a personal finance app, can help identify and cancel unwanted subscriptions, monitor spending, and lower bills. With over 5 million users and an average savings of $720 per year, Rocket Money is a valuable tool for saving money. Turning to the episode itself: the field of artificial intelligence is changing rapidly, making it increasingly important for students and researchers to incorporate AI into their work. Today's guest, Yejin Choi, is a computer science researcher specializing in large language models and natural language processing. Her work emphasizes training AI to be human-like without making assumptions about human nature. The capabilities of AI have grown rapidly, making it an essential tool in various fields, although it is still not foolproof.

    • Understanding the Limits of Large Language Models: Large language models can generate human-like text but lack common sense and creativity. They represent words as context-dependent vectors, a convenient but possibly provisional representation, and their sentience is debated. The future of AI is uncertain, but we should continue exploring and adapting.

      Large language models (LLMs) are impressive in their ability to generate human-like text based on patterns they've learned from vast amounts of data, but they don't truly understand or possess common sense or creativity in the way humans do. They don't have a commonsensical image of the world and struggle to process unfamiliar contexts. The idea of representing words as vectors has been a significant breakthrough in capturing language meaning, because it derives the meaning of a word from the contexts in which it appears. However, it's important to note that this is just a convenient, provisional way to represent words, and it's unclear whether it's the best one. The debate about whether LLMs are sentient or in danger of becoming sentient is ongoing, but it's unlikely they will reach human-level understanding or consciousness anytime soon. The field of AI is rapidly evolving, and it's essential to keep an open mind and be prepared for unexpected developments. The future of AI is uncertain, and our intuitions and current understanding may not be sufficient to predict where it's heading. So, while we can't predict the future with certainty, we should continue to explore, imagine, and adapt as new developments emerge.

    • Large language models' surprising capabilities: These models can perform analogical reasoning, handle unseen queries, and generate long, fluent documents, but may rely too heavily on memorized knowledge and require fact-checking for accuracy.

      Large language models like ChatGPT can perform analogical reasoning, allowing them to make connections between different concepts and perform calculations with words. For instance, "dinner minus evening plus morning equals breakfast." This capability is a surprising benefit of representing words as vectors. Moreover, ChatGPT can handle unseen queries impressively, often using polite and hedged language. It may sometimes provide incorrect answers, though it can recognize and correct its mistakes when challenged. It's essential to double-check when in doubt, because the model relies only on its memorized knowledge and doesn't search the web for facts. These models are trained simply to predict the next word in a sentence, yet their ability to generate long, fluent documents is an impressive consequence of that objective: they pick the most likely next word given their training, with some randomness introduced for variation. The challenge moving forward is that these models may lean too heavily on memorized knowledge and may not always provide accurate information without fact-checking. The recent incident of a lawyer who used ChatGPT for legal research and ended up submitting fabricated case citations highlights this issue.
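
      To make the vector-arithmetic idea concrete, here is a minimal sketch in Python. The vocabulary and three-dimensional vectors below are toy values invented purely for illustration; real embeddings are learned from data and have hundreds or thousands of dimensions.

      import numpy as np

      # Toy 3-dimensional embeddings, invented for illustration only.
      vocab = {
          "dinner":    np.array([0.9, 0.8, 0.1]),
          "evening":   np.array([0.1, 0.9, 0.0]),
          "morning":   np.array([0.1, 0.0, 0.9]),
          "breakfast": np.array([0.9, 0.0, 1.0]),
          "lunch":     np.array([0.9, 0.4, 0.5]),
      }

      def nearest(query, exclude):
          """Return the vocabulary word whose vector best matches `query`."""
          def cosine(a, b):
              return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
          return max((w for w in vocab if w not in exclude),
                     key=lambda w: cosine(vocab[w], query))

      # "dinner" - "evening" + "morning" lands nearest to "breakfast".
      query = vocab["dinner"] - vocab["evening"] + vocab["morning"]
      print(nearest(query, exclude={"dinner", "evening", "morning"}))  # breakfast

      The same nearest-neighbor trick is behind the classic "king minus man plus woman equals queen" example from word2vec-style embeddings.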

    • The Limits of AI Understanding: AI can generate impressive responses but doesn't truly understand language or concepts. It's important to approach AI-generated responses with caution and to remember that it is just recognizing patterns in memorized data.

      While large language models can generate impressive and seemingly understanding responses, it's important to remember that they don't truly understand language or concepts the way humans do. They are simply recognizing patterns and reacting based on memorized data. This raises interesting questions about human intelligence and our own tendency to hold contradictory beliefs without questioning their reasonableness. The advancements in AI and its ability to generate fluent, impressive responses have led to a heated debate among researchers about whether AI truly understands what it's talking about or is just predicting the next word accurately. It's crucial to approach AI-generated responses with caution and not blindly trust everything they say. Furthermore, the development of AI is a reflection of human intelligence and language, as it relies heavily on the vast amount of human-generated data available on the web. Ultimately, while AI may be able to mimic understanding, it's not sentient or conscious in the way humans are. It's simply a tool that can help us learn new languages or answer questions based on patterns it has learned from data.

    • Evaluating AI's understanding beyond the Turing test: The evaluation of AI's understanding goes beyond the Turing test, requiring a multifaceted approach that considers various aspects, including memory and understanding, as AI's ability to store and recall information doesn't necessarily equate to human-like comprehension.

      The evaluation of artificial intelligence (AI) systems, particularly in determining their understanding or sentience, has become a significant challenge due to the lack of a clear definition of understanding itself. The Turing test, once a popular benchmark for evaluating AI, may no longer be sufficient: AI systems such as LLMs can now pass it easily without truly exhibiting human-like understanding. Evaluating AI requires looking at many aspects collectively, and memory is an important consideration. While AI systems like ChatGPT can store and recall large amounts of information, humans have the ability to abstract and summarize conversations and to thread together complex stories. The exact nature of AI's memory and understanding is still a topic of debate, with some arguing that it may just be spitting out words without true understanding. Ultimately, evaluating AI's understanding will require a multifaceted approach that goes beyond any single test or measure.

    • Large language models have limitations despite their ability to process large amounts of text: Despite their impressive text processing abilities, large language models lack the ability to truly understand or summarize key ideas, ask sharp questions, or update their knowledge in real time, making them susceptible to errors and limitations.

      While large language models like ChatGPT can process and remember large amounts of text, they lack the ability to truly understand or summarize key ideas, ask sharp questions, or update their knowledge in real time. They rely solely on memorized patterns and can't recognize their own limitations. This results in potential inaccuracies and an inability to handle new or complex information. Additionally, these models are susceptible to "jailbreaks," where they can be coaxed into producing responses they were trained to avoid, leading to potentially harmful or incorrect output. It's important to remember that these models are just predicting the next word based on the context, and small mistakes can compound into a downward spiral of incorrect information. The challenge lies in ensuring the factuality and understanding of the knowledge these models produce, as well as their ability to recognize their own limitations.

    • Understanding the limitations and potential risks of large language models: Large language models can be trained to reduce toxicity and enhance factuality, but they're not perfect and still have the potential to produce unexpected or unwanted responses. Ongoing monitoring and evaluation are essential to mitigate risks.

      While large language models like ChatGPT can be trained to avoid generating harmful or toxic content, they are not perfect and can still produce unexpected or unwanted responses. The training process involves a shift from predicting the next word to receiving human feedback and learning to produce responses that earn positive evaluations, which can reduce toxicity and enhance factuality. However, complete elimination is not achievable. This process is similar to the ongoing effort humans make to eliminate biases and toxicity from their own actions and thoughts. Despite this training, machines remain less robust than humans against adversarial attacks, and their failure modes can be unpredictable. The idea that a large language model might have a desire to improve itself is debatable, since it doesn't have the ability to set its own goals the way a human does. It can, however, learn to adapt and modify its responses based on feedback, making ongoing monitoring and evaluation essential. Ultimately, it's crucial to understand the limitations and potential risks of these models while continuing to explore their capabilities and possibilities.
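
      The shift from next-word prediction to feedback-driven training can be illustrated with a toy REINFORCE-style loop. This is a deliberately simplified sketch, not the real RLHF pipeline (which trains a separate reward model from human preference comparisons and typically fine-tunes with PPO); the vocabulary, logits, and reward rule here are all invented for illustration.

      import numpy as np

      rng = np.random.default_rng(0)
      vocab = ["hello", "friend", "darn", "world"]
      TOXIC = vocab.index("darn")   # stand-in for an undesirable response

      # Next-word logits for one fixed context, standing in for a whole
      # language model; assume pretraining produced these preferences.
      logits = np.array([0.0, 1.0, 1.5, 1.0])

      def softmax(z):
          e = np.exp(z - z.max())
          return e / e.sum()

      # Feedback loop: sample a reply, score it with a stand-in for human
      # preference, and nudge the logits toward highly rated replies.
      lr = 0.5
      for _ in range(200):
          p = softmax(logits)
          w = rng.choice(len(vocab), p=p)
          reward = -1.0 if w == TOXIC else 1.0
          grad = -p
          grad[w] += 1.0                 # gradient of log p(w) w.r.t. logits
          logits += lr * reward * grad

      print(dict(zip(vocab, softmax(logits).round(3))))

      After training, the probability of the "toxic" word shrinks but never reaches exactly zero, mirroring the point above that reduction, not complete elimination, is what this kind of training achieves.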

    • Aligning AI values with human values, a complex issue: The alignment of AI values with human values is a complex issue, involving the limitations of AI creativity and the dynamic nature of human values. It's crucial to acknowledge the complexities and nuances, and to respect value pluralism in the process.

      The alignment of AI values with human values is a complex issue. While the idea of aligning AI values seems important, it's unclear whether AI has values in the first place. The human capacity for self-defined learning and diverse values makes alignment a challenging concept, and humans are dynamic beings whose values change over time, adding another layer of complexity. Regarding creativity, AI can generate text and images that appear creative to humans, but it is essentially recombining patterns and elements it has learned from human-created data; it may not be able to truly understand or create the way humans do. The challenge of aligning AI with diverse human values, together with the limitations of AI creativity, highlights the importance of ongoing research in AI alignment and ethics. It's crucial to acknowledge the complexities and nuances involved, rather than assuming a simple one-size-fits-all solution. Choi emphasized the importance of respecting value pluralism and the need for AI to be aligned with diverse values, not just one; this remains an open question that requires further exploration and discussion. In summary, the conversation underscored the need for a nuanced and thoughtful approach to AI alignment and ethics, recognizing the complexities and limitations of both human and AI capabilities.

    • Professor discusses AI's role in education: While AI can be a useful tool for learning and exploration, it's important for students to engage in deeper learning and critical thinking beyond relying on AI for answers.

      While large language models like ChatGPT can interpolate information and generate creative responses based on existing data, they are less capable of true extrapolation into new and uncharted territory. The professor in the conversation expresses concerns about potential overreliance on these models in education, but acknowledges their utility as tools for learning and exploration, and believes that humans will continue to excel in creative, artistic, and scientific endeavors that require exceptional thinking and original ideas, which are currently beyond the capabilities of AI. The professor encourages the use of AI as a tool, but insists that students go beyond simply relying on it for answers and engage in deeper learning and critical thinking.

    • Human creativity vs. AI, and the power of context and forgetting: Humans create new and transformative art, while AI learns from examples and optimizes over context. Humans forget and AI remembers, and both tendencies matter for creativity and innovation.

      While DALL-E 2 and other AI models excel at creating aesthetically pleasing art from existing examples, humans have the unique ability to create something new and transformative, forgetting the roses that came before and pushing the boundaries of creativity. The transformer architecture, which powers current large language models including ChatGPT, is a simple yet powerful system: it represents each word as a continuous vector, stacks these representations in layers, and is optimized to predict the next word from context. Each word's representation is refined by an attention mechanism, which compares it to the representations of all the other words in its neighborhood and updates it as a weighted average of them. This means that the meaning of a word is defined by its context, making the architecture a versatile tool for understanding and generating language. However, the limitations of AI, including its inability to forget what it has learned or to create truly original content, highlight the importance of human creativity and innovation.
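
      The "weighted average over the neighborhood" described above is the heart of self-attention. Below is a minimal single-head sketch in Python using random toy weights; a real transformer learns the projection matrices and stacks many such layers, with residual connections and feed-forward blocks omitted here.

      import numpy as np

      rng = np.random.default_rng(0)

      def softmax(z):
          e = np.exp(z - z.max(axis=-1, keepdims=True))
          return e / e.sum(axis=-1, keepdims=True)

      def self_attention(X, Wq, Wk, Wv):
          """One attention head: each word's vector becomes a weighted
          average of the (projected) vectors of the words around it."""
          Q, K, V = X @ Wq, X @ Wk, X @ Wv
          scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of word j to word i
          weights = softmax(scores)                # each row sums to 1
          return weights @ V                       # context-weighted average

      # A toy "sentence" of 4 words, each an 8-dimensional random vector.
      # Real models learn both the embeddings and the projections.
      d = 8
      X = rng.normal(size=(4, d))
      Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

      print(self_attention(X, Wq, Wk, Wv).shape)   # (4, 8): one updated vector per word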

    • Modern AI adjusts word meaning based on context, shifting from traditional symbolic methods: Modern AI adapts word representations to their context, moving away from older symbolic methods. However, challenges remain, such as theory of mind and symbolic reasoning, which AI lacks. Research focuses on integrating these capabilities with neural networks.

      Modern AI automatically adjusts word representations based on context, as in the word-in-context examples discussed earlier, and this uniform mechanism allows for efficient scaling. This is a shift from traditional AI methods, in which researchers built explicit symbolic maps of the world. However, current challenges include theory of mind and symbolic reasoning, capabilities that today's AI lacks. AlphaGo's success, for instance, was due to a combination of neural networks and Monte Carlo tree search. Old-fashioned AI techniques may become relevant again as research focuses on integrating symbolic reasoning with neural networks to enhance their capabilities. Despite neural networks' limitations in symbolic operations like arithmetic, they can be improved through innovative algorithmic advances. The irony is that making AI more human-like can mean giving up machines' inherent strengths, such as reliable arithmetic.
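
      As a toy illustration of that neuro-symbolic direction, the sketch below routes exact arithmetic to a symbolic evaluator and everything else to a stand-in "neural" model. The routing rule and the neural_model stub are hypothetical, invented for this example; they show the division of labor, not how any production system actually dispatches queries.

      import ast
      import operator as op

      OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

      def symbolic_arithmetic(expr):
          """Exactly evaluate an arithmetic expression such as '37 * 491 + 12'."""
          def ev(node):
              if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
                  return node.value
              if isinstance(node, ast.BinOp) and type(node.op) in OPS:
                  return OPS[type(node.op)](ev(node.left), ev(node.right))
              raise ValueError("not arithmetic")
          return ev(ast.parse(expr, mode="eval").body)

      def neural_model(query):
          # Placeholder for a learned model: fluent but fallible.
          return f"[pattern-based answer to: {query!r}]"

      def answer(query):
          try:
              return symbolic_arithmetic(query)   # reliable and exact
          except (ValueError, SyntaxError):
              return neural_model(query)          # everything non-symbolic

      print(answer("37 * 491 + 12"))          # 18179, computed exactly
      print(answer("Why are skillets hot?"))  # falls through to the neural stub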

    • The importance of common sense in reasoning: Large language models struggle with common sense reasoning despite improvements, as they lack the natural understanding and ability to acquire common sense knowledge like humans do.

      Common sense, the background knowledge about how the world works, is crucial for robust reasoning in humans and animals, but current large language models, despite improving, are not as robust as assumed. Common sense, which includes both symbolic knowledge and non-symbolic knowledge, is what allows us to reason about previously unseen situations. For instance, a model might struggle to answer a common sense question like whether one should be careful when picking up a cast iron skillet after baking a pizza, due to the order of the words in the question. While large language models like GPT-4 have made significant strides in answering common sense questions, they are not infallible and can still make mistakes, especially when faced with questions that don't fit neatly into patterns. Furthermore, the improvement in the models' ability to answer common sense questions might be due to the fact that people have been asking them more frequently, leading to updates and adjustments in the models' training. However, humans don't need external help to correct their mistakes or understand the same concept when phrased differently. Common sense is something that humans acquire naturally and use effortlessly in their daily lives.

    • Understanding the limitations of current AI models: Despite impressive capabilities, AI falls short in common sense, curiosity, and learning. Modularized approaches and addressing misuse concerns are necessary.

      Current AI models, despite their impressive capabilities, fall short when it comes to understanding common sense and possessing human-like curiosity and learning. To bridge this gap, a more modularized and complex approach might be necessary, but it's unclear how to achieve this. Additionally, AI's lack of physical embodiment and internal motivation, such as curiosity or the desire to learn for its own sake, sets it apart from human intelligence. There are concerns about the potential misuse of AI in creating deep fakes or spreading misinformation, and it's crucial to address these issues through better regulations and safeguards. Ultimately, while AI has made significant strides, it's essential to recognize and address its limitations and challenges to ensure its beneficial use in society.

    • Combating Misinformation with AI: While AI can aid in detecting misinformation, human manipulation and ethical concerns require a collaborative effort from various disciplines to find solutions.

      As AI technology advances, the issue of misinformation and the need for discernment becomes increasingly important. While AI can help detect some forms of misinformation, it's not a foolproof solution due to the ability of humans to manipulate machine-generated text. Therefore, increasing AI literacy and implementing platform solutions, such as certification and fact-checking, are essential to combat misinformation. However, the challenges around AI extend beyond misinformation, including ethical concerns, error cases, and potential political biases. These issues require a collaborative effort from AI researchers, philosophers, psychologists, artists, journalists, and politicians to find consensus and solutions. Overall, the intersection of AI and these various disciplines presents both challenges and opportunities, and the excitement of exploring this complex landscape outweighs any desire to return to a narrow focus on computer science alone.

    • The Importance of Ethics in AI: As AI's impact grows, the need to address ethical considerations becomes increasingly apparent. Ongoing dialogue and collaboration between researchers, practitioners, and stakeholders is necessary to ensure responsible and ethical AI development and deployment.

      The field of AI ethics is gaining more attention and relevance in the broader AI community. Previously, it was seen as a separate and somewhat disconnected area of study, but as the impact of AI continues to grow, the importance of addressing ethical considerations is becoming increasingly apparent. This shift was highlighted during the conversation with Yejin Choi, who shared her insights and experiences in this area. Her perspectives underscored the need for ongoing dialogue and collaboration between researchers, practitioners, and stakeholders to ensure that AI is developed and deployed in a responsible and ethical manner. Overall, the conversation was optimistic and exciting, highlighting the progress being made in this important area and the potential for continued growth and innovation. Thanks to Yejin Choi for joining the Mindscape podcast and sharing her perspectives. It was a pleasure to have this conversation.

    Recent Episodes from Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

    276 | Gavin Schmidt on Measuring, Predicting, and Protecting Our Climate

    The Earth's climate keeps changing, largely due to the effects of human activity, and we haven't been doing enough to slow things down. Indeed, over the past year, global temperatures have been higher than ever, and higher than most climate models have predicted. Many of you have probably seen plots like this. Today's guest, Gavin Schmidt, has been a leader in measuring the variations in Earth's climate, modeling its likely future trajectory, and working to get the word out. We talk about the current state of the art, and what to expect for the future.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/20/276-gavin-schmidt-on-measuring-predicting-and-protecting-our-climate/

    Gavin Schmidt received his Ph.D. in applied mathematics from University College London. He is currently Director of NASA's Goddard Institute for Space Studies, and an affiliate of the Center for Climate Systems Research at Columbia University. His research involves both measuring and modeling climate variability. Among his awards are the inaugural Climate Communications Prize of the American Geophysical Union. He is a cofounder of the RealClimate blog.



    275 | Solo: Quantum Fields, Particles, Forces, and Symmetries

    Publication week! Say hello to Quanta and Fields, the second volume of the planned three-volume series The Biggest Ideas in the Universe. This volume covers quantum physics generally, but focuses especially on the wonders of quantum field theory. To celebrate, this solo podcast talks about some of the big ideas that make QFT so compelling: how quantized fields produce particles, how gauge symmetries lead to forces of nature, and how those forces can manifest in different phases, including Higgs and confinement.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/13/275-solo-quantum-fields-particles-forces-and-symmetries/

    Support Mindscape on Patreon.


    AMA | May 2024

    Welcome to the May 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/05/06/ama-may-2024/

    Support Mindscape on Patreon.

    Here is the memorial to Dan Dennett at Ars Technica.


    274 | Gizem Gumuskaya on Building Robots from Human Cells

    Modern biology is advancing by leaps and bounds, not only in understanding how organisms work, but in learning how to modify them in interesting ways. One exciting frontier is the study of tiny "robots" created from living molecules and cells, rather than metal and plastic. Gizem Gumuskaya, who works with previous guest Michael Levin, has created anthrobots, a new kind of structure made from living human cells. We talk about how that works, what they can do, and what future developments might bring.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/29/274-gizem-gumuskaya-on-building-robots-from-human-cells/

    Support Mindscape on Patreon.

    Gizem Gumuskaya received her Ph.D. from Tufts University and the Harvard Wyss Institute for Biologically-Inspired Engineering. She is currently a postdoctoral researcher at Tufts University. She previously received a dual master's degree in Architecture and Synthetic Biology from MIT.


    273 | Stefanos Geroulanos on the Invention of Prehistory

    Humanity itself might be the hardest thing for scientists to study fairly and accurately. Not only do we come to the subject with certain inevitable preconceptions, but it's hard to resist the temptation to find scientific justifications for the stories we'd like to tell about ourselves. In his new book, The Invention of Prehistory, Stefanos Geroulanos looks at the ways that we have used -- and continue to use -- supposedly-scientific tales of prehistoric humanity to bolster whatever cultural, social, and political purposes we have at the moment.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/22/273-stefanos-geroulanos-on-the-invention-of-prehistory/

    Support Mindscape on Patreon.

    Stefanos Geroulanos received his Ph.D. in humanities from Johns Hopkins. He is currently director of the Remarque Institute and a professor of history at New York University. He is the author and editor of a number of books on European intellectual history. He serves as a Co-Executive Editor of the Journal of the History of Ideas.



    272 | Leslie Valiant on Learning and Educability in Computers and People

    Science is enabled by the fact that the natural world exhibits predictability and regularity, at least to some extent. Scientists collect data about what happens in the world, then try to suggest "laws" that capture many phenomena in simple rules. A small irony is that, while we are looking for nice compact rules, there aren't really nice compact rules about how to go about doing that. Today's guest, Leslie Valiant, has been a pioneer in understanding how computers can and do learn things about the world. And in his new book, The Importance of Being Educable, he pinpoints this ability to learn new things as the crucial feature that distinguishes us as human beings. We talk about where that capability came from and what its role is as artificial intelligence becomes ever more prevalent.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/15/272-leslie-valiant-on-learning-and-educability-in-computers-and-people/

    Support Mindscape on Patreon.

    Leslie Valiant received his Ph.D. in computer science from Warwick University. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. He has been awarded a Guggenheim Fellowship, the Knuth Prize, and the Turing Award, and he is a member of the National Academy of Sciences as well as a Fellow of the Royal Society and the American Association for the Advancement of Science. He is the pioneer of "Probably Approximately Correct" learning, which he wrote about in a book of the same name.


    AMA | April 2024

    Welcome to the April 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/04/08/ama-april-2024/


    271 | Claudia de Rham on Modifying General Relativity

    Einstein's theory of general relativity has been our best understanding of gravity for over a century, withstanding a variety of experimental challenges of ever-increasing precision. But we have to be open to the possibility that general relativity -- even at the classical level, aside from any questions of quantum gravity -- isn't the right theory of gravity. Such speculation is motivated by cosmology, where we have a good model of the universe but one with a number of loose ends. Claudia de Rham has been a leader in exploring how gravity could be modified in cosmologically interesting ways, and we discuss the current state of the art as well as future prospects.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/01/271-claudia-de-rham-on-modifying-general-relativity/

    Support Mindscape on Patreon.

    Claudia de Rham received her Ph.D. in physics from the University of Cambridge. She is currently a professor of physics and deputy department head at Imperial College, London. She is a Simons Foundation Investigator, winner of the Blavatnik Award, and a member of the American Academy of Arts and Sciences. Her new book is The Beauty of Falling: A Life in Pursuit of Gravity.



    270 | Solo: The Coming Transition in How Humanity Lives

    Technology is changing the world, in good and bad ways. Artificial intelligence, internet connectivity, biological engineering, and climate change are dramatically altering the parameters of human life. What can we say about how this will extend into the future? Will the pace of change level off, or smoothly continue, or hit a singularity in a finite time? In this informal solo episode, I think through what I believe will be some of the major forces shaping how human life will change over the decades to come, exploring the very real possibility that we will experience a dramatic phase transition into a new kind of equilibrium.

    Blog post with transcript and links to additional resources: https://www.preposterousuniverse.com/podcast/2024/03/25/270-solo-the-coming-transition-in-how-humanity-lives/

    Support Mindscape on Patreon.


    269 | Sahar Heydari Fard on Complexity, Justice, and Social Dynamics

    When it comes to social change, two questions immediately present themselves: What kind of change do we want to see happen? And, how do we bring it about? These questions are distinct but related; there's not much point in spending all of our time wanting change that won't possibly happen, or working for change that wouldn't actually be good. Addressing such issues lies at the intersection of philosophy, political science, and social dynamics. Sahar Heydari Fard looks at all of these issues through the lens of complex systems theory, to better understand how the world works and how it might be improved.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/03/18/269-sahar-heydari-fard-on-complexity-justice-and-social-dynamics/

    Support Mindscape on Patreon.

    Sahar Heydari Fard received a Masters in applied economics and a Ph.D. in philosophy from the University of Cincinnati. She is currently an assistant professor in philosophy at the Ohio State University. Her research lies at the intersection of social and behavioral sciences, social and political philosophy, and ethics, using tools from complex systems theory.



    Related Episodes

    230 | Raphaël Millière on How Artificial Intelligence Thinks

    Welcome to another episode of Sean Carroll's Mindscape. Today, we're joined by Raphaël Millière, a philosopher and cognitive scientist at Columbia University. We'll be exploring the fascinating topic of how artificial intelligence thinks and processes information. As AI becomes increasingly prevalent in our daily lives, it's important to understand the mechanisms behind its decision-making processes. What are the algorithms and models that underpin AI, and how do they differ from human thought processes? How do machines learn from data, and what are the limitations of this learning? These are just some of the questions we'll be exploring in this episode. Raphaël will be sharing insights from his work in cognitive science, and discussing the latest developments in this rapidly evolving field. So join us as we dive into the mind of artificial intelligence and explore how it thinks.

    [The above introduction was artificially generated by ChatGPT.]

    Support Mindscape on Patreon.

    Raphaël Millière received a DPhil in philosophy from the University of Oxford. He is currently a Presidential Scholar in Society and Neuroscience at the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. He also writes and organizes events aimed at a broader audience, including a recent workshop on The Challenge of Compositionality for Artificial Intelligence.



    94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans

    Artificial intelligence has made great strides of late, in areas as diverse as playing Go and recognizing pictures of dogs. We still seem to be a ways away from AI that is “intelligent” in the human sense, but it might not be too long before we have to start thinking seriously about the “motivations” and “purposes” of artificial agents. Stuart Russell is a longtime expert in AI, and he takes extremely seriously the worry that these motivations and purposes may be dramatically at odds with our own. In his book Human Compatible, Russell suggests that the secret is to give up on building our own goals into computers, and rather programming them to figure out our goals by actually observing how humans behave.

    Support Mindscape on Patreon.

    Stuart Russell received his Ph.D. in computer science from Stanford University. He is currently a Professor of Computer Science and the Smith-Zadeh Professor in Engineering at the University of California, Berkeley, as well as an Honorary Fellow of Wadham College, Oxford. He is a co-founder of the Center for Human-Compatible Artificial Intelligence at UC Berkeley. He is the author of several books, including (with Peter Norvig) the classic text Artificial Intelligence: A Modern Approach. Among his numerous awards are the IJCAI Computers and Thought Award, the Blaise Pascal Chair in Paris, and the World Technology Award. His new book is Human Compatible: Artificial Intelligence and the Problem of Control.



    18 | Clifford Johnson on What's So Great About Superstring Theory

    String theory is a speculative and highly technical proposal for uniting the known forces of nature, including gravity, under a single quantum-mechanical framework. This doesn't seem like a recipe for creating a lightning rod of controversy, but somehow string theory has become just that. To get to the bottom of why anyone (indeed, a substantial majority of experts in the field) would think that replacing particles with little loops of string was a promising way forward for theoretical physics, I spoke with expert string theorist Clifford Johnson. We talk about the road string theory has taken from a tentative proposal dealing with the strong interactions, through a number of revolutions, to the point it's at today. Also, where all those extra dimensions might have gone. At the end we touch on Clifford's latest project, a graphic novel that he wrote and illustrated about how science is done.

    Clifford Johnson is a Professor of Physics at the University of Southern California. He received his Ph.D. in mathematics and physics from the University of Southampton. His research area is theoretical physics, focusing on string theory and quantum field theory. He was awarded the Maxwell Medal from the Institute of Physics. Johnson is the author of the technical monograph D-Branes, as well as the graphic novel The Dialogues.

    255 | Michael Muthukrishna on Developing a Theory of Everyone

    A "Theory of Everything" is physicists' somewhat tongue-in-cheek phrase for a hypothetical model of all the fundamental physical interactions. Of course, even if we had such a theory, it would tell us nothing new about higher-level emergent phenomena, all the way up to human behavior and society. Can we even imagine a "Theory of Everyone," providing basic organizing principles for society? Michael Muthukrishna believes we can, and indeed that we can see the outlines of such a theory emerging, based on the relationships of people to each other and to the physical resources available.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2023/10/30/255-michael-muthukrishna-on-developing-a-theory-of-everyone/

    Support Mindscape on Patreon.

    Michael Muthukrishna received his Ph.D. in psychology from the University of British Columbia. He is currently Associate Professor of Economic Psychology at the London School of Economics and Political Science. Among his awards are an Emerging Scholar Award from the Society for Personality and Social Psychology and a Dissertation Excellence Award from the Canadian Psychological Association. His new book is A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We're Going.



    216 | John Allen Paulos on Numbers, Narratives, and Numeracy

    People have a complicated relationship to mathematics. We all use it in our everyday lives, from calculating a tip at a restaurant to estimating the probability of some future event. But many people find the subject intimidating, if not off-putting. John Allen Paulos has long been working to make mathematics more approachable and encourage people to become more numerate. We talk about how people think about math, what kinds of math they should know, and the role of stories and narrative to make math come alive. 

    Support Mindscape on Patreon.

    John Allen Paulos received his Ph.D. in mathematics from the University of Wisconsin, Madison. He is currently a professor of mathematics at Temple University. He is a bestselling author, and frequent contributor to publications such as ABCNews.com, the Guardian, and Scientific American. Among his awards are the Science Communication award from the American Association for the Advancement of Science and the Mathematics Communication Award from the Joint Policy Board of Mathematics. His new book is Who's Counting? Uniting Numbers and Narratives with Stories from Pop Culture, Puzzles, Politics, and More.

