
    Podcast Summary

    • Ethical Dilemmas in Programming Autonomous Machines: As technology advances, it's crucial to consider ethical dilemmas and program moral decision-making processes for autonomous machines to minimize harm.

      As we advance in technology, particularly in the development of autonomous machines like self-driving cars, we face ethical dilemmas that require programming moral decision-making processes. These machines will make choices that impact human lives, and it's crucial to determine which actions are permissible. Philosopher Derek Leben, in his book "Ethics for Robots," explores these complex issues, drawing on concepts from moral philosophy and economics. The trolley problem, for instance, raises the question of whether it's acceptable to harm a smaller number of people in order to prevent greater harm to a larger number. As machines make increasingly complex decisions, it's essential to confront these dilemmas and program moral decision-making processes that minimize harm.

    • Establishing rules for effective programming in AI and robotics: Asimov's three laws are a starting point for ethical programming, but they have limitations and ambiguities. Recognizing the impact of our normative assumptions is crucial for creating a robust ethical framework in AI and robotics.

      As we delve into the world of programming machines and artificial intelligence, it's essential to establish clear rules and guidelines. The use of terms like "decisions" or "responsible" for machines might be debated, but the focus should be on creating effective programming. Asimov's three laws are a starting point, but they have limitations and ambiguities, especially when dealing with complex moral dilemmas. The idea that a robot cannot allow harm through inaction is impractical, and philosophers like Peter Singer have shown that almost every human action has an impact on others. As we move forward, it's crucial to recognize that every choice made in programming reveals our normative assumptions and that ethics cannot be entirely avoided. Ultimately, the challenge is to create a robust framework that can navigate the complex moral landscape of AI and robotics.

    • Exploring moral algorithms for robots: Beyond vague concepts. We need to look beyond vague moral concepts and seek precise, quantitative approaches for developing moral algorithms for robots, possibly drawing inspiration from human judgments and historical moral theories.

      As we explore the development of moral algorithms for robots, we need to move beyond vague concepts like "being excellent to each other" and instead look for more precise, quantitative approaches. Virtue ethics, which suggests doing whatever a noble person would do, may not be specific enough for this purpose. Instead, we can look to human judgments or historical moral theories such as utilitarianism, libertarianism, and contractarianism for guidance. However, even these theories may not be objective or universally applicable. It's important to remember that our moral intuitions may not have a single, clear evolutionary function, and there may be multiple internally consistent sets of moral rules that could be derived from them. Ultimately, we need to strive for greater clarity and objectivity in defining what it means to be a moral person, both for robots and for humans.

    • Resolving disagreements between utilitarian and Kantian perspectives: Moral dilemmas can be understood as a set of hypothetical imperatives based on shared goals, leading to objective answers within these domains. Outside of shared goals, it's impossible to determine one state as better or worse, and moral systems can be built based on what we agree upon, acknowledging the historical contingency and complexity of human beings.

      The disagreement between a utilitarian and a Kantian perspective on moral dilemmas can be resolved by considering the evolutionary function of the intuitions each is drawing on and seeking a unified framework that aligns with their original goals. Morality, according to the speaker, can be understood as a set of hypothetical imperatives based on shared goals, and objective answers to moral questions can be found within these domains. Outside of these shared goals, however, it's impossible to determine one state as better or worse than another. The speaker agrees with this perspective and believes that moral systems can be built based on what we agree upon, acknowledging that our moral goals are historically contingent and depend on human beings as complex, evolving creatures. The conversation also touched on the metaethical question of whether moral rules are objectively real, with the speaker expressing agreement with the idea that moral rules depend on shared goals and historical contingencies. Despite their differing perspectives on metaethics, the speaker and their interlocutor found common ground on many issues.

    • Designing self-driving cars requires moral theories to handle ethical dilemmas: Moral theories like utilitarianism guide ethical decision-making in self-driving cars, helping promote cooperative behavior and address complex moral dilemmas like the trolley problem.

      When designing self-driving cars, ethical dilemmas will arise, and we need a moral theory to guide the decision-making process. These theories aim to promote cooperative behavior among self-interested organisms. The trolley problem is an example of a moral dilemma, where one must choose between causing harm to one person and allowing harm to come to multiple people. While some may argue that cars won't face such complex ethical issues, moral reasoning is essential for making everyday decisions that impact others. For instance, buying a cappuccino or watching Netflix involves weighing personal pleasure against the happiness and suffering of others. The trolley problem highlights the differences between various moral theories, such as utilitarianism, which prioritizes the greatest good for the greatest number, and those that consider intentions or physical intrusion into personal space. Ultimately, a moral theory that effectively promotes cooperative behavior should be the foundation for ethical decision-making in self-driving car technology.
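A utilitarian rule of the kind described above can be sketched in a few lines. The scenario names and death counts below are the standard stand-ins from the thought experiment, not anything specified in the episode:

```python
# Utilitarian evaluation of the classic trolley problem: pick the action with
# the least total harm, regardless of whether harm is caused or merely allowed.
outcomes = {
    "do_nothing": {"deaths": 5},  # trolley continues onto the track with five people
    "pull_lever": {"deaths": 1},  # trolley is diverted onto the track with one person
}

choice = min(outcomes, key=lambda action: outcomes[action]["deaths"])
print(choice)  # pull_lever
```

A deontological rule would need extra structure here (e.g. a flag distinguishing harm caused from harm allowed), which is exactly where the theories diverge.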

    • Moral theories can lead to inconsistent moral intuitions: Moral theories like deontology and utilitarianism have inconsistencies, and contractarianism acknowledges the complexity of moral judgments.

      Moral theories, such as deontology, can lead to inconsistent moral intuitions when applied to real-life situations. Deontologists, who focus on rights and duties, often argue that it's not wrong to stand by and allow harm to occur, as opposed to actively causing it (positive vs negative obligations). However, when faced with concrete situations, people's moral judgments can be inconsistent and sensitive to factors like genetic relatedness. The famous trolley problem is an example of this, where people's moral judgments can change based on the situation and who is affected. Utilitarianism, another moral theory, also has its quirks, and no consistent moral theory will align perfectly with our moral intuitions all the time. I advocate for a moral theory called contractarianism, which holds that moral rules should be based on what individuals would agree to in a hypothetical, fair social contract. While not without its own challenges, this theory acknowledges the complexity and inconsistency of moral judgments and offers a framework for understanding moral dilemmas.

    • Understanding Primary Goods in John Rawls' Theory of Justice: John Rawls' theory of justice emphasizes the importance of primary goods like life, health, and essential resources for a fair and just society. Individuals, when making decisions, should prioritize ensuring the least advantaged have access to these goods.

      A key lesson from John Rawls' theory of justice is that all human beings, regardless of individual differences, value certain primary goods such as life, health, and essential resources for survival. These goods form the foundation for pursuing any goal, making them essential for a fair and just society. Rawls proposed that individuals, placed in an original position of uncertainty about their own circumstances, would agree to distribute these goods so as to ensure the well-being of the least advantaged. While Rawls primarily focused on using this theory for designing a fair society, it can also be applied to individual decision-making, helping determine which actions are wrong or permissible. This theory emphasizes fairness and equality, aiming to make society as just as possible for everyone.

    • Moral constraints in Rawls' theory of justice don't eliminate all disagreements about values: Religious beliefs, among other factors, can shape moral theories and influence how we think about justice, even within the framework of Rawls' theory.

      Within the framework of John Rawls' theory of justice, there are moral constraints that limit the range of acceptable distributions of primary goods. However, beyond this constraint, there can be disagreements about values. Religious beliefs, for instance, might influence one's values and lead to different moral theories. While Rawls' original position asks us to set aside certain aspects of ourselves, it doesn't necessarily mean that religious convictions are irrelevant to ethics. Instead, they might shape how we think about the world and influence our moral beliefs. Ultimately, even in a diverse society, there will be conflicts between beliefs, and some may not be respected by the institution or other individuals.

    • Understanding Cooperation in a Liberal Democratic Society: Game theory's concept of Nash equilibria helps us evaluate theories and find common ground in a diverse society, promoting cooperative behavior and resolving moral dilemmas.

      Despite our different religious or moral beliefs, we need to strive for equality and cooperation in a liberal democratic society. This means making compromises and finding common ground based on universal values. Game theory, specifically the concept of Nash equilibria, provides a useful tool for understanding and promoting cooperative behavior. This relatively new framework emerged over the last 50-60 years and can help us evaluate theories and resolve tensions between different moral frameworks. In the prisoner's dilemma, for instance, it might seem intuitive to stay quiet, but each self-interested player's equilibrium strategy is to confess, even though both would be better off if both stayed quiet. Reasoning of this kind, along with the Maximin principle of choosing the option whose worst outcome is least bad, can guide us in designing moral frameworks for machines and living together in a cooperative and equitable society.
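The prisoner's dilemma mentioned above can be checked mechanically. A minimal sketch that finds the pure-strategy Nash equilibria; the prison-sentence payoffs are the conventional illustrative numbers, not figures from the episode:

```python
from itertools import product

# payoffs[(row_move, col_move)] = (row player's utility, column player's utility),
# expressed as negative years in prison (illustrative values).
payoffs = {
    ("quiet", "quiet"):     (-1, -1),
    ("quiet", "confess"):   (-10, 0),
    ("confess", "quiet"):   (0, -10),
    ("confess", "confess"): (-5, -5),
}
moves = ["quiet", "confess"]

def is_nash(r, c):
    """(r, c) is a Nash equilibrium if neither player gains by deviating alone."""
    row_best = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0] for alt in moves)
    col_best = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1] for alt in moves)
    return row_best and col_best

equilibria = [(r, c) for r, c in product(moves, moves) if is_nash(r, c)]
print(equilibria)  # [('confess', 'confess')]
```

Note that ("quiet", "quiet") gives both players a better payoff than the equilibrium, which is precisely the tension between individual rationality and cooperation the discussion is about.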

    • Pareto Improvements and the Cooperation Problem: Pareto improvements lead to mutually beneficial changes, but self-interested agents may not make them, causing a cooperation problem. External intervention is often necessary to ensure Pareto improvements.

      The concept of Pareto improvement illustrates situations where everyone could benefit from a change, but self-interested rational agents may not make that change due to the Nash equilibrium. This leads to a cooperation problem, which can result in suboptimal outcomes for everyone. Economist Vilfredo Pareto introduced this concept, and it's the foundation for identifying improvements that benefit everyone without making anyone worse off. However, achieving Pareto improvements can be challenging for self-interested agents, necessitating external intervention like rules, governments, or cooperation mechanisms to ensure mutual benefits. Utilitarianism, as a moral theory, can sometimes lead to incorrect cooperative solutions, especially in the case of the prisoner's dilemma. Understanding cooperation in a formal sense allows us to evaluate moral theories based on their cooperative outcomes. In most cases, utilitarianism, contractarianism, natural rights theories, and Kantian ethics promote cooperation. However, libertarianism may not always produce the same cooperative results, depending on how causing harm is defined.
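The definition of a Pareto improvement (at least one agent better off, and no agent worse off) is simple enough to state in code. A small sketch, reusing illustrative prisoner's-dilemma utilities:

```python
def pareto_improves(new, old):
    """True if `new` makes at least one agent strictly better off
    and no agent worse off, compared to `old`."""
    return (all(n >= o for n, o in zip(new, old))
            and any(n > o for n, o in zip(new, old)))

# Utility per agent under each outcome (illustrative numbers).
both_defect    = (-5, -5)
both_cooperate = (-1, -1)
one_defects    = (0, -10)

print(pareto_improves(both_cooperate, both_defect))  # True: everyone gains
print(pareto_improves(one_defects, both_defect))     # False: agent 2 is worse off
```

The cooperation problem is visible here: moving from mutual defection to mutual cooperation is a Pareto improvement, yet no single self-interested agent will make that move alone.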

    • Maximin Principle vs. Utilitarianism: The Maximin principle prioritizes the welfare of the worst-off individual, while utilitarianism aims to maximize overall happiness. The Maximin principle promotes cooperative behavior and aligns with prioritarianism, but requires careful consideration of primary goods.

      While there is a Nash equilibrium in the Prisoner's Dilemma where both players defect, most sensible people believe that cooperating leads to better outcomes for everyone in the long run. The Maximin principle, a key tenet of contractarianism, suggests prioritizing the welfare of the worst-off individual, and it can lead to different outcomes than utilitarianism, which aims to maximize overall happiness or pleasure. The Maximin principle is preferred because it promotes cooperative behavior and ensures that the worst-off individuals are not left behind. In practical scenarios, this principle can align with prioritarianism, which prioritizes helping the less fortunate, but it requires careful consideration of primary goods, such as health and safety, and their effects on individuals.
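The contrast between the two decision rules can be shown with a toy example; the utility profiles below are assumptions chosen so that the theories disagree:

```python
# Each action's utility profile across three individuals (illustrative numbers).
actions = {
    "A": (9, 9, 1),   # highest total welfare, but one person is left very badly off
    "B": (5, 5, 5),   # lower total, but the worst-off individual does much better
}

# Utilitarianism: maximize the sum of utilities.
utilitarian_choice = max(actions, key=lambda a: sum(actions[a]))
# Maximin: maximize the utility of the worst-off individual.
maximin_choice = max(actions, key=lambda a: min(actions[a]))

print(utilitarian_choice, maximin_choice)  # A B
```

The same profiles, fed to the two rules, yield different verdicts, which is the crux of the contractarian objection to simply summing welfare.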

    • Philosophical Approaches to Ethical Decision-making: Rawls prioritizes reducing inequality, while contractarianism focuses on essential resources for a minimal threshold. In self-driving cars, contractarianism evaluates collisions based on health and safety impact.

      When it comes to ethical decision-making, whether in politics or self-driving cars, there are different philosophical approaches. John Rawls' theory of justice as fairness prioritizes reducing inequality to ensure the well-being of the least advantaged, while contractarianism focuses on distributing essential resources, such as health and safety, to bring everyone to a minimal threshold. While both theories have their merits, they lead to different societal structures and economic systems. In the context of self-driving cars, a contractarian approach would involve evaluating potential collisions and their impact on health and safety to determine the best outcome, rather than treating all collisions as equally undesirable. This approach acknowledges that not all collisions are equal in terms of harm caused.
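A contractarian, maximin-style collision choice of the kind described might be sketched as follows. The maneuver names and harm-severity scores are hypothetical stand-ins for illustration, not values any real system uses:

```python
# Hypothetical severity estimates (0 = no harm, 10 = fatal) for each person
# affected by each available maneuver.
options = {
    "brake_straight": {"pedestrian": 8, "passenger": 2},
    "swerve_left":    {"cyclist": 5, "passenger": 3},
    "swerve_right":   {"passenger": 6},
}

def worst_harm(option):
    """The most severe harm any single individual suffers under this maneuver."""
    return max(options[option].values())

# Maximin-style choice: minimize the worst harm to any individual,
# rather than treating all collisions as equally undesirable.
best = min(options, key=worst_harm)
print(best)  # swerve_left
```

A utilitarian controller would instead sum the severities per option, and could rank the same maneuvers differently.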

    • Self-driving cars face ethical dilemmas in decision-making: Self-driving cars evaluate potential risks and make decisions based on potential consequences, not biases or prejudices.

      Self-driving cars will constantly make decisions between higher and lower risk actions, even if it means choosing between potential collisions with different types of obstacles or people. This reality, while down-to-earth, poses a public relations challenge for the industry as it works to convince people that these vehicles are safe. Researchers and academics have started to explore ethical dilemmas, such as trolley problems, by gathering people's preferences and evaluating the relevance of various factors in making these decisions. However, the results can be complex and counterintuitive. For instance, a self-driving car might evaluate a collision with a person wearing a helmet as less dangerous than one without, leading to potential discrimination against certain groups. The MIT Media Lab's moral machine experiment highlighted the importance of understanding what kinds of information are relevant in creating ethical decision-making frameworks for self-driving cars. Ultimately, it's crucial to recognize that these vehicles will make decisions based on the potential consequences of each action, not on any inherent biases or prejudices.

    • Understanding Contractarianism and Utilitarianism in Ethical Decision-Making: Both contractarianism and utilitarianism are consequentialist ethical theories, but they prioritize different values. Contractarianism focuses on individual rights and the distribution of primary goods, while utilitarianism prioritizes the greatest good for the greatest number. Understanding these differences is crucial for effective ethical decision-making.

      When it comes to ethical decision-making, particularly in complex scenarios like the trolley problem, it's essential to clarify the underlying moral theories being used. In this discussion, it was noted that contractarianism and utilitarianism, two consequentialist theories, have some similarities but also significant differences. Contractarianism focuses on the distribution of primary goods and the protection of individual rights, while utilitarianism prioritizes the greatest good for the greatest number. The speakers acknowledged that these theories can lead to different conclusions in certain scenarios, and that some may find these differences strange or counterintuitive. However, they agreed that it's important to understand and respect these ethical frameworks, even if they don't align with one's personal intuition or preferences. Ultimately, the goal is to find common ground and make decisions that promote the greatest good for all, while respecting individual rights and well-being.

    • Aligning Moral Theories with Facts: Consider factual evidence when determining moral theories, and acknowledge actions leading to greater cooperation or well-being, even if they conflict with our intuitions.

      While philosophers like Rawls advocate for the process of reflective equilibrium to align our moral theories with our intuitions, it's important to consider whether our moral theories align with facts. If there's a clear fact about what action leads to greater cooperation or well-being, we should acknowledge it, even if we don't want to do it. However, determining when to apply which moral theory can be challenging, and finding a satisfactory hybrid or third theory is not straightforward. Additionally, some philosophers have objections to utilitarianism, such as its counterintuitive implications and difficulty in implementation. Ultimately, figuring out the right thing to do involves grappling with complex moral theories and our own intuitions.

    • Recognizing the potential incoherence of moral intuitions: Moral anti-realists suggest we should be open to revising or discarding moral intuitions for a more rational and logical moral system, but also consider theories based on their ability to promote cooperative behavior and rationality.

      Our moral intuitions, though important, may not align with a logical and coherent moral theory. As a moral anti-realist, recognizing the potential incoherence of our moral intuitions, we should be open to revising or discarding them in the pursuit of a more rational and logical moral system. However, we must also be cautious not to rely solely on our intuitions and dismiss any theory that conflicts with them without proper consideration. The history of false intuitions highlights the importance of evaluating theories based on their ability to promote cooperative behavior and rationality, rather than relying solely on our contingent moral intuitions. Ultimately, the debates between moral realism and anti-realism are not necessarily about what is moral, but rather the function of moral theories and the role of intuitions in their development.

    • Lack of consensus in ethics and morality among professionals: Despite various moral theories, no consensus exists among professionals, creating challenges in designing ethical AI and autonomous systems.

      There is a lack of consensus among professionals in the fields of ethics and morality, similar to the lack of consensus in quantum mechanics. A survey conducted by philpapers.org revealed that 25% of philosophers lean towards deontology, 23% towards consequentialism, 18% towards virtue ethics, and 32.3% towards other moral theories. This lack of consensus is significant because moral theories have real-world applications, particularly in the development of artificial intelligence and autonomous systems. The inability to determine which moral theory is the most accurate or effective creates challenges in designing these technologies. For example, it is unclear how feasible it is to create autonomous vehicles that can make moral decisions based on the principles of contractarianism. Additionally, there is the question of whether an artificially intelligent system should be able to articulate why it is making ethical decisions, similar to how humans do. While there are arguments for and against this requirement, it adds to the complexity of designing ethical AI. Overall, the lack of consensus in ethics and morality is a challenge that requires further exploration and discussion.

    • Applying the Maximin principle for ethical AI decision-making: The Maximin principle can help AI systems make ethical decisions by minimizing the worst possible outcome, but human oversight and responsibility are essential in sensitive areas.

      Accountability for ethical decision-making in AI systems requires the ability to articulate reasons behind actions. The Maximin principle, which prioritizes minimizing the worst possible outcome, can be applied in a top-down or bottom-up approach. Demanding that AI systems be articulate moral philosophers might be too much, but they should at least provide a justification for their choices, such as minimizing the worst collision. Moral considerations can also differ between everyday situations and wartime or intentional-harm cases. For instance, security robots may use the Maximin principle to neutralize threats, but it may not be the right approach when the goal is to kill people. Ultimately, human oversight and responsibility are crucial for ethical decision-making in AI systems, especially in sensitive areas like war.

    • Ethical challenges of autonomous systems in war: The development of autonomous military robots raises ethical questions, with some arguing against their use altogether. The capabilities needed for ethical decision-making are unlikely to be achieved soon, and corporations may still pursue development despite ethical concerns.

      The development of autonomous systems for use in war raises complex ethical questions that require careful consideration. A group of ethicists and political philosophers have argued against the use of autonomous systems in war altogether, but others, including the speaker, acknowledge the challenges of ensuring these systems make ethical decisions. The speaker argues that the capabilities required for autonomous military robots to make ethical decisions are unlikely to be achieved soon, and that philosophy is being sharpened by the progress of technology, forcing us to confront moral dilemmas. The speaker is skeptical about the long-term benefits of autonomous weapon systems, but acknowledges that corporations may still pursue their development. The speaker encourages facing up to the ethical challenges rather than ignoring them.

    Recent Episodes from Sean Carroll's Mindscape: Science, Society, Philosophy, Culture, Arts, and Ideas

    276 | Gavin Schmidt on Measuring, Predicting, and Protecting Our Climate

    The Earth's climate keeps changing, largely due to the effects of human activity, and we haven't been doing enough to slow things down. Indeed, over the past year, global temperatures have been higher than ever, and higher than most climate models have predicted. Today's guest, Gavin Schmidt, has been a leader in measuring the variations in Earth's climate, modeling its likely future trajectory, and working to get the word out. We talk about the current state of the art, and what to expect for the future.

    Support Mindscape on Patreon.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/20/276-gavin-schmidt-on-measuring-predicting-and-protecting-our-climate/

    Gavin Schmidt received his Ph.D. in applied mathematics from University College London. He is currently Director of NASA's Goddard Institute for Space Studies, and an affiliate of the Center for Climate Systems Research at Columbia University. His research involves both measuring and modeling climate variability. Among his awards are the inaugural Climate Communications Prize of the American Geophysical Union. He is a cofounder of the RealClimate blog.


    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    275 | Solo: Quantum Fields, Particles, Forces, and Symmetries

    Publication week! Say hello to Quanta and Fields, the second volume of the planned three-volume series The Biggest Ideas in the Universe. This volume covers quantum physics generally, but focuses especially on the wonders of quantum field theory. To celebrate, this solo podcast talks about some of the big ideas that make QFT so compelling: how quantized fields produce particles, how gauge symmetries lead to forces of nature, and how those forces can manifest in different phases, including Higgs and confinement.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/05/13/275-solo-quantum-fields-particles-forces-and-symmetries/

    Support Mindscape on Patreon.


    AMA | May 2024

    Welcome to the May 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/05/06/ama-may-2024/

    Support Mindscape on Patreon.

    Here is the memorial to Dan Dennett at Ars Technica.


    274 | Gizem Gumuskaya on Building Robots from Human Cells

    Modern biology is advancing by leaps and bounds, not only in understanding how organisms work, but in learning how to modify them in interesting ways. One exciting frontier is the study of tiny "robots" created from living molecules and cells, rather than metal and plastic. Gizem Gumuskaya, who works with previous guest Michael Levin, has created anthrobots, a new kind of structure made from living human cells. We talk about how that works, what they can do, and what future developments might bring.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/29/274-gizem-gumuskaya-on-building-robots-from-human-cells/

    Support Mindscape on Patreon.

    Gizem Gumuskaya received her Ph.D. from Tufts University and the Harvard Wyss Institute for Biologically-Inspired Engineering. She is currently a postdoctoral researcher at Tufts University. She previously received a dual master's degree in Architecture and Synthetic Biology from MIT.


    273 | Stefanos Geroulanos on the Invention of Prehistory

    Humanity itself might be the hardest thing for scientists to study fairly and accurately. Not only do we come to the subject with certain inevitable preconceptions, but it's hard to resist the temptation to find scientific justifications for the stories we'd like to tell about ourselves. In his new book, The Invention of Prehistory, Stefanos Geroulanos looks at the ways that we have used -- and continue to use -- supposedly-scientific tales of prehistoric humanity to bolster whatever cultural, social, and political purposes we have at the moment.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/22/273-stefanos-geroulanos-on-the-invention-of-prehistory/

    Support Mindscape on Patreon.

    Stefanos Geroulanos received his Ph.D. in humanities from Johns Hopkins. He is currently director of the Remarque Institute and a professor of history at New York University. He is the author and editor of a number of books on European intellectual history. He serves as a Co-Executive Editor of the Journal of the History of Ideas.



    272 | Leslie Valiant on Learning and Educability in Computers and People

    Science is enabled by the fact that the natural world exhibits predictability and regularity, at least to some extent. Scientists collect data about what happens in the world, then try to suggest "laws" that capture many phenomena in simple rules. A small irony is that, while we are looking for nice compact rules, there aren't really nice compact rules about how to go about doing that. Today's guest, Leslie Valiant, has been a pioneer in understanding how computers can and do learn things about the world. And in his new book, The Importance of Being Educable, he pinpoints this ability to learn new things as the crucial feature that distinguishes us as human beings. We talk about where that capability came from and what its role is as artificial intelligence becomes ever more prevalent.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/15/272-leslie-valiant-on-learning-and-educability-in-computers-and-people/

    Support Mindscape on Patreon.

    Leslie Valiant received his Ph.D. in computer science from Warwick University. He is currently the T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics at Harvard University. He has been awarded a Guggenheim Fellowship, the Knuth Prize, and the Turing Award, and he is a member of the National Academy of Sciences as well as a Fellow of the Royal Society and the American Association for the Advancement of Science. He is the pioneer of "Probably Approximately Correct" learning, which he wrote about in a book of the same name.


    AMA | April 2024

    Welcome to the April 2024 Ask Me Anything episode of Mindscape! These monthly excursions are funded by Patreon supporters (who are also the ones asking the questions). We take questions asked by Patreons, whittle them down to a more manageable number -- based primarily on whether I have anything interesting to say about them, not whether the questions themselves are good -- and sometimes group them together if they are about a similar topic. Enjoy!

    Blog post with questions and transcript: https://www.preposterousuniverse.com/podcast/2024/04/08/ama-april-2024/


    271 | Claudia de Rham on Modifying General Relativity

    Einstein's theory of general relativity has been our best understanding of gravity for over a century, withstanding a variety of experimental challenges of ever-increasing precision. But we have to be open to the possibility that general relativity -- even at the classical level, aside from any questions of quantum gravity -- isn't the right theory of gravity. Such speculation is motivated by cosmology, where we have a good model of the universe but one with a number of loose ends. Claudia de Rham has been a leader in exploring how gravity could be modified in cosmologically interesting ways, and we discuss the current state of the art as well as future prospects.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/04/01/271-claudia-de-rham-on-modifying-general-relativity/

    Support Mindscape on Patreon.

    Claudia de Rham received her Ph.D. in physics from the University of Cambridge. She is currently a professor of physics and deputy department head at Imperial College London. She is a Simons Foundation Investigator, winner of the Blavatnik Award, and a member of the American Academy of Arts and Sciences. Her new book is The Beauty of Falling: A Life in Pursuit of Gravity.



    270 | Solo: The Coming Transition in How Humanity Lives

    Technology is changing the world, in good and bad ways. Artificial intelligence, internet connectivity, biological engineering, and climate change are dramatically altering the parameters of human life. What can we say about how this will extend into the future? Will the pace of change level off, or smoothly continue, or hit a singularity in a finite time? In this informal solo episode, I think through what I believe will be some of the major forces shaping how human life will change over the decades to come, exploring the very real possibility that we will experience a dramatic phase transition into a new kind of equilibrium.

    Blog post with transcript and links to additional resources: https://www.preposterousuniverse.com/podcast/2024/03/25/270-solo-the-coming-transition-in-how-humanity-lives/

    Support Mindscape on Patreon.


    269 | Sahar Heydari Fard on Complexity, Justice, and Social Dynamics

    When it comes to social change, two questions immediately present themselves: What kind of change do we want to see happen? And, how do we bring it about? These questions are distinct but related; there's not much point in spending all of our time wanting change that won't possibly happen, or working for change that wouldn't actually be good. Addressing such issues lies at the intersection of philosophy, political science, and social dynamics. Sahar Heydari Fard looks at all of these issues through the lens of complex systems theory, to better understand how the world works and how it might be improved.

    Blog post with transcript: https://www.preposterousuniverse.com/podcast/2024/03/18/269-sahar-heydari-fard-on-complexity-justice-and-social-dynamics/

    Support Mindscape on Patreon.

    Sahar Heydari Fard received a Master's in applied economics and a Ph.D. in philosophy from the University of Cincinnati. She is currently an assistant professor in philosophy at the Ohio State University. Her research lies at the intersection of social and behavioral sciences, social and political philosophy, and ethics, using tools from complex systems theory.



    Related Episodes

    40 | Adrienne Mayor on Gods and Robots in Ancient Mythology
    The modern world is full of technology, and also of anxiety about technology. We worry about robot uprisings and artificial intelligence taking over, and we contemplate what it would mean for a computer to be conscious or truly human. It should probably come as no surprise that these ideas aren’t new to modern society — they go way back, at least to the stories and mythologies of ancient Greece. Today’s guest, Adrienne Mayor, is a folklorist and historian of science whose recent work has been on robots and artificial humans in ancient mythology. From the bronze warrior Talos to the evil fembot Pandora, mythology is rife with stories of artificial beings. It’s both fun and useful to think about our contemporary concerns in light of these ancient tales.

    Support Mindscape on Patreon or Paypal.

    Adrienne Mayor is a Research Scholar in Classics and History and Philosophy of Science at Stanford University. She is also a Berggruen Fellow at Stanford’s Center for Advanced Study in the Behavioral Sciences. Her work encompasses fossil traditions in classical antiquity and Native America, the origins of biological weapons, and the historical precursors of the stories of Amazon warriors. In 2009 she was a finalist for the National Book Award.

    53 | Solo -- On Morality and Rationality
    What does it mean to be a good person? To act ethically and morally in the world? In the old days we might appeal to the instructions we get from God, but a modern naturalist has to look elsewhere. Today I do a rare solo podcast, where I talk about my personal views on morality: a variety of “constructivism,” according to which human beings construct their ethical stances starting from basic impulses, logical reasoning, and communication with others. In light of this view, I consider two real-world examples of contemporary moral controversies: Is it morally permissible to eat meat, or is there an ethical imperative to be a vegetarian? Do inequities in society stem from discrimination, or from the natural order of things? As a jumping-off point I take the loose-knit group known as the Intellectual Dark Web, which includes Jordan Peterson, Sam Harris, Ben Shapiro, and others, and their nemeses the Social Justice Warriors (though the discussion is about broader issues, not just that group of folks). Probably everyone will agree with my takes on these issues once they listen to my eminently reasonable arguments.

    Actually, this is a more conversational, exploratory episode rather than a polished, tightly-argued case from start to finish. I don’t claim to have all the final answers. The hope is to get people thinking and conversing, not to settle things once and for all. These issues are, on the one hand, very tricky, and none of us should be too certain that we have everything figured out; on the other hand, they can get very personal, and consequently emotions run high. The issues are important enough that we have to talk about them, and we can at least aspire to do so in the most reasonable way possible.

    Support Mindscape on Patreon or Paypal.

    46 | Kate Darling on Our Connections with Robots
    Most of us have no trouble telling the difference between a robot and a living, feeling organism. Nevertheless, our brains often treat robots as if they were alive. We give them names, imagine that they have emotions and inner mental states, get mad at them when they do the wrong thing, or feel bad for them when they seem to be in distress. Kate Darling is a researcher at the MIT Media Lab who specializes in social robotics, the interactions between humans and machines. We talk about why we cannot help but anthropomorphize even very non-human-appearing robots, and what that means for legal and social issues now and in the future, including robot companions and helpers in various forms.

    Support Mindscape on Patreon or Paypal.

    Kate Darling has a degree in law as well as a doctorate of sciences from ETH Zurich. She currently works at the Media Lab at MIT, where she conducts research in social robotics and serves as an advisor on intellectual property policy. She is an affiliate at the Harvard Berkman Klein Center for Internet & Society and at the Institute for Ethics and Emerging Technologies. Her awards include the Mark T. Banner Award in Intellectual Property from the American Bar Association. She is a contributing writer to Robohub and IEEE Spectrum.

    43 | Matthew Luczy on the Pleasures of Wine
    Some people never drink wine; for others, it’s an indispensable part of an enjoyable meal. Whatever your personal feelings might be, wine seems to exhibit a degree of complexity and nuance that can be intimidating to the non-expert. Where does that complexity come from, and how can we best approach wine? To answer these questions, we talk to Matthew Luczy, sommelier and wine director at Mélisse, one of the top fine-dining restaurants in the Los Angeles area. Matthew insisted that we actually drink wine rather than just talking about it, so drink we do. Therefore, in a Mindscape first, I recruited a third party to join us and add her own impressions of the tasting: science writer Jennifer Ouellette, who I knew would be available because we’re married to each other. We talk about what makes different wines distinct, the effects of aging, and what’s the right bottle to have with pizza. You are free to drink along at home, with exactly these wines or some other choices, but I think the podcast will be enjoyable whether you do or not.

    Support Mindscape on Patreon or Paypal.

    Matthew Luczy is a Certified Sommelier as judged by the Court of Master Sommeliers. He currently works as the Wine Director at Mélisse in Santa Monica, California. He is also active in photography and music.

    18 | Clifford Johnson on What's So Great About Superstring Theory
    String theory is a speculative and highly technical proposal for uniting the known forces of nature, including gravity, under a single quantum-mechanical framework. This doesn't seem like a recipe for creating a lightning rod of controversy, but somehow string theory has become just that. To get to the bottom of why anyone (indeed, a substantial majority of experts in the field) would think that replacing particles with little loops of string was a promising way forward for theoretical physics, I spoke with expert string theorist Clifford Johnson. We talk about the road string theory has taken from a tentative proposal dealing with the strong interactions, through a number of revolutions, to the point it's at today. Also, where all those extra dimensions might have gone. At the end we touch on Clifford's latest project, a graphic novel that he wrote and illustrated about how science is done.

    Clifford Johnson is a Professor of Physics at the University of Southern California. He received his Ph.D. in mathematics and physics from the University of Southampton. His research area is theoretical physics, focusing on string theory and quantum field theory. He was awarded the Maxwell Medal from the Institute of Physics. Johnson is the author of the technical monograph D-Branes, as well as the graphic novel The Dialogues.