
    #346 - Rob Reid - How To Avoid Destroying Humanity

    July 15, 2021
    What are the main existential risks discussed in the text?
    How can advanced printers help mitigate synthetic biology risks?
    Why is artificial general intelligence considered a threat?
    What lessons should we learn from the COVID-19 crisis?
    How does privatization affect risks from super AI development?

    • Evolutionary origins of fascination with existential risks: Understand the evolutionary roots of our fascination with existential risks, prioritize safety measures and ethical considerations in technological development, and learn from the COVID-19 crisis to prepare for and mitigate potential risks from synthetic biology and artificial general intelligence.

      Our fascination with existential risks, which could lead to the end of civilization, may stem from our evolutionary past, when the leaders of our ancestors' clans had to weigh potential threats to their entire group in order to survive. Yet despite this fascination, we often remain blind to such risks because they are so new: before the mid-1950s there was no plausible path to the destruction of civilization. Today, one of the most significant existential risks we face is synthetic biology, which could enable the creation of deadly organisms. Another is artificial general intelligence, which could surpass human intelligence and potentially threaten our existence. We should learn from the COVID-19 crisis and invest in preparing for and mitigating these risks. Rob Reid, an entrepreneur, podcaster, and author, believes that we should continue technological development but prioritize safety measures and ethical considerations. Overall, it is crucial to acknowledge and address these existential risks to ensure the long-term survival and progress of humanity.

    • The concern over existential risks from nuclear weapons has grown over the last 60 years: Continuously increasing awareness is crucial to prevent existential risks, especially from nuclear weapons, as policymakers may be the last to recognize and address these threats.

      The existence of nuclear weapons by the late fifties brought a new level of existential risk for humanity, one that has been a concern for the last 60 years. Even so, the attention given to this risk still falls short of what it warrants. The concept of existential risk is a relatively recent development, and it receives far more attention today than it did 15 years ago. Attention to climate change, a concern for longer, has grown exponentially through compounding awareness and the development of large industries and economic interests around it. To prevent existential risks, it is crucial to keep spreading and increasing awareness, because policymakers will likely be the last to recognize and address these threats.

    • Near Misses with Nuclear Weapons: Individual human decisions prevented nuclear disasters during the Cuban missile crisis and the early 1980s, highlighting the importance of human judgment and the potential risks of miscommunication or misunderstanding in nuclear situations.

      Throughout history, there have been numerous close calls between nuclear-armed nations that could have led to catastrophic consequences. Two notable incidents were during the Cuban missile crisis, when Soviet submarines were threatened by American depth charges, and in the early 1980s, when a Soviet early-warning facility mistakenly detected an incoming American missile attack. In both cases, the decisions of individual men to question orders and not launch nuclear weapons prevented escalation and potential disaster. These incidents underscore the importance of human judgment and the potential for miscommunication or misunderstanding in the context of nuclear weapons. Despite advancements in technology and safeguards, the risk of nuclear conflict still exists.

    • The democratization of dangerous technologies: With advances in synbio and AI, the risk of catastrophic events is no longer limited to a select few. The democratization of these technologies poses a significant challenge because it is impossible to monitor and prevent all potential misuses, increasing the importance of safeguards.

      As we make advances in synthetic biology and technologies like super AI, the potential for catastrophic events is no longer limited to a select few with access to vast resources. Instead, the risk is spreading to a larger population, making it increasingly difficult to prevent potential disasters. This is a shift from the Cold War era, where billions of dollars were spent to prevent a few people from pressing a proverbial "red button." With synbio, for instance, the rapid improvement of tools and methodologies means that complex and Nobel-worthy work will soon be accessible to high school students and lower-budget companies. This democratization of dangerous technologies poses a significant challenge, as it's impossible to monitor and prevent all potential misuses. The stakes are higher than ever, and the consequences of a mistake or intentional misuse could be catastrophic. This privatization and democratization of the apocalypse underscores the importance of addressing these risks and ensuring that safeguards are in place to mitigate potential dangers.

    • Absence of a nuclear deterrent vs. super AI development: Without the nuclear deterrent, Cold War tensions might have produced devastating conventional wars; meanwhile, the privatization of super AI development introduces new risks with potential societal costs, emphasizing the need for responsible governance and regulation.

      The absence of nuclear weapons during the geopolitical tensions between superpowers in the 20th century might have led to even more devastating conventional wars. The decision to avoid nuclear war was a public good owned and operated by governments, but the potential creation of super AI in private hands introduces a new level of risk with privatized gains and socialized losses. The financial crisis serves as a reminder of the consequences when individuals or companies take risks with significant societal costs. The potential for catastrophic risks from super AI development underscores the importance of responsible governance and regulation to prevent the creation of existential risks. The nuclear threat acted as a deterrent, but the absence of such a deterrent in the context of super AI development could lead to disastrous consequences.

    • Potential risks of democratized advanced technology in synthetic biology: The democratization of advanced technology in synthetic biology poses a significant risk because it includes agents who may not adhere to safety standards, potentially leading to catastrophic consequences such as the creation and release of deadly and highly transmissible organisms.

      The democratization of advanced technology through privatization poses a significant risk because it brings in numerous agents who may not adhere to safety standards, with potentially catastrophic consequences. The risk is amplified in synthetic biology, where a malignant actor could create a deadly and highly transmissible organism. COVID-19 is a reminder of the destructive power of even a relatively benign virus; a deliberate release of a more deadly organism could have civilization-ending consequences. The lab leak hypothesis for COVID-19 is plausible, but the virus's relatively low lethality and transmissibility suggest it was not intentionally engineered for maximum destruction. Some speculate that such a release could instead serve as a warning, forcing the world to confront its coordination problems and prepare for future engineered pandemics. However, if an engineered pandemic were intended to cause widespread panic, the perpetrator would likely make it clear that it had been engineered, to maximize fear and chaos.

    • Impact of Virus Factors (Transmissibility, Asymptomatic Period, and Lethality): A highly lethal virus with high transmissibility and a long asymptomatic period poses the greatest threat to society. Public health practices and research are crucial in mitigating its impact.

      The transmissibility, asymptomatic period, and lethality of a virus determine its potential impact on society. The speaker mentions measles, SARS, H5N1 flu, and tuberculosis as examples of pathogens with varying levels of these factors. The speaker also discusses the danger of lab leaks and the difficulty of preventing them, using the post-9/11 anthrax attacks as an example. The most dangerous virus in history could combine the lethality of H5N1 flu, the transmissibility of measles, and a long incubation period like that of tuberculosis; the toy model below illustrates how those factors compound. The speaker emphasizes the importance of public health practices and research in mitigating the impact of such viruses.
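
      As a rough illustration of why that combination is so dangerous, here is a minimal back-of-envelope sketch, not from the episode: the function, the profile names, and every parameter value are illustrative assumptions, chosen only to show how transmissibility, a long silent-spread window, and lethality compound under simple exponential growth.

```python
# Hypothetical, back-of-envelope illustration (not from the episode) of how
# transmissibility (R0), a long pre-symptomatic spread window, and lethality
# (infection fatality rate) compound. All numbers are illustrative guesses.

def infections_before_detection(r0: float, generation_days: float,
                                silent_days: float) -> float:
    """Infections seeded before an outbreak is noticed, assuming simple
    exponential growth with no intervention."""
    generations = silent_days / generation_days
    return r0 ** generations

profiles = {
    # name: (R0, generation time in days, silent window in days, fatality rate)
    "measles-like transmissibility": (15.0, 12.0, 30.0, 0.002),
    "H5N1-like lethality":           (1.5,  4.0,  30.0, 0.5),
    "hypothetical worst case":       (15.0, 12.0, 90.0, 0.5),  # TB-like latency
}

for name, (r0, gen_days, silent_days, ifr) in profiles.items():
    cases = infections_before_detection(r0, gen_days, silent_days)
    print(f"{name:32s} ~{cases:,.0f} infections, ~{cases * ifr:,.0f} deaths "
          "before detection")
```

      The point of the sketch is only that the three factors multiply: a pathogen that is merely very lethal or merely very transmissible is far less dangerous than one that is both and also spreads silently for a long time.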

    • Anthrax letters from a US army lab reached the Senate despite heightened security: Despite advanced precautions, vulnerabilities in biosafety measures can lead to accidental or malicious leaks, emphasizing the need for ongoing vigilance and improvement.

      The infamous anthrax letters sent after 9/11, which caused widespread fear and panic, were not as securely contained as one might have assumed, even by a highly security-conscious nation like the United States. The anthrax spores, which originated from a US army lab, managed to reach the office of the Senate majority leader, despite the country being on high alert following the terrorist attacks. This incident highlights the potential vulnerabilities of even the most secure labs and the importance of robust biosafety measures to prevent both accidental and malicious leaks. Despite the advanced precautions in place, there have been instances of leaks from high-security biosafety level 4 labs, making it crucial to remain vigilant and continually improve safety protocols.

    • Balancing benefits and risks in biotech research: Biotech research, including gain-of-function experiments, offers potential benefits but also carries significant risks. Careful consideration and robust safety measures are essential to mitigate potential harms.

      The potential risks of biotechnology research, particularly gain-of-function research, cannot be ignored. The discussion highlighted the example of H5N1 flu, which was made airborne-transmissible in labs, raising concerns about the potential consequences if such research falls into the wrong hands or results in unintended leaks. While proponents argue that such research helps us prepare for potential pandemics, critics caution that the creation of deadly and transmissible pathogens, even in contained lab environments, poses significant risks to global health and safety. The discussion underscored the need for careful consideration and robust safety measures when engaging in such research.

    • Gain-of-function research as a double-edged sword: Gain-of-function research offers valuable scientific knowledge but comes with significant risks, especially when dealing with deadly viruses. Careful consideration and transparency are crucial.

      Gain-of-function research, which involves creating or manipulating pathogens to better understand their potential threats and possible countermeasures, is a double-edged sword. While it can yield valuable scientific knowledge, the risks of a lab accident or unintended consequences are significant, especially with highly contagious and potentially deadly viruses like H5N1. The incident involving two labs working on H5N1 gain-of-function research led to a pause in US government funding for several years due to concerns over biorisks. However, the research was eventually published, and funding was resumed. Despite the potential dangers, gain-of-function research continues, and it is not limited to the US. Scientists argue that the benefits outweigh the risks, but critics question whether those in the field fully grasp the potential consequences of their work. The incident serves as a reminder of the importance of careful consideration and transparency in scientific research, particularly when dealing with potentially dangerous pathogens.

    • Underestimating Risks to Others: Experts and individuals may underestimate the risks they pose to others, leading to potential harm. Consider broader implications before making decisions.

      Individuals, even those with expertise and a strong personal incentive, may underestimate the risks they pose to others when making decisions. This was evident in the discussion about an expert scientist's confidence in his ability to avoid a lab leak, despite the potential consequences for society. The Manhattan Project serves as a historical example, where scientists weighed the minuscule risk of igniting the atmosphere against the real possibility of Nazi Germany developing a nuclear weapon first. These decisions, while made with the best intentions, demonstrate the importance of considering the broader implications and potential harm to others when assessing risk.

    • Weighing the risks of nuclear weapons vs. gain-of-function research: The Manhattan Project's calculated risk had greater potential benefits than risks. In contrast, gain-of-function research poses smaller benefits and much greater risks, especially with widening access to DNA and RNA synthesis. Preventing the creation and distribution of dangerous pathogens requires stronger regulations and infrastructure.

      The decision to develop nuclear weapons during the Manhattan Project was a carefully considered risk, as the potential consequences of a Nazi regime with nuclear capabilities were deemed greater than the risk of an accidental lab mishap. In contrast, gain-of-function research, which involves creating or enhancing viruses in a lab, presents a smaller benefit and a much greater risk, especially when considering the potential for widespread access to dangerous DNA and RNA synthesis capabilities. While there have been efforts to regulate this research, more comprehensive measures are necessary to prevent the creation and dissemination of dangerous pathogens. The Manhattan Project serves as a reminder that sometimes taking calculated risks for the greater good is justifiable, but in the case of gain-of-function research, the potential risks far outweigh the potential benefits. It is crucial that we take serious steps to prevent the creation and distribution of dangerous pathogens, including strengthening regulations and infrastructure to make it difficult for individuals to obtain dangerous DNA and RNA.

    • Balancing research and potential risks: Recognize and address complex risks like deadly viruses in schools. Invest in research and development of universal vaccines and other measures to protect against future pandemics.

      While it's important to address potential risks, such as gain-of-function research, it's equally important to recognize and address more complex and diffuse risks, like deadly viruses in schools. The debate around BSL-3 and BSL-4 labs raises questions about the balance between valuable research and potential risks. While it might be challenging to completely eliminate all apocalyptic microbes, it's crucial to take necessary precautions and build robust safeguards into research tools. The COVID-19 pandemic serves as a stark reminder of our vulnerabilities and the need for improved pandemic preparedness. We have the opportunity to learn from this experience and invest in research and development of universal vaccines and other measures to better protect ourselves against future pandemics. The cost of these measures is relatively small compared to the potential damage caused by a pandemic. By taking a proactive approach, we can harden ourselves against future threats and minimize the impact of potential outbreaks.

    • Investing in universal and pan-virus vaccines could offer significant relief from economic and health burdens: Flu is estimated to cost the US $365 billion per year, while a universal flu vaccine is estimated at $200 million to $2 billion over 10 years; COVID-19 caused roughly $14 trillion in damage, yet investment in this research remains scarce, and a confirmed lab leak would carry major policy implications.

      Investing in the development of universal vaccines for influenza and other viruses that cause significant economic and health burdens could be a worthwhile investment, despite the high costs and long timelines involved. For instance, the annual cost of flu to the United States is estimated to be $365 billion, and a universal flu vaccine could potentially offer significant relief. While a universal flu vaccine is estimated to cost between $200 million and $2 billion over 10 years, the potential benefits could far outweigh the costs. Similarly, investing in pan-virus vaccines for all virus families that infect and kill humans could be a game-changer, with an estimated cost of $40 billion over 10 years. The potential impact of such investments could be significant, especially in the context of the economic damage caused by the COVID-19 pandemic, which is estimated to have caused $14 trillion in damage to the global economy. However, despite the obvious benefits, there seems to be a lack of concerted effort to invest in such research and development. The same goes for the development of a pan-coronavirus vaccine, which should have started as soon as the COVID-19 pandemic hit but is still not a priority. The implications of a lab leak theory being proven true for the COVID-19 pandemic could be significant, leading to a global ban on gain-of-function research and a renewed focus on preventative measures.
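
      To make the scale of that trade-off concrete, here is a minimal back-of-envelope calculation. It uses only the figures cited above; the ratios it prints are my own illustration, not numbers from the episode.

```python
# Back-of-envelope comparison of the costs and burdens cited in the summary.
# Input figures are the summary's own estimates; the ratios are illustrative.

annual_us_flu_cost     = 365e9   # cited annual cost of flu to the US
universal_flu_vax_cost = 2e9     # upper end of the cited $200M-$2B, 10-year estimate
pan_virus_vax_cost     = 40e9    # cited 10-year cost of pan-virus vaccines
covid_damage           = 14e12   # cited global economic damage from COVID-19

print(f"Ten years of US flu burden: ${annual_us_flu_cost * 10 / 1e12:.2f} trillion")
print(f"Universal flu vaccine as a share of one year's flu burden: "
      f"{universal_flu_vax_cost / annual_us_flu_cost:.2%}")
print(f"Pan-virus vaccine program as a share of COVID-19's damage: "
      f"{pan_virus_vax_cost / covid_damage:.2%}")
```

      Even taking the most expensive estimates, the proposed research programs cost a fraction of a percent of the harms they aim to prevent, which is the core of the argument for funding them.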

    • The Importance of Transparency and Accountability in Synthetic Biology Research: Transparency and accountability are crucial in synthetic biology research to prevent misuse and ensure trustworthiness. A ban on gain-of-function research may not be practical or effective, but improving regulatory frameworks, increasing global awareness, and fostering a culture of openness can help mitigate risks.

      The discussion highlights the importance of transparency and accountability in scientific research, particularly in the context of gain-of-function research in synthetic biology. The speaker believes that a ban on such research could reduce the global risk of synbio being used maliciously, but the opacity surrounding the origins of COVID-19 raises concerns about the trustworthiness of authoritarian governments and the need for thorough investigations. The speaker also questions whether we should consider limiting technological progress to prevent potential existential risks, but ultimately concludes that such a scenario is unlikely and not a practical solution. Instead, the focus should be on improving regulatory frameworks, increasing global awareness, and fostering a culture of openness and transparency in scientific research.

    • Navigating the complexities of technological advancements and geopolitics: Adopt a multilayered and adaptive defense strategy to address the challenges of synthetic biology, artificial intelligence, and nanotechnology, while maximizing their potential for human flourishing.

      The complex and interconnected nature of technological advancements and global geopolitics presents significant challenges, particularly in synthetic biology, artificial intelligence, and nanotechnology. The coordination problem of ensuring global cooperation and ethical use of these technologies is a major concern, as the potential for misuse or unintended consequences grows with their democratization and privatization. The future is uncertain, and the risks are vast. It is therefore essential to adopt a multilayered and adaptive defense strategy, drawing inspiration from our own immune systems. This strategy includes stopping gain-of-function research, creating industry standards for handling dangerous DNA, and continually adapting to new threats as they emerge. The potential of these technologies for human flourishing is significant, particularly in therapeutics and clean meat production, alongside broader ethical considerations. However, the risks and challenges cannot be ignored, and a proactive and agile approach is necessary to mitigate them.

    • Proposed improvements for regulating dangerous nucleic acid production: Mandatory regulation and a distributed model of DNA synthesis are proposed to make regulation more effective and cost-efficient as the ability to synthesize nucleic acids becomes cheaper and more accessible.

      The current system for regulating the production of dangerous nucleic acids, such as synthetic DNA and RNA, is a good start but needs significant improvement. Companies like Twist Bioscience, which specialize in producing long strands of error-corrected nucleic acid, have robust internal safeguards in place to ensure the benign use of dangerous sequences. However, as the ability to synthesize nucleic acids becomes cheaper and more accessible, the number of orders is growing and the relative cost of screening each order is rising. Voluntary regulation through organizations like the IGSC, which accounts for approximately 80% of the world's DNA synthesis, is not enough to protect against malicious actors. To address this, it is proposed that regulation become mandatory and that we transition to a distributed model of DNA synthesis, in which benchtop DNA synthesizers with built-in safeguards let scientists produce the nucleic acid they need in their own labs. This would reduce reliance on centralized service bureaus and make regulation more effective and cost-efficient; a simplified sketch of such screening follows.
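
      Here is a drastically simplified sketch of what order screening at a synthesis provider or benchtop device might involve. Real screening pipelines match orders against curated databases of sequences of concern using fuzzy homology search rather than exact substring matching; every sequence, name, and message below is hypothetical and for illustration only.

```python
# Drastically simplified, hypothetical sketch of DNA-order screening.
# Real systems use curated databases of sequences of concern and fuzzy
# homology search; the fragments below are made up for illustration.

SEQUENCES_OF_CONCERN = {
    "hypothetical_toxin_fragment":     "ATGGCGTTTACCGGA",
    "hypothetical_virulence_fragment": "TTGACCGGTATGCAA",
}

def screen_order(order_seq: str) -> list[str]:
    """Return the names of concerning fragments found in an ordered sequence."""
    order_seq = order_seq.upper()
    return [name for name, fragment in SEQUENCES_OF_CONCERN.items()
            if fragment in order_seq]

order = "CCCATGGCGTTTACCGGATTT"   # hypothetical customer order
flags = screen_order(order)
if flags:
    print("Hold order for human review; matched:", flags)
else:
    print("No matches against the concern list; continue with standard checks.")
```

      Embedding checks like this directly in benchtop synthesizers, rather than relying only on centralized service bureaus, is the kind of distributed safeguard the proposal describes.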

    • Advanced printers with safety protocols for synthetic biology: Investing in advanced printers with robust safety protocols reduces the risk of individuals creating dangerous biological agents in their own labs, making the required skills less accessible and the creation of complex viruses increasingly difficult.

      As technology advances, particularly in synthetic biology, there is growing potential for individuals to create dangerous biological agents in their own labs. To mitigate this risk, it is essential to invest in advanced, distributed printers with robust safety protocols built in. Such printers would make it much harder to produce dangerous DNA sequences, shrinking the pool of potential threats from a large population to a small group of experts. These printers could eventually become widespread, even in high schools, while the traditional wet-lab skills needed to build complex viruses from scratch fall out of use, making the creation of such agents from whole cloth increasingly difficult. This does not eliminate the risk entirely, but it significantly reduces it by making the required skills less accessible; moreover, the paths to self-improvement and career development tend to lead people away from creating dangerous biological agents. While some may find the intricacies of hard science fiction challenging, the potential of advanced distributed printers to reduce biological threats in synthetic biology is significant.

    • Creating self-sufficient, defended communities as a backup: Consider creating backup communities with essential resources and defenses to ensure human survival in the face of existential risks.

      As the world faces various existential risks, from natural disasters like the moon exploding to man-made threats like malignant AI, it's crucial to consider creating self-sufficient, defended communities as a backup. These communities should be equipped with necessary resources, including books and seeds, and could operate on a rotation system. While ethical questions arise regarding the potential suffering of those living in such communities, the greater good of preserving humanity as a whole may outweigh individual hardships. The conversation also touched upon the relevance of projects like Biosphere 2, which aimed to create an independent biosphere but did not involve bunkering or heavy defense. The prepper movement and wealthy individuals buying up land for potential survival bunkers were also mentioned as related phenomena.

    • The Biosphere 2 experiment, a self-sufficient ecosystem for human survival: Despite the failure of the Biosphere 2 experiment due to agricultural and atmospheric issues, the idea of creating a self-sufficient ecosystem on a larger scale for human survival remains intriguing, with potential risks and unknown unknowns to consider.

      The Biosphere 2 experiment, an ambitious project aimed at creating a self-sufficient ecosystem for human survival, ultimately failed due to agricultural and atmospheric issues. This small-scale endeavor, while charming, lacked the resources and scientific power to sustain its inhabitants for the intended two-year period. However, the idea of creating a self-sufficient ecosystem on a larger scale, such as through a national or international effort, is intriguing. While the experience might be intense, similar to spending extended periods in extreme environments like Antarctica, the potential risks, including the creation of dangerous forms of matter like ice-nine, serve as a reminder of the unknown unknowns and barely known unknowns that come with such ambitious projects. The Biosphere 2 experiment, despite failing in its execution, continues to inspire discussion about the possibilities and challenges of creating sustainable ecosystems for human survival.

    • Making Existential Risk Research More Accessible and Popular: Engage the public by involving more public-facing figures and storytellers, increase awareness, and put pressure on policymakers to take action.

      The existential risk community, including researchers and institutions, needs to prioritize making its work more accessible and popular with the public. This is crucial because decisions regarding existential risks are not made solely by a select few in cloistered intellectual communities. Public figures like Greta Thunberg have shown that a single voice can rally support for important movements. However, the current state of existential risk research is specialized and gated, which means the general public is neither engaged nor aware. To address this, more public-facing figures and storytellers, such as science fiction writers, should be involved to make the topic more relatable and compelling. The importance of the continued existence and flourishing of the human species cannot be overstated, and it is essential that we involve a broader audience in the conversation. This will not only increase awareness but also put pressure on policymakers to take action. The success of movies like "WarGames" and "Dr. Strangelove" in shaping public perception during the Cold War era demonstrates the power of storytelling in influencing society.

    • The power of stories to shape societal perspectives: Stories have the power to influence societal attitudes and raise awareness about potential threats, but it's essential to remain critical and engage with thought-provoking narratives to promote a culture of rational decision-making.

      Stories, whether novels or series, have the power to shift societal perspectives and inoculate against potential threats such as totalitarianism or superintelligent AI. In the past, works like George Orwell's "1984" and the "Terminator" franchise have played a significant role in shaping public opinion and raising awareness about the dangers of totalitarian regimes and advanced artificial intelligence. However, it's important to remember that even the most intellectually sophisticated individuals are not immune to cognitive biases and the allure of certain ideologies. Therefore, continuing to produce and engage with thought-provoking stories is crucial to promoting a culture that values critical thinking and rational decision-making. The recent series "Next," which explores the risks of a misaligned superintelligent AI, is a prime example of how storytelling can help push the cultural conversation forward.

    • Movies and culture shaping public consciousness: Movies and culture can influence public perception of existential risks, inspiring research and policy changes. Small, organized countries can prioritize and allocate resources to mitigate these risks.

      Entertainment and culture can significantly shape public consciousness and push important issues into the mainstream, even before their time. This was evident with movies like "The Terminator" and "1984," which opened people's eyes to the potential dangers of supercomputers and totalitarianism, respectively. For those passionate about existential risks, contributing to the cause can be done in various ways, including supporting research centers, applying public pressure on governments, and even influencing local education systems. Small, organized countries with a history of progressive policies, like Denmark or Estonia, could make a significant impact by making existential risk a priority and allocating substantial resources to the issue. Engaging with local officials and implementing educational programs in schools are also effective ways to spread awareness. Ultimately, the cultural side of raising awareness is just as important as academic discussions, and could potentially be even more powerful and accessible.

    • One person's impact on multiple issues: Individual actions and awareness can make a significant impact on the environment and other important issues. Persistence and hard work can lead to success in various endeavors.

      Individual actions and awareness, whether it's through policy implementation or personal efforts, can make a significant impact on the environment and other important issues. Rob Reid, a multifaceted individual, has demonstrated this through his various projects, including his podcast where he interviews world-class thinkers, his work on turning a novel into a movie, and his new investing partnership with Chris Anderson to fund startups making the world more resilient. It's inspiring to see how one person can make a difference in multiple areas and inspire others to do the same. Additionally, Reid's dedication to his podcast and music projects showcases the importance of persistence and hard work in achieving success.

