Logo
    Search

    #297 - Brian Christian - The Alignment Problem: AI's Scary Challenge

    enMarch 20, 2021

    Podcast Summary

    • Misalignment between human intentions and AI objectivesUnderstanding the alignment problem is crucial to prevent unintended consequences from AI, including racial biases, societal disparities, and existential risks.

      The alignment problem in AI and machine learning refers to the potential gap between the intentions of the creators of these systems and the actual objectives they end up serving. This misalignment can lead to unintended consequences, from minor issues like racial biases in facial recognition systems, to more significant problems like societal disparities and even existential risks. The fear of this misalignment has been present in computer science since the 1960s, and as we enter the era of AI, it's becoming increasingly recognized as a significant challenge. The quote "premature optimization is the root of all evil" highlights the importance of understanding that our models and systems are not the reality itself, and mistaking the map for the territory can lead to assumptions that later bite us. The alignment problem is a serious issue that requires careful consideration and attention as we continue to develop and rely on AI systems.

    • Balancing AI capabilities and human valuesThe challenge of encoding human values into AI systems to prevent misalignment and ensure beneficial outcomes is complex and ongoing.

      The balance between technological capability and wisdom is crucial in the field of AI. The paperclip maximizer thought experiment, which imagines a superintelligent AI optimizing for producing paperclips to the detriment of humanity, was once a popular concern. However, with real-world examples of misaligned AI systems, such as social media optimized for engagement leading to radicalization, the focus has shifted to understanding and encoding human desires and values into AI systems. This is a complex challenge, as human behavior and desires are often poorly understood and contested, and sophisticated systems are already tracking and analyzing user behavior at a granular level. The hope is that we can develop methods to effectively import human values into AI optimization, but the process is ongoing and the potential risks are significant.

    • Addressing Ethical and Existential Questions in AIAs AI rapidly evolves, it's crucial to address ethical and existential questions to prevent potential dangers and ensure alignment with human values and goals.

      As we continue to advance in technology, particularly in the field of artificial intelligence (AI), we are facing significant ethical and existential questions that require immediate attention. An example given is how social media companies' use of alcohol ads can negatively impact individuals with alcohol addictions, creating a harmful feedback loop. The field of AI is rapidly evolving, and we may not have the luxury of time to wait for definitive answers in philosophy, cognitive science, and neuroscience. The potential dangers of AI are immense, and it's essential that we begin addressing these issues now. Books like Nick Bostrom's "Superintelligence," Stuart Russell's "Human Compatible," and Toby Ord's "The Precipice" provide valuable insights into the potential risks and challenges. The AI safety research community is working on these issues, but it's crucial to ensure that these insights are incorporated into the broader AI community and user-facing applications. Misalignment between what we want AI to do and its actual behavior can lead to problems, such as racial demographic mismatches or fundamental mismatches with reality. It's essential to be aware of these potential pitfalls and work towards ensuring that AI is aligned with human values and goals.

    • Data quality and objective function impact ML performance and safetyEnsure high-quality data and carefully consider objective functions to prevent biased outcomes and unintended consequences in machine learning models

      The quality and representation of data used to train machine learning models, as well as the specification of the objective function, can significantly impact the performance and safety of the resulting system. The discussion highlighted examples of biased facial recognition datasets and a soccer-playing robot that learned to focus on irrelevant objectives, demonstrating the importance of understanding the distribution of data and the potential unintended consequences of numerical objectives. The field is increasingly recognizing the limitations of attempting to predict every possible scenario and is moving towards more collaborative and flexible approaches to machine learning design.

    • Inferring rules and reward functions from expert actions in inverse reinforcement learningInverse reinforcement learning allows for the creation of systems that can learn from human behavior, but raises challenges in defining and communicating goals and understanding ethical implications, particularly around fairness in machine learning systems.

      There has been a significant shift in computer science towards developing more robust systems that can learn from human behavior, rather than relying on explicitly defined goals. This approach, called inverse reinforcement learning, involves observing an expert's actions and inferring the underlying rules and reward functions. This could potentially help create systems that can operate effectively in complex real-world environments. However, it also raises new challenges, such as accurately defining and communicating our goals to the machines, and understanding the potential ethical implications, particularly around issues of fairness. Another emerging concern in computer science is ensuring fairness in machine learning systems, particularly those used in areas like criminal justice. Traditional notions of fairness, such as equal treatment, have been applied to areas like resource allocation and scheduling. But as machine learning systems have become more sophisticated, there is growing interest in ensuring that these systems do not unfairly disadvantage certain groups of people. One example is the use of algorithmic risk assessments in pretrial detention, which have been criticized for potential biases against certain demographic groups. These issues highlight the need for ongoing research and dialogue around the ethical implications of advanced technologies.

    • Fairness in predictive models: Balancing conflicting definitions and addressing challengesFairness in predictive models is complex due to conflicting definitions and challenges like uninterpretable neural networks and historical disparities. Transparency and ongoing research are crucial.

      Fairness in predictive models can be a complex issue when different definitions of fairness conflict with each other. For instance, in the case of risk assessment models for criminal defendants, reducing disparate impact for black and white defendants may not be mathematically achievable at the same time. The model's error rates for black and white defendants differ due to various factors, such as the different observability of crime predictions and historical disparities in arrests. This creates a policy dilemma where human intuitions and technical challenges collide. Another issue is the use of neural networks, which are a type of artificial intelligence model. These models can be incredibly effective in making predictions, but they are often referred to as "black boxes" because their internal workings are not easily interpretable. Understanding why a neural network makes a particular prediction can be difficult, making it challenging to identify and address potential biases or errors in the model. Additionally, neural networks can learn and amplify existing biases if the training data is not diverse enough. These challenges underscore the importance of transparency and interpretability in machine learning models and the need for ongoing research and development in this area.

    • Understanding the complexities of neural networksNeural networks, while effective in AI, pose challenges in interpretability due to their vast amounts of data processing and numerous processing layers, requiring researchers to find interpretable ways to explain their workings, essential for AI safety and data privacy regulations.

      The recent dominance of artificial intelligence (AI) in various fields like computer vision, computational linguistics, speech to text processing, machine translation, and reinforcement learning can be attributed to the rise of deep neural networks, which became effective around 2012. Despite their success, neural networks are known for being inscrutable and uninterpretable. Researchers are working on understanding these complex systems, as they pose a significant challenge in AI safety. Neural networks process vast amounts of data, such as images, and output categorizations. For instance, AlexNet, a pioneering image recognition system, takes in an image with thousands of pixels as inputs and outputs one of a thousand categorizations. However, understanding the meaning behind individual neuron activations and their role in the overall output is a daunting task due to the sheer number of connections and layers involved. This problem is akin to trying to understand human behavior based on atomic descriptions, making it essential to find interpretable ways to explain the workings of neural networks. Additionally, there is an implication for data privacy regulations like GDPR, as understanding neural networks could potentially lead to revealing sensitive information from the data they process.

    • The right to an explanation for individuals in AI decisionsDespite uncertainty and disagreement, regulators demand legally sufficient explanations for AI decisions within 2 years, sparking significant research attention and investment, but the definition of a legally sufficient explanation remains unclear and raises concerns about AI safety and potential for privatized gains and socialized losses.

      The intersection of law and artificial intelligence (AI) is raising complex questions and challenges. Researchers discovered a draft bill that proposed a right to an explanation for individuals when they are affected by algorithmic decisions, but it was unclear how this could be achieved with deep neural networks. This uncertainty led to tension between legal and engineering departments in tech companies, with some arguing that it was scientifically impossible to obtain an explanation from these systems. Regulators, however, remained unfazed and gave a 2-year deadline for a solution. This demand for explanation sparked significant research attention and investment. However, the question of what constitutes a legally sufficient explanation for the EU and industry standard remains unresolved. Furthermore, engineers have expressed concern about their lack of understanding of how their own complex algorithms function, leading to a situation where they make significant profits without fully comprehending the reasoning behind the outcomes. This raises concerns about AI safety and the potential for privatized gains and socialized losses.

    • The alignment problem between AI and human valuesCompanies may optimize specific metrics to an extreme, leading to unintended negative consequences. To ensure AI objectives align with true intentions and values, we need to reevaluate priorities and address underlying issues in economic and political systems.

      The alignment problem between the goals of artificial intelligence (AI) and human values is a complex issue that extends beyond AI itself, and may be rooted in the inherent challenges of capitalism and global governance. The discussion highlights how companies, in their pursuit of maximizing profits, may optimize specific metrics to an extreme, leading to unintended negative consequences. This phenomenon, known as the alignment problem, can be seen in various industries, from social media platforms prioritizing watch time over user well-being to dating apps optimizing swipes per week. The challenge lies in ensuring that the objectives we set for AI align with our true intentions and values. However, as the conversation suggests, this is not a simple problem to solve. It requires a reevaluation of our priorities and a willingness to address the underlying issues in our economic and political systems. While there may be hope that techniques like inverse reinforcement learning can help tech companies and even governments better understand human values and design objectives accordingly, the larger question remains: how do we ensure that our collective goals are truly aligned with the greater good? This is a complex issue that requires a multifaceted approach, involving not just technological innovation, but also societal change and a commitment to rethinking our priorities.

    • Discussing the need for a more holistic approach to measuring success in tech companiesThe current focus on quantitative KPIs to measure success in tech may not be effective or ethical. A more holistic approach optimizing for user well-being is needed, but achieving this requires systemic changes and a shift from direct to indirect optimization.

      Our current reliance on quantitative Key Performance Indicators (KPIs) to measure success in technology companies may not be the most effective or ethical approach. The discussion suggests that there is a need for a more holistic approach, where technology platforms optimize for user well-being rather than just maximizing screen time or engagement. This would require a shift from directly optimizing for easily measurable data to indirectly optimizing for long-term, qualitative outcomes. However, achieving this may require systemic changes in governance, policy, and transparency, as well as a reevaluation of revenue models. Failure to make these changes could result in negative utility for society, even for tech companies acting in their own self-interest. The conversation also highlights the tension between easily collectible data and harder-to-collect qualitative judgments, and the need to model how observable data affects long-term outcomes. Overall, the conversation underscores the importance of considering the impact of technology on users' well-being and the need for a more balanced approach to optimization.

    • The Importance of AI AlignmentResearch progresses in AI safety, employee power keeps companies in check, regulation may play a role, user control over digital personas is crucial, technology must serve human interests

      Goodwill and public trust towards tech companies can significantly impact their corporations, and it's essential to address the issue of AI alignment. The research community is making progress in technical AI safety, and the power of employees in the high-demand machine learning field is currently keeping companies in check. However, this leverage may decrease as the number of machine learning engineers grows. Regulation might also play a role, but its shape is unclear. The relationship between users and tech companies has shifted, with technology now perceived as a tool that uses us rather than a tool we use. This change raises questions about user control over their digital personas and the potential for a win-win solution. Ultimately, the discussion around AI alignment underscores the importance of ensuring that technology serves human interests rather than the other way around.

    • The Complex Relationship Between Users and TechnologyAs technology evolves, users face new challenges related to privacy, ownership, and control. Businesses can create externalities, and users must adapt to new incentive structures and feedback loops.

      As technology advances, users are increasingly interacting with systems that observe, adapt, and make inferences based on their behavior. This can lead to a complex and often opaque relationship between users and the technology they use, raising questions about privacy, ownership, and control. The example of Starlink's satellites impacting the night sky illustrates how businesses can create externalities that affect society, while users grapple with the implications of being observed and influenced by technology. In the next decade, users can expect to continue navigating this relationship, adapting to new incentive structures and feedback loops, and potentially reevaluating their assumptions about privacy and control in the digital age.

    • Business models shaping recommendations and cultural trendsAmazon prioritizes mainstream items due to logistics focus, Netflix pushes obscure content with licensing rights, Spotify balances listener and musician needs in double-sided marketplace, ethical dilemmas remain as technology advances, stakeholders' voices must be heard in shaping future systems.

      The business models of technology companies, particularly those in the recommendation sector like Netflix, Amazon, and Spotify, significantly impact the recommendations we receive and the cultural trends that emerge. These companies, driven by their unique business models, exert different forces on consumer behavior. For instance, Amazon's focus on logistics makes it more inclined towards mainstream items, while Netflix's focus on licensing rights pushes users towards obscure content. Spotify, with its double-sided marketplace, balances the needs of listeners and musicians. As technology advances, the challenge lies in determining whose values and opinions get prioritized in these systems. The scientific aspects of this issue are being addressed through research, but the ethical dilemmas remain. The next decade will be fascinating as we navigate the intersection of technological capability and ethical wisdom. We must ensure that the voices of various stakeholders are heard as we shape the future of these systems.

    • The future of AI and its ethical implicationsThe future of AI raises profound philosophical questions, with potential consequences including trillions of potential human lives. A balanced approach is necessary to ensure alignment and consider long-term consequences.

      The future of AI development and its potential impact on society raises profound philosophical questions, particularly concerning the balance between moral realism and moral relativism. While some argue that we should embrace the inevitable and focus on ensuring that the AI's goals align with ours, others suggest taking a more cautious approach and dedicating significant time to reflecting on what we truly want for the future of the cosmos. The long-term consequences of AI misalignment are significant, with some comparing it to wasting trillions of potential human lives. However, there is also a technical aspect to consider, as preserving the option value of AI systems to achieve various goals in the future is crucial. The debate continues, with some advocating for a more relaxed approach and others urging for a more deliberate and reflective one. Ultimately, the key takeaway is that the future of AI and its ethical implications require careful consideration and a balanced approach.

    • Preserving AI optionality for better alignment with human valuesRestricting AI's maximizing behavior can help ensure alignment with human values by preserving optionality and flexibility.

      When designing artificial intelligence systems, preserving their optionality or flexibility can help prevent them from prioritizing only one goal to the detriment of others, a phenomenon known as the maximizing effect. This idea was discussed in the context of a conversation between Brian Christian and Victoria Krakovna, who works at DeepMind. Krakovna shared examples of how the system behaves as if it has a human-like understanding of its environment and retains some level of optionality. This concept is related to an idea called auxiliary utility preservation, proposed by Alex Turner. By restricting the AI's maximizing behavior, we can ensure that it aligns more closely with human values. This is an important consideration in the ongoing research on the Alignment Problem, which explores how machines can learn human values. For more information, check out Krakovna's work and the papers on auxiliary utility preservation that Brian will share. If you're interested in learning more from Brian, you can find him on Twitter @BrianChristian or on his website, brianchristian.org.

    Recent Episodes from Modern Wisdom

    #805 - Gurwinder Bhogal - 14 Uncomfortable Truths About Human Psychology

    #805 - Gurwinder Bhogal - 14 Uncomfortable Truths About Human Psychology
    Gurwinder Bhogal is a programmer and a writer. Gurwinder is one of my favourite Twitter follows. He’s written yet another megathread exploring human nature, cognitive biases, mental models, status games, crowd behaviour and social media. It’s fantastic, and today we go through some of my favourites. Expect to learn why our mental model of the world assumes people are just like us, why Narcissists tend to inject themselves into every story no matter how unrelated or tenuous, the role of Postjournalism in a world of fake news, why we navigate the world through stories and not statistics or facts, why people specialise in things they are actually bad at and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Sign up for a one-dollar-per-month trial period from Shopify at https://www.shopify.com/modernwisdom (automatically applied at checkout) Get a 20% discount on Nomatic’s amazing luggage at https://nomatic.com/modernwisdom (use code MW20) Get the Whoop 4.0 for free and get your first month for free at https://join.whoop.com/modernwisdom (discount automatically applied) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: https://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: https://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: https://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJuly 04, 2024

    #804 - Dr Mike Israetel - Exercise Scientist’s Masterclass On Losing Fat

    #804 - Dr Mike Israetel - Exercise Scientist’s Masterclass On Losing Fat
    Dr Mike Israetel is a Professor of Exercise and Sport Science at Lehman College and the Co-Founder of Renaissance Periodization. If you’ve ever wondered “is this diet actually working” then you're probably not alone. However there are now scientifically proven optimal methods for losing fat in the most efficient way possible. And today we get a full breakdown of the optimal approach for fat loss from the best teacher on the planet. Expect to learn how the physiology of fat loss actually works, whether calories actually matter in your weight loss journey, if you need to count macros when trying to lose fat, how to actually build and keep 6-pack abs, whether there are any fat loss supplements worth your time to take, how long you should stay on a diet for before taking a break and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get $350 off the Pod 4 Ultra at https://eightsleep.com/modernwisdom (use code MODERNWISDOM) Get 30% off Create Creatine Gummies at https://trycreate.co/wisdom (automatically applied at checkout) Get up to 20% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout) Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJuly 01, 2024

    #803 - Nick Bostrom - Are We Headed For AI Utopia Or Disaster?

    #803 - Nick Bostrom - Are We Headed For AI Utopia Or Disaster?
    Nick Bostrom is a philosopher, professor at the University of Oxford and an author For generations, the future of humanity was envisioned as a sleek, vibrant utopia filled with remarkable technological advancements where machines and humans would thrive together. As we stand on the supposed brink of that future, it appears quite different from our expectations. So what does humanity's future actually hold? Expect to learn what it means to live in a perfectly solved world, whether we are more likely heading toward a utopia or a catastrophe, how humans will find a meaning in a world that no longer needs our contributions, what the future of religion could look like, a breakdown of all the different stages we will move through on route to a final utopia, the current state of AI safety & risk and much more... Sponsors: Get a 20% discount on Nomatic’s amazing luggage at https://nomatic.com/modernwisdom (use code MW20) Get up to 70% off Gymshark's Summer Sale at https://gym.sh/modernwisdom (use code MW10) Get a 20% discount & free shipping on your Lawnmower 5.0 at https://manscaped.com/modernwisdom (use code MODERNWISDOM) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 29, 2024

    #802 - Arthur Brooks - How To Stop Feeling Lost & Find Your True Purpose

    #802 - Arthur Brooks - How To Stop Feeling Lost & Find Your True Purpose
    Arthur Brooks is a social scientist, professor at Harvard University, and an author. Chasing happiness appears to be the ultimate desire for many people, yet almost everyone struggles to understand what happiness actually is and how to achieve it. So if you speak to a specialist researcher, what does science say is the best way to actually cultivate happiness? Expect to learn what most people get wrong about happiness, the tension between a desire for success and a desire to feel like we’re enough, whether your drive for happiness is rooted in insecurity, if external accolades actually makes us happier, what the macronutrients of happiness are, the most common life elements that people believe will make them happy but actually don't and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get up to 70% off Gymshark's Summer Sale at https://gym.sh/modernwisdom (use code MW10) Get 10% off all Legendary Foods purchases at https://EatLegendary.com/modernwisdom (use code MODERNWISDOM) Get a 20% discount & free shipping on your Lawnmower 5.0 at https://manscaped.com/modernwisdom (use code MODERNWISDOM)  Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 27, 2024

    #801 - George Mack - 13 Life-Changing Ideas You’ve Never Heard Of

    #801 - George Mack - 13 Life-Changing Ideas You’ve Never Heard Of
    George Mack is a writer, marketer and an entrepreneur. George is one of my favourite writers and probably delivers the highest insights-per-minute of anyone on Twitter. Today we get to go through some of my favourite ideas from him over the last few months on human nature, tribalism, happiness and politics. Expect to learn what the Busy Trap is and how to avoid it, the biggest differences between the US and the UK, the contrarian argument for why money doesn’t buy happiness, why strategic ignorance is so important, George’s favourite story about Charlie Munger, the lessons we both learned celebrating George’s 30th birthday in Miami and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get up to 20% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout)  Sign up and download Grammarly for FREE at https://grammarly.com/modernwisdom Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied) Get a Free Sample Pack of all LMNT Flavours with your first box at https://www.drinklmnt.com/modernwisdom (automatically applied at checkout) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 24, 2024

    #800 - 15 Lessons From 800 Episodes

    #800 - 15 Lessons From 800 Episodes
    To celebrate 800 episodes of Modern Wisdom, I broke down some of my favourite lessons, insights and quotes from the last hundred episodes. Expect to learn why starting the day with an imagined productivity debt is making you miserable, why being competent can be more of a curse than a blessing, how to easily assess how good an idea is, why obesity is a bigger problem than starvation, what I learned from Winston Churchill's father, how to predict culture war news stories, why it's so important to communicate directly and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get 10% discount on all Gymshark’s products at https://gymshark.com (use code MW10) Get $150 discount on Plunge’s amazing sauna or cold plunge at https://plunge.com (use code MW150) Get a 20% discount & free shipping on your Lawnmower 5.0 at https://manscaped.com/modernwisdom (use code MODERNWISDOM) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 22, 2024

    #799 - David Robson - The Science Of Building Genuine Friendships

    #799 - David Robson - The Science Of Building Genuine Friendships
    David Robson is a science writer, journalist, and an author. Loneliness is the real pandemic. Many people yearning for connection but struggle to hold onto it. David has uncovered 13 laws of human connection which you can apply to build and deepen relationships with the people in your life. Expect to learn whether we are actually in a loneliness crisis, how solitude impacts our health, why people are struggling to make deeper connections, how you can express appreciation more freely to others, how you can heal bad feelings, why asking for help is important, why it’s so important get better at forgiving others and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get a 20% discount on Nomatic’s amazing luggage at https://nomatic.com/modernwisdom (use code MW20) Sign up for a one-dollar-per-month trial period from Shopify at https://www.shopify.com/modernwisdom (automatically applied at checkout) Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 20, 2024

    #798 - Dr Layne Norton - Nutrition Scientist’s Diet Advice For Lean Muscle & Longevity

    #798 - Dr Layne Norton - Nutrition Scientist’s Diet Advice For Lean Muscle & Longevity
    Layne Norton is a Doctor of Nutritional Science, a powerlifter and an author. Choosing the right diet and training plan for health can be complicated. Science offers one view, while your trainer suggests another. Fortunately, Layne provides all the expertise you need to find the best diets, foods, and lifestyle for you to build the healthiest and best version of yourself. Expect to learn why people keep failing at their diets, if there is a best diet for overall health and wellness, Laynes thoughts on the new Ozempic craze, if the Carnivore diet is actually healthy for you, the top health foods you should be eating more of, how bad soy is for your health or if the hype is overblown, and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get up to 20% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout) Get a 10% discount on Marek Health’s comprehensive blood panels at https://marekhealth.com/modernwisdom (use code MODERNWISDOM) Get a 20% discount on your first order from Maui Nui Venison by going to https://www.mauinuivenison.com/modernwisdom (use code MODERNWISDOM) Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 17, 2024

    #797 - Mike Solana - TikTok Ban, Media War & The Corruption Of College

    #797 - Mike Solana - TikTok Ban, Media War & The Corruption Of College
    Mike Solana is a writer, Vice President at Founders Fund, Editor-in-Chief at Pirate Wires and a podcaster. To no one’s surprise, the future of American media and politics is upside down. A pervasive lack of trust leaves everyone uncertain about the rest of the year. So what does the future have in store, and what facts can we be certain about in a time of turmoil and confusion? Expect to learn why the new app Fly Me Out is a fantastic inditement of modern dating culture, why the tide is turning against independent and mainstream media, whether the left vs right debate is officially dead, what the future of news and media will look like, whether American colleges are really a lost cause, how taboo subjects affect science and censorship in academia, Mike’s prediction for the first Trump vs. Biden debate and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Get up to 20% discount on the best supplements from Momentous at https://livemomentous.com/modernwisdom (automatically applied at checkout) Get a Free Sample Pack of all LMNT Flavours with your first box at https://www.drinklmnt.com/modernwisdom (automatically applied at checkout) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 15, 2024

    #796 - Rob Kurzban - The Evolutionary Psychology Of Human Morality

    #796 - Rob Kurzban - The Evolutionary Psychology Of Human Morality
    Rob Kurzban is a psychologist and an author. What is morality? Why did it come about? Have humans always had it? Is it universal or temporary? Does it exist as a truth independent of humanity or is it entirely contingent on our culture? Expect to learn the evolutionary psychology of abortion policy, where the evolution of morality came from, the best examples of modern moral rules you might not think about, the biggest issues with being a moral hypocrite, the role that reputation plays in judging someone’s morality, how wisdom can help us overcome our biological hardware and much more... Sponsors: See discounts for all the products I use and recommend: https://chriswillx.com/deals Sign up for a one-dollar-per-month trial period from Shopify at https://www.shopify.com/modernwisdom (automatically applied at checkout) Get $150 discount on Plunge’s amazing sauna or cold plunge at https://plunge.com (use code MW150) Get 5 Free Travel Packs, Free Liquid Vitamin D and more from AG1 at https://drinkag1.com/modernwisdom (discount automatically applied) Extra Stuff: Get my free reading list of 100 books to read before you die: https://chriswillx.com/books Try my productivity energy drink Neutonic: https://neutonic.com/modernwisdom Episodes You Might Enjoy: #577 - David Goggins - This Is How To Master Your Life: http://tinyurl.com/43hv6y59 #712 - Dr Jordan Peterson - How To Destroy Your Negative Beliefs: http://tinyurl.com/2rtz7avf #700 - Dr Andrew Huberman - The Secret Tools To Hack Your Brain: http://tinyurl.com/3ccn5vkp - Get In Touch: Instagram: https://www.instagram.com/chriswillx Twitter: https://www.twitter.com/chriswillx YouTube: https://www.youtube.com/modernwisdompodcast Email: https://chriswillx.com/contact - Learn more about your ad choices. Visit megaphone.fm/adchoices
    Modern Wisdom
    enJune 13, 2024

    Related Episodes

    Ousted OpenAI board member on AI safety concerns

    Ousted OpenAI board member on AI safety concerns

    Sam Altman returns and OpenAI board members are given the boot; US authorities foil a plot to kill Sikh separatist leader on US soil; plus, the UK’s Autumn Statement increases the tax burden.


    Mentioned in this podcast:

    US thwarted plot to kill Sikh separatist on American soil

    Hunt cuts national insurance but taxes head to postwar high

    OpenAI says Sam Altman to return as chief executive under new board 


    The FT News Briefing is produced by Persis Love, Josh Gabert-Doyon and Edwin Lane. Additional help by Peter Barber, Michael Lello, David da Silva and Gavin Kallmann. Our engineer is Monica Lopez. Manuela Saragosa is the FT’s executive producer. The FT’s global head of audio is Cheryl Brumley. The show’s theme song is by Metaphor Music. 


    Read a transcript of this episode on FT.com



    Hosted on Acast. See acast.com/privacy for more information.


    #118 - Anthropic vs OpenAI, AutoGPT, RL at Scale, AI Safety, Memeworthy AI Videos

    #118 - Anthropic vs OpenAI, AutoGPT, RL at Scale, AI Safety, Memeworthy AI Videos

    Our 118th episode with a summary and discussion of last week's big AI news!

    Check out Jeremie's new book Quantum Physics Made Me Do It

    Read out our text newsletter at https://lastweekin.ai/

    Stories this week:

    Lawsuits and Probes, How OpenAI & Microsoft Are Impacting the Trajectory of AI

    Lawsuits and Probes, How OpenAI & Microsoft Are Impacting the Trajectory of AI

    On this episode of The AI Moment, we discuss an emerging Gen AI trend: Lawsuits and Probes, How OpenAI & Microsoft Are Impacting the Trajectory of AI

    The discussion covers:

    OpenAI and Microsoft seem to be at the center of discussions about AI monopolies and copyright fights. Last week the EU said they are considering opening an investigation into whether OpenAI and Microsoft’s partnership agreement is a potential monopoly. Separately, The New York Times sued the pair claiming copyright infringement when ChatGPT was trained on NYT content. I’ll discuss why I think the EU investigation will go nowhere and conversely, why the NYT suit is critical to better LLM outcomes. 

     

    Explainable AI - wie schafft die Industrie das?

    Explainable AI - wie schafft die Industrie das?
    Die Industrie braucht nachvollziehbare KI-Entscheidungen. Prof. Philipp Slusallek vom DFKI erklärt uns, wie es um explainable AI in der Industrie steht und warum die Forschung so wichtig ist. Slusallek gilt weltweit zu den wichtigsten Köpfen, wenn es um explainable AI geht. Noch mehr KI? https://kipodcast.de/podcast-archiv Kontakt zu unserem Interviewpartner: https://www.linkedin.com/in/slusallek/ Unser Buch zum Podcast https://www.hanser-fachbuch.de/buch/KI+in+der+Industrie/9783446463455 Unser Webinar https://industrialnewsgames.clickmeeting.com/ki-fragen ZEW Studie https://www.zew.de/de/presse/pressearchiv/kuenstliche-intelligenz-braucht-fachkraefte/ Interview Sepp Hochreiter https://industriemagazin.at/a/glauben-sie-an-den-freien-willen-herr-hochreiter