
    statistical models

    Explore " statistical models" with insightful episodes like "Model Validation: Performance", "Model validation: Robustness and resilience", "S2E05: Model Based Thinking", "SCCM Pod-246 Interaction Between Fluids and Vasoactive Agents on Mortality in Septic Shock" and "SCCM Pod-246 Interaction Between Fluids and Vasoactive Agents on Mortality in Septic Shock" from podcasts like ""The AI Fundamentalists", "The AI Fundamentalists", "Quantitude", "iCritical Care: Critical Care Medicine" and "SCCM Podcast"" and more!

    Episodes (5)

    Model Validation: Performance

    Episode 9. Continuing our series on model validation, the hosts focus on performance: why we need to do statistics correctly and understand how metrics work before relying on them, so that models are evaluated in a meaningful way.

    • AI regulations, red team testing, and physics-based modeling. 0:03
    • Evaluating machine learning models using accuracy, recall, and precision. 6:52
      • The four types of results in classification: true positive, false positive, true negative, and false negative.
      • The three standard metrics are composed of these elements: accuracy, recall, and precision.
    • Accuracy metrics for classification models. 12:36
      • Precision and recall are interrelated aspects of accuracy in machine learning.
      • Using the F1 score and F-beta score in classification models, particularly when dealing with imbalanced data (see the metrics sketch after this list).
    • Performance metrics for regression tasks. 17:08
      • Handling imbalanced outcomes in machine learning, particularly in regression tasks.
      • The different metrics used to evaluate regression models, including mean squared error.
    • Performance metrics for machine learning models. 19:56
      • Mean squared error (MSE) as a metric for evaluating the accuracy of machine learning models, using the example of predicting house prices.
      • Mean absolute error (MAE) as an alternative metric, which penalizes large errors less heavily and is more straightforward to compute.
    • Graph theory and operations research applications. 25:48
      • Graph theory in machine learning, including the shortest path problem and clustering. Euclidean distance as a common measure of distance between data points.
    • Machine learning metrics and evaluation methods. 33:06
    • Model validation using statistics and information theory. 37:08
      • Entropy: its roots in statistical mechanics and thermodynamics, and its application in information theory, particularly the Shannon entropy calculation (see the sketch below).
      • The importance of aligning validation metrics with the model's use case.
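
    As a rough companion to the metrics above, here is a minimal sketch (toy data and made-up function names, not code from the episode) of how accuracy, precision, recall, F1/F-beta, MSE, and MAE can be computed from scratch:

    ```python
    # Minimal sketch of the classification and regression metrics discussed above
    # (toy data; illustrative only, not the hosts' code).

    def f_beta(precision, recall, beta=1.0):
        """F-beta score: beta > 1 weights recall more heavily, beta < 1 weights precision."""
        denom = beta ** 2 * precision + recall
        return (1 + beta ** 2) * precision * recall / denom if denom else 0.0

    def classification_metrics(y_true, y_pred):
        """Accuracy, precision, recall, and F1 from binary labels and predictions."""
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
        fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
        tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

        accuracy = (tp + tn) / len(y_true)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        return accuracy, precision, recall, f_beta(precision, recall, beta=1.0)

    def regression_metrics(y_true, y_pred):
        """Mean squared error (MSE) and mean absolute error (MAE)."""
        n = len(y_true)
        mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
        mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
        return mse, mae

    # Imbalanced toy labels and made-up house-price predictions, purely for illustration.
    print(classification_metrics([1, 0, 0, 0, 1, 0], [1, 0, 1, 0, 0, 0]))
    print(regression_metrics([300_000, 450_000, 200_000], [310_000, 400_000, 250_000]))
    ```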

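    And for the information-theory segment, a similarly minimal sketch of the Shannon entropy calculation (the example distributions are invented):

    ```python
    import math

    def shannon_entropy(probabilities):
        """Shannon entropy H = -sum(p * log2(p)), in bits; zero-probability terms are skipped."""
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A fair coin carries 1 bit of entropy; a heavily skewed coin carries much less.
    print(shannon_entropy([0.5, 0.5]))   # 1.0
    print(shannon_entropy([0.9, 0.1]))   # ~0.47
    ```
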
    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

    Model validation: Robustness and resilience

    Episode 8. This is the first in a series of episodes dedicated to model validation. Today, we focus on model robustness and resilience. From complex financial systems to why your gym might be overcrowded at New Year's, you've been directly affected by these aspects of model validation.

    AI hype and consumer trust (0:03) 

    Model validation and its importance in AI development (3:42)

    • Importance of model validation in AI development, ensuring models are doing what they're supposed to do.
    • FTC's heightened awareness of responsibility and the need for fair and unbiased AI practices.
    • Model validation (targeted, specific) vs model evaluation (general, open-ended).

    Model validation and resilience in machine learning (8:26)

    • Collaboration between engineers and businesses to validate models for resilience and robustness.
    • Resilience: how well a model handles adverse data scenarios.
    • Robustness: model's ability to generalize to unforeseen data.
    • Aerospace Engineering: models must be resilient and robust to perform well in real-world environments.

    Statistical evaluation and modeling in machine learning (14:09)

    • Statistical evaluation involves modeling a distribution without complete knowledge of it, using methods like Monte Carlo sampling.
    • Monte Carlo simulations originated in physics and are now widely used for assessing risk and uncertainty in decision-making.

    Monte Carlo methods for analyzing model robustness and resilience (17:24)

    • Monte Carlo simulations allow exploration of potential input spaces and estimation of underlying distribution.
    • Useful when analytical solutions are unavailable.
    • Sensitivity analysis and uncertainty analysis as the two major flavors of analysis (see the sketch after this list).
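
    To make the idea concrete, here is a minimal sketch of a Monte Carlo-style uncertainty analysis (the model and noise levels are invented for illustration): perturb a model's inputs many times and look at the spread of its outputs.

    ```python
    import random
    import statistics

    def toy_model(x1, x2):
        """Stand-in for any trained model; this linear form is made up for illustration."""
        return 3.0 * x1 - 2.0 * x2 + 5.0

    def monte_carlo_uncertainty(n_samples=10_000, noise_scale=0.1):
        """Sample perturbed inputs and summarize the spread of the model's outputs."""
        outputs = []
        for _ in range(n_samples):
            # Perturb nominal inputs with Gaussian noise to mimic adverse or unforeseen data.
            x1 = 1.0 + random.gauss(0.0, noise_scale)
            x2 = 2.0 + random.gauss(0.0, noise_scale)
            outputs.append(toy_model(x1, x2))
        return statistics.mean(outputs), statistics.stdev(outputs)

    mean_out, spread = monte_carlo_uncertainty()
    print(f"mean output ~ {mean_out:.2f}, spread ~ {spread:.2f}")
    ```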

    Monte Carlo techniques and model validation (21:31)

    • Versatility of Monte Carlo simulations in various fields.
    • Using Monte Carlo experiments to explore the semantic vector space of language models like GPT.
    • Importance of validating machine learning models through negative scenario analysis.

    Stress testing and resiliency in finance and engineering (25:48)

    Using operations research and model validation in AI development (30:13)

    • Operations research can help find an equilibrium for problems like gym overcrowding.
    • Robust methods for solving complex problems in logistics and healthcare.
    • Model validation's importance in addressing issues of bias and fairness in AI systems.


    Do you have a question or a discussion topic for the AI Fundamentalists? Connect with them to comment on your favorite topics:

    • LinkedIn - Episode summaries, shares of cited articles, and more.
    • YouTube - Was it something that we said? Good. Share your favorite quotes.
    • Visit our page - see past episodes and submit your feedback! It continues to inspire future episodes.

    S2E05: Model Based Thinking

    In this episode Greg and Patrick wander semi-drunkenly around the topic of model-based inference and discuss how this perspective can help move us forward as a scientific discipline. They also somehow manage to discuss explosives, sniffing glue, homemade 787s, catfish noodling, the Ikea helpline, Calvinball, Ludwig Beethoven, Rube Goldberg, hell's half acre, denouements, and intolerable hypocrisy.

    Stay in contact with Quantitude!
