
    reproducibility

    Explore " reproducibility" with insightful episodes like "Bomb Dog Research, Odor Purity, and Research Update with Dr. Lauryn DeGreeff, Dr. Michele Maughan and Jenna Gadberry", "549: Will it Nixcloud?", "Automation, Algorithms, and Research Integrity: What does a Hybrid Human-Machine Future Look Like for Academic Publishing?", "The Value of Reproducibility and Ease of AI Deployment with Daniel Lenton" and "513: There Is No Distro" from podcasts like ""K9 Detection Collaborative", "LINUX Unplugged", "Insights Xchange: Conversations Shaping Academic Research", "Open||Source||Data" and "LINUX Unplugged"" and more!

    Episodes (12)

    Bomb Dog Research, Odor Purity, and Research Update with Dr. Lauryn DeGreeff, Dr. Michele Maughan and Jenna Gadberry

    What to listen for:

    Our hosts Robin Greubel and Stacy Barnett dive headfirst into the high-stakes world of canine explosive detection with an esteemed panel of experts, Dr. Lauryn DeGreeff, Dr. Michele Maughan, and Jenna Gadberry. As they dissect the complexities of non-detonable canine training aids, you'll get a rare behind-the-scenes look at the intricate dance of odor chemistry and the safety measures paramount in training with both traditional and peroxide-based explosives. Safety for furry detectives and their handlers leads the charge in the discussion, as they navigate the laboratory and field labyrinth to analyze the effectiveness of different training aids.

    Ever pondered how a detection dog's nose works like a sophisticated bio-sensor, dissecting the world's odors? Our hosts and guests tackle the unfortunate issue of variability in training aids. With insights from Dr. Nathan Hall, they unravel the scent detection conundrum, especially when dealing with volatile compounds. Drawing from real-world applications and scientific scrutiny, this episode uncovers the essential need for multifaceted exposure and stringent training to enhance the fidelity of our four-legged bomb detectors.



    Key Topics:

    • Explosives Dogs and Training Aids (0:02:04)
    • Working with Green (naive to odor) Dogs (0:07:40)
    • Dog Training Aids, Inconsistency, and Quality Control (0:12:38)
    • Explosive TTPs (0:16:59)
    • Training Aids for Explosive Detection Dogs (0:19:03)
    • Training Aids and Quality Control (0:26:51)
    • Canine Detection Capabilities and Training Aids (0:33:40)
    • Risk Assessment around Canine Explosives Work (0:38:44)
    • Explosives and Aids Aging (0:49:32)
    • Canine Scent Detection and Research (0:58:22)



    Automation, Algorithms, and Research Integrity: What does a Hybrid Human-Machine Future Look Like for Academic Publishing?

    This episode features Nikesh Gosalia in captivating conversation with Dr. Leslie McIntosh, Vice President of Research Integrity at Digital Science and Founder of Ripeta, a software and services company with a mission to enhance the transparency, reproducibility, and quality of scientific research. Together, they delve into the dynamic intersection of artificial intelligence and research integrity.

    Dr. McIntosh discusses how as AI evolves, distinguishing genuine discoveries from synthetic data becomes increasingly challenging. She urges us to contemplate how AI is transforming the essence of research and our ability to uphold its honesty. Dr. McIntosh also shares her vision for the future, highlighting the significance of recognizing researchers who epitomize meticulous and trustworthy science.

    Talking about her initiative, Dimensions Research Integrity, she underscores the power of leveraging tools for the common good while remaining vigilant against potential biases. Through engaging anecdotes and expert insights, Dr. McIntosh invites us to reflect on how we can ensure the ongoing transparency, trustworthiness, and authenticity of scientific research amid the AI revolution.

    Tune in to gain a deeper understanding of the nuanced interplay between AI, research ethics, and the future of scholarly knowledge. 


    Nikesh Gosalia

    https://twitter.com/NikeshGo

    https://www.linkedin.com/in/nikeshgosalia/


    Leslie McIntosh

    https://twitter.com/mcintold

    https://www.linkedin.com/in/leslie-mcintosh/



    Show notes

    Thank you for listening! Here are links to some of the things that Leslie McIntosh and Nikesh discussed in the podcast. 

    We would love to hear from you, so feel free to drop a line at insightsxchange@cactusglobal.com

    Ripeta | https://ripeta.com/
    Elisabeth Bik | https://en.wikipedia.org/wiki/Elisabeth_Bik 
    World Conference on Research Integrity | https://www.wcrif.org/ 
    Washington University School of Medicine | https://medicine.wustl.edu/
    Retraction Watch | https://retractionwatch.com/ 
    Frontiers | https://www.frontiersin.org/ 
    Research Data Alliance | https://www.rd-alliance.org/ 
    Harvard Kennedy School's Shorenstein Center | https://shorensteincenter.org/
    ORCiD | https://orcid.org/ 
    arXiv | https://arxiv.org/
    Digital Science | https://www.digital-science.com/ 
    Dimensions | https://www.dimensions.ai/  

    Insights Xchange is a fortnightly podcast brought to you by Cactus Communications (CACTUS).

    The Value of Reproducibility and Ease of AI Deployment with Daniel Lenton

    This episode features an interview with Daniel Lenton, Founder and CEO of Ivy, where the team is on a mission to unify the fragmented AI stack. Prior to Ivy, Daniel was a Robotics Research Engineer at Dyson and a Deep Learning Research Scientist for Amazon Prime Air. During his PhD, Daniel explored the intersection of learning-based geometric representations, egocentric perception, spatial memory, and visuomotor control for robotics.

    In this episode, Sam and Daniel discuss the inspiration behind Ivy, open source reproducibility, and democratizing AI.

    -------------------

    "There's too much amazing stuff going on, from too many different parties. We just want to be the objective source of truth to show you the data and show you where your model will be doing best, and continue to do this as a service or something like this. This is high-level, some of the areas we see and going into, we really want to be a useful tool for anybody that wants to just kind of understand this fragmented complex space quickly and intuitively, and we are trying to be the tool that does that." – Daniel Lenton

    -------------------

    Episode Timestamps:

    (01:00): What open source data means to Daniel

    (05:37): The challenges of building Ivy

    (15:37): The future of Ivy

    (25:19): Who should know about Ivy

    (28:46): Daniel’s advice for the audience

    (32:00): Backstage takeaways with executive producer, Audra Montenegro

    -------------------

    Links:

    LinkedIn - Connect with Daniel

    Learn more about Ivy

    S2E12: Post-Publication Review -- Patrick's Truly Horrible Idea

    Greg and Patrick discuss Patrick's potentially horrible idea for post-publication review, in which a journal is created for the sole purpose of reviewing the soundness of the quantitative methods used in papers that have already been published in flagship journals. Harebrained scheme, or sheer genius? You be the judge. Along the way they also mention driving without headlights, homemade explosives, modified wheelchairs, the IJCC, guacamole at Margaret's Cantina, dishwashers and JetDry, the IAD, blind pigs and acorns, the 15 commandments, and Jamaican bobsleds.

    Stay in contact with Quantitude!

    S2E11: The Replication ... Dilemma: A Conversation with Samantha Anderson

    Today Patrick & Greg step off-camera and pull in Dr. Samantha Anderson who is a quantitative psychologist at Arizona State University and is an expert in all things related to the so-called “replication crisis.” They are uncharacteristically quiet as she talks about the past, present, and future of replication in the social sciences. Along the way they also discuss: Ask Sammy, bioluminescence, cheat day, podcast pre-registration, music that makes you younger, Defcon 5, or maybe 1, or 3, mother-in-laws, air quotes, and the queen of the world. 

    Stay in contact with Quantitude!

    Turning Over a New Leif with Leif Nelson

    In this episode, we talk to Leif Nelson, professor at the Haas School of Business at UC Berkeley. We discuss open science, the False Positive Psychology paper and its aftermath, current Data Colada replication efforts, and Paul's undying fascination with Leif.


    False Positive Psychology: https://journals.sagepub.com/doi/full/10.1177/0956797611417632
    Data Colada (blog by Leif, Uri Simonsohn, and Joe Simmons): http://datacolada.org/
    AsPredicted: https://aspredicted.org/

    78: Large-scale collaborative science (with Lisa DeBruine)

    In this episode, we chat with Lisa DeBruine (University of Glasgow) about her experience with large-scale collaborative science and how her psychology department made the switch from SPSS to R.

    Discussion points and links galore:

    • Deborah Apthorp's tweet on having to teach SPSS "because that's what students know" (https://twitter.com/deborahapthorp/status/1092599860212068352)
    • People involved with teaching R for psychology at the University of Glasgow: @Eavanmac @dalejbarr @McAleerP @clelandwoods @PatersonHelena @emilynordmann
    • Why the #psyTeachR team started teaching R for reproducible science
    • Data wrangling vs. statistical analysis
    • The psyTeachR website (https://psyteachr.github.io)
    • Danielle Navarro (https://djnavarro.net) and her R textbook (https://learningstatisticswithr.com) that you should read
    • Lisa's "faux" package (https://github.com/debruine/faux) for data simulation
    • Sometimes you can't share data; simulations are a good way around this problem (a rough sketch of this idea follows these notes)
    • "synthpop" is the package (https://cran.r-project.org/web/packages/synthpop/vignettes/synthpop.pdf) that Dan mentioned, which can simulate census data
    • Power analysis can be hard once you go beyond the more conventional statistical tests (e.g., t-tests, ANOVAs, etc.)
    • Lisa's OSF page (https://osf.io/4i578/)
    • Dirty code is always better than no code (but the cleaner the better)
    • Live coding is terrifying but a useful teaching tool. Here's Dan live coding how to build a website in R (https://twitter.com/dsquintana/status/1070392412445401088), typos and all
    • Using a Slack group for help
    • The Psychological Science Accelerator (https://psysciacc.org)
    • Chris Chartier (Psych Science Accelerator Director) on Twitter (https://twitter.com/CRChartier)
    • A few of the other (hundreds of) folks involved with the Psych Science Accelerator: @PsySciAcc @CRChartier @BenCJ @JkayFlake @hmoshontz
    • Lisa's Registered Report project (https://osf.io/f7v3n/) on face rating
    • The challenges associated with collaborating with 100+ labs
    • Authorship order
    • Author contributions: the CRediT taxonomy (http://dev.biologists.org/content/author-contributions)
    • The DARPA-funded project (https://www.wired.com/story/darpa-wants-to-solve-sciences-replication-crisis-with-robots/) on using AI to determine reproducibility
    • Interacting Minds workshop (http://interactingminds.au.dk/events/single-events/artikel/2-day-workshop-open-science-and-reproducibility/) in Denmark in March on open science and reproducibility
    • Lisa shares what Glasgow is like
    • Lisa has changed her mind about the importance of research metrics (h-index, impact factors, etc.)
    • Lisa thinks you should read this paper (https://journals.sagepub.com/doi/abs/10.1177/2515245918770963) on equivalence testing, which includes two former guests, Daniel Lakens (https://everythinghertz.com/guests/daniel-lakens) and Anne Scheel (https://everythinghertz.com/guests/anne-scheel), and friend of the show Peder Isager
    • Here's the latest episode (https://anchor.fm/psychsococlock/episodes/Making-and-breaking-habits---Psych-Soc-OClock---Episode-4-e3327v) from Psych Soc O'Clock

    Other links:
    • Dan on Twitter (www.twitter.com/dsquintana)
    • James on Twitter (www.twitter.com/jamesheathers)
    • Everything Hertz on Twitter (www.twitter.com/hertzpodcast)
    • Everything Hertz on Facebook (www.facebook.com/everythinghertzpodcast/)

    Music credits: Lee Rosevere (freemusicarchive.org/music/Lee_Rosevere/)

    Support us on Patreon (https://www.patreon.com/hertzpodcast) and get bonus stuff!
    • $1 a month or more: Monthly newsletter + access to behind-the-scenes photos & video via the Patreon app + the warm feeling you're supporting the show
    • $5 a month or more: All the stuff you get in the first tier PLUS a bonus mini episode every month (extras + the bits we couldn't include in our regular episodes)

    Episode citation and permanent link: Quintana, D.S., & Heathers, J.A.J. (Hosts). (2019, February 18). "Large-scale collaborative science (with Lisa DeBruine)", Everything Hertz [Audio podcast], doi: 10.17605/OSF.IO/JDT6F (https://osf.io/jdt6f/)

    Special Guest: Lisa DeBruine.
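    As a rough illustration of the data-simulation idea mentioned in the notes above, here is a minimal Python sketch. It is not code from the episode (the episode's examples involve the R packages faux and synthpop), and the summary statistics below are made-up values used purely for illustration. The idea: generate a shareable synthetic dataset whose means, standard deviations, and correlation approximate those of a dataset that cannot be released.

        # Minimal sketch: simulate a synthetic two-variable dataset from shareable
        # summary statistics (means, SDs, correlation), in the spirit of the R
        # packages "faux" and "synthpop" discussed in the episode.
        import numpy as np

        rng = np.random.default_rng(42)  # fixed seed so the simulation is reproducible

        # Summary statistics you are allowed to share (illustrative, made-up values)
        means = np.array([100.0, 50.0])
        sds = np.array([15.0, 10.0])
        r = 0.3  # correlation between the two variables

        # Build the covariance matrix implied by the SDs and correlation,
        # then draw a synthetic sample of 200 "participants"
        cov = np.array([[sds[0] ** 2, r * sds[0] * sds[1]],
                        [r * sds[0] * sds[1], sds[1] ** 2]])
        synthetic = rng.multivariate_normal(means, cov, size=200)

        print(synthetic.mean(axis=0))           # close to the shared means
        print(np.corrcoef(synthetic.T)[0, 1])   # close to the shared correlation

    The same simulate-then-analyze loop is also the usual workaround when power analysis goes beyond conventional tests: simulate many datasets under assumed effect sizes and count how often the planned analysis detects the effect.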

    Rigor, Reproducibility, and Transparency in Research

    This podcast aims to help the extramural research community better understand the NIH’s Rigor and Transparency policy. Dr. Patricia Valdez, NIH’s Extramural Research Integrity Officer, describes how to address the key policy elements in an application, including rigor of the prior research, rigorous experimental design, consideration of key biological variables, and authentication of key biological and/or chemical resources; how these elements are considered during peer review; and how they figure in annual progress reporting following award.