The Safety of Work

    Explore "safety of work" with insightful episodes like "Ep. 114 How do we manage safety for work from home workers?", "Ep. 113 When are seemingly impossible goals good for performance?", "Ep 112 How biased are incident investigators?", "Ep. 111 Are management walkarounds effective?" and "Ep. 109 Do safety performance indicators mean the same thing to different stakeholders?" from podcasts like ""The Safety of Work", "The Safety of Work", "The Safety of Work", "The Safety of Work" and "The Safety of Work"" and more!

    Episodes (49)

    Ep. 114 How do we manage safety for work from home workers?


We delve into the role of leadership in addressing psychosocial hazards, the importance of standardized guidance for remote work, and the challenges line managers face in managing remote workers. We wrap up the episode by providing a toolkit for managers to navigate the challenges of remote work, and highlight the need for tailored safety strategies for different work arrangements.

     

    Discussion Points:

    • Different work-from-home arrangements
    • Safety needs of work from home
    • Challenges of remote worker representation
    • Understanding and managing psychosocial risks
    • Leadership and managing technical risks
    • Remote work challenges and physical presence
    • Practical takeaways and general discussion
    • Safety strategies for different work arrangements
    • The answer to our episode’s question – the short answer is that there definitely isn't a short answer. But this paper comes from a larger project, and I know that the people who did the work have gathered together a list of existing resources and toolboxes, and they've even created a few prototype tools and training packages

    Quotes:

    "There's a risk that we're missing important contributions from workers with different needs, neurodiverse workers, workers with mental health issues, workers with particular reasons for working at home and we’re not going to be able to comment on the framework and how it might affect them." - Drew 

    “When organizations' number of incident reports go up and up and up and we struggle to understand, is that a sign of worsening safety or is that a sign of better reporting?” - David

“They do highlight just how inconsistent organisations’ approaches are and perhaps the need for just some sort of standardised guidance on what an organisation is responsible for when you ask to work from home, or when they ask you to work from home.” - Drew

    “I think a lot of people's response to work from home is let's try to subtly discourage it because we're uncomfortable with it, at the same time as we recognise that it's probably inevitable.” - Drew

     

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 113 When are seemingly impossible goals good for performance?


The conversation stems from a review of a noteworthy paper from the Academy of Management Review titled "The Paradox of Stretch Goals: Organizations in Pursuit of the Seemingly Impossible," which offers invaluable insights into the world of goal setting in senior management.

     

    Discussion Points:

    • The concept of seemingly impossible goals in organizations
    • Controversial nature and impact of ‘zero harm’
    • The role of stretch goals in promoting innovation
    • Potential negative effects of setting stretch goals
    • Psychological effects of ambitious organizational targets
    • Paradoxical outcomes of setting seemingly impossible goals
    • The role of emotions in achieving stretch goals
    • Factors that contribute to the success of stretch goals
    • Real-world examples of successful stretch goal implementation
    • Cautions against blind imitation of successful stretch goal strategies
    • The concept of zero harm in safety initiatives
    • Need for long-term research on zero harm effectiveness
    • The answer to our episode’s question – they're good when the organization is currently doing well enough, but stretch goals are not good when the organization is struggling and trying to turn a corner using that stretch goal.

     

    Quotes:

    "The basic idea [of ‘zero harm’] is that companies should adopt a visionary goal of having zero accidents. Often that comes along with commitment statements by managers, sometimes by workers as well that everyone is committed to the vision of having no accidents." - Drew 

    “I think organizations are in this loop, where I know maybe I can't achieve zero, but I can't say anything other than zero because that wouldn't be moral or responsible, because I'd be saying it's okay to hurt people. So I set zero because it's the best thing for me to do.” - David

    “The “stretch goal” was credited with the introduction of hybrid cars. You've got to have a whole new way of managing your car to get that seemingly impossible goal of doubling your efficiency.”-  Drew

     

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

Ep. 112 How biased are incident investigators?


    You’ll hear David and Drew delve into the often overlooked role of bias in accident investigations. They explore the potential pitfalls of data collection, particularly confirmation bias, and discuss the impacts of other biases such as anchoring bias and hindsight bias. Findings from the paper are examined, revealing insights into confirmation bias and its prevalence in interviews. Strategies for enhancing the quality of incident investigations are also discussed, emphasizing the need to shift focus from blaming individuals to investigating organizational causes. The episode concludes with the introduction of Safety Exchange, a platform for global safety community collaboration.

     

    Discussion Points:

    • Exploring the role of bias in accident investigations
    • Confirmation bias in data collection can validate initial assumptions
    • Review of a study examining confirmation bias among industry practitioners
    • Anchoring bias and hindsight bias on safety strategies
    • Recognizing and confronting personal biases 
    • Counterfactuals in steering conversations towards preconceived solutions
    • Strategies to enhance the quality of incident investigations
    • Shifting focus from blaming individuals to investigating organizational causes
    • Safety Exchange - a platform for global safety community
    • The challenges organizations face when conducting good quality investigations
    • Standardization, trust, and managing time and production constraints
    • Confirmation bias in shaping investigation outcomes
    • Techniques to avoid bias in accident investigations and improve their quality
    • Safety Exchange - a safe place for open discussion
    • Six key questions
    • The answer to our episode’s question – Very, and we all are as human beings. It does mean that we should probably worry more about the data collection phase of our investigations than about the causal analysis methodology and taxonomy that we concern ourselves with

     

    Quotes:

    "If we actually don't understand how to get a good data collection process, then it really doesn't matter what happens after that." - David 

    "The trick is recognizing our biases and separating ourselves from prior experiences to view each incident with fresh eyes." - Drew

    "I have heard people in the industry say this to me, that there's no new problems in safety, we've seen them all before." - David

    "In talking with people in the industry around this topic, incident investigation and incident investigation quality, 80% of the conversation is around that causal classification taxonomy." - David

     

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 111 Are management walkarounds effective?


The research paper discussed is by Anita Tucker and Sara Singer, titled "The Effectiveness of Management by Walking Around: A Randomized Field Study," published in Production and Operations Management.

     

    Discussion Points:

    • Understanding senior leadership safety visits and management walkarounds
    • Best practices for safety management programs
    • How management walkarounds influence staff perception
    • Research findings comparing intervention and control groups
    • Consequences of management inaction
    • Effective implementation of changes 
    • Role of senior managers in prioritizing problems
    • Impact of patchy implementation
    • How leadership visits affect staff perception
    • Investigating management inaction 
    • Effective implementation and consultation
    • Key Takeaways:
    • The same general initiative can have very different effectiveness depending on how it's implemented and who's implementing it
    • When we do any sort of consultation effort, whether it's forums, walkarounds, reporting systems, or learning teams, what do we judge those on? Do we judge them on their success at consulting or do we judge them on their success at generating actions that get taken?
    • The answer to our episode’s question – sometimes yes, sometimes no. It depends on the resulting actions.

     

    Quotes:

    "I've definitely lived and breathed this sort of a program a lot during my career." - David

    "The effectiveness of management walkarounds depends on the resulting actions." - David

    "The worst thing you can do is spend lots of time deciding what is a high-value problem." - Drew

    "Having the senior manager allocated really means that something serious has been done about it." - Drew

    "The individual who walks around with the leader and talks about safety with the leader, thinks a lot better about the organization." - David

     

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 109 Do safety performance indicators mean the same thing to different stakeholders?



    Dr. Drew Rae and Dr. David Provan

     

    The abstract reads:

    Indicators are used by most organizations to track their safety performance. Research attention has been drawn to what makes for a good indicator (specific, proactive, etc.) and the sometimes perverse and unexpected consequences of their introduction. While previous research has demonstrated some of the complexity, uncertainties and debates that surround safety indicators in the scientific community, to date, little attention has been paid to how a safety indicator can act as a boundary object that bridges different social worlds despite being the social groups’ diverse conceptualization. We examine how a safety performance indicator is interpreted and negotiated by different social groups in the context of public procurement of critical services, specifically fixed-wing ambulance services. The different uses that the procurer and service providers have for performance data are investigated, to analyze how a safety performance indicator can act as a boundary object, and with what consequences. Moving beyond the functionality of indicators to explore the meanings ascribed by different actors, allows for greater understanding of how indicators function in and between social groups and organizations, and how safety is more fundamentally conceived and enacted. In some cases, safety has become a proxy for other risks (reputation and financial). Focusing on the symbolic equivocality of outcome indicators and even more tightly defined safety performance indicators ultimately allows a richer understanding of the priorities of each actor within a supply chain and indicates that the imposition of oversimplified indicators may disrupt important work in ways that could be detrimental to safety performance.

     

    Discussion Points:

    • What we turn into numbers in an organization
    • Background of how this paper came about
    • Four main groups - procurement, incoming operator, outgoing operator, pilots
    • Availability is key for air ambulances
    • Incentivizing availability
    • Outgoing operators/providers feel they lost the contract unfairly
    • The point of view of the incoming operators/providers 
    • Military pilots fill in between providers
    • Using numbers to show how good/bad the service is
    • Pilots - caught in the middle
    • Contracts always require a trade-off
    • Boundary objects- what does availability mean to different people?
    • Maximizing core deliverables safely
    • Problems with measuring availability
    • Pressure within the system
    • Putting a number on performance 
    • Takeaways:
    • Choice of a certain metric that isn’t what you need leads to perverse behavior
    • Placing indicators on things can make other things invisible
    • Financial penalties tied to indicators can be counterproductive
    • The answer to our episode’s question – Yes, metrics on the boundaries can communicate in different directions

     

    Quotes:

    “The way in which we turn things into numbers reveals a lot about the logic that is driving the way that we act and give meaning to our actions.” - Drew

    “You’ve got these different measures of the service that are vastly different, depending on what you’re counting, and what you’re looking for..” - David

    “The paper never draws a final conclusion - was the service good, was the service bad?” - Drew

    “The pilots are always in this sort of weird, negotiated situation, where ‘doing the right thing’ could be in either direction.” - Drew

    “If someone’s promising something better, bigger, faster and cheaper, make sure you take the effort to understand how that company is going to do that….” - David 

     

    Resources:

    Link to the Paper 

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 108 Could a 4 day work week improve employee well-being?


    This report details the full findings of the world’s largest four-day working week trial to date, comprising 61 companies and around 2,900 workers, that took place in the UK from June to December 2022. The design of the trial involved two months of preparation for participants, with workshops, coaching, mentoring and peer support, drawing on the experience of companies who had already moved to a shorter working week, as well as leading research and consultancy organisations. The report results draw on administrative data from companies, survey data from employees, alongside a range of interviews conducted over the pilot period, providing measurement points at the beginning, middle, and end of the trial.

     

    Discussion Points:

    • Background on the five-day workweek
    • We’ll set out to prove or review two central claims:
    • Reduce hours worked, and maintain same productivity
    • Reduced hours will provide benefits to the employees
    • Digging in to the Autonomy organization and the researchers and authors
    • Says “trial” but it’s more like a pilot program
    • 61 companies, June to December 2022
    • Issues with methodology - companies will change in 6 months coming out of Covid – a controlled trial would have been better
    • The pilot only includes white collar jobs - no physical, operational, high-hazard businesses
    • The revenue numbers
    • Analysing the staff numbers- how many filled out the survey? What positions did the respondents hold in the company?
    • Who experienced positive vs. negative changes in individual results
    • Interviews from the “shop floor” were actually with CEOs and office staff
    • Eliminating wasted time from the five-day week
    • What different companies preferred employees to do with their ‘extra time’
    • Assumption 1: there is a business use case benefit- not true
    • Assumption 2: benefits for staff - mixed results
    • Takeaways:
    • Don’t use averages
    • Finding shared goals can be good for everyone
    • Be aware of burden-shifting
    • The answer to our episode’s question – It’s a promising idea, but results are mixed, and it requires more controlled trial research

     

    Quotes:

“It’s important to note that this is a pre-Covid idea, this isn’t a response to Covid.” - Drew

    “...there's a reason why we like to do controlled trials. That reason is that things change in any company over six months.” - Drew

    “ …a lot of the qualitative data sample is very tiny. Only a third of the companies got spoken to, and only one senior representative who was already motivated to participate in the trial, would like to think that anything that their company does is successful.” - David

    “I'm pretty sure if you picked any company, you're taking into account things like government subsidies for Covid, grants, and things like that. Everyone had very different business in 2021-2022.” - Drew

    “We're not trying to accelerate the pace of work, we're trying to remove all of the unnecessary work.” - Drew

    “I think people who plan the battle don't battle the plan. I like collaborative decision-making in general, but I really like it in relation to goal setting and how to achieve those goals.” - David

     

    Resources:

    Link to the Pilot Study

    Autonomy

    The Harwood Experiment Episode

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

Ep. 107 What research is needed to implement the Safe Work Australia WHS strategy?


    Summary: 

    The purpose of the Australian Work Health and Safety (WHS) Strategy 2023–2033 (the Strategy) is to outline a national vision for WHS — Safe and healthy work for all — and set the platform for delivering on key WHS improvements. To do this, the Strategy articulates a primary goal supported by national targets, and the enablers, actions and system-wide shifts required to achieve this goal over the next ten years. This Strategy guides the work of Safe Work Australia and its Members, including representatives of governments, employers and workers – but should also contribute to the work and understanding of all in the WHS system including  researchers, experts and practitioners who play a role in owning, contributing to and realising the national vision.

     

    Discussion Points:

    • Background on Safe Work Australia 
    • The strategy includes six goals for reducing:
    • Worker fatalities caused by traumatic injuries by 30%          
    • The frequency rate of serious claims resulting in one or more weeks off work by 20%       
    • The frequency rate of claims resulting in permanent impairment by 15%    
    • The overall incidence of work-related injury or illness among workers to below 3.5%         
    • The frequency rate of work-related respiratory disease by 20% 
    • No new cases of accelerated silicosis by 2033
    • The strategy is a great opportunity to set a direction for research and education
    • Five actions covered by the strategy:
    • Information and raising awareness
    • National Coordination
    • Data and intelligence gathering
    • Health and safety leadership
    • Compliance and enforcement
    • When regulators fund research - they demand tangible results quickly
    • Many safety documents and corporate safety systems never reach the most vulnerable workers, who don’t have ‘regular’ long-term jobs
    • Standardization can increase unnecessary work
    • When and where do organizations access safety information?
    • Data - AI use for the future
    • Strategy lacks milestones within the ten-year span
    • Enforcement - we don’t have evidence-based data on the effects
    • Takeaways:
    • The idea of a national strategy? Good.
    • Balancing safety with innovation, evidence
    • Answering our episode question: we need research into specific workforces and into the evidence behind specific industry issues – in short, “Lots of research is needed!”

     

    Quotes:

    “The fact is, that in Australia, traumatic injury fatalities - which are the main ones that they are counting - are really quite rare, even if you add the entire country together.” - Drew

    “I really see no point in these targets. They are not tangible, they’re not achievable, they’re not even measurable, with the exception of respiratory disease…” - Drew

    “These documents are not only an opportunity to set out a strategic direction for research and policy, and industry activity, but also an opportunity to educate.” - David

    “When regulators fund research, they tend to demand solutions. They want research that’s going to produce tangible results very quickly.” - Drew

    “I would have loved a concrete target for improving education and training- that is something that is really easy to quantify.” - Drew

     

    Resources:

    Link to the strategy document

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 106 Is it possible to teach critical thinking?


    Baron's work focuses primarily on judgment and decision-making, a multi-disciplinary area that applies psychology to problems of ethical decisions and resource allocation in economics, law, business, and public policy. 

     

    The paper’s summary:

Recent efforts to teach thinking could be unproductive without a theory of what needs to be taught and why. Analysis of where thinking goes wrong suggests that emphasis is needed on 'actively open-minded thinking', including the effort to search for reasons why an initial conclusion might be wrong, and on reflection about rules of inference, such as heuristics used for making decisions and judgments. Such instruction has two functions. First, it helps students to think on their own. Second, it helps them to understand the nature of expert knowledge, and, more generally, the nature of academic disciplines. The second function, largely neglected in discussions of thinking instruction, can serve as the basis for thinking instruction in the disciplines. Students should learn how knowledge is obtained through actively open-minded thinking. Such learning will also teach students to recognize false claims to systematic knowledge.

     

    Discussion Points:

    • Critical thinking and Chat AI 
    • Teaching knowledge vs. critical thinking
    • Section One: Introduction- critical thinking is a stated goal of many teaching institutions
    • Section Two: The Current Rationale/What is thinking? 
    • Reading about thinking is quite difficult!
    • Baron’s “Myside Bias” is today’s confirmation or selection bias
    • Reflective learning- does it help with learning?
    • Section Three: Abuses - misapplying thinking in schools and business
    • Breaking down learning into sub-sections
    • Section Four: The growth of knowledge - beginning in Medieval times
    • Section Five: The basis of expertise - what is an ‘expert’? Every field has its own self-critiques
    • Drew’s brain is hurting just getting through this discussion
    • Section Six: What the educated person should know
    • Studying accidents in safety science - student assignments
    • Takeaways:
    • Good thinking means being able to make good decisions re: experts
    • Precision is required around what is necessary for learning
    • Well-informed self-criticism is necessary 
    • Answering our episode question: Can we teach critical thinking? It was never answered in this paper, but it gave us a lot to think about

     

    Quotes:

    “It’s a real stereotype that old high schools were all about rote learning. I don’t think that was ever the case. The best teachers have always tried to inspire their students to do more than just learn the material.” - Drew

    “Part of the point he’s making is, is that not everyone who holds themself out to be an expert IS an expert…that’s when we have to have good thinking tools .. who IS an expert and how do we know who to trust?” - Drew

    “Baron also says that even good thinking processes won’t necessarily help much when specific knowledge is lacking…” - David

“The smarter students are, the better they are at using knowledge about cognitive biases to criticize other people’s beliefs, rather than to help themselves think more critically.” - Drew

“Different fields advance by different sorts of criticism… to understand expertise in a field you need to understand how that field does its internal critique.” - Drew

     

    Resources:

    Link to the paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 105 How can organisations learn faster?


You’ll hear a little about Schein’s early career at Harvard and MIT, including his Ph.D. work – a paper on the experience of POWs during wartime, contrasted against the indoctrination of individuals joining an organization for employment. We also discuss some of Schein’s 30-year-old concepts that are now common practice and theory in organizations, such as “psychological safety”.

     

    Discussion Points:

    • A brief overview of Schein’s career, at Harvard and MIT’s School of Management and his fascinating Ph.D. on POWs during the Korean War
    • A bit about the book, Humble Inquiry
    • Digging into the paper
    • Three types of learning:
    • Knowledge acquisition and insight learning
    • Habits and skills
    • Emotional conditioning and learned anxiety
    • Practical examples and the metaphor of Pavlov’s dog
    • Countering Anxiety I with Anxiety II
    • Three processes of ‘unfreezing’ an organization or individual to change:
    • Disconfirmation
    • Creation of guilt or anxiety
    • Psychological safety
    • Mistakes in organizations and how they respond
    • There are so many useful nuggets in this paper
    • Schein’s solutions: Steering committees/change teams/groups to lead the organizations and manage each other’s anxiety
    • Takeaways:
    • How an organization deals with mistakes will determine how change happens
    • Assessing levels of fear and anxiety
    • Know what stands in your way if you want progress
    • Answering our episode question: How can organizations learn faster? 1) Don't make people afraid to enter the green room. 2) Or make them more afraid to stand on the black platform.

     

    Quotes:

    “...a lot of people credit [Schein] with being the granddaddy of organizational culture.” - Drew

    “[Schein] says .. in order to learn skills, you've got to be willing to be temporarily incompetent, which is great if you're learning soccer and not so good if you're learning to run a nuclear power plant.” - Drew

    “Schein says quite clearly that punishment is very effective in eliminating certain kinds of behavior, but it's also very effective in inducing anxiety when in the presence of the person or the environment that taught you that lesson.” - Drew

    “We've said before that we think sometimes in safety, we're about three or four decades behind some of the other fields, and this might be another example of that.” - David

    “Though curiosity and innovation are values that are praised in our society, within organizations and particularly large organizations, they're not actually rewarded.” - Drew

     

    Resources:

    Link to the paper

    Humble Inquiry by Edgar Schein

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 104 How can we get better at using measurement?


    You’ll hear some dismaying statistics around the validity of research papers in general, some comments regarding the peer review process, and then we’ll dissect each of six questions that should be asked BEFORE you design your research.

     

    The paper’s abstract reads:

    In this article, we define questionable measurement practices (QMPs) as decisions researchers make that raise doubts about the validity of the measures, and ultimately the validity of study conclusions. Doubts arise for a host of reasons, including a lack of transparency, ignorance, negligence, or misrepresentation of the evidence. We describe the scope of the problem and focus on how transparency is a part of the solution. A lack of measurement transparency makes it impossible to evaluate potential threats to internal, external, statistical-conclusion, and construct validity. We demonstrate that psychology is plagued by a measurement schmeasurement attitude: QMPs are common, hide a stunning source of researcher degrees of freedom, and pose a serious threat to cumulative psychological science, but are largely ignored. We address these challenges by providing a set of questions that researchers and consumers of scientific research can consider to identify and avoid QMPs. Transparent answers to these measurement questions promote rigorous research, allow for thorough evaluations of a study’s inferences, and are necessary for meaningful replication studies.

     

    Discussion Points:

    • The appeal of the foundational question, “are we measuring what we think we’re measuring?”
    • Citations of studies - 40-93% of studies lack evidence that the measurement is valid
    • Psychological research and its lack of defining what measures are used, and the validity of their measurement, etc.
    • The peer review process - it helps, but can’t stop bad research being published
    • Why care about this issue? Lack of validity- the research answer may be the opposite
    • Designing research - like choosing different paths through a garden
    • The six main questions to avoid questionable measurement practices (QMPs)
    • What is your construct? 
    • Why/how did you select your measure?
    • What measure did you use to operationalize the construct?
    • How did you quantify your measure?
    • Did you modify the scale? How and why?
    • Did you create a measure on the fly? 
    • Takeaways:
    • Expand your methods section in research papers
    • Ask these questions before you design your research
    • As research consumers, we can’t take results at face value
    • Answering our episode question: How can we get better? Transparency is the starting point.

     

    Resources:

    Link to the paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 101 When should incidents cause us to question risk assessments?


    The paper’s abstract reads:

    This paper reflects on the credibility of nuclear risk assessment in the wake of the 2011 Fukushima meltdown. In democratic states, policymaking around nuclear energy has long been premised on an understanding that experts can objectively and accurately calculate the probability of catastrophic accidents. Yet the Fukushima disaster lends credence to the substantial body of social science research that suggests such calculations are fundamentally unworkable. Nevertheless, the credibility of these assessments appears to have survived the disaster, just as it has resisted the evidence of previous nuclear accidents. This paper looks at why. It argues that public narratives of the Fukushima disaster invariably frame it in ways that allow risk-assessment experts to “disown” it. It concludes that although these narratives are both rhetorically compelling and highly consequential to the governance of nuclear power, they are not entirely credible.

     

    Discussion Points:

    • Following up on a topic in episode 100 - nuclear safety and risk assessment
    • The narrative around planes, trains, cars and nuclear - risks vs. safety
    • Planning for disaster when you’ve promised there’s never going to be a nuclear disaster
    • The 1975 WASH-1400 Studies
    • Japanese disasters in the last 100 years
    • Four tenets of Downer’s paper:
      • The risk assessments themselves did not fail 
      • Relevance Defense: The failure of one assessment is not relevant to the other assessments
      • Compliance Defense: The assessments were sound, but people did not behave the way they were supposed to/did not obey the rules
      • Redemption Defense: The assessments were flawed, but we fixed them
    • Theories such as: Fukushima did happen - but not an actual ‘accident/meltdown’ - it basically withstood a tsunami when the country was flattened
    • Residents of Fukushima - they were told the plant was ‘safe’
    • The relevance defense, Chernobyl, and Three Mile Island
    • Boeing disasters, their risk assessments, and blame
    • At the time of Fukushima, Japanese regulation and engineering was regarded as superior
    • This was not a Japanese reactor! It’s a U.S. design
    • The compliance defense, human error
    • The redemption defense, regulatory bodies taking all Fukushima elements into account
    • Downer quotes Peanuts comics in the paper - lessons - Lucy can’t be trusted!
    • This paper is not about what’s wrong with risk assessments- it’s about how we defend what we do
    • Takeaways:
    • Uncertainty is always present in risk assessments
    • You can never identify all failure modes
    • Three things always missing: anticipating mistakes, anticipating how complex tech is always changing, anticipating all of the little plastic connectors that can break
    • Assumptions - be wary, check all the what-if scenarios
    • Just because a regulator declares something safe, doesn’t mean it is
    • Answering our episode question: You must question risk assessments CONSTANTLY

     

    Quotes:

    “It’s a little bit surprising we don’t scrutinize the ‘control’ every time it fails.” - Drew

    “In the case of nuclear power, we’re in this awkward situation where, in order to prepare emergency plans, we have to contradict ourselves.” - Drew

    “If systems have got billions of potential ’billion to one’ accidents then it’s only expected that we’re going to see accidents from time to time.” - David

    “As the world gets more and more complex, then our parameters for these assessments need to become equally as complex.” - David

    “The mistakes that people make in these [risk assessments] are really quite consistent.” - Drew

     

    Resources:

    Disowning Fukushima Paper by John Downer

    WASH-1400 Studies

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Ep. 100 Can major accidents be prevented?


    The book explains Perrow’s theory that catastrophic accidents are inevitable in tightly coupled and complex systems. His theory predicts that failures will occur in multiple and unforeseen ways that are virtually impossible to predict. 

    Charles B. Perrow (1925 – 2019) was an emeritus professor of sociology at Yale University and visiting professor at Stanford University. He authored several books and many articles on organizations and their impact on society. One of his most cited works is Complex Organizations: A Critical Essay, first published in 1972.

     

    Discussion Points:

    • David and Drew reminisce about the podcast and achieving 100 episodes
    • Outsiders from sociology, management, and engineering entered the field in the 70s and 80s
    • Perrow was not a safety scientist, as he positioned himself against the academic establishment
    • Perrow’s strong bias against nuclear power weakens his writing
    • The 1979 near-disaster at Three Mile Island - Perrow was asked to write a report, which became the book, “Normal Accidents…”
    • The main tenets of Perrow’s core arguments:
    • Start with a ‘complex high-risk technology’ - aircraft, nuclear, etc
    • Two or more failures start the accident
    • “Interactive Complexity”
    • Boeing 787 failures - failed system + unexpected operator response led to disaster
    • There will always be separate individual failures, but can we predict or prevent the ‘perfect storm’ of multiple failures at once?
    • Better technology is not the answer
    • Perrow predicted complex high-risk technology to be a major part of future accidents
    • Perrow believed nuclear power/nuclear weapons should be abandoned - risks outweigh benefits
    • Reasons people may see his theories as wrong:
    • If you believe the risk assessments of nuclear are correct, then my theories are wrong
    • If they are contrary to public opinion and values
    • If safety requires more safe and error-free organizations
    • If there is a safer way to run the systems outside all of the above
    • The modern takeaway is a tradeoff between adding more controls, and increased complexity
    • The hierarchy of designers vs operators
    • We don’t think nearly enough about the role of power- who decides vs. who actually takes the risks?
    • There should be incentives to reduce complexity of systems and the uncertainty it creates
    • To answer this show’s question - not entirely, and we are constantly asking why 

     

    Quotes:

    “Perrow definitely wouldn’t consider himself a safety scientist, because he deliberately positioned himself against the academic establishment in safety.” - Drew

    “For an author whom I agree with an awful lot about, I absolutely HATE the way all of his writing is colored by…a bias against nuclear power.” - Drew

“[Perrow] has got a real skepticism of technological power.” - Drew

    "Small failures abound in big systems.” - David

    “So technology is both potentially a risk control, and a hazard itself, in [Perrow’s] simple language.” - David

     

    Resources:

The Book – Normal Accidents: Living with High-Risk Technologies

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

Ep. 99 When is dropping tools the right thing to do for safety?


    The paper’s abstract reads: 

    The failure of 27 wildland firefighters to follow orders to drop their heavy tools so they could move faster and outrun an exploding fire led to their death within sight of safe areas. Possible explanations for this puzzling behavior are developed using guidelines proposed by James D. Thompson, the first editor of the Administrative Science Quarterly. These explanations are then used to show that scholars of organizations are in analogous threatened positions, and they too seem to be keeping their heavy tools and falling behind. ASQ's 40th anniversary provides a pretext to reexamine this potentially dysfunctional tendency and to modify it by reaffirming an updated version of Thompson's original guidelines.

     

    The Mann Gulch fire was a wildfire in Montana where 15 smokejumpers approached the fire to begin fighting it, and unexpected high winds caused the fire to suddenly expand. This "blow-up" of the fire covered 3,000 acres (1,200 ha) in ten minutes, claiming the lives of 13 firefighters, including 12 of the smokejumpers. Only three of the smokejumpers survived. 

    The South Canyon Fire was a 1994 wildfire that took the lives of 14 wildland firefighters on Storm King Mountain, near Glenwood Springs, Colorado, on July 6, 1994. It is often also referred to as the "Storm King" fire.

     

    Discussion Points:

    • Some details of the Mann Gulch fire deaths due to refusal to drop their tools 
    • Weick lays out ten reasons why these firefighters may have refused to drop their tools:
    • Couldn't hear the order
    • Lack of explanation for order - unusual, counterintuitive
    • You don’t trust the leader
    • Control- if you lose your tools, lose capability, not a firefighter
    • Skill at dropping tools - ie survivor who leaned a shovel against a tree instead of dropping
    • Skill with replacement activity - it’s an unfamiliar situation
    • Failure - to drop your tools, as a firefighter,  is to fail
    • Social dynamics - why would I do it if others are not
    • Consequences - if people believe it won’t make a difference, they won’t drop. These men should have been shown the difference it would make
    • Identity- being a firefighter, without tools they are throwing away their identity.  This was also shortly after WWII, where you are a coward if you throw away your weapons, and would be alienated from your group
    • Thompson set out four principles necessary for research in his publication: 
    • Administrative science should focus on relationships - you can’t understand without structures and people and variables. 
    • Abstract concepts - not on single concrete ideas, but theories that apply to the field
    • Development of operational definitions that bridge concepts and raw experience - not vague fluffy things with confirmation bias - sadly, we still don’t have all the definitions today
    • Value of the problem - what do they mean? What is the service researchers are trying to provide? 
    • How Weick applies these principles to the ten reasons, then looks at what it means for researchers
    • Weick’s list of ten- they are multiple, interdependent reasons – they can all be true at the same time
    • Thompson’s list of four, relating them to Weick’s ten, in today’s organizations
    • What are the heavy tools that we should get rid of? Weick links heaviest tools with identity
    • Drew’s thought - getting rid of risk assessments would let us move faster, but people won’t drop them, relating to the ten reasons above
    • Takeaways: 
    • 1) Emotional vs. cognitive – cognitive (did I hear that, do I know what to do?) vs. emotional (trust, failure, etc.) – in individuals and teams
    • 2) Understanding group dynamics – it often takes a first person to act before others follow: the pilot diversion story, the Piper Alpha oil rig jumpers, the first firefighter who drops tools.
    • Next week is episode 100 - we’ve got a plan!

     

    Quotes:

    “Our attachment to our tools is not a simple, rational thing.” - Drew

    “It’s really hard to recognize that you’re well past that point where success is not an option at all.” - Drew

    “These firefighters were several years since they’d been in a really raging, high-risk fire situation…” - David

    “I encourage anyone to read Weick’s papers, they’re always well-written.” - David

    “Well, I think according to Weick, the moment you begin to think that dropping your tools is impossible and unthinkable, that might be the moment you actually have to start wondering why you’re not dropping your tools.” - Drew

    “The heavier the tool is, the harder it is to drop.” - Drew 



    Resources:

    Karl Weick - Drop Your Tools Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

    Episode 97: Should we link safety performance to bonus pay?


This was very in-depth research within a single organization, and the survey questions it used were well-structured. With 48 interviews to pull from, it definitely generated enough solid data to inform the paper’s results and make it a valuable study.

We’ll be discussing the pros and cons of linking safety performance to monetary bonuses, which can often lead to misreporting, recategorizing, or other “perverse” behaviors regarding safety reporting and metrics, in order to capture that year-end dollar amount, especially among mid-level and senior management.

     

    Discussion Points:

    • Do these bonuses work as intended?
    • Oftentimes profit sharing within a company only targets senior management teams, at the expense of the front-line employees
    • If safety and other measures are tied monetarily to bonuses, organizations need to spend more than a few minutes determining what is being measured
    • Bonuses – do they really support safety? They don’t prevent accidents
    • “What gets measured gets managed” OR “What gets measured gets manipulated”
    • Supervisors and front-line survey respondents did not understand how metrics were used for bonuses
    • 87% replied that the safety measures had limited or negative effect
    • Nearly half said the bonus structure tied to safety showed that the organization felt safety was a priority
    • Nothing negative was recorded by the respondents in senior management- did they believe this is a useful tool?
    • Most organizations have only 5% or less performance tied to safety
    • David keeps giving examples in the hopes that Drew will agree that at least one of them is a good idea
    • Drew has “too much faith in humanity” around reporting and measuring safety in these organizations
    • Try this type of survey in your own organization and see what you find

     

    Quotes:

    “I’m really mixed, because I sort of agree on principle, but I disagree on any practical form.” - Drew

    “I think there’s a challenge between the ideals here and the practicalities.” - David

    “I think sometimes we can really put pretty high stakes on pretty poorly thought out things, we oversimplify what we’re going to measure and reward.” - Drew

    “If you look at the general literature on performance bonuses, you see that they cause trouble across the board…they don’t achieve their purposes…they cause senior executives to do behaviors that are quite perverse.” - Drew

“I don’t like the way they’ve written up the analysis. I think that there’s some lost opportunity due to a misguided desire to be too statistically methodical about something that doesn’t lend itself to the statistical analysis.” - Drew

    “If you are rewarding anything, then my view is that you’ve got to have safety alongside that if you want to signal an importance there.” - David

     

    Resources:

    Link to the Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

Feedback@safetyofwork.com

Ep. 94 What makes a quality leadership engagement for safety?


    The authors’ goal was to produce a scoring protocol for safety-focused leadership engagements that reflects the consensus of a panel of industry experts. Therefore, the authors adopted a multiphased focus group research protocol to address three fundamental questions: 

     

    1. What are the characteristics of a high-quality leadership engagement? 

    2. What is the relative importance of these characteristics? 

    3. What is the reliability of the scorecard to assess the quality of leadership engagement?

     

    Just like the last episode’s paper, the research has merit, even though it was published in a trade journal and not an academic one.  The researchers interviewed 11 safety experts and identified 37 safety protocols to rank. This is a good starting point, but it would be better to also find out what these activities look like when they’re “done well,” and what success looks like when the safety measures, protocols, or attributes “work well.” 

     

    The Paper’s Main Research Takeaways:

    • Safety-focused leadership engagements are important because, if performed well, they can convey company priorities, demonstrate care and reinforce positive safety culture.
    • A team of 11 safety experts representing the four construction industry sectors identified and prioritized the attributes of an effective leadership engagement.
    • A scorecard was created to assess the quality of a leadership engagement, and the scorecard was shown to be reliable in independent validation.

     

    Discussion Points:

    • Dr. Drew and Dr. David’s initial thoughts on the paper
    • Thoughts on quality vs. quantity
    • How do the researchers define “leadership safety engagements”?
    • The three key phases:
      • Phase 1: Identification of key attributes of excellent engagements
      • Phase 2: Determining the relative importance of potential predictors
      • Phase 3: Reliability check
    • The 15 key indicators–some are just common sense, some are relatively creepy
    • The end product, the checklist, is actually quite useful
    • The next phase should be evaluating results – do employees actually feel engaged with this approach?
    • Our key takeaways:
    • It is possible to design a process that may not actually be valid
    • The 37 items identified – a good start, but what about asking the people involved what it looks like when “done well”?
    • No matter what, purposeful safety engagement is very important
    • Ask what the actual leaders and employees think!
    • We look forward to the results in the next phase of this research
    • Send us your suggestions for future episodes, we are actively looking!

     

    Quotes:

    “If the measure itself drives a change to the practice, then I think that is helpful as well.” - Dr. David

    “I think just the exercise of trying to find those quality metrics gets us to think harder about what are we really trying to achieve by this activity.” - Dr. Drew

    “So I love the fact that they’ve said okay, we’re talking specifically about people who aren’t normally on-site, who are coming on-site, and the purpose is specifically a conversation about safety engagement. So it’s not to do an audit or some other activity.” - Dr. Drew

    “The goal of this research was to produce a scoring protocol for safety-focused leadership engagements, that reflects the common consensus of a panel of industry experts.” - Dr. David

    “We’ve been moving towards genuine physical disconnections between people doing work and the people trying to lead, and so it makes sense that over the next little while, companies are going to make very deliberate conscious efforts to reconnect, and to re-engage.” - Dr. Drew

    “I suspect people are going to be begging for tools like this in the next couple of years.” - Dr. Drew

    “At least the researchers have put a tentative idea out there now, which can be directly tested in the next phase, hopefully, of their research, or someone else’s research.” - Dr. Drew

     

    Resources:

    Link to the Research Paper

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork.com

Ep. 92 How do different career paths affect the roles and training needs of safety practitioners?


    The paper results center on a survey sent to a multitude of French industries, and although the sampling is from only one country, 15 years ago, the findings are very illustrative of common issues among safety professionals within their organizations.  David used this paper as a reference for his PhD thesis, and we are going to dig into each section to discuss.

     

    The paper’s abstract introduction reads: 

    What are the training needs of company preventionists? An apparently straightforward question, but one that will very quickly run into a number of difficulties. The first involves the extreme variability of situations and functions concealed behind the term preventionist and which stretch way beyond the term’s polysemous nature. Moreover, analysis of the literature reveals that very few research papers have endeavoured to analyse the activities associated with prevention practices, especially those of preventionists. This is a fact, even though prevention-related issues and preventionist responsibilities are becoming increasingly important.

     

    Discussion Points:

    • The paper, reported from French industries, focuses heavily on safety in areas like occupational therapies, ergonomics, pesticides, hygiene, etc.
    • The downside of any “survey” result is that we can only capture what the respondents “say” or self-report about their experiences
    • Most of the survey participants were not originally trained as safety professionals
    • There are three subgroups within the survey:
      1. High school grads with little safety training
      2. Post high school with two-year tech training program paths to safety work
      3. University-educated levels including engineers and managers
    • There were six main positions isolated within this study:
      1. Prevention Specialists - hold a degree in safety, high status in safety management
      2. Field Preventionists - lesser status, operations level, closer to front lines
      3. Prevention Managers - executive status, senior management, engineers/project managers
      4. Preventionist Proxies - may be establishing safety programs, in opposition to the organization, chaotic positions
      5. Basic Coordinators - mainly focused on training others
      6. Unstructured - no established safety procedures, may have been thrown into this role
    • So many of the respondents felt isolated and frustrated within the organizations– which continues to be true in the safety profession
    • There is evidence in this paper and others that a large portion of safety professionals “hate their bosses” and feel ‘great distress’ in their positions
    • Only 2.5% felt comfortable negotiating safety with management
    • Takeaways:
      1. Safety professionals come from widely diverse backgrounds
      2. Training and education are imperative
      3. These are complex jobs that often are not on site
      4. Role clarity is very low, leading to frustration and job dissatisfaction
      5. Send us your suggestions for future episodes, we are actively looking!

     

    Quotes:

    “I think this study was quite a coordinated effort across the French industry that involved a lot of different professional associations.” - David

    “It might be interesting for our readers/listeners to sort of think about which of these six groups do you fit into and how well do you reckon that is a description of what you do.” - Drew

    “I thought it was worth highlighting just how much these different [job] categories are determined by the organization, not by the background or skill of the safety practitioner.” - Drew

    “[I read a paper that stated:] There is a significant proportion of safety professionals that hate their bosses …and it was one of the top five professions that hate their bosses and managers.” - David

    “You don’t have to go too far in the safety profession to find frustrated professionals.” - David

    “There’s a lot to think on and reflect on…it’s one sample in one country 15 years ago, but these are useful reflections as we get to the practical takeaways.” - David 

    “The activity that I like safety professionals to do is to think about the really important parts of their role that add the most value to the safety of work, and then go and ask questions of their stakeholders of what they think are the most valuable parts of the role, …and work toward alignment.” - David

    “Getting that role clarity makes you feel that you’re doing better in your job.” - Drew

     

    Resources:

    Link to the Safety Science Article

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork.com

Ep. 89 When is the process more important than the outcome?


    Wastell, who has a BSc and Ph.D. from Durham University, is Emeritus Professor in Operations Management and Information Systems at Nottingham University in the UK. Professor Wastell began his academic career as a cognitive neuroscientist at Durham, studying the relationships between brain activity and psychological processes.  His areas of expertise include neuroscience and social policy: critical perspectives; psychophysiological design of complex human-machine systems; Information systems and public sector reform; design and innovation in the public services; management as design; and human factors design of safe systems in child protection.

    Join us as we delve into the statement (summarized so eloquently in Wastell’s well-crafted abstract): “Methodology, whilst masquerading as the epitome of rationality, may thus operate as an irrational ritual, the enactment of which provides designers with a feeling of security and efficiency at the expense of real engagement with the task at hand.”

     

    Discussion Points:

    • How and when Dr. Rae became aware of this paper
    • Why this paper has many structural similarities to our paper, ”Safety work versus the safety of work” published in 2019
    • Organizations’ reliance on top-heavy processes and rituals such as Gantt charts, milestones, gateways, checklists, etc
    • Thoughts and reaction to Section I: A Cautionary Tale
    • Section II: Methodology: The Lionization of Technique
    • Section III: Methodology as a Social Defense
    • The three elements of social defense against anxiety:
       • Basic assumption (fight or flight)
       • Covert coalition (internal organization protection/family/mafia)
       • Organizational ritual (the focus of this paper)
    • Section IV: The Psychodynamics of Learning: Teddy Bears and Transitional Objects
    • Paul Feyerabend and his “Against Method” book
    • Our key takeaways from this paper and our discussion

     

    Quotes:

    “Methodology may not actually drive outcomes.” - David Provan

    “A methodology can probably never give us, repeatably, exactly what we’re after.” - David Provan

    “We have this proliferation of solutions, but the mere fact that we have so many solutions to that problem suggests that none of the individual solutions actually solve it.” - Drew Rae

    “Wastell calls out this large lack of empirical evidence around the structured methods that organizations use, and concludes that they seem to have more qualities of ‘religious convictions’ than scientific truths.” - David Provan

    “I love the fact that he calls out the ‘journey’ metaphor, which we use all the time in safety.” - Drew Rae

    “You can have transitional objects that don’t serve any of the purposes that they are leading you to.” - Drew Rae

    “Turn up to seminars, and just read papers, that are totally outside of your own field.” - Drew Rae

     

    Resources:

    Wastell’s Paper: The Fetish of Technique

    Paul Feyerabend (1924-1994)

    Book: Against Method by Paul Feyerabend

    Our Paper Safety Work vs. The Safety of Work

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork.com

    Ep.87 What exactly is Systems Thinking?

    Ep.87 What exactly is Systems Thinking?

    We will review each section of Leveson’s paper and discuss how she sets each section up by stating a general assumption and then proceeds to break that assumption down. We will discuss her analysis of:

    1. Safety vs. Reliability
    2. Retrospective vs. Prospective Analysis
    3. Three Levels of Accident Causes:
       • Proximal event chain
       • Conditions that allowed the event
       • Systemic factors that contributed to both the conditions and the event

     

    Discussion Points:

    • Unlike some others, Leveson makes her work openly available on her website
    • Leveson’s books, SafeWare: System Safety and Computers (1995) and Engineering a Safer World: Systems Thinking Applied to Safety (2011)
    • Drew describes Leveson as a “prickly character”; he once worked for her and was eventually fired by her
    • Leveson came to engineering with a psychology background
    • Many safety professionals express concern about how major accidents keep happening and ask why we can’t learn enough to prevent them
    • The first section of Leveson’s paper: Safety vs. Reliability - sometimes these concepts are at odds, sometimes they are the same thing
    • How cybernetics used to be ‘the thing’ but the theory of simple feedback loops fell apart
    • Summing up this section: safety is not the sum of component reliability
    • The second section of the paper: Retrospective vs. Prospective Accident Analysis
    • Most safety experts rely on retrospective accident analysis and agree that it is still the best way to learn
    • Example: where technology changes slowly, e.g. airplanes, it is acceptable to run a two-year investigation into accident causes
    • Example: where technology changes quickly, e.g. the 1999 Mars Climate Orbiter crash vs. the Polar Lander crash, there is no way to use retrospective analysis to change the next iteration in time
    • The third section of the paper: Three Levels of Analysis
    • It’s easiest to find the causes in the proximal event chain and the conditions that allowed the event; identifying the systemic factors is more difficult because the causal link is indirect and harder to draw
    • The “5 Whys” method for analyzing an event or failure
    • Practical takeaways from Leveson’s paper:
    • STAMP (System-Theoretic Accident Model and Processes) using the accident causality model based on systems theory
    • Investigations should focus on fixing the part of the system that changes slowest
    • The exact front line events of the accident often don’t matter that much in improving safety
    • Closing question: “What exactly is systems thinking?” It is the adoption of the Rasmussian causation model: accidents arise from a change in risk over time, and systems thinking means analyzing what causes that change in risk

     

    Quotes:

    “Leveson says, ‘If we can get it right some of the time, why can’t we get it right all of the time?’” - Dr. David Provan

    “Leveson says, ‘the more complex your system gets, that sort of local autonomy becomes dangerous because the accidents don’t happen at that local level.’” - Dr. Drew Rae

    “In linear systems, if you try to model things as chains of events, you just end up in circles.” - Dr. Drew Rae

    “Never buy the first model of a new series [of new cars], wait for the subsequent models where the engineers had a chance to iron out all the bugs of that first model!” - Dr. David Provan

    “Leveson says the reason systemic factors don’t show up in accident reports is just because it’s so hard to draw a causal link.” - Dr. Drew Rae

    “A lot of what Leveson is doing is drawing on a deep well of cybernetics theory.” - Dr. Drew Rae

     

    Resources:

    Applying Systems Thinking Paper by Leveson

    Nancy Leveson– Full List of Publications

    Nancy Leveson of MIT

    The Safety of Work Podcast

    The Safety of Work on LinkedIn

    Feedback@safetyofwork.com

    Ep.85 Why does safety get harder as systems get safer?

    Ep.85 Why does safety get harder as systems get safer?

    Find out our thoughts on this paper and our key takeaways for the ever-changing world of workplace safety. 

     

    Topics:

    • Introduction to the paper & the Author
    • “Adding more rules is not going to make your system safer.”
    • The principles of safety in the paper
    • Types of safety systems as broken down by the paper
    • Problems in these “Ultrasafe systems”
    • The summary of developments in thinking about human error
    • The psychology of making mistakes
    • The Efficiency trade-off element in safety
    • Suggestions in Amalberti’s conclusion
    • Takeaway messages
    • Answering the question: Why does safety get harder as systems get safer?

     

    Quotes:

    “Systems are good - but they are bad because humans make mistakes” - Dr. Drew Rae

    “He doesn’t believe that zero is the optimal number of human errors” - Dr. Drew Rae

    “You can’t look at mistakes in isolation of the context”  - Dr. Drew Rae

    “The context and the system drive the behavior.” - Dr. David Provan

    “It’s part of the human condition to accept mistakes. It is actually an important part of the way we learn and develop our understanding of things.” - Dr. David Provan

     

     

    Resources:

    Griffith University Safety Science Innovation Lab

    The Safety of Work Podcast

    The Safety of Work LinkedIn

    Feedback@safetyofwork.com

    The Paradoxes of Almost Totally Safe Transportation Systems by R. Amalberti

    Risk Management in a Dynamic society: a Modeling problem - Jens Rasmussen

    The ETTO Principle: Efficiency-Thoroughness Trade-Off: Why Things That Go Right Sometimes Go Wrong - Book by Erik Hollnagel

    Ep.81 How does simulation training develop Safety II capabilities?

    Navigating safety: Necessary Compromises and Trade-Offs - Theory and Practice - Book by R. Amalberti

    Ep.83 Does the language used in investigations influence the recommendations?

    Ep.83 Does the language used in investigations influence the recommendations?

    This paper reveals some really interesting findings, and it would be valuable for companies to take notice and possibly change the way they implement incident report recommendations.

     

    Topics:

    • Introduction to the paper
    • The general process of an investigation
    • The Hypothesis 
    • The differences between the reports and their language
    • The results of the three reports
    • Differences in the recommendations on each of the reports
    • The different ways of interpreting the results
    • Practical Takeaways
    • Rather than sharing distilled lessons learned from incidents, share the report itself and let others learn from it for themselves
    • Summary and answer to the question

     

     

    Quotes:

    “All of the information in every report is factual, all of the information is about the same real incident that happened.” - Drew Rae

    “These are plausibly three different reports that are written for that same incident but they’re in very different styles, they highlight different facts and they emphasize different things.” - Drew Rae

    “Incident reports could be doing so much more for us in terms of broader safety in the organization.” - David Provan

    “From the same basic facts, what you select to highlight in the report and what story you choose to tell seems to be leading us toward a particular recommendation.” - Drew Rae

     

    Resources:

    Griffith University Safety Science Innovation Lab

    The Safety of Work Podcast

    Feedback@safetyofwork.com

    Accident Report Interpretation Paper

    Episode 18 - Do Powerpoint Slides count as a safety hazard?
