Category Archives: safety culture

“Safe fieldwork strategies for at-risk individuals, their supervisors and institutions” and “Trauma and Design”

CHAS Journal Club Nov 10, 2021

On November 10, the CHAS Journal Club discussed two articles related to social safety considerations in research environments. The discussion was led by Anthony Appleton of Colorado State University. Anthony’s slides are below, and the comments from the table read of the two articles can be found after that.

Table Read for The Art & State of Safety Journal Club

Excerpts from “Safe fieldwork strategies for at-risk individuals, their supervisors and institutions” and “Trauma and Design”

Full articles can be found here:

Safe fieldwork strategies: https://www.nature.com/articles/s41559-020-01328-5.pdf

Trauma and Design: https://medium.com/surviving-ideo/trauma-and-design-62838cc14e94

Safe fieldwork strategies for at-risk individuals, their supervisors and institutions

Everyone deserves to conduct fieldwork[a] as safely as possible; yet not all fieldworkers face the same risks going into the field. At-risk individuals include minority identities of the following: race/ethnicity, sexual orientation, disability, gender identity and/or religion. When individuals from these backgrounds enter unfamiliar communities in the course of fieldwork[b][c][d], they may be placed in an uncomfortable and potentially unsafe ‘othered’ position, and prejudice may manifest against them. Both immediately and over the long term, prejudice-driven conflict can threaten a researcher’s physical health and safety[e], up to and including their life. Additionally, such situations impact mental health, productivity and professional development.

The risk to a diverse scientific community

Given the value of a diverse scientific community[f], the increased risk to certain populations in the field — and the actions needed to protect such individuals — must be addressed by the entire scientific community if we are to build and retain diversity in disciplines that require fieldwork. While many field-based disciplines are aware of the lack of diversity in their cohorts, there may be less awareness of the fact that the career advancement of minoritized researchers can be stunted or permanently derailed[g] after a negative experience during fieldwork.

Defining and assessing risk

Fieldwork in certain geographic areas and/or working alone has led many researchers to feel uncomfortable, frightened and/or threatened by local community members and/or their scientific colleagues. Local community members may use individuals’ identities as a biased marker of danger to the community, putting them at risk from law enforcement and vigilante behaviours. Researchers’ feelings of discomfort in the field have been reaffirmed by the murders of Black, Indigenous and people of colour including Emmett Till, Tamir Rice, Ahmaud Arbery and Breonna Taylor; however, fieldwork also presents increased risk for individuals in other demographics. For example, researchers who wear clothing denoting a minority religion or those whose gender identity, disability and/or sexual orientation are made visible can be at increased risk when conducting fieldwork. Several studies have documented the high incidence of harassment or misconduct that occurs in the field. Based on lived experience, many at-risk individuals already consider how they will handle harassment or misconduct before they ever get into the field, but this is a burden that must be shared[h][i] by their labs, departments and institutions[j] as well. Labs, departments and institutions must address such risks by informing future fieldworkers of potential risks and discussing these with them, as well as making available resources and protocols for filing complaints and accessing[k][l][m] training well before the risk presents itself.

Conversations aimed at discussing potential risks rarely occur between researchers and their supervisors, especially in situations where supervisors may not be aware of the risk posed[n] or understand the considerable impact[o] of these threats on the researcher, their productivity and their professional development. Quoted from Barker[p][q]: “…faculty members of majority groups (such as White faculty in predominantly White institutions (PWI)) may not have an understanding of the ‘educational and non-academic experiences’ of ethnic minority graduate students or lack ‘experience in working in diverse contexts’.” This extends to any supervisor who does not share identity(ies) with those whom they supervise, and would have had to receive specific training on this subject matter in order to be aware of these potential risks.

Dispatches from the field

The following are examples of situations that at-risk researchers have experienced in the field: police have been called on them; a gun has been pulled on them[r][s][t][u] (by law enforcement and/or local community members); hate symbols have been displayed at or near the field site; the field site is an area with a history of hate crimes against their identity (including ‘sundown towns’, in which all-white communities physically, or through threats of extreme violence, forced people of colour out of town by sundown); available housing has historically problematic connotations (for example, a former plantation where people were enslaved); service has been refused (for example, food or housing); slurs have been used or researchers verbally abused due to misunderstandings about a disability; undue monitoring or stalking by unknown and potentially aggressive individuals; sexual harassment and/or assault occurred. Such traumatic situations are a routine expectation in the lives of at-risk researchers. The chance of these scenarios arising is exacerbated in field settings where researchers are alone[v][w], in an unfamiliar area with little-to-no institutional or peer support, or are with research team members who may be uninformed, unaware or not trusted. In these situations, many at-risk researchers actively modify their behaviour in an attempt to avoid the kinds of situations described above. However, doing so is mentally draining, with clear downstream effects on their ability to conduct research.[x][y][z]

Mitigating risk[aa][ab][ac]

The isolating and severe burden of fieldwork risks to minoritized individuals means that supervisors[ad] bear a responsibility to educate themselves[ae] on the differential risks posed to their students and junior colleagues in the field. When learning of risks and the realized potential for negative experiences in the field, the supervisor should work with at-risk researchers to develop strategies and practices for mitigation in ongoing and future research environments.[af] Designing best practices for safety in the field for at-risk researchers will inform all team members and supervisors of ways to promote safe research, maximize productivity and engender a more inclusive culture in their community. This means asking[ag][ah][ai][aj][ak][al][am] who is at heightened risk, including but not limited to those expressing visible signs of their race/ethnicity, disability, sexual orientation, gender identity/expression (for example, femme-identifying, transgender, non-binary) and/or religion (for example, Jewish, Muslim and Sikh[an][ao]). Importantly, the condition of being ‘at-risk’ is fluid with respect to fieldwork and extends to any identity that is viewed as different[ap] from the local community in which the research is being conducted. In some cases, fieldwork presents a situation where a majority identity at their home institution can be the minority identity at the field site, whether nearby or international. Supervisors, colleagues and students must also interrogate where and when risk is likely to occur: an individual could be at-risk whenever someone perceives them as different in the location where they conduct research. Given the variety of places in which at-risk situations can occur, whether at home, in country or abroad, researchers and supervisors must work under the expectation that prejudice can arise in any situation.[aq]

Strategies for researchers, supervisors, and institutions to minimize risk

Here we provide a list of actions to minimize risk and danger[ar][as] while in the field, compiled from researchers, supervisors and institutional authorities from numerous affiliations. These strategies are used to augment basic safety best practices. Furthermore, the actions can be used in concert with each other and are flexible with regard to the field site and the risk level to the researcher. These strategies are not comprehensive; rather, they can be tailored to a researcher’s situation.

We acknowledge that it is an unfair burden that at-risk populations[at] must take additional precautions to protect themselves. We therefore encourage supervisors, departments and institutions to collectively work to minimize these harms by: (1) meeting with all trainees to discuss these guidelines, and maintaining the accessibility of these guidelines (Box 1) and additional resources (Table 1); (2) fostering a department-wide discussion on safety during fieldwork for all researchers; (3) urging supervisors to create and integrate contextualized safety guidelines for researchers in lab, departmental and institutional resources.

A hold harmless recommendation for all

Topics related to identity are inherently difficult to broach, and may involve serious legal components. For example, many supervisors have been trained to avoid references to a researcher’s identity and to ensure that all researchers they supervise are treated equally regardless of their identities.[au] Many institutions codify this practice in ways that conflict with the goals outlined in the previous sentence, as engaging in dialogue with at-risk individuals is viewed as a form of targeting or negative bias. In a perfect world, all individuals would be aware of these risks and take appropriate actions to mitigate them and support individuals at heightened risk. In reality, these topics will likely often arise just as an at-risk individual is preparing to engage in fieldwork, or even during the course of fieldwork. We therefore strongly encourage all relevant individuals and institutions to ‘hold harmless’ any good-faith effort to use this document as a framework for engaging in a dialogue about these core issues of safety and inclusion. Specifically, we recommend that it should never be considered a form of bias or discrimination for a supervisor to offer a discussion on these topics to any individual that they supervise[av][aw][ax][ay]. The researcher or supervisee receiving that offer should have the full discretion and agency to pursue it further, or not. Simply sharing this document[az] is one potential means to make such an offer in a supportive and non-coercive way, and aligns with the goals we have outlined towards making fieldwork safe, equitable and fruitful for all.

Trauma and Design

1. Validating your experience. It’s important to know that workplace trauma can be destabilizing, demoralizing, and dehumanizing. And when it happens in a design-centric organization where there are sometimes shallowly professed[ba] values of human-centeredness, empathy, and the myth of bringing your full, authentic self to work, it can leave you spinning in a dizzying state of cognitive dissonance and moral injury.

A common side effect of workplace abuse is invalidation, which is defined as “the rejection or dismissal of your thoughts, feelings, emotions, and behaviors as being valid and understandable.” Invalidation can cause significant damage or upset to your psychological health and well-being. What’s worse, the ripple effects of these layers of dismissal are traumatic, often happen in isolation, and may lead to passive or more overt forms of workplace and institutional betrayal. If this is (or has been) your experience, it’s important to know that (1) you are not alone and (2) your experience is valid and real.[bb]

2. Seeking safety. Workplace-induced emotional trauma is very real and, unfortunately, on the rise. The research is also clear: continuous exposure to trauma can hurt our bodies and lead to debilitating levels of burnout, anxiety, depression, traumatic stress, and a host of other health issues. Episodic and patterned experiences like micro- and macro-aggressions, bullying, gaslighting, betrayal, manipulation, and other forms of organizational abuse can have both immediate and lasting psychological and physiological effects. So, what can we do?[bc]

• To go to HR and management or not? There is a natural inclination to document and report workplace abuse and to then work within the HR structures that are in place where we work. But many profit- and productivity-driven workplaces are remarkably inept at putting employees (the primary human resource) first[bd][be][bf][bg]. The nauseating effects of this can lead to deeply entrenched incompetent or avoidant behaviors by the very people who we expect to listen to and support us (read: HR). Even with this said, there is value in documenting events as they occur so that you can remember the details and not forget the context later. You may also have a situation so egregious or blatantly illegal that documentation will be necessary.

• Do you need accommodations? Employees need to be cared for in ways that our leaders don’t always recognize or value. Workplace trauma, as well as current and past trauma, can be exacerbated, resulting in impairing symptoms or a need for legally protected disability accommodations. Sometimes seeking out accommodations as part of the process can hold your immediate supervisor accountable (as well as their respective leadership chain) to meet your needs. The Job Accommodation Network (JAN) is a source of free, expert, and confidential guidance on job accommodations and disability employment issues. JAN provides free one-on-one practical guidance and technical assistance on job accommodation solutions.

• Do not “manage up.” Many of the avenues that HR systems afford us can lead to empty promises and give us a sense of helplessness and hopelessness. As a result, the harm done can lead to a retraumatization of what you’ve already been enduring. Additionally, as a social worker, it would be disingenuous and unethical for me to suggest that you find ways to “manage up” or gray rock[bh] so that you can temporarily minimize the effects[bi][bj][bk]. Managing up is a popular narrative[bl][bm][bn][bo][bp] that, I believe, just perpetuates how we deal with cycles and patterns of abuse — be it in the workplace or elsewhere. And gray rocking, which can be quite effective to get through in the interim, is not a healthy, long-term solution to what you are enduring.

• Where should I turn? Let me be honest: many HR programs are ill-equipped, lack the knowledge, or are simply unwilling to hold perpetrators of workplace abuse accountable. If this is not the case where you work, congratulations! But if the aforementioned is familiar, it is crucial to practice self-compassion and self-trust and to seek reassurance and psychological and cultural safety with trusted friends or colleagues. Let them know that you may not want or need their advice, but rather their trust and confidence in listening to and witnessing your story.[bq][br] Can this friend or colleague help you assess the risks of staying? You may find it empowering to think this through with them and to also write about it. Writing into the wound[bs], as Roxane Gay has said, may also be a helpful, therapeutic exercise with a licensed professional or in community with others who witness, trust, and respect you. Please remember: your friends and colleagues are just that — friends and colleagues. Sometimes situations are more serious and complex and should be referred to someone who has the cultural and clinical training to help you address the layers of complexity. There may be times when your unresolved trauma, prolonged grief, or more serious and long-lasting symptoms of mental health concerns need to be processed with a licensed professional (more on working with a clinician below).

• Creating your freedom map. There are times when you have exhausted your options; you’ve played by the rules set forth and are caught in a never-ending wait and see. Your current reality might be that leaving is simply not an option. You might be the only paid worker in your family, or perhaps you need the insurance, or the job market might be too volatile. These are all valid reasons for choosing to not leave — or to not leave just yet. However, if leaving feels scary for other reasons (fear of failing, worry about what people will think, concern about damage to your professional reputation), consider this: are you willing and able to test the limits of what your body can endure? Sometimes leaving — a radical act in and of itself — is the best option for your health, well-being, and future work. If you’re at this stage, I strongly suggest devising a plan of action for leaving and mapping out your escape plan. Some of the questions to consider might be: When will I leave? What do I need in order to leave? What do I want to do next? How can I take care of myself now and in the future? Who can I rely on as part of my support system? Spelling this out and naming what you need in your freedom map will give you power.

3. Healing in community and finding and talking to a mental health professional. There are enormous benefits to healing with others and working with a licensed clinical mental health professional (e.g., clinician, therapist, psychotherapist, counselor). Therapy can provide a safe space to share and understand the interconnected dots of what you’re going through. Sometimes trauma in the workplace can trigger unresolved childhood traumas and other struggles that we, as a society, have been conditioned to either just deal with or suppress. Have grit! Be resilient! It’s not that bad! [bt][bu][bv][bw]These are white supremacy and productivity narratives that infuriate me. If it were that easy, you wouldn’t still be reading. What’s more, the power of community healing is found in the validation, empowerment, and organizing to challenge fear-based work culture — not to just learn to cope with dysfunction[bx][by]. If you are new to therapy or revisiting it after having a break from it, consider this part of the overall commitment to yourself for lifelong healing and recovery. There is a growing number of culturally responsive therapy options — many of which did not exist even a few years ago. Below are just a few resources for finding an inclusive, culturally responsive therapist.

5. Learn and understand the language of trauma and what it means to be trauma-informed — especially in the context of design. There is a literacy around trauma[bz][ca] that is missing in our organizations, in ourselves, and in our design work.[cb][cc][cd] Now more than ever, we need to be at least trauma-informed so that we can lead and work within trauma-responsive teams and organizations. Responding to this need is one of the reasons why I started Social Workers Who Design. My own practice and research are committed to being trauma-informed and becoming trauma-responsive in design.

[a]Fieldwork certainly broadens both the types of risks and their severity compared to lab work. There were four employee work-related deaths at UVM during the 25 years I worked there – 3 were field events and 1 was lab-related.

[b]It seems like this would still apply broadly to conferences, workshops, collaborations, and other work-related travel for STEM students

[c]Agreed!

[d]ACS has raised this concern for its meetings; I’m sure it’s not being done in a proactive fashion…

[e]certainly a consideration that is applicable to research well beyond fieldwork!

[f]I think that the 21st Century will need to find ways to move beyond colonial science which is part of the reason this article needed to be written.

[g]Or that those researchers will avoid fieldwork that could benefit from their participation

[h]By sharing this burden we can also develop a wider range of ways to minimize these risks.

[i]Even writing and talking about the burden can help make it seen and believed. Probably not quantitated though, which is what modern management wants to do to address complex issues.

[j]This is another challenge in the academic environment, where the research topic and methods are often driven by the investigator rather than their supervisor. Many times we in EHS hear of the field work being conducted only after it is underway and a regulatory question arises.

[k]Interesting choice to say “accessing” rather than “requiring”. On the other hand, can cultural literacy be trained or does it have to be lived?

[l]I think the idea behind the training is to get people to practice thinking differently about situations – an opportunity to simulate this conversation before it needs to be had in real life.

[m]Yes, that’s a more valuable approach to training than an information transfer approach (i.e. women in Afghanistan have to be extra careful this year).

[n]Recognize hazards

[o]Assess risks

[p]Intersectionality is a key element here – even if within ethnic groups, male and female perceptions of risks can be quite different in valid ways.

[q]Yes! Intersectionality is VERY important.

[r]This was one of the deaths at UVM – an anthropologist in Brazil was shot dead by a local

[s]When I did fieldwork for the USDA, I got a very quick rundown of things from my lab manager. Basically, make sure that your USDA badge is easily accessible just in case anyone questions your presence. However, working for the govt may also be reason for someone to be hostile to you, so basically if a local challenges you, just pack up and leave. Their biggest challenge in the region was being shot at by anti-govt folks.

[t]Good point. We need to learn de-escalation approaches that don’t undermine the point we are trying to make.

[u]Is this in the US or overseas?

[v]Is working alone a methodological advantage in some research settings?

[w]Depending on what you are doing, too much noise can be a problem.

[x]I have heard this from some of my students.

[y]I think about this whenever I hear the “don’t work alone” or “don’t work odd hours” policies. I’ve known many people who did these 2 things in order to avoid someone they have had bad interactions with – including in my own graduate lab.

[z]I wonder how much of those “unhealthy attitudes and behaviors” seen in academia are due to some of these situations; trying to avoid certain people or situations.

[aa]This whole section basically describes a risk assessment that should be done before the fieldwork begins. I wonder if there are any regulatory bodies that have authority or provide guidelines for doing this. Or if universities have policies on travel for work, etc. Otherwise, the concern is that if it’s just “you should do this” and not enforced anywhere, people won’t do it, which is maybe why this is a problem in the first place

[ab]The only “regulatory body” with international jurisdiction is the Dept of State, which tracks political volatility but leaves specific activities up to the individual

[ac]CSU offers a huge array of assistance to our high-risk and/or international travelers. You MUST register; otherwise, these benefits may not be available to you.

[ad]Does supervisor = funder in this context? I know that some parts of the Dept of Defense have much stricter safety expectations than academia.

[ae]For my research group meetings, we start with a safety moment and a diversity moment where we present something from our culture or background that helps educate us all, though not necessarily in predicting risks.

[af]Minimize risks and Prepare for emergencies

[ag]Verbally inquiring?  I’m not sure how this would be approached….

[ah]Not to sound like too much of an enthusiastic convert, but this is potentially a space in which RAMP could be applied – i.e. if you encourage a general conversation about recognizing hazards in different research settings, you can invite the conversation without making certain individuals feel explicitly called out – and give them a space where it is expected/comfortable to bring up hazards that others may not have considered.

[ai]A great way to do this is to simply ask “How can I best support you in your research efforts?”

[aj]It seems like part of this is the PI themselves identifying the potential risks to their students based on obvious available information or differences in the environment, so this sort of risk analysis could be made available to students without them having to come forward explicitly

[ak]Jessica and Anthony, those are both great ideas.  Maybe implement RAMP and explain it by saying that we are trying to best support them in their research efforts.

[al]Kali, when I initially read this the idea of trying to predict the situations someone might face in an unfamiliar location seemed impossibly daunting, but the way you phrased it makes it much more achievable!

[am]I think the danger of PIs simply doing this for themselves is exactly what is pointed out in the article. One’s own experiences and biases may blind them to hazards that may seem obvious to others. In some situations it could actually be worse to have a PI think they figured it all out without consulting with their research group members because you then have a top-down approach that those it is enforced upon do not feel invited to discuss or challenge.

[an]This reminded me of the paper that was shared in the slack channel on finding PPE solutions while wearing a hijab and PPE/cleanroom issues with long hair which is important to religions such as Sikhism.

[ao]I like that paper because it took the approach of “what can we do to resolve the issue” rather than “well you are the one creating the problem.”

[ap]I suspect that members of the local community who cooperate with an unwelcome researcher are also at risk.

[aq]Does this increased risk arise from researchers asking culturally insensitive questions in a prying way? Not all cultures will share their thought processes with anyone who asks.

[ar]When you manage people as a group versus individuals, one can create a sense of everyone belonging so that there is no “other.”

[as]This is an interesting comment that I believe I am interpreting another way. I have found that being managed “as a group” is part of the problem. Some managers fail to see the ways individuals in their groups are impacted differently by situations or policies – then if that person questions or challenges it, they are seen as “the problem.”

[at]Are there offsetting opportunities that at-risk populations benefit from?

[au]A question of equality versus equity

[av]This definitely depends on the workplace…if you don’t have the support behind you, one could be risking their job if they spoke out or asked about an individual’s identity.

[aw]Or you could be risking your job as a supervisor if you assume your risks are the same as everyone else. Trying to increase safety should not come with a sense of fear.

[ax]Tough one to do well. Offering to a group is easier than to an individual; otherwise you do feel targeted.

[ay]I agree with this. It’s hard when it’s only a single student or a small group going, though. Perhaps there is a procedure to analyze risk that you can use with everyone equally.

[az]I think that this is a good opportunity to raise this concern; particularly with subject matter experts from other disciplines who may have experience in the locations being researched.

[ba]Meaning, it is something the institution says it values but doesn’t follow through on that value?

[bb]This is about acknowledging others’ feelings and letting them feel heard.

[bc]One of the best books I’ve ever read is “Difficult Conversations” by Douglas Stone, Bruce Patton, and Sheila Heen. In that book, the authors highlight the central problem that we can see the intentions behind our own actions, and the impacts of others’ actions, but others’ intentions and the impacts of our own actions are often opaque to us. It may take, dare I say it, grit, to initiate a conversation about behaviors that have caused you pain, but I don’t believe there’s any other way to resolve those situations.

[bd]True

[be]My experience is the Human Resources staff are in a silo of their own, separated from the mission of the larger organization. This means that they have a hard time connecting to an evolving workplace

[bf]Since they get all of the hard personnel cases, they tend to manage to avoid the last bad experience rather than to respond to emerging challenges

[bg]I also think it is important to keep in mind that HR is hired to protect the company – not individuals. It is why I find it awkward when people automatically just say “take it to HR.” HR are trained to be mediators to try to get things to quietly blow over. They aren’t going to charge in to fight your battles for you.

[bh]Putting the definition here because I had to look it up: The grey rock method is a strategy some people use when interacting with manipulative or abusive individuals. It involves becoming as unresponsive as possible to the abusive person’s behavior.

[bi]not quite clear on these meanings in this context

[bj]Gray rock basically would mean being unresponsive to “defuse” the traumatic situation (or person initiating it).

[bk]https://180rule.com/the-gray-rock-method-of-dealing-with-psychopaths/

[bl]Agree that this is a very misused term. The concept of “managing up” really only works if all actors are well-meaning. If someone is being hostile or abusive, telling the victim of the behavior to “manage up” is really just telling them that no one is going to help them.

[bm]In this context, “managing up” can mean finding allies at your level who can help you identify potential supporters who are not in your direct reporting line. The caveat here is that my experience is entirely in academia, which has very thin reporting lines with lots of turnover, so working around specific managers is often a less risky approach than it might be in other settings. I have seen it work in some situations and not in others. Your Mileage May Vary

[bn]To be honest, the definition you used here does not square with the understanding that I have of “managing up.” What you just described is simply looking for other managers (apart from your own) to deal with the situation. “Managing Up” specifically has to do with your relationship with your manager(s).

[bo]I’ve taken one course on “Managing Up”, and the definition I got from it was understanding that your boss is a fallible human being, and that as you get to know them, you should try to interact with them in a way that accommodates that (e.g. following up by phone or in person if you know they’re bad at reading their email).

[bp]I have understood it as being proactive when it comes to solving problems in a way that helps your manager & makes them look good – while also helping yourself. It requires having a clear understanding of what the goals are for both of you – while also recognizing your manager’s strengths and weaknesses relative to your own, and finding a way to work with that. I don’t see anything wrong with that as advice – but the “weaknesses” to be managed here really shouldn’t be outright abusive behaviors that they do not see & work to correct for themselves. You’re an employee – not a punching bag or a therapist.

[bq]I like the suggestion to explicitly mention that a trusting ear and confidence is what is expected instead of advice. I know I appreciate that distinction personally.

[br]As someone who struggles with active listening, and jumps immediately to problem-solving mode, I wholeheartedly agree!

[bs]Particularly if it helps clarify your thoughts, both for yourself and for potential allies

[bt]That’s a sign of a dysfunctional safety culture; it’s just as bad as playing the blame game

[bu]This is well described in a recent article in JCHAS “Listening to the Well, Listening to Each Other, and Listening to the Silence—New Safety Lessons from Deepwater Horizon” https://pubs.acs.org/doi/10.1021/acs.chas.1c00050

The authors describe the HR aspects of the Deepwater Horizon as part of the safety system that led to that explosion

[bv]These are more terms that I feel have been misused and abused. Having grit to try multiple ways to solve a legitimate challenge is good to encourage; “having grit” to put up with someone being abusive towards you is NOT okay to encourage.

[bw]We have been seeing more and more survivors of professional sports experiences describe publicly what happened to them and how grit was used as a rhetorical device to avoid addressing their cases. It’s encouraging to see these people “come out”

[bx]This is certainly very difficult – “challenging fear-based work culture”

[by]I believe that this is one of the reasons that unions arose internationally and are still valued in many countries. Unfortunately, the regulatory environment of the US has taken this peer support function away from unions in favor of simple economic bargaining (I just came from a union meeting at lunch time, where this played out in real time.)

[bz]“Trauma” is a huge word to throw around, as it usually means a severe emotional response to a life-threatening event or series of events that are emotionally disturbing. In my experience, some minorities even find using the word “trauma” offensive. How about depressed, tired, and all those other descriptors.

[ca]There is a cost to identifying one’s status as a victim, not just in social standing, but in personal mental health. It takes a lot of bravery to identify one’s trauma publicly. (See the comments about athletes above.)

[cb]Even after looking at the complete article, I’m confused about what design work this refers to. I can take it two ways:

[cc]1. People who design spaces, web sites, projects(?) with social dynamics in mind?

[cd]2. People who work in design firms (i.e. creative thinkers) who find those firms toxic. I have known several people with that experience. Some creative thinkers can be rather deaf to other people’s feedback.

Enhancing Research Productivity Through LSTs

On November 4, 2021, CHAS sponsored an ACS webinar presented by four current and recent graduate students about their work with Laboratory Safety Teams (LSTs) and why they took up this challenge. A key reason is that the productivity of their work and the safety of their labs are linked through the housekeeping issues they faced in the lab.

The recording of the webinar will be available to ACS members soon, but you can review their presentation file here.

The audience provided many questions and comments to the panel. The questions were discussed in the recording available from ACS Webinars. Some of these issues were:

The Impact of Lab Housekeeping

  • Did you ever see serious accidents because of a lack of housekeeping?
  • An audience member responded: A major lab cleanup in the lab where I was finishing up as a graduate student nearly ended in disaster when a waste bottle EXPLODED. Fortunately, no one was present — everyone had left for dinner. Pieces of broken glass were found at the other end of the lab.

Working with the Administration

  • Have there been any situations where your PI encouraged you to deprioritize safety/housekeeping concerns because they did not put emphasis on it? How would you encourage a researcher who is facing this but interested in LSTs?
  • Have you run into management or leadership that is reluctant to implement changes to safety programs? How did you deal with this when not holding a leadership position?
  • How do you get students involved in lab safety if the PI doesn’t show interest in the matter?
  • I think a lab safety team of students is great, but I also think a liaison between the research labs and EHS has proven extremely beneficial: while EHS looks at compliance and waste removal, as chemists we are often a resource for them as well.

Professional Skill Development

  • I have worked on a safety team and found it initially uncomfortable to give feedback to others in regards to housekeeping and safety. How do we support teams so they feel comfortable/empowered to provide feedback to others in their lab?
  • Lab safety is a big priority in industry (as we all know) and experience with lab safety is a GOOD thing to put on your resume. I’m sure comments along these lines helped me get my first industry job.
  • Kudos for all the safety culture building!

LST Strategies

  • Do you think it’s advisable to separate safety leadership in a lab from the responsibilities of a lab manager?
  • What are some strategies for encouraging students to join the LST on their own accord? It seems important that this not be mandatory necessarily, but how do you get people excited about putting more time into something when everyone is stretched pretty thin typically?
  • What fallout has happened, or not, from the fatal lab accident that occurred at UCLA?
  • What hazards do the LST find most frequently?
  • What systematic changes have you seen that are sustainable?
  • What is the gender breakdown of researchers participating in LSTs? As a safety professional I am sensitive to recognizing the majority role women play in “non-promotable” tasks. If a gender discrepancy exists, how can we address it?

Educational Opportunities

  • Hello, great webinar! This semester I am working with small groups of students from different labs (internships and rotations), and I think working on safety is a great topic to consider as part of the learning process. Any recommendations? Greetings from Peru.

  • If we educate students before they come to the lab, will it benefit the LST?


Safety Culture Transformation – The impact of training on explicit and implicit safety attitudes

On October 27, 2021, the CHAS Journal Club heard from the lead author of the paper “Safety culture transformation—The impact of training on explicit and implicit safety attitudes”. The complete open access paper can be found online at this link. The presentation file used by Nicki Marquardt, the presenting author, includes the graphics and statistical data from the paper.

Comments from the Table Read

On October 20, the journal club did a silent table read of an abridged portion of the article. This version of the article and their comments are below.

1. INTRODUCTION

Safety attitudes of workers and managers have a large impact on safety behavior and performance in many industries (Clarke, 2006, 2010; Ford & Tetrick, 2011; Ricci et al., 2018). They are an integral part of an organizational safety culture[a] and can therefore influence occupational health and safety, organizational reliability, and product safety (Burns et al., 2006; Guldenmund, 2000; Marquardt et al., 2012; Xu et al., 2014).

There are different forms of interventions for safety culture and safety attitude change; trainings are one of them. Safety trainings seek to emphasize the importance of safety behavior and promote appropriate, safety-oriented attitudes among employees[b][c][d][e][f][g][h][i] (Ricci et al., 2016, 2018).

However, research in the field of social cognition has shown that attitudes can be grouped into two different forms: on the one hand, there are conscious and reflective so-called explicit attitudes; on the other hand, there are mainly unconscious implicit attitudes (Greenwald & Banaji, 1995). Although there is an ongoing debate whether implicit attitudes are unconscious or partly unconscious (Berger, 2020; Gawronski et al., 2006), most researchers affirm the existence of these two structurally distinctive attitudes (Greenwald & Nosek, 2009). Traditionally, researchers have studied explicit attitudes of employees by using questionnaires[j] (e.g., Cox & Cox, 1991; Rundmo, 2000). However, increasingly more researchers now focus on implicit attitudes that can be assessed with reaction time measures like the Implicit Association Test[k][l] (IAT; Greenwald et al., 1998; Ledesma et al., 2015; Marquardt, 2010; Rydell et al., 2006). These implicit attitudes could provide better insights into what influences safety behavior because they are considered to be tightly linked with key safety indicators. Unlike explicit attitudes, they are considered unalterable by socially desirable responses (Burns et al., 2006; Ledesma et al., 2018; Marquardt et al., 2012; Xu et al., 2014). Nevertheless, no empirical research could yet be found on whether implicit and explicit safety attitudes are affected by training. Therefore, the aim of this paper is to investigate the effects that training may have on implicit and explicit safety attitudes. The results could be used to draw implications for the improvement of safety training and safety culture development.
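
For readers unfamiliar with how a reaction-time measure becomes an attitude score, the sketch below illustrates the general idea behind IAT scoring: faster responses when two concepts share a response key are taken as evidence of a stronger association. This is a minimal illustration, not necessarily the scoring pipeline used in the studies excerpted here; the latencies are invented, and real scoring procedures also trim outliers and penalize errors.

```python
# Illustrative only: a minimal IAT-style D-score. Assumes the common
# convention of dividing the mean latency difference between block types
# by the standard deviation of all latencies. All data are invented.
from statistics import mean, stdev

def iat_d_score(compatible_ms, incompatible_ms):
    """Standardized latency difference between IAT block types.

    Positive values mean slower responses in the incompatible block,
    i.e., a stronger implicit association in the compatible pairing.
    """
    diff = mean(incompatible_ms) - mean(compatible_ms)
    pooled_sd = stdev(compatible_ms + incompatible_ms)  # SD over all trials
    return diff / pooled_sd

# Hypothetical latencies in milliseconds for one respondent
compatible = [612, 580, 655, 700, 633, 598]    # e.g., "safety" paired with "good"
incompatible = [745, 810, 690, 778, 802, 760]  # e.g., "safety" paired with "bad"
print(round(iat_d_score(compatible, incompatible), 2))  # positive: pro-safety association
```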

1.1 Explicit and implicit attitudes in safety contexts

Explicit attitudes are described as reflective, which means a person has conscious control over them[m] (Strack & Deutsch, 2004). In their associative–propositional evaluation (APE) model, Gawronski and Bodenhausen (2006) assume that explicit attitudes are based on propositional processes. These consist of evaluations derived from logical conclusions. In addition, explicit attitudes are often influenced by social desirability[n][o][p][q][r] if the topic is rather sensitive, such as moral issues (Maass et al., 2012; Marquardt, 2010; Van de Mortel, 2008). This has also been observed in safety research where, in a study on helmet use, the explicit measure was associated with a Social Desirability Scale (Ledesma et al., 2018). Furthermore, it is said that explicit attitudes can be changed faster and more completely than implicit ones (Dovidio et al., 2001; Gawronski et al., 2017).

On the other hand, implicit attitudes are considered automatic, impulsive, and widely unconscious (Rydell et al., 2006). According to Greenwald and Banaji (1995, p. 5), they can be defined as “introspectively unidentified (or inaccurately identified) traces of past experience” that mediate overt responses. Hence, they use the term “implicit” as a broad label for a wide range of mental states and processes such as unaware, unconscious, intuitive, and automatic which are difficult to identify introspectively by a subject. Gawronski and Bodenhausen (2006) describe implicit attitudes as affective reactions that arise when stimuli activate automatic networks of associations. However, although Gawronski and Bodenhausen (2006) do not deny “that certain affective reactions are below the threshold of experiential awareness” (p. 696), they are critical towards the “potential unconsciousness of implicit attitudes” (p. 696). Therefore, they use the term “implicit” predominantly for the aspect of automaticity of affective reactions. Nevertheless, research has shown that people are not fully aware of the influence of implicit attitudes on their thinking and behavior even though they are not always completely unconscious (Berger, 2020; Chen & Bargh, 1997; De Houwer et al., 2007; Gawronski et al., 2006). Many authors say that implicit attitudes remain more or less stable over time and are hard to change (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In line with this, past studies in which attempts were made to change implicit attitudes often failed to achieve significant improvements (e.g., Marquardt, 2016; Vingilis et al., 2015).

1.3 Training and safety attitude change[s][t]

As mentioned in the introduction, the main question of this paper is whether training can change implicit and explicit safety attitudes. Safety training can improve a person’s ability to correctly identify, assess, and respond to possible hazards in the work environment, which in turn can lead to a better safety culture (Burke et al., 2006; Duffy, 2003; Wu et al., 2007). Besides individual safety training, increasingly more industries, such as aviation, medicine, and the offshore oil and gas industry, implement group trainings labeled as Crew Resource Management (CRM) training to address shared knowledge and task coordination in dynamic and dangerous work settings (Salas et al., 2006).

There are many different factors that determine the effectiveness of safety trainings (Burke et al., 2006; Ricci et al., 2016), such as the training method (e.g., classroom lectures) and training duration (e.g., 8 h).

As can be seen in Figure 1, associative evaluations[u][v][w][x] (process) can be activated by different safety intervention stimuli such as training (input). These associative evaluations are the foundation for implicit safety attitudes (output) and for propositional reasoning (process), which in turn forms the explicit safety attitudes (output). In addition, associative evaluations and propositional reasoning processes affect each other in many complex conscious and unconscious ways (Gawronski & Bodenhausen, 2006). However, change rates might differ: while the propositional processes adapt very quickly to the input (e.g., safety training), the associative evaluations might need longer periods of time to restructure the associative network (Karpen et al., 2012). Therefore, divergences in the implicit and explicit measures, resulting in inconsistent attitudes (output), can occur (McKenzie & Carrie, 2018).
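
The different change rates claimed here can be made concrete with a toy simulation (ours, not the paper’s): two attitude traces chase the same training signal at different speeds, and after a short training the fast “propositional” trace has moved much further than the slow “associative” one, producing the implicit/explicit divergence described above. Every rate and value below is an arbitrary assumption chosen for illustration.

```python
# Toy illustration (not from the paper): two attitude traces updated at
# different rates toward the same training signal. The fast trace stands in
# for propositional/explicit processes, the slow one for associative/implicit
# processes. All parameter values are arbitrary assumptions.

def update(attitude, signal, rate):
    """Move an attitude a fixed fraction of the way toward the signal."""
    return attitude + rate * (signal - attitude)

explicit, implicit = 0.0, 0.0  # neutral starting attitudes
signal = 1.0                   # pro-safety message delivered in training
for session in range(1, 6):
    explicit = update(explicit, signal, rate=0.6)   # fast propositional change
    implicit = update(implicit, signal, rate=0.05)  # slow associative change
    print(f"session {session}: explicit={explicit:.2f}  implicit={implicit:.2f}")

# After five sessions the explicit trace sits near the signal while the
# implicit trace has moved far less -- the divergence described in the text.
```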

1.4 Hypotheses and overview of the present studies

Based on the theories and findings introduced above, two main hypotheses are presented. Since previous research describes that explicit attitudes can be changed relatively quickly (Dovidio et al., 2001; Karpen et al., 2012), the first hypothesis states that:

  • H1: Explicit safety attitudes can be changed by training.

Even though implicit attitudes are said to be more stable and harder to change (Dovidio et al., 2001; Gawronski et al., 2017; Wilson et al., 2000), changes in implicit attitudes through training can be expected too, due to changes in the associative evaluation processes (Lai et al., 2013) which affect the implicit attitudes (see EISAC model in Figure 1). Empirical research on the subject of implicit attitudinal change through training is scarce (Marquardt, 2016); however, it has been shown that an influence on implicit attitudes is possible[y][z][aa] (Charlesworth & Banaji, 2019; Jackson et al., 2014; Lai et al., 2016; Rudman et al., 2001). Therefore, the second hypothesis states that:

  • H2: Implicit safety attitudes can be changed by training.

However, there is currently a lack of empirical studies on implicit and explicit attitude change using longitudinal designs in different contexts (Lai et al., 2013). Also, in the field of safety training research, studies are needed to estimate training effectiveness over time (Burke et al., 2006). Therefore, to address the issues of time and context in safety attitude change by training, three studies with different training durations and measurement time frames in different safety-relevant contexts were conducted (see Table 1). In the first study, short-term attitude change was measured 3 days prior to and after a 2-h safety training in a chemical laboratory. In the second study, medium-term attitude change was assessed 1 month prior to and after 2 days of CRM training for production workers. In the third study, long-term attitude changes were measured within an advanced experimental design (12 months between pre- and post-measure) after 12 weeks of safety ethics training in an occupational psychology student sample.[ab] To make this paper more succinct and to ease comparison of the methods used and the results revealed, all three studies will be presented in parallel in the following method, results, and discussion sections. A summary table of all the studies can be seen in Table 1.
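
To make the pre/post logic of these designs concrete, here is a hedged sketch of the kind of paired comparison such studies imply. The paper’s raw data are not reproduced here; the scores below are invented, and the authors’ actual statistical procedures may differ.

```python
# Illustrative pre/post comparison for one training group. The attitude
# scores are invented; this is not the paper's data or its exact analysis.
from statistics import mean, stdev
from math import sqrt

pre  = [3.1, 2.8, 3.5, 3.0, 2.6, 3.2, 2.9, 3.4]  # explicit attitude, before training
post = [3.6, 3.3, 3.9, 3.2, 3.1, 3.8, 3.0, 3.7]  # explicit attitude, after training

diffs = [after - before for before, after in zip(pre, post)]
d_z = mean(diffs) / stdev(diffs)  # Cohen's d for paired samples
t = d_z * sqrt(len(diffs))        # paired t statistic, df = n - 1

print(f"mean change = {mean(diffs):.2f}, d_z = {d_z:.2f}, t({len(diffs) - 1}) = {t:.2f}")
```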

2. METHODS

2.1.1 Study 1

Fifteen participants (eight female and seven male; mean age = 22.93 years; SD = 2.74) were recruited for the first study. The participants were from different countries, with a focus on East and South Asia (e.g., India, Bangladesh, and China). They were enrolled in one class of an international environmental sciences study program with a major focus on practical experimental work in chemical and biological laboratories in Germany. Participation in regular safety training was mandatory for all participants to be admitted to working in these laboratories. To ensure safe working in the laboratories, the environmental sciences study program traditionally has small classes of 15–20 students. Hence, the sample represents the vast majority of one entire class of this study program. However, due to the lockdown caused by the COVID-19 pandemic, there was no opportunity to increase the sample size in a subsequent study. Consequently, the sample size was very small.

2.1.2 Study 2

A sample of 81 German assembly-line workers of an automotive manufacturer participated in Study 2. The workers were grouped into self-directed teams responsible for gearbox manufacturing. Hence, human error during the production process could threaten the health and safety of the affected workers and also the product safety of the gearbox, which in turn affects the health and safety of prospective consumers. The gearbox production unit encompassed roughly 85 workers. Thus, the sample represents the vast majority of the production unit’s workforce. Due to the precondition of the evaluation being anonymous, as requested by the firm’s works council, personal data such as age, sex, and qualification could not be collected.

2.1.3 Study 3

In Study 3, complete data sets of 134 German participants (mean age = 24.14; SD = 5.49; 92 female, 42 male) could be collected. All participants were enrolled in Occupational Psychology and International Business study programs with a special focus on managerial decision making under uncertainty and risks. The sample represents the vast majority of two classes of this study program since one class typically includes roughly 60–70 students. Furthermore, 43 of these students also had a few years of work experience (mean = 4.31; SD = 4.07).

4. DISCUSSION

4.1 Discussion of results

The overall research objective of this paper was to find out about the possibility of explicit and implicit safety attitude changes by training. Therefore, two hypotheses were created. H1 stated that explicit safety attitudes can be changed by training. H2 stated that implicit safety attitudes can be changed by training. Based on the results of Studies 1–3, it can be concluded that explicit safety attitudes can be changed by safety training. In respect of effect sizes, significant small effects (Study 2), medium effects (Study 1), and even large effects (Study 3) were observed. Consequently, the first hypothesis (H1) was supported by all three studies. Nevertheless, compared to the meta-analytic results by Ricci et al. (2016) who obtained very large effect sizes, the effects of training on the explicit safety attitudes were lower in the present studies. In contrast, none of the three studies revealed significant changes in the implicit safety attitudes after the training. Even though there were positive changes in the post-measures, the effect sizes were marginal and nonsignificant. Accordingly, the second hypothesis (H2) was not confirmed in any of these three studies. In addition, it seems that the duration of safety training (e.g., 2 h, 2 days, or even 12 weeks) has no effect on the implicit attitudes[ac][ad][ae][af][ag][ah]. However, the effect sizes of short-term and medium-term training of Studies 1 and 2 were larger than those obtained in the study by Lai et al. (2016), whose effect sizes were close to zero after the follow-up measure 2–4 days after the intervention.

The results obtained in these studies differ with regard to effect size. This can partly be explained by the characteristics of the samples. For instance, in Studies 1 and 3, the participants of the training, as well as the control groups (Study 3 only), were students from occupational psychology and environmental sciences degree programs. Therefore, all students—even those of the control groups—are familiar with concepts of health and safety issues, sustainability, and prosocial behavior. Consequently, the degree programs could have had an impact on the implicit sensitization of the students, which might have caused high values in implicit safety attitudes even in the control groups. The relatively high IAT effects in all four groups prior to and after the training are therefore an indication of a ceiling effect in the third study (see Table 3). This is in line with the few empirical results gained by previous research in the field of implicit and explicit attitude change by training (Jackson et al., 2014; Marquardt, 2016). Specifically, Jackson et al. (2014) also found a ceiling effect in the favorable implicit attitudes towards women in STEM of female participants, who showed no significant change in implicit attitudes after a diversity training.[ai][aj][ak]

Finally, it seems that the implicit attitudes were mainly unaffected by the training. The IAT data showed no significant impact in any group comparison or pre- and post-measure comparison. To conclude, based on the current results it can be assumed that if there is a training effect, it manifests itself in the explicit and not the implicit safety attitudes. One explanation might be that implicit safety attitudes are more stable unconscious dispositions which cannot be changed as easily as explicit ones (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In respect of the EISAC model (see Section 1.3), unconscious associative evaluations might be activated by safety training, but not sustainably changed. A true implicit safety attitude change would refer to a shift in associative evaluations that persists across multiple safety contexts and over longer periods of time (Lai et al., 2013).[al][am]

5. PRACTICAL IMPLICATIONS AND CONCLUSION

What do the current empirical results mean for safety culture and training development? Based on the assumption that implicit attitudes are harder to change (Gawronski et al., 2017) and thus may require active engagement via the central route of conviction (Petty & Cacioppo, 1986), this could explain why there was no change in Study 1. This assumption is supported by the meta-analysis of Burke et al. (2006), who found large effect sizes for highly engaging training methods (e.g., behavior modeling, feedback, safety dialog) in general, and by the meta-analysis of Ricci et al. (2016), who obtained large effect sizes on attitudes in particular. However, the more engaging training methods such as interactive tutorials, case analyses, cooperative learning phases, role plays, and debriefs (structured group discussions)—which have shown strong meta-analytic effects (Ricci et al., 2016)—used in Studies 2 (CRM training) and 3 (safety ethics training) did have a significant impact on the explicit but not the implicit attitude change[an][ao]. In addition, it seems that more intense training of longer duration (e.g., 12 weeks in Study 3) again has no effect on implicit attitude change. Therefore, other approaches[ap][aq] may be more promising.

To sum up, even though the outlined conclusions are tentative, it could be very useful in the future to design realistic and affect-inducing training simulations via emergency simulators or virtual reality approaches[ar][as][at][au][av][aw][ax][ay][az][ba] (Sacks et al., 2013; Seymour et al., 2002) for all highly hazardous industries. If these simulations are accompanied by highly engaging behavioral (e.g., behavioral modeling; Burke et al., 2006, 2011), social (e.g., debriefs/structured group discussions; Ricci et al., 2016), and cognitive (e.g., implementation intentions; Lai et al., 2016) training methods, then they might facilitate a positive explicit and even implicit safety attitude change and finally a sustainable safety culture transformation.

[a]A theoretical question that occurs to me when reading this is:

Is “an organizational safety culture” the sum of the safety attitudes of workers and management or is there a synergy among these attitudes that creates a non-linear feedback effect?

[b]I would not have thought of this as the purpose of discrete trainings. I would have thought that the purpose of trainings is to teach the skills necessary to do a job safely.

[c]I agree. Safety Trainings are about acquiring skills to operate safely in a specific process…the collective (Total Environment) affects safety behavior.

[d]I think this could go back to the point below about fostering the environment – safety trainings communicating that safety is a part of the culture here.

[e]Safety professionals (myself included) have historically misused the term “training” to refer to what are really presentations.

[f]Agreed. I always say something that happens in a lecture hall with my butt in a chair is probably not a “training.” While I see the point made above, many places have “trainings” simply because they are legally required to have them. It says little to nothing about the safety culture of the whole environment.

[g]Maybe they go more into the actual training types used in the manuscript, but we typically start in a lecture hall and then move into the labs for our trainings, so I would still classify what we have as a training, but I can see what you mean about a training being more like a presentation in some cases.

[h]This is something I struggle with… but I'm trying to refer to the lecture-style component as a safety presentation and the actual working with spill kits as a safety training. It has been well received!

[i]This is a core question and has been an ongoing struggle ever since I started EHS training in an education-oriented environment.

As a result, over time I have moved my educational objectives from content based (e.g. what is an MSDS?) to awareness based (what steps should you take when you have a safety question). However, the EHS community is sloppy when talking about training and education, which are distinct activities.

[j]Looks like these would be used for more factual items such as evaluating what the researcher did, not how/why they did it

[k]I’m skeptical that IATs are predictive of real-world behavior in all, or even most, circumstances. I’d be more interested in an extension of this work that looks at whether training (or “training”) changes revealed preferences based on field observations.

[l]Yes – much more difficult to do but also much more relevant. I would be more interested in seeing if decision-making behavior changes under certain circumstances. This would tell you if training was effective or not.

[m]This is a little confusing to me but sounds like language that makes sense in another context.

[n]What are the safety-related social desirabilities of chemistry grad students?

[o]I would think these would be tied to not wanting to “get in trouble.”

[p]Also, likely linked to being wrong about something chemistry-related.

[q]What about the opposite? Not wear PPE to be cool?

[r]In my grad student days, I was primarily learning how to “fake it until I make it”. This often led to the imposter syndrome being socially desirable. This probably arose from the ongoing awareness of grading and other judgement systems that the academic environment relies on

[s]Were study participants aware or were the studies conducted blind? If I am an employee and I know my progress will be measured, I may behave differently than if I had not known.

[t]This points back to last week’s article.

[u]What are some other ways to activate our associative evaluations?

[v]I would think it would include things like witnessing your lab mates follow safety guidance, having your PI explicitly ask you about risk assessment on your experiments, having safety issues remedied quickly by your facility. Basically, the norms you would associate with your workplace.

[w]Right, I just wonder if there'd be another way besides the training (input) to produce the intended change in the associative evaluation process we go through to form an implicit attitude. We definitely have interactions on a daily basis which can influence that, but is there some other way to tell our subconscious mind something is important?

[x]In the days before social media, we used social marketing campaigns that were observably successful, but they relied on a core of career lab techs who supported a rotating cast of medical researchers. The lab techs were quite concerned about both their own safety and the quality of their science as a result of the 3 to 6 month rotation of the MD/PhD researchers.

The social marketing campaigns included 1) word of mouth, 2) supporting graphical materials and 3) ongoing EHS presence in labs to be the bad guys on behalf of the career lab techs

[y]This reminds me of leading vs lagging indicators for cultural change

[z]This also makes me think of the arguments around “get the hands to do the right things and the attitudes will follow” which is along the lines of what Geller describes.

[aa]That’s a great comparison. Emphasizes the importance of embedding it throughout the curriculum to be taught over long periods of time

[ab]A possible confounding variable here would have to do with how much that training was reinforced between the training and the survey period. Twelve months out (or even 3 months out), a person may not even remember what was said or done in that specific training, so their attitudes are likely to be influenced by what has been happening in the meantime.

[ac]I don’t find this surprising. I would imagine that what was happening in the mean time (outside of the training) would have a larger impact on implicit attitudes.

[ad]I was really hoping to see a comparison using the same attitude time frame for the 3 different training durations. Like a short-term, medium, and long-term evaluation of the attitudes for all 3 training durations, but maybe this isn’t how things are done in these kinds of studies.

[ae]This seems to be the trouble with many of the behavioral sciences papers I read: you can study what is available, not necessarily what lines up with your hypothesis.

[af]Personally, I would have been more interested in the long-term evaluation of the medium training duration, to see attitudes over a longer period of time, for example.

[ag]I think this is incredibly hard to get right though. An individual training is rarely impactful enough for people to remember it. And lots of stuff happens in between when you take the training and when you are “measured” that could also impact your safety attitudes. If the training you just went through isn’t enforced by anyone anywhere, what value did it really have? Alternatively, if people already do things the right way, then the training may have just helped you learn how to do everything right – but was it the training or the environment that led to positive implicit safety attitudes? Very difficult to tease apart in reality.

[ah]Yeah, maybe have training follow-ups or an assessment of some sort to determine whether the information was retained, as a way to evaluate the impact the training had on other aspects as well as on attitudes.

[ai]What effect does this conclusion have on JEDI or DEI training?

[aj]I also found this point to be very interesting. I wonder if this paper discussed explicit attitudes. I’m not sure what explicit vs implicit attitudes would mean in a DEI context because they seem more interrelated (unconscious bias, etc.)

[ak]I am also curious how Implicit Attitude compares to Unconscious Bias.

[al]i.e. Integrated across the curriculum over time?

[am]One challenge I see here is the competing definitions of “safety”: chemical safety, personal security, community safety, and social safety all compete for part of the safety education pie. I think this is why many people’s eyes glaze over when safety training is brought up or presented.

[an]The authors mention that social desirability is one reason explicit and implicit attitudes can diverge, but is it the only reason, or even the primary one? I’m somewhat interested in the degree to which that played a role here (though I’m also still not entirely sure how much I care whether someone is a “true believer” when it comes to safety or just says/does all the right things because they know it’s expected of them).

[ao]This is a good point.

[ap]I am curious to learn more about these approaches.

[aq]I believe the author discusses more thoroughly in the full paper

[ar]Would these trainings only be for emergencies, or for all trainings? I feel that a lot of times we are told what emergencies might pop up and how to handle them, but never see them in action. This reminds me of a thought I had about making a lab safety video game where you could “fail” at handling an emergency situation in the lab without facing the direct consequences in the real world.

[as]Love that idea, it makes sense that you would remember it better if you got to walk through the actual process. I wonder what the effect of engagement would be on implicit and explicit attitudes.

[at]Absolutely – I think valuable learning moments come from doing the action, and it honestly would be safer to learn by making mistakes in a virtual environment when it comes to our kind of safety. The idea reminds me of the tennis video games I used to play when I was younger; they helped me learn how to keep score in tennis. Now, screen time would be a concern, but something like this could be explored in some capacity.

[au]This idea is central to trying to bring VR into training. Obviously, you can’t actually have someone spill a chemical all over themselves, etc. – but VR makes it so you virtually could. And there are papers suggesting that the brain “reads” things happening in the VR world as if they really happened. Although one has to be careful with this, because it also opens up the possibility that you could actually traumatize someone in the VR world.

[av]I know I was traumatized just jumping into a VR game where you fell through hoops (10/10, do not recommend falling-based VR games). But maybe it’s less of a VR game and more of a cartoon character that they can customize, so they see the impact that exposure to different chemicals could have without the traumatic experience of being burned themselves, for example.

[aw]In limited time and/or limited funding situations, how can academia utilize these training methodologies? Any creative solutions?

[ax]I’m also really surprised that the conclusion is to focus on training for the worker. I would think that changing attitudes (explicit and implicit) would have more to do with the environment that one works in than it does on a specific training.

[ay]I agree on this. I think the environment one finds themselves in plays a part in shaping one’s attitudes and behaviors.

[az]AGREED

[ba]100% with the emphasis on the environment rather than the training

Are employee surveys biased? CHAS Journal Club, Oct 13, 2021

Impression management as a response bias in workplace safety constructs

In October 2021, the CHAS Journal Club reviewed the 2019 paper by Keiser & Payne examining the impact of “impression management” on the way workers in different sectors responded to safety climate surveys. The authors were able to attend on October 13 to discuss their work with the group. Below is their presentation file, as well as the comments from the table read the week before.

Our thanks to Drs. Keiser and Payne for their work and their willingness to talk with us about it!

10/06 Table Read for The Art & State of Safety Journal Club

Excerpts from “Are employee surveys biased? Impression management as a response bias in workplace safety constructs”

Full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753518315340?casa_token=oOShJnb3arMAAAAA:c4AcnB3fwnlDYlol3o2bcizGF_AlpgKLdEC0FPjkKg8h3CBg0YaAETq8mfCY0y-kn7YcLmOWFA

Meeting Plan

  • (5 minutes) Sarah to open meeting
  • (15 minutes) All participants read complete document
  • (10 minutes) All participants use “Comments” function to share thoughts
  • (10 minutes) All participants read others’ Comments & respond
  • (10 minutes) All participants return to their own Comments & respond
  • (5 minutes) Sarah announces next week’s plans & closes meeting

Introduction

The ultimate goal of workplace safety research is to reduce injuries and fatalities on the job.[a] Safety surveys that measure various safety-related constructs, including safety climate (Zohar, 1980), safety motivation and knowledge (Griffin and Neal, 2000), safety participation and compliance (Griffin and Neal, 2000), and outcome indices (e.g., injuries, incidents, and near misses), are the primary way that researchers gather relevant safety data. They are also used extensively in industry. It is quite common to administer self-report measures of both safety predictors and outcomes in the same survey, which introduces the possibility that method biases prevalent in self-report measures contaminate relationships among safety constructs (Podsakoff et al., 2012).

The impetus for the current investigation is the continued reliance by safety researchers and practitioners on self-report workplace safety surveys. Despite evidence that employees frequently underreport injuries (Probst, 2015; Probst and Estrada, 2010), researchers have not directly examined the possibility that employees portray the workplace as safer than it really is on safety surveys[b]. Correspondingly, the current investigation strives to answer the following question: Are employee safety surveys biased? In this study, we focus on one potential biasing variable, impression management, defined as conscious attempts at exaggerating positive attributes and ignoring negative attributes (Connelly and Chang, 2016; Paulhus, 1984). The purpose of this study is to estimate the prevalence of impression management as a method bias in safety surveys based on the extent to which impression management contaminates self-reports of various workplace safety constructs and relationships among them.[c][d][e]
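As a concrete (and purely illustrative) sketch of how a shared response bias can contaminate relationships among self-report constructs, the following Python simulation uses hypothetical data and effect sizes, not the authors’ analysis. When impression management inflates two self-report measures at once, their raw correlation overstates the true relationship; residualizing both measures on the bias (a partial correlation) shows how much of the association the bias was carrying.

import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Hypothetical data: impression management (im) inflates both
# self-reported safety climate and self-reported compliance.
im = rng.normal(size=n)
true_signal = rng.normal(size=n)
climate = 0.3 * true_signal + 0.8 * im + rng.normal(scale=0.5, size=n)
compliance = 0.3 * true_signal + 0.8 * im + rng.normal(scale=0.5, size=n)

def partial_corr(x, y, z):
    # Correlate x and y after removing (residualizing) the linear
    # influence of z from each.
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

print(f"raw r = {np.corrcoef(climate, compliance)[0, 1]:.2f}")                # ~0.74
print(f"r controlling for im = {partial_corr(climate, compliance, im):.2f}")  # ~0.26

The gap between the two numbers is the kind of contamination the authors set out to estimate with real survey data.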

Study 1

Method

This study was part of a larger assessment of safety climate at a public research university in the United States, using a sample of research laboratory personnel. The recruitment e-mail was sent concurrently to people who had completed laboratory safety training in the previous two years (1841) and to principal investigators (1897). Seven hundred forty-six laboratory personnel responded to the survey… To incentivize participation, respondents were given the option to provide their name and email address via a separate survey link after they completed the survey, in order to be included in a raffle for one of five $100 gift cards.

Measures:

  • Safety climate
  • Safety knowledge, compliance, and participation
  • Perceived job risk and safety outcomes
  • Impression management

Study 2

A second study was conducted to:

  1. Further examine impression management as a method bias in self-reports of safety while
  2. Accounting for personality trait variance in impression management scales.

A personality measure was administered to respondents and controlled for in order to more accurately estimate the degree to which self-report measures of safety constructs are susceptible to impression management as a response bias.

Method

A similar survey was distributed to all laboratory personnel at a different university, located in Qatar. A recruitment email containing a link to an online laboratory safety survey was sent to all faculty, staff, and students at the university (532 people). No incentive was provided for participating, and no personally identifying information was collected from participants. A total of 123 laboratory personnel responded.[f]

Measures:

  • Same constructs as Study 1, plus
  • Personality

Study 3

Two limitations inherent in Study 1 and Study 2 were addressed in a third study, specifically, score reliability and generalizability.

Method

A safety survey was distributed to personnel at an oil and gas company in Qatar as part of a larger collaboration to examine the effectiveness of a safety communication workshop. All employees (∼370) were invited to participate in the survey and 107 responded (29% response rate). Respondents were asked to report their employee identification numbers at the start of the survey, which were used to identify those who participated in the workshop. A majority of employees provided their identifying information (96, 90%).

Measures:

  • Same constructs used in Study 1, plus
  • Risk propensity
  • Safety communication
  • Safety motivation
  • Unlikely virtues

Conclusion[g][h][i][j][k][l]

Safety researchers have provided few direct estimates of method bias[m][n][o][p] in self-report measures of safety constructs. This oversight is especially problematic considering that researchers rely heavily on self-reports to measure both safety predictors and criteria.

The results from all three studies, but especially the first two, suggest that self-reports of safety are susceptible to dishonesty[q][r][s][t][u] aimed at presenting an overly positive representation of safety.[v][w][x][y][z][aa] In Study 1, self-reports of safety knowledge, climate, and behavior appeared to be more susceptible to impression management than self-reports of perceived job risk and safety outcomes. Study 2 provided additional support for impression management as a method bias in self-reports of both safety predictors and outcomes. Further, relationships between impression management and safety constructs remained significant even when controlling for Alpha personality trait variance (conscientiousness, agreeableness, emotional stability). Findings from Study 3 provided less support for the biasing effect of impression management on self-report measures of safety constructs (average VRR=11%). However, the unlikely virtues measure [this is a measure of the tendency to claim uncommon positive traits] did yield more reliable scores than those observed in Study 1 and Study 2, and it was significantly related to safety knowledge, motivation, and compliance. Controlling for the unlikely virtues measure led to the largest reductions in relationships with safety knowledge. A further exploratory comparison of identified vs. anonymous respondents found that mean scores on the unlikely virtues measure were not significantly different for the identified subsample compared to the anonymous subsample; however, unlikely virtues had a larger impact on relationships among safety constructs for the anonymous subsample.

The argument for impression management as a biasing variable in self-reports of safety relied on the salient social consequences of responding and other costs of providing a less desirable response, including, for instance, negative reactions from management, remedial training, or overtime work[ab][ac]. Findings suggest that the influence of impression management on self-report measures of safety constructs depends on various factors[ad] (e.g., the distinct safety constructs, the identification approach, and industry and/or safety salience) rather than supporting the ubiquitous claim that impression management serves as a pervasive method bias.

The results of Study 1 and Study 3 suggest that impression management was most influential as a method bias in self-report measures of safety climate, knowledge, and behavior, compared to perceived risk and safety outcomes. These results might reflect the more concrete nature of the latter constructs, which are grounded in actual experience with hazards and outcomes. Moreover, these findings are in line with Christian et al.’s (2009) conclusion that measurement biases are less of an issue for safety outcomes compared to safety behavior. These findings, in combination with theoretical rationale, suggest that the social consequences of responding are more strongly elicited by self-report measures of safety climate, knowledge, and behavior than by self-reports of perceived job risk and safety outcomes. Items in safety perception and behavior measures fittingly tend to be more personally (e.g., safety compliance – “I carry out my work in a safe manner.”) and socially relevant (e.g., safety climate – “My coworkers always follow safety procedures.”).

The results from Study 2, compared to findings from Study 1 and Study 3, suggest that assessments of job risk and outcomes are also susceptible to impression management. The Alpha personality factor generally accounted for a smaller portion of the variance in the relationships between impression management and perceived risk and safety outcomes. The largest effects of impression management on the relationships among safety constructs were for relationships with perceived risk and safety outcomes. These results align with research on injury underreporting (Probst et al., 2013; Probst and Estrada, 2010) and suggest that employees may have been reluctant to report safety outcomes even when they were administered on an anonymous survey used for research purposes.

We used three samples in part to determine if the effect of impression management generalizes. However, results from Study 3 were inconsistent with the observed effect of impression management in Studies 1 and 2. One possible explanation is that these findings are due to industry differences and specifically the salience of safety. There are clear risks associated with research laboratories as exemplified by notable incidents; [ae]however, the risks of bodily harm and death in the oil and gas industry tend to be much more salient (National Academies of Sciences, Engineering, and Medicine, 2018). Given these differences, employees from the oil and gas industry as reflected in this investigation might have been more motivated to provide a candid and honest response to self-report measures of safety.[af][ag][ah][ai][aj] This explanation, however, is in need of more rigorous assessment.

These results in combination apply more broadly to method bias[ak][al][am] in workplace safety research. The results of these studies highlight the need for safety researchers to acknowledge the potential influence of method bias and to assess the extent to which measurement conditions elicit particular biases.

It is also noteworthy that impression management suppressed relationships in some cases; thus, accounting for impression management might strengthen theoretically important relationships. These results also have meaningful implications for organizations because positively biased responding on safety surveys can contribute to the incorrect assumption that an organization is safer than it really is[an][ao][ap][aq][ar][as][at].
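The suppression effect mentioned here can feel counterintuitive, so a small worked example may help; the correlations below are hypothetical, not taken from the paper. Using the standard first-order partial correlation formula, removing a suppressor that correlates positively with one construct and negatively with the other makes the observed relationship stronger rather than weaker.

import math

def partial_corr(r_xy, r_xz, r_yz):
    # First-order partial correlation of x and y controlling for z.
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Hypothetical values: x and y are two safety constructs and z is
# impression management, which relates positively to x (r = .60)
# but negatively to y (r = -.20), masking part of the x-y relationship.
print(round(partial_corr(0.20, 0.60, -0.20), 2))  # 0.41, up from the raw 0.20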

The results of Study 2 are particularly concerning and practically relevant as they suggest that employees in certain cases are likely to underreport the number of safety outcomes that they experience even when their survey responses are anonymous. However, these findings were not reflected in results from Study 1 and Study 3. Thus, it appears that impression management serves as a method bias among self-reports of safety outcomes only in particular situations. Further research[au][av][aw] is needed to explicate the conditions under which employees are more/less likely to provide honest responses to self-report measures of safety outcomes.

———————————————————————————————————————

BONUS MATERIAL FOR YOUR REFERENCE:

For reference only, not for reading during the table read

Respondents and Measures

  • Study 1

Respondents:

  • graduate students (229, 37%)
  • undergraduate students (183, 30%)
  • research scientists and associates (123, 20%)
  • post-doctoral researchers (28, 5%)
  • laboratory managers (25, 4%)
  • principal investigators (23, 4%)

Gender: 329 (53%) female; 287 (47%) male

Race/ethnicity: 377 (64%) White; 16 (3%) Black; 126 (21%) Asian; 72 (12%) Hispanic

Age: M = 31, SD = 13.24

Respondents worked in various types of laboratories, including:

  • biological (219, 29%)
  • animal biological (212, 28%)
  • human subjects/computer (126, 17%)
  • chemical (124, 17%)
  • mechanical/electrical (65, 9%)

Measures:

  • Safety Climate

Nine items from Beus et al.’s (2019) 30-item safety climate measure were used in the current study. The nine-item measure included one item from each of five safety climate dimensions (safety communication, co-worker safety practices, safety training, safety involvement, safety rewards) and two items from each of the management commitment and safety equipment and housekeeping dimensions. The nine items were identified based on factor loadings from Beus et al. (2019). Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Safety knowledge, compliance, and participation

Respondents completed slightly modified versions of Griffin and Neal’s (2000) four-item measures of safety knowledge (e.g., “I know how to perform my job in the lab in a safe manner.”), compliance (e.g., “I carry out my work in the lab in a safe manner.”), and participation (e.g., “I promote safety within the laboratory.”). Items were completed using a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Perceived job risk and safety outcomes

Respondents completed a three-item measure of perceived job risk (e.g., “I encounter personally hazardous situations while in the laboratory;” 1=almost always untrue, 5=almost always true; Jermier et al., 1989). Respondents also provided safety incident data regarding the number of injuries, incidents, and near misses that they experienced in the last 12 months.

  • Impression Management

Four items were selected from Paulhus’s (1991) 20-item Balanced Inventory of Desirable Responding. These items were selected based on a review of Paulhus’s (1991) full measure and an assessment of the items that were most relevant to and most representative of the full measure (Table 1). Items were completed using a five-point accuracy scale (1=very inaccurate, 5=very accurate). Ideally this survey would have included Paulhus’s (1991) full 20-item measure. However, as is often the case in survey research, we had to balance construct validity with survey length and concerns about respondent fatigue, and for these reasons only a subset of Paulhus’s (1991) measure was included.

  • Study 2

Respondents:

  • research scientists or post-doctoral researchers (43, 39%)
  • principal investigators (12, 11%)
  • laboratory managers and coordinators (12, 11%)
  • graduate students (3, 3%)
  • faculty teaching in a laboratory (3, 3%)
  • administrators (1, 1%)

Respondents primarily worked in:

  • chemical (55, 45%)
  • mechanical/electrical (39, 32%)
  • uncategorized laboratories (29, 24%)

Measures:

  • Safety Constructs

Respondents completed the same six self-report measures of safety constructs that were used in Study 1: safety climate, safety knowledge, safety compliance, safety participation, perceived job risk, and injuries, incidents, and near misses in the previous 12 months.

  • Impression Management

Respondents completed a five-item measure of impression management from the Bidimensional Impression Management Index (Table 1; Blasberg et al., 2014). Five items from the Communal Management subscale were selected based on an assessment of their quality and the degree to which they represent the 10-item scale. A subset of Blasberg et al.’s (2014) full measure was used because of concerns from management about survey length. Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Personality

Conscientiousness, agreeableness, and emotional stability were assessed using six items from Gosling et al.’s (2003) 10-item personality measure. Four items from the 10-item measure assessing openness to experience and extraversion were not included in this study. Respondents were asked to indicate the degree to which adjectives were representative of them (i.e., conscientiousness – “dependable, self-disciplined;” agreeableness – “sympathetic, warm;” emotional stability – “calm, emotionally stable”; 1=strongly disagree, 7=strongly agree), and responses were combined to represent the Alpha personality factor. One conscientiousness item was dropped because it had a negative item-total correlation (“disorganized, careless” [reverse coded]). This was not surprising, as it was the only reverse-scored personality item administered.

  • Study 3

Respondents:

The typical respondent was male (101, 94%) and had no supervisory responsibility (72, 67%); however, some women (6, 6%), supervisors (17, 16%), and managers/senior managers (16, 15%) also completed the survey. The sample was diverse in national origin, with most respondents from India (44, 42%) and Pakistan (25, 24%).

Measures:

  • Safety Constructs

Respondents completed five of the same self-report measures of safety constructs used in Study 1 and Study 2, including safety climate (Beus et al., 2019), safety knowledge (Griffin and Neal, 2000), safety compliance (Griffin and Neal, 2000), safety participation (Griffin and Neal, 2000), and injuries, incidents, and near misses in the previous 6 months. Respondents completed a similar measure of perceived job risk (Jermier et al., 1989) that included three additional items assessing the degree to which physical, administrative, and personal controls…

  • Unlikely Virtues

Five items were selected from Weekley’s (2006) 10-item unlikely virtues measure (see also Levashina et al., 2014; Table 1) and were responded to on a 5-point agreement scale (1=strongly disagree; 5=strongly agree). Akin to the previous studies, an abbreviated version of the measure was used because of constraints with survey length and the need to balance research and organizational objectives.

[a]In my mind, this is a negative way to start a safety research project. The ultimate goal of the organization is to complete its mission and injuries and fatalities are not part of the mission. So this puts the safety researcher immediately at odds with the organization.

[b]I wonder if this happens beyond surveys—do employees more generally portray a false sense of safety to co-workers, visitors, employers, trainees, etc? Is that made worse by surveying, or do surveys pick up on bias that exists more generally in the work culture?

[c]Employees always portray things in a better light on surveys because who really knows if it's confidential?

[d]Not just with regard to safety; most employees, I suspect, want to portray their businesses in a positive light. Good marketing…

[e]I think that this depends on the quality of the survey. If someone is pencil whipping a questionnaire, they are probably giving answers that will draw the least attention. However, if the questions are framed in an interesting way, I believe it is possible to have a survey be both a data collection tool and a discussion starter. Surveys are easy to generate, but hard to do well.

[f]In my experience, these are pretty high response rates for the lab surveys (around 20%).

[g]A concern that was raised by a reviewer on this paper was that it leads to a conclusion of blaming the workers. We certainly didn't set out to do that, but I can understand that perspective. I'm curious if others had that reaction.

[h]I had the same reaction and I can see how it could lead to a rosier estimate of safety conditions.

[i]There is an interesting note below where you mention the possible outcomes of surveys that "go poorly" if you will. If the result is that the workers are expected to spend more of their time and energy "fixing" the problem, it is probably no surprise that they will just say that there is no problem.

[j]I am always thinking about this type of thing—how results are framed and who the finger is being pointed at. I can see how this work can be interpreted that way, but I also see it from an even bigger picture—if people are feeling that they have to manage impressions (for financial safety, interpersonal safety, etc) then to me it stinks of a bigger cultural, systemic problem. Not really an individual one.

[k]Well – the "consequences" of the survey are really in the hands of the company or institution. A researcher can go in with the best of intentions, but a company can (and often does) respond in a way that discourages others from being forthright.

[l]Oh for sure! I didn't mean to shoulder the bigger problem on researchers or the way that research is conducted—rather, that there are other external pressures that are making individuals feel like managing people's impressions of them is somehow more vital than reporting safety issues, mistakes, needs, etc. Whether that's at the company, institution, or greater cultural level (probably everywhere), I don't think it's at the individual level.

[m]My first thought on bias in safety surveys had to do more with the survey model rather than method bias. Almost all safety surveys I have taken are based on the same template, and questions generally approach safety from the same angle. I haven't seen a survey that asks the same question several ways in the course of the survey, or seen any control questions to attempt to determine the validity of answers. Perhaps some of the bias comes from the general survey format itself…

[n]I agree. In reviewing multiple types of surveys trying to target safety, there are many confounding variables. Trying to find a really good survey is tough – and I'm not entirely sure that it is possible to create something that can be applied by all. It is one of the reasons I was so intrigued by the BMS approach.

[o]Though—a lot of that work (asking questions multiple ways, asking control questions, determining validity and reliability, etc) is done in the original work that initially develops the survey metric. Just because it's not in a survey that one is taking or administering, doesn't necessarily mean that work isn't there

[p]Agreed – there are a lot of possible method biases in safety surveys. Maybe impression management isn't the most impactful. There just hasn't been much research in this area as it relates to safety measures, but certainly there is a lot out there on method biases more broadly. Stephanie and I had a follow-up study (conference paper) looking at blatant extreme responding (using only the extreme endpoints on safety survey items). Ultimately, that too appears to be an issue.

[q]In looking back over the summary, I was drawn to the use of the word "dishonesty." That implies intent. I'm wondering whether it is equally likely that people are lousy at estimating risk and generally overestimate their own capabilities (Dunning–Kruger, anyone?). So it is not so much about dishonesty but more about incompetence.

[r]They are more likely scared of retribution.

[s]This is an interesting point and I do think there is a part of the underestimation that has to do with an unintentional miscalibration. But, I think the work in this paper does go to show that some of the underestimation is related to people's proclivity to attempt to control how people perceive them and their performance.

[t]Even so, that proclivity is not necessarily outright dishonesty.

[u]I agree. I doubt that the respondents set out with an intent to be fraudulent or dishonest. Perhaps a milder or softer term would be more accurate?

[v]I wonder how strong this effect is for, say, graduate students who are in research labs under a PI who doesn't value safety

[w]I think it's huge. I know I see a difference when speaking with people in private versus on our surveys.

[x]Within my department, I know I became very cynical about surveys that were administered by the department or faculty members. Nothing ever seemed to change, so it didn't really matter what you said on them.

[y]I also think it is very significant. We are currently dealing with an issue where the students would not report safety violations to our Safety Concerns and Near Misses database because they were afraid of faculty reprisal. The lab is not especially safe, but if no one reports it, the conclusion might be drawn that no problems exist.

[z]And to bring it back to another point that was made earlier: when you're not sure if reporting will even trigger any helpful benefits, is the perceived risk of retribution worth some unknown maybe-benefit?

[aa]I heard a lot of the same concerns when we tried doing a "Near Miss" project. Even when anonymity was included, I had several people tell me that the details of the Near Miss would give away who they were, so they didn't want to share it.

[ab]Interesting point. It would seem here that folks fear if they say something is amiss with safety in the workplace, it will be treated as something wrong with themselves that must be fixed.

[ac]Yeah I feel like this kind of plays in to our discussion from last week, when we were talking about people feeling like they're personally in trouble if there is an incident

[ad]A related finding has been cited in other writings on surveys – if you give a survey, and nothing changes after the survey, then people catch on that the survey is essentially meaningless and they either don't take surveys anymore or just give positive answers because it isn't worth explaining negative answers.

[ae]There are risks associated with research labs, but I don't know if I would call them "clear". My sense is that "notable incidents" is a catchphrase people are using about academic lab safety to avoid quantitating the risks any more specifically.

[af]This is interesting to think about. On the one hand, if one works in a higher-hazard environment, maybe they just NOTICE hazardous situations more and think of them as more important. On the other hand, there is a lot of discussion around the normalization of hazards in an environment, which would seem to suggest that they would not report on the hazards because the hazards feel normal.

[ag]Maybe they receive more training as well, which helps them identify hazards more easily. Chemical engineers in the oil and gas industry certainly get more training, in my experience.

[ah]Oil and gas workers were also far more likely to participate in the study than the academic groups. I think private industry has internalized safety differently (not necessarily better or worse) than academia. And high-hazard industries like oil and gas have a good feel for the cost of safety-related incidents. That definitely gets passed on to the workforce.

[ai]How does normalization take culture into account? Industries have a much longer history of self-reporting, and of reporting accidents in general, than academic institutions do.

[aj]Some industries have histories of self-reporting during certain periods. For example, oil and gas did a lot of soul-searching after the Deepwater explosion (which occurred the day of a celebration of 3 years with no injury reports), but this trend can fade with time. Alcoa in the 1990s and 2000s is a good example: I've looked into Paul H. O'Neill's history with Alcoa, and he was a safety champion whose work faded soon after he left.

[ak]I wonder if this can be used as a way to normalize the surveys somehow

[al]Hmm, yeah I think you could, but you would also have to take a measure of impression management so that you could remove the variance caused by that from your model.

Erg, but then long surveys… the eternal dilemma.

[am]I bet there are overlapping biases too that have opposite effects; maybe all you could do is determine how much unreliability your survey has.

[an]In the BMS paper we covered last semester, it was noted that after they started to do the managerial lab visits, the committee actually received MORE information about hazardous situations. They attributed this to the fact that the committee was being very serious about doing something about each issue that was discovered. Once people realized that their complaints would actually be heard & addressed, they were more willing to report.

[ao]and the visits allowed for personal interactions which can be kept confidential as opposed to a paper trail of a complaint

[ap]I imagine that it was also just vindicating to have another human listen to you about your concerns like you are also a human. I do find there is something inherently dehumanizing about surveys (and I say this as someone who relies on them for multiple things!). When it comes to safety in my own workplace, I would think having a human make time for me to discuss my concerns would draw out very different answers.

[aq]Prudent point

[ar]The Hawthorne Effect?

[as]I thought that had to do with simply being "studied" and how it impacts behavior. With the BMS study, they found that people were reporting more BECAUSE their problems were actually getting solved. So now it was actually "worth it" to report issues.

[at]It would be interesting to ask the same question of upper management in terms of whether their safety attitudes are "true" or not. I don't know of any organizations that don't talk the safety talk. Even Amazon includes a worker safety portion to its advertising campaign despite its pretty poor record in that regard.

[au]I wish they would have expanded on this more; I'm really curious to see what methods for doing this are out there, or what impact they would have, besides providing more support that self-reporting surveys shouldn't be used.

[av]That is an excellent point and again something that the reviewers pushed for. We added some text to the discussion about alternative approaches to measure these constructs. Ultimately, what can we do if we buy into the premise that self-report surveys of safety are biased? Certainly one option is to use another referent (e.g., managers) instead of the workers themselves. But that also introduces its own set of bias. Additionally, there are some constructs that would be odd to measure based on anything other than self-report (e.g., safety climate). So I think it's still somewhat of an open question, but a very good one. I'm sure Stephanie will have thoughts on this too for our discussion next week. 🙂 But to me that is the crux of the issue: what do we do with self-reports that tend to be biased?

[aw]Love this, I will have to go read the full paper. Especially your point about safety climate, it will be interesting to see what solutions the field comes up with because everyone in academia uses surveys for this. Maybe it will end up being the same as incident reports, where they aren't a reliable indicator for the culture.