
Safety Culture Transformation – The impact of training on explicit and implicit safety attitudes

On October 27, 2021, the CHAS Journal Club heard from the lead author of the paper “Safety culture transformation—The impact of training on explicit and implicit safety attitudes”. The complete open access paper can be found online at this link. The presentation file used by Nicki Marquardt, the presenting author, includes the graphics and statistical data from the paper.

Comments from the Table Read

On October 20, the journal club did a silent table read of an abridged portion of the article. That abridged version of the article, along with the readers' comments, appears below.

1. INTRODUCTION

Safety attitudes of workers and managers have a large impact on safety behavior and performance in many industries (Clarke, 2006, 2010; Ford & Tetrick, 2011; Ricci et al., 2018). They are an integral part of an organizational safety culture[a] and can therefore influence occupational health and safety, organizational reliability, and product safety (Burns et al., 2006; Guldenmund, 2000; Marquardt et al., 2012; Xu et al., 2014).

There are different forms of interventions for safety culture and safety attitude change; trainings are one of them. Safety trainings seek to emphasize the importance of safety behavior and promote appropriate, safety-oriented attitudes among employees[b][c][d][e][f][g][h][i] (Ricci et al., 2016, 2018).

However, research in the field of social cognition has shown that attitudes can be grouped into two different forms: on the one hand, there are conscious, reflective, so-called explicit attitudes, and on the other hand, there are mainly unconscious implicit attitudes (Greenwald & Banaji, 1995). Although there is an ongoing debate about whether implicit attitudes are unconscious or only partly unconscious (Berger, 2020; Gawronski et al., 2006), most researchers affirm the existence of these two structurally distinctive attitudes (Greenwald & Nosek, 2009). Traditionally, researchers have studied the explicit attitudes of employees by using questionnaires [j](e.g., Cox & Cox, 1991; Rundmo, 2000). However, increasingly more researchers now focus on implicit attitudes, which can be assessed with reaction-time measures like the Implicit Association Test[k][l] (IAT; Greenwald et al., 1998; Ledesma et al., 2015; Marquardt, 2010; Rydell et al., 2006). These implicit attitudes could provide better insights into what influences safety behavior because they are considered to be tightly linked with key safety indicators. Unlike explicit attitudes, they are considered unalterable by socially desirable responding (Burns et al., 2006; Ledesma et al., 2018; Marquardt et al., 2012; Xu et al., 2014). Nevertheless, no empirical research could yet be found on whether implicit and explicit safety attitudes are affected by training. Therefore, the aim of this paper is to investigate the effects that training may have on implicit and explicit safety attitudes. The results could be used to draw implications for the improvement of safety training and safety culture development.
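For readers unfamiliar with how a reaction-time measure like the IAT is turned into an attitude score, the following is a minimal sketch of a simplified IAT-style D-score: the difference in mean response latencies between the "incompatible" and "compatible" pairing blocks, scaled by the pooled standard deviation. This is a generic illustration with made-up latencies and an assumed trial-exclusion cutoff, not the scoring procedure used in the studies discussed here.

```python
# Minimal sketch of a simplified IAT-style D-score; illustrative only,
# not the scoring pipeline used in the paper under discussion.
from statistics import mean, pstdev

def iat_d_score(compatible_ms, incompatible_ms, max_latency_ms=10_000):
    """Positive values indicate faster responding in the 'compatible' block,
    i.e., a stronger implicit association in that direction."""
    # Drop implausibly slow trials (an assumed preprocessing choice).
    comp = [t for t in compatible_ms if t < max_latency_ms]
    incomp = [t for t in incompatible_ms if t < max_latency_ms]
    pooled_sd = pstdev(comp + incomp)  # SD across both blocks combined
    return (mean(incomp) - mean(comp)) / pooled_sd

# Hypothetical latencies (ms): slower when "safety" is paired with negative terms
print(round(iat_d_score([650, 700, 720, 680], [900, 950, 870, 1010]), 2))
```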

1.1 Explicit and implicit attitudes in safety contexts

Explicit attitudes are described as reflective, which means a person has conscious control over them[m] (Strack & Deutsch, 2004). In their associative–propositional evaluation (APE) model, Gawronski and Bodenhausen (2006) assume that explicit attitudes are based on propositional processes, which consist of evaluations derived from logical conclusions. In addition, explicit attitudes are often influenced by social desirability[n][o][p][q][r] if the topic is rather sensitive, such as moral issues (Maass et al., 2012; Marquardt, 2010; Van de Mortel, 2008). This has also been observed in safety research where, in a study on helmet use, the explicit measure was associated with a Social Desirability Scale (Ledesma et al., 2018). Furthermore, explicit attitudes are said to change faster and more completely than implicit ones (Dovidio et al., 2001; Gawronski et al., 2017).

On the other hand, implicit attitudes are considered automatic, impulsive, and largely unconscious (Rydell et al., 2006). According to Greenwald and Banaji (1995, p. 5), they can be defined as “introspectively unidentified (or inaccurately identified) traces of past experience” that mediate overt responses. Hence, they use the term “implicit” as a broad label for a wide range of mental states and processes, such as unaware, unconscious, intuitive, and automatic, which are difficult for a subject to identify introspectively. Gawronski and Bodenhausen (2006) describe implicit attitudes as affective reactions that arise when stimuli activate automatic networks of associations. However, although Gawronski and Bodenhausen (2006) do not deny “that certain affective reactions are below the threshold of experiential awareness” (p. 696), they are critical towards the “potential unconsciousness of implicit attitudes” (p. 696). Therefore, they use the term “implicit” predominantly for the automaticity of affective reactions. Nevertheless, research has shown that people are not fully aware of the influence of implicit attitudes on their thinking and behavior, even though these attitudes are not always completely unconscious (Berger, 2020; Chen & Bargh, 1997; De Houwer et al., 2007; Gawronski et al., 2006). Many authors hold that implicit attitudes remain more or less stable over time and are hard to change (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In line with this, past studies in which attempts were made to change implicit attitudes often failed to achieve significant improvements (e.g., Marquardt, 2016; Vingilis et al., 2015).

1.3 Training and safety attitude change[s][t]

As mentioned in the introduction, the main question of this paper is whether training can change implicit and explicit safety attitudes. Safety training can improve a person's ability to correctly identify, assess, and respond to possible hazards in the work environment, which in turn can lead to a better safety culture (Burke et al., 2006; Duffy, 2003; Wu et al., 2007). Besides individual safety training, increasingly more industries, such as aviation, medicine, and the offshore oil and gas industry, implement group trainings labeled Crew Resource Management (CRM) training to address shared knowledge and task coordination in dynamic and dangerous work settings (Salas et al., 2006).

Many different factors determine the effectiveness of safety trainings (Burke et al., 2006; Ricci et al., 2016), such as the training method (e.g., classroom lectures) and training duration (e.g., 8 h).

As can be seen in Figure 1, associative evaluations[u][v][w][x] (process) can be activated by different safety intervention stimuli such as training (input). These associative evaluations are the foundation for implicit safety attitudes (output) and for propositional reasoning (process), which in turn forms the explicit safety attitudes (output). In addition, associative evaluations and propositional reasoning processes affect each other in many complex conscious and unconscious ways (Gawronski & Bodenhausen, 2006). However, change rates might differ: while the propositional processes adapt very quickly to the input (e.g., safety training), the associative evaluations might need longer periods of time to restructure the associative network (Karpen et al., 2012). Therefore, divergences between the implicit and explicit measures, resulting in inconsistent attitudes (output), can occur (McKenzie & Carrie, 2018).

1.4 Hypotheses and overview of the present studies

Based on the theories and findings introduced above, two main hypotheses are presented. Since previous research indicates that explicit attitudes can be changed relatively quickly (Dovidio et al., 2001; Karpen et al., 2012), the first hypothesis states that:

  • H1: Explicit safety attitudes can be changed by training.

Even though implicit attitudes are said to be more stable and harder to change (Dovidio et al., 2001; Gawronski et al., 2017; Wilson et al., 2000), changes in implicit attitudes through training can be expected too, due to changes in the associative evaluation processes (Lai et al., 2013) which affect the implicit attitudes (see EISAC model in Figure 1). Empirical research on the subject of implicit attitudinal change through training is scarce (Marquardt, 2016); however, it has been shown that an influence on implicit attitudes is possible[y][z][aa] (Charlesworth & Banaji, 2019; Jackson et al., 2014; Lai et al., 2016; Rudman et al., 2001). Therefore, the second hypothesis states that:

  • H2: Implicit safety attitudes can be changed by training.

However, currently there is a lack of empirical studies on implicit and explicit attitude change using longitudinal designs in different contexts (Lai et al., 2013). Also, in the field of safety training research, studies are needed to estimate training effectiveness over time (Burke et al., 2006). Therefore, to address the issues of time and context in safety attitude change by training, three studies with different training durations and measurement time frames in different safety-relevant contexts were conducted (see Table 1). In the first study, short-term attitude change was measured 3 days before and after a 2-h safety training in a chemical laboratory. In the second study, medium-term attitude change was assessed 1 month before and after a 2-day CRM training for production workers. In the third study, long-term attitude changes were measured within an advanced experimental design (12 months between pre- and post-measure) after a 12-week safety ethics training in an occupational psychology student sample.[ab] To make this paper more succinct and to ease comparison of the methods used and the results revealed, all three studies will be presented in parallel in the following method, results, and discussion sections. A summary of all the studies can be seen in Table 1.

2. METHODS

2.1.1 Study 1

Fifteen participants (eight female and seven male; mean age = 22.93 years; SD = 2.74) were recruited for the first study. The participants were from different countries, with a focus on East and South Asia (e.g., India, Bangladesh, and China). They were enrolled in one class of an international environmental sciences study program in Germany with a major focus on practical experimental work in chemical and biological laboratories. Participation in regular safety training was mandatory for all participants to be admitted to working in these laboratories. To ensure safe working in the laboratories, the environmental sciences study program traditionally has small classes of 15–20 students. Hence, the sample represents the vast majority of one entire class of this study program. However, due to the lockdown caused by the COVID-19 pandemic, there was no opportunity to increase the sample size in a subsequent study. Consequently, the sample size was very small.

2.1.2 Study 2

A sample of 81 German assembly-line workers of an automotive manufacturer participated in Study 2. The workers were grouped into self-directed teams responsible for gearbox manufacturing. Hence, human error during the production process could threaten the health and safety of the affected workers as well as the product safety of the gearbox, which in turn affects the health and safety of prospective consumers. The gearbox production unit encompassed roughly 85 workers; thus, the sample represents the vast majority of the production unit's workforce. Because the evaluation had to remain anonymous, as requested by the firm's works council, personal data such as age, sex, and qualification could not be collected.

2.1.3 Study 3

In Study 3, complete data sets of 134 German participants (mean age = 24.14; SD = 5.49; 92 female, 42 male) could be collected. All participants were enrolled in Occupational Psychology and International Business study programs with a special focus on managerial decision making under uncertainty and risks. The sample represents the vast majority of two classes of this study program since one class typically includes roughly 60–70 students. Furthermore, 43 of these students also had a few years of work experience (mean = 4.31; SD = 4.07).

4. DISCUSSION

4.1 Discussion of results

The overall research objective of this paper was to find out whether explicit and implicit safety attitudes can be changed by training. Therefore, two hypotheses were created: H1 stated that explicit safety attitudes can be changed by training, and H2 stated that implicit safety attitudes can be changed by training. Based on the results of Studies 1–3, it can be concluded that explicit safety attitudes can be changed by safety training. With respect to effect sizes, significant small effects (Study 2), medium effects (Study 1), and even large effects (Study 3) were observed. Consequently, the first hypothesis (H1) was supported by all three studies. Nevertheless, compared to the meta-analytic results by Ricci et al. (2016), who obtained very large effect sizes, the effects of training on the explicit safety attitudes were lower in the present studies. In contrast, none of the three studies revealed significant changes in the implicit safety attitudes after the training. Even though there were positive changes in the post-measures, the effect sizes were marginal and nonsignificant. Accordingly, the second hypothesis (H2) was not confirmed in any of the three studies. In addition, it seems that the duration of safety training (e.g., 2 h, 2 days, or even 12 weeks) has no effect on the implicit attitudes[ac][ad][ae][af][ag][ah]. However, the effect sizes of the short-term and medium-term training in Studies 1 and 2 were larger than those obtained in the study by Lai et al. (2016), whose effect sizes were close to zero at the follow-up measure 2–4 days after the intervention.
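To make the effect-size language above a little more concrete, here is a minimal sketch of one common pre/post effect size for paired measurements, Cohen's d_z (the mean of the pre-to-post differences divided by the standard deviation of those differences). The scores below are invented for illustration and are not data from the three studies.

```python
# Minimal sketch of a paired pre/post effect size (Cohen's d_z).
# Hypothetical explicit-attitude scores; not data from Studies 1-3.
from statistics import mean, stdev

def cohens_dz(pre, post):
    diffs = [after - before for before, after in zip(pre, post)]
    return mean(diffs) / stdev(diffs)

pre_scores = [3.1, 3.6, 2.8, 3.9, 3.2]
post_scores = [3.6, 3.7, 3.5, 4.0, 3.9]
print(round(cohens_dz(pre_scores, post_scores), 2))  # larger values = larger pre/post shift
```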

The results obtained in these studies differ with regard to effect size. This can partly be explained by the characteristics of the samples. For instance, in Studies 1 and 3, the participants of the training, as well as the control groups (Study 3 only), were students from occupational psychology and environmental sciences degree programs. Therefore, all students, even those in the control groups, were familiar with concepts of health and safety, sustainability, and prosocial behavior. Consequently, the degree programs could have had an impact on the implicit sensitization of the students, which might have caused high values in implicit safety attitudes even in the control groups. The relatively high IAT effects in all four groups before and after the training are therefore an indication of a ceiling effect in the third study (see Table 3). This is in line with the few empirical results gained by previous research in the field of implicit and explicit attitude change by training (Jackson et al., 2014; Marquardt, 2016). Specifically, Jackson et al. (2014) also found a ceiling effect in female participants' favorable implicit attitudes towards women in STEM, which showed no significant change after a diversity training.[ai][aj][ak]

Finally, it seems that the implicit attitudes were mainly unaffected by the training. The IAT data showed no significant effect in any group comparison or pre- and post-measure comparison. To conclude, based on the current results it can be assumed that when there is a training effect, it manifests itself in the explicit rather than the implicit safety attitudes. One explanation might be that implicit safety attitudes are more stable, unconscious dispositions that cannot be changed as easily as explicit ones (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). With respect to the EISAC model (see Section 1.3), unconscious associative evaluations might be activated by safety training, but not sustainably changed. A true implicit safety attitude change would refer to a shift in associative evaluations that persists across multiple safety contexts and over longer periods of time (Lai et al., 2013).[al][am]

5. PRACTICAL IMPLICATIONS AND CONCLUSION

What do the current empirical results mean for safety culture and training development? Based on the assumption that implicit attitudes are harder to change (Gawronski et al., 2017) and thus may require active engagement via the central route of persuasion (Petty & Cacioppo, 1986), this could explain why there was no implicit attitude change in Study 1. This assumption is supported by the meta-analysis of Burke et al. (2006), who found large effect sizes for highly engaging training methods (e.g., behavior modeling, feedback, safety dialog) in general, and by the meta-analysis of Ricci et al. (2016), who obtained large effect sizes on attitudes in particular. However, the more engaging training methods used in Studies 2 (CRM training) and 3 (safety ethics training), such as interactive tutorials, case analyses, cooperative learning phases, role plays, and debriefs (structured group discussions), which have shown strong meta-analytic effects (Ricci et al., 2016), had a significant impact on the explicit but not the implicit attitude change[an][ao]. In addition, it seems that more intense training with a longer duration (e.g., 12 weeks in Study 3) again has no effect on implicit attitude change. Therefore, other approaches [ap][aq]may be more promising.

To sum up, even though the outlined conclusions are tentative, it could be very useful in the future to design realistic and affect-inducing training simulations via emergency simulators or virtual reality approaches[ar][as][at][au][av][aw][ax][ay][az][ba] (Sacks et al., 2013; Seymour et al., 2002) for all highly hazardous industries. If these simulations are accompanied by highly engaging behavioral (e.g., behavioral modeling; Burke et al., 2006, 2011), social (e.g., debriefs/structured group discussions; Ricci et al., 2016), and cognitive (e.g., implementation intentions; Lai et al., 2016) training methods, then they might facilitate a positive explicit and even implicit safety attitude change and, finally, a sustainable safety culture transformation.

[a]A theoretical question that occurs to me when reading this is:

Is “an organizational safety culture” the sum of the safety attitudes of workers and management or is there a synergy among these attitudes that creates a non-linear feedback effect?

[b]I would not have thought of this as the purpose of discrete trainings. I would have thought that the purpose of trainings is to teach the skills necessary to do a job safely.

[c]I agree. Safety Trainings are about acquiring skills to operate safely in a specific process…the collective (Total Environment) affects safety behavior.

[d]I think this could go back to the point below about fostering the environment – safety trainings communicating that safety is a part of the culture here.

[e]Safety professionals (myself included) have historically misused the term “training” to refer to what are really presentations.

[f]Agreed. I always say something that happens in a lecture hall with my butt in a chair is probably not a “training.” While I see the point made above, many places have “trainings” simply because they are legally required to have them. It says little to nothing about the safety culture of the whole environment.

[g]Maybe they go more into the actual training types used in the manuscript, but we typically start in a lecture hall and then move into the labs for our trainings, so I would still classify what we have as a training, but I can see what you mean about a training being more like a presentation in some cases.

[h]This is something I struggle with…but I’m trying to refer to the lecture style component as a safety presentation and the actual working with spill kits as a safety training.  It has been well-received!

[i]This is a core question and has been an ongoing struggle ever since I started EHS training in an education-oriented environment.

As a result, over time I have moved my educational objectives from content based (e.g. what is an MSDS?) to awareness based (what steps should you take when you have a safety question). However, the EHS community is sloppy when talking about training and education, which are distinct activities.

[j]Looks like these would be used for more factual items such as evaluating what the researcher did, not how/why they did it

[k]I’m skeptical that IATs are predictive of real-world behavior in all, or even most, circumstances. I’d be more interested in an extension of this work that looks at whether training (or “training”) changes revealed preferences based on field observations.

[l]Yes – much more difficult to do but also much more relevant. I would be more interested in seeing if decision-making behavior changes under certain circumstances. This would tell you if training was effective or not.

[m]This is a little confusing to me but sounds like language that makes sense in another context.

[n]What are the safety-related social desirabilities of chemistry grad students?

[o]I would think these would be tied to not wanting to “get in trouble.”

[p]Also, likely linked to being wrong about something chemistry-related.

[q]What about the opposite? Not wear PPE to be cool?

[r]In my grad student days, I was primarily learning how to “fake it until I make it”. This often led to the imposter syndrome being socially desirable. This probably arose from the ongoing awareness of grading and other judgement systems that the academic environment relies on

[s]Were study participants aware or were the studies conducted blind? If I am an employee and I know my progress will be measured, I may behave differently than if I had not known.

[t]This points back to last week’s article.

[u]What are some other ways to activate our associative evaluations?

[v]I would think it would include things like witnessing your lab mates follow safety guidance, having your PI explicitly ask you about risk assessment on your experiments, having safety issues remedied quickly by your facility. Basically, the norms you would associate with your workplace.

[w]Right, I just wonder if there’d be another way besides the training (input) to produce the intended change in the associative evaluation process we go through to form an implicit attitude. We definitely have interactions on a daily basis which can influence that, but is there some other way to tell our subconscious mind something is important.

[x]In the days before social media, we used social marketing campaigns that were observably successful, but they relied on a core of career lab techs who supported a rotating cast of medical researchers. The lab techs were quite concerned about both their own safety and the quality of their science as a result of the 3 to 6 month rotation of the MD/PhD researchers.

The social marketing campaigns included 1) word of mouth, 2) supporting graphical materials and 3) ongoing EHS presence in labs to be the bad guys on behalf of the career lab techs

[y]This reminds me of leading vs lagging indicators for cultural change

[z]This also makes me think of the arguments around “get the hands to do the right things and the attitudes will follow” which is along the lines of what Geller describes.

[aa]That’s a great comparison. Emphasizes the importance of embedding it throughout the curriculum to be taught over long periods of time

[ab]A possible confounding variable here would have to do with how much that training was reinforced between the training and the survey period. 12 months out (or even 3 months out) a person may not even remember what was said or done in that specific training, so their attitudes are likely to be influenced by what has been happening in the mean time.

[ac]I don’t find this surprising. I would imagine that what was happening in the mean time (outside of the training) would have a larger impact on implicit attitudes.

[ad]I was really hoping to see a comparison using the same attitude time frame for the 3 different training durations. Like a short-term, medium, and long-term evaluation of the attitudes for all 3 training durations, but maybe this isn’t how things are done in these kinds of studies.

[ae]This seems to be the trouble with many of the behavioral sciences papers I read, where you can study what is available not something that lines up with your hypothesis

[af]I really would probably have been more interested in the long-term evaluation for the medium training duration personally to see their attitude over a longer period of time, for example.

[ag]I think this is incredibly hard to get right though. An individual training is rarely impactful enough for people to remember it. And lots of stuff happens in between when you take the training and when you are “measured” that could also impact your safety attitudes. If the training you just went through isn’t enforced by anyone anywhere, what value did it really have? Alternatively, if people already do things the right way, then the training may have just helped you learn how to do everything right – but was it the training or the environment that led to positive implicit safety attitudes? Very difficult to tease apart in reality.

[ah]Yeah, maybe have training follow-ups or an assessment of some sorts to determine if information was retained to kind of evaluate the impact the training had on other aspects as well as the attitudes.

[ai]What effect does this conclusion have on JEDI or DEI training?

[aj]I also found this point to be very interesting. I wonder if this paper discussed explicit attitudes. I’m not sure what explicit vs implicit attitudes would mean in a DEI context because they seem more interrelated (unconscious bias, etc.)

[ak]I am also curious how Implicit Attitude compares to Unconscious Bias.

[al]i.e. Integrated across the curriculum over time?

[am]One challenge I see here is the competing definitions of “safety”. There are chemical safety, personal security, community safety,  social safety all competing for part of the safety education pie. I think this is why many people’s eyes glaze over when safety training is brought up or presented

[an]The authors mention that social desirability is one reason explicit and implicit attitudes can diverge, but is it the only reason, or even the primary reason? I'm somewhat interested in the degree to which that played a role here (though I'm also still not entirely sure how much I care whether someone is a "true believer" when it comes to safety or just says/does all the right things because they know it's expected of them).

[ao]This is a good point.

[ap]I am curious to learn more about these approaches.

[aq]I believe the author discusses more thoroughly in the full paper

[ar]Would these trainings only be for emergencies or all trainings? I feel that a lot of times we are told what emergencies might pop up and how you would handle them but never see them in action. This reminds me of a thought I had about making a lab safety-related video game that you could “fail” on handling an emergency situation in lab but you wouldn’t have the direct consequences in the real world.

[as]Love that idea, it makes sense that you would remember it better if you got to walk through the actual process. I wonder what the effect of engagement would be on implicit and explicit attitudes.

[at]Absolutely – I think valuable learning moments come from doing the action and it honestly would be safer to learn by making mistakes in a virtual environment when it comes to our kind of safety. The idea reminds me of the  tennis video games I used to play when I was younger and they helped me learn how to keep score in tennis. Now screen time would be a concern, but something like this could be looked at in some capacity.

[au]This idea is central to trying to bring VR into training. Obviously, you can’t actually have someone spill chemical all over themselves, etc – but VR makes it so you virtually could. And there are papers suggesting that the brain “reads” things happening in the VR world as if they really happened. Although one has to be careful with this because that also opens up the possibility that you could actually traumatize someone in the VR world.

[av]I know I was traumatized just jumping into a VR game where you fell through hoops (10/10 don't recommend falling-based VR games), but maybe less of a VR game and more of like a cartoon character that they can customize so they see the impact exposure to different chemicals could have but they don't have that traumatic experience of being burned themselves, for example.

[aw]In limited time and/or limited funding situations, how can academia utilize these training methodologies? Any creative solutions?

[ax]I’m also really surprised that the conclusion is to focus on training for the worker. I would think that changing attitudes (explicit and implicit) would have more to do with the environment that one works in than it does on a specific training.

[ay]I agree on this. I think the environment one finds themselves plays a part in shaping one’s attitudes and behaviors.

[az]AGREED

[ba]100% with the emphasis on the environment rather than the training

Are employee surveys biased? CHAS Journal club, Oct 13, 2021

Impression management as a response bias in workplace safety constructs

In October 2021, the CHAS Journal Club reviewed the 2019 paper by Keiser & Payne examining the impact of “impression management” on the way workers in different sectors responded to safety climate surveys. The authors were able to attend and discuss their work with the group on October 13. Below is their presentation file as well as the comments from the table read the week before.

Our thanks to Drs. Keiser and Payne for their work and their willingness to talk with us about it!

10/06 Table Read for The Art & State of Safety Journal Club

Excerpts from “Are employee surveys biased? Impression management as a response bias in workplace safety constructs”

Full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753518315340?casa_token=oOShJnb3arMAAAAA:c4AcnB3fwnlDYlol3o2bcizGF_AlpgKLdEC0FPjkKg8h3CBg0YaAETq8mfCY0y-kn7YcLmOWFA

Meeting Plan

  • (5 minutes) Sarah to open meeting
  • (15 minutes) All participants read complete document
  • (10 minutes) All participants use “Comments” function to share thoughts
  • (10 minutes) All participants read others’ Comments & respond
  • (10 minutes) All participants return to their own Comments & respond
  • (5 minutes) Sarah announces next week’s plans & closes meeting

Introduction

The ultimate goal of workplace safety research is to reduce injuries and fatalities on the job.[a] Safety surveys that measure various safety-related constructs, including safety climate (Zohar, 1980), safety motivation and knowledge (Griffin and Neal, 2000), safety participation and compliance (Griffin and Neal, 2000), and outcome indices (e.g., injuries, incidents, and near misses), are the primary way that researchers gather relevant safety data. They are also used extensively in industry. It is quite common to administer self-report measures of both safety predictors and outcomes in the same survey, which introduces the possibility that method biases prevalent in self-report measures contaminate relationships among safety constructs (Podsakoff et al., 2012).

The impetus for the current investigation is the continued reliance by safety researchers and practitioners on self-report workplace safety surveys. Despite evidence that employees frequently underreport injuries (Probst, 2015; Probst and Estrada, 2010), researchers have not directly examined the possibility that employees portray the workplace as safer than it really is on safety surveys[b]. Correspondingly, the current investigation strives to answer the following question: Are employee safety surveys biased? In this study, we focus on one potential biasing variable, impression management, defined as conscious attempts at exaggerating positive attributes and ignoring negative attributes (Connelly and Chang, 2016; Paulhus, 1984). The purpose of this study is to estimate the prevalence of impression management as a method bias in safety surveys based on the extent to which impression management contaminates self-reports of various workplace safety constructs and relationships among them.[c][d][e]

Study 1

Method

This study was part of a larger assessment of safety climate at a public research university in the United States using a sample of research laboratory personnel. The recruitment e-mail was concurrently sent to people who completed laboratory safety training in the previous two years (1841) and principal investigators (1897). Seven hundred forty-six laboratory personnel responded to the survey… To incentivize participation, respondents were given the option to provide their name and email address after they completed the survey in a separate survey link, in order to be included in a raffle for one of five $100 gift cards.

Measures:

  • Safety climate
  • Safety knowledge, compliance, and participation
  • Perceived job risk and safety outcomes
  • Impression management

Study 2

A second study was conducted to

  1. Further examine impression management as a method bias in self-reports of safety while
  2. Accounting for personality trait variance in impression management scales.

A personality measure was administered to respondents and controlled to more accurately estimate the degree to which self-report measures of safety constructs are susceptible to impression management as a response bias.

Method

A similar survey was distributed to all laboratory personnel at a different university located in Qatar. A recruitment email was sent to all faculty, staff, and students at the university (532 people), which included a link to an online laboratory safety survey. No incentive was provided for participating and no personally identifying information was collected from participants. A total of 123 laboratory personnel responded.[f]

Measures:

  • Same constructs as Study 1, plus
  • Personality

Study 3

Two limitations inherent in Study 1 and Study 2 were addressed in a third study, specifically, score reliability and generalizability.

Method

A safety survey was distributed to personnel at an oil and gas company in Qatar, as part of a larger collaboration to examine the effectiveness of a safety communication workshop. All employees (∼370) were invited to participate in the survey and 107 responded (29% response rate). Respondents were asked to report their employee identification numbers at the start of the survey, which was used to identify those who participated in the workshop. A majority of employees provided their identifying information (96, 90%).

Measures:

  • Same constructs used in Study 1, plus
  • Risk propensity
  • Safety communication
  • Safety motivation
  • Unlikely virtues

Conclusion[g][h][i][j][k][l]

Safety researchers have provided few direct estimates of method bias [m][n][o][p]in self-report measures of safety constructs. This oversight is especially problematic considering they rely heavily on self-reports to measure safety predictors and criteria.

The results from all three studies, but especially the first two, suggest that self-reports of safety are susceptible to dishonesty[q][r][s][t][u] aimed at presenting an overly positive representation of safety.[v][w][x][y][z][aa] In Study 1, self-reports of safety knowledge, climate, and behavior appeared to be more susceptible to impression management compared to self-reports of perceived job risk and safety outcomes. Study 2 provided additional support for impression management as a method bias in self-reports of both safety predictors and outcomes. Further, relationships between impression management and safety constructs remained significant even when controlling for Alpha personality trait variance (conscientiousness, agreeableness, emotional stability). Findings from Study 3 provided less support for the biasing effect of impression management on self-report measures of safety constructs (average VRR = 11%). However, the unlikely virtues measure [this is a measure of the tendency to claim uncommon positive traits] did produce more reliable scores than those observed in Study 1 and Study 2, and it was significantly related to safety knowledge, motivation, and compliance. Controlling for the unlikely virtues measure led to the largest reductions in relationships with safety knowledge. A further exploratory comparison of identified vs. anonymous respondents showed that mean scores on the unlikely virtues measure were not significantly different for the identified subsample compared to the anonymous subsample; however, unlikely virtues had a larger impact on relationships among safety constructs for the anonymous subsample.
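For readers who want a concrete sense of what "controlling for" an impression-management or unlikely-virtues score means statistically, below is a minimal sketch of a first-order partial correlation between two self-reported safety constructs with the bias measure partialled out. The variable names and scores are hypothetical; this is a generic illustration, not the authors' actual analysis.

```python
# Minimal sketch: partial correlation r(x, y | z), where z is a response-bias
# score (e.g., impression management). Hypothetical data; not the paper's analysis.
import math

def pearson_r(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def partial_r(x, y, z):
    rxy, rxz, ryz = pearson_r(x, y), pearson_r(x, z), pearson_r(y, z)
    return (rxy - rxz * ryz) / math.sqrt((1 - rxz**2) * (1 - ryz**2))

# Hypothetical scale scores: safety compliance, safety climate, impression management
compliance = [4.2, 3.8, 4.5, 3.1, 4.0, 3.6]
climate = [4.0, 3.5, 4.6, 3.0, 3.9, 3.4]
impression = [3.9, 3.2, 4.4, 2.8, 3.7, 3.1]

print(round(pearson_r(compliance, climate), 2))              # zero-order correlation
print(round(partial_r(compliance, climate, impression), 2))  # after controlling for the bias score
```

If the partial correlation is noticeably smaller than the zero-order correlation, the bias measure accounts for part of the observed relationship between the two constructs, which is the kind of reduction the authors quantify.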

The argument for impression management as a biasing variable in self-reports of safety relied on the salient social consequences of responding and other costs of providing a less desirable response, including, for instance, negative reactions from management, remedial training, or overtime work[ab][ac]. Findings suggest that the influence of impression management on self-report measures of safety constructs depends on various factors[ad] (e.g., distinct safety constructs, the identifying approach, industry and/or safety salience) rather than supporting the ubiquitous claim that impression management serves as a pervasive method bias.

The results of Study 1 and Study 3 suggest that impression management was most influential as a method bias in self-report measures of safety climate, knowledge, and behavior, compared to perceived risk and safety outcomes. These results might reflect the more concrete nature of the latter constructs, which are based on actual experience with hazards and outcomes. Moreover, these findings are in line with Christian et al.'s (2009) conclusion that measurement biases are less of an issue for safety outcomes compared to safety behavior. These findings, in combination with theoretical rationale, suggest that the social consequences of responding are more strongly elicited by self-report measures of safety climate, knowledge, and behavior, compared to self-reports of perceived job risk and safety outcomes. Items in safety perception and behavior measures fittingly tend to be more personally (e.g., safety compliance – “I carry out my work in a safe manner.”) and socially relevant (e.g., safety climate – “My coworkers always follow safety procedures.”).

The results from Study 2, compared to findings from Study 1 and Study 3, suggest that assessments of job risk and outcomes are also susceptible to impression management. The Alpha personality factor generally accounted for a smaller portion of the variance in the relationships between impression management and perceived risk and safety outcomes. The largest effects of impression management on the relationships among safety constructs were for relationships with perceived risk and safety outcomes. These results align with research on injury underreporting (Probst et al., 2013; Probst and Estrada, 2010) and suggest that employees may have been reluctant to report safety outcomes even when they were administered on an anonymous survey used for research purposes.

We used three samples in part to determine if the effect of impression management generalizes. However, results from Study 3 were inconsistent with the observed effect of impression management in Studies 1 and 2. One possible explanation is that these findings are due to industry differences, specifically the salience of safety. There are clear risks associated with research laboratories, as exemplified by notable incidents; [ae]however, the risks of bodily harm and death in the oil and gas industry tend to be much more salient (National Academies of Sciences, Engineering, and Medicine, 2018). Given these differences, the employees from the oil and gas industry in this investigation might have been more motivated to provide candid and honest responses to self-report measures of safety.[af][ag][ah][ai][aj] This explanation, however, is in need of more rigorous assessment.

These results in combination apply more broadly to method bias [ak][al][am]in workplace safety research. The results of these studies highlight the need for safety researchers to acknowledge the potential influence of method bias and to assess the extent to which measurement conditions elicit particular biases.

It is also noteworthy that impression management suppressed relationships in some cases; thus, accounting for impression management might strengthen theoretically important relationships. These results also have meaningful implications for organizations because positively biased responding on safety surveys can contribute to the incorrect assumption that an organization is safer than it really is[an][ao][ap][aq][ar][as][at].

The results of Study 2 are particularly concerning and practically relevant as they suggest that employees in certain cases are likely to underreport the number of safety outcomes that they experience even when their survey responses are anonymous. However, these findings were not reflected in results from Study 1 and Study 3. Thus, it appears that impression management serves as a method bias among self-reports of safety outcomes only in particular situations. Further research[au][av][aw] is needed to explicate the conditions under which employees are more/less likely to provide honest responses to self-report measures of safety outcomes.

———————————————————————————————————————

BONUS MATERIAL FOR YOUR REFERENCE:

For reference only, not for reading during the table read

Respondents and Measures

  • Study 1

Respondents:

  • graduate students (229, 37%)
  • undergraduate students (183, 30%)
  • research scientists and associates (123, 20%)
  • post-doctoral researchers (28, 5%)
  • laboratory managers (25, 4%)
  • principal investigators (23, 4%)

  • 329 (53%) female; 287 (47%) male
  • 377 (64%) White; 16 (3%) Black; 126 (21%) Asian; 72 (12%) Hispanic
  • Age (M = 31, SD = 13.24)

Respondents worked in various types of laboratories, including:

  • biological (219, 29%)
  • animal biological (212, 28%)
  • human subjects/computer (126, 17%)
  • chemical (124, 17%)
  • mechanical/electrical (65, 9%)

Measures:

  • Safety Climate

Nine items from Beus et al.'s (2019) 30-item safety climate measure were used in the current study. The nine-item measure included one item from each of five safety climate dimensions (safety communication, co-worker safety practices, safety training, safety involvement, safety rewards) and two items each from the management commitment and the safety equipment and housekeeping dimensions. The nine items were identified based on factor loadings from Beus et al. (2019). Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Safety knowledge, compliance, and participation

Respondents completed slightly modified versions of Griffin and Neal’s (2000) four-item measures of safety knowledge (e.g., “I know how to perform my job in the lab in a safe manner.”), compliance (e.g., “I carry out my work in the lab in a safe manner.”), and participation (e.g., “I promote safety within the laboratory.”). Items were completed using a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Perceived job risk and safety outcomes

Respondents completed a three-item measure of perceived job risk (e.g., “I encounter personally hazardous situations while in the laboratory;” 1=almost always untrue, 5=almost always true; Jermier et al., 1989). Respondents also provided safety incident data regarding the number of injuries, incidents, and near misses that they experienced in the last 12 months.

  • Impression Management

Four items were selected from Paulhus’s (1991) 20-item Balanced Inventory of Desirable Responding. These items were selected based on a review of Paulhus’s (1991) full measure and an assessment of those items that were most relevant and best representative of the full measure (Table 1). Items were completed using a five-point accuracy scale (1=very inaccurate, 5=very accurate). Ideally this survey would have included Paulhus’s (1991) full 20-item measure. However, as is often the case in survey research, we had to balance construct validity with survey length and concerns about respondent fatigue and for these reasons only a subset of Paulhus’s (1991) measure was included.

  • Study 2

Respondents:

  • research scientists or post-doctoral researchers (43, 39%)
  • principal investigators (12, 11%)
  • laboratory managers and coordinators (12, 11%)
  • graduate students (3, 3%)
  • faculty teaching in a laboratory (3, 3%)
  • one administrator (1%)

Respondents primarily worked in:

  • chemical (55, 45%)
  • mechanical/electrical (39, 32%)
  • uncategorized laboratories (29, 24%)

Measures:

  • Safety Constructs

Respondents completed the same six self-report measures of safety constructs that were used in Study 1: safety climate, safety knowledge, safety compliance, safety participation, perceived job risk, and injuries, incidents, and near misses in the previous 12 months.

  • Impression Management

Respondents completed a five-item measure of impression management from the Bidimensional Impression Management Index (Table 1; Blasberg et al., 2014). Five items from the Communal Management subscale were selected based on an assessment of their quality and the degree to which they represent the 10-item scale. A subset of Blasberg et al.'s (2014) full measure was used because of concerns from management about survey length. Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Personality

Conscientiousness, agreeableness, and emotional stability were assessed using six items from Gosling et al.'s (2003) 10-item personality measure. Four items from the 10-item measure assessing openness to experience and extraversion were not included in this study. Respondents were asked to indicate the degree to which adjectives were representative of them (i.e., conscientiousness – “dependable, self-disciplined;” agreeableness – “sympathetic, warm;” emotional stability – “calm, emotionally stable;” 1=strongly disagree, 7=strongly agree), and responses were combined to represent the Alpha personality factor. One conscientiousness item was dropped because it had a negative item-total correlation (“disorganized, careless” [reverse coded]). This was not surprising, as it was the only reverse-scored personality item administered.
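As a small aside on the item screening described above, here is a minimal sketch of reverse-coding a negatively keyed item on a 7-point scale and checking its correlation with a positively keyed item from the same construct. The responses are invented for illustration; this is not the authors' data or procedure.

```python
# Minimal sketch: reverse-code a negatively keyed 7-point item and check its
# correlation with the rest of the scale. Invented responses; not the study's data.
from statistics import mean, stdev

def pearson_r(a, b):
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    return cov / ((len(a) - 1) * stdev(a) * stdev(b))

dependable = [6, 5, 7, 4, 6, 5]        # positively keyed item
disorganized = [2, 3, 1, 4, 3, 2]      # negatively keyed item ("disorganized, careless")
disorganized_rc = [8 - x for x in disorganized]  # reverse-code on a 1-7 scale

print(round(pearson_r(disorganized, dependable), 2))     # negative if left unrecoded
print(round(pearson_r(disorganized_rc, dependable), 2))  # positive after reverse-coding
```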

  • Study 3

Respondents:

The typical respondent was male (101, 94%) and had no supervisory responsibility (72, 67%); however, some women (6, 6%), supervisors (17, 16%), and managers/senior managers (16, 15%) also completed the survey. The sample was diverse in national origin, with most respondents from India (44, 42%) and Pakistan (25, 24%).

Measures:

  • Safety Constructs

Respondents completed five of the same self-report measures of safety constructs used in Study 1 and Study 2, including safety climate (Beus et al., 2019), safety knowledge (Griffin and Neal, 2000), safety compliance (Griffin and Neal, 2000), safety participation (Griffin and Neal, 2000), and injuries, incidents, and near misses in the previous 6 months. Respondents completed a similar measure of perceived job risk (Jermier et al., 1989) that included three additional items assessing the degree to which physical, administrative, and personal controls …

  • Unlikely Virtues

Five items were selected from Weekley's (2006) 10-item unlikely virtues measure (see also Levashina et al., 2014; Table 1) and were responded to on a 5-point agreement scale (1=strongly disagree; 5=strongly agree). Akin to the previous studies, an abbreviated version of the measure was used because of constraints with survey length and the need to balance research and organizational objectives.

[a]In my mind, this is a negative way to start a safety research project. The ultimate goal of the organization is to complete its mission and injuries and fatalities are not part of the mission. So this puts the safety researcher immediately at odds with the organization.

[b]I wonder if this happens beyond surveys—do employees more generally portray a false sense of safety to co-workers, visitors, employers, trainees, etc? Is that made worse by surveying, or do surveys pick up on bias that exists more generally in the work culture?

[c]Employees always portray things in a better light on surveys because who really knows if it's confidential

[d]Not just with regard to safety; most employees, I suspect, want to portray their businesses in a positive light. Good marketing…

[e]I think that this depends on the quality of the survey. If someone is pencil whipping a questionnaire, they are probably giving answers that will draw the least attention. However, if the questions are framed in an interesting way, I believe it is possible to have a survey be both a data collection tool and a discussion starter. Surveys are easy to generate, but hard to do well.

[f]In my experience, these are pretty high response rates for the lab surveys (around 20%).

[g]A concern that was raised by a reviewer on this paper was that it leads to a conclusion of blaming the workers. We certainly didn't set out to do that, but I can understand that perspective. I'm curious if others had that reaction.

[h]I had the same reaction and I can see how it could lead to a rosier estimate of safety conditions.

[i]There is an interesting note below where you mention the possible outcomes of surveys that "go poorly" if you will. If the result is that the workers are expected to spend more of their time and energy "fixing" the problem, it is probably no surprise that they will just say that there is no problem.

[j]I am always thinking about this type of thing—how results are framed and who the finger is being pointed at. I can see how this work can be interpreted that way, but I also see it from an even bigger picture—if people are feeling that they have to manage impressions (for financial safety, interpersonal safety, etc) then to me it stinks of a bigger cultural, systemic problem. Not really an individual one.

[k]Well – the "consequences" of the survey are really in the hands of the company or institution. A researcher can go in with the best of intentions, but a company can (and often does) respond in a way that discourages others from being forthright.

[l]Oh for sure! I didn't mean to shoulder the bigger problem on researchers or the way that research is conducted—rather, that there are other external pressures that are making individuals feel like managing people's impressions of them is somehow more vital than reporting safety issues, mistakes, needs, etc. Whether that's at the company, institution, or greater cultural level (probably everywhere), I don't think it's at the individual level.

[m]My first thought on bias in safety surveys had to do more with the survey model rather than method bias.  Most all safety surveys I have taken are based on the same template and questions generally approach safety from the same angle.  I haven't seen a survey that asks the same question several ways in the course of the survey or seen any control questions to attempt to determine validity of answers.  Perhaps some of the bias comes from the general survey format itself….

[n]I agree. In reviewing multiple types of surveys trying to target safety, there are many confounding variables. Trying to find a really good survey is tough – and I'm not entirely sure that it is possible to create something that can be applied by all. It is one of the reasons I was so intrigued by the BMS approach.

[o]Though—a lot of that work (asking questions multiple ways, asking control questions, determining validity and reliability, etc) is done in the original work that initially develops the survey metric. Just because it's not in a survey that one is taking or administering, doesn't necessarily mean that work isn't there

[p]Agreed – There are a lot of possible method biases in safety surveys. Maybe impression management isn't the most impactful. There just hasn't been much research in this area as it relates to safety measures, but certainly there is a lot out there on method biases more broadly. Stephanie and I had a follow up study (conference paper) looking at blatant extreme responding (using only the extreme endpoints on safety survey items). Ultimately, that too appears to be an issue

[q]In looking back over the summary, I was drawn to the use of the word "dishonesty." That implies intent. I'm wondering whether it is equally likely that people are lousy at estimating risk and generally overestimate their own capabilities (Dunning-Kruger, anyone?). So it is not so much about dishonesty but more about incompetency.

[r]They are more likely scared of retribution.

[s]This is an interesting point and I do think there is a part of the underestimation that has to do with an unintentional miscalibration. But, I think the work in this paper does go to show that some of the underestimation is related to people's proclivity to attempt to control how people perceive them and their performance.

[t]Even so, that proclivity is not necessarily outright dishonesty.

[u]I agree. I doubt that the respondents set out with an intent to be fraudulent or dishonest. Perhaps a milder or softer term would be more accurate?

[v]I wonder how strong this effect is for, say, graduate students who are in research labs under a PI who doesn't value safety

[w]I think it's huge. I know I see a difference in speaking with people in private versus our surveys

[x]Within my department, I know I became very cynical about surveys that were administered by the department or faculty members. Nothing ever seemed to change, so it didn't really matter what you said on them.

[y]I also think it is very significant. We are currently dealing with an issue where the students would not report safety violations to our Safety Concerns and Near Misses database because they were afraid of faculty reprisal. The lab is not especially safe, but if no one reports it, the conclusion might be drawn that no problems exist.

[z]And to bring it back to another point that was made earlier: when you're not sure if reporting will even trigger any helpful benefits, is the perceived risk of retribution worth some unknown maybe-benefit?

[aa]I heard a lot of the same concerns when we tried doing a "Near Miss" project. Even when anonymity was included, I had several people tell me that the details of the Near Miss would give away who they were, so they didn't want to share it.

[ab]Interesting point. It would seem here that folks fear if they say something is amiss with safety in the workplace, it will be treated as something wrong with themselves that must be fixed.

[ac]Yeah I feel like this kind of plays in to our discussion from last week, when we were talking about people feeling like they're personally in trouble if there is an incident

[ad]A related finding has been cited in other writings on surveys – if you give a survey, and nothing changes after the survey, then people catch on that the survey is essentially meaningless and they either don't take surveys anymore or just give positive answers because it isn't worth explaining negative answers.

[ae]There are risks associated with research labs, but I don't know if I would call them "clear". My sense is that "notable incidents" is a catchphrase people are using about academic lab safety to avoid quantitating the risks any more specifically.

[af]This is interesting to think about. On the one hand, if one works in a higher hazard environment maybe they just NOTICE hazardous situations more and think of them as more important. On the other hand, there is a lot of discussion around the normalization of hazards in an environment that would seem to suggest that they would not report on the hazards because they are normal.

[ag]Maybe they receive more training as well which helps them identify hazards easier. Oil & Gas industry Chemical engineers certainly get more training from my experience.

[ah]Oil and gas workers were also far more likely to participate in the study than the academic groups.  I think private industry has internalized safety differently (not necessarily better or worse) than academia.  And high hazard industries like oil and gas have a good feel for the cost of safety-related incidents.  That definitely gets passed on to the workforce

[ai]How does normalization take culture into account? Industries have a much longer history of self-reporting and of reporting accidents in general than do academic institutions.

[aj]Some industries have histories of self-reporting in some periods of time. For example, oil and gas did a lot of soul searching after the Deepwater Horizon explosion (which occurred the day of a celebration of 3 years with no injury reports), but this trend can fade with time. Alcoa in the 1990s and 2000s is a good example of this. For example, I've looked into Paul H. O'Neill's history with Alcoa. He was a safety champion whose work faded soon after he left.

[ak]I wonder if this can be used as a way to normalize the surveys somehow

[al]Hmm, yeah I think you could, but you would also have to take a measure of impression management so that you could remove the variance caused by that from your model.

Erg, but then long surveys…. the eternal dilemma.

[am]I bet there are overlapping biases too that have opposite effects; maybe all you could do is determine how unreliable your survey is

[an]In the BMS paper we covered last semester, it was noted that after they started to do the managerial lab visits, the committee actually received MORE information about hazardous situations. They attributed this to the fact that the committee was being very serious about doing something about each issue that was discovered. Once people realized that their complaints would actually be heard & addressed, they were more willing to report.

[ao]and the visits allowed for personal interactions which can be kept confidential as opposed to a paper trail of a complaint

[ap]I imagine that it was also just vindicating to have another human listen to you about your concerns like you are also a human. I do find there is something inherently dehumanizing about surveys (and I say this as someone who relies on them for multiple things!). When it comes to safety in my own workplace, I would think having a human make time for me to discuss my concerns would draw out very different answers.

[aq]Prudent point

[ar]The Hawthorne Effect?

[as]I thought that had to do with simply being "studied" and how it impacts behavior. With the BMS study, they found that people were reporting more BECAUSE their problems were actually getting solved. So now it was actually "worth it" to report issues.

[at]It would be interesting to ask the same question of upper management in terms of whether their safety attitudes are "true" or not. I don't know of any organizations that don't talk the safety talk. Even Amazon includes a worker safety portion in its advertising campaign despite its pretty poor record in that regard.

[au]I wish they would have expanded on this more, I'm really curious to see what methods to do this are out there or what impact it would have, besides providing more support that self-reporting surveys shouldn't be used

[av]That is an excellent point and again something that the reviewers pushed for. We added some text to the discussion about alternative approaches to measure these constructs. Ultimately, what can we do if we buy into the premise that self-report surveys of safety are biased? Certainly one option is to use another referent (e.g., managers) instead of the workers themselves. But that also introduces its own set of biases. Additionally, there are some constructs that would be odd to measure based on anything other than self-report (e.g., safety climate). So I think it's still somewhat of an open question, but a very good one. I'm sure Stephanie will have thoughts on this too for our discussion next week. 🙂 But to me that is the crux of the issue: what do we do with self-reports that tend to be biased?

[aw]Love this, I will have to go read the full paper. Especially your point about safety climate, it will be interesting to see what solutions the field comes up with because everyone in academia uses surveys for this. Maybe it will end up being the same as incident reports, where they aren't a reliable indicator for the culture.

The Art & State of Safety Journal Club: “Mental models in warnings message design: A review and two case studies”

Sept 22, 2021 Table Read

The full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753513001598?via%3Dihub

Two case studies in consumer risk perception and exposure assessment, focusing on mothballs and elemental mercury.

For this Table Read, we will only be reviewing the two case studies presented in the paper.

4. Two case studies: mothballs and mercury

Two case studies illustrate the importance of careful adaptation to context[a][b][c]. In the first case, an expert model of consumer-product use of paradichlorobenzene mothballs is enhanced with information from lay users’ mental models, so the model can become more behaviorally realistic (Riley et al., 2006a). In the second case, the mental models elicitation is enhanced with ethnographic methods including participant observation in order to gain critical information about cultural context of mercury use in Latino and Caribbean communities in the New York area (Newby et al., 2006; Riley et al., 2001a, 2001b, 2006b).

Both cases are drawn from chemical consumer products applied in residential uses. The chemicals considered here – paradichlorobenzene and mercury – have a wide variety of consumer and occupational uses that underscore the importance of considering context in order to attain a realistic sense of beliefs about the chemical, exposure behaviors, and resultant risk.

This analysis focuses on what these case studies can tell us about the process of risk communication design[d] in order to take account of the multidimensional aspects of risk perception as well as the overall cultural context of risk. Thus, risk communications may be tailored to the beliefs held by individuals in a specific setting, as well as to the specifics of their situation (physical, social, and cultural factors) which influence perceptions of and decision making about risk.

[e][f][g][h]

4.1. Mothballs

Mothballs are used in homes to control moth infestations in clothing or other textiles. Mothballs are solids (paradichlorobenzene or naphthalene) that sublimate (move from a solid state to a gaseous state) at room temperature. Many are in the shape of balls about 1 in. in diameter, but they are also available as larger cakes or as flakes. The products with the highest exposed surface area (flakes) volatilize more quickly. The product works by the vapor killing adult moths, breaking the insect life cycle.

The primary exposure pathway is inhalation of product vapors, but dermal contact and ingestion may also occur. Cases of ingestion have included children mistaking mothballs for candy and individuals with psychological disorders who compulsively eat household items (Avila et al., 2006; Bates, 2002). Acute exposure to paradichlorobenzene can cause skin, eye, or nasal tissue irritation; acute exposure to naphthalene can cause hemolytic anemia, as well as neurological effects. Chronic exposure to either compound causes liver damage and central nervous system effects. Additional long-term effects of naphthalene exposure include retinal damage and cataracts (USEPA, 2000). Both paradichlorobenzene and naphthalene are classified as possible human carcinogens (IARC Group II B) (IARC, 1999). Since this classification in 1987, however, a mechanism for cancer development has been identified for both naphthalene and paradichlorobenzene, in which the chemicals block enzymes that are key to the process of apoptosis, the natural die-off of cells. Without apoptosis, tumors may form as cell growth continues unchecked (Kokel et al., 2006).

Indoor air quality researchers have studied mothballs through modeling and experiment (e.g., Chang and Krebs, 1992; Sparks et al., 1991, 1996; Tichenor et al., 1991; Wallace, 1991). Research on this topic has focused on developing and validating models of fate and transport of paradichlorobenzene or naphthalene in indoor air. Unfortunately, the effects of consumer behavior on exposure were not considered[i][j]. Because consumer behavior strongly influences exposure, it is worth revisiting this work to incorporate realistic approximations of that behavior.

Understanding consumer decisions about purchasing, storage, and use is critical for arriving at realistic exposure estimates as well as effective risk management strategies and warnings content. Consumer decision-making is further based upon existing knowledge and understanding of exposure pathways, mental models of how risk arises (Morgan et al., 2001), and beliefs about the effectiveness of various risk-mitigation strategies. Riley et al. (2001a, 2001b) previously proposed a model of behaviorally realistic exposure assessment for chemical consumer products, in which exposure endpoints are modeled in order to estimate the relative effectiveness of different risk mitigation strategies, and by extension, to evaluate warnings (refer to Fig. 1). The goal is to develop warnings that provide readers with the information they need to manage the risks associated with a given product, including how hazards may arise, potential effects, and risk-mitigation strategies.

4.1.1. Methods

The idea behind behaviorally realistic exposure assessment is to consider both the behavioral and physical determinants of exposure in an integrated way (Riley et al., 2000). Thus, user interviews and/or observational studies are combined with modeling and/or experimental studies to quantitatively assess the relative importance of different risk mitigation strategies and to prioritize content for the design of warnings, based on what users already know about a product. Open-ended interviews elicit people’s beliefs about the product, how it works, how hazards arise, and how they may be prevented or mitigated. User-supplied information is used as input to the modeling or experimental design in order to reflect how people really interact with a given product. Modeling can be used to estimate user exposure or to understand the range of possible exposures that can result from different combinations of warning designs and reading strategies.

Riley et al. (2006a) recruited 22 adult volunteers[k][l][m][n][o][p][q][r] who had used mothballs, recruited from the business district in Northampton, Massachusetts. Interview questions probed five areas: motivation for product use and selection; detailed use data (location, time-activity patterns, amount and frequency of use); mental models of how the product works and how hazards may arise; risk perceptions; and risk-mitigation strategies. Responses were analyzed using categorical coding (Weisberg et al., 1996). A consumer exposure model utilized user-supplied data to determine the concentration of paradichlorobenzene in a two-box model (a room or compartment in which moth products are used, and a larger living space).

4.1.2. Uses

Table 1 illustrates the diversity of behavior surrounding the use[s][t][u][v][w] of mothballs in the home. It is clear that many users behave differently with the product than one might assume from reading the directions or warnings on the package label.

65% of participants reported using mothballs to kill or repel moths, which is the product's intended use. 35% reported other uses for the product, including as an air freshener and to repel rodents outdoors. Such uses imply different use behaviors related to the amount of product used and the location where it is applied. Effective use of paradichlorobenzene as an indoor insecticide requires use in an enclosed space, the more airtight the better. Ventilation is not recommended, and individuals should limit their exposure to the non-ventilated space. In contrast, use as a deodorizer disperses paradichlorobenzene throughout a space by design.

These different behaviors imply different resultant exposure levels. For use as an air freshener, the exposure might be higher due to using the product in the open in one’s living space. Exposures might also be lower, as in the reported outdoor use for controlling mammal pests.

A use not reported in this study, perhaps due to the small sample size, or perhaps due to the stigma associated with drug use, is the practice of huffing or sniffing – intentional inhalation in order to take advantage of the physiological effects of volatile chemicals (Weintraub et al., 2000). This use is worth mentioning due to its high potential for injury, even if this use is far less likely than other uses reported here.

The majority of users place mothballs outside of sealed containers in order to control moths, another use that is not recommended by experts or on package labeling[x][y][z][aa][ab][ac][ad][ae]. Even though the product is recommended for active infestations, many users report using the product preventively, increasing the frequency of use and resultant exposure above recommended or expected levels. Finally, the amount used is greater than indicated for a majority of the treatment scenarios reported. These variances from recommended use scenarios underscore the need for effective risk communication, and suggest priority areas for reducing risk.

These results indicate a wide range of residential uses with a variety of exposure patterns. In occupational settings, one might anticipate a similarly broad range of uses. In addition to industrial and commercial uses as mothballs (e.g., textile storage, dry cleaning) and air fresheners (e.g., taxi cabs, restaurants), paradichlorobenzene is used as an insecticide (ants, fruit borers) or fungicide (mold and mildew), as a reagent for manufacturing other chemical products, plastics and pharmaceuticals, and in dyeing (GRR Exports, 2006).

4.1.3. Exposures

Modeling of home uses illustrates the range of possible exposures[af] based on self-reported behavior, and compares a high and low case scenario from the self-reports to an ‘‘intended use’’ scenario that follows label instructions exactly.

Table 2 shows the inputs used for modeling and resultant exposures. The label, used for the intended use scenario, advised that one box (10 oz, 65 mothballs) be used for every 50 cubic feet (1.4 cubic meters) of tightly enclosed space. Thus, for a 2-cubic-meter closet, 90 mothballs were assumed for this scenario. The low exposure scenario involved a participant self-report in which 10 mothballs were placed in a closed dresser drawer, and the high exposure scenario involved two boxes of mothballs reportedly placed in the corners of a 30-cubic-meter bedroom.
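
To make the two-box setup concrete, here is a minimal sketch of a well-mixed two-compartment mass balance for a subliming solid such as paradichlorobenzene. It is illustrative only and is not the authors' model: the saturation concentration, per-ball mass-transfer coefficient, and airflow values are hypothetical placeholders, and only the closet and bedroom volumes and mothball counts echo the scenarios described above.

```python
# Minimal two-box (enclosure + living space) indoor-air sketch for a subliming solid.
# Placeholder parameters throughout; not the model or inputs of Riley et al. (2006a).

C_SAT = 2.4  # assumed saturation vapor concentration of paradichlorobenzene, g/m^3 (placeholder)

def living_space_concentration(n_mothballs, v_box=2.0, v_room=30.0,
                               q_leak=0.2, q_vent=30.0,
                               k_per_ball=0.02, hours=72.0, dt=0.01):
    """Crude forward-Euler mass balance; returns (C_enclosure, C_living_space) in g/m^3.

    n_mothballs -- number of mothballs in the enclosure ("box 1")
    v_box       -- enclosure volume, m^3 (closet, drawer, or the room itself)
    v_room      -- living-space volume, m^3
    q_leak      -- air exchange between enclosure and living space, m^3/h (placeholder)
    q_vent      -- living-space ventilation to the outdoors, m^3/h (placeholder)
    k_per_ball  -- assumed mass-transfer coefficient per ball, m^3/h (placeholder)
    """
    c_box = c_room = 0.0
    for _ in range(int(hours / dt)):
        # Sublimation slows as the enclosure air approaches saturation.
        emit = n_mothballs * k_per_ball * max(C_SAT - c_box, 0.0)
        dc_box = (emit + q_leak * (c_room - c_box)) / v_box
        dc_room = (q_leak * (c_box - c_room) - q_vent * c_room) / v_room
        c_box += dc_box * dt
        c_room += dc_room * dt
    return c_box, c_room

# Intended use: ~90 balls in a tight 2 m^3 closet
# (label rate: (2 m^3 / 1.4 m^3) * 65 balls per box ~ 93, consistent with the 90 assumed above).
print(living_space_concentration(90))
# High scenario: two boxes (130 balls) placed in the open in a 30 m^3 bedroom,
# modeled here as a leaky "enclosure" the size of the room itself.
print(living_space_concentration(130, v_box=30.0, q_leak=10.0))
```

Even with these made-up numbers, the sketch reproduces the qualitative point of the next paragraph: once the air in a tight enclosure nears saturation, little additional product escapes to the living space, whereas open placement keeps volatilization going and drives the living-space concentration far higher.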

Results show that placing moth products in a tightly enclosed space significantly reduces the concentration in users’ living space.[ag][ah][ai][aj][ak][al][am][an] The high level of exposure resulting from the usage scenario with a large amount of mothballs placed directly in the living space coincided with reports from the user of a noticeable odor and adverse health effects that the user attributed to mothball use.

4.1.4. Risk perception

There was a wide range of beliefs about the function and hazards[ao][ap] of [aq][ar][as][at][au][av][aw]mothballs among participants, as well as a gap in knowledge between consumer and expert ideas of how the product works. Only 14% of the participants were able to correctly identify an active ingredient in mothballs, while 76% stated that they did not know the ingredients. Similarly, 68% could not correctly describe how moth products work, with 54% of all participants believing that moths are repelled by the unpleasant odor. Two-thirds of participants expressed health concerns related to using moth products[ax][ay]. 43% mentioned inhalation, 38% mentioned poisoning by ingestion, 21% mentioned cancer, and 19% mentioned dermal exposure. A few participants held beliefs that were completely divergent from expert models, for example a belief that mothballs ‘‘cause parasites’’ or ‘‘recrystallize in your lungs.’’

A particular concern arises from the common belief that moths are repelled by the smell of mothballs. This may well mean that users would want to be able to smell the product to know it is working – when in fact this would be an indication that users themselves were being exposed and possibly using the product incorrectly. Improvements to mothball warnings might seek to address this misconception of how mothballs work, and emphasize the importance of closed containers, concentrating the product near the treated materials and away from people.

4.2. Mercury as a consumer product

Elemental mercury is used in numerous consumer products, where it is typically encapsulated, causing injury only when a product breaks. Examples include thermometers, thermostats, and items containing mercury switches such as irons or sneakers with flashing lights. The primary hazard arises from the fact that mercury volatilizes at room temperature. Because of its tendency to adsorb onto room surfaces, it has long residence times in buildings compared with volatile organic compounds. Inhaled mercury vapor is readily taken up by the body; in the short term it can cause acute effects on the lungs, ranging from cough and chest pain to pulmonary edema and pneumonitis in severe cases. Long-term exposure can cause neurological symptoms including tremors, polyneuropathy, and deterioration of cognitive function (ATSDR, 1999).

The second case study focuses on specific uses of elemental mercury as a consumer product among members of Latino and Caribbean communities in the United States. Mercury is sold as a consumer product in botánicas (herbal pharmacies and spiritual supply stores), for a range of uses that are characterized variously as folkloric, spiritual or religious in nature.

4.2.1. Methods

Newby et al. (2006) conducted participant observation and interviews with 22 practitioners and shop owners[az], seeking to characterize both practices that involved mercury use and perceptions of resulting risks. These practices were compared and contrasted with uses reported in the literature as generally attributable to Latino and Caribbean religious and cultural traditions in order to distinguish between uses that are part of the Santeria religion, and other uses that are part of other religious practice or secular in nature. Special attention was paid to the context of Santeria, especially insider–outsider dynamics created by its secrecy, grounded in its histories of suppression by dominant cultures. Because the label Latino is applied to a broad diversity of ethnicities, races, and nationalities, the authors sought to attend to these differences as they apply to beliefs and practices related to mercury.

Uses reported in the literature and reported by participants to Newby et al. (2006) and Riley et al. (2001a, 2001b) were modeled to estimate resulting exposures. The fate and transport of mercury in indoor air is difficult to characterize because of its tendency to adsorb onto surfaces and the importance of droplet-size distributions on overall volatilization rates (Riley et al., 2006b). Nevertheless, simple mass transfer and indoor air quality models can be employed to illustrate the relative importance of different behaviors in determining exposure levels.

4.2.2. Uses

Many uses are enclosed, such as placing mercury in an amulet, gourd, walnut, or cement figure (Johnson, 1999; Riley et al., 2001a, 2001b, 2006b; Zayas and Ozuah, 1996). Other uses are more likely to elevate levels of mercury in indoor air to hazardous levels, including sprinkling of mercury indoors or in cars for good luck or protection, or adding mercury to cleaning compounds or cosmetic products (Johnson, 1999; Zayas and Ozuah, 1996).

Some uses, particularly those attributable to Santeria, are occupational in nature. Santeros and babalaos (priests and high priests) described being paid to prepare certain items that use mercury (Newby et al., 2006). Similarly, botanica personnel described selling mercury as well as creating certain preparations with it (Newby et al., 2006; Riley et al., 2001a, 2001b). One case report described exposure from a santero spilling mercury (Forman et al., 2000). Some of this work occurs in the home, making it both occupational and residential.

Across the U.S. population, including in Latino and Caribbean populations, it is more common for individuals to be exposed to elemental mercury vapor through accidental exposures such as thermometer, thermostat and other product breakage or spills from mercury found in schools and abandoned waste sites (Zeitz et al., 2002). The cultural and religious uses described above reflect key differences in use (including intentional vs. accidental exposure) that require attention in design of risk communications.

4.2.3. Exposures

Riley et al. (2001a, 2001b) solved a single-chamber indoor-air quality model analytically to estimate exposures based on scenarios derived from two interviews with mercury users. Riley et al. (2001a, 2001b) similarly modeled scenarios for sprinkling activities reported elsewhere in the literature. Riley et al. (2001a, 2001b) additionally employed mass transfer modeling combined with indoor air quality modeling to estimate resulting exposures from the contained uses described in interviews with practitioners (Newby et al., 2006).
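
For reference, the well-mixed single-chamber model referred to here has a short closed-form solution, sketched below with purely hypothetical numbers. This is a generic illustration of the model class, not the authors' parameterization, and it ignores the surface adsorption that, as noted above, complicates real indoor mercury behavior.

```python
import math

def single_chamber_concentration(t_hours, emission_g_per_h, vent_m3_per_h, volume_m3, c0=0.0):
    """Analytic solution of the well-mixed box model V dC/dt = E - Q*C.

    Returns the concentration (g/m^3) at time t for a constant emission rate E,
    ventilation flow Q, and chamber volume V, starting from concentration c0.
    """
    c_ss = emission_g_per_h / vent_m3_per_h   # steady-state concentration, E/Q
    k = vent_m3_per_h / volume_m3             # air-exchange rate, 1/h
    return c_ss + (c0 - c_ss) * math.exp(-k * t_hours)

# Hypothetical sprinkling scenario: 0.001 g/h of mercury vapor volatilizing into a
# 30 m^3 room ventilated at 15 m^3/h (placeholder numbers, not values from the paper).
print(single_chamber_concentration(8.0, 0.001, 15.0, 30.0))
```

In a model like this, contained uses correspond to a drastically smaller effective emission rate, which is consistent with the orders-of-magnitude spread in predicted exposures described below.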

Results presented in Table 3 show wide variation in predicted exposures resulting from different behavior patterns in different settings. Contained uses produce the lowest exposures. As long as the mercury remains encapsulated or submerged in other media, it poses little risk. By contrast, uses in open air can result in exposures orders of magnitude greater, depending on amounts and how the mercury is distributed, as droplet size and surface area are key determinants of exposure.

4.2.4. Risk perception

Newby et al. (2006) found that participants identified the risks of mercury use as primarily legal in nature.[ba][bb][bc][bd][be][bf] Concerns about getting caught by either police or health officials were strong[bg][bh][bi]. After these concerns, practitioners mentioned the risks of mercury use ‘‘backfiring’’ on a spiritual level, particularly if too much is used.[bj][bk][bl] There was some awareness of potential harmful health effects from mercury use[bm][bn], but the perceptions of mercury’s spiritual power and the perceived legal risks of possession and sale figured more prominently in users’ rationales for taking care in using it and clearly affected risk-mitigation strategies described (e.g., not discussing use or sales openly, giving people a bargain so they won’t tell authorities).

Newby et al. (2006) discuss at length the insider–outsider dynamics in the study, and their influence on the strength of fears of illegality of mercury. Because of taboos on sharing details of Santeria practice, the authors warn against providing certain details of practice in risk communications designed by outsiders, as it would undercut the credibility of the warning messages.

Mental models of risk perception are critically important in all cases of consumer mercury use, both intentional and unintentional. When a thermostat or thermometer breaks in a home, many users will use a vacuum to clean up the spill[bo][bp][bq][br], based on a mental model of mercury’s hazards that does not include a notion of mercury as volatile. A key gap in people’s knowledge of mercury relates to its volatility; most lay people do not realize that vacuuming mercury will greatly increase its indoor air concentration, causing a greater health hazard than simply leaving mercury on the floor (Schwartz et al., 1992; Zelman et al., 1991). Thus, many existing risk communications about mercury focus on accidental spills and how (or how not) to clean them up.[bs][bt]


[a]Interesting timing for me on this paper- we’re currently working on a scheme to communicate hazards to staff & faculty at a new facility.  We have an ethnic diversity to consider and a number of the spaces will host the general public for special events.  Lots of perspectives to account for…

[b]If you have info on this next week, it would be interesting to hear what challenges you have run into and what you have done to address them.

[c]I’d be game for that.  I’m just getting into the project and was starting to consider different risk perceptions among different audiences.  This paper has given me some food for thought

[d]This is different from the way I have used the term “risk communication” traditionally. Traditionally risk communication is designed to help a variety of stakeholders work through scientific information to come to a shared decision about risk. See https://www.epa.gov/risk-communication for example. However, this paper’s approach sounds more like the public health approach used to collect more accurate information about a risk

[e]I really like the concept of “behaviorally realistic exposure assessment”. Interestingly, EPA has taken over regulation of workplace chemicals from OSHA because OSHA was legally constrained from using realistic exposure assessment (specifically, the assumption that PPE may not be worn correctly all the time)

[f]Wow – that is crazy to hear that OSHA would be limited in that way. One would think actual use would be an incredibly important thing to consider.

[g]OSHA is expected to assume that all of its regulations will be followed as part of its risk assessment. EPA doesn’t believe that. This impacted EPA’s TSCA risk assessment of Methylene Chloride

[h]There is a press story about this at

https://www.cbsnews.com/video/family-of-man-who-died-after-methylene-chloride-exposure-call-epa-decision-step-in-the-right-direction/

[i]I’m wondering if the researchers are in academia or from the company. If from companies which supplied mothballs, I’m surprised that this was not one of the first things that they considered.

[j]That’s an interesting question. Mothballs are a product with a long history and they were well dispersed in the economy before regulatory concerns about them arose. So the vendors probably had an established product line before the regulations arose.

[k]Wondering about the age distribution of the study group- when I think of mothballs, I think of my grandparents who would be about 100 years old now.  Maybe younger people would handle the mothballs differently since they are likely to be unfamiliar with them.

[l]I’m also wondering about how they recruited these volunteers because that could introduce bias, for example only people who already know what they are might be interested

[m]Volunteer recruitment is an interesting thought…what avenue did they use to recruit persons who had this product? Currently wondering who still uses them since I don’t know anyone personally who talks about it…

[n]It sounds like they went into the shopping area outside Smith College and recruited people off the street. Northampton is a diverse social environment, but I suspect mothball users are from a particular segment of society

[o]How the recruitment happened seems like a key method that wasn’t discussed sufficiently here. After re-reading this, it might be that they recruited people without knowing whether they had used mothballs or not.

[p]Interesting thought. When I was in my undergraduate studies, one of my professors characterized a substance as “smelling like mothballs,” and me and all of my peers were like “What? What do mothballs smell like…?” Curious as to whether product risk assessment is different between these generational groups.

[q]Did you and your undergraduate peers go grab a box and sniff them to grok the reference?

[r]I certainly did not! But I wonder how many people would have, if they were available at the time!

[s]Would anyone like to share equivalents they have seen in research labs? Researchers using a product in a way different from intended? Did it cause any safety issues? Were the researchers surprised to find that they were not using as intended? Were the researchers wrong in their interpretations and safety assessment? If so, how?

[t]I presume you’re thinking of something with a specific vendor-defined use as opposed to a reagent situation where, for example, a change in the acid used to nitric led to unforeseen consequences.

[u]I agree that this can apply to the use of chemicals in research labs. Human error is why we want to build in multiple controls. In terms of examples, using certain gloves or PPE for improper uses is the first thing that comes to mind.

[v]I have seen hardware store products repurposed in lab settings with unfortunate results, but I can’t recall specifics of these events off the top of my head. (One was a hubcap cleaning solution with HF in it used in the History Dept to restore granite architectural features.)

[w]I have seen antifreeze brought in for Chemistry labs and rock salt brought in for freezing point depression labs…not dangerous, but not what they were intended for.

[x]So is the take-away on this point and the ones that follow in the paragraph that another communication method is needed?  Reading the manual before use is rare (in my experience)- too wordy.  Maybe pictographs or some sort of icon-based method of communications.

[y]This seems like the takeaway to me! Pictures, videos—anything that makes the barrier to content engagement as low as possible. Even making sure that it is more difficult to miss the information when trying to use the product would likely help (i.e., not having to look it up in a manual, not having to read long paragraphs, not having to keep another piece of paper around)

[z]In the complete article, they discuss three hurdles to risk understanding:

1. Cognitive heuristics

2. Information overload

3. Believability and self-efficacy

These all sound familiar from the research setting when discussing risks

[aa]Curious how many people actually read package labeling, and of those people how many take the labeling as best-practice and how many take it as suggested use. I’m also curious how an analogy to this behavior might be made in research settings. It seems to me that there would likely be a parallel.

[ab]I believe that the Consumer Product Safety Commission does research into this question

[ac]Other considerations: is the package labeling comprehensible for people (appropriate language)? If stuff is written really small, how many people actually take the time to read it? Would these sorts of instructions benefit more from pictures rather than words?

[ad]I was watching a satirical Better Call Saul “legal ethics” video yesterday where the instructor said “it doesn’t matter how small the writing is, you just have to have it there”. See https://www.dailymotion.com/video/x7s7223 for that specific “lesson”

[ae]I think we’d see a parallel in cleaning practices, especially if it’s a product that gets diluted different amounts for different tasks. Our undergraduate students for example think all soap is the same and use straight microwash if they see it out instead of diluting.

[af]Notably, even when putting them outside in a wide area, you can still smell them from a distance, which would make them a higher exposure than expected.  Pictures and larger writing on the boxes would definitely help, but general awareness may need to be shared another way.

[ag]Historical use was in drawers with sweaters, linens, etc (which is shown to be the “low exposure”)…were these products inadvertently found to be useful in other residential uses much later?

[ah]I wonder if the “other uses” were things consumers figured out and shared with their networks – but those uses would actually increase exposure.

[ai]It appears so!  Another issue may be the amount of the product used.  Using them outside rather than in a drawer may minimize the exposure some, but that would be relative to exactly how much of the product was used…

[aj]The comment about being able to smell it to “know it is working” is also interesting. It makes me think of how certain smells (lemon scent) are associated with “cleanliness” even if it has nothing to do with the cleanliness.

[ak]I’ve also heard people refer to the smell of bleach as the smell of clean – although if you can smell it, it means you are being exposed to it!

[al]This is making me second guess every cleaning product I use!

[am]It also makes me wonder if added scent has been used to discourage people from overusing a product.

[an]I think that is why some people tout vinegar as the only cleaner you will ever need!

[ao]What do you think would lead to this wide range?

[ap]If they are considering beliefs that were passed down through parents and grandparents, this would also correlate with consumers not paying attention to the packaging, because they grew up with a certain set of beliefs and knowledge and have never thought to question it.

[aq]Is it strange to hear this given that the use and directions are explained right on the packaging?

[ar]I don’t think it is that strange. I think a lot of people don’t bother to read instructions or labels closely, especially for a product that they feel they are already familiar with (grow up with it being used)

[as]Ah, per my earlier comment regarding whether or not people read the packaging—I am not actually very surprised by this. I think there is somewhat of an implicit sense that products made easily available are fairly safe for use. That, coupled with a human tendency to acclimate to unknown situations after no obvious negative consequences, plus the sheer volume of text meant to protect corporations (re: Terms of Use, etc.), means that I think people sort of ignore these things in their day-to-day.

[at]I agree. It seems that people use products they saw used throughout their childhood, believe them to be effective, and hence don’t read the packaging.  (Clearly going home to read the packaging myself…as I used them recently to repel skunks from my yard).

[au]Since the effects of exposure to mothball active ingredients are not acute in all but the most extreme cases (like ingestion), it is unlikely that any ill health effects would even be linked to mothballs

[av]I have wondered if a similar pattern happens with researchers at early stages. If the researcher is introduced to a reagent or a process by someone who doesn’t emphasize safety considerations, that early researcher thinks of it as relatively safe – and then doesn’t do the risk assessment on their own.

[aw]Yes, exactly. Long-term consequences are much harder for us to grapple with than acute consequences, which may lead to overconfidence and overexposure

[ax]I wonder why, with so many having health concerns, only 12% used it on the correct “as needed” basis.

[ay]A very, very interesting question. I wonder if it has something to do with a sense that “we just don’t know” along with a need to find a solution to an acute problem. i.e., maybe people are worried about whether or not it is safe in a broad, undefined, and somewhat intractable manner, but are also looking to a quick solution to a problem they are currently facing, and perhaps ignore a pestering doubt

[az]Again here, I’m wondering why more is not described about how they identified participants, because it is a small sample size and there is a possibility for bias

[ba]This is an important finding. Public risk perception and professional risk perception can be quite different. I don’t think regulators consider how they might contribute to this gap because each chemical is regulated in isolation from other risks and exposures.

[bb]It is also related to the idea of how researchers view EHS inspections. Do they see them as opportunities for how they can make their research work safer? Or do they merely see them as annoying exercises that might get them “in trouble” somehow?

[bc]I think that in both Hg case and the research case, there is a power struggle expressed as a culture clash issue. Both the users of Hg for spiritual purposes and researchers are likely to feel misunderstood by mainstream society represented by the external oversight process

[bd]I think this is *such* an important takeaway. The sense as to whether long documents (terms of use and other contracts, etc) and regulatory bodies (EHS etc) are meant to protect the *people/consumers* or whether they are meant to protect the *corporation* I think is a big deal in our society. Contracts, usage information, etc abound, but it’s often viewed (and used) as a means to protect from liability, not as a means to protect from harm. I think people pick up on that.

[be]I am in total agreement – recently I sat through a legal deposition for occupational exposure related mesothelioma, it was unsettling how each representative from each company pushed the blame off in every other possible direction, including the defendant. There are way more legal protections in place for companies than I could have ever imagined.

[bf]There is some discussion of trying to address this problem with software user’s agreements, but I haven’t heard of this concern on the chemical use front.

[bg]This is to say there is a disconnect in the reasoning behind the legal implications? Assuming most are not aware of the purpose of the regulations as protections for people?

[bh]I don’t know of any agency that would recognize use of Hg as a spiritual practice. Some Native Americans have found their spiritual practices outlawed because their use of a material is a different scenario from the risk scenario that the regulators base their rules on

[bi]I agree with your comment about a disconnect. Perhaps if they understood more about the reasons for the laws they would be more worried about their health rather than getting in trouble.

[bj]To a practitioner, is a spiritual “backfire” completely different from a health effect, or just a different explanation of the same outcome?

[bk]Good question

[bl]Good point – I thought about this too. I’d love to hear more about what the “spiritual backfire” actually looked like. Makes me think of the movie “The Exorcism of Emily Rose” where they showed her story from the perspective of someone who thinks she is possessed by demons versus someone who thinks she is mentally ill.

[bm]I am curious to find out how risk communication plays a role here, because it seems those using the mercury know about its potential health hazard.

[bn]Agree – It does say “some” awareness so I would be interested to see how bad they think it is for health vs reality. It looks like they are doing a risk analysis of sorts and are thinking the benefits outweigh the risks.

[bo]I’m not sure how to articulate it, but this is very different than the spiritual use of mercury.  Spiritual users can understand the danger of mercury exposure but feel the results will be worth it.  The person wielding a vacuum does not understand how Hg’s hazard is increased through volatilization.  I suspect a label on vacuum cleaners that said ‘NOT FOR MERCURY CLEANUP’ would be fairly effective.

[bp]Would vacuum companies see this as worth doing today? I don’t think I really encountered mercury until I was working in labs – it is much less prevalent today than it used to be (especially in homes), so I wonder if they would not see it as worth it by the numbers.

[bq]Once you list one warning like this, vacuum companies might need to list out all sorts of other hazards that the vacuum is not appropriate for cleanup

[br]Also, mercury is being phased out in homes, but you still see it around sometimes, especially in thermometers. Keep in mind this paper is from 2014.

[bs]I don’t understand this statement in the context of the paragraph. Which risk communication messages is she referring to? I know that institutional response to Hg spills has changed a lot over the last 30 years. There are hazmat emergency responses to them in schools and hospitals monthly

[bt]I think this vacuum example is just showing how there is a gap in the risk communications to the public (not practitioners), since they mainly focused on clean up rather than misuse. It would be nice if there was a reference or supporting info here. They may have looked at packaging from different mercury suppliers.

ACS Webinar: Changing the Culture of Chemistry – Safety in the Lab

Speakers at the webinar included:

  • Mary Beth Mulcahy, Manager in the Global Chemical and Biological Security group at Sandia National Laboratories, Editor-in-Chief of ACS Chemical Health & Safety
  • Michael B. Blayney, Executive Director, Research Safety at Northwestern University
  • Monica Mame Soma Nyansa, Ph.D. Student, Michigan Technological University
  • Kali Miller, Managing Editor, ACS Publications
https://axial.acs.org/2021/06/01/lab-safety-webinar-on-demand/?utm_source=pubs_content_marketing&utm_medium=email&utm_campaign=PUBS_0621_JHS_axialnewsletter0521&src=PUBS_0621_JHS_axialnewsletter0521&ref=pubs_content_marketing_email_PUBS_0621_JHS_axialnewsletter0521

The session closed with a question-and-answer period moderated by Kali Miller, during which all three panelists shared more insights and advice.

2020-21 CHAS Journal Club Index

During the 2020-21 academic year, an average of 15 to 20 people gathered to review and discuss academic papers relevant to lab safety in academia.

During the fall, we followed the traditional model of a presenter who led the discussion after the group was encouraged to read the paper. In the spring, we began a two-step process: first, a table read in which the group silently and collaboratively commented on an abbreviated version of the paper in a shared Google document one week, followed by an oral discussion the second week. The second approach enabled much more engagement by the group as a whole.

The spring papers we discussed were primarily focused on graduate student led Lab Safety Teams and included (in reverse chronological order):

The fall papers were focused primarily on the idea of safety culture and included (in reverse chronological order):

  • What Is A Culture Of Safety And Can It Be Changed?
  • Safety Culture & Communication
  • Supporting Scientists By Making Research Safer
  • Perspectives On Safety Culture
  • Making Safety Second Nature In An Academic Lab
We will pick up the Journal Club again in the fall of 2021. We are interested in looking at the psychology of safety with 2 things in mind:

  • (1) papers with well-done empirical studies, and
  • (2) studies that investigate an issue that is present in academia.

    It is likely that papers that are investigating the psychology of safety have focused primarily on industry (construction, airplanes, etc), so it will be important to identify the specific phenomenon they are investigating and be prepared to translate it to academia. Questions about the CHAS Journal Club can be directed to membership@dchas.org

    Engaging senior management to improve the safety culture

    The Art & State of Safety Journal Club, 05/05/21

    Excerpts from “Engaging senior management to improve the safety culture of a chemical development organization thru the SPYDR (Safety as Part of Your Daily Routine) lab visit program”

    written by Victor Rosso, Jeffery Simon, Matthew Hickey, Christina Risatti, Chris Sfouggatakis, Lydia Breckenridge, Sha Lou, Robert Forest, Grace Chiou, Jonathan Marshall, and Jean Tom

    Presented by Victor Rosso

    Bristol-Myers Squibb

    The full paper can be found here: https://pubs.acs.org/doi/10.1016/j.jchas.2019.03.005 

    INTRODUCTION

    The improvement and enrichment of an organization’s safety culture are common goals throughout both industrial and academic research. As a chemical process development organization that designs and develops safe, efficient, environmentally appropriate and economically viable chemical processes for the manufacture of small molecule drug substances, we continually strive to improve our safety culture. Cultivating and energizing a rich safety culture is critical for an organization whose members are performing a multitude of processes at different scales using a broad spectrum of hazardous chemical reagents as its core activities. While we certainly place an emphasis on utilizing greener materials and safer reagents, the nature of our business requires us to work with all types of hazardous and reactive chemicals and the challenges we face are pertinent to any chemical research organization.

    In our organization of approximately 200 organic and analytical chemists[a] and chemical engineers, we have a Safety Culture Team (SCT)[b][c][d][e][f][g] whose mission is to develop programs to enhance the organization’s safety culture. To make this culture visible, the team developed a key concept, Safety is Part of Your Daily Routine, into a brand with its own logo, SPYDR. To build on this concept, we designed a program known as the SPYDR Lab Visits, shown in Figure 1. The program engages our senior leadership[h][i] by having them interact with our scientists directly at the bench in the laboratory[j][k][l][m] to discuss safety concerns. This program, initiated in 2013, has visibly engaged our senior leaders directly in the organization’s safety culture and brought to our attention a wide range of safety concerns that would not readily appear[n][o][p] in a typical safety inspection. Furthermore, this program provides a mechanism for increased communication between all levels of the organization by arranging meetings between personnel who may not normally interact with one another on a regular basis. The success of this program has led to similar programs across other functional areas in the company.[q]

    A key safety objective for all organizations is to ensure that the entire organization can trust that the leadership is engaged in and supportive of the safety culture. [r][s][t][u]Therefore, this program was designed to (1) emphasize that safety is a top priority from the top of the organization to the bottom[v][w][x][y][z], (2) engage our senior leadership with a prominent role in the safety conversations in the organization, (3) build a closer relationship between our senior leaders and the laboratory occupants and (4) utilize the feedback obtained from the visits to make the working environment better for our scientists. The program is a supplement to, and not a replacement for, the long-standing laboratory inspection program done by the scientists in the organization.

    The program involves assigning the senior leaders to meet with 2–5 scientists in the scientists’ laboratory. There are approximately 40 laboratories in the organization, and over the course of the year, each laboratory will meet with 2–3 senior leaders and each senior leader will visit 4–6 different laboratories. All of this is organized using calendar entries, which inform the senior leaders and scientists of where and when to meet and contain the survey link to collect the feedback.
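
As a rough illustration of the logistics in the preceding paragraph, the snippet below round-robins a pool of senior leaders across laboratories so that each lab receives a couple of visits and each leader carries a handful of labs. All names and the leader count are invented for the example; the actual program is coordinated through calendar entries and a survey link as described above.

```python
from itertools import cycle

def assign_spydr_visits(labs, leaders, visits_per_lab=2):
    """Spread lab visits across senior leaders in round-robin order.

    Returns a dict mapping each leader to the labs they will visit. With ~40 labs,
    2-3 visits per lab, and 4-6 labs per leader, the arithmetic implies a pool on
    the order of 20 leaders; the pool size used here is a hypothetical choice.
    """
    schedule = {leader: [] for leader in leaders}
    leader_cycle = cycle(leaders)
    for lab in labs:
        for _ in range(visits_per_lab):
            schedule[next(leader_cycle)].append(lab)
    return schedule

# Hypothetical example: 40 labs, 20 leaders, 2 visits per lab -> 4 labs per leader.
labs = [f"Lab-{i:02d}" for i in range(1, 41)]
leaders = [f"Leader-{i:02d}" for i in range(1, 21)]
print(assign_spydr_visits(labs, leaders)["Leader-01"])
```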

    As a result of this program, our senior leaders engage our bench scientists in conversations that are primarily driven to draw out the safety concerns of our scientists. However, these conversations can run the gamut of anything that is a concern to our team members[aa][ab]. This can range from safety issues, laboratory operations, and current research work to organizational changes and personal concerns. The senior leadership regularly reminds and encourages the scientists to engage on any topic of their choosing; this creates a collegial atmosphere for laboratory occupants to voice their safety concerns and ideas.

    The laboratory visit program was modeled around the Safety SPYDR, and thus we designed the program to have 8 legs[ac]. The first two legs consist of the program’s goals for the visit. We asked the senior leaders to ensure that they state the purpose of the program: that they are visiting the laboratory to find ways to improve lab safety. The second leg, which is the primary goal, is to ask “what are your safety concerns?”. Often this is met with “we have no safety concerns”, but using techniques common in the interviewing process, the leaders ask deeper probing questions to draw out what the scientists care about, and with additional probing[ad][ae][af][ag], root causes of the safety concerns will emerge. Once the scientists start talking about one safety concern, often multiple concerns will then surface, thus giving our safety teams an opportunity to deal with these concerns.

    The next two legs of the SPYDR Lab visits consist of observations we ask our senior leaders to make on laboratory clutter and access to emergency equipment[ah]. If the clutter level of a laboratory is deemed unacceptable,[ai][aj][ak][al] the SCT will look to provide support to address root causes of the clutter. Typical solutions have been addition of storage capacity, removal of excess equipment from the work spaces, and alternative workflows. The second observation is to ensure clear paths from the work areas to emergency equipment exist, should an incident occur. We wanted to make sure a direct line existed to the eyewash station/shower such that the occupant would not be tripping over excessive carts, chillers, shelving or miscellaneous equipment. These observations led to active coaching of our laboratory occupants to ensure safe egress existed and modifications to the work environment. For example, the relocation of many chillers to compartments underneath the hood from being on a cart in front of the hood enabled improved egress for a number of laboratories.

    For the final four legs of the SPYDR Visit, we ask the senior leaders to probe for understanding on various topics[am] that range from personal protective equipment selection, waste handling, reactor setup and chemical hazards. The visitor is asked to rate these areas from needs improvement to average, high, or very high. Figure 2 compares these ratings from the first year (2013) with the current year (2018). In the first year of the program, there were a few scattered “needs improvement” ratings that resulted in communication with the line management of the laboratory. After the initial year, “needs improvement” ratings became very rare in all cases except clutter. In the current year, we shifted two topics[an] to Laboratory Ergonomics[ao] and Electricity, which uncovered additional opportunities for improvement. We recommend changing the contents of these legs on a regular basis[ap] as it shifts the focus of the discussion and potentially uncovers new safety concerns.

    FEEDBACK MECHANISM

    The SPYDR lab visits are built around a feedback loop, illustrated in Figure 3, that utilizes an online survey both to track completion of the visits and to communicate findings back to the SCT. The order of events around a laboratory visit consists of scheduling a half-hour meeting between our senior leaders and the occupants in their laboratories. Once the visit is completed, the visitors will fill out the simple online survey (Figure 4) that details their findings for the visit. The SCT will meet regularly to review the surveys and take actions based on the occupants’ safety concerns. This often involves following up with the team members in the laboratory to ensure they know their safety concerns were heard[aq][ar].

    Two potential and significant detractors for this program exist. The first challenge is that if the senior visitor does not show up for the visit, it creates a perception that senior management does not embrace safety as a top priority. The second pitfall is if the visitor uncovers a safety concern but does not fill out the survey to report it, or if the SCT is unable to address a safety concern. In either case, there would be a perception that a safety concern was reported to a senior leader and “nothing happened”. To minimize these risks, there is significant emphasis for the senior leaders to take ownership of the laboratory visits[as][at] and for the SCT to take ownership of the action items and ensure the team members know their voices have been heard.

    DISCUSSION OF SAFETY CONCERNS

    A summary of safety concerns is illustrated in Table 1. By a wide margin, clutter was the predominant safety concern in 2013, as it was noted in 50% of the laboratories visited. Three major safety programs within the department were inspired by early visits in order to reduce clutter in the laboratories. This included several rounds of organized general laboratory cleanouts to remove old equipment[au][av]. A second program systematically purged old and/or duplicate chemicals throughout the department.[aw] Most recently, a third program created a systematic long-term chemical inventory management system[ax][ay][az] that was designed to reduce clutter caused by the large number of processing samples stored in the department. This program has returned over 900 sq. feet of storage space to our laboratories and has greatly reduced the amount of clutter in the labs. Although clutter remains a common theme in our visits, the focus is now often related to removal of old instruments and equipment [ba][bb][bc][bd]rather than a gross shortage of storage space.

    In the first year of the program, one aspect of the laboratory visit was to discuss hazards associated with chemical reactions (feedback rate of 28%) and equipment setup (32%). A common thread in these discussions was expectations of collaboration and behavior from “visiting scientists”. These “visiting scientists” were colleagues[be][bf][bg][bh] and project team members from other laboratories coming to the specific laboratory in order to use its specialized equipment (examples: 20 liter reactors, automated reactor blocks). This caused some friction between the visiting scientists and their hosts over safety expectations. The SCT addressed this by convening a meeting between hosts and visiting scientists to discuss root causes of the friction and produce a list of “best practices”, shown in Figure 5, to improve the work experience for both hosts and visitors; this list is still in use for specialty labs with shared equipment today.[bi][bj]

    The next major category of safety concerns for our laboratory visits was associated with facility repairs, which were present in 24% of our first-year visits. These included items such as leaking roofs, unsafe cabinet doors, or delays in re-energizing hoods after fire drills. These were addressed by connecting our scientists to the appropriate building managers, who would be able to evaluate and address these safety concerns. After the initial year, most of the facility-related concerns transitioned to the addition or removal of storage solutions within specific laboratories. Currently, when new laboratories join the SPYDR Lab Visit program, major facility concerns are quickly reported.

    These visits also brought to light a common problem occurring in the laboratories: the loss of electrical power associated with circuit breakers being tripped when the electrical outlets associated with a laboratory hood were being used at capacity. This led to the identification of the need to increase the electrical capacity in the fume hoods, and this is now being addressed by an ongoing capital project.

    By the third year of the program, the nature of the safety concerns changed as many of the laboratory-based concerns had been addressed[bk]. Concerns raised now included site issues such as traffic patterns, pedestrian safety, walking in parking lots at night, and training. [bl]Items addressed for the site include the modification of on-site intersections and the movement of a fence line to enable safer crosswalks and improve the driver’s line of sight. A simple question raised about fire extinguisher training and who was permitted to use an AED device led to the expansion of departmental fire extinguisher training to a broader group and the offering of AED/CPR training to the broader organization.

    These safety concerns would not typically be detected by a laboratory safety inspection program and are only accessible by directly asking the occupants what their safety concerns are. [bm][bn][bo]Through the SCT, these issues were resolved over time as the team took accountability for moving each issue through various channels (facilities, capital projects, ordering of equipment) to develop and implement the solutions.

    CONCLUSION

    Since 2013, this novel program[bp] has successfully engaged our leadership with laboratory personnel and has led to hundreds of concerns being addressed[bq]. The concerns have arisen from over 300 laboratory visits, and more than a thousand safety conversations with our scientists. Because this is not a safety inspection program, these visits routinely uncover new safety concerns that would not be expected to surface in our typical laboratory inspection program. The SPYDR visit program is a strong supplement to the laboratory inspection program, and has produced a measurable impact on the safety culture.

    A collateral benefit of the program is that it drives social interaction within the department: senior leaders who might not otherwise interact with certain parts of the organization have a chance to visit those team members in their workplace and learn firsthand what they do in the organization[br][bs][bt][bu].

    [a]Only a bit bigger than some of the bigger graduate chemistry programs in the US.

    [b]How large is the Safety culture Team?

    [c]in the range of 6-10

    [d]Does this fall in the “other duties as assigned” category or more driven by personal interest in the topic?

    [e]Do position descriptions during recruitment include Required or Preferred skills that would add value to inclusion on the SCT?

    [f]I was unable to access the article in its entirety so this question may be answered there….  What is the composition of the SCT- who in the organization participates?

    [g]representatives from various departments and leaders of safety teams

    [h]do some of the senior leadership going to the labs have lab experience?

    [i]yes

    [j]Was this a formal thing or out of the blue visits?

    [k]initially planned as random, unannounced visits; we had to switch to scheduled visits in order to ensure scientists were present and available when leaders stopped in

    [l]We had the same thing in academic lab inspections. While unannounced visits seemed more intuitive, the benefit of the visits wasn’t there if the lab workers weren’t available to work with the inspectors. So scheduling visits worked out better in the end

    [m]In terms of compliance inspections, I would think that the benefit of scheduled inspections is that it can motivate people to clean their labs before the inspection. While I get that it would be preferable that they clean their labs more regularly, the announced visit seems like it would guarantee that all labs get cleaned up at least once per year. And maybe they’ll see the benefit of the cleaner lab and be more inspired to keep it cleaner generally – but I realize that might just be wishful thinking.

    [n]So important. We keep running into the issues of experimental safety getting missed by 1-shot inspections.

    [o]Some of that could be addressed with better risk assessment training of research staff.

    [p]concerns are generally wide ranging, most started out as lab centric in the early years then expanded beyond the labs

    [q]Are these other functional areas related to safety or other issues (e.g. quality control, business processes, etc.)

    [r]This seems key but also can be super hard to obtain.

    [s]I think that it requires leadership that is familiar with all of the different kinds of expertise in the organization to say that. Higher ed contains so many different types of expertise that it is difficult for leadership to know what this commitment entails

    [t]And far too often in my experience in academia those in leadership positions have limited management training, which can inhibit good leadership traits.

    [u]Many academics promoted into chair or dean level get stuck on  budgeting arguments rather than more strategic / visionary questions

    [v]I’ve found this expectation to be quite challenging at some higher ed institutions.

    [w]Every time I bring it up to upper management in higher ed, they say “of course safety is #1”, but they don’t want to spend their leadership capital on it.

    [x]the program was designed to give senior leaders a role created specifically for them

    [y]@Ralph I completely agree!

    [z]This approach seems to be a way for leadership to get involved without spending a lot of leadership capital.

    I always had my best luck “inspecting” labs when I could lead with science-related questions rather than compliance issues

    [aa]I think it is really cool that this is thought of expansively.

    [ab]Nice to not put bounds on safety concerns going into the conversation. Reinforced later in the paper through the identification and mitigation of hazards well outside the lab

    [ac]Are these legs connected to onboarding training for lab employees?

    [ad]This skill would be exceptionally important when discussing such issues with graduate students.

    [ae]Are scientists trained in this technique?  Or does the SCT have individuals selected for that skill set?  When I look around campus at TTU I can see lots of opportunity for collaboration by bringing “non-scientists” into the discussion to get new perspectives and possibly see new problems

    [af]This definitely takes practice, but it can also be learned in workshops and by observing good mentoring. The observation process requires a conscious commitment by both the mentor and the employee, though

    [ag]One thought, at least for me, was the interviewing experience senior folks would have, and this would be a chance to practice said skill

    [ah]Seems like the process could have some standard topics that can be replaced with new focus areas as the program matures or issues are addressed

    [ai]Lab clutter is an ongoing stress for me. Is the clutter related to current work or a legacy of previous work that hasn’t been officially decommissioned yet? 

    Did your organization develop a set of process decommissioning criteria to maintain lab housekeeping?

    [aj]Part of me feels that all researchers should at some point visit/tour a trace analytical laboratory. Contamination is always of such concern when looking for things at ppb/ppt/ppq levels that clutter rarely develops. But outside of a trace analysis laboratory it’s definitely a continuous problem in most research spaces.

    [ak]This is a good idea. I wonder if Bristol Myers Squibb has a program to rotate scientists among different lab groups to share “cross-cultural” learning?

    [al]@Chris – good point. I started research in a molecular genetics lab. While there were some issues, the benches and hoods were definitely MUCH cleaner and better organized because of concerns over contamination. Also, we have lab colonies of different insects in which things had to be kept very clean in order to keep lab-acquired disease transmission low for the insects. I was FLOORED by what I saw in chemistry labs once I joined my department. We very much had different ideas about what “clean” meant.

    [am]I really like this idea as well. Make sure everyone is on the same page.

    [an]I like the idea of shifting focus once the previous issues have been addressed

    [ao]Great to see emphasis on an often overlooked topic

    [ap]Would reviewing these legs annually be regular enough? Or too often?

    [aq]So important – people are more willing to discuss issues if they feel like someone is really listening and is prepared to actually address the issues.

    [ar]And it demonstrates true commitment to the program and improvements.  Supports the trust built between the different stakeholders.

    [as]Is there some sort of training or prepping done with these senior leaders?

    [at]a short training session occurs to introduce leaders to the purpose of the program

    [au]Thank you! This is a challenge for all laboratory organizations I have worked for

    [av]Agreed!  Too often things are kept even when there is no definitive plan for future use.

    [aw]What % of the chemical stock did this purge represent?

    [ax]I’m always amazed when I learn of a laboratory that attempts to function without a structured chemical management system.  The ones without are often those that duplicate chemical purchases, often in quantities of scale (for price savings) that far exceed their consumption need.

    [ay]I once asked the chem lab manager about this. He said that 80% of his budget is people and 15% chemicals. He’d rather focus his time on managing the 80% than the 15%.

    He had a point, but I think he was passing up an important opportunity with that approach

    [az]@Chris – and grad students waste loads of time looking for the reagents and glassware they need for their experiments. And when they find them, sometimes they have been so poorly stored/ignored that they are contaminated or otherwise useless. Welcome to my lab!

    [ba]is this more of a challenge in academia vs. industry?

    [bb]This is definitely a pretty big issue for us at the university I work at. Constant struggle.

    [bc]One of the things I found frustrating while working at a govt lab is that I found out that we legally weren’t allowed to donate old equipment. I was simultaneously attending a tiny PUI nearby who would have LOVED to take the old equipment off their hands. Now working in an academic lab, I have been able to snag some donated equipment from industry labs.

    [bd]@Jessica as someone presently in government research I share your frustration! I have to remind myself that the government systems are all too often set up to prevent abuse, rather than to be efficient and benevolent.

    [be]Are these other laboratories from within your organization or external partners?

    [bf]visitors from other labs within the department.

    [bg]We had that challenge to some extent, but the bigger issues arose when visitors from other campuses showed up with different safety expectations than we were trying to instill. International visitors were a particularly interesting challenge…

    [bh]@Ralph that was often my experience too: groups with dramatically differing safety expectations being asked to share research space.

    [bi]I wish this occurred with greater frequency in academia. Too often folks are more concerned about hurting a colleague’s feelings or ego than about having a conversation to address safety concerns.

    [bj]I like the best practices approach- less prescriptive and allows researchers some latitude in meeting the requirements.  Provides an opportunity for someone (who is a subject matter expert in their field) to come up with a better solution

    [bk]That’s great, shows a commitment to the program and supports the trust that has been built between the stakeholders.

    [bl]These are important issues in setting the tone of a safety culture for an organization

    [bm]Such an important statement here.

    [bn]Agreed!

    [bo]+1

    [bp]This is a good one!

    [bq]Since I’m sure these were tracked, this is a nice metric- prevalence of a particular concern over time.

    [br]does this go both ways at all? do the research scientists have the chance to ask how their research projects impact the goals of senior leadership/company?

    [bs]there is a social interaction aspect here where scientists get to interact with leaders they normally would not cross paths with; we can take this opportunity for our analytical leaders to visit chemists, chemistry leaders to visit engineers, and engineering leads to visit analytical chemists

    [bt]Did business leadership (sales, marketing, etc.) have the opportunity to see this kind of interaction? Or do they have separate interactions with lab staff?

    [bu]In higher ed, it would be interesting to take admissions staff on lab tours to inform them about what is going on there and potentially give feedback about what students and parents are interested in

    Constructing Consequences for Noncompliance

    04/21 Table Read for The Art & State of Safety Journal Club

    Excerpts from “Constructing Consequences for Noncompliance: The Case of Academic Laboratories” presented by Dr. Ruthanne Huising, Emlyon Business School

    Full paper can be found here: https://journals.sagepub.com/doi/full/10.1177/0002716213492633

    Introduction

    As a profession, contemporary scientists enjoy an unusual degree of autonomy and deference. Universities are professional bureaucracies (Mintzberg 1979). One side of the organization is collegial, collectively governed, participatory, consensual, and democratic. The other side is a Weberian, hierarchical, top-down bureaucracy with descending lines of authority and increasing specialization. These organizational structures may allow for differential interpretations of and responses to legal mandates and differential experiences of regulation and self-governance. They often disadvantage regulators and administrative support staff, who occupy lower-status positions with less prestige, in their efforts to monitor, manage, and constrain laboratory hazards (Gray and Silbey 2011). What is regarded as academic freedom by the faculty and university administration looks like mismanagement, if not anarchy, to regulators[a][b]. Herein lies the gravamen of the risk management problem: the challenge of balancing academic freedom and scientific autonomy with the demand for responsibility and accountability[c][d][e][f][g][h].

    We…describe the efforts of one university, Eastern University, to create a system for managing laboratory health, safety, and environmental hazards and to transform established notions that faculty have little obligation to be aware of administrative and legal procedures.[i][j][k] We describe the setting—Eastern University, an Environmental Protection Agency (EPA) inspection, and a negotiated agreement to design a system for managing laboratory hazards—and our research methods.

    We describe efforts, through the design of the management system, to create prescribed consequences for noncompliant practices in laboratories. We show that in an effort to design a management system that communicates regulatory standards, seeks compliance with the requirements[l], and then attempts to respond and correct noncompliant action, Eastern University struggled to balance case-by-case discretion consistent with academic freedom and scientific creativity with the demands for consistent conformity, transparency, and accountability for safe laboratory practices.

    Constructing Organizational Consequences at Eastern University: Management System as Solution?

    During a routine inspection of Eastern University, a private research university in the eastern United States, federal EPA agents recorded more than three thousand violations of RCRA, CAA, CWA, and their implementing regulations. Despite the large number of discrete violations, both the EPA and the university regarded all but one as minor infractions. The university’s major failure, according to the EPA, was its lack of uniform practices across departments and laboratories on campus.[m][n][o][p] There was no clear, hierarchical organizational infrastructure for compliance with environmental laws, no clear delineation of roles and responsibilities, and, most importantly, no obvious modes of accountability for compliance….Without admitting any violation of law or any liability, the university agreed in a negotiated consent decree to settle the matter without a trial on any issues of fact or law.

    At Eastern, the management system reconfigured the work of staff and researchers by moving compliance responsibility away from centralized specialists to researchers working at the laboratory bench. Scientists became responsible for ensuring that their daily research practices complied with city, state, and federal regulations. [q][r]This shift in responsibility was to be facilitated by the creation of operating manuals, inspection checklists, enhanced training, and new administrative support roles.[s][t][u][v][w][x][y][z][aa][ab][ac][ad][ae]

    Research Methods: Observing the Design of an Environmental, Health, and Safety (EHS)  Management System

    From 2001 through 2007, we conducted ethnographic fieldwork at Eastern University to  investigate what happens when compliance with legal regulations is pursued through a management system.

    The fieldwork included observation, interviewing, and document collection. It was supplemented by data collection with standardized instruments for some observations and via several surveys of lab personnel and environmental management staff. For this article, we draw primarily from notes taken at meetings of the committee designing the system, presenting notes from the discussions concerning a catalog of consequences for poor performance required by the consent decree.

    Building Responsiveness and Responsibility into an EHS-MS: Consequences for Departures from Specified Operating Procedures

    The final version [of the management system manual] was agreed to only after hundreds of hours of negotiations among four basic constituencies: the academic leadership, the university attorney overseeing the consent decree, the environmental health and safety support staff located within the administration (a nonacademic hierarchy), and the lab managers and faculty within the academic hierarchy.[af][ag][ah]

    These descriptions explain that each person, or role incumbent, works with a committee of faculty and staff of safety professionals that provides consultation, monitoring, and recommendations, although legal responsibility for compliance is placed entirely within the  academic hierarchy, with ultimate disciplinary responsibility in a university-wide committee.

    >What will constitute noncompliance? 

    Despite the adoption of the original distinctions between minor, moderate, and very serious incidents [described in a section not included in this Table Read], discussions continued about the relationship between these categories and the actual behavior of the scientists. How would the system’s categories of “acceptable” and “unacceptable” actions map onto normal lab behaviors? How much would the lives of the lab workers be constrained by overly restrictive criteria?[ai][aj][ak] As Professor Doty said, no one wanted the system to be like police surveillance. Labs are places where science students live, after all.[al][am] Once a basic list of unacceptable conditions and actions was created and communicated through safety training, the salient issue would be intentionality, as it is in much conventional legal discourse.

    >Who will identify noncompliance?

    Marsha (attorney for University): I think we’re going to need to be more specific, though, for university-wide committee policy. If the consequence of a particular action is termination from Eastern, then there’s policy in place for that, but what leads up to that? When do you shut down a lab?[an][ao] When do you require faculty to do inspections in departments like XYZ? A lot of people here have partial responsibility for things—the system may work well, but it’s not[ap] always clear who’s responsible. Where we need to end up is to remember this key link to the PI. In order for this to work, I think it really comes down to the PI accepting responsibility[aq], but how they deal with that locally is a very personal thing[ar][as][at][au]. I don’t think we should prescribe action, tell the PI how to keep untrained people out of [the] lab. But we need to convince the faculty of this responsibility.[av]

    >How will those formally responsible in this now clearly delineated line of responsibility be informed? 

    Who will tell the professor that his or her lab is dirty or noncompliant?[aw][ax][ay][az][ba][bb][bc][bd]

    Informing the responsible scientist turns out to be a complex issue at the very heart of the management system design, especially in the specifications about distribution of roles and responsibilities. In the end, Eastern’s EHS-MS named a hierarchy of responsibility, as described above, from the professor, up through the university academic hierarchy, exempting the professional support staff.[be][bf]

    Despite the traceable lines of reporting and responsibility on the organizational charts, consultation, advice, and support was widely dispersed so that the enactment of responsibility and holding those responsible to account were constant challenges and remain so to this day. Most importantly, perhaps, because the faculty hold the highest status and yet hold the lowest level of accessibility and accountability, the committee was vexed as to how to get their attention about different types of violations.

    Marsha: …So, while you’re defining responsibilities and consequences, make sure you don’t relieve the PI of his duties. You can assign them helpers, but they need to be responsible[bg][bh][bi]. There can be a difference between who actually does everything and who is responsible[bj]. You need to make sure people are clear about that.

    Marsha: We need to convince the faculty of this responsibility….This is what we should be working on this summer. This is unfortunately the labor intensive part—we need to keep “looping back”—going to people’s offices and asking their opinions so they don’t hear things for the first time at [some committee meeting].[bk]

    Marsha: We need department heads and the deans to help us with PIs in the coming months…. We need to get PIs—if we don’t get them engaged the system will fail….We will get them by pointing out all the support there is for them, but bottom line is they have to buy into taking responsibility.[bl][bm][bn][bo][bp][bq][br][bs]

    In the end, the system built in three formal means to secure the faculty’s attention and acknowledgement of their responsibility for laboratory safety: (1) A registration system was implemented, in which the EHS personnel went from one faculty office to another registering the faculty and his or her lab into the system’s database. The faculty were required to sign a document attesting that they had read the list of their responsibilities and certifying that the information describing the location, hazards, and personnel in their lab was correct for entry into the database. (2) All faculty, as well as students, were required to complete safety training courses. Some are available online, some in regularly scheduled meetings, and others can be arranged for individual research groups in their own lab spaces. The required training modules vary with the hazards and procedures of the different laboratories. (3) Semiannual university inspections and periodic EPA inspections and audits were set up to provide information to faculty, as well as the university administration and staff, about the quality of compliance in the laboratories. Surveys of the faculty, students, and staff, completed during the design process and more recently, repeatedly show that familiarity with the EHS system varies widely.

    Although the audit found full compliance in the form of a well-designed system, it also revealed that many of the faculty and some administrators did not have deep knowledge of it,[bt] despite the effort at participatory design.

    >What action should be taken? 

    Consequences vary with the severity of the incident.

    It was essential to the design of the system that there be discretionary responses to minor incidents, which are inevitably a part of science.

    It was assumed[bu] that regular interaction with the lab safety representative, discussions in group sessions, and regular visits by the EHS coordinator would identify these and correct them on the spot with discussion and additional direction. The feedback would be routine, semiautomatic in terms of the ongoing relationships between relatively intimate colleagues in the labs and departments. No written documents would even record the transaction unless it was an official inspection; weekly self-inspections by the safety reps were not to be fed into the data system.[bv][bw][bx][by][bz][ca][cb][cc][cd][ce] 

    Consequences for moderately serious incidents include one or more of the following actions: oral or written warning(s) consistent with university human resources policies; a peer review of the event with recommendations for corrective action; a written plan by a supervisor [cf]that may include retraining, new protocols, approval from the department EHS committee, and a follow-up plan and inspection; or suspension of activities until the corrective plan is provided, or completed, as appropriate.

    A list of eight possible consequences[cg][ch][ci] accompanies the definition of a very serious incident. The list begins with peer review and a written plan, as in moderately serious incidents, but then includes new items: appearance before the university’s EHS committee or other relevant presidential committees to explain the situation, to present and get approval of a written plan to correct the situation, and to implement the plan; restriction of the involved person’s authority to purchase or use regulated chemical, biological, radioactive, or other materials/equipment; suspension or revocation of the laboratory facility’s authorization to operate; suspension of research and other funds to the laboratory/facility; closure of a lab or facility; and applicable university personnel actions, which may include a written warning, suspension, termination, or other action against the involved person(s) as appropriate.

    These descriptions illustrate the sequential escalation of requirements and consequences and display, rather boldly we think, the effort of the committee to draft a legal code for enforcement of the management system’s requirements.

    When the committee completed its work, Marsha, the lead attorney, went to work editing it. When it was returned to the committee, the changes, many of which were grammatical rather than substantive, nonetheless so offended the group that participation in the planning process ceased for a long while.[cj][ck][cl] The associate dean communicated to the EHS leadership that morale among the coordinators and other committee members from the laboratories was low and that their willingness to do their best was being compromised. They believed that the decisions they made collectively in the working meetings were being undermined and changed so that at subsequent meetings, documents did not read as they were drafted; they believed that crucial “subtleties, complexities and nuances to policies and proposals” were being ignored, if not actively erased. If they were to continue working together, they asked for complete minutes and officially recorded votes.

    Nonetheless, it was the scientists’ and their representatives’ fear that the system would in fact become what a system is designed to be: self-observant and responsive and, thus, would eventually and automatically escalate what were momentary and minor actions into moderate, if not severe, incidents. This anxiety animated the planning committee’s discussions, feeding the desire to insert qualifications and guidelines to create officially sanctioned room for discretionary interpretation.

    >Who will be responsible for taking action to correct the noncompliant incident?

    Clearly, most minor incidents are to be handled in situ, when observed, through informal conversation, and the noncompliant action is supposed to be corrected by the observer’s instruction and the lab worker’s revised action. Some noncompliance is discovered through inspections that inform the PI of noncompliant incidents; a follow-up inspection confirms that the PI instructed her students to change their ways. Very few incidents actually move up the pyramid of seriousness.

    A significant proportion of the chronically reported incidents are associated with the physical facilities and materials in the laboratories[cm], such as broken sashes on the hoods, eye washes not working or absent, missing signage, inadequate tagging on waste, empty first aid kits, or crowding—simply not enough benches or storage areas for the number of people and materials in the lab…Corrections are not always straightforward or easy to achieve.[cn] Tagging of waste, proper signage, and adequate first aid kits may be fixed within a few minutes by ordering new tags and signs from the EHS office and a first aid kit through the standard purchasing process. While the lab may order its own supplies, it must wait for the EHS office to respond with the tags and signs. The hood sashes and eye wash repairs depend on the university facilities office, which is notoriously behind in its work and thus appears unresponsive. In nearly every conversation about how to respond to failed inspections, discussion turned to the problems with facilities (cf. Lyneis 2012[co])…crowding is often the consequence of more research funding than actual space: the scientist hires more students and technicians than there are lab benches. This has been a chronic issue for many universities, with lab construction lagging behind the expansion of research funding over the last 20 years.

    Just as the staff experienced the faculty as uninterested in the management system, the scientists experienced a “Don’t bother me” attitude in the staff, because often the ability to take corrective action does not rest entirely with the persons formally responsible for the lab.[cp][cq][cr][cs][ct] The PI depends on the extended network of roles and responsibilities across the university to sustain a compliant laboratory. This gap between agency (the ability to perform the corrective action) and accountability (being held responsible and liable for action) characterizes the scientists’ experience of what they perceive as the staff’s attitude of “Don’t bother me.” The management system is, after all, a set of documents, not a substitute for human behavior.

    Discussion and Conclusion

    In this article, we have used the case of Eastern University to show how coordination and knowledge problems embedded in complex organizations such as academic research laboratories create intractable regulatory and governance issues and, unfortunately, sometimes lead to serious or even deadly outcomes. Overlaying bureaucratic procedures on spaces and actors lacking a sense of accountability to norms that may in real or perceived terms interfere with their productivity highlights the central challenge in any regulatory system: to balance autonomy and expertise with responsibility and accountability. Under these conditions, accountability may be, in the end, illusory.

    …Rather than an automatically self-correcting system of strictly codified practices, Eastern’s EHS-MS relies on case-by-case discretion that values situational variation and accommodation. Compromises between conformity and autonomy produce a system that formally acknowledges large and legitimate spaces for discretionary interpretation while recognizing the importance of relatively consistent case criteria and high environmental, health, and safety standards. [cu][cv][cw][cx]Marsha, Eastern’s principal attorney, noted the difficulties of balancing standardized ways of working in high-autonomy settings, voicing concern about “the exceptions [that] gobble up the rule.” The logic of the common law is reproduced in the EHS-MS because, like our common law, only some cases become known and part of the formal legal record: those that are contested, litigated, and go to appeal. In this way, the formal system creates a case law of only the most unusual incidents while the routine exceptions gobble up the rule.

    …safer practices and self-correcting reforms are produced by surrounding the pocket of recalcitrant actors who occupy the ground level of responsibility with layers of supportive agents who monitor, investigate, and respond to noncompliant incidents. In the end, we describe not an automatic feedback loop but a system that depends on the human relationships that constitute the system’s links.[cy][cz][da]

    Group Comments

    [a]Interesting set of different perspectives…

    [b]The quote “They spent all this time wondering if they could, that no one thought to think about if they should” is the first thing that comes to mind when I read this sentence.

    [c]Why are these viewed as diametrically opposed? They can be complementary

    [d]In practice, have you found this to be the case? I find this perspective interesting, because at least in my experiences and the experiences of those I’ve interacted with, practically, they do often conflict (or, at least are *perceived* as conflicting, which really may be all that matters, culturally)

    [e]In many of the issues I have explored as a grad student, I have noticed that this “freedom” often translates into no one actually being responsible for things for which someone really SHOULD be responsible. And if faculty step up to take responsibility, they are often taking on that responsibility alone.

    [f]I think it’s strongly dependent on awareness (often via required training) and leadership expectations.  In instances where both were sound and established, I’ve seen these elements to be complimentary.

    [g]I wonder what this training would look like. In my experience, a lot of training is disregarded as an administrative hoop to jump through every once in a while. I also think it’s wildly dependent on the culture of the university, as it exists. There is often little recourse to leverage over faculty to modify their behavior if it’s not (1) hired in, (2) positively incentivized, or (3) socially demanded by faculty peers. It seems difficult to me to try to newly instill such training requirements, with the goal of making PIs aware of their responsibility for ensuring safety. If there are no consequences (short of a major accident drawing the eye of the law), no social pressure to engage in safe behavior, and no positive incentive structure to award participation, why would faculty change their behavior? Many of them are already aware of the safety requirements—many of them just choose to prefer short term productivity and to prioritize other metrics. In an ideal world, I think I would agree with you that training at the University level would be sufficient, but I think there needs to be a much broader discussion of faculty *motivation*, not just their awareness.

    [h]Agreed, the culture/environment aspects are huge in terms of how such awareness training is received.  There’s multiple incentive models, and I’d hope that legal liability isn’t the only one that would lead to proactive action.

    [i]Sounds like a training deficiency if that’s the perception.

    [j]What are you defining as “training”?

    [k]Awareness of their obligations/requirements for both administrative (university) and regulatory elements.  This should be provided by the institution.

    [l]which should include process safety

    [m]This is a bit vague. Uniform practices? A one size fits all approach?

    [n]This caught me out a bit as well. If all they were finding were minor infractions, do we actually have a problem here?

    [o]I’m curious what these “minor infractions” were, though. What’s the scale? What’s the difference between a major and minor infraction? 3,000 opportunities for individual chemical exposures or needle pricks may be considered small when it comes to the EPA, but it seems quite substantial when it comes to individual health and safety

    [p]In general, minor infractions involve things like labeling of waste containers, storage times in accumulation areas, etc., without any resulting physical impacts. The EPA writes these up and can fine for them, but the infractions don’t involve physical damage

    [q]Interesting…. this is what caused problems for us at Texas Tech before our accident. Individual oversight often meant no oversight…

    [r]Agreed, some form of checks-and-balances should be implemented to verify elements are being completed.

    [s]In my experience this is often a technique employed by higher administration to shift the blame onto frontline researchers and their supervisors.

    Combine this with an underfunded EHS department, and the situation results in no oversight or enforcement of these requirements.

    [t]But administrators are not experts in the way the PIs claim to be. If PIs want freedom and recognition as experts, then they do need to be held accountable but also EMPOWERED through support mechanisms. Responsibility without empowerment is useless.

    [u]My eyes immediately went to the “new admin support roles.” If you have someone who understands both the regulations and the relationships in the department, I would think that person would be more effective than someone who shows up from a different department once per year with a checklist.

    [v]Agree with Anonymous. If you want the freedom, you take on the responsibility. Don’t want the responsibility, hand over the freedom.

    [w]I agree that faculty should be responsible. The common argument I hear is that faculty aren’t in the lab all the time and can’t always be responsible for what happens day to day. Sort of a cynical view that says that faculty are the idea people and others (students?) should be the implementers…

    [x]I don’t think anyone expects them to be responsible for everything every single day though. I think the idea is that they should be responsible for setting the tone in their lab and having standards for the graduate researchers working in their labs – and they should be making an effort to visit their labs in order to walk around and make sure everything is operating as it should.

    [y]@Jessica I agree, but if they aren’t overseeing the day-to-day elements, they need to assign that responsibility to someone and make that assignment known to the research group.  AND they need to support and empower that individual.

    [z]I think I agree with Jessica here. Perhaps, someone with an eye and responsibility for safety NOT being in the lab on the day-to-day is part of the problem. maybe it *should* be a responsibility of PIs to visit their labs, to organize them, to keep up safety standards, inventory, etc. Or, perhaps it is their responsibility to hire someone to do this, specifically. Perhaps these responsibilities should be traded for teaching responsibilities, and thus institutions with high research focus can focus on hiring research teachers and managers (PIs) who are trained as such, and teachers who are actually trained as teachers.

    [aa]In the 1970’s and 80’s, externally funded PIs would hire people to do this kind of stuff (often called “lab wives”) but funding for this function was shifted to additional student support

    [ab]Blending what Sara and Anonymous said: while student support has gone up, it has also become more transactional in that it is more linked to teaching duties, while research assistantships tend to be the exception at many “research heavy” universities.

    [ac]I think the responsibility of the PI should be first to open the door for safety related discussions amongst the group, and then to make the final decision on acceptable behavior if consensus is not achieved. Following that, they should bear the responsibility for any ramifications of that decision. I think they can achieve awareness of what is happening in their lab without being there every day, but they need to continuously allow the researchers to voice their concerns

    [ad]I also think that PI’s might need tools to help them be accountable.  Particularly new faculty

    [ae]This is the unfortunate part of interdepartmental politics: how far is the new faculty member willing to speak up, when in 5 years the same older faculty members will be a part of their tenure decision?

    [af]This goes to early commentary on the shifting of regulatory compliance to researchers: were any researchers involved in these discussions or were PIs/lab managers speaking for them in these discussions?

    [ag]I’d hope some (if they existed at this institution) laboratory safety officers were participants.

    [ah]Lab safety officers were often active participants, but often on a parallel track to the faculty-level discussions. I guess the question is which group carried more clout in the system design?

    [ai]The wording of this question seems to imply that lab safety impedes lab productivity

    [aj]Building upon this, there is evidence that when safety concerns are not an issue (due to correct practices) productivity is actually better.

    [ak]I don’t have evidence for this, but I think it depends on how prevalent compliance is. If everyone is being safe in their labs then I think overall productivity would go up. If some people start cutting corners then while they may get short term improvement in productivity, in the long term everybody suffers (evacuations and accident investigation halting research, bad laboratory practices accumulating, etc.)

    [al]Does this imply that “students” are a class of people whose rights and responsibilities are different from other people in the laboratory?

    [am]Or to put it another way – why are students living in their labs?

    [an]One of the EHS professionals involved in these discussions told me “When you shut down one laboratory you have a mad faculty member; when you shut down a second, you have an environmental management system.”

    [ao]Faculty do notice when their colleague’s labs are shut down…

    [ap]In what way is the system “working well”? What mission is being served by the way the original system was structured?

    [aq]for compliance in their lab

    [ar]This seems contradictory to earlier comments that researchers are responsible for their own compliance

    [as]The government does not believe this. They believe that the president of the institution is responsible for institutional compliance. The president of the institution may or may not believe that

    [at]However, since the presidents turn over much more quickly than faculty, faculty often outwait upper admin interest in this issue

    [au]I hear this a lot, but then I have to wonder what the word “responsibility” means in this context. The president of my university has never been to my lab, so how would he be responsible for it?

    [av]The subtext I see here is that it would be awfully expensive to have enough staff to do this

    [aw]A ‘dish best served’ by faculty peers rather than university admin staff or legal.

    [ax]I don’t believe faculty will have these conversations with each other due to politics. We can have someone they respect make assessments by doing a myriad of approaches including having industry people come visit.

    [ay]Or in other words, who is willing to be the bearer of bad news?

    [az]I like this. When I have discussed having issues in my own lab with others outside of my institution, so many respond with “tell the head of the department.” And I’m surprised that they don’t seem to realize that this is fraught with issues – this person’s labs are right across the hall from mine – and this person visits his lab far more often – and this person has seen my lab – he already knows what is going on and has already chosen to “not see it.” Now what?

    [ba]Building upon that more: “head of department” does not actually mean higher in the hierarchy of authority. These people are still colleagues at the same level of authority most of the time.

    [bb]I remember hearing, during the research process for the NAS report on Academic Lab Safety, that Stanford Chem had an established structure for true peer inspections of other faculty spaces and, in some instances, risk assessment of new research efforts. From what I recall, that was successful and implemented because it was the expectation. So maybe it just needs to be the expectation, rather than optional. Another alternative is to staff the administrative side with folks who have research experience; then the message may be better received.

    [bc]I really like that idea – that faculty would be engaged in the risk assessment of new research efforts. You are right that it would have to be established as a norm at the university – not as optional work – or work that goes to a committee that virtually no one is on.

    [bd]The biosafety world is run by a faculty involved oversight committee for grant proposals, for historical funding reasons. My experience is faculty are very reluctant to approach this process critically as peer review, but it does put biosafety issues on the agenda of the PI writing the grant

    [be]Interesting that this group is left out. As an EHS employee I see an opportunity to provide consistency and impartiality across departments. EHS can also disseminate best practices that are implemented in some labs

    [bf]Absolutely, and serve as a valuable mechanism for knowledge transfer.

    [bg]Yes, delegate task responsibility but not ultimate liability.

    [bh]Yes, we have a form we have for the delegation of tasks to the Lab Safety Coordinator (LSC). Ultimate responsibility at PI level, but allow them to delegate tasks using this form.

    [bi]That is great that its formally documented!

    [bj]Lawyers believe this. Safety professionals not so much.

    [bk]Should this responsibility be part of the onboarding process for new laboratory workers in general? I would note that “Eastern U” has had problems with grad student and postdoc misbehavior in the lab, including criminal acts against lab mates. These are handled by police rather than EHS, but EHS is often involved in assessing the degree of the problem.

    [bl]Herein lies one of the problems with the unorganized academic hierarchy that PIs fall into. While systems that improve safety should always attempt to be non-punitive, at the end of the day the repeat offenders still have the freedom not to comply.

    This can become problematic if that particular faculty member has a more influential role and position in their department.

    [bm]I agree with you that in this case study faculty may have the freedom not to comply until the situation escalates. However, this is not always the case. Some universities have the authority to shut down labs. There may be a mad faculty member, but it is a powerful statement to the rest of the faculty to get their acts together.

    [bn]The thing about this, though, is that shutting down a lab is very nearly the “nuclear” option. I would imagine it would be incredibly problematic to determine who deserves to have their lab shut down and who doesn’t. And what has to occur before the lab is allowed to reopen.

    [bo]The other problem this presents is the impact of a lab closure on “innocent” grad students in the lab and collaborators with the lab, both on campus and externally. These factors can make for a very confusing conversation with PIs, chairs, and deans, which I’ve had more than once

    [bp]In my experience, the only time admin and safety committees have even considered lab shutdown is when there’s outright defiance of the expectations and no effort made to resolve identified safety and compliance issues. I’m not sure I’d consider those criteria to be a “nuclear” option; it seems more like enforced accountability IMHO.

    [bq]I feel like what you just described is what I meant by the “nuclear” option. There doesn’t seem to be anything between “innocuous notice” and “lab shutdown.”

    [br]Agree on the point as well about graduate students being the ones who actually “pay” in a lab shutdown. If a faculty member is tenured, then they are getting their paycheck and not losing their job while their lab is shut down. However, it directly harms the graduate students in no uncertain terms.

    [bs]@Jessica Then it sounds like the institution lacks some form of progressive discipline/resolution structure if there are only those two options. Sadly, some of that (a progressive discipline structure) needs to be created with the involvement of HR to ensure labor laws and bargaining unit contracts aren’t violated. But there absolutely needs to be a progressive spectrum of options before lab shutdown is all that’s left. And yes, I agree that the graduate student(s) bear a disproportionate penalty at times in the event of a lab shutdown.

    [bt]Is this after having the faculty sign on to the program through the registration process?

    [bu]I would say “hoped” here rather than “assumed”

    [bv]Why is there no documentation of the informal interactions/feedback? Or was it optional? From a regulator’s perspective, if it isn’t documented, it never happened.

    [bw]Good point.  The accountability system could take the informality into account but if a lab racks up a bunch of minor, informal infractions it is probably indicative of culture.

    [bx]I also worry that not having it documented could lead to the ability for the feedback to be “forgotten” or denied as to having happened in the future if a larger infraction occurs.

    [by]This is something that was discussed as problematic in the paper. If it is undocumented, no one knows just how many warnings an individual has had. It is also one of the problems with solely relying on researchers watching out for each other. You don’t know how many times a conversation was had and whether the issue was fixed OR the person just got tired of correcting their colleague.

    [bz]We are trying a pilot program using this approach: an EH&S building sweep to build relationships with labs and let them know we don’t just visit to document non-compliance. We are not sure what we’ll document, since these are friendly visits.

    [ca]Have you read Rosso’s paper about the SPYDR program they do at BMS? I thought it was a very interesting approach that could be adapted to the academic environment.

    [cb]https://drive.google.com/file/d/1oCu1q6xqc12PpArTaDQ3lmlIvk-zpFbk/view?usp=sharing

    [cc]Thank you Jessica!

    [cd]What I particularly like about the feel of this approach is that they are having management intentionally visit labs in order to ASK THE RESEARCHERS what they feel like the problems are. This speaks to me because, as a graduate student, I was frustrated with EHS inspections in which they were focused on their checklist and minor infractions that didn’t matter while they walked right past really problematic things that were not on the checklist – and I would’ve much rather been encouraged to discuss those issues!

    [ce]@Jessica Great point about inspectors being too ‘tunnel visioned’ on their compliance checklist and not able to be truly receptive to bigger issues, whether observed or vocalized during (collaborative) discussion with the research group members.

    [cf]Is this a supervisor of the lab or to the lab?

    [cg]There was a PhD dissertation at “Eastern U” that described how this was negotiated and the impact of those negotiations on the design of the computerized database that was used to implement the system. It’s a fascinating story to read.

    [ch]Are you able to quickly find a link to share here?

    [ci]That would be great to see!

    [cj]This seems very strange to me. Was any additional information provided about the substantive changes that were made that could’ve potentially justified this type of response?

    [ck]In working with the EPA, an agreement we were working on was almost scuttled by too many commas in a key sentence. It took 6 months to resolve it because sets of lawyers were convinced that those commas changed the meaning entirely. I couldn’t see the difference myself

    [cl]The way through this was for the “clients” (technical people from the school and the EPA) to get together without the lawyers in the room, come to a mutual understanding, and then tell the lawyers to knock it off

    [cm]Hardly fair to hold PIs accountable but give the university a pass on providing a safe workplace. Although since this is outside the “academic freedom” morass, it should be easier to address

    [cn]Or are expensive and inappropriate for the PI to do.

    [co]I have some of the same problems with facilities

    [cp]I am disappointed that this is not a bigger part of this paper. Faculty are often characterized as “not caring” when I think the situation is much more complex than that. As a graduate student trying to get problems fixed, I can certainly attest to how difficult this is – even to know who to go to, who is supposed to be paying for it, who is supposed to be doing the work – and while I am chasing all of that, I am not getting my research done. It can be atrocious to try to get responsiveness within the system – and I can see why it would be viewed as pointless to chase by researchers at least at some institutions.

    [cq]As an EHS staff member I can see a cultural rift between the groups.  Comes down to good leadership at the top which is in short supply.  Faculty and staff all work for the same university…

    [cr]@Jessica I agree that getting resolution on infrastructure issues as a graduate student can be a huge time sink and at times even ineffective.  That’s where having an ally/collaborator from the professional staff or EHS groups can be invaluable. They often know the structure and can help guide said efforts.

    [cs]I would be interested in what percentage of the faculty had this attitude. In my experience, it represents about 20% of an institutional faculty population; 20% of the faculty are proactive in seeking EHS help; and the remaining 60% are willing to go with the departmental flow with regards to safety culture.

    [ct]” Faculty and staff all work for the same university…” and work on the same mission, although in very different ways. 

    Another challenge is that many faculty don’t have a lot of identification with their host institution and often perceive that they need to change schools in order to improve their lab’s resources or their personal standing in the hierarchy

    [cu]And if we don’t have experts in actual scientific application looking at the problems or identifying problems then the system is broken. A lab can look “clean and safe” but be filled with hazards due to processes. I believe a two tier audit system needs to be in place: First tier compliance Second tier: Safety in lab processes

    [cv]YES! I have often been frustrated when having discussions in the “safety sphere” on these issues. By coming at it from the “processes” perspective, the compliance rules make a lot more sense.

    [cw]Reliance on point-in-time inspections can be misleading. My group (EHS) does this for all labs across campus. It is a good start; it ensures the lab space is basically safe. But what is missed is what happens when people work in the lab (processes). In a past life, in a different industry, I worked with a group to develop best practices for oil spill response. If response organizations subscribed to the practices, they had guidelines on how to implement response strategies. Not super prescriptive, but they set some good guardrails. Might be useful here?

    [cx]Experts in process safety are often soaked up by larger industries with much more predictable processes. The common sense questions they ask (what chemicals do you use?, who will be doing the work?) are met with blank stares in academia

    [cy]I think this is a profound observation which leads to the success or failure of this kind of approach.

    [cz]…this is also foreshadowing for some of her other papers on this case study :).

    [da]Spooky!

    Student-Led Climate Assessment Promotes a Healthier Graduate School Environment: CHAS Journal Club

    Climate Survey Team representatives: Rebeca Fernandez (she/her/hers), Tesia Janicki (she/her/hers)

    On March 10, 2021, the CHAS Journal Club continued its discussion of the paper “Student-Led Climate Assessment Promotes a Healthier Graduate School Environment.” The original paper can be found at https://pubs.acs.org/doi/full/10.1021/acs.jchemed.9b00611. One of the authors, Rebeca Fernandez, led the discussion. Comments from the Table Read (which was led by Tesia Janicki) are also below.

    03/03 Table Read for The Art & State of Safety Journal Club
    Excerpts from “Student-Led Climate Assessment Promotes a Healthier Graduate School Environment”

    INTRODUCTION


    Recent reports have emerged that highlight a prevalence of mental health disorders among graduate students. These studies show that graduate students are disproportionately susceptible to mental health disorders when compared to the general population, due in part to unique challenges associated with the graduate school experience[a][b][c][d][e][f][g]. The majority of incoming students are recent college graduates in their early 20s, and their transition to graduate life is typically preceded by a relocation that separates them from their social networks and support systems[h][i][j][k]. Graduate programs that are able to assist students during the transition into their departments will benefit from a happier, healthier, and more productive group of young researchers. Although it is not universally recognized among faculty [l][m][n]that chemistry graduate programs need to adapt to better support the needs of graduate students, a few departments have initiated major institutional efforts to improve the research and educational climate in graduate school.
    Here, we define “climate” to encompass all aspects of the graduate student experience such as research practices, mentorship, social activities, work-life balance, and cultivation of a healthy lifestyle. Along with the general challenges associated with graduate school, each individual department has unique elements that influence its culture, such as size, demographics, geographic location, and whether the university is private or public. These differences notwithstanding, many challenges that graduate students experience appear to be universal[o][p]. The accurate evaluation of graduate program climate and student mental health has been hindered by the transient nature of the graduate student population[q][r], but encouragingly, graduate programs across the country have begun developing metrics to examine departmental climate and the graduate student experience. In 2014, the University of California, Berkeley administered a survey to assess the well-being of graduate students in all departments at the university. In 2018, Mousavi et al. demonstrated the successful implementation of a survey tailored to the Department of Chemistry at the University of Minnesota (UMN). At UMN, the development of a climate survey was initiated by faculty[s][t][u][v][w], with student involvement, and the survey results were used to guide institutional changes to improve graduate student culture. These results are further discussed alongside our Recommendations and Initiatives.

    SURVEY PROCESS


    Survey Development

    The UW−Madison climate survey was developed by the Climate Survey Team (CST), a group composed of eight students from different research laboratories[x][y] and years in graduate school who provided unique perspectives on the graduate school experience. The chemistry department at UW−Madison represents one of the largest national programs and has non-uniform demographics throughout the department, i.e., among research groups, across subfields, and between years (the breakdown of department demographics versus survey respondents is provided in the Supporting Information). Given these variations, we sought input from fellow graduate students, faculty, staff, representatives from University Health Services (UHS), and select department alumni, including a human resources expert, throughout each step of the survey design process.[z][aa][ab][ac]

    REPRESENTATIVE SURVEY FINDINGS[ad][ae]

    Emotional Well-Being and Work-Life Balance


    Graduate students and postdocs were asked what factors influenced their emotional well-being over the course of the previous year. It is clear from these data (Figure 2) that personal relationships, ranging from principal investigator (PI) involvement to peer interactions, have a significant impact on the emotional well-being of students. Notably, the advisor/PI was ranked highly as both a positive and negative influence, depending on the respondent, representing the outsized effect of PI−student interactions on the overall graduate school experience. [af][ag][ah][ai][aj][ak][al]Of the respondents who indicated that their relationship with their advisor/PI had a negative impact on their emotional well-being at least once per month (22%), 5% identified as male and 17% did not identify as male (details of how demographic responses were grouped can be found in the Supporting Information). From these data, it is clear that differences in PI−student relationships may be related to gender; however, we could not elucidate more specific causes from this survey. The significance of the PI−student relationship, regardless of gender, is further supported by a global PhD student survey in 2017 that reports, “good (PI) mentorship was the main factor driving (graduate student) satisfaction levels”.

    Work Environment


    Demographic correlations regarding perceived mentorship efficacy revealed some dependence on ethnicity. For example, 83% of those who identified as Caucasian experienced effective mentorship by senior graduate students/postdocs compared to only 58% of those who did not identify as Caucasian. Similarly, 90% of respondents who identified as Caucasian reported supportive interactions with their PI[am][an][ao], in contrast with 63% who do not identify as Caucasian. Variations in responses based on ethnicity reflect many factors, such as the diversity (or lack thereof) among the students and faculty members, which will vary across departments and over time. In response to trends revealed from demographic correlations and the negative experiences reported in Figure 3b, we recommend the installation of regular implicit bias training, mentorship,[ap][aq] and conflict resolution workshops[ar][as][at][au][av][aw].

    Outside the scope of this survey, we and other departments are making a concerted effort to improve minority representation in chemistry[ax][ay][az], necessitating shifts in climate that respect and integrate a more diverse student pool. [ba][bb][bc][bd][be]These data serve as an important baseline from which to gauge the effect of new policies through future assessments. The climate survey administered by the Department of Chemistry at the University of California, Berkeley echoes these sentiments and emphasizes the importance of creating a welcoming work environment for women and underrepresented minorities.

    Mental Health[bf][bg][bh][bi][bj][bk][bl][bm][bn][bo]

    A cumulative 59% of graduate students and postdocs reported feeling depressed or sad (a symptom of depression) at least a few times per month compared with 37% of adults surveyed among a broader population (one-month time frame)[bp][bq][br][bs][bt][bu]. Additionally, high percentages of graduate students and postdocs reported exhibiting symptoms of anxiety, with 25% of students experiencing a panic or anxiety attack at least once per month. For comparison, a 2012 study reported that 31.2% of adults had some anxiety disorder, where 4.7% had a panic disorder, specifically.

    The most shocking observation from our climate survey was that 9.1% of graduate students and postdocs reported experiencing thoughts that they would be better off dead or hurting themselves at least a few days a month. This alarming number is comparable with that reported for graduate students from other universities using similar methods. To address and attempt to mitigate the struggles of graduate students and postdocs, we recommend increasing access to and awareness of mental health resources through education and structured conversations.

    If faculty are educated about the mental health resources available on campus, they are in a better position to direct their students to the appropriate resources if needed[bv][bw][bx][by][bz]. Most, if not all, graduate schools will have an on-campus mental health organization (at UW−Madison this is the UHS). Collaboration with professionals is essential to making mental health support for graduate students and postdocs accessible.

    Our department now hosts biweekly “Office [ca][cb][cc]Hours” with a UHS professional providing drop-in confidential consultation sessions for graduate students and postdocs inside of the chemistry building, significantly lowering the barrier to seek support. We encourage graduate students at other institutes to connect with their on-campus health professionals and inquire about implementing a similar program[cd][ce][cf][cg].

    RECOMMENDATIONS AND INITIATIVES BASED ON CLIMATE SURVEY RESULTS


    Student buy-in for any climate discussions in the department is essential and faculty support is equally crucial to the success of implementing lasting change[ch][ci][cj][ck][cl][cm][cn][co]. Faculty acceptance of student participation in various activities (e.g., being a member of a student council) and engagement in conversations about mental health signals that students’ well-being is valued in addition to their research productivity.

    With the coordinated efforts of students and faculty, department curricula can be updated to provide explicit and detailed program requirements for graduate students. We encourage graduate students in other programs to work with faculty and staff to design and carry out a plan to foster a healthy graduate school climate based on the specific needs of their departments[cp][cq]. Utilizing a survey such as ours provides a starting point to gather information that is critical to creating lasting change.

    A list of the major initiatives, which have been implemented in the UW−Madison Department of Chemistry to address our own unique challenges, can be found below.

    • The advent of conversations surrounding graduate student struggles with stress, anxiety, and depression, which have provided a framework for both individuals and research groups to discuss related problems.
    • The organization of a regular department-wide town hall to discuss relevant issues.
    • An increase in the number of events focused on raising awareness about mental health disorders and resources available on campus.
    • Revision of graduate program policies[cr][cs][ct][cu][cv][cw][cx] to reduce stresses associated with the transition into a research group and subsequent graduation requirements.
    • An effort to develop an expectations document for independent research laboratories to mitigate stress surrounding graduation requirements.
    • A focus on providing leadership opportunities for graduate students and postdocs to further increase student involvement.


    [a]Do we have any historical data to know if this is a change over either the short term or long term?
    [b]Can you elaborate? I’m not sure I understand your question
    [c]Related: Are we reporting differently just because we think about it differently?
    [d]Sounds like we need more surveys!
    [e]I wonder what the experience is in 2020 compared to 2010, compared to 1980. For example, when my father was in grad school in the 1970s, my mother was typing his papers for him. I wonder if changes in the reasons and ways people go to grad school impact their experience of the situation.
    [f]Ohhhh wow. I really don’t know! We have data from 2015 at the earliest. And we really can’t compare because we used very different questions in 2017
    [g]This reminds me of an essay written in the 1970s I think (can’t remember author) in which a feminist explains how she would really like to have a wife since they do all of the thankless things to support the success of their spouse :).
    [h]Is this influenced by the demographics; i.e. are older grad students more likely to be international or vice versa?
    [i]Interesting, I don’t think we have this data at hand but my thought would be no.
    [j]Yes – I wondered about this assumption. I know that older students are coming back to school now (I’m one of them). There seem to be more people starting families while in grad school – so the idea that they are all singletons in their 20s doesn’t really seem to ring true anymore.
    [k]In 2019, we included questions on family status for correlations, but not age.


    [l]Is this because they accepted certain negative aspects of their programs in the past w/o complaint or is it because something has fundamentally changed about the structure of graduate programs?
    [m]Also interested to know about the generational aspects of this (generational in terms of inherited structures, temporal changes, etc.)
    [n]I think that the fundamental structure of graduate programs needs to change. It is built to publish and conduct research for the PI, not to support graduate students in their achievements. This is especially true for those who identify as BIPOC. From our experience trying to implement changes, not every faculty member is interested in helping or volunteering their time.
    [o]I presume this means universal across institutions rather than all individuals feel the same challenges
    [p]Yes, for example across most institutions you do not have to be trained to manage people to be a PI
    [q]To me, this seems to also be related to Jessica’s question above querying faculty’s resistance to adaptation and supporting new and/or old unaddressed needs, since the transient nature of the grad student population might only be a major source of hindrance to evaluation if the effort is taken up, pushed, and facilitated by graduate students
    [r]Yes exactly.

    [s]What motivated these faculty to take on this project?
    [t]I believe this was one faculty member. I’m involved with the student group associated with the survey (they’ve repeated it since the initial survey) and the faculty advisor of the student group was also the director of graduate studies at the time and collaborated with psychiatrists from the health center on campus
    [u]You can read some more about it here (https://cen.acs.org/articles/95/i32/Grappling-graduate-student-mental-health.html)
    [v]Thanks – I’ll take a look. Is this something that the JST has taken on – or is this a separate group?
    [w]A separate group. This is run through http://ccgs.chem.umn.edu/

    [x]Were these all chemistry students?
    [y]Yes, spanning years 2-6
    [z]Was there a focus group phase to test the questions before they were used in the survey?
    [aa]Yes. Our sample group included graduate students and postdocs, men and women, international and domestic. We also had some faculty and mental health advisors read the survey for their take.
    [ab]Thanks. In my experience, that is a very helpful step that not all climate surveys undertake
    [ac]It was especially important for us to ensure language was clear to those for whom English was not their first language.
    [ad]The transition from U/G to grad school introduces the student to going from being one of the academic leaders of the class to being a lesser star in a peer group of academic stars. Coping with that can be difficult. Is this addressed in this study?
    [ae]We did not address this specific phenomenon. Themes of imposter syndrome were pervasive in qualitative responses, however.
    [af]In personal conversations, I definitely see evidence for this. While a grad student joins a dept, it really feels more like they join a PI. Two students in the same department can have wildly different experiences depending on who their PI is.
    [ag]You can even at times see very different experiences within a group (e.g. the student that is fellowship funded vs. the student that requires support off the research grant).
    [ah]I have had this exact same experience. I initially joined one group and after my first year I switched to a different group due to the terrible environment in the first group. My mental health improved tremendously as a result. Both were in the same department
    [ai] Definitely true. On a personal note, I encourage fellow graduate students to seek as much funding outside of their PI as possible, even if they think they are covered, because money often = power.
    [aj]Money helps, but at the end of the day your PI writes your rec letter, introduces you to collaborators, your future bosses, etc.
    [ak]Securing outside funding can also make it easier to switch PIs or bring in a more supportive co-PI. I am speaking on a personal level here – it helps more than any other single factor.
    [al]Absolutely agree on being self-funded leading to greater opportunities and flexibility.


    [am]Is this data broken out in any way to see if caucasian students were having these more positive interactions with caucasian PIs or just all PIs? Also, for those who are not caucasian, do they also have more positive experiences when their PIs are not caucasian (or even identify in the same way as the student)?
    [an]We did not collect PI identity/demographics of respondents. We would also run into small-numbers statistics here due to poor diversity among faculty in just our department. I think this is an excellent point to share with our future climate survey teams!
    [ao]Definitely – it would also be interesting to compare to another institution that does have more diversity among its faculty, especially with the number of international faculty that exist in departments throughout the US.
    [ap]Specifically to mentorship, are faculty required to go through any mentoring programs/trainings?
    [aq]There is a huge limitation with requiring tenured faculty to attend these trainings. There has been recent discussion of having “digital badges” placed on faculty profiles for those who have completed the training. A very visual form of peer pressure, but again, not a requirement.
    [ar]Are any proposals made to increase “buy-in” on these workshops? I feel that sometimes those going to events and programs of this nature tend to be those who already understand the importance of mental health and not necessarily those who need to hear it.
    [as]I echo this sentiment. At my uni, the faculty most in need of training / the most egregious ones are the ones who think it’s a waste of time and won’t participate
    [at]Hard AGREE.
    [au]Absolutely. My dream would be to tie it to tenure and promotion but that has not happened yet. We do now host mentorship training for faculty though!
    [av]While tying it to promotion would eventually fix this, the old guard, which unfortunately includes a lot of the repeat offenders who are set in their ways, would still be left relatively untouched. More active/radical initiatives, like linking mentorship performance to grant support, would have a bigger effect. But this would be highly difficult to apply at the institutional level.

    [aw]On the PI side, I am also curious who in the institution really sits down and thinks about how PI time is divvied up and what the expectations are. Given that PIs do not have a direct boss, I feel like there are a whole lot of “PIs should…” discussions without rebalancing the demands that already exist.
    [ax]I am curious about how this is being done? Increased recruitment?
    [ay]Currently, UW-Madison is working on recruitment as well as retention initiatives via mentoring programs. More on some of these programs here:
    https://chem.wisc.edu/catalyst/
    https://chem.wisc.edu/2013/10/09/opportunities-abound-chops-and-pgsec-programs-expose-undergraduates-to-graduate-school-life/
    [az]At my uni, we have had some success with a student-led team called the Graduate Recruitment Initiative Team (GRIT) – https://voices.uchicago.edu/grit/
    They specifically work to target recruitment at URMs, work to create and maintain an accessible support network for the URM students they successfully recruited, and they work to address issues in application requirements (for instance, they were successfully able to get the GRE removed from admissions requirements across all graduate programs https://www.chicagomaroon.com/article/2018/11/16/grits-urging-biological-sciences-drops-gre-require/)
    [ba]In my experience, when faced with the notion of recruiting more underrepresented minorities into graduate programs, department leadership has come up with rather lackluster ideas of how this is done.
    As a matter of fact, while recruitment seems to be increasing, I would like to see how that compares to degree completion and overall satisfaction with the program.
    [bb]That is why I was asking, because I personally believe increased recruitment alone hasn’t helped deal with the issue.
    [bc]Another factor is how the overall climate in the department is adapting to the increased presence of underrepresented minorities. Has it been embraced and flourished, or have these students been tokenized in an effort to improve the outward appearance of a program?
    [bd]Yes, yes, yes. These are all so important. Increased recruitment just forces someone into a space that can be toxic. We’re working on this (because we really need to). Like Tesia said, we’re trying to change our department to create spaces and be supportive, along the lines of supporting and promoting affinity groups, standardizing requirements, and increasing transparency along every step of the program (graduation requirements and PI expectation documents). There’s definitely more, but I can’t think of it off the top of my head.
    [be]I’m just copying and pasting my comment from above about GRIT, since they are an organization that works diligently to support URM students that they’ve successfully recruited, as well. It’s a fundamental part of their functioning.
    At my uni, we have had some success with a student-led team called the Graduate Recruitment Initiative Team (GRIT) – https://voices.uchicago.edu/grit/
    They specifically work to target recruitment at URMs, work to create and maintain an accessible support network for the URM students they successfully recruited, and they work to address issues in application requirements (for instance, they were successfully able to get the GRE removed from admissions requirements across all graduate programs https://www.chicagomaroon.com/article/2018/11/16/grits-urging-biological-sciences-drops-gre-require/)

    [bf]How do you separate frustration related to failed experiments from other stressors? Isn’t part of the development as a scientist learning to deal with experiments not working and developing tools to create the desired outcome?
    [bg]I don’t think you can necessarily separate those frustrations. The way stressors build on one another makes them entangled so that they cannot be dissected away from each other. I think it’s more so important to acknowledge that failed experiments will exist and find ways to minimize additional stressors that can exacerbate the frustrations associated with the experimental side.
    [bh]Isn’t that learning to cope with failure?
    [bi]I think even learning from failure can be facilitated so that it is not so detrimental to one’s mental health. My current PI is very good at this and constantly tries to get my lab mates and me not to take failure personally, turning it into a learning experience instead. The end result of all of this is that while I still get frustrated due to inevitable failures, I am more often able to go home at the end of the day and not feel terrible about myself.
    [bj]This is an interesting thread to me. Actual results of my experiments never actually represented a “stress” for me – either when I was working in a research lab in undergrad or in grad school. The stress has all been centered around poor & neglectful relationships, lack of clarity on what goal posts are supposed to exist where. Every stupid little thing is some sort of mystery to figure out. It is incredibly unnecessary and takes away from the joy of the actual research.
    [bk]I agree with you on this. It may seem trivial talking about teaching graduate students how to deal with failure of experiments or grad work in general but grad school is a journey. Not knowing how to effectively deal with frustrations from work can pile up real quick and that might lead to some detrimental mental health issues.
    [bl]It’s also pretty important to recognize that one’s ability to “cope with failure” is heavily dependent on their support system, the degree to which they feel they have power/control over their environment and lives, the degree to which failures are expected as natural and normal by others in the environment, and their self-perceptions (which are themselves influenced heavily by the environment).
    I think it’s too reductive to say that it’s a matter of “learning to cope.” Coping skills are very important, but they are only reasonable deterrents when your environment and support systems are reasonably sufficient.
    [bm]For me, when my experiments failed most of the stress came from my former PI getting upset that I couldn’t make it work. I think the response to the failure from the PI dictates a lot more how the students will respond to it. Obviously, personalities dictate this to some extent as well, but having a PI who supports you despite the failures can minimize a lot of the frustrations due to the failure.

    [bn]This is a really good point. It sort of alludes to a point I made above asking about the mental well being of PIs. How much unhealthy coping and overreaction to negative things (including data) is being triggered by the PI themselves reacting in a really inappropriate way?
    [bo]Jessica’s point about the uncertainty for the ‘goal posts’ IMHO can’t be overstated. That’s a conversation that all grad students should have with the PI before ever deciding to join their group. And even then, things can shift during the student’s career, but having some idea up front (and confirming similar expectations with other students in the group) should be very high on the evaluation list for prospective research group selection.
    [bp]I think one interesting question to ask would be how this value changed before and after starting grad school. In other words, did students enrolling in graduate school have a history of, or propensity toward, problems with mental health?
    [bq]This is interesting. I also wonder if any studies have been done to see if those who become research faculty exhibit anxiety at a higher rate than the average population.
    [br]I am wondering if it would help if PIs have some expertise in Psychology
    [bs]I think that kind of begs the question. Is that important to address? If the answer to your question Taysir is yes, then that doesn’t mean we give up on not making graduate school a negative environment.
    [bt]Not necessarily an expertise in psychology – but more having some training in team building and project management. It is shocking to me what PIs are expected to do when nothing about graduate training suggests that these types of things are part of the education.
    [bu]@Rebeca Fernandez: Oh absolutely, I fully agree that creating a more supportive environment is essential. What I was alluding to is this: if the answer to my question is yes, then that would warrant further investigation into the source of these mental health issues that arose prior to grad school and how grad school exacerbated them. If the answer to my question is no, which I find the most likely scenario, then that would even more conclusively show that grad school is indeed the source of these mental health issues, and further validate your efforts.

    [bv]I worry about this becoming a culture of faculty just handing off “troubled” students instead of acknowledging the role they may be playing in harming the mental health of their students.
    [bw]I second this; there are a lot of “treat the symptoms, not the cause” initiatives when handling graduate student mental health.
    [bx]Agreed. How much of the issue is “my mental health” versus “this relationship is incredibly harmful but I depend heavily on it”?
    [by]I agree.
    [bz]Yeah, definitely. Cristian’s point is excellent. I think that is an easy flaw of climate survey data: it becomes much easier to treat symptoms than to address the root cause.

    [ca]How has this been impacted by COVID? Do you have any post-publication information about attendance at these office hours?
    [cb]This will be addressed in the 2021-2022 survey
    [cc]These are still hosted now but via Zoom or phone call.
    [cd]I wonder how grad student pay correlates with these findings. Are all grad students paid the same amount? Is it a living wage in Madison?
    [ce]Ooooo interesting. We asked if stress was caused by financial factors. I believe that everyone is “paid the same” unless you are on NSF GRFP. International students have to pay more fees.
    [cf]Many universities have different pay levels based on degree progress, and pay can also vary wildly by department. A common occurrence is that pay discrepancies also show up in summer funding for 9-month stipends; summer funding can be either widely available or generously compensating, and this dichotomy has been observed way too much in higher education.
    [cg]The financial aspects are definitely something that should be considered. Most are compensated at levels near (or below) the poverty line. Even those lucky enough to have compensation dictated by federal programs aren’t substantially better-off financially.

    [ch]In addition to faculty and students, there are over 100 staff members in the department. I suspect that they have a significant role to play in influencing the department’s climate.
    [ci]Great point. Also worth noting that Departmental leaders (e.g. Chair) aren’t always the most qualified to lead, but get that responsibility due to politics or the unwillingness of the more qualified to offer their time for those responsibilities.
    [cj]This is so true. From the staff in the department office and the technical support people to the custodians.
    [ck]Chemistry faculty rarely have any structured education in leadership or management. This lapse leads to difficulty managing people (students/staff) and causes unnecessary stress on those managed. I have no idea how to address this on a system level. Back when I was a PI and later as a business owner, I took some management classes.
    [cl]Faculty support is mentioned specifically because of the present power structure. Recommendations in the survey were developed for all department members (including staff). Discussions of staff climate surveys have occurred, but I don’t have much information on that development.

    [cm]While technically true, it can feel very much like you really only answer to your PI. I have found it quite stunning how little I know about what is going on in my building – especially when I compare it to positions in which I worked before coming to grad school.
    [cn]Yes, staff play an active role and we meet with faculty, staff and students frequently on various committees.
    [co]When I was a new BS graduate, I was hired by an academic department to support international grad students who needed help with their English, etc. This was 1980, and most labs in the agronomy department had that kind of support that faculty could rely on for their grad students. I get the impression that this support team has dwindled significantly since about 1990.
    [cp]Is there historical data about the dropout rate of the department’s grad students? A faculty friend of mine who went to UW-Madison as a history major in the 1980s said he was the only person in his entering class to actually get his degree there.
    [cq]This likely exists in department records, but I do not have stats on this presently.


    [cr]On a personal level, I have found it extremely frustrating how much time I have wasted learning about basic things at my university. Everything is so disjointed. While it is not “the” stressor, it adds an unnecessary layer that distracts you from focusing on the truly challenging parts of graduate school.
    [cs]I agree! I think making it more transparent how to do things and report things helps a lot! Especially if you encounter an abusive faculty member, fewer hoops to jump through make it much easier to remedy the situation.
    [ct]I suspect that the disjointed nature of the academic community is part of the education of the grad student, as opposed to the technical training aspects. This is not an efficient approach to sharing information, but primes people for being faculty members rather than scientists
    [cu]Ha! It is literally the 1st thing I would fix – as I have in multiple companies. Why would I want my team distracted by pointless garbage when instead I would want their eyes on the prize and the focus on the actual work we are producing?
    [cv]Actually, there’s a lot of data showing that this exact type of information specifically selects against minoritized students in academia. This is what we mean by increasing transparency and standardizing graduation requirements, so that this information can be easily found and does not create an undue burden.
    [cw]It’s a feature not a bug if you’re trying to produce more faculty members
    [cx]And there should be many arguments about whether or not we should be trying to produce more faculty members, given the job availability.

    And, it only “primes people for being faculty members” under the assumption that the future of academic structure remains disjointed 😛

    JCHAS Editor's Spotlight: A 4-Way Tie!

    The JCHAS Editor’s Spotlight for the November / December 2019 issue of the Journal of Chemical Health and Safety is shining on 4 articles, all written by former chairs of the Division!

    The first presents an innovative approach to assessing heat stress concerns while wearing personal protective equipment.

    Then there is a suite of three articles that describe safety culture development programs in a variety of settings (undergraduate instruction, academic research support, and industrial laboratory settings).

    In addition, there are articles assessing the impact of safety cultures in several settings and 4 articles on interesting approaches to improving ventilation in chemical settings.

    Note: This is the final issue of JCHAS that is published under the Division’s contract with Elsevier. Starting in 2020, the Journal will be published by the American Chemical Society’s publications division. This will change how members access the journal. Let me know if you have any questions about this change.

    Journal of Chemical Health and Safety,
    November–December 2019

    Editor’s Spotlight: Predicting and preventing heat stress related excessive exposures and injuries: A field-friendly tool for the safety professional
    Harry J. Elston, Michael J. Schmoldt

    Editor’s Spotlight: Industrial lab safety committees and teams — Case study
    Kenneth P. Fivizzani Pages 71-74

    Editor’s Spotlight: DiSCO — Department Safety Coordinators and Officers: Building Safety Culture
    Victor Duraj, Debbie M. Decker Pages 84-88

    Editor’s Spotlight: Incorporating chemical safety awareness as a general education requirement — Case study
    Frankie Wood-Black

    Chemistry laboratory safety climate survey (CLASS): A tool for measuring students’ perceptions of safety
    Luz S. Marin, Francisca O. Muñoz-Osuna, Karla Lizbeth Arvayo-Mata, Clara Rosalía Álvarez-Chávez

    Assessing graduate student perceptions of safety in the Department of Chemistry at UC Davis
    Brittany M. Armstrong

    Use of Lean Six Sigma methods to eliminate fume hood disorder
    Hugo Schmidt

    Computational fluid dynamics (CFD) modelling on effect of fume extraction
    Kar Yen Sam, Siong Hoong Lee, Zhen Hong Ban Pages 20-31

    Anatomy of an incident: A hydrogen gas leak showcases the need for antifragile safety systems
    Hugo Schmidt

    Cleaning diamond surfaces using boiling acid treatment in a standard laboratory chemical hood
    Kimberly Jean Brown, Elizabeth Chartier, Ellen M. Sweet, David A. Hopper, Lee C. Bassett

    In Your Face: Consideration of higher risks for chemical exposure to persons with disabilities in laboratories
    Janet S. Baum, Amie E. Norton

    Lessons learned — Vacuum pump fire
    Elizabeth Czornyj, Imke Schroeder, Nancy L. Wayne, Craig A. Merlic

    Pilot study predicting core body temperatures in hot work environments using thermal imagery
    Jacob B. Thomas, Leon Pahler, Rodney Handy, Matthew S. Thiese, Camie Schaefer


    SERMACS 2019 CHAS Presentations

    The 2019 Southeast Regional Meeting hosted a Division of Chemical Health and Safety symposium related to safety culture in the laboratory. The symposium was entitled Teaching, Creating and Sustaining a Safety Culture. This symposium was supported in part by a Corporation Associates Local Section grant in the amount of $1,000.00, which was used to support the speakers’ travel costs. PDF versions of the presentations from this symposium are available below.

    Nurturing a safety culture through student engagement, Ralph House, UNC-CH

    Supporting a Culture of Safety with Teachable Moments, Melinda Box, NC State University

    Successful Execution of Top-Down Safety Culture at UNC-Chapel Hill, Jim Potts, UNC-CH

    Collaborative safety training and integrative program development, Mark Lassiter, Montreat College

    Cultivating a culture of safety in undergraduate chemistry labs at UNC Chapel Hill, Kathleen Nevins, UNC-CH

    From rules to RAMP: Embracing safety culture, expanding frontier as a recent graduate, Rachel Bocwinski, ACS

    SOPs, SOCs, and Docs: Developing peer-to-peer safety to fight complacency in synthetic inorganic chemistry, Quinton Bruch, UNC-CH

    Laboratory Safety Culture at UNC-CH, Mary Beth Koza, UNC-CH