All posts by Ralph Stuart

CHAS Chat on SDSs, October 2021

On October 28, 2021, Dr. Rob Toreki and Dr. Dan Kuespert discussed the history and challenges of Safety Data Sheets as a chemical hazard communication tool. Fifty people attended this CHAS Chat, and 66 contributed to the pre-survey conducted beforehand.

Key references that were discussed in the CHAS chat were:

Rob’s ilpi.com SDS FAQ page at http://www.ilpi.com/msds/faq/index.html

The Laboratory Chemical Safety Summaries found at PubChem (more information about these can be found at https://pubchemblog.ncbi.nlm.nih.gov/2015/08/17/a-laboratory-chemical-safety-summary-lcss-now-available-in-pubchem/)

Dr. Kuespert’s slides on the Utility and Limits of Safety Data Sheets are available here:

The results of the SDS survey we reviewed are available here:

The recording of this discussion can be found at https://drive.google.com/file/d/1-iAK7FVlSpGPpOGwfof5taXaBiO33LLT/view?usp=sharing

Safety Culture Transformation – The impact of training on explicit and implicit safety attitudes

On October 27, 2021, the CHAS Journal Club heard from the lead author of the paper “Safety culture transformation—The impact of training on explicit and implicit safety attitudes”. The complete open access paper can be found online at this link. The presentation file used by Nicki Marquardt, the presenting author, includes the graphics and statistical data from the paper.

Comments from the Table Read

On October 20, the journal club did a silent table read of an abridged portion of the article. The abridged article and the participants’ comments are below.

1. INTRODUCTION

Safety attitudes of workers and managers have a large impact on safety behavior and performance in many industries (Clarke, 2006, 2010; Ford & Tetrick, 2011; Ricci et al., 2018). They are an integral part of an organizational safety culture[a] and can therefore influence occupational health and safety, organizational reliability, and product safety (Burns et al., 2006; Guldenmund, 2000; Marquardt et al., 2012; Xu et al., 2014).

There are different forms of interventions for safety culture and safety attitude change; trainings are one of them. Safety trainings seek to emphasize the importance of safety behavior and promote appropriate, safety-oriented attitudes among employees[b][c][d][e][f][g][h][i] (Ricci et al., 2016, 2018).

However, research in the field of social cognition has shown that attitudes can be grouped into two different forms: on the one hand, there are conscious and reflective, so-called explicit attitudes, and on the other hand, there are mainly unconscious implicit attitudes (Greenwald & Banaji, 1995). Although there is an ongoing debate whether implicit attitudes are unconscious or partly unconscious (Berger, 2020; Gawronski et al., 2006), most researchers affirm the existence of these two structurally distinctive attitudes (Greenwald & Nosek, 2009). Traditionally, researchers have studied explicit attitudes of employees by using questionnaires [j](e.g., Cox & Cox, 1991; Rundmo, 2000). However, increasingly more researchers now focus on implicit attitudes that can be assessed with reaction-time measures like the Implicit Association Test[k][l] (IAT; Greenwald et al., 1998; Ledesma et al., 2015; Marquardt, 2010; Rydell et al., 2006). These implicit attitudes could provide better insights into what influences safety behavior because they are considered to be tightly linked with key safety indicators. Unlike explicit attitudes, they are considered unalterable by socially desirable responses (Burns et al., 2006; Ledesma et al., 2018; Marquardt et al., 2012; Xu et al., 2014). Nevertheless, no empirical research on whether implicit and explicit safety attitudes are affected by training has yet been found. Therefore, the aim of this paper is to investigate the effects that training may have on implicit and explicit safety attitudes. The results could be used to draw implications for the improvement of safety training and safety culture development.

1.1 Explicit and implicit attitudes in safety contexts

Explicit attitudes are described as reflective, which means a person has conscious control over them[m] (Strack & Deutsch, 2004). In their associative–propositional evaluation (APE) model, Gawronski and Bodenhausen (2006) assume that explicit attitudes are based on propositional processes. These consist of evaluations derived from logical conclusions. In addition, explicit attitudes are often influenced by social desirability[n][o][p][q][r] if the topic is rather sensitive, such as moral issues (Maass et al., 2012; Marquardt, 2010; Van de Mortel, 2008). This has also been observed in safety research where, in a study on helmet use, the explicit measure was associated with a Social Desirability Scale (Ledesma et al., 2018). Furthermore, it is said that explicit attitudes can be changed faster and more completely than implicit ones (Dovidio et al., 2001; Gawronski et al., 2017).

On the other hand, implicit attitudes are considered automatic, impulsive, and widely unconscious (Rydell et al., 2006). According to Greenwald and Banaji (1995, p. 5), they can be defined as “introspectively unidentified (or inaccurately identified) traces of past experience” that mediate overt responses. Hence, they use the term “implicit” as a broad label for a wide range of mental states and processes such as unaware, unconscious, intuitive, and automatic which are difficult to identify introspectively by a subject. Gawronski and Bodenhausen (2006) describe implicit attitudes as affective reactions that arise when stimuli activate automatic networks of associations. However, although Gawronski and Bodenhausen (2006) do not deny “that certain affective reactions are below the threshold of experiential awareness” (p. 696), they are critical towards the “potential unconsciousness of implicit attitudes” (p. 696). Therefore, they use the term “implicit” predominantly for the aspect of automaticity of affective reactions. Nevertheless, research has shown that people are not fully aware of the influence of implicit attitudes on their thinking and behavior even though they are not always completely unconscious (Berger, 2020; Chen & Bargh, 1997; De Houwer et al., 2007; Gawronski et al., 2006). Many authors say that implicit attitudes remain more or less stable over time and are hard to change (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In line with this, past studies in which attempts were made to change implicit attitudes often failed to achieve significant improvements (e.g., Marquardt, 2016; Vingilis et al., 2015).

1.3 Training and safety attitude change[s][t]

As mentioned in the introduction, the main question of this paper is whether training can change implicit and explicit safety attitudes. Safety training can improve a person’s ability to correctly identify, assess, and respond to possible hazards in the work environment, which in turn can lead to a better safety culture (Burke et al., 2006; Duffy, 2003; Wu et al., 2007). Besides individual safety training, increasingly more industries, such as aviation, medicine, and the offshore oil and gas industry, implement group trainings labeled as Crew Resource Management (CRM) training to address shared knowledge and task coordination in dynamic and dangerous work settings (Salas et al., 2006).

There are many different factors that determine the effectiveness of safety trainings (Burke et al., 2006; Ricci et al., 2016), such as the training method (e.g., classroom lectures) and training duration (e.g., 8 h).

As can be seen in Figure 1, associative evaluations[u][v][w][x] (process) can be activated by different safety intervention stimuli such as training (input). These associative evaluations are the foundation for implicit safety attitudes (output) and for propositional reasoning (process), which in turn forms the explicit safety attitudes (output). In addition, associative evaluations and propositional reasoning processes affect each other in many complex conscious and unconscious ways (Gawronski & Bodenhausen, 2006). However, change rates might be different. While the propositional processes adapt very quickly to the input (e.g., safety training), the associative evaluations might need longer periods of time to restructure the associative network (Karpen et al., 2012). Therefore, divergences in the implicit and explicit measures, resulting in inconsistent attitudes (output), can occur (McKenzie & Carrie, 2018).

1.4 Hypotheses and overview of the present studies

Based on the theories and findings introduced above, two main hypotheses are presented. Since previous research describes that explicit attitudes can be changed relatively quickly (Dovidio et al., 2001; Karpen et al., 2012), the first hypothesis states that:

  • H1: Explicit safety attitudes can be changed by training.

Even though implicit attitudes are said to be more stable and harder to change (Dovidio et al., 2001; Gawronski et al., 2017; Wilson et al., 2000), changes by training in implicit attitudes can be expected too, due to changes in the associative evaluation processes (Lai et al., 2013) which affect the implicit attitudes (see EISAC model in Figure 1). Empirical research on the subject of implicit attitudinal change through training is scarce (Marquardt, 2016); however, it has been shown that an influence on implicit attitudes is possible[y][z][aa] (Charlesworth & Banaji, 2019; Jackson et al., 2014; Lai et al., 2016; Rudman et al., 2001). Therefore, the second hypothesis states that:

  • H2: Implicit safety attitudes can be changed by training.

However, currently, there is a lack of empirical studies on implicit and explicit attitude change using longitudinal designs in different contexts (Lai et al., 2013). Also, in the field of safety training research, studies are needed to estimate training effectiveness over time (Burke et al., 2006). Therefore, to address the issues of time and context in safety attitude change by training, three studies with different training durations and measurement time frames in different safety-relevant contexts were conducted (see Table 1). In the first study, the short-term attitude change was measured 3 days before and after a 2-h safety training in a chemical laboratory. In the second study, the medium-term attitude change was assessed 1 month before and after 2 days of CRM training for production workers. In the third study, the long-term attitude changes were measured within an advanced experimental design (12 months between pre- and post-measure) after 12 weeks of safety ethics training in an occupational psychology student sample.[ab] To make this paper more succinct and to ease the comparability of the methods used and the results revealed, all three studies will be presented in parallel in the following method, results, and discussion sections. A summary table of all the studies can be seen in Table 1.

2. METHODS

Study 1

Fifteen participants (eight female and seven male; mean age = 22.93 years; SD = 2.74) were recruited for the first study. The participants were from different countries, with a focus on East and South Asia (e.g., India, Bangladesh, and China). They were enrolled in one class of an international environmental sciences study program with a major focus on practical experimental work in chemical and biological laboratories in Germany. Participation in regular safety training was mandatory for all participants to be admitted to working in these laboratories. To ensure safe working in the laboratories, the environmental sciences study program traditionally has small classes of 15–20 students. Hence, the sample represents the vast majority of one entire class of this study program. However, due to the lockdown caused by the COVID-19 pandemic, there was no opportunity to increase the sample size in a subsequent study. Consequently, the sample size was very small.

2.1.2 Study 2

A sample of 81 German assembly-line workers of an automotive manufacturer participated in Study 2. The workers were grouped into self-directed teams responsible for gearbox manufacturing. Hence, human error during the production process could threaten the health and safety of the affected workers and also the product safety of the gearbox, which in turn affects the health and safety of prospective consumers. The gearbox production unit encompassed roughly 85 workers. Thus, the sample represents the vast majority of the production unit’s workforce. Due to the precondition of the evaluation being anonymous, as requested by the firm’s works council, personal data such as age, sex, and qualification could not be collected.

2.1.3 Study 3

In Study 3, complete data sets of 134 German participants (mean age = 24.14; SD = 5.49; 92 female, 42 male) could be collected. All participants were enrolled in Occupational Psychology and International Business study programs with a special focus on managerial decision making under uncertainty and risks. The sample represents the vast majority of two classes of this study program since one class typically includes roughly 60–70 students. Furthermore, 43 of these students also had a few years of work experience (mean = 4.31; SD = 4.07).

4. DISCUSSION

4.1 Discussion of results

The overall research objective of this paper was to find out about the possibility of explicit and implicit safety attitude changes by training. Therefore, two hypotheses were created. H1 stated that explicit safety attitudes can be changed by training. H2 stated that implicit safety attitudes can be changed by training. Based on the results of Studies 1–3, it can be concluded that explicit safety attitudes can be changed by safety training. In respect of effect sizes, significant small effects (Study 2), medium effects (Study 1), and even large effects (Study 3) were observed. Consequently, the first hypothesis (H1) was supported by all three studies. Nevertheless, compared to the meta-analytic results by Ricci et al. (2016) who obtained very large effect sizes, the effects of training on the explicit safety attitudes were lower in the present studies. In contrast, none of the three studies revealed significant changes in the implicit safety attitudes after the training. Even though there were positive changes in the post-measures, the effect sizes were marginal and nonsignificant. Accordingly, the second hypothesis (H2) was not confirmed in any of these three studies. In addition, it seems that the duration of safety training (e.g., 2 h, 2 days, or even 12 weeks) has no effect on the implicit attitudes[ac][ad][ae][af][ag][ah]. However, the effect sizes of short-term and medium-term training of Studies 1 and 2 were larger than those obtained in the study by Lai et al. (2016), whose effect sizes were close to zero after the follow-up measure 2–4 days after the intervention.

The results obtained in these studies differ with regard to effect size. This can partly be explained by the characteristics of the samples. For instance, in Studies 1 and 3, the participants of the training, as well as the control groups (Study 3 only), were students from occupational psychology and environmental sciences degree programs. Therefore, all students—even those of the control groups—are familiar with concepts of health and safety issues, sustainability, and prosocial behavior. Consequently, the degree programs could have had an impact on the implicit sensitization of the students, which might have caused high values in implicit safety attitudes even in the control groups. The relatively high IAT effects in all four groups before and after the training are therefore an indication of a ceiling effect in the third study (see Table 3). This is in line with the few empirical results gained by previous research in the field of implicit and explicit attitude change by training (Jackson et al., 2014; Marquardt, 2016). Specifically, Jackson et al. (2014) also found a ceiling effect in female participants’ favorable implicit attitudes towards women in STEM; these participants showed no significant change in implicit attitudes after a diversity training.[ai][aj][ak]

Finally, it seems that the implicit attitudes were mainly unaffected by the training. The IAT data have shown no significant impact in any group comparison or pre- and post-measure comparison. To conclude, based on the current results it can be assumed that when there is a training effect, it manifests itself in the explicit and not the implicit safety attitudes. One explanation might be that implicit safety attitudes are more stable unconscious dispositions which cannot be changed as easily as explicit ones (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In respect of the EISAC model (see Section 1.3), unconscious associative evaluations might be activated by safety training, but not sustainably changed. A true implicit safety attitude change would refer to a shift in associative evaluations that persists across multiple safety contexts and over longer periods of time (Lai et al., 2013).[al][am]

5. PRACTICAL IMPLICATIONS AND CONCLUSION

What do the current empirical results mean for safety culture and training development? Based on the assumption that implicit attitudes are harder to change (Gawronski et al., 2017) and thus may require active engagement via the central route of conviction (Petty & Cacioppo, 1986), this could explain why there was no change in Study 1. This assumption is supported by the meta-analysis of Burke et al. (2006), who found large effect sizes for highly engaging training methods (e.g., behavior modeling, feedback, safety dialog) in general, and by the meta-analysis of Ricci et al. (2016), who obtained large effect sizes on attitudes in particular. However, the more engaging training methods used in Studies 2 (CRM training) and 3 (safety ethics training)—such as interactive tutorials, case analyses, cooperative learning phases, role plays, and debriefs (structured group discussions), which have shown strong meta-analytic effects (Ricci et al., 2016)—did have a significant impact on the explicit but not the implicit attitude change[an][ao]. In addition, it seems that more intense training with a longer duration (e.g., 12 weeks in Study 3) again has no effect on the implicit attitude change. Therefore, other approaches[ap][aq] may be more promising.

To sum up, even though the outlined conclusions are tentative, it could be very useful in the future to design realistic and affect-inducing training simulations via emergency simulators or virtual reality approaches[ar][as][at][au][av][aw][ax][ay][az][ba] (Sacks et al., 2013; Seymour et al., 2002) for all highly hazardous industries. If these simulations are accompanied by highly engaging behavioral (e.g., behavioral modeling; Burke et al., 2006, 2011), social (e.g., debriefs/structured group discussions; Ricci et al., 2016), and cognitive (e.g., implementation intentions; Lai et al., 2016) training methods, then they might facilitate a positive explicit and even implicit safety attitude change and finally a sustainable safety culture transformation.

[a]A theoretical question that occurs to me when reading this is:

Is “an organizational safety culture” the sum of the safety attitudes of workers and management or is there a synergy among these attitudes that creates a non-linear feedback effect?

[b]I would not have thought of this as the purpose of discrete trainings. I would have thought that the purpose of trainings is to teach the skills necessary to do a job safely.

[c]I agree. Safety Trainings are about acquiring skills to operate safely in a specific process…the collective (Total Environment) affects safety behavior.

[d]I think this could go back to the point below about fostering the environment – safety trainings communicating that safety is a part of the culture here.

[e]Safety professionals (myself included) have historically misused the term “training” to refer to what are really presentations.

[f]Agreed. I always say something that happens in a lecture hall with my butt in a chair is probably not a “training.” While I see the point made above, many places have “trainings” simply because they are legally required to have them. It says little to nothing about the safety culture of the whole environment.

[g]Maybe they go more into the actual training types used in the manuscript, but we typically start in a lecture hall and then move into the labs for our trainings, so I would still classify what we have as a training, but I can see what you mean about a training being more like a presentation in some cases.

[h]This is something I struggle with…but I’m trying to refer to the lecture style component as a safety presentation and the actual working with spill kits as a safety training.  It has been well-received!

[i]This is a core question and has been an ongoing struggle ever since I started EHS training in an education-oriented environment.

As a result, over time I have moved my educational objectives from content based (e.g. what is an MSDS?) to awareness based (what steps should you take when you have a safety question). However, the EHS community is sloppy when talking about training and education, which are distinct activities.

[j]Looks like these would be used for more factual items such as evaluating what the researcher did, not how/why they did it

[k]I’m skeptical that IATs are predictive of real-world behavior in all, or even most, circumstances. I’d be more interested in an extension of this work that looks at whether training (or “training”) changes revealed preferences based on field observations.

[l]Yes – much more difficult to do but also much more relevant. I would be more interested in seeing if decision-making behavior changes under certain circumstances. This would tell you if training was effective or not.

[m]This is a little confusing to me but sounds like language that makes sense in another context.

[n]What are the safety-related social desirabilities of chemistry grad students?

[o]I would think these would be tied to not wanting to “get in trouble.”

[p]Also, likely linked to being wrong about something chemistry-related.

[q]What about the opposite? Not wear PPE to be cool?

[r]In my grad student days, I was primarily learning how to “fake it until I make it”. This often led to the imposter syndrome being socially desirable. This probably arose from the ongoing awareness of grading and other judgement systems that the academic environment relies on

[s]Were study participants aware or were the studies conducted blind? If I am an employee and I know my progress will be measured, I may behave differently than if I had not known.

[t]This points back to last week’s article.

[u]What are some other ways to activate our associative evaluations?

[v]I would think it would include things like witnessing your lab mates follow safety guidance, having your PI explicitly ask you about risk assessment on your experiments, having safety issues remedied quickly by your facility. Basically, the norms you would associate with your workplace.

[w]Right, I just wonder if there’d be another way besides the training (input) to produce the intended change in the associative evaluation process we go through to form an implicit attitude. We definitely have interactions on a daily basis which can influence that, but is there some other way to tell our subconscious mind something is important.

[x]In the days before social media, we used social marketing campaigns that were observably successful, but they relied on a core of career lab techs who supported a rotating cast of medical researchers. The lab techs were quite concerned about both their own safety and the quality of their science as a result of the 3 to 6 month rotation of the MD/PhD researchers.

The social marketing campaigns included 1) word of mouth, 2) supporting graphical materials and 3) ongoing EHS presence in labs to be the bad guys on behalf of the career lab techs

[y]This reminds me of leading vs lagging indicators for cultural change

[z]This also makes me think of the arguments around “get the hands to do the right things and the attitudes will follow” which is along the lines of what Geller describes.

[aa]That’s a great comparison. Emphasizes the importance of embedding it throughout the curriculum to be taught over long periods of time

[ab]A possible confounding variable here would have to do with how much that training was reinforced between the training and the survey period. 12 months out (or even 3 months out) a person may not even remember what was said or done in that specific training, so their attitudes are likely to be influenced by what has been happening in the meantime.

[ac]I don’t find this surprising. I would imagine that what was happening in the meantime (outside of the training) would have a larger impact on implicit attitudes.

[ad]I was really hoping to see a comparison using the same attitude time frame for the 3 different training durations. Like a short-term, medium, and long-term evaluation of the attitudes for all 3 training durations, but maybe this isn’t how things are done in these kinds of studies.

[ae]This seems to be the trouble with many of the behavioral sciences papers I read, where you can study what is available not something that lines up with your hypothesis

[af]I really would probably have been more interested in the long-term evaluation for the medium training duration personally to see their attitude over a longer period of time, for example.

[ag]I think this is incredibly hard to get right though. An individual training is rarely impactful enough for people to remember it. And lots of stuff happens in between when you take the training and when you are “measured” that could also impact your safety attitudes. If the training you just went through isn’t enforced by anyone anywhere, what value did it really have? Alternatively, if people already do things the right way, then the training may have just helped you learn how to do everything right – but was it the training or the environment that led to positive implicit safety attitudes? Very difficult to tease apart in reality.

[ah]Yeah, maybe have training follow-ups or an assessment of some sorts to determine if information was retained to kind of evaluate the impact the training had on other aspects as well as the attitudes.

[ai]What effect does this conclusion have on JEDI or DEI training?

[aj]I also found this point to be very interesting. I wonder if this paper discussed explicit attitudes. I’m not sure what explicit vs implicit attitudes would mean in a DEI context because they seem more interrelated (unconscious bias, etc.)

[ak]I am also curious how Implicit Attitude compares to Unconscious Bias.

[al]i.e. Integrated across the curriculum over time?

[am]One challenge I see here is the competing definitions of “safety”. There are chemical safety, personal security, community safety, and social safety all competing for part of the safety education pie. I think this is why many people’s eyes glaze over when safety training is brought up or presented.

[an]The authors mention that social desirability is one reason explicit and implicit attitudes can diverge, but is it the only reason, or even the primary reason? I’m somewhat interested in the degree to which that played a role here (though I’m also still not entirely sure how much I care whether someone is a “true believer” when it comes to safety or just says/does all the right things because they know it’s expected of them).

[ao]This is a good point.

[ap]I am curious to learn more about these approaches.

[aq]I believe the author discusses more thoroughly in the full paper

[ar]Would these trainings only be for emergencies or all trainings? I feel that a lot of times we are told what emergencies might pop up and how you would handle them but never see them in action. This reminds me of a thought I had about making a lab safety-related video game that you could “fail” on handling an emergency situation in lab but you wouldn’t have the direct consequences in the real world.

[as]Love that idea, it makes sense that you would remember it better if you got to walk through the actual process. I wonder what the effect of engagement would be on implicit and explicit attitudes.

[at]Absolutely – I think valuable learning moments come from doing the action and it honestly would be safer to learn by making mistakes in a virtual environment when it comes to our kind of safety. The idea reminds me of the  tennis video games I used to play when I was younger and they helped me learn how to keep score in tennis. Now screen time would be a concern, but something like this could be looked at in some capacity.

[au]This idea is central to trying to bring VR into training. Obviously, you can’t actually have someone spill chemical all over themselves, etc – but VR makes it so you virtually could. And there are papers suggesting that the brain “reads” things happening in the VR world as if they really happened. Although one has to be careful with this because that also opens up the possibility that you could actually traumatize someone in the VR world.

[av]I know I was traumatized just jumping into a VR game where you fell through hoops (10/10 don’t recommend falling-based VR games), but maybe less of a VR game and more of like a cartoon character that they can customize, so they see the impact exposure to different chemicals could have but they don’t have that traumatic experience of being burned themselves, for example.

[aw]In limited time and/or limited funding situations, how can academia utilize these training methodologies? Any creative solutions?

[ax]I’m also really surprised that the conclusion is to focus on training for the worker. I would think that changing attitudes (explicit and implicit) would have more to do with the environment that one works in than it does on a specific training.

[ay]I agree on this. I think the environment one finds themselves plays a part in shaping one’s attitudes and behaviors.

[az]AGREED

[ba]100% with the emphasis on the environment rather than the training

Are employee surveys biased? CHAS Journal club, Oct 13, 2021

Impression management as a response bias in workplace safety constructs

In October 2021, the CHAS Journal Club reviewed the 2019 paper by Keiser & Payne examining the impact of “impression management” on the way workers in different sectors responded to safety climate surveys. The authors attended the October 13 session to discuss their work with the group. Below is their presentation file as well as the comments from the table read the week before.

Our thanks to Drs. Keiser and Payne for their work and their willingness to talk with us about it!

10/06 Table Read for The Art & State of Safety Journal Club

Excerpts from “Are employee surveys biased? Impression management as a response bias in workplace safety constructs”

Full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753518315340?casa_token=oOShJnb3arMAAAAA:c4AcnB3fwnlDYlol3o2bcizGF_AlpgKLdEC0FPjkKg8h3CBg0YaAETq8mfCY0y-kn7YcLmOWFA

Meeting Plan

  • (5 minutes) Sarah to open meeting
  • (15 minutes) All participants read complete document
  • (10 minutes) All participants use “Comments” function to share thoughts
  • (10 minutes) All participants read others’ Comments & respond
  • (10 minutes) All participants return to their own Comments & respond
  • (5 minutes) Sarah announces next week’s plans & closes meeting

Introduction

The ultimate goal of workplace safety research is to reduce injuries and fatalities on the job.[a] Safety surveys that measure various safety-related constructs, including safety climate (Zohar, 1980), safety motivation and knowledge (Griffin and Neal, 2000), safety participation and compliance (Griffin and Neal, 2000), and outcome indices (e.g., injuries, incidents, and near misses) are the primary way that researchers gather relevant safety data. They are also used extensively in industry. It is quite common to administer self-report measures of both safety predictors and outcomes in the same survey, which introduces the possibility that method biases prevalent in self-report measures contaminate relationships among safety constructs (Podsakoff et al., 2012).

The impetus for the current investigation is the continued reliance by safety researchers and practitioners on self-report workplace safety surveys. Despite evidence that employees frequently underreport injuries (Probst, 2015; Probst and Estrada, 2010), researchers have not directly examined the possibility that employees portray the workplace as safer than it really is on safety surveys[b]. Correspondingly, the current investigation strives to answer the following question: Are employee safety surveys biased? In this study, we focus on one potential biasing variable, impression management, defined as conscious attempts at exaggerating positive attributes and ignoring negative attributes (Connelly and Chang, 2016; Paulhus, 1984). The purpose of this study is to estimate the prevalence of impression management as a method bias in safety surveys based on the extent to which impression management contaminates self-reports of various workplace safety constructs and relationships among them.[c][d][e]

Study 1

Method

This study was part of a larger assessment of safety climate at a public research university in the United States using a sample of research laboratory personnel. The recruitment e-mail was concurrently sent to people who completed laboratory safety training in the previous two years (1841) and principal investigators (1897). Seven hundred forty-six laboratory personnel responded to the survey… To incentivize participation, respondents were given the option to provide their name and email address after they completed the survey in a separate survey link, in order to be included in a raffle for one of five $100 gift cards.

Measures:

  • Safety climate
  • Safety knowledge, compliance, and participation
  • Perceived job risk and safety outcomes
  • Impression management

Study 2

A second study was conducted to:

  1. Further examine impression management as a method bias in self-reports of safety while
  2. Accounting for personality trait variance in impression management scales.

A personality measure was administered to respondents and controlled to more accurately estimate the degree to which self-report measures of safety constructs are susceptible to impression management as a response bias.
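Not from the paper, but a minimal sketch of what statistically "controlling for" a bias measure means: regress each safety score on the impression-management score and correlate the residuals (a partial correlation). All data and effect sizes below are simulated purely for illustration.

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after partialling out the variance
    each shares with z, via least-squares residuals."""
    design = np.column_stack([np.ones_like(z), z])  # intercept + covariate
    rx = x - design @ np.linalg.lstsq(design, x, rcond=None)[0]
    ry = y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Simulated illustration (not the paper's data): an impression-management
# tendency inflates both a self-reported safety climate score and a
# self-reported compliance score, creating a spurious raw correlation.
rng = np.random.default_rng(0)
n = 500
im = rng.normal(size=n)                      # impression management
climate = 0.5 * im + rng.normal(size=n)      # self-reported safety climate
compliance = 0.5 * im + rng.normal(size=n)   # self-reported compliance

raw = np.corrcoef(climate, compliance)[0, 1]
adjusted = partial_corr(climate, compliance, im)
print(f"raw r = {raw:.2f}, partial r = {adjusted:.2f}")
```

In this toy setup the raw correlation is driven entirely by the shared bias, so the partial correlation drops toward zero; in the paper's analyses, the analogous drop is what quantifies how much impression management contaminates relationships among safety constructs.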

Method

A similar survey was distributed to all laboratory personnel at a different university located in Qatar. A recruitment email was sent to all faculty, staff, and students at the university (532 people), which included a link to an online laboratory safety survey. No incentive was provided for participating and no personally identifying information was collected from participants. A total of 123 laboratory personnel responded.[f]

Measures:

  • Same constructs as Study 1, plus
  • Personality

Study 3

Two limitations inherent in Study 1 and Study 2 were addressed in a third study, specifically, score reliability and generalizability.

Method

A safety survey was distributed to personnel at an oil and gas company in Qatar, as part of a larger collaboration to examine the effectiveness of a safety communication workshop. All employees (∼370) were invited to participate in the survey and 107 responded (29% response rate). Respondents were asked to report their employee identification numbers at the start of the survey, which was used to identify those who participated in the workshop. A majority of employees provided their identifying information (96, 90%).

Measures:

  • Same constructs used in Study 1, plus
  • Risk propensity
  • Safety communication
  • Safety motivation
  • Unlikely virtues

Conclusion[g][h][i][j][k][l]

Safety researchers have provided few direct estimates of method bias [m][n][o][p] in self-report measures of safety constructs. This oversight is especially problematic considering that researchers rely heavily on self-reports to measure safety predictors and criteria.

The results from all three studies, but especially the first two, suggest that self-reports of safety are susceptible to dishonesty[q][r][s][t][u] aimed at presenting an overly positive representation of safety.[v][w][x][y][z][aa] In Study 1, self-reports of safety knowledge, climate, and behavior appeared to be more susceptible to impression management compared to self-reports of perceived job risk and safety outcomes. Study 2 provided additional support for impression management as a method bias in self-reports of both safety predictors and outcomes. Further, relationships between impression management and safety constructs remained significant even when controlling for Alpha personality trait variance (conscientiousness, agreeableness, emotional stability). Findings from Study 3 provided less support for the biasing effect of impression management on self-report measures of safety constructs (average VRR=11%). However, the unlikely virtues measure [this is a measure of the tendency to claim uncommon positive traits] did reflect more reliable scores than those observed in Study 1 and Study 2, and it was significantly related to safety knowledge, motivation, and compliance. Controlling for the unlikely virtues measure led to the largest reductions in relationships with safety knowledge. Further exploratory comparison of identified vs. anonymous respondents observed that mean scores on the unlikely virtues measure were not significantly different for the identified subsample compared to the anonymous subsample; however, unlikely virtues had a larger impact on relationships among safety constructs for the anonymous subsample.

The argument for impression management as a biasing variable in self-reports of safety relied on the salient social consequences of responding and other costs of providing a less desirable response, including, for instance, negative reactions from management, remedial training, or overtime work[ab][ac]. Findings suggest that the influence of impression management on self-report measures of safety constructs depends on various factors[ad] (e.g., distinct safety constructs, the identifying approach, industry and/or safety salience) rather than supporting the claim that impression management serves as a ubiquitous, pervasive method bias.

The results of Study 1 and Study 3 suggest that impression management was most influential as a method bias in self-report measures of safety climate, knowledge, and behavior, compared to perceived risk and safety outcomes. These results might reflect the more concrete nature of these constructs based on actual experience with hazards and outcomes. Moreover, these findings are in line with Christian et al.’s (2009) conclusion that measurement biases are less of an issue for safety outcomes compared to safety behavior. These findings in combination with theoretical rationale suggest that the social consequences of responding are more strongly elicited by self-report measures of safety climate, knowledge, and behavior, compared to self-reports of perceived job risk and safety outcomes. Items in safety perception and behavior measures fittingly tend to be more personally (e.g., safety compliance – “I carry out my work in a safe manner.”) and socially relevant (e.g., safety climate – “My coworkers always follow safety procedures.”).

The results from Study 2, compared to findings from Study 1 and Study 3, suggest that assessments of job risk and outcomes are also susceptible to impression management. The Alpha personality factor generally accounted for a smaller portion of the variance in the relationships between impression management and perceived risk and safety outcomes. The largest effects of impression management on the relationships among safety constructs were for relationships with perceived risk and safety outcomes. These results align with research on injury underreporting (Probst et al., 2013; Probst and Estrada, 2010) and suggest that employees may have been reluctant to report safety outcomes even when they were administered on an anonymous survey used for research purposes.

We used three samples in part to determine if the effect of impression management generalizes. However, results from Study 3 were inconsistent with the observed effect of impression management in Studies 1 and 2. One possible explanation is that these findings are due to industry differences and specifically the salience of safety. There are clear risks associated with research laboratories as exemplified by notable incidents; [ae]however, the risks of bodily harm and death in the oil and gas industry tend to be much more salient (National Academies of Sciences, Engineering, and Medicine, 2018). Given these differences, employees from the oil and gas industry as reflected in this investigation might have been more motivated to provide a candid and honest response to self-report measures of safety.[af][ag][ah][ai][aj] This explanation, however, is in need of more rigorous assessment.

These results in combination apply more broadly to method bias [ak][al][am]in workplace safety research. The results of these studies highlight the need for safety researchers to acknowledge the potential influence of method bias and to assess the extent to which measurement conditions elicit particular biases.

It is also noteworthy that impression management suppressed relationships in some cases; thus, accounting for impression management might strengthen theoretically important relationships. These results also have meaningful implications for organizations because positively biased responding on safety surveys can contribute to the incorrect assumption that an organization is safer than it really is[an][ao][ap][aq][ar][as][at].

The results of Study 2 are particularly concerning and practically relevant as they suggest that employees in certain cases are likely to underreport the number of safety outcomes that they experience even when their survey responses are anonymous. However, these findings were not reflected in results from Study 1 and Study 3. Thus, it appears that impression management serves as a method bias among self-reports of safety outcomes only in particular situations. Further research[au][av][aw] is needed to explicate the conditions under which employees are more/less likely to provide honest responses to self-report measures of safety outcomes.

———————————————————————————————————————

BONUS MATERIAL FOR YOUR REFERENCE:

For reference only, not for reading during the table read

Respondents and Measures

  • Study 1

Respondents:

  • graduate students (229, 37%)
  • undergraduate students (183, 30%)
  • research scientists and associates (123, 20%)
  • post-doctoral researchers (28, 5%)
  • laboratory managers (25, 4%)
  • principal investigators (23, 4%)

Gender: 329 (53%) female; 287 (47%) male. Race/ethnicity: 377 (64%) White; 16 (3%) Black; 126 (21%) Asian; 72 (12%) Hispanic. Age: M=31, SD=13.24.

Respondents worked in various types of laboratories, including:

  • biological (219, 29%)
  • animal biological (212, 28%)
  • human subjects/computer (126, 17%)
  • chemical (124, 17%)
  • mechanical/electrical (65, 9%)

Measures:

  • Safety Climate

Nine items from Beus et al.’s (2019) 30-item safety climate measure were used in the current study. The nine-item measure included one item from each of five safety climate dimensions (safety communication, co-worker safety practices, safety training, safety involvement, safety rewards) and two items from the management commitment and safety equipment and housekeeping dimensions. The nine items were identified based on factor loadings from Beus et al. (2019). Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Safety knowledge, compliance, and participation

Respondents completed slightly modified versions of Griffin and Neal’s (2000) four-item measures of safety knowledge (e.g., “I know how to perform my job in the lab in a safe manner.”), compliance (e.g., “I carry out my work in the lab in a safe manner.”), and participation (e.g., “I promote safety within the laboratory.”). Items were completed using a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Perceived job risk and safety outcomes

Respondents completed a three-item measure of perceived job risk (e.g., “I encounter personally hazardous situations while in the laboratory;” 1=almost always untrue, 5=almost always true; Jermier et al., 1989). Respondents also provided safety incident data regarding the number of injuries, incidents, and near misses that they experienced in the last 12 months.

  • Impression Management

Four items were selected from Paulhus’s (1991) 20-item Balanced Inventory of Desirable Responding. These items were selected based on a review of Paulhus’s (1991) full measure and an assessment of those items that were most relevant and best representative of the full measure (Table 1). Items were completed using a five-point accuracy scale (1=very inaccurate, 5=very accurate). Ideally this survey would have included Paulhus’s (1991) full 20-item measure. However, as is often the case in survey research, we had to balance construct validity with survey length and concerns about respondent fatigue and for these reasons only a subset of Paulhus’s (1991) measure was included.

  • Study 2

Respondents:

  • research scientists or post-doctoral researchers (43, 39%)
  • principal investigators (12, 11%)
  • laboratory managers and coordinators (12, 11%)
  • graduate students (3, 3%)
  • faculty teaching in a laboratory (3, 3%)
  • one administrator (1, 1%)

Respondents primarily worked in:

  • chemical laboratories (55, 45%)
  • mechanical/electrical laboratories (39, 32%)
  • uncategorized laboratories (29, 24%)

Measures:

  • Safety Constructs

Respondents completed the same six self-report measures of safety constructs that were used in Study 1: safety climate, safety knowledge, safety compliance, safety participation, perceived job risk, and injuries, incidents, and near misses in the previous 12 months.

  • Impression Management

Respondents completed a five-item measure of impression management from the Bidimensional Impression Management Index (Table 1; Blasberg et al., 2014). Five items from the Communal Management subscale were selected based on an assessment of their quality and degree to which they represent the 10-item scale. A subset of Blasberg et al.’s (2014) full measure was used because of concerns from management about survey length. Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Personality

Conscientiousness, agreeableness, and emotional stability were assessed using six items from Gosling et al.’s (2003) 10-item personality measure. Four items from the 10-item measure assessing openness to experience and extraversion were not included in this study. Respondents were asked to indicate the degree to which adjectives were representative of them (i.e., conscientiousness – “dependable, self-disciplined;” agreeableness – “sympathetic, warm;” emotional stability – “calm, emotionally stable”; 1=strongly disagree, 7=strongly agree), and ratings were combined to represent the Alpha personality factor. One conscientiousness item was dropped because it had a negative item-total correlation (“disorganized, careless” [reverse coded]). This was not surprising, as it was the only reverse-scored personality item administered.

  • Study 3

Respondents:

The typical respondent was male (101, 94%) and had no supervisory responsibility (72, 67%); however, some women (6, 6%), supervisors (17, 16%), and managers/senior managers (16, 15%) also completed the survey. The sample was diverse in national origin, with most respondents from India (44, 42%) and Pakistan (25, 24%).

Measures:

  • Safety Constructs

Respondents completed five of the same self-report measures of safety constructs used in Study 1 and Study 2, including safety climate (Beus et al., 2019), safety knowledge (Griffin and Neal, 2000), safety compliance (Griffin and Neal, 2000), safety participation (Griffin and Neal, 2000), and injuries, incidents, and near misses in the previous 6 months. Respondents completed a similar measure of perceived job risk (Jermier et al., 1989) that included three additional items assessing the degree to which physical, administrative, and personal controls…

  • Unlikely Virtues

Five items were selected from Weekley’s (2006) 10-item unlikely virtues measure (see also Levashina et al., 2014; Table 1) and were responded to on a 5-point agreement scale (1=strongly disagree; 5=strongly agree). Akin to the previous studies, an abbreviated version of the measure was used because of constraints with survey length and the need to balance research and organizational objectives.

[a]In my mind, this is a negative way to start a safety research project. The ultimate goal of the organization is to complete its mission and injuries and fatalities are not part of the mission. So this puts the safety researcher immediately at odds with the organization.

[b]I wonder if this happens beyond surveys—do employees more generally portray a false sense of safety to co-workers, visitors, employers, trainees, etc? Is that made worse by surveying, or do surveys pick up on bias that exists more generally in the work culture?

[c]Employees always portray things in a better light on surveys because who really knows if it’s confidential

[d]Not just with regard to safety; most employees, I suspect, want to portray their businesses in a positive light. Good marketing…

[e]I think that this depends on the quality of the survey. If someone is pencil whipping a questionnaire, they are probably giving answers that will draw the least attention. However, if the questions are framed in an interesting way, I believe it is possible to have a survey be both a data collection tool and a discussion starter. Surveys are easy to generate, but hard to do well.

[f]In my experience, these are pretty high response rates for the lab surveys (around 20%).

[g]A concern that was raised by a reviewer on this paper was that it leads to a conclusion of blaming the workers. We certainly didn't set out to do that, but I can understand that perspective. I'm curious if others had that reaction.

[h]I had the same reaction and I can see how it could lead to a rosier estimate of safety conditions.

[i]There is an interesting note below where you mention the possible outcomes of surveys that "go poorly" if you will. If the result is that the workers are expected to spend more of their time and energy "fixing" the problem, it is probably no surprise that they will just say that there is no problem.

[j]I am always thinking about this type of thing—how results are framed and who the finger is being pointed at. I can see how this work can be interpreted that way, but I also see it from an even bigger picture—if people are feeling that they have to manage impressions (for financial safety, interpersonal safety, etc) then to me it stinks of a bigger cultural, systemic problem. Not really an individual one.

[k]Well – the "consequences" of the survey are really in the hands of the company or institution. A researcher can go in with the best of intentions, but a company can (and often does) respond in a way that discourages others from being forthright.

[l]Oh for sure! I didn't mean to shoulder the bigger problem on researchers or the way that research is conducted—rather, that there are other external pressures that are making individuals feel like managing people's impressions of them is somehow more vital than reporting safety issues, mistakes, needs, etc. Whether that's at the company, institution, or greater cultural level (probably everywhere), I don't think it's at the individual level.

[m]My first thought on bias in safety surveys had to do more with the survey model rather than method bias.  Most all safety surveys I have taken are based on the same template and questions generally approach safety from the same angle.  I haven't seen a survey that asks the same question several ways in the course of the survey or seen any control questions to attempt to determine validity of answers.  Perhaps some of the bias comes from the general survey format itself….

[n]I agree. In reviewing multiple types of surveys trying to target safety, there are many confounding variables. Trying to find a really good survey is tough – and I'm not entirely sure that it is possible to create something that can be applied by all. It is one of the reasons I was so intrigued by the BMS approach.

[o]Though—a lot of that work (asking questions multiple ways, asking control questions, determining validity and reliability, etc) is done in the original work that initially develops the survey metric. Just because it's not in a survey that one is taking or administering, doesn't necessarily mean that work isn't there

[p]Agreed – There are a lot of possible method biases in safety surveys. Maybe impression management isn't the most impactful. There just hasn't been much research in this area as it relates to safety measures, but certainly there is a lot out there on method biases more broadly. Stephanie and I had a follow up study (conference paper) looking at blatant extreme responding (using only the extreme endpoints on safety survey items). Ultimately, that too appears to be an issue

[q]In looking back over the summary, I was drawn to the use of the word “dishonesty.” That implies intent. I’m wondering whether it is equally likely that people are lousy at estimating risk and generally overestimate their own capabilities (Dunning-Kruger, anyone?). So it is not so much about dishonesty but more about incompetency.

[r]They are more likely scared of retribution.

[s]This is an interesting point and I do think there is a part of the underestimation that has to do with an unintentional miscalibration. But, I think the work in this paper does go to show that some of the underestimation is related to people's proclivity to attempt to control how people perceive them and their performance.

[t]Even so, that proclivity is not necessarily outright dishonesty.

[u]I agree. I doubt that the respondents set out with an intent to be fraudulent or dishonest. Perhaps a milder or softer term would be more accurate?

[v]I wonder how strong this effect is for, say, graduate students who are in research labs under a PI who doesn't value safety

[w]I think it’s huge. I know I see a difference in speaking with people in private versus our surveys

[x]Within my department, I know I became very cynical about surveys that were administered by the department or faculty members. Nothing ever seemed to change, so it didn't really matter what you said on them.

[y]I also think it is very significant. We are currently dealing with an issue where the students would not report safety violations to our Safety Concerns and Near Misses database because they were afraid of faculty reprisal. The lab is not especially safe, but if no one reports it, the conclusion might be drawn that no problems exist.

[z]And to bring it back to another point that was made earlier: when you're not sure if reporting will even trigger any helpful benefits, is the perceived risk of retribution worth some unknown maybe-benefit?

[aa]I heard a lot of the same concerns when we tried doing a "Near Miss" project. Even when anonymity was included, I had several people tell me that the details of the Near Miss would give away who they were, so they didn't want to share it.

[ab]Interesting point. It would seem here that folks fear if they say something is amiss with safety in the workplace, it will be treated as something wrong with themselves that must be fixed.

[ac]Yeah I feel like this kind of plays in to our discussion from last week, when we were talking about people feeling like they're personally in trouble if there is an incident

[ad]A related finding has been cited in other writings on surveys – if you give a survey, and nothing changes after the survey, then people catch on that the survey is essentially meaningless and they either don't take surveys anymore or just give positive answers because it isn't worth explaining negative answers.

[ae]There are risks associated with research labs, but I don't know if I would call them "clear". My sense is that "notable incidents" is a catchphrase people are using about academic lab safety to avoid quantitating the risks any more specifically.

[af]This is interesting to think about. On the one hand, if one works in a higher hazard environment maybe they just NOTICE hazardous situations more and think of them as more important. On the other hand, there is a lot of discussion around the normalization of hazards in an environment that would seem to suggest that they would not report on the hazards because they are normal.

[ag]Maybe they receive more training as well which helps them identify hazards easier. Oil & Gas industry Chemical engineers certainly get more training from my experience.

[ah]Oil and gas workers were also far more likely to participate in the study than the academic groups.  I think private industry has internalized safety differently (not necessarily better or worse) than academia.  And high hazard industries like oil and gas have a good feel for the cost of safety-related incidents.  That definitely gets passed on to the workforce

[ai]How does normalization take culture into effect? Industries have a much longer history of self-reporting and reporting of accidents in general than do academic institutions.

[aj]Some industries have histories of self-reporting in some periods of time. For example, oil and gas did a lot of soul searching after the Deepwater explosion (which occurred the day of a celebration of 3 years with no injury reports), but this trend can fade with time. Alcoa in the 1990s and 2000s is a good example of this. For example, I've looked into Paul H. O'Neill's history with Alcoa. He was safety champion whose work faded soon after he left.

[ak]I wonder if this can be used as a way to normalize the surveys somehow

[al]Hmm, yeah I think you could, but you would also have to take a measure of impression management so that you could remove the variance caused by that from your model.

Erg, but then long surveys…. the eternal dilemma.

[am]I bet there are overlapping biases too that have opposite effects, maybe all you could do is determine to what extent of un-reliability your survey has

[an]In the BMS paper we covered last semester, it was noted that after they started to do the managerial lab visits, the committee actually received MORE information about hazardous situations. They attributed this to the fact that the committee was being very serious about doing something about each issue that was discovered. Once people realized that their complaints would actually be heard & addressed, they were more willing to report.

[ao]and the visits allowed for personal interactions which can be kept confidential as opposed to a paper trail of a complaint

[ap]I imagine that it was also just vindicating to have another human listen to you about your concerns like you are also a human. I do find there is something inherently dehumanizing about surveys (and I say this as someone who relies on them for multiple things!). When it comes to safety in my own workplace, I would think having a human make time for me to discuss my concerns would draw out very different answers.

[aq]Prudent point

[ar]The Hawthorne Effect?

[as]I thought that had to do with simply being "studied" and how it impacts behavior. With the BMS study, they found that people were reporting more BECAUSE their problems were actually getting solved. So now it was actually "worth it" to report issues.

[at]It would be interesting to ask the same question of upper management in terms of whether their safety attitudes are "true" or not. I don't know of any organizations that don't talk the safety talk. Even Amazon includes a worker safety portion to its advertising campaign despite its pretty poor record in that regard.

[au]I wish they would have expanded on this more, I'm really curious to see what methods to do this are out there or what impact it would have, besides providing more support that self-reporting surveys shouldn't be used

[av]That is an excellent point and again something that the reviewers pushed for. We added some text to the discussion about alternative approaches to measure these constructs. Ultimately, what can we do if we buy into the premise that self-report surveys of safety are biased? Certainly one option is to use another referent (e.g., managers) instead of the workers themselves. But that also introduces its own set of bias. Additionally, there are some constructs that would be odd to measure based on anything other than self-report (e.g., safety climate). So I think it's still somewhat of an open question, but a very good one. I'm sure Stephanie will have thoughts on this too for our discussion next week. 🙂 But to me that is the crux of the issue: what do we do with self-reports that tend to be biased?

[aw]Love this, I will have to go read the full paper. Especially your point about safety climate, it will be interesting to see what solutions the field comes up with because everyone in academia uses surveys for this. Maybe it will end up being the same as incident reports, where they aren't a reliable indicator for the culture.

History of the CHAS LST Workshop

In 2018, Dr. Kali A. Miller, at the time a graduate researcher at the University of Illinois Urbana-Champaign and involved in the laboratory safety team there, developed a workshop called “Developing Graduate Student Leadership Skills in Laboratory Safety.” Initially supported by the ACS Committee on Chemical Safety, ACS Division of Chemical Health and Safety, ACS Safety Programs, and the ACS Office of Graduate Education, the workshop was first held at the ACS National Meeting in Spring 2018. A paper describing this pilot workshop and the initial survey analysis can be found here. The workshop continued to be held at ACS National Meetings with different graduate researcher facilitators.

Jessica A. Martin began working closely with Dr. Miller to coordinate the workshop and eventually took over its general management. They interviewed LST teams as a means of learning more about the movement and improving the offerings of the workshop. This work resulted in a publication about LSTs that can be found here.

In early 2020, the COVID pandemic necessitated a rapid shift to a virtual world. Starting with the Spring 2020 ACS National Meeting, Jessica worked with that workshop’s facilitators to rapidly convert it to a virtual format.

As interest in the workshop continued to blossom in the academic community, it was recognized that the virtual version could reach a wider audience and be held independent of conferences. Jessica recruited known and active advocates in the ACS safety community to constitute an LST Mentorship Team to continue to improve the workshop. Graduate students involved in the LST leadership at their own institutions are also regularly recruited to serve as Facilitators and Moderators to share their experiences and improve their own professional skills. The workshop was renamed “Empowering Academic Researchers to Strengthen Safety Culture” in recognition of the changes made to it.

In 2022, Monica Nyansa of Michigan Tech assumed the role of workshop coordinator. She described her experience in the workshop as part of her professional development in a posting for the ACS Chemistry Grad and Postdoc blog.


Description of Roles:

Leader: Recruits and trains Facilitators and Moderators; manages updates to workshop content; organizes logistics; communicates with participants before and after workshop as needed; manages technology during the workshop; leads pre-workshop practice and post-workshop review sessions for Facilitators with the LST Mentorship Team

Facilitator: Participates in updates to workshop content through pre-workshop practice and post-workshop review sessions with the Leader and LST Mentorship Team; delivers workshop content to live audience; facilitates small group and large group activities and discussion during the workshop

Moderator (Position created for virtual workshop): Serves in the practice audience for pre-workshop practice sessions and participates in post-workshop review session; can contribute to updates to workshop content; monitors Breakout Room activities during the Workshop

LST Mentorship Team: This team is a group of senior CHAS members and other safety professionals who support the workshop leader with content review and advice and who often serve as moderators for virtual workshops.

Mentorship Team

Kali A. Miller, ACS Publications
David Finster, Wittenberg University (retired)
Marta Gmurczyk, ACS
Mary Beth Koza, University of North Carolina at Chapel Hill (retired)
Ralph Stuart, Keene State College
Samuella Sigmann, Appalachian State University

Please check our workshop schedule for information and registration for the latest workshop!

SDS’s: What are They Good For?

CHAS Chat October 28, 2021

Chemical safety and information technology have both evolved significantly since OSHA established Material Safety Data Sheets as the basic regulatory unit of chemical safety information in the 1980s. This evolution brings both advantages and challenges for lab workers. This session will discuss both sides of this coin and best practices for using SDS's as a chemical safety information resource in the laboratory setting.

Join us from 1 to 3 PM on Thursday, October 28 to hear chemical safety experts discuss the uses and challenges of safety data sheets as a source of laboratory chemical safety information. Our presenters will be Dr. Dan Kuespert of Johns Hopkins University and Dr. Rob Toreki of ilpi.com.

If you are interested in attending this session, give us your e-mail address here, and we will provide a Zoom connection link the week of the CHAS chat.

Thanks for your interest in Chemical Health and Safety!

The Art & State of Safety Journal Club: “Mental models in warnings message design: A review and two case studies”

Sept 22, 2021 Table Read

The full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753513001598?via%3Dihub

Two case studies in consumer risk perception and exposure assessment, focusing on mothballs and elemental mercury.

For this Table Read, we will only be reviewing the two case studies presented in the paper.

4. Two case studies: mothballs and mercury

Two case studies illustrate the importance of careful adaptation to context[a][b][c]. In the first case, an expert model of consumer-product use of paradichlorobenzene mothballs is enhanced with information from lay users’ mental models, so the model can become more behaviorally realistic (Riley et al., 2006a). In the second case, the mental models elicitation is enhanced with ethnographic methods including participant observation in order to gain critical information about cultural context of mercury use in Latino and Caribbean communities in the New York area (Newby et al., 2006; Riley et al., 2001a, 2001b, 2006b).

Both cases are drawn from chemical consumer products applied in residential uses. The chemicals considered here – paradichlorobenzene and mercury – have a wide variety of consumer and occupational uses that underscore the importance of considering context in order to attain a realistic sense of beliefs about the chemical, exposure behaviors, and resultant risk.

This analysis focuses on what these case studies can tell us about the process of risk communication design[d] in order to take account of the multidimensional aspects of risk perception as well as the overall cultural context of risk. Thus, risk communications may be tailored to the beliefs held by individuals in a specific setting, as well as to the specifics of their situation (physical, social, and cultural factors) which influence perceptions of and decision making about risk.

[e][f][g][h]

4.1. Mothballs

Mothballs are used in homes to control moth infestations in clothing or other textiles. Mothballs are solids (paradichlorobenzene or naphthalene) that sublimate (pass directly from a solid state to a gaseous state) at room temperature. Many are in the shape of balls about 1 in. in diameter, but they are also available as larger cakes or as flakes. The products with the highest exposed surface area (flakes) volatilize most quickly. The product works by releasing vapor that kills adult moths, breaking the insect life cycle.

The primary exposure pathway is inhalation of product vapors, but dermal contact and ingestion may also occur. Cases of ingestion have included children mistaking mothballs for candy and individuals with psychological disorders who compulsively eat household items (Avila et al., 2006; Bates, 2002). Acute exposure to paradichlorobenzene can cause skin, eye, or nasal tissue irritation; acute exposure to naphthalene can cause hemolytic anemia, as well as neurological effects. Chronic exposure to either compound causes liver damage and central nervous system effects. Additional long-term effects of naphthalene exposure include retinal damage and cataracts (USEPA, 2000). Both paradichlorobenzene and naphthalene are classified as possible human carcinogens (IARC Group II B) (IARC, 1999). Since this classification in 1987, however, a mechanism for cancer development has been identified for both naphthalene and paradichlorobenzene, in which the chemicals block enzymes that are key to the process of apoptosis, the natural die-off of cells. Without apoptosis, tumors may form as cell growth continues unchecked (Kokel et al., 2006).

Indoor air quality researchers have studied mothballs through modeling and experiment (e.g., Chang and Krebs, 1992; Sparks et al., 1991, 1996; Tichenor et al., 1991; Wallace, 1991). Research on this topic has focused on developing and validating models of fate and transport of paradichlorobenzene or naphthalene in indoor air. Unfortunately, the effects of consumer behavior on exposure were not considered[i][j]. Due to the importance of the influence of consumer behavior on exposure, it is worth revisiting this work to incorporate realistic approximations of consumer behavior.

Understanding consumer decisions about purchasing, storage, and use is critical for arriving at realistic exposure estimates as well as effective risk management strategies and warnings content. Consumer decision-making is further based upon existing knowledge and understanding of exposure pathways, mental models of how risk arises (Morgan et al., 2001), and beliefs about the effectiveness of various risk-mitigation strategies. Riley et al. (2001a, 2001b) previously proposed a model of behaviorally realistic exposure assessment for chemical consumer products, in which exposure endpoints are modeled in order to estimate the relative effectiveness of different risk mitigation strategies, and by extension, to evaluate warnings (refer to Fig. 1). The goal is to develop warnings that provide readers with the information they need to manage the risks associated with a given product, including how hazards may arise, potential effects, and risk-mitigation strategies.

4.1.1. Methods

The idea behind behaviorally realistic exposure assessment is to consider both the behavioral and physical determinants of exposure in an integrated way (Riley et al., 2000). Thus, user interviews and/or observational studies are combined with modeling and/or experimental studies to quantitatively assess the relative importance of different risk mitigation strategies and to prioritize content for the design of warnings, based on what users already know about a product. Open-ended interviews elicit people’s beliefs about the product, how it works, how hazards arise, and how they may be prevented or mitigated. User-supplied information is used as input to the modeling or experimental design in order to reflect how people really interact with a given product. Modeling can be used to estimate user exposure or to understand the range of possible exposures that can result from different combinations of warning designs and reading strategies.

Riley et al. (2006a) recruited 22 adult volunteers [k][l][m][n][o][p][q][r] who had used mothballs from the business district in Northampton, Massachusetts. Interview questions probed five areas: motivation for product use and selection; detailed use data (location, time activity patterns, amount and frequency of use); mental models of how the product works and how hazards may arise; risk perceptions; and risk mitigation strategies. Responses were analyzed using categorical coding (Weisberg et al., 1996). A consumer exposure model used the participant-supplied data to determine the concentration of paradichlorobenzene in a two-box model (a room or compartment in which moth products are used, and a larger living space).
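A two-box model of this kind can be sketched in a few lines of code. The sketch below is a minimal illustration only, not the model Riley et al. actually used: all parameter values (the mass-transfer conductance `k_a`, the saturation concentration `c_sat`, the volumes, and the airflows) are hypothetical, and the sublimation source is modeled as a simple mass-transfer approximation.

```python
def two_box_concentrations(k_a, c_sat, v1, v2, q12, q_out, t_end=48.0, dt=0.01):
    """Euler-integrate a two-compartment well-mixed model.

    Box 1 is the enclosure holding the mothballs; box 2 is the living space.
    k_a   : mass-transfer conductance of the product surface (m^3/h)
    c_sat : saturation vapor concentration of the active ingredient (arbitrary units)
    v1,v2 : compartment volumes (m^3)
    q12   : air exchange between enclosure and living space (m^3/h)
    q_out : living-space ventilation to outdoors (m^3/h)
    Returns (c1, c2), the compartment concentrations after t_end hours.
    """
    c1 = c2 = 0.0
    for _ in range(int(t_end / dt)):
        emission = k_a * (c_sat - c1)  # sublimation slows as the enclosure saturates
        c1 += (emission - q12 * (c1 - c2)) / v1 * dt
        c2 += (q12 * (c1 - c2) - q_out * c2) / v2 * dt
    return c1, c2

# Hypothetical scenarios: a tightly closed 2 m^3 closet (small q12)
# versus the same source left open to a 30 m^3 living space (large q12).
tight = two_box_concentrations(k_a=10.0, c_sat=1.0, v1=2.0, v2=30.0, q12=0.5, q_out=30.0)
loose = two_box_concentrations(k_a=10.0, c_sat=1.0, v1=2.0, v2=30.0, q12=50.0, q_out=30.0)
```

With these assumed numbers, the living-space concentration in the enclosed scenario comes out more than an order of magnitude lower than in the open scenario. The mass-transfer-limited source term is what makes enclosure matter: in a tight enclosure the vapor concentration approaches saturation and volatilization slows, so less product reaches the living space.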

4.1.2. Uses

Table 1 illustrates the diversity of behavior surrounding the use[s][t][u][v][w] of mothballs in the home. It is clear that many users behave differently around the product from what one might assume from reading directions or warnings on the package label.

65% of participants reported using mothballs to kill or repel moths, which is the product's intended use. 35% reported other uses for the product, including as an air freshener and to repel rodents outdoors. Such uses imply different use behaviors related to the amount of product used and the location where it is applied. Effective use of paradichlorobenzene as an indoor insecticide requires use in an enclosed space, the more airtight the better. Ventilation is not recommended, and individuals should limit their exposure to the non-ventilated space. In contrast, use as a deodorizer disperses paradichlorobenzene throughout a space by design.

These different behaviors imply different resultant exposure levels. For use as an air freshener, the exposure might be higher due to using the product in the open in one’s living space. Exposures might also be lower, as in the reported outdoor use for controlling mammal pests.

A use not reported in this study, perhaps due to the small sample size, or perhaps due to the stigma associated with drug use, is the practice of huffing or sniffing – intentional inhalation in order to take advantage of the physiological effects of volatile chemicals (Weintraub et al., 2000). This use is worth mentioning due to its high potential for injury, even if this use is far less likely than other uses reported here.

The majority of users place mothballs outside of sealed containers in order to control moths, another use that is not recommended by experts or on package labeling[x][y][z][aa][ab][ac][ad][ae]. Even though the product is recommended for active infestations, many users report using the product preventively, increasing the frequency of use and resultant exposure above recommended or expected levels. Finally, the amount used is greater than indicated for a majority of the treatment scenarios reported. These variances from recommended use scenarios underscore the need for effective risk communication, and suggest priority areas for reducing risk.

These results indicate a wide range of residential uses with a variety of exposure patterns. In occupational settings, one might anticipate a similarly broad range of uses. In addition to industrial and commercial uses as mothballs (e.g., textile storage, dry cleaning) and air fresheners (e.g., taxi cabs, restaurants), paradichlorobenzene is used as an insecticide (ants, fruit borers) or fungicide (mold and mildew), as a reagent for manufacturing other chemical products, plastics and pharmaceuticals, and in dyeing (GRR Exports, 2006).

4.1.3. Exposures

Modeling of home uses illustrates the range of possible exposures[af] based on self-reported behavior, comparing high- and low-exposure scenarios from the self-reports to an ‘‘intended use’’ scenario that follows label instructions exactly.

Table 2 shows the inputs used for modeling and resultant exposures. The label employed for the expected use scenario advised that one box (10 oz, 65 mothballs) should be used for every 50 cubic feet (1.4 cubic meters) of tightly enclosed space. Thus, for a 2-cubic-meter closet, 90 mothballs were assumed for the intended use scenario. The low exposure scenario involved a participant self-report in which 10 mothballs were placed in a closed dresser drawer, and the high exposure scenario involved two boxes of mothballs reportedly placed in the corners of a 30-cubic-meter bedroom.
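As a quick sanity check, the label dose scales to the closet volume as the paragraph above describes. The figures come from the text; the arithmetic itself is only an illustration.

```python
balls_per_box = 65      # one 10 oz box of mothballs
label_volume = 1.4      # label coverage per box: 50 cubic feet = 1.4 m^3
closet_volume = 2.0     # closet size in the intended-use scenario, m^3

balls_needed = balls_per_box * closet_volume / label_volume
# about 93 mothballs, consistent with the ~90 assumed for the intended-use scenario
```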

Results show that placing moth products in a tightly enclosed space significantly reduces the concentration in users’ living space.[ag][ah][ai][aj][ak][al][am][an] The high level of exposure resulting from the usage scenario with a large amount of mothballs placed directly in the living space coincided with reports from the user of a noticeable odor and adverse health effects that the user attributed to mothball use.

4.1.4. Risk perception

There was a wide range of beliefs about the function and hazards[ao][ap] of [aq][ar][as][at][au][av][aw]mothballs among participants, as well as a gap in knowledge between consumer and expert ideas of how the product works. Only 14% of the participants were able to correctly identify an active ingredient in mothballs, while 76% stated that they did not know the ingredients. Similarly, 68% could not correctly describe how moth products work, with 54% of all participants believing that moths are repelled by the unpleasant odor. Two-thirds of participants expressed health concerns related to using moth products[ax][ay]. 43% mentioned inhalation, 38% mentioned poisoning by ingestion, 21% mentioned cancer, and 19% mentioned dermal exposure. A few participants held beliefs that were completely divergent from expert models, for example a belief that mothballs ‘‘cause parasites’’ or ‘‘recrystallize in your lungs.’’

A particular concern arises from the common belief that moths are repelled by the smell of mothballs. This may well mean that users would want to be able to smell the product to know it is working – when in fact this would be an indication that users themselves were being exposed and possibly using the product incorrectly. Improvements to mothball warnings might seek to address this misconception of how mothballs work, and emphasize the importance of closed containers, concentrating the product near the treated materials and away from people.

4.2. Mercury as a consumer product

Elemental mercury is used in numerous consumer products, where it is typically encapsulated and causes injury only when a product breaks. Examples include thermometers, thermostats, and items containing mercury switches such as irons or sneakers with flashing lights. The primary hazard arises from the fact that mercury volatilizes at room temperature. Because of its tendency to adsorb onto room surfaces, it has long residence times in buildings compared with volatile organic compounds. Inhaled mercury vapor is readily taken up by the body; in the short term it can cause acute effects on the lungs, ranging from cough and chest pain to pulmonary edema and pneumonitis in severe cases. Long-term exposure can cause neurological symptoms including tremors, polyneuropathy, and deterioration of cognitive function (ATSDR, 1999).

The second case study focuses on specific uses of elemental mercury as a consumer product among members of Latino and Caribbean communities in the United States. Mercury is sold as a consumer product in botánicas (herbal pharmacies and spiritual supply stores), for a range of uses that are characterized variously as folkloric, spiritual or religious in nature.

4.2.1. Methods

Newby et al. (2006) conducted participant observation and interviews with 22 practitioners and shop owners[az], seeking to characterize both practices that involved mercury use and perceptions of resulting risks. These practices were compared and contrasted with uses reported in the literature as generally attributable to Latino and Caribbean religious and cultural traditions in order to distinguish between uses that are part of the Santeria religion, and other uses that are part of other religious practice or secular in nature. Special attention was paid to the context of Santeria, especially insider–outsider dynamics created by its secrecy, grounded in its histories of suppression by dominant cultures. Because the label Latino is applied to a broad diversity of ethnicities, races, and nationalities, the authors sought to attend to these differences as they apply to beliefs and practices related to mercury.

Uses reported in the literature and reported by participants to Newby et al. (2006) and Riley et al. (2001a, 2001b) were modeled to estimate resulting exposures. The fate and transport of mercury in indoor air is difficult to characterize because of its tendency to adsorb onto surfaces and the importance of droplet-size distributions on overall volatilization rates (Riley et al., 2006b). Nevertheless, simple mass transfer and indoor air quality models can be employed to illustrate the relative importance of different behaviors in determining exposure levels.

4.2.2. Uses

Many uses are enclosed, such as placing mercury in an amulet, gourd, walnut, or cement figure (Johnson, 1999; Riley et al., 2001a, 2001b, 2006b; Zayas and Ozuah, 1996). Other uses are more likely to elevate levels of mercury in indoor air to hazardous levels, including sprinkling of mercury indoors or in cars for good luck or protection, or adding mercury to cleaning compounds or cosmetic products (Johnson, 1999; Zayas and Ozuah, 1996).

Some uses, particularly those attributable to Santeria, are occupational in nature. Santeros and babalaos (priests and high priests) described being paid to prepare certain items that use mercury (Newby et al., 2006). Similarly, botanica personnel described selling mercury as well as creating certain preparations with it (Newby et al., 2006; Riley et al., 2001a, 2001b). One case report described exposure from a santero spilling mercury (Forman et al., 2000). Some of this work occurs in the home, making it both occupational and residential.

Across the U.S. population, including in Latino and Caribbean populations, it is more common for individuals to be exposed to elemental mercury vapor through accidental exposures such as thermometer, thermostat and other product breakage or spills from mercury found in schools and abandoned waste sites (Zeitz et al., 2002). The cultural and religious uses described above reflect key differences in use (including intentional vs. accidental exposure) that require attention in design of risk communications.

4.2.3. Exposures

Riley et al. (2001a, 2001b) solved a single-chamber indoor-air quality model analytically to estimate exposures based on scenarios derived from two interviews with mercury users. They similarly modeled scenarios for sprinkling activities reported elsewhere in the literature, and additionally combined mass transfer modeling with indoor air quality modeling to estimate the exposures resulting from the contained uses described in interviews with practitioners (Newby et al., 2006).

Results presented in Table 3 show wide variation in predicted exposures resulting from different behavior patterns in different settings. Contained uses produce the lowest exposures. As long as the mercury remains encapsulated or submerged in other media, it poses little risk. By contrast, uses in open air can result in exposures orders of magnitude greater, depending on amounts and how the mercury is distributed, as droplet size and surface area are key determinants of exposure.
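The qualitative pattern described above can be reproduced with the standard well-mixed single-chamber solution, C(t) = (E/Q)(1 − e^(−Qt/V)). The numbers below are hypothetical illustrations, not the paper's inputs; the point is that at steady state the concentration scales directly with the emission rate, which for mercury is driven by the exposed droplet surface area.

```python
import math

def chamber_concentration(e_rate, q_vent, volume, t_hours):
    """Well-mixed single-chamber model starting from zero concentration:
    C(t) = (E/Q) * (1 - exp(-Q*t/V))."""
    return (e_rate / q_vent) * (1.0 - math.exp(-q_vent * t_hours / volume))

# Hypothetical emission rates (mg/h): an encapsulated/contained use versus
# mercury sprinkled as fine droplets, whose far larger exposed surface area
# drives a far higher volatilization rate.
contained = chamber_concentration(e_rate=0.01, q_vent=50.0, volume=30.0, t_hours=8.0)
sprinkled = chamber_concentration(e_rate=5.0, q_vent=50.0, volume=30.0, t_hours=8.0)
```

Because both scenarios share the same ventilation term, the ratio of the resulting concentrations equals the ratio of the assumed emission rates (500× here), illustrating how behavior differences alone can span orders of magnitude in exposure.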

4.2.4. Risk perception

Newby et al. (2006) found that participants identified the risks of mercury use as primarily legal in nature.[ba][bb][bc][bd][be][bf] Concerns about getting caught by either police or health officials were strong[bg][bh][bi]. After these concerns, practitioners mentioned the risks of mercury use ‘‘backfiring’’ on a spiritual level, particularly if too much is used.[bj][bk][bl] There was some awareness of potential harmful health effects from mercury use[bm][bn], but the perceptions of mercury’s spiritual power and the perceived legal risks of possession and sale figured more prominently in users’ rationales for taking care in using it and clearly affected risk-mitigation strategies described (e.g., not discussing use or sales openly, giving people a bargain so they won’t tell authorities).

Newby et al. (2006) discuss at length the insider–outsider dynamics in the study, and their influence on the strength of fears of illegality of mercury. Because of taboos on sharing details of Santeria practice, the authors warn against providing certain details of practice in risk communications designed by outsiders, as it would undercut the credibility of the warning messages.

Mental models of risk perception are critically important in all cases of consumer mercury use, both intentional and unintentional. When a thermostat or thermometer breaks in a home, many users will use a vacuum to clean up the spill[bo][bp][bq][br], based on a mental model of mercury’s hazards that does not include a notion of mercury as volatile. A key gap in people’s knowledge of mercury relates to its volatility; most lay people do not realize that vacuuming mercury will greatly increase its indoor air concentration, causing a greater health hazard than simply leaving mercury on the floor (Schwartz et al., 1992; Zelman et al., 1991). Thus, many existing risk communications about mercury focus on accidental spills and how (or how not) to clean them up.[bs][bt]


[a]Interesting timing for me on this paper- we’re currently working on a scheme to communicate hazards to staff & faculty at a new facility.  We have an ethnic diversity to consider and a number of the spaces will host the general public for special events.  Lots of perspectives to account for…

[b]If you have info on this next week, it would be interesting to hear what challenges you have run into and what you have done to address them.

[c]I’d be game for that.  I’m just getting into the project and was starting to consider different risk perceptions among different audiences.  This paper has given me some food for thought

[d]This is different from the way I have used the term “risk communication” traditionally. Traditionally, risk communication is designed to help a variety of stakeholders work through scientific information to come to a shared decision about risk. See https://www.epa.gov/risk-communication for example. However, this paper’s approach sounds more like the public health approach used to collect more accurate information about a risk

[e]I really like the concept of “behaviorally realistic exposure assessment”. Interestingly, EPA has taken over regulation of workplace chemicals from OSHA because OSHA was legally constrained from using realistic exposure assessment (specifically, the assumption that PPE may not be worn correctly all the time)

[f]Wow – that is crazy to hear that OSHA would be limited in that way. One would think actual use would be an incredibly important thing to consider.

[g]OSHA is expected to assume that all of its regulations will be followed as part of its risk assessment. EPA doesn’t believe that. This impacted EPA’s TSCA risk assessment of Methylene Chloride

[h]There is a press story about this at

https://www.cbsnews.com/video/family-of-man-who-died-after-methylene-chloride-exposure-call-epa-decision-step-in-the-right-direction/

[i]I’m wondering if the researchers are in academia or from the company. If from companies which supplied mothballs, I’m surprised that this was not one of the first things that they considered.

[j]That’s an interesting question. Mothballs are a product with a long history and they were well dispersed in the economy before regulatory concerns about them arose. So the vendors probably had an established product line before the regulations arose.

[k]Wondering about the age distribution of the study group- when I think of mothballs, I think of my grandparents who would be about 100 years old now.  Maybe younger people would handle the mothballs differently since they are likely to be unfamiliar with them.

[l]I’m also wondering about how they recruited these volunteers because that could introduce bias, for example only people who already know what they are might be interested

[m]Volunteer recruitment is an interesting thought…what avenue did they use to recruit persons who had this product? Currently wondering who still uses them since I don’t know anyone personally who talks about it…

[n]It sounds like they went into the shopping area outside Smith College and recruited people off the street. Northampton is a diverse social environment, but I suspect mothball users are from a particular segment of society

[o]How the recruitment happened seems like a key method that wasn’t discussed sufficiently here. After re-reading this, it seems they may have recruited people without knowing whether they had used mothballs or not.

[p]Interesting thought. When I was in my undergraduate studies, one of my professors characterized a substance as “smelling like mothballs,” and me and all of my peers were like “What? What do mothballs smell like…?” Curious as to whether product risk assessment is different between these generational groups.

[q]Did you and your undergraduate peers go grab a box and sniff them to grok the reference?

[r]I certainly did not! But I wonder how many people would have, if there were available at the time!

[s]Would anyone like to share equivalents they have seen in research labs? Researchers using a product in a way different from intended? Did it cause any safety issues? Were the researchers surprised to find that they were not using as intended? Were the researchers wrong in their interpretations and safety assessment? If so, how?

[t]I presume you’re thinking of something with a specific vendor-defined use, as opposed to a reagent situation where, for example, a change in the acid used to nitric led to unforeseen consequences.

[u]I agree that this can apply to the use of chemicals in research labs. Human error is why we want to build in multiple controls. In terms of examples, using certain gloves or other PPE for improper uses is the first thing that comes to mind.

[v]I have seen hardware store products repurposed in lab settings with unfortunate results, but I can’t recall specifics of these events off the top of my head. (One was a hubcap cleaning solution with HF in it used in the History Dept to restore granite architectural features.)

[w]I have seen antifreeze brought in for Chemistry labs and rock salt brought in for freezing point depression labs…not dangerous, but not what they were intended for.

[x]So is the take-away on this point and the ones that follow in the paragraph that another communication method is needed?  Reading the manual before use is rare (in my experience)- too wordy.  Maybe pictographs or some sort of icon-based method of communications.

[y]This seems like the takeaway to me! Pictures, videos—anything that makes the barrier to content engagement as low as possible. Even making sure that it is more difficult to miss the information when trying to use the product would likely help (ie, not having to look it up in a manual, not having to read long paragraphs, not having to keep another piece of paper around)

[z]In the complete article, they discuss three hurdles to risk understanding:

1. Cognitive heuristics

2. Information overload

3. Believability and self-efficacy

These all sound familiar from the research setting when discussing risks

[aa]Curious how many people actually read package labeling, and of those people how many take the labeling as best-practice and how many take it as suggested use. I’m also curious how an analogy to this behavior might be made in research settings. It seems to me that there would likely be a parallel.

[ab]I believe that the Consumer Product Safety Commission does research into this question

[ac]Other considerations: is the package labeling comprehensible for people (appropriate language)? If stuff is written really small, how many people actually take the time to read it? Would these sorts of instructions benefit more from pictures rather than words?

[ad]I was watching a satirical Better Call Saul “legal ethics” video yesterday where the instructor said “it doesn’t matter how small the writing is, you just have to have it there”. See https://www.dailymotion.com/video/x7s7223 for that specific “lesson”

[ae]I think we’d see a parallel in cleaning practices, especially if it’s a product that gets diluted different amounts for different tasks. Our undergraduate students for example think all soap is the same and use straight microwash if they see it out instead of diluting.

[af]Notably, even when putting them outside in a wide area, you can still smell them from a distance, which would make them a higher exposure than expected.  Pictures and larger writing on the boxes would definitely help, but general awareness may need to be shared another way.

[ag]Historical use was in drawers with sweaters, linens, etc (which is shown to be the “low exposure”)…were these products inadvertently found to be useful in other residential uses much later?

[ah]I wonder if the “other uses” were things consumers figured out and shared with their networks – but those uses would actually increase exposure.

[ai]It appears so!  Another issue may be the amount of the product used.  Using them outside rather than in a drawer, may minimize the exposure some, but that would be relative to exactly how much of the product was used…

[aj]The comment about being able to smell it to “know it is working” is also interesting. It makes me think of how certain smells (lemon scent) are associated with “cleanliness” even if it has nothing to do with the cleanliness.

[ak]I’ve also heard people refer to the smell of bleach as the smell of clean – although if you can smell it, it means you are being exposed to it!

[al]This is making me second guess every cleaning product I use!

[am]It also makes me wonder if added scent has been used to discourage people from overusing a product.

[an]I think that is why some people tout vinegar as the only cleaner you will ever need!

[ao]What do you think would lead to this wide range?

[ap]If they are considering beliefs that were passed down through parents and grandparents this would also correlate with consumers not giving attention to the packaging because they grew up with a certain set of beliefs and knowledge and they have never thought to question it.

[aq]Is it strange to hear this given that the use and directions are explained right on the packaging?

[ar]I don’t think it is that strange. I think a lot of people don’t bother to read instructions or labels closely, especially for a product that they feel they are already familiar with (grow up with it being used)

[as]Ah, per my earlier comment regarding whether or not people read the packaging—I am not actually very surprised by this. I think there is somewhat of an implicit sense that products made easily available are fairly safe for use. That, coupled with a human tendency to acclimate to unknown situations after no obvious negative consequences and the sheer volume of text meant to protect corporations (re: Terms of Use, etc), leads people to sort of ignore these things in their day-to-day.

[at]I agree it seems that people use products they saw used throughout their childhood…believed them to be effective and hence don’t read the packaging.  (Clearly going home to read the packaging myself…as I used them recently to repel skunks from my yard).

[au]Since the effects of exposure to mothball active ingredients are not acute in all but the most extreme cases (like ingestion), it is unlikely that any ill health effects would even be linked to mothballs

[av]I have wondered if a similar pattern happens with researchers at early stages. If the researcher is introduced to a reagent or a process by someone who doesn’t emphasize safety considerations, that early researcher thinks of it as relatively safe – then doesn’t do the risk assessment on their own.

[aw]Yes, exactly. Long-term consequences are much harder for us to grapple with than acute consequences, which may lead to overconfidence and overexposure

[ax]I wonder why with so many having health concerns, only 12% used on the correct “as needed” basis.

[ay]A very, very interesting question. I wonder if it has something to do with a sense that “we just don’t know” along with a need to find a solution to an acute problem. i.e., maybe people are worried about whether or not it is safe in a broad, undefined, and somewhat intractable manner, but are also looking to a quick solution to a problem they are currently facing, and perhaps ignore a pestering doubt

[az]Again here, I’m wondering why more is not described about how they identified participants, because it is a small sample size and there is a possibility for bias

[ba]This is an important finding. Public risk perception and professional risk perception can be quite different. I don’t think regulators consider how they might contribute to this gap because each chemical is regulated in isolation from other risks and exposures.

[bb]It is also related to the idea of how researchers view EHS inspections. Do they see them as opportunities for how they can make their research work safer? Or do they merely see them as annoying exercises that might get them “in trouble” somehow?

[bc]I think that in both Hg case and the research case, there is a power struggle expressed as a culture clash issue. Both the users of Hg for spiritual purposes and researchers are likely to feel misunderstood by mainstream society represented by the external oversight process

[bd]I think this is *such* an important takeaway. The sense as to whether long documents (terms of use and other contracts, etc) and regulatory bodies (EHS etc) are meant to protect the *people/consumers* or whether they are meant to protect the *corporation* I think is a big deal in our society. Contracts, usage information, etc abound, but it’s often viewed (and used) as a means to protect from liability, not as a means to protect from harm. I think people pick up on that.

[be]I am in total agreement – recently I sat through a legal deposition for occupational exposure related mesothelioma, it was unsettling how each representative from each company pushed the blame off in every other possible direction, including the defendant. There are way more legal protections in place for companies than I could have ever imagined.

[bf]There is some discussion of trying to address this problem with software user’s agreements, but I haven’t heard of this concern on the chemical use front.

[bg]This is to say there is a disconnect to reasoning behind the legal implications? Assuming most are not aware of the purpose of the regulations as protections for people?

[bh]I don’t know of any agency that would recognize use of Hg as a spiritual practice. Some Native Americans have found their spiritual practices outlawed because their use of a material is a different scenario from the risk scenario that the regulators base their rules on.

[bi]I agree with your comment about a disconnect. Perhaps if they understood more about the reasons for the laws they would be more worried about their health rather than getting in trouble.

[bj]To a practitioner, is a spiritual “backfire” completely different from a health effect, or just a different explanation of the same outcome?

[bk]Good question

[bl]Good point – I thought about this too. I’d love to hear more about what the “spiritual backfire” actually looked like. Makes me think of the movie “The Exorcism of Emily Rose” where they showed her story from the perspective of someone who thinks she is possessed by demons versus someone who thinks she is mentally ill.

[bm]I am curious to find out how risk communication plays a role here, because it seems those using the mercury know about its potential health hazard.

[bn]Agree – It does say “some” awareness so I would be interested to see how bad they think it is for health vs reality. It looks like they are doing a risk analysis of sorts and are thinking the benefits outweigh the risks.

[bo]I’m not sure how to articulate it, but this is very different than the spiritual use of mercury.  Spiritual users can understand the danger of mercury exposure but feel the results will be worth it.  The person wielding a vacuum does not understand how Hg’s hazard is increased through volatilization.  I suspect a label on vacuum cleaners that said ‘NOT FOR MERCURY CLEANUP’ would be fairly effective.

[bp]Would vacuum companies see this as worth doing today? I don’t think I really encountered mercury until I was working in labs – it is much less prevalent today than it used to be (especially in homes), so I wonder if they would not see it as worth it by the numbers.

[bq]Once you list one warning like this, vacuum companies might need to list out all sorts of other hazards that the vacuum is not appropriate for cleanup

[br]Also, mercury is being phased out in homes, but you still see it around sometimes, especially in thermometers. Keep in mind this paper is from 2014.

[bs]I don’t understand this statement in the context of the paragraph. Which risk communication messages is she referring to? I know that institutional response to Hg spills has changed a lot over the last 30 years. There are hazmat emergency responses to them in schools and hospitals monthly

[bt]I think this vacuum example is just showing how there is a gap in the risk communications to the public (not practitioners), since they mainly focused on clean up rather than misuse. It would be nice if there was a reference or supporting info here. They may have looked at packaging from different mercury suppliers.

CHAS Workshops 2021

The Division of Chemical Safety presents several workshops as part of the American Chemical Society’s continuing education program on chemical safety issues. CHAS offers two workshop tracks, aimed at either specific stakeholder groups or chemistry professionals who need specific chemical safety education to round out their expertise, whether in the lab or in business settings. The stakeholder workshops are taught by people from that group who have safety experience (e.g., grad students teaching grad students), whereas the professional development workshops are taught by full-time Environmental Health and Safety professionals.

To register for any of these workshops, click on the workshop description for that workshop.

We can also help arrange presentations of these workshops in other venues. If you are interested in arranging any of these trainings for your company or local section meetings, contact us at membership@dchas.org

Also known as the Lab Safety Teams workshop, taught by chemistry graduate students with experience implementing and maintaining laboratory safety programs at their home institutions. This workshop will next be offered Sunday, October 17, 2021; you can register for it here.
Conducting risk assessments in the research lab requires special considerations. This workshop will explore using the RAMP paradigm to meet this need and will be offered this November. You can register for this workshop here.

The ACS Youtube channel hosts chemical safety related videos on a variety of topics and styles for specific audiences. They are all available for Creative Commons use in classes with attribution.
This extensive 15-hour course on chemical safety in the laboratory is designed for undergraduate STEM students and others who need to review the fundamentals of chemical safety in the laboratory. Register at https://institute.acs.org/courses/foundations-chemical-safety.html
ACS Essentials of Lab Safety for General Chemistry
Online, registration fee required.

If you have any questions about these workshops, contact us at membership@dchas.org or complete the workshop interest form below.

Fall 2021 National Meeting Technical Presentations

2021 CHAS Awards Presentations

Safety in Lab Facilities Symposium

The Impact of Covid on EHS

General Papers

Safety Papers from Symposia in Other Divisions

Chemical Education (CHED)

Chemical Information (CINF)