All posts by Ralph Stuart

Are employee surveys biased? CHAS Journal club, Oct 13, 2021

Impression management as a response bias in workplace safety constructs

In October 2021, the CHAS Journal Club reviewed the 2019 paper by Keiser & Payne examining the impact of “impression management” on the way workers in different sectors responded to safety climate surveys. The authors attended the October 13 meeting to discuss their work with the group. Below is their presentation file, as well as the comments from the table read the week before.

Our thanks to Drs. Keiser and Payne for their work and their willingness to talk with us about it!

10/06 Table Read for The Art & State of Safety Journal Club

Excerpts from “Are employee surveys biased? Impression management as a response bias in workplace safety constructs”

Full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753518315340?casa_token=oOShJnb3arMAAAAA:c4AcnB3fwnlDYlol3o2bcizGF_AlpgKLdEC0FPjkKg8h3CBg0YaAETq8mfCY0y-kn7YcLmOWFA

Meeting Plan

  • (5 minutes) Sarah to open meeting
  • (15 minutes) All participants read complete document
  • (10 minutes) All participants use “Comments” function to share thoughts
  • (10 minutes) All participants read others’ Comments & respond
  • (10 minutes) All participants return to their own Comments & respond
  • (5 minutes) Sarah announces next week’s plans & closes meeting

Introduction

The ultimate goal of workplace safety research is to reduce injuries and fatalities on the job.[a] Safety surveys that measure various safety-related constructs, including safety climate (Zohar, 1980), safety motivation and knowledge (Griffin and Neal, 2000), safety participation and compliance (Griffin and Neal, 2000), and outcome indices (e.g., injuries, incidents, and near misses) are the primary way that researchers gather relevant safety data. They are also used extensively in industry. It is quite common to administer self-report measures of both safety predictors and outcomes in the same survey, which introduces the possibility that method biases prevalent in self-report measures contaminate relationships among safety constructs (Podsakoff et al., 2012).

The impetus for the current investigation is the continued reliance by safety researchers and practitioners on self-report workplace safety surveys. Despite evidence that employees frequently underreport injuries (Probst, 2015; Probst and Estrada, 2010), researchers have not directly examined the possibility that employees portray the workplace as safer than it really is on safety surveys[b]. Correspondingly, the current investigation strives to answer the following question: Are employee safety surveys biased? In this study, we focus on one potential biasing variable, impression management, defined as conscious attempts at exaggerating positive attributes and ignoring negative attributes (Connelly and Chang, 2016; Paulhus, 1984). The purpose of this study is to estimate the prevalence of impression management as a method bias in safety surveys based on the extent to which impression management contaminates self-reports of various workplace safety constructs and relationships among them.[c][d][e]

Study 1

Method

This study was part of a larger assessment of safety climate at a public research university in the United States using a sample of research laboratory personnel. The recruitment e-mail was concurrently sent to people who completed laboratory safety training in the previous two years (1841) and principal investigators (1897). Seven hundred forty-six laboratory personnel responded to the survey… To incentivize participation, respondents were given the option to provide their name and email address after they completed the survey in a separate survey link, in order to be included in a raffle for one of five $100 gift cards.

Measures:

  • Safety climate
  • Safety knowledge, compliance, and participation
  • Perceived job risk and safety outcomes
  • Impression management

Study 2

A second study was conducted to:

  1. Further examine impression management as a method bias in self-reports of safety while
  2. Accounting for personality trait variance in impression management scales.

A personality measure was administered to respondents and controlled to more accurately estimate the degree to which self-report measures of safety constructs are susceptible to impression management as a response bias.
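To make the idea of statistically “controlling for” trait variance concrete, here is a minimal sketch of the general approach of partialling a third variable out of a correlation. The data, variable names, and effect sizes below are hypothetical illustrations, not the authors’ dataset or their exact analysis.

```python
# Minimal sketch (hypothetical data, not the authors' analysis): what it means to
# "control for" a personality composite when estimating the relationship between
# an impression-management score and a self-reported safety construct.
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized scores
personality = rng.normal(size=n)                       # Alpha trait composite
impression_mgmt = 0.5 * personality + rng.normal(size=n)
safety_compliance = 0.3 * impression_mgmt + 0.2 * personality + rng.normal(size=n)

def residualize(y, x):
    """Return the part of y not linearly explained by x (least-squares residuals)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Zero-order correlation vs. correlation after controlling for personality
r_zero = np.corrcoef(impression_mgmt, safety_compliance)[0, 1]
r_partial = np.corrcoef(residualize(impression_mgmt, personality),
                        residualize(safety_compliance, personality))[0, 1]

print(f"zero-order r = {r_zero:.2f}, r controlling for personality = {r_partial:.2f}")
```

If the relationship between impression management and the safety construct shrinks after partialling out the trait composite, the shared variance was attributable to personality rather than to response bias per se; that is the logic behind Study 2’s design.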

Method

A similar survey was distributed to all laboratory personnel at a different university located in Qatar. A recruitment email was sent to all faculty, staff, and students at the university (532 people), which included a link to an online laboratory safety survey. No incentive was provided for participating and no personally identifying information was collected from participants. A total of 123 laboratory personnel responded.[f]

Measures:

  • Same constructs as Study 1, plus
  • Personality

Study 3

Two limitations inherent in Study 1 and Study 2 were addressed in a third study, specifically, score reliability and generalizability.

Method

A safety survey was distributed to personnel at an oil and gas company in Qatar, as part of a larger collaboration to examine the effectiveness of a safety communication workshop. All employees (∼370) were invited to participate in the survey and 107 responded (29% response rate). Respondents were asked to report their employee identification numbers at the start of the survey, which were used to identify those who participated in the workshop. A majority of employees provided their identifying information (96, 90%).

Measures:

  • Same constructs used in Study 1, plus
  • Risk propensity
  • Safety communication
  • Safety motivation
  • Unlikely virtues

Conclusion[g][h][i][j][k][l]

Safety researchers have provided few direct estimates of method bias [m][n][o][p]in self-report measures of safety constructs. This oversight is especially problematic considering they rely heavily on self-reports to measure safety predictors and criteria.

The results from all three studies, but especially the first two, suggest that self-reports of safety are susceptible to dishonesty[q][r][s][t][u] aimed at presenting an overly positive representation of safety.[v][w][x][y][z][aa] In Study 1, self-reports of safety knowledge, climate, and behavior appeared to be more susceptible to impression management compared to self-reports of perceived job risk and safety outcomes. Study 2 provided additional support for impression management as a method bias in self-reports of both safety predictors and outcomes. Further, relationships between impression management and safety constructs remained significant even when controlling for Alpha personality trait variance (conscientiousness, agreeableness, emotional stability). Findings from Study 3 provided less support for the biasing effect of impression management on self-report measures of safety constructs (average VRR=11%). However, the unlikely virtues measure [this is a measure of the tendency to claim uncommon positive traits] did yield more reliable scores than those observed in Study 1 and Study 2, and it was significantly related to safety knowledge, motivation, and compliance. Controlling for the unlikely virtues measure led to the largest reductions in relationships with safety knowledge. A further exploratory comparison of identified vs. anonymous respondents found that mean scores on the unlikely virtues measure were not significantly different for the identified subsample compared to the anonymous subsample; however, unlikely virtues had a larger impact on relationships among safety constructs for the anonymous subsample.

The argument for impression management as a biasing variable in self-reports of safety relied on the salient social consequences of responding and other costs of providing a less desirable response, including, for instance, negative reactions from management, remedial training, or overtime work[ab][ac]. Findings suggest that the influence of impression management on self-report measures of safety constructs depends on various factors[ad] (e.g., distinct safety constructs, the identifying approach, industry and/or safety salience) rather than supporting the ubiquitous claim that impression management serves as a pervasive method bias.

The results of Study 1 and Study 3 suggest that impression management was most influential as a method bias in self-report measures of safety climate, knowledge, and behavior, compared to perceived risk and safety outcomes. These results might reflect the more concrete nature of the latter constructs, which are grounded in actual experience with hazards and outcomes. Moreover, these findings are in line with Christian et al.’s (2009) conclusion that measurement biases are less of an issue for safety outcomes compared to safety behavior. These findings, in combination with theoretical rationale, suggest that the social consequences of responding are more strongly elicited by self-report measures of safety climate, knowledge, and behavior, compared to self-reports of perceived job risk and safety outcomes. Items in safety perception and behavior measures fittingly tend to be more personally (e.g., safety compliance – “I carry out my work in a safe manner.”) and socially relevant (e.g., safety climate – “My coworkers always follow safety procedures.”).

The results from Study 2, compared to findings from Study 1 and Study 3, suggest that assessments of job risk and outcomes are also susceptible to impression management. The Alpha personality factor generally accounted for a smaller portion of the variance in the relationships between impression management and perceived risk and safety outcomes. The largest effects of impression management on the relationships among safety constructs were for relationships with perceived risk and safety outcomes. These results align with research on injury underreporting (Probst et al., 2013; Probst and Estrada, 2010) and suggest that employees may have been reluctant to report safety outcomes even when they were administered on an anonymous survey used for research purposes.

We used three samples in part to determine if the effect of impression management generalizes. However, results from Study 3 were inconsistent with the observed effect of impression management in Studies 1 and 2. One possible explanation is that these findings are due to industry differences and specifically the salience of safety. There are clear risks associated with research laboratories as exemplified by notable incidents; [ae]however, the risks of bodily harm and death in the oil and gas industry tend to be much more salient (National Academies of Sciences, Engineering, and Medicine, 2018). Given these differences, employees from the oil and gas industry as reflected in this investigation might have been more motivated to provide a candid and honest response to self-report measures of safety.[af][ag][ah][ai][aj] This explanation, however, is in need of more rigorous assessment.

These results in combination apply more broadly to method bias [ak][al][am]in workplace safety research. The results of these studies highlight the need for safety researchers to acknowledge the potential influence of method bias and to assess the extent to which measurement conditions elicit particular biases.

It is also noteworthy that impression management suppressed relationships in some cases; thus, accounting for impression management might strengthen theoretically important relationships. These results also have meaningful implications for organizations because positively biased responding on safety surveys can contribute to the incorrect assumption that an organization is safer than it really is[an][ao][ap][aq][ar][as][at].

The results of Study 2 are particularly concerning and practically relevant as they suggest that employees in certain cases are likely to underreport the number of safety outcomes that they experience even when their survey responses are anonymous. However, these findings were not reflected in results from Study 1 and Study 3. Thus, it appears that impression management serves as a method bias among self-reports of safety outcomes only in particular situations. Further research[au][av][aw] is needed to explicate the conditions under which employees are more/less likely to provide honest responses to self-report measures of safety outcomes.

———————————————————————————————————————

BONUS MATERIAL FOR YOUR REFERENCE:

For reference only, not for reading during the table read

Respondents and Measures

  • Study 1

Respondents:

graduate students (229, 37%), undergraduate students (183, 30%), research scientists and associates (123, 20%), post-doctoral researchers (28, 5%), laboratory managers (25, 4%), and principal investigators (23, 4%)

329 (53%) female; 287 (47%) male

377 (64%) White; 16 (3%) Black; 126 (21%) Asian; 72 (12%) Hispanic

Age (M = 31, SD = 13.24)

Respondents worked in various types of laboratories, including: biological (219, 29%), animal biological (212, 28%), human subjects/computer (126, 17%), chemical (124, 17%), and mechanical/electrical (65, 9%)

Measures:

  • Safety Climate

Nine items from Beus et al.’s (2019) 30-item safety climate measure were used in the current study. The nine-item measure included one item from each of five safety climate dimensions (safety communication, co-worker safety practices, safety training, safety involvement, safety rewards) and two items each from the management commitment and the safety equipment and housekeeping dimensions. The nine items were identified based on factor loadings from Beus et al. (2019). Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Safety knowledge, compliance, and participation

Respondents completed slightly modified versions of Griffin and Neal’s (2000) four-item measures of safety knowledge (e.g., “I know how to perform my job in the lab in a safe manner.”), compliance (e.g., “I carry out my work in the lab in a safe manner.”), and participation (e.g., “I promote safety within the laboratory.”). Items were completed using a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Perceived job risk and safety outcomes

Respondents completed a three-item measure of perceived job risk (e.g., “I encounter personally hazardous situations while in the laboratory;” 1=almost always untrue, 5=almost always true; Jermier et al., 1989). Respondents also provided safety incident data regarding the number of injuries, incidents, and near misses that they experienced in the last 12 months.

  • Impression Management

Four items were selected from Paulhus’s (1991) 20-item Balanced Inventory of Desirable Responding. These items were selected based on a review of Paulhus’s (1991) full measure and an assessment of those items that were most relevant and best representative of the full measure (Table 1). Items were completed using a five-point accuracy scale (1=very inaccurate, 5=very accurate). Ideally this survey would have included Paulhus’s (1991) full 20-item measure. However, as is often the case in survey research, we had to balance construct validity with survey length and concerns about respondent fatigue and for these reasons only a subset of Paulhus’s (1991) measure was included.

  • Study 2

Respondents:

research scientists or post-doctoral researchers (43; 39%), principal investigators (12; 11%), laboratory managers and coordinators (12; 11%), graduate students (3; 3%), faculty teaching in a laboratory (3; 3%), and one administrator (1%)

Respondents primarily worked in: chemical (55; 45%), mechanical/electrical (39; 32%), and uncategorized laboratories (29; 24%)

Measures:

  • Safety Constructs

Respondents completed the same six self-report measures of safety constructs that were used in Study 1: safety climate, safety knowledge, safety compliance, safety participation, perceived job risk, and injuries, incidents, and near misses in the previous 12 months.

  • Impression Management

Respondents completed a five-item measure of impression management from the Bidimensional Impression Management Index (Table 1; Blasberg et al., 2014). Five items from the Communal Management subscale were selected based on an assessment of their quality and degree to which they represent the 10-item scale. A subset of Blasberg et al.’s (2014) full measure was used because of concerns from management about survey length. Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Personality

Conscientiousness, agreeableness, and emotional stability were assessed using six items from Gosling et al.’s (2003) 10-item personality measure. Four items from the 10-item measure assessing openness to experience and extraversion were not included in this study. Respondents were asked to indicate the degree to which adjectives were representative of them (i.e., conscientiousness – “dependable, self-disciplined;” agreeableness – “sympathetic, warm;” emotional stability – “calm, emotionally stable”; 1=strongly disagree, 7=strongly agree), and responses were combined to represent the Alpha personality factor. One conscientiousness item was dropped because it had a negative item-total correlation (“disorganized, careless” [reverse coded]). This was not surprising as it was the only reverse-scored personality item administered.

  • Study 3

Respondents:

The typical respondent was male (101, 94%) and had no supervisory responsibility (72, 67%); however, some women (6, 6%), supervisors (17, 16%), and managers/senior managers (16, 15%) also completed the survey. The sample was diverse in national origin, with most respondents from India (44, 42%) and Pakistan (25, 24%).

Measures:

  • Safety Constructs

Respondents completed five of the same self-report measures of safety constructs used in Study 1 and Study 2, including safety climate (Beus et al., 2019), safety knowledge (Griffin and Neal, 2000), safety compliance (Griffin and Neal, 2000), safety participation (Griffin and Neal, 2000), and injuries, incidents, and near misses in the previous 6 months. Respondents completed a similar measure of perceived job risk (Jermier et al., 1989) that included three additional items assessing the degree to which physical, administrative, and personal controls…

  • Unlikely Virtues

Five items were selected from Weekley’s (2006) 10-item unlikely virtues measure (see also Levashina et al., 2014; Table 1) and were responded to on a 5-point agreement scale (1=strongly disagree; 5=strongly agree). Akin to the previous studies, an abbreviated version of the measure was used because of constraints with survey length and the need to balance research and organizational objectives.

[a]In my mind, this is a negative way to start a safety research project. The ultimate goal of the organization is to complete its mission and injuries and fatalities are not part of the mission. So this puts the safety researcher immediately at odds with the organization.

[b]I wonder if this happens beyond surveys—do employees more generally portray a false sense of safety to co-workers, visitors, employers, trainees, etc? Is that made worse by surveying, or do surveys pick up on bias that exists more generally in the work culture?

[c]Employees always portray things in a better light on surveys because who really knows if it’s confidential

[d]Not just with regard to safety; most employees, I suspect, want to portray their businesses in a positive light. Good marketing…

[e]I think that this depends on the quality of the survey. If someone is pencil whipping a questionnaire, they are probably giving answers that will draw the least attention. However, if the questions are framed in an interesting way, I believe it is possible to have a survey be both a data collection tool and a discussion starter. Surveys are easy to generate, but hard to do well.

[f]In my experience, these are pretty high response rates for the lab surveys (around 20%).

[g]A concern that was raised by a reviewer on this paper was that it leads to a conclusion of blaming the workers. We certainly didn't set out to do that, but I can understand that perspective. I'm curious if others had that reaction.

[h]I had the same reaction and I can see how it could lead to a rosier estimate of safety conditions.

[i]There is an interesting note below where you mention the possible outcomes of surveys that "go poorly" if you will. If the result is that the workers are expected to spend more of their time and energy "fixing" the problem, it is probably no surprise that they will just say that there is no problem.

[j]I am always thinking about this type of thing—how results are framed and who the finger is being pointed at. I can see how this work can be interpreted that way, but I also see it from an even bigger picture—if people are feeling that they have to manage impressions (for financial safety, interpersonal safety, etc) then to me it stinks of a bigger cultural, systemic problem. Not really an individual one.

[k]Well – the "consequences" of the survey are really in the hands of the company or institution. A researcher can go in with the best of intentions, but a company can (and often does) respond in a way that discourages others from being forthright.

[l]Oh for sure! I didn't mean to shoulder the bigger problem on researchers or the way that research is conducted—rather, that there are other external pressures that are making individuals feel like managing people's impressions of them is somehow more vital than reporting safety issues, mistakes, needs, etc. Whether that's at the company, institution, or greater cultural level (probably everywhere), I don't think it's at the individual level.

[m]My first thought on bias in safety surveys had to do more with the survey model rather than method bias.  Most all safety surveys I have taken are based on the same template and questions generally approach safety from the same angle.  I haven't seen a survey that asks the same question several ways in the course of the survey or seen any control questions to attempt to determine validity of answers.  Perhaps some of the bias comes from the general survey format itself….

[n]I agree. In reviewing multiple types of surveys trying to target safety, there are many confounding variables. Trying to find a really good survey is tough – and I'm not entirely sure that it is possible to create something that can be applied by all. It is one of the reasons I was so intrigued by the BMS approach.

[o]Though—a lot of that work (asking questions multiple ways, asking control questions, determining validity and reliability, etc) is done in the original work that initially develops the survey metric. Just because it's not in a survey that one is taking or administering, doesn't necessarily mean that work isn't there

[p]Agreed – There are a lot of possible method biases in safety surveys. Maybe impression management isn't the most impactful. There just hasn't been much research in this area as it relates to safety measures, but certainly there is a lot out there on method biases more broadly. Stephanie and I had a follow up study (conference paper) looking at blatant extreme responding (using only the extreme endpoints on safety survey items). Ultimately, that too appears to be an issue

[q]In looking back over the summary, I was drawn to the use of the word “dishonesty.” That implies intent. I’m wondering whether it is equally likely that people are lousy at estimating risk and generally overestimate their own capabilities (Dunning-Kruger, anyone?). So it is not so much about dishonesty but more about incompetency.

[r]They are more likely scared of retribution.

[s]This is an interesting point and I do think there is a part of the underestimation that has to do with an unintentional miscalibration. But, I think the work in this paper does go to show that some of the underestimation is related to people's proclivity to attempt to control how people perceive them and their performance.

[t]Even so, that proclivity is not necessarily outright dishonesty.

[u]I agree. I doubt that the respondents set out with an intent to be fraudulent or dishonest. Perhaps a milder or softer term would be more accurate?

[v]I wonder how strong this effect is for, say, graduate students who are in research labs under a PI who doesn't value safety

[w]I think it’s huge. I know I see a difference in speaking with people in private versus our surveys.

[x]Within my department, I know I became very cynical about surveys that were administered by the department or faculty members. Nothing ever seemed to change, so it didn't really matter what you said on them.

[y]I also think it is very significant. We are currently dealing with an issue where the students would not report safety violations to our Safety Concerns and Near Misses database because they were afraid of faculty reprisal. The lab is not especially safe, but if no one reports it, the conclusion might be drawn that no problems exist.

[z]And to bring it back to another point that was made earlier: when you're not sure if reporting will even trigger any helpful benefits, is the perceived risk of retribution worth some unknown maybe-benefit?

[aa]I heard a lot of the same concerns when we tried doing a "Near Miss" project. Even when anonymity was included, I had several people tell me that the details of the Near Miss would give away who they were, so they didn't want to share it.

[ab]Interesting point. It would seem here that folks fear if they say something is amiss with safety in the workplace, it will be treated as something wrong with themselves that must be fixed.

[ac]Yeah I feel like this kind of plays in to our discussion from last week, when we were talking about people feeling like they're personally in trouble if there is an incident

[ad]A related finding has been cited in other writings on surveys – if you give a survey, and nothing changes after the survey, then people catch on that the survey is essentially meaningless and they either don't take surveys anymore or just give positive answers because it isn't worth explaining negative answers.

[ae]There are risks associated with research labs, but I don't know if I would call them "clear". My sense is that "notable incidents" is a catchphrase people are using about academic lab safety to avoid quantitating the risks any more specifically.

[af]This is interesting to think about. On the one hand, if one works in a higher hazard environment maybe they just NOTICE hazardous situations more and think of them as more important. On the other hand, there is a lot of discussion around the normalization of hazards in an environment that would seem to suggest that they would not report on the hazards because they are normal.

[ag]Maybe they receive more training as well which helps them identify hazards easier. Oil & Gas industry Chemical engineers certainly get more training from my experience.

[ah]Oil and gas workers were also far more likely to participate in the study than the academic groups.  I think private industry has internalized safety differently (not necessarily better or worse) than academia.  And high hazard industries like oil and gas have a good feel for the cost of safety-related incidents.  That definitely gets passed on to the workforce

[ai]How does normalization take culture into effect? Industries have a much longer history of self-reporting and reporting of accidents in general than do academic institutions.

[aj]Some industries have histories of self-reporting in some periods of time. For example, oil and gas did a lot of soul searching after the Deepwater explosion (which occurred the day of a celebration of 3 years with no injury reports), but this trend can fade with time. Alcoa in the 1990s and 2000s is a good example of this. For example, I’ve looked into Paul H. O’Neill’s history with Alcoa. He was a safety champion whose work faded soon after he left.

[ak]I wonder if this can be used as a way to normalize the surveys somehow

[al]Hmm, yeah I think you could, but you would also have to take a measure of impression management so that you could remove the variance caused by that from your model.

Erg, but then long surveys…. the eternal dilemma.

[am]I bet there are overlapping biases too that have opposite effects, maybe all you could do is determine to what extent of un-reliability your survey has

[an]In the BMS paper we covered last semester, it was noted that after they started to do the managerial lab visits, the committee actually received MORE information about hazardous situations. They attributed this to the fact that the committee was being very serious about doing something about each issue that was discovered. Once people realized that their complaints would actually be heard & addressed, they were more willing to report.

[ao]and the visits allowed for personal interactions which can be kept confidential as opposed to a paper trail of a complaint

[ap]I imagine that it was also just vindicating to have another human listen to you about your concerns like you are also a human. I do find there is something inherently dehumanizing about surveys (and I say this as someone who relies on them for multiple things!). When it comes to safety in my own workplace, I would think having a human make time for me to discuss my concerns would draw out very different answers.

[aq]Prudent point

[ar]The Hawthorne Effect?

[as]I thought that had to do with simply being "studied" and how it impacts behavior. With the BMS study, they found that people were reporting more BECAUSE their problems were actually getting solved. So now it was actually "worth it" to report issues.

[at]It would be interesting to ask the same question of upper management in terms of whether their safety attitudes are "true" or not. I don't know of any organizations that don't talk the safety talk. Even Amazon includes a worker safety portion to its advertising campaign despite its pretty poor record in that regard.

[au]I wish they would have expanded on this more, I'm really curious to see what methods to do this are out there or what impact it would have, besides providing more support that self-reporting surveys shouldn't be used

[av]That is an excellent point and again something that the reviewers pushed for. We added some text to the discussion about alternative approaches to measure these constructs. Ultimately, what can we do if we buy into the premise that self-report surveys of safety are biased? Certainly one option is to use another referent (e.g., managers) instead of the workers themselves. But that also introduces its own set of bias. Additionally, there are some constructs that would be odd to measure based on anything other than self-report (e.g., safety climate). So I think it's still somewhat of an open question, but a very good one. I'm sure Stephanie will have thoughts on this too for our discussion next week. 🙂 But to me that is the crux of the issue: what do we do with self-reports that tend to be biased?

[aw]Love this, I will have to go read the full paper. Especially your point about safety climate, it will be interesting to see what solutions the field comes up with because everyone in academia uses surveys for this. Maybe it will end up being the same as incident reports, where they aren't a reliable indicator for the culture.

History of the CHAS LST Workshop

In 2018, Dr. Kali A. Miller, at the time a graduate researcher at the University of Illinois-Urbana Champaign and involved in the laboratory safety team there, developed a workshop called “Developing Graduate Student Leadership Skills in Laboratory Safety.” Initially supported by the ACS Committee on Chemical Safety, ACS Division of Chemical Health and Safety, ACS Safety Programs, and the ACS Office of Graduate Education, the workshop was first held at the ACS National Meeting in Spring 2018. A paper was published describing this pilot workshop and initial survey analysis that can be found here. The workshop continued to be held at ACS National Meetings with different graduate researcher facilitators.

Jessica A. Martin began working closely with Dr. Miller to coordinate, and eventually take over general management of the workshop. They interviewed LST teams as a means of learning more about the movement and improving the offerings of the workshop. This resulted in a publication about LSTs that can be found here.

In early 2020, the COVID pandemic necessitated a rapid shift to a virtual world. Starting with the Spring 2020 ACS National Meeting, Jessica worked with that workshop’s facilitators to rapidly convert it to a virtual format.

As interest in the workshop continued to blossom in the academic community, it was recognized that the virtual version could reach a wider audience and be held independent of conferences. Jessica recruited known and active advocates in the ACS safety community to constitute an LST Mentorship Team to continue to improve the workshop. Graduate students involved in the LST leadership at their own institutions are also regularly recruited to serve as Facilitators and Moderators to share their experiences and improve their own professional skills. The workshop was renamed “Empowering Academic Researchers to Strengthen Safety Culture” in recognition of the changes made to it.

Workshop Date (year, month) | Role | Linked name | Institution | Local safety program
2021, 10 (Virtual) | Moderator | Abhijeet Patil | Michigan Technological University | MTU Chemistry Graduate Safety Committee
2021, 10 (Virtual) | Moderator | Melissa Alfonso | University of Memphis | University of Memphis Graduate Safety Team
2021, 10 (Virtual) | Moderator | Maggi Braasch-Turi | Colorado State University |
2021, 10 (Virtual) | Moderator | Rachel Wiley | The University of Memphis | University of Memphis Graduate Safety Team
2021, 10 (Virtual) | Moderator | Austin Moyle | Washington University in St. Louis | Chemistry Peer Review Safety Group (CPRSG)
2021, 10 (Virtual) | Leader | Jessica Martin | University of Connecticut | Joint Safety Team (JST)
2021, 10 (Virtual) | Facilitator | Monica Nyansa | Michigan Technological University | MTU Chemistry Graduate Safety Committee
2021, 10 (Virtual) | Facilitator | Calla McCulley | University of Texas at Austin | Chemistry Student Safety Organization (CSSO)
2021, 10 (Virtual) | Moderator | Hemanta Timsina | University of Arkansas | Engineering Safety Club
2021, 10 (Virtual) | Moderator | Farouq Busari | University of Ibadan (Nigeria) |
2021, 10 (Virtual) | Moderator | Adelina Oronova | Michigan Technological University | MTU Safety Team
2021, 06 (Virtual) | Moderator | Abhijeet Patil | Michigan Technological University | MTU Chemistry Graduate Safety Committee
2021, 06 (Virtual) | Moderator | Taysir Bader | University of Minnesota | Joint Safety Team
2021, 06 (Virtual) | Moderator | Monica Nyansa | Michigan Technological University | MTU Chemistry Graduate Safety Committee
2021, 06 (Virtual) | Moderator | Maggi Braasch-Turi | Colorado State University |
2021, 06 (Virtual) | Moderator | Rachel Wiley | The University of Memphis | University of Memphis Graduate Safety Team
2021, 06 (Virtual) | Moderator | Calla McCulley | University of Texas at Austin | Chemistry Student Safety Organization (CSSO)
2021, 06 (Virtual) | Moderator | Austin Moyle | Washington University in St. Louis | Chemistry Peer Review Safety Group (CPRSG)
2021, 06 (Virtual) | Leader | Jessica A. Martin | University of Connecticut | Joint Safety Team (JST)
2021, 06 (Virtual) | Facilitator | Jessica DeYoung | University of Iowa | Chemistry Safety and Responsibility Stewards
2021, 06 (Virtual) | Facilitator | Lindsey Applegate | University of Iowa | Chemistry Safety and Responsibility Stewards
2021, 02 (Virtual) | Moderator | Abhijeet Patil | Michigan Technological University | MTU Chemistry Graduate Safety Committee
2021, 02 (Virtual) | Moderator | Jessica DeYoung | University of Iowa | Chemistry Safety and Responsibility Stewards
2021, 02 (Virtual) | Moderator | Taysir Bader | University of Minnesota | Joint Safety Team
2021, 02 (Virtual) | Moderator | Monica Nyansa | Michigan Technological University | MTU Chemistry Graduate Safety Committee
2021, 02 (Virtual) | Moderator | Melissa Alfonso | University of Memphis | University of Memphis Graduate Safety Team
2021, 02 (Virtual) | Moderator | Lindsey Applegate | University of Iowa | Chemistry Safety and Responsibility Stewards
2021, 02 (Virtual) | Moderator | Cristian Aviles-Martin | University of Connecticut | Joint Safety Team (JST)
2021, 02 (Virtual) | Moderator | Calla McCulley | University of Texas at Austin | Chemistry Student Safety Organization (CSSO)
2021, 02 (Virtual) | Leader | Jessica Martin | University of Connecticut | Joint Safety Team (JST)
2021, 02 (Virtual) | Facilitator | Omar Leon Ruiz | University of California, Los Angeles | Joint Research Safety Initiative (JRSI)
2021, 02 (Virtual) | Facilitator | Dagen Hughes | University of Iowa | Chemistry Safety and Responsibility Stewards
2020, 11 (Virtual) | Moderator | Omar Leon Ruiz | University of California, Los Angeles | Joint Research Safety Initiative (JRSI)
2020, 11 (Virtual) | Moderator | Mary Beth Koza | University of North Carolina at Chapel Hill (retired) |
2020, 11 (Virtual) | Moderator | Marta Gmurczyk | ACS |
2020, 11 (Virtual) | Moderator | Kali A. Miller | ACS Publications |
2020, 11 (Virtual) | Moderator | David Finster | Wittenberg University (retired) |
2020, 11 (Virtual) | Moderator | Dagen Hughes | University of Iowa | Chemistry Safety and Responsibility Stewards
2020, 11 (Virtual) | Moderator | Cristian Aviles-Martin | University of Connecticut | Joint Safety Team (JST)
2020, 11 (Virtual) | Moderator | Ralph Stuart | Keene State College |
2020, 11 (Virtual) | Moderator | Lindsey Applegate | University of Iowa | Chemistry Safety and Responsibility Stewards
2020, 11 (Virtual) | Leader | Jessica A. Martin | University of Connecticut | Joint Safety Team (JST)
2020, 11 (Virtual) | Facilitator | Sarah Zinn | University of Chicago | Joint Research Safety Initiative (JRSI)
2020, 11 (Virtual) | Facilitator | Jessica DeYoung | University of Iowa | Chemistry Safety and Responsibility Stewards
2020, 08 (Virtual) | Moderator | Sarah Zinn | University of Chicago | Joint Research Safety Initiative (JRSI)
2020, 08 (Virtual) | Moderator | Samuella Sigmann | Appalachian State University |
2020, 08 (Virtual) | Moderator | Marta Gmurczyk | ACS | ACS Safety Web Site
2020, 08 (Virtual) | Moderator | Kali A. Miller | ACS Publications |
2020, 08 (Virtual) | Moderator | Jessica DeYoung | University of Iowa |
2020, 08 (Virtual) | Moderator | David Finster | Wittenberg University (retired) |
2020, 08 (Virtual) | Moderator | Ralph Stuart | Keene State College |
2020, 08 (Virtual) | Leader | Jessica A. Martin | University of Connecticut | Joint Safety Team (JST)
2020, 08 (Virtual) | Facilitator | Omar Leon Ruiz | University of California, Los Angeles | Joint Research Safety Initiative (JRSI)
2020, 08 (Virtual) | Facilitator | Cristian Aviles-Martin | University of Connecticut | Joint Safety Team (JST)
2020, 03 (Virtual) | Facilitator | Victor Beaumont | Yale University | Chemistry Joint Safety Team (JST)
2020, 03 (Virtual) | Facilitator | Veronica Hayes | University of Connecticut | Joint Safety Team (JST)
2020, 03 (Virtual) | Leader | Jessica A. Martin | University of Connecticut | Joint Safety Team (JST)
2019, 08 | Leader | Kali A. Miller | University of Illinois-Urbana Champaign | Chemistry Joint Safety Team (JST)
2019, 08 | Facilitator | Jessica A. Martin | University of Connecticut | Joint Safety Team (JST)
2019, 08 | Facilitator | Chandra Karki | University of South Dakota |
2019, 06 | Facilitator | Jessica A. Martin | University of Connecticut | Joint Safety Team (JST)
2019, 06 | Facilitator | David Finster | Wittenberg University |
2019, 03 | Facilitator | Kendra Denlinger | University of Cincinnati |
2019, 03 | Facilitator | Kali A. Miller | University of Illinois-Urbana Champaign | Chemistry Joint Safety Team (JST)
2018, 08 | Facilitator | Kali A. Miller | University of Illinois-Urbana Champaign | Chemistry Joint Safety Team (JST)
2018, 08 | Facilitator | Kaitlin Tyler | University of Illinois-Urbana Champaign | Chemistry Joint Safety Team (JST)
2018, 03 | Facilitator | Michael Vlysidis | University of Minnesota | Joint Safety Team (JST)
2018, 03 | Facilitator | Kali A. Miller | University of Illinois-Urbana Champaign | Chemistry Joint Safety Team (JST)

Description of Roles:

Leader: Recruits and trains Facilitators and Moderators; manages updates to workshop content; organizes logistics; communicates with participants before and after workshop as needed; manages technology during the workshop; leads pre-workshop practice and post-workshop review sessions for Facilitators with the LST Mentorship Team

Facilitator: Participates in updates to workshop content through pre-workshop practice and post-workshop review sessions with the Leader and LST Mentorship Team; delivers workshop content to live audience; facilitates small group and large group activities and discussion during the workshop

Moderator (Position created for virtual workshop): Serves in the practice audience for pre-workshop practice sessions and participates in post-workshop review session; can contribute to updates to workshop content; monitors Breakout Room activities during the Workshop

LST Mentorship Team: This team is a group of senior CHAS members and other safety professionals who support the workshop leader with content review and advice and who often serve as moderators for virtual workshops.

Mentorship Team

  • Kali A. Miller, ACS Publications
  • David Finster, Wittenberg University (retired)
  • Marta Gmurczyk, ACS
  • Mary Beth Koza, University of North Carolina at Chapel Hill (retired)
  • Ralph Stuart, Keene State College
  • Samuella Sigmann, Appalachian State University

Please check our “Workshop Current Schedule” page for information and registration for the latest workshop!

October 2021 CHAS Chat Pre-survey on SDSs

SDS CHAT survey


To help us prepare for the October 28, 2021 CHAS Chat, please respond to the questions below.

What are your top priorities for your SDS program?
What approach does your lab use in collecting and organizing SDSs?
Do you require that SDSs you collect for lab chemicals be specific to the suppliers that produced the chemical?
What is your experience with the reliability of SDSs?
How is GHS doing for you?
What sector do you work in?
What is your role in the lab? (check all that apply)
How much lab experience do you have?

SDS’s: What are They Good For?

CHAS Chat October 28, 2021

Chemical safety and information technology have both evolved significantly since OSHA established Material Safety Data Sheets as the basic regulatory unit of chemical safety information in the 1980s. This evolution presents both advantages and challenges for lab workers. This session will discuss both sides of this coin and best practices for using SDSs as a chemical safety information resource in the laboratory setting.

Join us from 1 to 3 PM on Thursday, October 28, to hear chemical safety experts discuss the uses and challenges of safety data sheets as a source of laboratory chemical safety information. Our presenters will be Dr. Dan Kuespert of Johns Hopkins University and Dr. Rob Toreki of ilpi.com.

If you are interested in attending this session, give us your e-mail address here, and we will provide a Zoom connection link the week of the CHAS Chat.

Thanks for your interest in Chemical Health and Safety!

The Art & State of Safety Journal Club: “Mental models in warnings message design: A review and two case studies”

Sept 22, 2021 Table Read

The full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753513001598?via%3Dihub

Two case studies in consumer risk perception and exposure assessment, focusing on mothballs and elemental mercury.

For this Table Read, we will only be reviewing the two case studies presented in the paper.

4. Two case studies: mothballs and mercury

Two case studies illustrate the importance of careful adaptation to context[a][b][c]. In the first case, an expert model of consumer-product use of paradichlorobenzene mothballs is enhanced with information from lay users’ mental models, so the model can become more behaviorally realistic (Riley et al., 2006a). In the second case, the mental models elicitation is enhanced with ethnographic methods including participant observation in order to gain critical information about cultural context of mercury use in Latino and Caribbean communities in the New York area (Newby et al., 2006; Riley et al., 2001a, 2001b, 2006b).

Both cases involve chemical consumer products used in residential settings. The chemicals considered here – paradichlorobenzene and mercury – have a wide variety of consumer and occupational uses that underscore the importance of considering context in order to attain a realistic sense of beliefs about the chemical, exposure behaviors, and resultant risk.

This analysis focuses on what these case studies can tell us about the process of risk communication design[d] in order to take account of the multidimensional aspects of risk perception as well as the overall cultural context of risk. Thus, risk communications may be tailored to the beliefs held by individuals in a specific setting, as well as to the specifics of their situation (physical, social, and cultural factors) that influence perceptions of and decision making about risk.

[e][f][g][h]

4.1. Mothballs

Mothballs are used in homes to control moth infestations in clothing or other textiles. Mothballs are solids (paradichlorobenzene or naphthalene) that sublimate (move from a solid state to a gaseous state) at room temperature. Many are in the shape of balls about 1 in. in diameter, but they are also available as larger cakes or as flakes. The products with the highest exposed surface area (flakes) volatilize more quickly. The product works by the vapor killing adult moths, breaking the insect life cycle.

The primary exposure pathway is inhalation of product vapors, but dermal contact and ingestion may also occur. Cases of ingestion have included children mistaking mothballs for candy and individuals with psychological disorders who compulsively eat household items (Avila et al., 2006; Bates, 2002). Acute exposure to paradichlorobenzene can cause skin, eye, or nasal tissue irritation; acute exposure to naphthalene can cause hemolytic anemia, as well as neurological effects. Chronic exposure to either compound causes liver damage and central nervous system effects. Additional long-term effects of naphthalene exposure include retinal damage and cataracts (USEPA, 2000). Both paradichlorobenzene and naphthalene are classified as possible human carcinogens (IARC Group II B) (IARC, 1999). Since this classification in 1987, however, a mechanism for cancer development has been identified for both naphthalene and paradichlorobenzene, in which the chemicals block enzymes that are key to the process of apoptosis, the natural die-off of cells. Without apoptosis, tumors may form as cell growth continues unchecked (Kokel et al., 2006).

Indoor air quality researchers have studied mothballs through modeling and experiment (e.g., Chang and Krebs, 1992; Sparks et al., 1991, 1996; Tichenor et al., 1991; Wallace, 1991). Research on this topic has focused on developing and validating models of fate and transport of paradichlorobenzene or naphthalene in indoor air. Unfortunately, the effects of consumer behavior on exposure were not considered[i][j]. Given the importance of consumer behavior in determining exposure, it is worth revisiting this work to incorporate realistic approximations of that behavior.

Understanding consumer decisions about purchasing, storage, and use is critical for arriving at realistic exposure estimates as well as effective risk management strategies and warnings content. Consumer decision-making is further based upon existing knowledge and understanding of exposure pathways, mental models of how risk arises (Morgan et al., 2001), and beliefs about the effectiveness of various risk-mitigation strategies. Riley et al. (2001a, 2001b) previously proposed a model of behaviorally realistic exposure assessment for chemical consumer products, in which exposure endpoints are modeled in order to estimate the relative effectiveness of different risk mitigation strategies, and by extension, to evaluate warnings (refer to Fig. 1). The goal is to develop warnings that provide readers with the information they need to manage the risks associated with a given product, including how hazards may arise, potential effects, and risk-mitigation strategies.

4.1.1. Methods

The idea behind behaviorally realistic exposure assessment is to consider both the behavioral and physical determinants of exposure in an integrated way (Riley et al., 2000). Thus, user interviews and/or observational studies are combined with modeling and/or experimental studies to quantitatively assess the relative importance of different risk mitigation strategies and to prioritize content for the design of warnings, based on what users already know about a product. Open-ended interviews elicit people’s beliefs about the product, how it works, how hazards arise, and how they may be prevented or mitigated. User-supplied information is used as input to the modeling or experimental design in order to reflect how people really interact with a given product. Modeling can be used to estimate user exposure or to understand the range of possible exposures that can result from different combinations of warning designs and reading strategies.

Riley et al. (2006a) recruited 22 adult volunteers[k][l][m][n][o][p][q][r] who had used mothballs from the business district in Northampton, Massachusetts. Interview questions probed five areas: motivation for product use and selection; detailed use data (location, time activity patterns, amount and frequency of use); mental models of how the product works and how hazards may arise; risk perceptions; and risk-mitigation strategies. Responses were analyzed using categorical coding (Weisberg et al., 1996). A consumer exposure model utilized user-supplied data to determine the concentration of paradichlorobenzene in a two-box model (a room or compartment in which moth products are used, and a larger living space).

4.1.2. Uses

Table 1 illustrates the diversity of behavior surrounding the use[s][t][u][v][w] of mothballs in the home. It is clear that many users behave differently around the product from what one might assume from reading directions or warnings on the package label.

65% of participants reported using mothballs to kill or repel moths, which is the product’s intended use. 35% reported other uses for the product, including as an air freshener and to repel rodents outdoors. Such uses imply different use behaviors related to the amount of product used and the location where it is applied. Effective use of paradichlorobenzene as an indoor insecticide requires use in an enclosed space, the more airtight the better. Ventilation is not recommended, and individuals should limit their exposure to the non-ventilated space. In contrast, use as a deodorizer disperses paradichlorobenzene throughout a space by design.

These different behaviors imply different resultant exposure levels. For use as an air freshener, the exposure might be higher due to using the product in the open in one’s living space. Exposures might also be lower, as in the reported outdoor use for controlling mammal pests.

A use not reported in this study, perhaps due to the small sample size, or perhaps due to the stigma associated with drug use, is the practice of huffing or sniffing – intentional inhalation in order to take advantage of the physiological effects of volatile chemicals (Weintraub et al., 2000). This use is worth mentioning due to its high potential for injury, even if this use is far less likely than other uses reported here.

The majority of users place mothballs outside of sealed containers in order to control moths, another use that is not recommended by experts or on package labeling[x][y][z][aa][ab][ac][ad][ae]. Even though the product is recommended for active infestations, many users report using the product preventively, increasing the frequency of use and resultant exposure above recommended or expected levels. Finally, the amount used is greater than indicated for a majority of the treatment scenarios reported. These variances from recommended use scenarios underscore the need for effective risk communication, and suggest priority areas for reducing risk.

These results indicate a wide range of residential uses with a variety of exposure patterns. In occupational settings, one might anticipate a similarly broad range of uses. In addition to industrial and commercial uses as mothballs (e.g., textile storage, dry cleaning) and air fresheners (e.g., taxi cabs, restaurants), paradichlorobenzene is used as an insecticide (ants, fruit borers) or fungicide (mold and mildew), as a reagent for manufacturing other chemical products, plastics and pharmaceuticals, and in dyeing (GRR Exports, 2006).

4.1.3. Exposures

Modeling of home uses illustrates the range of possible exposures[af] based on self-reported behavior, and compares a high and low case scenario from the self-reports to an “intended use” scenario that follows label instructions exactly.

Table 2 shows the inputs used for modeling and resultant exposures. The label employed for the expected use scenario advised that one box (10 oz, 65 mothballs) should be used for every 50 cubic feet (1.4 cubic meters) of tightly enclosed space. Thus, for a 2-cubic-meter closet, 90 mothballs were assumed for the intended use scenario. The low exposure scenario involved a participant self-report in which 10 mothballs were placed in a closed dresser drawer, and the high exposure scenario involved two boxes of mothballs reportedly placed in the corners of a 30-cubic-meter bedroom.
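As a rough illustration of how a two-box model of this kind turns reported use behavior into an exposure estimate, the sketch below integrates simple mass balances for a treated space and the surrounding living space. All parameter values (the emission rate per mothball, airflows, and volumes) are placeholder assumptions chosen for illustration; they are not the inputs or results reported by Riley et al. (2006a).

```python
# Illustrative two-box indoor air sketch (placeholder parameters, not the model or
# values from Riley et al., 2006a): box 1 is the treated space (e.g., a closet),
# box 2 is the surrounding living space.
V1, V2 = 2.0, 150.0          # box volumes, m^3 (closet, living space) - assumptions
Q12 = 1.0                    # airflow between closet and living space, m^3/h - assumption
Qout = 75.0                  # living-space exchange with outdoors, m^3/h (~0.5 ACH) - assumption
n_mothballs = 90
E = n_mothballs * 0.5e-3     # total emission rate, g/h (0.5 mg/h per mothball is a placeholder)

C1 = C2 = 0.0                # starting concentrations, g/m^3
dt = 0.01                    # time step, h
for step in range(int(7 * 24 / dt)):          # simulate one week
    dC1 = (E + Q12 * (C2 - C1)) / V1          # mass balance on the closet
    dC2 = (Q12 * (C1 - C2) - Qout * C2) / V2  # mass balance on the living space
    C1 += dC1 * dt
    C2 += dC2 * dt

print(f"closet: {C1 * 1e3:.2f} mg/m^3, living space: {C2 * 1e3:.3f} mg/m^3")
```

Even with placeholder numbers, the structure makes the qualitative point of the scenarios above: emissions confined to a small, tightly enclosed box reach the larger living space only through the limited inter-box airflow, whereas product placed directly in the living space bypasses that buffer.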

Results show that placing moth products in a tightly enclosed space significantly reduces the concentration in users’ living space.[ag][ah][ai][aj][ak][al][am][an] The high level of exposure resulting from the usage scenario with a large amount of mothballs placed directly in the living space coincided with reports from the user of a noticeable odor and adverse health effects that the user attributed to mothball use.

4.1.4. Risk perception

There was a wide range of beliefs about the function and hazards[ao][ap] of [aq][ar][as][at][au][av][aw] mothballs among participants, as well as a gap in knowledge between consumer and expert ideas of how the product works. Only 14% of the participants were able to correctly identify an active ingredient in mothballs, while 76% stated that they did not know the ingredients. Similarly, 68% could not correctly describe how moth products work, with 54% of all participants believing that moths are repelled by the unpleasant odor. Two-thirds of participants expressed health concerns related to using moth products[ax][ay]. 43% mentioned inhalation, 38% mentioned poisoning by ingestion, 21% mentioned cancer, and 19% mentioned dermal exposure. A few participants held beliefs that were completely divergent from expert models, for example a belief that mothballs “cause parasites” or “recrystallize in your lungs.”

A particular concern arises from the common belief that moths are repelled by the smell of mothballs. This may well mean that users would want to be able to smell the product to know it is working – when in fact this would be an indication that users themselves were being exposed and possibly using the product incorrectly. Improvements to mothball warnings might seek to address this misconception of how mothballs work, and emphasize the importance of closed containers, concentrating the product near the treated materials and away from people.

4.2. Mercury as a consumer product

Elemental mercury is used in numerous consumer products, where it is typically encapsulated, causing injury only when a product breaks. Examples include thermometers, thermostats, and items containing mercury switches such as irons or sneakers with flashing lights. The primary hazard arises from the fact that mercury volatilizes at room temperature. Because of its tendency to adsorb onto room surfaces, it has long residence times in buildings compared with volatile organic compounds. Inhaled mercury vapor is readily taken up by the body; in the short term it can cause acute effects on the lungs, ranging from cough and chest pain to pulmonary edema and pneumonitis in severe cases. Long-term exposure can cause neurological symptoms including tremors, polyneuropathy, and deterioration of cognitive function (ATSDR, 1999).

The second case study focuses on specific uses of elemental mercury as a consumer product among members of Latino and Caribbean communities in the United States. Mercury is sold as a consumer product in botánicas (herbal pharmacies and spiritual supply stores), for a range of uses that are characterized variously as folkloric, spiritual or religious in nature.

4.2.1. Methods

Newby et al. (2006) conducted participant observation and interviews with 22 practitioners and shop owners[az], seeking to characterize both practices that involved mercury use and perceptions of resulting risks. These practices were compared and contrasted with uses reported in the literature as generally attributable to Latino and Caribbean religious and cultural traditions in order to distinguish between uses that are part of the Santeria religion, and other uses that are part of other religious practice or secular in nature. Special attention was paid to the context of Santeria, especially insider–outsider dynamics created by its secrecy, grounded in its histories of suppression by dominant cultures. Because the label Latino is applied to a broad diversity of ethnicities, races, and nationalities, the authors sought to attend to these differences as they apply to beliefs and practices related to mercury.

Uses reported in the literature and reported by participants to Newby et al. (2006) and Riley et al. (2001a, 2001b) were modeled to estimate resulting exposures. The fate and transport of mercury in indoor air is difficult to characterize because of its tendency to adsorb onto surfaces and the influence of droplet-size distributions on overall volatilization rates (Riley et al., 2006b). Nevertheless, simple mass transfer and indoor air quality models can be employed to illustrate the relative importance of different behaviors in determining exposure levels.

4.2.2. Uses

Many uses are enclosed, such as placing mercury in an amulet, gourd, walnut, or cement figure (Johnson, 1999; Riley et al., 2001a, 2001b, 2006b; Zayas and Ozuah, 1996). Other uses are more likely to elevate levels of mercury in indoor air to hazardous levels, including sprinkling of mercury indoors or in cars for good luck or protection, or adding mercury to cleaning compounds or cosmetic products (Johnson, 1999; Zayas and Ozuah, 1996).

Some uses, particularly those attributable to Santeria, are occupational in nature. Santeros and babalaos (priests and high priests) described being paid to prepare certain items that use mercury (Newby et al., 2006). Similarly, botanica personnel described selling mercury as well as creating certain preparations with it (Newby et al., 2006; Riley et al., 2001a, 2001b). One case report described exposure from a santero spilling mercury (Forman et al., 2000). Some of this work occurs in the home, making it both occupational and residential.

Across the U.S. population, including in Latino and Caribbean populations, it is more common for individuals to be exposed to elemental mercury vapor through accidental exposures such as thermometer, thermostat, and other product breakage, or through spills of mercury found in schools and abandoned waste sites (Zeitz et al., 2002). The cultural and religious uses described above reflect key differences in use (including intentional vs. accidental exposure) that require attention in the design of risk communications.

4.2.3. Exposures

Riley et al. (2001a, 2001b) solved a single-chamber indoor air quality model analytically to estimate exposures based on scenarios derived from two interviews with mercury users. They similarly modeled scenarios for sprinkling activities reported elsewhere in the literature, and they additionally employed mass transfer modeling combined with indoor air quality modeling to estimate the exposures resulting from the contained uses described in interviews with practitioners (Newby et al., 2006).
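As an illustration of what such a single-chamber (well-mixed room) calculation looks like, here is a minimal sketch in Python. The governing mass balance is dC/dt = E/V - (Q/V)*C, where C is the room air concentration, E the emission rate, Q the ventilation rate, and V the room volume. All parameter values below are illustrative assumptions for demonstration only; they are not taken from Riley et al. or from Table 3.

import math

# Minimal sketch of a single-chamber (well-mixed box) indoor air model for
# mercury vapor. All numbers are assumed for illustration, not from the paper.
V = 30.0   # room volume, cubic meters (assumed)
Q = 15.0   # ventilation rate, cubic meters per hour (assumed, about 0.5 air changes per hour)
E = 50.0   # emission rate from exposed mercury, micrograms per hour (assumed)

def concentration(t_hours, C0=0.0):
    """Analytical solution of dC/dt = E/V - (Q/V)*C for a well-mixed room."""
    C_ss = E / Q    # steady-state concentration, micrograms per cubic meter
    k = Q / V       # air exchange rate, per hour
    return C_ss + (C0 - C_ss) * math.exp(-k * t_hours)

for t in (1, 8, 24):
    print("After %d hours: about %.1f micrograms per cubic meter" % (t, concentration(t)))

The sketch mirrors the qualitative point of the excerpt: contained or encapsulated uses keep the emission term near zero, while sprinkling mercury or otherwise increasing the exposed droplet surface area raises the emission rate and, with it, the predicted room concentration by orders of magnitude.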

Results presented in Table 3 show wide variation in predicted exposures resulting from different behavior patterns in different settings. Contained uses produce the lowest exposures. As long as the mercury remains encapsulated or submerged in other media, it poses little risk. By contrast, uses in open air can result in exposures orders of magnitude greater, depending on amounts and how the mercury is distributed, as droplet size and surface area are key determinants of exposure.

4.2.4. Risk perception

Newby et al. (2006) found that participants identified the risks of mercury use as primarily legal in nature.[ba][bb][bc][bd][be][bf] Concerns about getting caught by either police or health officials were strong[bg][bh][bi]. After these concerns, practitioners mentioned the risks of mercury use ‘‘backfiring’’ on a spiritual level, particularly if too much is used.[bj][bk][bl] There was some awareness of potential harmful health effects from mercury use[bm][bn], but the perceptions of mercury’s spiritual power and the perceived legal risks of possession and sale figured more prominently in users’ rationales for taking care in using it and clearly affected risk-mitigation strategies described (e.g., not discussing use or sales openly, giving people a bargain so they won’t tell authorities).

Newby et al. (2006) discuss at length the insider–outsider dynamics in the study and their influence on the strength of fears about the illegality of mercury. Because of taboos on sharing details of Santeria practice, the authors warn against providing certain details of practice in risk communications designed by outsiders, as doing so would undercut the credibility of the warning messages.

Mental models of risk perception are critically important in all cases of consumer mercury use, both intentional and unintentional. When a thermostat or thermometer breaks in a home, many users will use a vacuum to clean up the spill[bo][bp][bq][br], based on a mental model of mercury’s hazards that does not include a notion of mercury as volatile. A key gap in people’s knowledge of mercury relates to its volatility; most lay people do not realize that vacuuming mercury will greatly increase its indoor air concentration, causing a greater health hazard than simply leaving mercury on the floor (Schwartz et al., 1992; Zelman et al., 1991). Thus, many existing risk communications about mercury focus on accidental spills and how (or how not) to clean them up.[bs][bt]


[a]Interesting timing for me on this paper; we’re currently working on a scheme to communicate hazards to staff & faculty at a new facility.  We have an ethnically diverse community to consider, and a number of the spaces will host the general public for special events.  Lots of perspectives to account for…

[b]If you have info on this next week, it would be interesting to hear what challenges you have run into and what you have done to address them.

[c]I’d be game for that.  I’m just getting into the project and was starting to consider different risk perceptions among different audiences.  This paper has given me some food for thought

[d]This is different from the way I have used the term “risk communication” traditionally. Traditionally, risk communication is designed to help a variety of stakeholders work through scientific information to come to a shared decision about risk. See https://www.epa.gov/risk-communication for example. However, this paper’s approach sounds more like the public health approach used to collect more accurate information about a risk.

[e]I really like the concept of “behaviorally realistic exposure assessment”. Interestingly, EPA has taken over regulation of workplace chemicals from OSHA because OSHA was legally constrained from using realistic exposure assessment (specifically, the assumption that PPE may not be worn correctly all the time)

[f]Wow – that is crazy to hear that OSHA would be limited in that way. One would think actual use would be an incredibly important thing to consider.

[g]OSHA is expected to assume that all of its regulations will be followed as part of its risk assessment. EPA doesn’t believe that. This impacted EPA’s TSCA risk assessment of Methylene Chloride

[h]There is a press story about this at

https://www.cbsnews.com/video/family-of-man-who-died-after-methylene-chloride-exposure-call-epa-decision-step-in-the-right-direction/

[i]I’m wondering if the researchers are in academia or from the company. If from companies which supplied mothballs, I’m surprised that this was not one of the first things that they considered.

[j]That’s an interesting question. Mothballs are a product with a long history and they were well dispersed in the economy before regulatory concerns about them arose. So the vendors probably had an established product line before the regulations arose.

[k]Wondering about the age distribution of the study group- when I think of mothballs, I think of my grandparents who would be about 100 years old now.  Maybe younger people would handle the mothballs differently since they are likely to be unfamiliar with them.

[l]I’m also wondering about how they recruited these volunteers because that could introduce bias, for example only people who already know what they are might be interested

[m]Volunteer recruitment is an interesting thought…what avenue did they use to recruit persons who had this product? Currently wondering who still uses them since I don’t know anyone personally who talks about it…

[n]It sounds like they went into the shopping area outside Smith College and recruited people off the street. Northampton is a diverse social environment, but I suspect mothball users are from a particular segment of society

[o]How the recruitment happened seems like a key method that wasn’t discussed sufficiently here. After re-reading this, it seems they may have recruited people without knowing whether they had used mothballs or not.

[p]Interesting thought. When I was in my undergraduate studies, one of my professors characterized a substance as “smelling like mothballs,” and me and all of my peers were like “What? What do mothballs smell like…?” Curious as to whether product risk assessment is different between these generational groups.

[q]Did you and your undergraduate peers go grab a box and sniff them to grok the reference?

[r]I certainly did not! But I wonder how many people would have, if they were available at the time!

[s]Would anyone like to share equivalents they have seen in research labs? Researchers using a product in a way different from intended? Did it cause any safety issues? Were the researchers surprised to find that they were not using it as intended? Were the researchers wrong in their interpretations and safety assessment? If so, how?

[t]I presume you’re thinking of something with a specific vendor-defined use, as opposed to a reagent situation where, for example, a change in the acid used to nitric led to unforeseen consequences.

[u]I agree that this can apply to the use of chemicals in research labs. Human error is why we want to build in multiple controls. In terms of examples, using certain gloves or PPE for improper uses is the first thing that comes to mind.

[v]I have seen hardware store products repurposed in lab settings with unfortunate results, but I can’t recall specifics of these events off the top of my head. (One was a hubcap cleaning solution with HF in it used in the History Dept to restore granite architectural features.)

[w]I have seen antifreeze brought in for Chemistry labs and rock salt brought in for freezing point depression labs…not dangerous, but not what they were intended for.

[x]So is the take-away on this point and the ones that follow in the paragraph that another communication method is needed? Reading the manual before use is rare (in my experience); manuals are too wordy. Maybe pictographs or some sort of icon-based method of communication.

[y]This seems like the takeaway to me! Pictures, videos—anything that makes the barrier to content engagement as low as possible. Even making sure that it is more difficult to miss the information when trying to use the product would likely help (i.e., not having to look it up in a manual, not having to read long paragraphs, not having to keep another piece of paper around)

[z]In the complete article, they discuss three hurdles to risk understanding:

1. Cognitive heuristics

2. Information overload

3. Believability and self-efficacy

These all sound familiar from the research setting when discussing risks

[aa]Curious how many people actually read package labeling, and of those people how many take the labeling as best-practice and how many take it as suggested use. I’m also curious how an analogy to this behavior might be made in research settings. It seems to me that there would likely be a parallel.

[ab]I believe that the Consumer Product Safety Commission does research into this question

[ac]Other considerations: is the package labeling comprehensible for people (appropriate language)? If stuff is written really small, how many people actually take the time to read it? Would these sorts of instructions benefit more from pictures rather than words?

[ad]I was watching a satirical Better Call Saul “legal ethics” video yesterday where the instructor said “it doesn’t matter how small the writing is, you just have to have it there”. See https://www.dailymotion.com/video/x7s7223 for that specific “lesson”

[ae]I think we’d see a parallel in cleaning practices, especially if it’s a product that gets diluted different amounts for different tasks. Our undergraduate students for example think all soap is the same and use straight microwash if they see it out instead of diluting.

[af]Notably, even when putting them outside in a wide area, you can still smell them from a distance, which would make them a higher exposure than expected.  Pictures and larger writing on the boxes would definitely help, but general awareness may need to be shared another way.

[ag]Historical use was in drawers with sweaters, linens, etc (which is shown to be the “low exposure”)…were these products inadvertently found to be useful in other residential uses much later?

[ah]I wonder if the “other uses” were things consumers figured out and shared with their networks – but those uses would actually increase exposure.

[ai]It appears so!  Another issue may be the amount of the product used.  Using them outside rather than in a drawer may minimize the exposure some, but that would be relative to exactly how much of the product was used…

[aj]The comment about being able to smell it to “know it is working” is also interesting. It makes me think of how certain smells (lemon scent) are associated with “cleanliness” even if it has nothing to do with the cleanliness.

[ak]I’ve also heard people refer to the smell of bleach as the smell of clean – although if you can smell it, it means you are being exposed to it!

[al]This is making me second guess every cleaning product I use!

[am]It also makes me wonder if added scent has been used to discourage people from overusing a product.

[an]I think that is why some people tout vinegar as the only cleaner you will ever need!

[ao]What do you think would lead to this wide range?

[ap]If they are considering beliefs that were passed down through parents and grandparents this would also correlate with consumers not giving attention to the packaging because they grew up with a certain set of beliefs and knowledge and they have never thought to question it.

[aq]Is it strange to hear this given that the use and directions are explained right on the packaging?

[ar]I don’t think it is that strange. I think a lot of people don’t bother to read instructions or labels closely, especially for a product that they feel they are already familiar with (grew up with it being used)

[as]Ah, per my earlier comment regarding whether or not people read the packaging—I am not actually very surprised by this. I think there is somewhat of an implicit sense that products made easily available are fairly safe for use. That, coupled with a human tendency to acclimate to unknown situations after no obvious negative consequences and the sheer volume of text meant to protect corporations (re: Terms of Use, etc.), leads people to sort of ignore these things in their day-to-day.

[at]I agree it seems that people use products they saw used throughout their childhood…believed them to be effective and hence don’t read the packaging.  (Clearly going home to read the packaging myself…as I used them recently to repel skunks from my yard).

[au]Since the effects of exposure to mothball active ingredients are not acute in all but the most extreme cases (like ingestion), it is unlikely that any ill health effects would even be linked to mothballs

[av]I have wondered if a similar pattern happens with researchers at early stages. If the researcher is introduced to a reagent or a process by someone who doesn’t emphasize safety considerations, that early researcher thinks of it as relatively safe and then doesn’t do the risk assessment on their own.

[aw]Yes, exactly. Long-term consequences are much harder for us to grapple with than acute consequences, which may lead to overconfidence and overexposure

[ax]I wonder why, with so many having health concerns, only 12% used them on the correct “as needed” basis.

[ay]A very, very interesting question. I wonder if it has something to do with a sense that “we just don’t know” along with a need to find a solution to an acute problem. That is, maybe people are worried about whether or not it is safe in a broad, undefined, and somewhat intractable manner, but are also looking for a quick solution to a problem they are currently facing, and perhaps ignore a pestering doubt

[az]Again here, I’m wondering why more is not said about how they identified participants, because it is a small sample size and there is a possibility of bias

[ba]This is an important finding. Public risk perception and professional risk perception can be quite different. I don’t think regulators consider how they might contribute to this gap because each chemical is regulated in isolation from other risks and exposures.

[bb]It is also related to the idea of how researchers view EHS inspections. Do they see them as opportunities for how they can make their research work safer? Or do they merely see them as annoying exercises that might get them “in trouble” somehow?

[bc]I think that in both the Hg case and the research case, there is a power struggle expressed as a culture clash issue. Both the users of Hg for spiritual purposes and researchers are likely to feel misunderstood by mainstream society as represented by the external oversight process

[bd]I think this is *such* an important takeaway. The sense as to whether long documents (terms of use and other contracts, etc) and regulatory bodies (EHS etc) are meant to protect the *people/consumers* or whether they are meant to protect the *corporation* I think is a big deal in our society. Contracts, usage information, etc abound, but it’s often viewed (and used) as a means to protect from liability, not as a means to protect from harm. I think people pick up on that.

[be]I am in total agreement – recently I sat through a legal deposition for occupational exposure related mesothelioma, it was unsettling how each representative from each company pushed the blame off in every other possible direction, including the defendant. There are way more legal protections in place for companies than I could have ever imagined.

[bf]There is some discussion of trying to address this problem with software user agreements, but I haven’t heard of this concern on the chemical use front.

[bg]This is to say there is a disconnect in the reasoning behind the legal implications? Assuming most are not aware that the purpose of the regulations is to protect people?

[bh]I don’t know of any agency that would recognize use of Hg as a spiritual practice. Some Native Americans have found their spiritual practices outlawed because their use of a material is a different scenario from the risk scenario that the regulators base their rules on

[bi]I agree with your comment about a disconnect. Perhaps if they understood more about the reasons for the laws they would be more worried about their health rather than getting in trouble.

[bj]To a practitioner, is a spiritual “backfire” completely different from a health effect, or just a different explanation of the same outcome?

[bk]Good question

[bl]Good point – I thought about this too. I’d love to hear more about what the “spiritual backfire” actually looked like. Makes me think of the movie “The Exorcism of Emily Rose” where they showed her story from the perspective of someone who thinks she is possessed by demons versus someone who thinks she is mentally ill.

[bm]I am curious to find out how risk communication plays a role here, because it seems those using the mercury know about its potential health hazard.

[bn]Agree – It does say “some” awareness so I would be interested to see how bad they think it is for health vs reality. It looks like they are doing a risk analysis of sorts and are thinking the benefits outweigh the risks.

[bo]I’m not sure how to articulate it, but this is very different than the spiritual use of mercury.  Spiritual users can understand the danger of mercury exposure but feel the results will be worth it.  The person wielding a vacuum does not understand how Hg’s hazard is increased through volatilization.  I suspect a label on vacuum cleaners that said ‘NOT FOR MERCURY CLEANUP’ would be fairly effective.

[bp]Would vacuum companies see this as worth doing today? I don’t think I really encountered mercury until I was working in labs – it is much less prevalent today than it used to be (especially in homes), so I wonder if they would not see it as worth it by the numbers.

[bq]Once you list one warning like this, vacuum companies might need to list out all sorts of other hazards that the vacuum is not appropriate for cleanup

[br]Also, mercury is being phased out of homes, but you still see it around sometimes, in thermometers especially. Keep in mind this paper is from 2014.

[bs]I don’t understand this statement in the context of the paragraph. Which risk communication messages is she referring to? I know that institutional response to Hg spills has changed a lot over the last 30 years. There are hazmat emergency responses to them in schools and hospitals monthly

[bt]I think this vacuum example is just showing how there is a gap in the risk communications to the public (not practitioners), since they mainly focused on clean up rather than misuse. It would be nice if there was a reference or supporting info here. They may have looked at packaging from different mercury suppliers.

CHAS Workshops 2021

The Division of Chemical Safety presents several workshops as part of the American Chemical Society’s continuing education program on chemical safety issues. CHAS has two workshop tracks, aimed either at specific stakeholder groups or at chemistry professionals who need specific chemical safety education to round out their expertise in the lab or in business settings. The stakeholder workshops are taught by people from that group with safety experience (e.g., grad students teaching grad students), whereas the professional development workshops are taught by full-time Environmental Health and Safety professionals.

To register for any of these workshops, click on that workshop’s description below.

We can also help arrange presentations of these workshops in other venues. If you are interested in arranging any of these trainings for your company or local section meetings, contact us at membership@dchas.org

Also known as the Lab Safety Teams workshop, this session is taught by chemistry graduate students with experience implementing and maintaining laboratory safety programs at their home institutions. It will next be offered Sunday, October 17, 2021; you can register for it here.
Conducting risk assessments in the research lab requires special considerations. This workshop will explore using the RAMP paradigm to meet this need and will be offered this November. You can register for this workshop here.




The ACS Youtube channel hosts chemical safety related videos on a variety of topics and styles for specific audiences. They are all available for Creative Commons use in classes with attribution.
This extensive 15-hour course on chemical safety in the laboratory is designed for undergraduate STEM students and others who need to review the fundamentals of chemical safety. Register at https://institute.acs.org/courses/foundations-chemical-safety.html
ACS Essentials of Lab Safety for General Chemistry
Online; registration fee required.

If you have any questions about these workshops, contact us at membership@dchas.org or complete the form below.

Fall 2021 National Meeting Technical Presentations

2021 CHAS Awards Presentations

Safety in Lab Facilities Symposium

The Impact of Covid on EHS

General Papers

Safety Papers from Symposia in Other Divisions

Chemical Education (CHED)

Chemical Information (CINF)

ACS Webinar: Working Together to Design Safer Laboratories

Designing laboratories that allow for safe and efficient research requires input and collaboration among researchers, architects, engineers, and lab planners. Michael Labosky of MIT, Ellen Sweet of Cornell University, and Melinda Box of N.C. State University discussed the challenges of designing and operating labs from multiple perspectives, using concrete examples from the real world. This ACS Webinar was moderated by Environmental Safety Manager Ralph Stuart of Keene State College and was co-produced with the ACS Division of Chemical Safety and the ACS Committee on Chemical Safety. The webinar was recorded and is available to ACS members at http://www.acs.org/webinars. Information from the webinar is provided below. If you have any follow-up questions about this webinar, let us know at membership@dchas.org.

References cited during the webinar include:

ACS Chemical Health & Safety special issue articles currently available:

 Safe Lab Design: A Call for Papers https://pubs.acs.org/doi/10.1021/acs.chas.1c00034

Code Considerations for the Design of Laboratories Which Will Also House Pilot Plants
https://pubs.acs.org/doi/10.1021/acs.chas.0c00053

Planning and Building Laboratories: A Collaboration among Many
https://pubs.acs.org/doi/10.1021/acs.chas.0c00081

Controls for University Fabrication Laboratories—Best Practices for Health and Safety
https://pubs.acs.org/doi/10.1021/acs.chas.0c00093

Design and Practice of an Organic Analysis Laboratory to Enhance Laboratory Safety  
https://pubs.acs.org/doi/10.1021/acs.chas.1c00008

Comments from the audience:

  • Working with undergraduate students is really a challenging task for us. The information shared through the webinars is really helpful and beneficial for us.
  • We are planning a new lab, it was just great!
  • Very good overview of the challenges associated with the design and maintenance of acceptable air handling for laboratories. The speakers were exceptionally knowledgeable, and this presentation was very useful.
  • This webinar is a new window of safety and security in labs
  • This was definitely for the inexperienced in lab safety design
  • This was an excellent and very relevant webinar. Re-consulting the notes and, more importantly, the recorded version will be useful, as a significant amount of relevant information was given verbally and could barely be noted down (lack of time!). Maybe this can be corrected by adding more point-form keywords and statements on the slides, which would help in following the talks.
  • This was a great learning experience, I work indirectly with the labs almost every day. Our ventilation systems are top tier but it’s great to understand some of the design aspects and procedural steps to take in order to create an effective and comprehensive system. I may not use this information daily but it’s a great refresher.
  • Someone in the chat had a great suggestion for chemical inventory.
  • Showed me I am on the right track and pointed out some key things that I can further look into to make my lab safer
  • It was very informative and a very good overview.
  • It was beneficial to hear from peer institutions, especially with respect to ventilation. During the Q&A, the questions pertaining to core safety topics for the various levels of the chemistry curriculum were also interesting.
  • I was provided with a great deal of additional resources to consult as we begin planning a revamping of our existing high school chemistry laboratory.
  • I hope to put in practice the knowledge acquired in laboratory design for safety and sustainability
  • I have benefited immensely from the little I was able to grab
  • I am working in a lab that has no such facilities and most of the time we ignored it as it was not in our hands. But here in this webinar, I have learnt many safety measures. I think this makes a difference in the safety measures of our lab.
  • I am planning to start an electroplating setup for my research work, so it has definitely benefited me.
  • Excellent information from qualified professionals w/ real world experience and helpful insight.
  • The webinar serves as support for me in giving recommendations on the construction of the CDMB laboratory that is currently under way in Bucaramanga, Colombia. I am the head of that laboratory and must be prepared to offer opinions and contribute to decision-making for the laboratory.
  • As EHS professional it is refreshing to see that lab users get more educated and aware of the lab ventilation issues and challenges
  • As an EHS professional, it primarily reinforced information that I already knew. However, the presenters offered good tips or ideas as well.

ACS Webinar: Changing the Culture of Chemistry – Safety in the Lab

Speakers at the webinar included:

  • Mary Beth Mulcahy, Manager in the Global Chemical and Biological Security group at Sandia National Laboratories, Editor-in-Chief of ACS Chemical Health & Safety
  • Michael B. Blayney, Executive Director, Research Safety at Northwestern University
  • Monica Mame Soma Nyansa, Ph.D. Student, Michigan Technological University
  • Kali Miller, Managing Editor, ACS Publications

The session closed with a question-and-answer session moderated by Kali Miller, an ACS Publications Managing Editor, where all three panelists were able to share more insights and advice.