Tag Archives: journal club

Psychological Evaluation of Virtual Reality Applied to Safety Training

04/14/22 Table Read for The Art & State of Safety Journal Club

Excerpts and comments from “A Transferable Psychological Evaluation of Virtual Reality Applied to Safety Training in Chemical Manufacturing”

Published as part of the ACS Chemical Health & Safety joint virtual special issue “Process Safety from Bench to Pilot to Plant” in collaboration with Organic Process Research & Development and Journal of Loss Prevention in the Process Industries.

Matthieu Poyade, Claire Eaglesham,§ Jordan Trench,§ and Marc Reid*

The full paper can be found here: https://pubs.acs.org/doi/abs/10.1021/acs.chas.0c00105

1. Introduction

Safety in Chemical Manufacturing

Recent high-profile accidents—on both research and manufacturing scales—have provided strong drivers for culture change and training improvements. While our focus here is on process-scale chemical manufacturing,[a][b][c][d][e][f][g][h][i][j] similarly severe safety challenges exist on the laboratory scale; such dangers have been extensively reviewed recently. Considering the emerging digitalization trends and perennial safety challenges in the chemical sector, we envisaged interactive and immersive virtual reality (VR) as an opportune technology for developing safety training and accident readiness for those working in dangerous chemical environments.

Virtual Reality

VR enables interactive and immersive real-time task simulations across a growing wealth of areas. In higher education, prelab training in[k][l][m] VR has the potential to address common teaching challenges by giving students multiple attempts to complete core protocols virtually in advance of experimental work, creating the time and space to practice outside of the physical laboratory.

Safety Educational Challenges

Safety education and research have evolved with technology, moving from videos on cassettes to online formats and simulations. However, the area is still a challenge, and very recent work has demonstrated that there must be an active link between pre-laboratory work and laboratory work in order for the advance work to have impact.

Study Aims

The primary question for this study can be framed as follows: When evaluated on a controlled basis, how do two distinct training methods[n][o][p][q][r][s], (1) VR training and (2) traditional slide training[t][u][v] (displayed as a video to ensure consistent delivery of the training), compare for the same safety-critical task?

We describe herein the digital recreation of a hazardous facility using VR to provide immersive and proactive safety training. We use this case to deliver a thorough statistical assessment of the psychological principles of our VR safety training platform versus the traditional non-immersive training (the latter still being the de facto standard for such live industrial settings).

Figure 3. Summarized workflow for safety training case identification and comparative assessment of PowerPoint video versus VR.

2. Methods

After completing their training, participants were required to fill in standardized questionnaires that aimed to formally assess five measures of their training experiences.

1. Task-Specific Learning Effect

Participants’ post-training knowledge of the ammonia offload task was assessed in an exam-style test composed of six task-specific open questions.

2. Perception of Learning Confidence

How well participants perform on a training exam and how they feel about the overall learning experience are not the same thing. Participant experiences were assessed through 8 bespoke statements, which were specifically designed for the assessment of both training conditions.

3. Sense of Perceived Presence

“Presence” can be defined as the depth of a user’s imagined sensation of “being there” inside the training media they are interacting with.

4. Usability

From the field of human–computer interaction, the System Usability Scale (SUS) has become the industry standard for the assessment of system performance and fitness for the intended purpose… A user indicates their level of agreement with each statement on a Likert scale, resulting in a score out of 100 that can be converted to a grade A–F. In our study, the SUS was used to evaluate the subjective usability of our VR training system.
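The standard SUS scoring rule (odd-numbered items score the response minus 1, even-numbered items score 5 minus the response, and the raw sum is scaled by 2.5 to give 0–100) can be sketched as follows. This is a generic illustration of the published scale, not the authors’ analysis code:

```python
# Minimal sketch of standard SUS scoring; not the study's actual analysis code.
# Input: ten 1-5 Likert responses, in the standard SUS item order.

def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:       # items 1, 3, 5, 7, 9 are positively worded
            total += r - 1
        else:                # items 2, 4, 6, 8, 10 are negatively worded
            total += 5 - r
    return total * 2.5       # scale the 0-40 raw sum to 0-100

# Example: full agreement with positive items, full disagreement with negative ones
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # → 100.0
```

A score near 80, as reported for this platform, sits well above the commonly cited average SUS score of 68.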

[w][x][y]

5. Sentiment Analysis

Transcripts of participant feedback—from both the VR and non-VR safety training groups—were used with the Linguistic Inquiry and Word Count (LIWC, pronounced “Luke”) program. Therein, the unedited and word-ordered text structure (the corpus) was analyzed against the LIWC default dictionary, outputting a percentage of words fitting psychological descriptors. Most importantly for this study, the percentage of words labeled with positive or negative affect (or emotion) were captured to enable quantifiable comparison between the VR and non-VR feedback transcripts.
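The dictionary-matching idea behind LIWC can be illustrated with a toy sketch. LIWC’s validated dictionaries are proprietary, so the word lists below are hypothetical stand-ins, and this is only an illustration of the counting principle, not the LIWC program itself:

```python
# Toy illustration of dictionary-based affect counting in the spirit of LIWC.
# The POSITIVE and NEGATIVE word lists are invented stand-ins for LIWC's
# proprietary, validated dictionaries.
import re

POSITIVE = {"love", "nice", "sweet", "good", "great", "intuitive"}
NEGATIVE = {"hurt", "nasty", "ugly", "bad", "confusing"}

def affect_percentages(corpus):
    """Return (% positive words, % negative words) for a feedback transcript."""
    words = re.findall(r"[a-z']+", corpus.lower())
    if not words:
        return 0.0, 0.0
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 100 * pos / len(words), 100 * neg / len(words)

pos_pct, neg_pct = affect_percentages(
    "The VR training was great and intuitive, not confusing at all."
)
```

Applied to each group’s full feedback corpus, percentages of this kind are what Table 1 compares between the VR and non-VR transcripts.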

3. Results

Safety Training Evaluation

Having created a bespoke VR safety training platform for the GSK ammonia offloading task, the value of this modern training approach could be formally assessed versus GSK’s existing training protocols. Crucial to this assessment was the bringing together of experimental methods which focus on psychological principles that are not yet commonly practiced in chemical health and safety training assessment (Figure 3). All results presented below are summarized in Figure 8 and Table 1.

Figure 8. Summary of the psychological assessment of VR versus non-VR (video slide-based) safety training. (a) Task-specific learning effect. (b) Perception of learning confidence. (c) Assessment of training presence. (d) VR system usability score. In parts b and c, * and ** represent statistically significant results with p < 0.05 and p < 0.001, respectively.

1. Task-Specific Learning Effect (Figure 8a)

Task-specific learning for the ammonia offload task was assessed using a questionnaire[z][aa][ab][ac] built upon official GSK training materials and marking schemes. Overall, test scores from the Control group and the VR group showed no statistical difference between groups. However, there was a tighter distribution[ad] around the mean score for the VR group versus the Control group.
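The pattern of similar means but a tighter spread for the VR group can be illustrated with invented numbers (not the paper’s data), using only the Python standard library:

```python
# Hypothetical score sets illustrating the reported pattern: similar means,
# tighter spread for the VR group. These numbers are invented, not the study's.
from statistics import mean, stdev

control_scores = [55, 60, 70, 75, 80, 90]   # wider spread around the mean
vr_scores      = [68, 70, 72, 73, 75, 77]   # tighter spread, similar mean

print(f"Control: mean={mean(control_scores):.1f}, sd={stdev(control_scores):.1f}")
print(f"VR:      mean={mean(vr_scores):.1f}, sd={stdev(vr_scores):.1f}")
```

In a real analysis, the equivalence of means would be checked with a significance test (e.g., a two-sample t-test), while the spread comparison speaks to the consistency of the training experience.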

2. Perception of Learning Confidence (Figure 8b)

Participants’ perceived confidence of having gained new knowledge was assessed using a questionnaire composed of 8 statements, probing multiple aspects of the learning experience… Within a 95% confidence limit, the VR training method was perceived by participants to be significantly more fit for training purpose than video slides. VR also gave participants confidence that they could next perform the safety task alone[ae][af][ag]. Moreover, participants rated VR as having more potential than traditional slides for helping train in other complex tasks and to improve decision making skills (Figure 8b). Overall, participants from the VR group felt more confident and prepared for on-site training than those from the Control group.[ah]

3. Sense of Perceived Presence (Figure 8c)

The Sense of Presence questionnaire was used to gauge participants’ overall feeling of training involvement across four key dimensions. Results show that participants from the VR group reported experiencing a higher sense of presence than those from the Control group. On the fourth dimension, negative effects, participants from the Control group reported experiencing more negative effects than those from the VR group, but the result was not statistically different (Figure 8c). [ai]

4. Usability of the VR Training Platform (Figure 8d)

The System Usability Scale (SUS) was used to assess the effectiveness, intuitiveness, and satisfaction with which participants were able to achieve the task objectives within the VR environment. The average SUS score recorded was 79.559 (∼80, or grade A−), which placed our VR training platform on the edge of the top 10% of SUS scores (see Figure 5 for context). The SUS result indicated an overall excellent experience for participants in the VR group.

[Participants also] disagreed with any notion that the VR experience was too long (1.6 ± 0.7) and did not think it was too short (2.5 ± 1.1). Participants agreed that the simulation was stable and smooth (3.9 ± 1.2) and disagreed that it was in any way jaggy (2.3 ± 0.8). Hand-based interactions with the VR environment were agreed to be relatively intuitive (3.8 ± 1.3), and the head-mounted display was found to provide agreeable comfort for the duration of the training (4.0 ± 0.9).

5. Sentiment Analysis of Participant Feedback (Table 1)

In the final part of our study, we aimed to corroborate the formal statistical analysis against a quantitative analysis of open participant feedback. Using the text-based transcripts from both the Control and VR group participant feedback, the Linguistic Inquiry and Word Count (LIWC) tool provided further insight based on the emotional sentiment hidden in the plain text. VR participants were found to use more positively emotive words (4.2% of the VR training feedback corpus) versus the Control group (2.1% of the video training feedback corpus). More broadly, the VR group displayed a more positive emotive tone and used fewer negatively emotive words than the Control group.

Table 1. LIWC-Enabled Sentiment Analysis of Participant Feedback Transcripts

LIWC variable     | brief description                                               | VR group | non-VR group
------------------|-----------------------------------------------------------------|----------|-------------
word count        | no. of words in the transcript                                  | 1493     | 984
emotional tone    | difference between positive and negative words (<50 = negative) | 83.1     | 24.2
positive emotion  | % positive words (e.g., love, nice, sweet)                      | 4.2%     | 2.1%
negative emotion  | % negative words (e.g., hurt, nasty, ugly)                      | 1.1%     | 2.2%

4. Discussion

Overall, using our transferable assessment workflow, the statistical survey analysis showed that task-specific learning was equivalent[aj][ak][al][am][an] for the VR and non-VR groups. This suggests that VR training in this specific context is not detrimental to learning and appears to be as effective as the traditional training modality but, crucially, with improved user investment in the training experience. However, the difference in distribution between the two training modalities suggests that the VR training provided a more consistent experience across participants than watching video slides, though more evaluation would be required to verify this.

In addition, perceived learning confidence and sense of perceived presence were both reported to be significantly better for the VR group than for the non-VR group. The reported differences in perceived learning confidence between participants from both groups suggest that those from the VR group, despite having acquired a similar amount of knowledge, felt more assured about the applicability of that knowledge. These findings thus suggest that the VR training provided a more engaging and psychologically involving modality, able to increase participants’ confidence in their own learning. [ao][ap][aq]Further research will also aim to explore the applicability and validation of the perceived learning confidence questionnaire introduced in this investigation.

Additionally, VR system usability was quantifiably excellent, according to the SUS score and feedback text sentiment analysis.

Although our experimental data demonstrate the value of the VR modality for health and safety training in chemical manufacturing settings, the sampling, and more particularly the variation in digital literacy[ar] among participants, may be a limitation to the study. Therefore, future research should explore the training validity of the proposed approach involving a homogeneously digitally literate cohort of participants to more rigorously measure knowledge development between experimental conditions.

4.1. Implications for Chemical Health and Safety Training

By requiring learners to complete core protocols virtually in advance of real work, VR pretask training has the potential to address issues of complex learning, knowledge retention[as][at][au], training turnover times, and safety culture enhancements. Researchers in the Chemical and Petrochemical Sciences operate across an expansive range of sites, from small laboratories to pilot plants and refineries. Therefore, beyond valuable safety simulations and training exercises, outputs from this work are envisaged to breed applications where remote virtual or augmented assistance can alleviate the significant risks to staff on large-scale manufacturing sites.

4.2. Optimizing Resource-Intensive Laboratory Spaces

As a space category in buildings, chemistry laboratories are significantly more resource-intensive than office or storage spaces. The ability to deliver virtual chemical safety training, as demonstrated herein, could serve toward the consolidation and recategorization of laboratory space, minimizing the utility and space expenditure that threatens sustainability.[av][aw][ax][ay][az][ba][bb] By developing new chemistry VR laboratories, the high utility bills[bc][bd][be][bf] associated with running physical chemistry laboratories could potentially be significantly reduced.

4.3. Bridging Chemistry and Psychology

By bringing together psychological and computational assessments of safety training, the workflow applied herein could serve as a blueprint for future developments in this emerging multidisciplinary research domain. Indeed, the need to bring together chemical and psychological skill sets was highlighted in the aforementioned safety review by Trant and Menard.

5. Conclusions

Toward a higher standard of safety training and culture, we have described the end-to-end development of a VR safety training platform deployed in a dangerous chemical manufacturing environment. Using a specific process chemical case study, we have introduced a transferable workflow for the psychological assessment of an advanced training tool versus traditional slide-based safety training. This same workflow could conceivably be applied to training developments beyond safety.

Comparing our VR safety training versus GSK’s established training protocols, we found no statistical difference in the task-specific learning[bg] achieved in VR versus traditional slide-based training. However, statistical differences, in favor of VR, were found for participants’ positive perception of learning confidence and in their training presence (or involvement) in what was being taught. In sum, VR training was shown to help participants invest more in their safety training than a more traditional training format[bh][bi][bj][bk][bl].

Specific to the VR platform itself, the standard System Usability Scale (SUS) found that our development ranked as “A–” or 80%, placing it toward an “excellent” rating and well within the level of acceptance to deliver competent training.

Our ongoing research in this space is now extending into related chemical safety application domains.

[a]I would think the expense of VR could be seen as more “worth it” at this scale rather than at the lab scale given how much bigger and scarier emergencies can be (and how you really can’t “recreate” such an emergency in real life without some serious problems).

[b]Additionally, I suspect that in manufacturing there is more incentive to train up workers outside of the lecture/textbook approach. Many people are hands on learners and tend to move into the trades for that reason.

[c]I was also just about to make a comment that the budget for training and purchasing and upkeep for VR equipment is probably more negligible in those environments compared to smaller lab groups

[d]Jessica…You can create very realistic large scale simulations.  For example, I have simulated 50-100 barrel oil spills on water, in rivers, ponds and lakes, with really good results.

[e]Oh – this is a good point. What is not taken into account here is the comfort level people have with different types of learning. It would be interesting to know if Ph.D. level scientists and those who moved into this work through apprenticeship and/or just a BA would’ve felt differently about these training experiences.

[f]Neal – I wasn’t questioning that. I was saying that those things are difficult (or impossible) to recreate in real life – which is why being able to do a simulation would be more attractive for process scale than for lab scale.

[g]The skill level of the participants is not known. A pilot plant team spans very skilled technicians to PhD-level engineers and other scientists. I do not buy into Ralph’s observation.

[h]Jessica, I disagree.  Simulated or hands-on is really valuable for learning skills that require both conceptual understanding and muscle memory tasks.

[i]I’m not disagreeing. What I am saying is that if you want to teach someone how to clean up a spill, it is a lot easier to spill 50 mL of something in reality and have them practice cleaning it up, than it is to spill 5000 L of something and ask them to practice cleaning it up. Ergo, simulation is going to automatically be more attractive to those who have to deal with much larger quantities.

[j]And the question wasn’t about skill level. It was about comfort with different sorts of learning. You can be incredibly highly skilled and still prefer to learn something hands-on – or prefer for someone to give you a book to read. The educational levels were somewhat being used as proxies for this (i.e. to get a PhD, you better like learning through reading!).

[k]Videos

The following YouTube links provide representative examples of:

i. The ammonia offload training introduction video; https://youtu.be/30SbytSHbrU

ii. The VR training walkthrough; https://youtu.be/DlXu0nTMCPQ

iii. The GSK ammonia tank farm fly-through (i.e. the digital twin used in the VR training);

iv. The video slide training video; https://youtu.be/TZxJDJXVPgM

[l]Awesome – thank you for adding this.

[m]Very helpful. Thank you

[n]I’d be curious to see how a case 3 of video lecture then VR training compares, because this would have a focused educational component then focused skill and habit development component

[o]I’d be interested to see how in person training would compare to these.

[p]This would also open up a can of worms. Is it in-person interactive learning? Is it in-person hands-on learning? Or is it sitting in a classroom watching someone give a PowerPoint in-person learning?

[q]I was imagining hands-on learning or the reality version of the VR training they did, so they could inspect how immersive the training is compared to the real thing. Comparison to an interactive training could also have been interesting.

[r]I was about to add that I feel like comparison to interactive in-person training would’ve been good to see. I tend to think of VR as same as an in-person training but just digital.

[s]That’s why I think it could be interesting. They could see if there is in fact a difference between VR and hands-on training. Then, if there was none, you would have an argument for cost savings in space and personnel.

[t]Comparing these two very different methods is problematic. If you are trying to assess the added immersive value of VR then it needed to be compared to another ‘active learning’ method such as computer simulation.

[u]Wouldn’t VR training essentially be a form of computer simulation? I was thinking that the things being compared were 2 trainings that the employees could do “on their own”. At the moment, the training done on their own is typically some sort of recorded slideshow. So they are comparing to something more interactive that is also something that the employee can do “on their own.”

[v]A good comparison could have been VR with the headset and controllers in each hand compared to a keyboard and mouse simulation where you can select certain options. More like Oregon Trail.

[w]I’ve never seen this graphic before, but I love it! Excel is powerful but user-hostile, particularly if you try to share your work with someone else.

Of course, Google and Amazon are cash cows for their platforms, so they have an incentive to optimize System Usability (partially to camouflage their commercial interest in what they are selling).

[x]I found this comparison odd. It is claiming to measure a computer system’s fitness for purpose. Amazon’s purpose for the user is to get people to buy stuff. While it may be complex behind the scenes, it has a very simple and pretty singular purpose. Excel’s purpose for the user is to be able to do loads of different and varied complex tasks. They are really different animals.

[y]Yes, I agree that they are different animals. However, understanding the limits of the two applications requires more IT education than most people get. (Faculty routinely comment that “this generation of students doesn’t know how to use computers”.) As a result, Excel is commonly used for tasks that it is not appropriate for. But you’re correct that Amazon’s and Google’s missions are much simpler than those Excel is used for.

[z]I’d be very interested to know what would have happened if the learners were asked to perform the offload task.

[aa]Yes! I just saw your comment below. I wonder if they are seeing no difference here because they are testing on a different platform than the one they trained on. It would be interesting to compare written and in-practice results for each group. Maybe the VR group would be worse at the written test but better at physically doing the tasks.

[ab]This could get at my concerns about the Dunning-Kruger effect that I mentioned below as well. Just because someone feels more confident that they can do something doesn’t mean that they are correct about that. It definitely would’ve been helpful to actually have the employees perform the task and see how the two groups compared. Especially since the purpose of training isn’t to pass a test – it is to actually be able to DO the task!

[ac]“Especially since the purpose of training isn’t to pass a test – it is to actually be able to DO the task!” – this

[ad]If I’m understanding the plot correctly, it looks like there is a higher skew in the positive direction for the control group, which is interesting. I.e., lower but also higher scores. The VR training seems to have an evening-out effect, which makes the results of the training more predictable.

[ae]I’m slightly alarmed that VR gives people the confidence to perform the task alone when there was no statistical difference in task-specific learning scores versus the control group. VR seems to give a false sense of confidence.

[af]I had the same reaction to reading this. I don’t believe a false sense of confidence in performing a task alone to be a good thing. Confidence in your understanding is great, but overconfidence in physically performing something can definitely lead to more accidents.

[ag]A comment I’ve seen is that the VR trainees may perform better in doing the action than the control but they only gauged their knowledge in an exam style, not in actually performing the task they are trained to do. But regardless, a false confidence would not be good.

[ah]Wonder if there is concern that this creates a false confidence given the exam scores.

[ai]Wow these are big differences. There is basically no overlap in the error bars except the negative effect dimension.

[aj]If the task-specific learning was equivalent, but one method caused people to express greater confidence than the other, is this necessarily a good thing? Taken to an extreme perhaps, wouldn’t we just be triggering the Dunning-Kruger effect?

[ak]I’d be interested in the value of VR option as refresher training, similar to the way that flight simulators are used for pilots. Sullenberger attributed surviving the bird strike at LaGuardia to lots of simulator time allowing him to react functionally rather than emotionally in the 30 seconds after birds were hit

[al]I had the same comment above. I was wondering if people should be worried about false confidence. But the problem with the assessment was on paper, they may be better at physically doing the tasks or responding in real time, that was not tested.

[am]Exactly, Kali!

[an]From Ralph’s comment, I think there is a lot of value in VR for training for and simulating emergency situations without any of the risks.

[ao]Again, this triggers the question: should they be more confident in this learning?

[ap]I don’t think the available data let us distinguish between an overconfident test group and an under-confident control group (or both).

[aq]I am curious what the standard in the field is for determining at what point learners are overconfident. For example, are these good scores? Is there a correlation between higher test scores and confidence, or an inverse one, implying false confidence?

[ar]I am curious as well how much training there needed to be to explain how the VR setup worked. That is a huge factor to how easy people will perceive the VR setup to be if someone walks them through it slowly, versus walking up to a training station and being expected to know how to use it.

[as]I suspect that if people are more engaged in the VR training, they would have better retention of the training over time, which would be interesting to explore. If so, that would be a great argument for VR.

[at]Right and if places don’t have the capacity to offer hands on training, an alternative to online-only is VR. Or, in high hazard work where it’s not advisable to train under the circumstances.

[au]I agree that there is a lot of merit for VR in high-hazard or emergency response simulation and training.

[av]I’m personally more interested in the potential to use augmented reality in an empty lab space to train researchers to work with a variety of hazards/operations.

[aw]Right, this whole time I wasn’t thinking about replacing labs I was thinking about adding a desk space for training. I am still curious about the logistics though, in the era of COVID if people will really be comfortable sharing equipment and how it will be maintained, because that all costs money as well.

[ax]This really would open up some interesting arguments. Would learning how to do everything for 4 years in college by VR actually translate to an individual being able to walk into a real lab and work confidently?

[ay]During COVID, some portion of the labs could be done virtually – like moving things, etc. From a personal standpoint, I’d say no, and like we talked about here, it would give a false sense of confidence which could be detrimental. It’s the same way I feel about “virtual fire extinguisher training.” I don’t think it provides any of the necessary exposure.

[az]Amanda, totally agree. The virtual fire extinguisher training does not provide the feel (heat), smell (fire & agent), or sound of a live fire exercise.

[ba]Oh wait – virtual training and “virtual reality training” are pretty different concepts. I agree that virtual training would never be able to substitute for hands-on experience. However, what VR training has been driving at for years is to try to create such a realistic environment within which to perform the training that it really does “feel like you are there.” I’m not sure how close we are to that. In my experiences with VR, I haven’t seen anything THAT good. But I likely haven’t been exposed to the real cutting edge.

[bb]Jessica, I wouldn’t recommend that AR/VR ever be used as the only training provided,  but I suspect that it could shorten someone’s learning curve.

Amanda and Neal, having done both, I think that both have their advantages. The heat of the fire and kickback from the extinguisher aren’t well-replicated, but I actually think that the virtual extinguisher can do a better job of simulating the difficulty. When I’ve done the hands-on training, you’d succeed in about 1-2 seconds as long as you pointed the extinguisher in the general vicinity of the fire.

[bc]Utility bills for lab spaces are higher than average, but a small fraction of the total costs of labs. At Cornell, energy costs were about 5% of the cost of operating the building when you took the salaries of the people in the building into account. This explains why utility costs are not as compelling to academic administrators as they might seem. However, if there are labor cost savings associated with VR…

[bd]I’d love to see a breakdown of the fixed cost of purchasing the VR equipment, and the incremental cost of developing each new module.

[be]Ralph, that is so interesting to see the numbers I had no idea it was that small of an amount. I suppose the argument might need to be wasted energy/ environmental considerations then, rather than cost savings.

[bf]Yes, that is why I had a job at Cornell – a large part of the campus community was very committed to moving toward carbon neutrality and support energy conservation projects even with long payback periods (e.g. 7 years)

[bg]It seems like it would be more helpful to have a paper-test and field-test to see if VR helped with physically doing the tasks, since that is the benefit that I see. In addition, looking at retention over time would be important. Otherwise the case is harder to make for VR if it’s not increasing the desired knowledge and skills

[bh]How much of the statistical favoring of VR was due to the VR platform being “cool” and “new” versus the participants’ familiarity with traditional approaches? I do not see any control in the document for this.

[bi]I agree that work as reported seems to be demonstrating that the platform provides an acceptable experience for the learner, but it’s not clear whether the skills are acquired or retained.

[bj]Three of the four authors are in the VR academic environment; one is in a chemistry department. There seems to be a real bias toward showing that VR works in this.

[bk]One application I think this could help with is one I experienced today. I had to sit through an Underground Storage Tank training which was similar to the text-heavy PowerPoint in the video. If I had been able to self-direct my tour of the tank and play with the valves and detection systems, I would have been able to complete the regulatory requirements of understanding the system well enough to oversee its operations, but not well enough to have hands-on responsibilities. The hands-on work is left to the contractors who work on the tanks every day.

[bl]We have very good results using simulators in the nuclear power and aviation industries. BUT, there is still significant learning that occurs when things become real. The most dramatic example is landing high-performance aircraft on carrier decks. Every pilot agrees that nothing prepares them for the reality of this controlled crash.

Resources for Improving Safety Culture, Training, and Awareness in the Academic Laboratory

Dr. Quinton J. Bruch, April 7th, 2022

Excerpts from “Resources for Improving Safety Culture, Training, and Awareness in the Academic Laboratory”

Full paper can be found here: https://www.sciencedirect.com/science/article/pii/B9780081026885000921?via%3Dihub


Theme 1: Setting Core Values and Leading By Example. 

Mission statements. …Mission statements establish the priorities and values of an organization, and can be useful in setting a positive example and signaling to potential new researchers the importance of safety. To examine the current state of public-facing mission statements, we accessed the websites of the top [a][b]50 chemistry graduate programs in the United States and searched any published mission statements for mentions of safety. [c]Although 29 departments had prominently displayed mission statements on their websites, only two (the University of Pennsylvania and UMN) included practicing safe chemistry in their mission statement (~at time of publishing in early 2021).[d][e][f][g]

Regular and consistent discussions about safety. Leaders can demonstrate that safety is a priority by regularly discussing safety in the context of experimental set-up, research presentations, literature presentations, or other student interactions[h][i]. Many potential safety teaching moments occur during routine discussions of the day-to-day lab experience.[j] Additionally, “safety minutes”[k][l][m][n] have become a popular method in both industry and academia to address safety at the beginning of meetings.

Holding researchers accountable[o][p]. In an academic setting, researchers are still in the trainee stage of their careers. As a result, it is important to hold follow-up discussions on safety to ensure that safe practices are being properly implemented.[q] For example, in the UMN Department of Chemistry a subset of the departmental safety committee performs regular PPE “spot checks,”[r][s] and highlights exemplary behavior through spotlights in departmental media. Additionally, each UMN chemistry graduate student participates in an annual review process that includes laboratory safety as a formal category.[t] Candid discussion and follow-up on safe practices is critical for effective trainee development.

Theme 2: Empowering Researchers to Collaborate and Practice Safe Science.

Within a group: designate safety officers. Empower group members to take charge of safety within a group, both as individuals and through formal appointed roles such as laboratory safety officers (LSOs).[u] LSOs can serve multiple important roles within a research group. First, LSOs can act as a safety role model for peers in the research group, and also as a resource for departmental policy. Further, they act as liaisons to ensure open communication between the PI, the research group, and EHS staff. These types of formal liaisons are critical for fostering collaborations between a PI, EHS, and the researchers actually carrying out labwork. LSOs can also assist [v][w]with routine safety upkeep in a lab, such as managing hazardous waste removal protocols and regularly testing safety equipment such as eyewash stations. For example, in the departments of chemistry at UMN and UNC, LSOs are responsible for periodically flushing eyewash stations and recording the check on nearby signage[x].[y][z][aa][ab] Finally, LSOs are also natural picks for department-level initiatives such as department safety committees or student-led joint safety teams.[ac]

Within a group: work on policies as a collective. Have researchers co-write, edit, and revise standard operating procedures (SOPs).[ad] Many EHS offices have guides and templates for writing SOPs. More details on SOPs will be discussed in the training section of this chapter. Co-writing has the double benefit of acting as a safety teaching moment while also helping researchers feel more engaged and responsible for the safety protocols of the group.[ae][af][ag]

Within a department: establish safety collaborations.[ah] Research groups within departments often have very diverse sets of expertise. These should be leveraged through collaboration to complement “blind spots”[ai] within a particular group to the benefit of all involved—commonly, this is done through departmental safety committees, but alternative (or complementary) models are emerging. An extremely successful and increasingly popular strategy for establishing department-wide safety collaborations is the Joint Safety Team model.

Joint Safety Teams (JSTs). A Joint Safety Team (JST) is a collaborative, graduate student- and postdoc-led initiative with the goal of proliferating a culture of laboratory safety by bridging the gaps between safety administration, departmental administration, and researchers. JSTs are built on the premise that grassroots efforts can empower students to take ownership of safety practices, thus improving safety culture from the ground up. Since its inception in 2012, the first JST at UMN (a joint endeavor between the departments of Chemistry and Chemical Engineering & Materials Science, spurred through collaboration with Dow Chemical) has directly and significantly impacted the adoption of improved safety practices, and also noticeably improved the overall safety culture. Indeed, the energy and enthusiasm of students, which are well-recognized as key drivers in research innovation, can also be a significant driver for improving safety culture.[aj][ak]

…The JST model has several advantages[al]: (1) it spreads the safety burden across a greater number of stakeholders, reducing the workload for any one individual or committee; (2) it provides departmental leadership with better insight into the “on-the-ground” attitudes and behaviors of researchers;[am][an][ao][ap] and (3) it provides students with practical safety leadership opportunities that will be beneficial to their career. In fact, many of the strategies discussed in this chapter can be either traced back to a JST, or could potentially be implemented by a JST.

An inherent challenge with student-led initiatives like JSTs is that the high student turnover in a graduate program necessitates ongoing enthusiastic participation of students after the first generation. In fact, after the first generation of LSOs left the UMN JST upon graduation, there was a temporary lag in enthusiasm and engagement. In order to maintain consistent engagement with the JST, the departmental administration created small salary bonuses for officer-level positions within the JST, as well as a funded TA position. Since spending time on JST activities takes away from potential research or other time, it seems reasonable that students be compensated accordingly when resources allow it…[aq][ar]

3.      Training

Initial Safety Training. Chemical safety training often starts with standardized, institution-wide modules that accompany a comprehensive chemical hygiene plan or laboratory safety manual. These are important resources, but researchers can be overwhelmed by the amount of information[as][at]—particularly if only some components seem directly relevant. There is anecdotal evidence that augmentation with departmental-level training[au][av][aw] initiatives can provide a stronger safety training foundation. For example, several departments have implemented laboratory safety courses. At UNC, a course for all first-year chemistry graduate students was created with the goal of providing additional training tailored to the department’s students. Iteratively refined using student feedback, the course operates via a “flipped classroom” model to maximize engagement. The course combines case study-based instruction, role playing (i.e., “what would you do” scenarios), hands-on activities (e.g., field trips to academic labs), and discussions led by active researchers in chemistry subdisciplines or industrial research.

Continued Safety Training[ax]. Maintaining engagement and critical thinking about safety can help assure that individual researchers continue to use safe practices,[ay] and can strengthen the broader safety culture. Without reinforcement, complacency is to be expected—and this can lead to catastrophe. We believe continued training can go well beyond an annual review of documentation by incorporating aspects of safety training seamlessly into existing frameworks.

Departments can incorporate safety minutes into weekly seminars or meetings, creating dedicated time for safety discussions (e.g., specific hazards, risk assessment, or emergency action plans). In our experience, safety minutes are most effective when they include interactive prompts[az] that encourage researcher participation and discussion. These safety minutes allow researchers to learn from one another and collaborate on safety.

While safety minutes provide continuous exposure and opportunities to practice thinking about safety, they lack critical hands-on (re)learning experiences. Some institutions have implemented annual researcher-driven activities to help address this challenge. For example, researchers at UNC started “safety field days” (harkening back to field days in grade school) designed specifically to offer practice with hands-on techniques in small groups. At UMN, principal investigators participate in an annual SOP “peer review” process, in which two PIs pair up and discuss strengths and potential weaknesses of a given research group’s SOP portfolio.[ba][bb]

Continued training can also come, perhaps paradoxically, when a researcher steps into a mentoring role to train someone else. The act of mentoring a less experienced researcher provides training to the mentee, but also forces the mentor to re-establish a commitment to safe practices and re-learn best practices[bc].[bd] Peer teaching approaches have long been known to improve learning outcomes for both the mentor and the mentee. With regards to safety training, mentors would be re-exposed to a number of resources used in initial safety trainings, such as SOPs, while having to demonstrate mastery over the material through their teaching. Furthermore, providing hands-on instruction would require demonstrating techniques for mentees and subsequently critically assessing and correcting the mentee’s technique. Additionally, mentors would have to engage in critical thinking about safety while answering questions and guiding the mentee’s risk assessments.

Continued training is important for all members of a research group. Seniority does not necessarily confer an innate understanding of safety, nor does it exempt one from complacency. For example, incoming postdocs[be] will bring a wealth of research and safety knowledge, but they may not be as intimately familiar with all of the hazards or procedures of their new lab, or they may come from a group or department with a different safety culture. Discussing safety with researchers can be difficult, but is a necessary part of both initial and continued training.

4.      Awareness[bf]

Awareness as it pertains to chemical safety involves building layers of experience and expertise on top of safety training. It is about being mindful and engaged in the lab and being proactive and prepared in the event of an adverse scenario. Awareness is about foreseeing and avoiding accidents before they happen, not just making retroactive changes to prevent them from happening again. There are aspects of awareness that come from experiential learning—knowing the sights and sounds of the lab—while other aspects grow out of more formal training. For example, awareness of the potential hazards associated with particular materials or procedures likely requires some form of risk assessment.

Heightened awareness grows out of strong training and frequent communication, two facets of an environment with a strong culture of safety. Communication with lab mates, supervisors, EHS staff, industry, and the broader community builds awareness at many levels. Awareness is a critical pillar of laboratory safety because it links the deeply personal aspects of being a laboratory researcher with the broader context of safety communication, training, and culture. It also helps address the challenge of implementing a constructive and supportive infrastructure.

Like any experience-based knowledge, safety awareness will vary significantly between individuals. Members of a group likely have had many of the same experiences, and thus often have overlap in their awareness. Effective mentoring can lead to further overlap by sharing awareness of hazards even when the mentee has not directly experienced the situation. Between groups in a department, however, where the techniques and hazards can vary tremendously, there is often little overlap in safety awareness. [bg][bh][bi][bj]In this section, we consider strategies to heighten researcher safety awareness at various levels through resources and tools that allow for intentional sharing of experiential safety knowledge.

Awareness Within an Academic Laboratory[bk]. …Some level of awareness[bl] will come through formal safety training, as was discussed in the preceding section. Our focus here is on heightened awareness through experiential learning, mindset development, and cultivating communication and teaching to share experience between researchers.

We have encountered or utilized several frameworks for building experience. One widely utilized model involves one-on-one mentoring schemes, where a more experienced researcher is paired with a junior researcher. This provides the junior researcher an opportunity to hear about many experiences and, when working together, the more experienced researcher can point out times when heightened awareness is needed. All the while, the junior researcher is running experiments and learning new techniques. There are drawbacks to this method, though. For example, the mentor may not be as experienced in particular techniques or reagents needed for the junior researcher’s project. Or the mentor may not be effective in sharing experience or teaching awareness. Gaps in senior leadership can develop in a group, leaving mentorship voids or leading to knowledge transfer losses. Like the game “telephone” where one person whispers a message to another in a chain, it is easy for the starting message to change gradually with each transfer of knowledge.[bm] This underscores the importance of effective mentoring,[bn] and providing mentors with a supportive environment and training resources such as SOPs and other documentation…

…Another approach involves discussing a hypothetical scenario, rather than waiting to experience it directly in the lab. Safety scenarios are a type of safety minute that provide an opportunity to proactively consider plausible scenarios that could be encountered.[bo][bp] Whereas many groups or departments discuss how to prevent an accident from happening again, hypothetical scenarios provide a chance to think about how to prevent an accident before it happens[bq][br]. Researchers can mediate these scenarios at group meetings. If a researcher is asked to provide a safety scenario every few weeks, they may also spend more time in the lab thinking about possible situations and how to handle them on their own.

Almost all of these methods constitute some form of risk or hazard assessment. As discussed in the training section, formal risk assessment has not traditionally been part of the academic regimen. Students are often surprised [bs]to learn that they perform informal risk assessments constantly, as they survey a procedure, ask lab mates questions about a reagent, or have a mentor check on an apparatus before proceeding. Intuition provides a valuable risk assessment tool, but only when one’s intuition is strong. A balance[bt] of formal risk assessment and informal, experiential, or intuition-based risk assessment is probably ideal for an academic lab.

Checklists are another useful tool for checking in on safety awareness. [bu][bv][bw]Checklists are critically important in many fields, including aviation and medicine. They provide a rapid visual cue that can be particularly useful in high-stress situations where rapid recall of proper protocol can be compromised, or in high-repetition situations where forgetting a key step could have negative consequences. Inspired by conversations with EHS staff, the Miller lab recently created checklists that cover daily lab closing, emergency shutdowns, glovebox malfunctions, high-pressure reactor status, vacuum line operations, and more. The checklists are not comprehensive, and do not replace in-person training and SOPs, but instead capture the most important aspects that can cause problems or are commonly forgotten.[bx][by] The signage provides a visual reminder of the experiential learning that each researcher has accumulated, and can provide an aid during a stressful moment when recall can break down.

A unifying theme in these approaches is the development of frameworks for gaining experience with strong mentoring and researcher-centric continued education. Communication is also essential, as this enables the shared experiences of a group to be absorbed by each individual…[bz][ca]

[a]Wondering if these were determined by U.S. News

[b]I can’t remember exactly what we did, but if we were writing this today, that is where I’d probably start. Plenty of discussion about how accurate/useful those rankings are, but in this instance, it serves as a good starting point.

[c]If safety is not part of your mission statement or part of your graduate student handbook, then this could cause issues with any disciplinary actions you may want to take. For instance, we had a graduate student set off the fire alarm twice on purpose in a research-intensive building. It was difficult for the department this person was part of to take action.

We have a separate section on safety in our handbook: “Chemical engineering research often involves handling and disposing of hazardous materials. Graduate students must follow safe laboratory practices as well as attend a basic safety training seminar before starting any laboratory work. In order to promote a culture of safety, the department maintains an active Laboratory Safety Committee composed of the department head, faculty, staff, and student members, which meets each semester. Students are expected to be responsive to the safety improvements suggested by the committee, to serve on the committee when asked, and to utilize the committee members as a resource for lab safety communication.”

[d]This is an interesting perspective on how others have prioritized safety.

[e]I find these sorts of things ring hollow given how little PIs or department leadership seem to know about what is happening in other people’s labs.

[f]I agree, particularly at the University level. However, a number of labs, including the Miller lab, have mission statements that mention safety. I think at that level, it certainly can demonstrate greater intent.

[g]Agreed. Having that language come from the PI is definitely different from having it come from the department or university – especially if the PI also walks the walk.

[h]So important because what is important to the professor becomes important to the student

[i]I definitely agree with this. I have always noticed that our students immediately reflect, pick up on, and look for what they think their professors deem most necessary/important.

[j]I think this is an underappreciated area. These are the conversations that help make safety a part of doing work, not just something bolted on to check a box or provide legal cover.

[k]Is this the same as safety moments?

[l]I would say somewhat? I’ve come to realize that these two terms are often used interchangeably, but can mean very different things. At UNC, what my lab called “safety minutes” would be a dedicated section of group meeting every week where someone would lead a hypothetical scenario or a discussion of how to design a dangerous experiment with cradle-to-grave waste planning. At other places, these can mean things as simple as a static slide before seminar.

[m]The inverse of my comment above. I think these have their place, but if they’re disconnected from what the group’s “really” there to talk about, they can reinforce the idea that safety isn’t a core part of research.

[n]Agreed. I’ve seen some groups implement this by essentially swiping “safety minutes” from someone else. While this could be a way to get started, the items addressed really should be specific to your research group and your lab in order to be meaningful.

[o]Note how all these examples are of the department holding its own researchers accountable, not EHS coming externally to enforce

[p]Yes! It doesn’t fight against the autonomy that’s a core value of academia.

[q]This is a good point. Can’t be a one-and-done although it often feels like it is treated that way.

[r]It seems like this practice would also build awareness and appreciation of the other lab settings and help to foster communication between groups.

[s]I would also hope that it would normalize reminding others about their PPE. I was surprised to find how many faculty members in my department were incredibly uncomfortable with correcting the PPE or behavior of a student from a different lab. It meant effectively that we had extremely variable approaches to PPE and safety throughout our department.

[t]LOVE this idea. I’m guessing it makes the PI reflect on safety of each student as well as start a conversation about what’s going well and ways to improve

[u]We’ve seen dramatic improvement in safety issue resolution once we implemented this kind of program.

[v]Note how the word assist is used. Important to emphasize that the PI is delegating some of their duties to the LSO and throughout the lab but they are ultimately responsible for the safety of their researchers.

[w]Really important point. Too often I’ve seen an LSO designated just so the PI can essentially wash their hands of responsibility and just hand everything safety related to the LSO. It is also important for the PI to be prepared to back their LSO if another student doesn’t want to abide by a rule.

[x]Wondering what the risk is of institutions taking advantage of LSO programs by putting tasks on researchers that should really be the responsibility of the facility or EHS

[y]At UMN and UNC, do these tasks/roles contribute towards progress towards degree?

[z]In my experience at UNC serving as an LSO, it does not relate to progressing one’s studies/degree in any way (though there is the time commitment component of the role). At UMN, I know departmentally they have stronger support for their LSOs and requirements but I would not say that serving as an LSO helps/hinders degree progression (outside again of potential time commitments).

[aa]Thanks! I’m glad to hear that it doesn’t seem to hurt, but I think finding ways for it to help could make a huge difference. Thinking back to my own experience, my advisor counseled us to always have that end goal in mind when thinking about how we spent our time. This was in the context of not prioritizing TA duties ahead of research, but it is something that could argue against taking on these sorts of tasks.

[ab]Yeah, I think that is a really important point. If your PI continuously stresses only the results of research, or imparts a sense of speed > safety, the students will pick up on that and shift in that direction as a function of their lab culture. So the flipside is that if you can build and sustain a strong culture of safety, it becomes an inherent requirement, not an external check.

[ac]It is important to keep in mind that this work should be considered in relation to other duties and to somehow be equally shared out among lab members. Depending on how this work is distributed, it can become an incredibly time-consuming set of tasks for one person to constantly be handling.

[ad]I have struggled with how to get buy-in for production of written SOPs.

[ae]It also increases the likelihood that the researchers will be able to implement the controls!

[af]Important point. It is often difficult in grad school to admit when you don’t understand or know how to do something. It is critical to make sure that they understand what is expected. I ran a small pilot project in which I found out that all 6 “soon to be defending” folks involved in the pilot had no clue what baffles in a hood were. Our Powerpoint “hoods” training was required every year. Ha!

[ag]In addition, it serves as a review process to catch risks and hazards that the first writer may not have thought of. In industry it is common practice for multiple people to check off on a protocol before it is used.

[ah]As outlined in this section, a good idea.  We’re also working toward development of collaboration between staff with safety functions.  For instance, have building managers from one department involved in inspections of buildings of other departments.

[ai]Also to avoid re-inventing the wheel. If another group has this expertise and has done risk assessments on the procedure you’re doing, better to start there rather than from scratch. You may even identify items for the other group to add.

[aj]Bottom up approach works extremely well when you have departmental support but not so much when the Head of the Dept doesn’t care.

[ak]They also can’t be effective if the concerns that the JST raise aren’t taken seriously by those who can change policies.

[al]An additional advantage is displaying value for performing safety duties. I.e., the culture is developed such that your work is appreciated rather than an annoyance.

[am]I’m curious to hear what discoveries have been made about this.

[an]At UConn, we were trying to use surveys to essentially prove to our department leadership and faculty safety committee that graduate students actually DID want safety trainings – we just wanted them to be more specific to the things that we are actually concerned about in lab (as opposed to the same canned ones that they kept offering). My colleagues have told me that they are actually moving forward with these types of trainings now.

[ao]We also started holding quarterly LSO meetings because we proved to faculty through surveys that graduate students actually did want them (as long as they were designed usefully and addressed real issues in the research labs).

[ap]We work with representatives from various campus entities, which brings us a variety of insights.  Yes focused training is much more valuable, and feels more worthwhile, educating both the trainer and trainee.

[aq]This is an impressive way for department administration to show endorsement and support for safety efforts.

[ar]Another way departments can communicate their commitment towards safety.

[as]We have one of these, too, but we need to move away from it. Not augmentation, but replacement. I don’t know what that looks like yet, but a content dump isn’t it.

[at]Agreed. At UNC we actually have an annual requirement to review the laboratory safety manual with our PI in some sort of setting (requires the PI to sign off on some forms saying they did it). Obviously with the length, that isn’t feasible in its entirety; so we’d highlight a few key sections but yeah. Not the most useful document

[au]I see the CHP or lab safety manual as education/resources and the training as actually practicing behaviors that are expected of researchers, which is critical to actually enforcing policies

[av]Agreed. I always felt any “training” should actually be doing something hands-on. Sitting in a lecture hall watching a Powerpoint should not qualify as “training.”

[aw]Agreed as well Jessica.  That is why now all our safety trainings are hands on as you stated.  It has worked and come across MUCH better.  Even with our facilities and security personnel.

[ax]I really like how continued training is embedded throughout regular day-to-day activities in many cases, this is important. I would add that it is important to have the beginning training available for refresher or reference as needed but don’t think it’s worth it to completely retake the online trainings.

[ay]Let me remind everyone that when a senior lab person shows a junior person how to do a procedure, training is occurring. Including safety aspects in the teaching is critical. Capturing this with a note in the junior person’s lab notebook documents it. The UCLA prosecution would not have occurred if the PD had done this with Ms. Sangji.

[az]I’m very curious to hear about examples of this.

[ba]Wow. Getting the PIs to do this would be awesome. I wonder what is the PI incentive. Part of their annual review?

[bb]Agreed, this seems like a big ask

[bc]In recognition of this our department has put together an on-line lab research mentoring guide and we’re looking for ways to disseminate info about it.

[bd]This works both ways; a mentee ending up with a mentor who doesn’t emphasize safety in their work might be communicating that to their mentees as well.

[be]This has been something I’ve been concerned about and am not sure how to address as an embedded safety professional.

[bf]Just a thought – It seems like there is a big overlap between continued training and developing awareness, which makes sense

[bg]In working with graduate students, I have found a really odd understanding of this to be quite common. Many think that they are only responsible for understanding what is going on in their own labs – and not for what may be going on next door.

[bh]So true. This is where EHS could really help departments or buildings define safety better.  Most people may not be aware that the floor above them uses pyrophorics, etc.

[bi]I think this speaks to how insular grad school almost forces you to be. You spend so much time deepening your knowledge and understanding of your area of research that you have no time to develop breadth.

[bj]Yeah, these are great points. Anecdotally, at UNC when we started the JST we really struggled to get any engagement whatsoever out of the computational groups, even when their work space is across the hall from heavily synthetic organic chemistry groups. We didn’t really solve this, but I know it was and is something we’re chewing on

[bk]Not mentioned explicitly in this section, but documenting what is learned is critically important.  As noted earlier in the paper, students cycle through labs.  The knowledge needs to stay there.

[bl]From my perspective, awareness seems to be directly tied to the PI’s priorities, except for the 1 in 20 student.

[bm]It seems like something like this happened in the 2010 Texas Tech explosion.

[bn]This highlights the importance of developing procedures/protocols/SOPs, secondary review, and good training/onboarding practices for specific techniques.

[bo]These are always great discussions and fruitful.

[bp]Agreed. Even if you never encounter that scenario in your career, the process of how to think about responding to the unexpected is a generalizable skill.

[bq]I also think it is important to use something like this to help researchers think about how to respond to incidents when they do happen in order to decrease the harm caused by the incident.

[br]This is really great. I think a big part of knowing what to do when a lab incident occurs has a lot to do with thinking about how to respond to the incident before it happens.

[bs]I’m really glad this is included in here. Most of risk assessment is actually very intuitive but this highlights the importance of going through the process in a formal way. But the term is so unfamiliar to researchers sometimes that it seems unapproachable

[bt]I’m interested in learning how others judge the way to find this balance.

[bu]These are useful if the student doing the work develops the checklist; otherwise it becomes just a checklist without understanding and thought. I see many students look at these checklists and ignore hazards because they are not on or part of the list.

[bv]Good point. It would likely be a good practice to periodically update these as well – especially encouraging folks to bring in things that they’ve come across that were done poorly or they had to clean up.

[bw]These are great points. The checklists we’ve designed try their best to highlight major hazards, but for brevity it isn’t possible to cover everything. As Jessica pointed out, reviewing them periodically can be a huge boost, in the same way that reviewing and updating SOPs as living documents is also important.

[bx]This is important – a good checklist is neither an SOP nor a training.

[by]I utilize checklists but sometimes see a form of checklist fatigue – a repeated user thinks they know it and doesn’t bother with the checklist.

So the comment about a GOOD checklist is applicable.

[bz]I’m impressed with the ideas and diversity and discussion of alternatives. It’s inspiring. However, I’m trying to institute a culture of safety where I am, and many of these ideas aren’t possible for me. I’m in chemistry at a 2-year (community) college, and I don’t have TAs, graduate students, etc. We’re not a research institution, which is somewhat of an advantage because our situations are relatively static and theoretically controllable. My other problem is I’m trying to carry the safety culture across the college and to our sister colleges, to departments like maintenance and operations, art, aircraft mechanics, welding, etc.

I would love to see ways to address 

1. just a teaching college

2. other processes across campus.

I head a safety committee but am challenged to keep people engaged and aware.

[ca]I’ve found some helpful insights from others about this sort of thing from NAOSMM. They have a listserv and annual conferences where they offer workshops and presentations on safety that are helpful for non-research institutions like the one you’re describing.

Positive Feedback for Improved Safety Inspections

Melinda Box, MEd, presenter
Ciana Paye, &
Maria Gallardo-Williams, PhD 
North Carolina State University

Melinda’s PowerPoint can be downloaded from here.

03/17 Table Read for The Art & State of Safety Journal Club

Excerpts from “Positive Feedback via Descriptive Comments for Improved Safety Inspection Responsiveness and Compliance”

The full paper can be found here: https://pubs.acs.org/doi/10.1021/acs.chas.1c00009

Safety is at the core of industrial and academic laboratory operations worldwide and is arguably one of the most important parts of any enterprise since worker protection is key to maintaining on-going activity.28 At the core of these efforts is the establishment of clear expectations regarding acceptable conditions and procedures necessary to ensure protection from accidents and unwanted exposures[a]. To achieve adherence to these expectations, frequent and well-documented inspections are made an integral part of these systems.23 

Consideration of the inspection format is essential to the success of this process.31 Important components to be mindful about include frequency of inspections, inspector training, degree of recipient involvement, and means of documentation[b][c][d][e]. Within documentation, the form used for inspection, the report generated, and the means of communicating, compiling, and tracking results all deserve scrutiny in order to optimize inspection benefits.27

Within that realm of documentation, inspection practice often depends on the use of checklists, a widespread and standard approach.  Because checklists make content readily accessible and organize knowledge in a way that facilitates systematic evaluation, they are understandably preferred for this application.  In addition, checklists reduce frequency of omission errors and, while not eliminating variability, do increase consistency in inspection elements[f][g][h][i][j] among evaluators because users are directed to focus on a single item at a time.26 This not only amplifies the reliability of results, but it also can proactively communicate expectations to inspection recipients and thereby facilitate their compliance preceding an inspection.

However, checklists do have limitations.  Most notably, if items on a list cover a large scale and inspection time is limited, reproducibility in recorded findings can be reduced[k]. In addition, individual interpretation and inspector training and preparation can affect inspection results[l][m][n][o][p][q].11 The unfortunate consequence of this variation in thoroughness is that without a note of deficiency there is not an unequivocal indication that an inspection has been done. Instead, if something is recorded as satisfactory, the question remains whether the check was sufficiently thorough or even done at all.  Therefore, in effect, the certainty of what a checklist conveys becomes only negative feedback[r][s][t].

Even with uniformity of user attention and approach, checklists risk producing a counterproductive form of tunnel vision[u] because they can tend to discourage recognition of problematic interactions and interdependencies that may also contribute to unsafe conditions.15 Also, depending on format, a checklist may not provide the information on how to remedy issues nor the ability[v][w] to prioritize among issues in doing follow-up.3 What’s more, within an inspection system, incentive to pursue remedy may only be the anticipation of the next inspection, so self-regulating compliance in between inspections may not be facilitated.[x][y][z][aa][ab][ac]22

Recognition of these limitations necessitates reconsideration of the checklist-only approach, and as part of that reevaluation, it is important to begin with a good foundation.  The first step, therefore, is to establish the goal of the process.  This ensures that the tool is designed to fulfill a purpose that is widely understood and accepted.9 Examples of goals of an environmental health and safety inspection might be to improve safety of surroundings, to increase compliance with institutional requirements, to strengthen preparedness for external inspections, and/or to increase workers’ awareness and understanding of rules and regulations.  In all of these examples, the aim is to either prompt change or strengthen existing favorable conditions.  While checklists provide some guidance for change, they do not bring about that change, [ad][ae]and they are arguably very limited when it comes to definitively conveying what favorable conditions to strengthen.  The inclusion of positive feedback fulfills these particular goals.

A degree of skepticism and reluctance to actively include a record of positive observations[af][ag] in an inspection, is understandable since negative feedback can more readily influence recipients toward adherence to standards and regulations.  Characterized by correction and an implicit call to remedy, it leverages the strong emotional impact of deficiency to encourage limited deviation from what has been done before.19 However, arousal of strong negative emotions, such as discouragement, shame, and disappointment, also neurologically inhibits access to existing neural circuits thereby invoking cognitive, emotional, and perceptual impairment.[ah][ai][aj]10, 24, 25 In effect, this means that negative feedback can also reduce the comprehension of content and thus possibly run counter to the desired goal of bringing about follow up and change.2 

This skepticism and reluctance may understandably extend to even including positive feedback with negative feedback since affirmative statements do not leave as strong of an impression as critical ones.  However, studies have shown that the details of negative comment will not be retained without sufficient accompanying positive comment.[ak][al][am][an][ao][ap][aq]1[ar][as] The importance of this balance has also been shown for workplace team performance.19 The correlation observed between higher team performance and a higher ratio of positive comments in the study by Losada and Heaphy is attributed to an expansive emotional space, opening possibilities for action and creativity.  By contrast, lower performing teams demonstrated restrictive emotional spaces as reflected in a low ratio of positive comments.  These spaces were characterized by a lack of mutual support and enthusiasm, as well as an atmosphere of distrust and [at]cynicism.18

The consequence of positive feedback in and of itself also provides compelling reason to regularly and actively provide it. Beyond increasing comprehension of corrections by offsetting critical feedback, affirmative assessment facilitates change by broadening the array of solutions considered by recipients of the feedback.13 This dynamic occurs because positive feedback adds to employees’ sense of security and amplifies their confidence to build on their existing strengths, thus empowering them to perform at a higher level[au].7 

Principles of management point out that to achieve high performance employees need to have tangible examples of right actions to take[av][aw][ax][ay][az], including knowing what current actions to continue doing.14, 21 A significant example of this is the way that Dallas Cowboys football coach, Tom Landry, turned his new team around.  He put together highlight reels for each player that featured their successes.  That way they could identify within those clips what they were doing right and focus their efforts on strengthening that.  He recognized that it was not obvious to natural athletes how they achieved high performance, and the same can be true for employees and inspection recipients.7 

In addition, behavioral science studies have demonstrated that affirmation builds trust and rapport between the giver and the receiver[ba][bb][bc][bd][be][bf][bg].6 In the context of an evaluation, this added psychological security contributes to employees revealing more about their workplace, which can be an essential component of a thorough and successful inspection.27 Therefore, positive feedback encourages the dialogue needed for conveying adaptive prioritization and practical means of remedy, both of which are often requisite to solving critical safety issues.[bh]

Giving positive feedback also often affirms an individual’s sense of identity in terms of their meaningful contributions and personal efforts. Acknowledging those qualities, therefore, can amplify them.  This connection to personal value can evoke the highest levels of excitement and enthusiasm in a worker, and, in turn, generate an eagerness to perform and fuel energy to take action.8 

Looked at from another perspective, safety inspections can feel personal. Many lab workers spend a significant amount of time in the lab, and as a result, they may experience inspection reports of that setting as a reflection on their performance rather than strictly an objective statement of observable work. Consideration of this affective impact of inspection results is important since, when it comes to learning, attitudes can influence the cognitive process.29  Indeed, this consideration can be pivotal in transforming a teachable moment into an occasion of learning.4 

To elaborate, positive feedback can influence the affective experience in a way that can shift recipients’ receptiveness to correction[bi][bj].  Notice of inspection deficiencies can trigger a sense of vulnerability, need, doubt, and/or uncertainty. At the same time, though, positive feedback can promote a sense of confidence and assurance that is valuable for active construction of new understanding.  Therefore, positive feedback can encourage the transformation of the intense interest that results from correction into the contextualized comprehension necessary for successful follow-up to recorded deficiencies.  

Overall, then, a balance of both positive and negative feedback is crucial to ensuring adherence to regulations and encouraging achievement.[bk]2, 12 Since positive emotion can influence how individuals respond to and take action from feedback, the way that feedback is formatted and delivered can determine whether or not it is effective.20  Hence, rooting an organization in the value of positivity can reap some noteworthy benefits including higher employee performance, increased creativity, and an eagerness in employees to engage.6

To gain these benefits, it is, therefore, necessary to expand from the approach of the checklist alone.16 Principles of evaluation recommend a report that includes both judgmental and descriptive information.  This will provide recipients with the information they seek regarding how well they did and the successfulness of their particular efforts.27 Putting the two together creates a more powerful tool for influence and for catalyzing transformation.

In this paper the authors would like to propose an alternative way to format safety inspection information, in particular, to provide feedback that goes beyond the checklist-only approach. The authors’ submission of this approach stems from their experiences of implementing a practice of documenting positive inspection findings within a large academic department. They would like to establish, based on educational and organizational management principles, that this practice played an essential role in the successful outcome of the inspection process in terms of corrections of safety violations, extended compliance, and user satisfaction. [bl][bm][bn][bo]

[a]There is also a legal responsibility of the purchaser of a hazardous chemical (usually the institution, at a researcher’s request) to assure it is used in a safe and healthy manner for a wide variety of stakeholders

[b]I would add what topics will be covered by the inspection to this list. The inspection programs I was involved in/led had a lot of trouble deciding whether the inspections we conducted were for the benefit of the labs, the institution or the regulatory authorities. Each of these stakeholders had a different set of issues they wanted to have oversight over. And the EHS staff had limited time and expertise to address the issues adequately from each of those perspectives.

[c]This is interesting to think about. As LSTs have taken on peer-to-peer inspections, they have been using them as an educational tool for graduate students. I would imagine that, even with the same checklist, what would be emphasized by an LST team versus EHS would end up being quite a bit different as influenced by what each group considered to be the purpose of the inspection activity.

[d]Hmm, well maybe the issue is trying to cram the interests and perspectives of so many different stakeholders into a single, annual, time-limited event 😛

[e]I haven’t really thought critically about who the stakeholders of an inspection are and whom they serve. I agree with your point, Jessica, that the EHS and LST inspections would be different and focus on different aspects. I feel inclined to ask my DEHS rep for his inspection checklist and compare it to ours.

[f]This is an aspirational goal for checklists, but is not often well met. This is because laboratories vary so much in the way they use chemicals that a one size checklist does not fit all. This is another reason that the narrative approach suggested by this article is so appealing

[g]We have drafted hazard specific walkthrough rubrics that help address that issue but a lot of effort went into drafting them and you need someone with expertise in that hazard area to properly address them.

[h]Well I do think that, even with the issues that you’ve described, checklists still work towards decreasing variability.

In essence, I think of checklists as directing the conversation and providing a list of things to make sure that you check for (if relevant). Without such a document, the free-form structure WOULD result in more variability and missed topics.

Which is really to say that, I think a hybrid approach would be the best!

[i]I agree that a hybrid approach is ideal, if staff resources permit. The challenge is that EHS staff are responding to a wide variety of lab science issues and have a hard time being confident that they are qualified to raise issues. Moving from major research institutions to a PUI, I finally feel like I have the time to provide support to not only raise concerns but help address them. At Keene State, we do modern science, but not exotic science.

[j]I feel like in the end, it always comes down to “we just don’t have the resources…”

Can we talk about that for a second? HOW DO WE GET MORE RESOURCES FOR EHS  : /

[k]I feel like that would also be the case for a “descriptive comment” inspection if the time is limited. Is there a way that the descriptive method could improve efficiency while still covering all inspection items?

[l]This is something my university struggles with in terms of a checklist. There’s quite a bit of variability between inspections in our labs done by different inspectors. Some inspectors will catch things that others would overlook. Our labs are vastly different but we are all given the same checklist – Our checklist is also extensive and the language used is quite confusing.

[m]I have also seen labs with poor safety habits use this to their advantage as well. I’ve known some people who strategically put a small problem front and center so that they can be criticized for that. Then the inspector feels like they have “done their job” and they don’t go looking for more and miss the much bigger problem(s) just out of sight.

[n]^ That just feels so insidious. I think the idea of the inspector looking for just anything to jot down to show they were looking is not great (and something I’ve run into), but you want it to be because they CAN’T find anything big, not just to hide it.

[o]Amanda, our JST has had seminars to help prepare inspectors and show them what to look for. We also include descriptions of what each inspection score should look like to help improve reproducibility.

[p]We had experience with this issue when we were doing our LSTs Peer Lab Walkthrough! For this event, we have volunteer graduate students serve as judges to walk through volunteering labs to score a rubric. One of the biggest things we’ve had to overhaul over the years is how to train and prepare our volunteers.

So far, we’ve landed on providing training, practice sessions, and using a TEAM of inspectors per lab (rather than just one). These things have definitely made a huge difference, but it’s also really far from addressing the issue (and this is WITH a checklist)

[q]I’d really like to go back to peer-peer walkthroughs and implement some of these ideas. Unfortunately, I don’t think we are there yet in terms of the pandemic for our university and our grad students being okay with this. Brady, did you mean you train the EHS inspectors or for JST walkthroughs? I’ve given advice about seeing some labs that have gone through inspections but a lot of items that were red flags to me (and to the grad students who ended up not pointing it out) were missed / not recorded.

[r]This was really interesting to read, because when I was working with my student safety team to create and design a Peer Lab Walkthrough, this was something we tried hard to get around even though we didn’t name the issue directly. 

We ended up making a rubric (which seems like a form of checklist) to guide the walkthrough and create some uniformity in responding, but we decided to make it be scored 1-5 with a score of *3* representing sufficiency. In this way, the rubric would both include negative AND positive feedback that would go towards their score in the competition.

[s]I think the other thing that is cool about the idea of a rubric, is there might be space for comments, but by already putting everything on a scale, it can give an indication that things are great/just fine without extensive writing!

[t]We also use a rubric but require a photo or description for scores above sufficient to further highlight exemplary safety.

[u]I very much saw this in my graduate school lab when EHS inspections were done. It was an odd experience for me because it made me feel like all of the things I was worried about were normal and not coming up on the radar for EHS inspections.

[v]I think this type of feedback would be really beneficial. From my experience (and hearing from others), we do not get any feedback on how to fix issues. You can ask for help to fix the issues, but sometimes the recommendations don’t align with the lab’s needs / why it is an issue in the first place.

[w]This was a major problem in my 1st research lab with the USDA. We had safety professionals who showed up once per year for our annual inspection (they were housed elsewhere). I was told that a setup we had wasn’t acceptable. I explained why we had things set up the way we did, then asked for advice on how to address the safety issues raised. The inspector literally shrugged his shoulders and gruffly said to me “that’s your problem to fix.” So – it didn’t get fixed. My PI had warned me about this attitude (so this wasn’t a one-off), but I was so sure that if we just had a reasonable, respectful conversation….

[x]I think this is a really key point. We had announced inspections and were aware of what was on the checklists. As an LSO, I would get the lab in tip-top shape in the week leading up to it. Fortunately, we didn’t always have a ton of stuff to fix in that week, but it’s easy to imagine that reducing things to a yes/no checklist can mask long-term problems that are common in between inspections.

[y]My lab is similar to this. When I first joined and went through my first inspection – the week prior was awful trying to correct everything. I started implementing “clean-ups” every quarter because my lab would just go back to how it was after the inspection.

[z]Same

[aa]Our EHS was short-staffed and ended up doing “annual inspections” 3 years apart – once was before I joined my lab, and the next was ~1.5 years in. The pictures of the incorrect things found turned out to be identical to the inspection report from 3 years prior. Not just the same issues, but quite literally the same bottles/equipment.

[ab]Yeah, we had biannual lab cleanups that were all day events, and typically took place a couple months before our inspections; definitely helped us keep things clean.

One other big thing is that when we swapped to Subpart K waste handling (can’t keep waste longer than 12 months), we just started disposing of all waste every six months so that nothing could slip through the cracks before an inspection.

[ac]Jessica, you’re talking about me and I don’t like it XD

[ad]I came to feel that checklists were for things that could be reviewed when noone was in the room and that narrative comments would summarize conversations about inspectors’ observations. These are usually two very different sets of topics.

[ae]We considered doing inspections “off-hours” to focus on the checklist items, until we realized there was no such thing as off hours in most academic labs. Yale EHS found many more EHS problems during 3rd shift inspections than daytime inspections

[af]This also feels like it would be a great teachable moment for those newer to the lab environment. We often think that if no one said anything, I guess it is okay even if we do not understand why it is okay. Elucidating that something is being done right “and this is why” is really helpful to both teach and reinforce good habits.

[ag]I think that relates well to the football example of highlighting the good practices to ensure they are recognized by the player or researcher.

[ah]I noticed that not much was said about complacency, which I think can both be a consequence of an overwhelm of negative feedback and also an issue in and of itself stemming from culture. And I think positive feedback and encouragement could combat both of these contributors to complacency!

[ai]Good point. It could also undermine the positive things that the lab was doing because they didn’t actually realize that the thing they were previously doing was actually positive. So complacency can lead to the loss of things that you were doing right before.

[aj]I have seen situations where labs were forced to abandon positive safety efforts because they were not aligned with institutional or legal expectations. Compliance risk was lowered, but physical risk increased.

[ak]The inclusion of both positive and negative comments shows that the inspector is being more thorough which to me would give it more credibility and lead to greater acceptance and retention of the feedback.

[al]I think they might be describing this phenomenon a few paragraphs down when they talk about trust between the giver and receiver.

[am]This is an interesting perspective. I have found that many PIs don’t show much respect for safety professionals when they deliver bad news. Perhaps the delivery of good news would help to reinforce the authority and knowledge base of the safety professionals – so that when they do have criticism to deliver, it will be taken more seriously.

[an]The ACS Pubs advice to reviewers of manuscripts is to describe the strengths of the paper before raising concerns. As a reviewer, I have found that approach very helpful because it makes me look for the good parts of the article before looking for flaws.

[ao]The request for corrective action should go along with a discussion with the PI of why change is needed and practical ways it might be accomplished.

[ap]I worked in a lab that had serious safety issues when I joined.  I wonder if the positive/negative feedback balance could have made a difference in changing the attitudes of the students and the PI.

[aq]It works well with PIs with an open mind; but some PIs have a sad history around EHS that has burnt them out on trying to work with EHS staff. This is particularly true if the EHS staff has a lot of turnover.

[ar]Sandwich format: the negative (filling) between two positives (bread).

[as]That crossed my mind right away as well.

[at]I wonder how this can contribute to a negative environment about lab-EHS interactions? If there is only negative commentary (or the perception of that) flowing in one direction, would seem that it would have an impact on that relationship

[au]I see how providing positive feedback along with the negative feedback could help do away with the feeling the inspector is just out to get you. Instead, they are here to provide help and support.

[av]This makes me consider how many “problem” examples I use in the safety training curriculum.

[aw]This is a good point. From a learning perspective, I think it would be incredibly helpful to see examples of things being done right – especially when what is being shown is a challenge and is in support of research being conducted.

[ax]I third this so hard! It was also something that I would seek out a lot when I was new but had trouble finding resources on. I saw a lot of bad examples—in training videos, in the environments around me—but I was definitely starved for a GOOD example to mimic.

[ay]I read just part of the full paper. It includes examples of positive feedback and an email that was sent. The example helped me get the flavor of what the authors were doing.

[az]This makes a lot of sense to me – especially from a “new lab employee” perspective.

[ba]Referenced in my comment a few paragraphs above.

[bb]I suspect that another factor in trust is the amount of time spent in a common environment. Inspectors don’t have a lot of credibility if they only visit annually a place where a lab worker spends every day.

[bc]A lot of us do not know or recognize our inspectors by name or face (we’ve also had so much turnover in EHS personnel and don’t even have a CHO). I honestly would not be able to tell you who does inspections if not for being on our university’s LST. This issue of trust came up in our lab during a review of our radiation safety inspection. The waste was not tested properly, so we were questioned about our rad waste. It was later found that the inspector unfortunately didn’t perform the correct test for the pH. My PI did not take it very well after having to correct them about this.

[bd]I have run into similar issues when inspectors and PIs do not take the time to have a conversation, but connect only by e-mail. Many campuses have inspection tracking apps which make this too easy a temptation for both sides

[be]Not only a conversation: inspectors can be actively involved in improving conditions, e.g., supplying signs, PPE, and similar SOPs…

[bf]Yes, sometimes it’s amazing how little money it can take to show a good-faith effort from EHS to support their recommendations. Other times, if it is a capital cost issue, EHS is helpless to assist even if there is an immediate concern.

[bg]I have found inspections where the EHS staff openly discuss issues and observations as they make the inspection very useful. It gives me the chance to ask the million questions I have about safety in the lab.

[bh]We see this in our JST walkthroughs. We require photos or descriptions of above acceptable scores and have some open ended discussion questions at the end to highlight those exemplary safety protocols and to address areas that might have been lacking.

[bi]Through talking with other student reps, we usually never know what we are doing “right” during inspections. I think sharing positive feedback would help the other labs that might have been marked on their inspection checklist for that same item, giving them a resource on how to correct issues.

[bj]I think this is an especially important point in an educational environment. Graduate students are there to learn many things, such as how to maintain a lab. It is weird to be treated as if we should already know – and get no feedback about what is being done right and why it is right. I often felt like I was just sort of guessing at things.

[bk]At UVM, we hosted an annual party where we reviewed the most successful lab inspections. This was reasonably well received, particularly by lab staff who were permanent and wanted to know how others in the department were doing on the safety front.

[bl]This can be successful, UNTIL a regulatory inspection happens that finds significant legal concerns related primarily to paperwork issues. Then upper management is likely to say “enough of the nice guy approach – we need to stop the citations.” Been there, done that.

[bm]Fundamentally, I think there needs to be two different offices. One to protect the people, and one to protect the institution *huff huff*

[bn]When that occurs, there is a very large amount of friction between those offices. And the Provost ends up being the referee. That is why sometimes there is no follow up to an inspection report that sounds dire.

[bo]But I feel like there is ALREADY friction between these two issues. They’re just not as tangible and don’t get as much attention as they would if you have two offices directly conflicting.

These things WILL conflict sometimes, and I feel like we need a champion for both sides. It’s like having a union. Right now, the institution is the only one with a real hand in the game, so right now that perspective is the one that will always win out. It needs more balance.

Pragmatism as a teaching philosophy in the safety sciences

On March 10, Dr. Patricia Shields discussed the article she co-authored with three safety professionals about using “pragmatism” as a teaching philosophy in the safety sciences. Her summary PowerPoint and the comments from the table read of this article are below.

The full paper can be found here: https://www.sciencedirect.com/science/article/pii/S0925753520304926?casa_token=gG7VtvjEqqsAAAAA:Of4B_mGRk-HwwH-q_WQLybg2zDGPtjcYVFCg0ZgnYe5riPefhOJ6nDCGF2YwjMrhSR2wGfIABg

Excerpts from “Pragmatism as a teaching philosophy in the safety sciences: A higher education pedagogy perspective”

03/03 Table Read for The Art & State of Safety Journal Club

Meeting Plan

  • (5 minutes) Jessica to open meeting
  • (15 minutes) All participants read complete document
  • (10 minutes) All participants use “Comments” function to share thoughts
  • (10 minutes) All participants read others’ Comments & respond
  • (10 minutes) All participants return to their own Comments & respond
  • (5 minutes) Jessica announces next week’s plans & closes meeting

  1. Introduction

(FYI, most of the Introduction has been cut)

Elkjaer (2009) has previously alluded to this lack of appreciation and value of pragmatism ‘as a relevant learning theory’ (p. 91) in spite of the growing recognition of its important role in education and teaching (Dewey, 1923, 1938; Garrison and Neiman, 2003; Shields, 2003a; Sharma et al., 2018), scholarship and academic development (Bradley, 2001), academic practice (Shields, 2004; 2006), curriculum (Biesta, 2014) and online learning (Jayanti and Singh, 2009). This article, therefore, addresses this anomaly by arguing for the appropriateness of pragmatism as a highly relevant philosophical cornerstone, especially for safety science educators[a].

2. The Scholarship of Learning and Teaching (SoLT)

(FYI, this section has been cut)

3. Pragmatism as a teaching philosophy

3.1. Teaching philosophies

(FYI, most of this section has been cut)

The research paradigms used extensively in higher education are positivism and interpretivism, and these are often cited by faculty as influencing their teaching philosophy (Cohen et al., 2006). The two are usually associated with quantitative and qualitative research methods, respectively, but both prove problematic for the teaching of the safety sciences. First, safety science relies on both quantitative and qualitative methods. Second, neither uses a ‘problem’ orientation in its approach to methods, whereas safety science is inherently problem and practice oriented, and certainly should be with respect to its teaching.[b][c][d]

Third, the mixed methods literature has recognized this drawback and adopted pragmatism as its research paradigm because it takes the research problem as its point of departure (Johnson and Onwuegbuzie, 2004). In contrast to positivism and interpretivism, pragmatism holds the view that the research question that needs to be answered is more important than either the philosophical stance or the methods that support such a stance. Pragmatism is traditionally embraced as the paradigm of mixed methods; hence, it turns the incompatibility theory on its head by combining qualitative and quantitative research approaches, and “offers an immediate and useful middle position philosophically and methodologically; a practical and outcome-oriented method of inquiry that is based on action and leads” (Johnson and Onwuegbuzie, 2004, p. 17). The pluralism of pragmatism allows it to work across and within methodological and theoretical approaches, which, for the purposes of this paper, is consistent with a safety science multi-disciplinary approach.

This places practice, where the problem must originate, as an important component of mixed methods. This practice orientation resonates with the goals of learning and teaching in safety science. Therefore, presented here is the philosophy of ‘pragmatism’, which we argue is much better suited for guiding or informing safety science teaching endeavours.

3.2. The foundations of pragmatism

(FYI, this section has been cut)

3.3. Value of pragmatism for the safety sciences

(FYI, this section has been cut)

4. Safety science higher education in Australia

(FYI, this section has been cut)

5. Pragmatism and evidence informed practice (EIP)

Safety science education has traditionally taken an evidence-informed practice (EIP) stance for its teaching practice. Evidence-informed practice is not a one-dimensional concept, and its definition is still under debate, with various academic lenses being applied to the notion of ‘research as evidence’ and how EIP can be measured (Nelson and Campbell, 2017). However, Bryk (2015) is credited with offering the view that EIP is “fine-grained practice-relevant knowledge, generated by educators, which can be applied formatively to support professional learning and student achievement” (Nelson and Campbell, 2017, p. 129).[e]

This includes the expectation that students will be able to use their theoretical knowledge, gained through their academic studies, including research in the field, and translate this knowledge into practical applications in the real world[f][g][h][i][j][k][l][m][n][o]. There are continued efforts to recognise these Research to Practice (RtP) endeavours; for example, the Journal of Safety, Health and Environmental Research in 2012 devoted an issue to ‘Bridging the Gap Between Academia and the Safety, Health and Environmental (SH&E) Practitioner’. The issue demonstrated “the vital role of transferring SH&E knowledge and interventions into highly effective prevention practices for improving worker safety and health” (Choi et al., 2012, p. 1). In that issue, Chen et al. (2012, p. 27) found that the ‘Singapore Workplace Safety and Health Research Agenda: Research-to-Practice’ prioritizes, first, organisational and business aspects of workplace health and safety (WHS) and, second, WHS risks and solutions.

Other researchers in that same issue (Loushine, 2012, p. 19) examined ‘The Importance of Scientific Training for Authors of Occupational Safety Publications’ and found that there needs to be “attention on the coordination of research and publication efforts between practitioners and academics/researchers to validate and advance the safety field body of knowledge” (p. 19).

Shields (1998) introduced the notion of ‘classical pragmatism’ as a way to address the academic/practitioner divide in the public administration space. She also notes that the pure EIP approach often suffers from a lack of congruence between practitioner needs and research[p][q] (Shields, 2006). She identifies theory as a source of tension. Practitioners often see theory as an academic concern divorced from problems faced in their professional world. Here, pragmatism bridges theory and practice because theory[r] is considered a “tool of practice” which can strengthen student/practitioner skills and make academic work (process and products) stand up to the light of practice (Shields, 2006, p. 3). The pragmatist philosopher John Dewey used a map metaphor to describe the role of theory: a map is not reality, but it is judged by its ability to help a traveller reach their chosen destination[s] (Dewey, 1938).

This perspective is often demonstrated in the student’s capstone, empirical research project. Using a problematic situation as a starting point, they introduce literature, experience and informed conceptual frameworks as theoretical tools that help align all aspects of a research process (research purpose, question, related literature, method and statistical technique). Thus, student/practitioners/researchers, led by a practical problem, can develop or find a theory by drawing on diverse (pluralistic) literature as well as their experience with the problematic situation. This provisional theory guides choice of methodology, variable measurement, data collection and analysis, which is subsequently shared (participatory) and evaluated. Practical problems are therefore addressed by the student’s conceptual framework, which is considered a tool related to the problem under investigation. This approach thus emphasizes the connective function of theory (Shields, 2006). This pragmatic framework has provided a bridge between theory and practice and has been successfully applied to higher education more broadly (Bachkirova et al., 2017; El-Hani and Mortimer, 2007). Texas State University has embedded a pragmatism-informed research methodology in its Master of Public Administration program, with success measured in student awards, citations and recognition in policy-related publications (Shields et al., 2012).

Therefore, it is proposed that safety science is a discipline which would, and should, also benefit from alignment with philosophical pragmatism. This would represent a much wider stance and a shift away from viewing safety science education through merely an EIP lens, where the main consideration for teaching practice is that students are presented with research which provides them the required ‘scientific evidence’, and that the teaching of this research is enough to inform their practice of the discipline[t][u][v] (Hargreaves, 1996, 1997). It should be noted that pragmatism does not abandon evidence; rather, it contextualizes it in a problematic situation.

6. The significance of pragmatism as a teaching philosophy

For pragmatism to penetrate the safety science education field, it needs to be relatively easy to apply and transmit. Fortunately, Brendel (2006) has developed a simple four Ps framework, which captures pragmatism’s basic tenets and can easily be applied to teaching (Bruce, 2010). The four Ps of pragmatism include the notions that education needs to be Practical (scientific inquiry should incorporate practical problem solving), Pluralistic (the study of phenomena should be multi- and inter-disciplinary), Participatory (learning includes diverse perspectives of multiple stakeholders) and Provisional (experience is advanced by flexibility, exploration and revision), as shown in Fig. 2.

The majority of safety science students simultaneously study and work in agencies or organisations as safety professionals. Hence, they appreciate the pragmatic teaching approach, whereby teacher, student and external stakeholders influence learning by incorporating multiple perspectives. When teaching is filtered through a pragmatic philosophical lens, students’ learning is framed by their key domain area of interest as well as their professional context and work experience[w][x][y][z][aa][ab]. It encourages them to ‘try on’ their work as experiential[ac][ad][ae] learning, which they can take into and out of the classroom. Flexibility, integration, reflection and critical thinking are nurtured. Pragmatism and the four Ps can facilitate this process.

Ideally, the classroom environment incorporates communities of inquiry where students and teachers work on practical problems applicable to the health and safety domain. The pluralistic, expansive community of inquiry concept incorporates participatory links to the wider public, including industry and workers (Shields, 2003b). The community of inquiry also encourages ongoing experimentation (provisional). The ‘practical problem’ and ‘theory as tool’ orientation provides opportunities to bridge the sometimes rigid dualisms between theory and practice. This teaching lens also incorporates a spirit of critical optimism, which leads to a commitment by the teacher[af] and the higher education institution to continually experiment and work to improve the content delivery and student learning experience (Shields, 2003a).

Pragmatism emphasizes classroom environments which foster transformations in thinking, and these transformations can often be observed in the quality of students’ final research projects (Shields and Rangarajan, 2013). Most students graduating from postgraduate degrees in the safety sciences are required to produce a major piece of work (thesis) with broad practical value. Ideally, they grow and develop useful skills from the learning experience, and the thesis is useful to their employer/organization and has applicability to the wider community in which they work as safety professionals.[ag][ah][ai][aj]

6.1. Pragmatic learning – student success – enhancement to practice

Higher education safety science pedagogy should be embedded in the notion that most of the students who attend come with some depth of practical experience and practical wisdom, and that the academe should treat them as lifelong learners and researchers[ak]. The academe should provide them with tools and skills to be stronger lifelong learners equipped to contribute to safety science practice[al][am][an][ao][ap][aq].

The universities with which the authors of this article are affiliated use pragmatism as a multi/trans-disciplinary approach in order to bridge the gap between academic theory (research) and practice. Whilst two of these universities teach safety science, the third places pragmatism in the public administration domain and has for many years successfully incorporated the use of pragmatism to bridge the gap between academia and practice (Shields and Tajalli, 2006; Foy, 2019).

The value of using pragmatism as a teaching philosophy to bridge this gap has been successfully demonstrated. A sample of student feedback on learning under a pragmatist philosophy of teaching is given below:

Having been a railway man for over thirty years I recognised that a gap needed to be closed in my academic knowledge to advance further in the business and wider industry and the Safety Science courses have provided the vehicle for this to occur. Importantly I have been able to link the learning in these courses and the assignments directly to the activities of my rail organisation. That’s a big selling point in today’s business world.

(Safety Science Student, Phil O’Connell)

In 2014, I was promoted to Administrative Division Chief of Safety. On several occasions, I found myself utilizing the skills I learned to help evaluate and improve issues and programs in my fire department. In particular I was able to [ar]use case study research to show that our Safety Division was understaffed. As a result, I successfully increased our numbers of Safety Officers from 5 to 26. I have also used the same techniques to improve our department’s PPE and cancer prevention programs. The greatest challenge, however, came when we had 100 firefighters exposed to a potentially massive amount of asbestos during a major high-rise fire. Our department had never dealt with an exposure of its magnitude. I was able to help our department solve a very difficult problem concerning asbestos and its effect on our PPE. I even received calls from other fire departments who were interested in our method.

(Public Administration Student – Brian O’Neill)

These students have gone on to have their research cited and widely acknowledged (O’Connell et al., 2016; O’Connell et al., 2018; O’Neill, 2008), as have many other students taught under this pragmatic philosophy for learning and teaching.[as]

6.2. Pragmatic learning – student success – theoretical advancement

Whilst the embedding of pragmatism as a teaching philosophy is relatively new for Australian universities teaching in the safety science space, it is well entrenched within the public administration programs at Texas State University. Approximately 60 percent of students in this program work full time in state, local, federal or non-profit organizations. [at][au]Their capstone papers focus on the practical problems of public policy, public administration and nonprofit administration. [av][aw][ax][ay][az][ba]Problems with “disorganised graduate capstone papers with weak literature reviews” (Shields and Rangarajan, 2013, p. 3) pushed the faculty to adopt pragmatism as a teaching framework. This approach enhanced students’ Applied Research Projects (ARPs), which have demonstrated remarkable industry, field and community impact (Shields, 1998). [bb]For example, five of the papers won first place in the United States among schools of public affairs and administration. A content analysis of the Texas State University applied research papers (ARPs) revealed that “most of these ARPs are methodical inquiries into problems encountered by practitioners at the workplace. Hence a dynamic interplay of practitioner experience informs public administration research, and rigorous research informs practitioner response to administration/management problems” (Shields et al., 2012, pp. 176–177).

(FYI, paragraph cut)

7. Conclusion

Higher education teachers who have used pragmatism as their teaching philosophy for some time have led the way for interest in pragmatism as a teaching philosophy to spread and gain momentum in other domains. However, despite this, and despite publications which endorse the use of pragmatism, there still appears to be little understanding of the benefits of, and rationale for, pragmatism being used as a teaching philosophy over other more established and entrenched research-focused philosophies. [bc]Therefore, this paper has tried to distil both an understanding of what pragmatism represents and the ‘how and why’ of its broader use, particularly for safety science educators.

Pragmatism goes beyond what is offered by the more singular notion of evidence-informed practice, especially within safety science higher education programs. Its value in other domains is well established, particularly where more problem-focused and practically applied approaches are required.[bd] Further, significant positive results in students’ research outputs from having a pragmatic research [be]framework are now well demonstrated. Where student work can be used to inform decision making, policy making and problem solving that impacts wider inquiry, its value stands out, as already evidenced in both the public administration space and the safety science space.[bf]

In relation to the safety sciences, the higher education pedagogist can be confident that the path to pragmatism is a well-worn one, even if it may be unfamiliar to the discipline. It is recommended that teaching practices be extended past only valuing the evidence-informed practice stance, to reduce the theory-to-practice divide. This can be done by incorporating a broader philosophical (four Ps) pragmatic perspective in order to develop a professional practice community of safety science problem solvers.

Therefore, embracing pragmatism as a teaching philosophy is encouraged in the higher education sector,[bg][bh] and recent acknowledgment and acceptance of this teaching philosophy stance have instilled greater confidence in its recognition and credibility for wider use. Safety science educators can be proud that its adoption as a teaching philosophy is a long-awaited natural development instigated by the early pragmatist forebears who worked in the safety field.

[a]Is safety a science? I can see arguments that it is, but I can also see arguments that it is a cultural education about community expectations for workplace decision-making. (There are many different “communities” potentially included in this concept.)

[b]Would you include constructivism as a different paradigm?

[c]I see interpretivism and constructivism as very similar. The methods literature often treats them as basically the same. In many ways it depends on whether the problem is approached inductively or deductively. Constructivism is often associated with inductive exploratory research.

[d]I wonder if sometimes there is insufficiency of reflection to make constructivism too close to interpretivism.

[e]EIP or EBP (Evidence-Based Practice has become much more popular in general STEM education in the past 5-10 years, especially as part of the DBER (Discipline-Based Education Research) set of practices.

[f]Previous TA training I received stressed the importance of applying lecture content to new problems to help students learn and retain knowledge. I think that’s a strong benefit of pragmatism.

[g]Again, I’m wondering a bit of the distinction between this and constructivism?

[h]I find that there are many missed opportunities in lecture courses and textbooks to really connect students to the safety aspects of the chemicals being described. For example, with the number of times HF is used as an example of textbook problems, it would be nice to include something about how incredibly hazardous it is to work with!

[i]Just today in an honors general chemistry course we talked about the hazards of perchlorate salts. I was surprised that the textbook was using it as a regular example, along with perchloric acid, without a hint of a discussion about safety…

[j]I believe this can also be applied to “less hazardous” compounds. There is, in my opinion, a huge disconnect between the overall properties of a compound and its hazardous nature. For example, ethyl acetate is commonly used and not extremely hazardous, but just this week I had multiple students ask why they needed to work with it in a hood rather than on their open benchtop.

[k]One of the learning opportunities in pragmatic safety science is uncovering hidden assumptions in standard practices. My “hazmat chemist’s” instincts are very different from the “research chemist’s” instincts about the same chemicals. It takes a lot of practice to go into a conversation about these chemicals with an open mind. (This has come up this week with a clean-out of a research lab and very different perceptions of the value and hazards of specific chemical containers.)

[l]It would be really cool to see an organic textbook, for example, that has inset sections on the safety considerations of different reactions. My O-chem professor would sometimes highlight reactions that were good on paper and problematic in reality, but it should be a more frequent discussion.

[m]This is something that gets addressed in our organic labs actually. They “design” their own experiment. They’re given a number of chemicals in a list (some are controlled substances, some are very expensive) and are asked to choose which ones they would like to use for their experiment. We then use their choice from groups to go over both safety aspects and expense aspects and how we can then still do our experiment with other chemicals.

[n]That is a great exercise. I especially like how practical and open-ended it is.

[o]Overall, in an organic chemistry course practical knowledge of synthesis is mostly untouched, as many of the classic reactions used to teach the course are fairly complex experimentally. I.e., Sandmeyer reactions are conveniently simple to explain but harder to accomplish in person.

[p]This seems to be an issue across many fields. Oftentimes we see that those performing the practice and those performing the research speak different languages and consider very different things important.

[q]I see this a lot in experimental and computational work. Different languages, different skill sets, and different approaches

[r]Safety science also has this issue internally. There is an interesting paper that was covered in a podcast awhile ago about “reality-based safety science”: https://safetyofwork.com/episodes/ep20-what-is-reality-based-safety-science

[s]I’m thinking about an analogy with computational and experimental chemistry also. I like the “tools of practice” bridge.

[t]How would this compare to case studies?

[u]I would imagine that for Case Studies to become research that someone would have to gather case studies and look for trends. I see Case Studies as an opportunity to share one experience or one set of experiences with the community in the hopes that with enough Case Studies a meaningful research study could be conducted.

[v]Case studies are definitely included in this.  Scientific evidence here would mean that the evidence was collected with a scientific attitude. There is no belief that actual objectivity is possible but something close should be strived for.

[w]Allowing students to pursue their interests is always a benefit while learning. It’s been a struggle to organize researcher safety meetings in a way that engages participants by allowing them to follow their interests, especially with virtual meetings. Has anyone found strategies that facilitate that interest and engagement?

[x]Something I had started to explore just before the lockdown was to try to set up opportunities for grad students to discuss the risk assessments around their own project work. In this way, they could show off their expertise while helping to educate others – and possibly reveal some things that they hadn’t thought about or didn’t know. I really liked how Texas A&M did their Table Discussions in which they invited students who had something in common (i.e. all those who use gloveboxes), presented a Safety Moment about them, then invited students to share their own stories, strategies, and concerns with one another about glovebox usage.

[y]We started doing round tables that would discuss safety topics within their own focus area (inorganic, organic, physicals/atmospheric), similar to what Jessica mentioned with gloveboxes and that’s gotten a lot response and interest.

[z]Those sound like great ideas. We already have our department research groups divided into hazard classes, so it would be easy to have them meet in those groups. Thanks for the suggestions. I also like the idea of participants presenting to each other instead of a lecture-style event.

[aa]I like this a lot. Is there much faculty involvement?

[ab]We don’t get as much faculty involvement due to their busy schedules. But we have had safety panels with faculty with different safety specialties such as lasers, Schlenk lines, compressed gases, physical hazards, etc.

[ac]Is pragmatism a bridge between theoretical and experiential learning?

[ad]I believe that it is most useful when the bridge runs both ways

[ae]Excellent point. One should inform the other.

[af]Action research is certainly on the continuum of research that can be informed by pragmatism. The pluralism of pragmatism comes to play here.

[ag]Hopefully within the safety sciences this aspiration is realized more often than in other disciplines. Too many times, theses and dissertations get lost in the archives and go unread.

[ah]Again, this is something that makes me think of the ideas behind Action Research. Since it is a research method by which the researcher questions their own practice, the thesis that ultimately comes of it could potentially be of interest to their own employers or teams (even if no one else reads it).

[ai]Safety research tends to be somewhat more read because it is often driven by the need to support a risk decision. But as Covid has shown, this may not improve the quality of the scientific literature that is being read. The rush to publish (with no or small amounts of data) has really slowed the understanding of best safety practices.

[aj]I see what you mean, Jessica. Even if the actual manuscript is not disseminated, a researcher self-evaluating their own practice can definitely serve as a self-check where one can see places to improve.

[ak]How would you say the idea of “pragmatism” relates, if at all, to the concept of Action Research?

[al]Would a pragmatic point of view work in beginner safety courses?

[am]I think that the “citizen scientist” movement is an attempt at a pragmatic approach to purer environmental sciences, but I’m not convinced that these kinds of projects improve science literacy. They seem to stop at the crowd-sourced data collection phase, and then the professionals interpret the data for the collectors.

[an]This goes back to the expert/novice question. Would a pragmatic approach work for both? I can see the advantage in graduate/postgraduate education. I’m wondering if the knowledge base is broad enough for beginners?

[ao]I agree. You don’t know what you don’t know.

[ap]It is also very frustrating for the beginner to put in a lot of effort collecting data and then be told that the data is fatally flawed for an obscure reason.

[aq]Pragmatism would call on the expert to listen carefully to the novice particularly if the novice is in the world of practice. This is where the participatory nature of pragmatism comes in. Both should have a voice.

[ar]Brian specifically mentions case study as a method he used.

[as]I actually think a manager’s need to solve problems like safety issues at work could be looked at as a mini “applied” case study. The context of the problem shapes the parameters of the case.

[at]Do the students who are not working full time have a good sense of applications? And does it make them feel better prepared for common workplace problems?

[au]I would think that even if they didn’t work full time, they could still pick some sort of problem in the public domain to really seriously do a lot of research on. If nothing else, it could give them a sense of why the problem is so intractable.

[av]My sister was involved in one of these programs after 15 years of experience, and she said that the content was marginally interesting, but being able to network with fellow professionals was quite valuable, both for the stories and solutions they shared and for future follow-up questions. That seems like quite a pragmatic aspect of this program.

[aw]I would think that the networking would be part of the purpose – and this is really true for any research program as well. You basically find that small group of people who are really interested in the same problems in which you are interested so that you can all swap stories, publications, and ideas in order to drive all research forward.

[ax]I agree – I think that academic leadership sees this opportunity more clearly than faculty members who are assigned 10 or 15 grad students to mentor at once, though. ACS is providing some education around this opportunity for new faculty, but it’s a challenge to incorporate mentoring skills along with teaching, research and service duties faculty are handed

[ay]This is why the Community of Inquiry is so important. Community comes first.  I actually have an article on the community of inquiry if anyone is interested.

[az]Reframing things as a community of scholars is very powerful.

[ba]I’d be glad to include any references that you think would be helpful on the web page for this discussion if you would like to share them. We get about 100 views of these pages after they go up, so the impact is not limited to the attendees at a particular session

[bb]How long are the courses? One semester? I often find it difficult for students to finish an in-depth lit review in that time frame.

[bc]This link might also be useful.  https://link.springer.com/article/10.1007/s11135-020-01072-9  It deals with deductive exploratory research and covers many of these themes.

[bd]I very much appreciate the use of this pedagogy as it applies to practical content!

[be]I believe the students that give their courses a good faith effort come away with tools to apply to their work.  We look at the research project as a project management challenge and apply project management ideas throughout. This is sometimes the most important lesson, particularly for the pre-service students.

[bf]This is a very important idea. When I pursued my 1st bachelors, in political science, I was incredibly disappointed to find how much research and practice diverged.

[bg]There is an important distinction here between undergrad and graduate students in higher ed. Traditional undergrads tend to be learning more practical skills outside of the curriculum. I wonder what the experience of non-traditional and community college students is in this regard?

[bh]It does seem like this approach lends itself very well to settings where previous or current experience is required.

“Safe fieldwork strategies for at-risk individuals, their supervisors and institutions” and “Trauma and Design”

CHAS Journal Club Nov 10, 2021

On November 10, the CHAS Journal Club discussed two articles related to social safety considerations in research environments. The discussion was led by Anthony Appleton, of Colorado State University. Anthony’s slides are below, and the comments from the table read of the two articles can be found after them.

Table Read for The Art & State of Safety Journal Club

Excerpts from “Safe fieldwork strategies for at-risk individuals, their supervisors and institutions” and “Trauma and Design”

Full articles can be found here:

Safe fieldwork strategies: https://www.nature.com/articles/s41559-020-01328-5.pdf

Trauma and Design: https://medium.com/surviving-ideo/trauma-and-design-62838cc14e94

Safe fieldwork strategies for at-risk individuals, their supervisors and institutions

Everyone deserves to conduct fieldwork[a] as safely as possible; yet not all fieldworkers face the same risks going into the field. At-risk individuals include minority identities of the following: race/ethnicity, sexual orientation, disability, gender identity and/or religion. When individuals from these backgrounds enter unfamiliar communities in the course of fieldwork[b][c][d], they may be placed in an uncomfortable and potentially unsafe ‘othered’ position, and prejudice may manifest against them. Both immediately and over the long term, prejudice-driven conflict can threaten a researcher’s physical health and safety[e], up to and including their life. Additionally, such situations impact mental health, productivity and professional development.

The risk to a diverse scientific community

Given the value of a diverse scientific community[f], the increased risk to certain populations in the field — and the actions needed to protect such individuals — must be addressed by the entire scientific community if we are to build and retain diversity in disciplines that require fieldwork. While many field-based disciplines are aware of the lack of diversity in their cohorts, there may be less awareness of the fact that the career advancement of minoritized researchers can be stunted or permanently derailed[g] after a negative experience during fieldwork.

Defining and assessing risk

Fieldwork in certain geographic areas and/or working alone has led many researchers to feel uncomfortable, frightened and/or threatened by local community members and/or their scientific colleagues. Local community members may use individuals’ identities as a biased marker of danger to the community, putting them at risk from law enforcement and vigilante behaviours. Researchers’ feelings of discomfort in the field have been reaffirmed by the murders of Black, Indigenous and people of colour including Emmett Till, Tamir Rice, Ahmaud Arbery and Breonna Taylor; however, fieldwork also presents increased risk for individuals in other demographics. For example, researchers who wear clothing denoting a minority religion or those whose gender identity, disability and/or sexual orientation are made visible can be at increased risk when conducting fieldwork. Several studies have documented the high incidence of harassment or misconduct that occurs in the field. Based on lived experience, many at-risk individuals already consider how they will handle harassment or misconduct before they ever get into the field, but this is a burden that must be shared[h][i] by their lab, departments and institutions[j] as well. Labs, departments and institutions must address such risks by informing future fieldworkers of potential risks and discussing these with them, as well as making available resources and protocols for filing complaints and accessing[k][l][m] training well before the risk presents itself.

Conversations aimed at discussing potential risks rarely occur between researchers and their supervisors, especially in situations where supervisors may not be aware of the risk posed[n] or understand the considerable impact[o] of these threats on the researcher, their productivity and their professional development. Quoted from Barker[p][q]: “…faculty members of majority groups (such as White faculty in predominantly White institutions (PWI)) may not have an understanding of the ‘educational and non-academic experiences’ of ethnic minority graduate students or lack ‘experience in working in diverse contexts’.” This extends to any supervisor who does not share identity(ies) with those whom they supervise, and who would need to receive specific training on this subject matter in order to be aware of these potential risks.

Dispatches from the field

The following are examples of situations that at-risk researchers have experienced in the field: police have been called on them; a gun has been pulled on them[r][s][t][u] (by law enforcement and/or local community members); hate symbols have been displayed at or near the field site; the field site is an area with a history of hate crimes against their identity (including ‘sundown towns’, in which all-white communities physically, or through threats of extreme violence, forced people of colour out of town by sundown); available housing has historically problematic connotations (for example, a former plantation where people were enslaved); service has been refused (for example, food or housing); slurs have been used or researchers verbally abused due to misunderstandings about a disability; undue monitoring or stalking by unknown and potentially aggressive individuals; sexual harassment and/or assault occurred. Such traumatic situations are a routine expectation in the lives of at-risk researchers. The chance of these scenarios arising is exacerbated in field settings where researchers are alone[v][w], in an unfamiliar area with little-to-no institutional or peer support, or are with research team members who may be uninformed, unaware or not trusted. In these situations, many at-risk researchers actively modify their behaviour in an attempt to avoid the kinds of situations described above. However, doing so is mentally draining, with clear downstream effects on their ability to conduct research.[x][y][z]

Mitigating risk[aa][ab][ac]

The isolating and severe burden of fieldwork risks to minoritized individuals means that supervisors[ad] bear a responsibility to educate themselves[ae] on the differential risks posed to their students and junior colleagues in the field. When learning of risks and the realized potential for negative experiences in the field, the supervisor should work with at-risk researchers to develop strategies and practices for mitigation in ongoing and future research environments.[af] Designing best practices for safety in the field for at-risk researchers will inform all team members and supervisors of ways to promote safe research, maximize productivity and engender a more inclusive culture in their community. This means asking[ag][ah][ai][aj][ak][al][am] who is at heightened risk, including but not limited to those expressing visible signs of their race/ethnicity, disability, sexual orientation, gender identity/expression (for example, femme-identifying, transgender, non-binary) and/or religion (for example, Jewish, Muslim and Sikh[an][ao]). Importantly, the condition of being ‘at-risk’ is fluid with respect to fieldwork and extends to any identity that is viewed as different[ap] from the local community in which the research is being conducted. In some cases, fieldwork presents a situation where a majority identity at their home institution can be the minority identity at the field site, whether nearby or international. Supervisors,  colleagues and students must also interrogate where and when risk is likely to occur: an individual could be at-risk whenever someone perceives them as different in the location where they conduct research. Given the variety of places that at-risk situations can occur, both at home, in country or abroad, researchers and supervisors must work under the expectation that prejudice can arise in any situation.[aq]

Strategies for researchers, supervisors, and institutions to minimize risk

Here we provide a list of actions to minimize risk and danger[ar][as] while in the field, compiled from researchers, supervisors and institutional authorities from numerous affiliations. These strategies are used to augment basic safety best practices. Furthermore, the actions can be used in concert with each other and are flexible with regard to the field site and the risk level to the researcher. These strategies are not comprehensive; rather, they can be tailored to a researcher’s situation.

We acknowledge that it is an unfair burden that at-risk populations[at] must take additional precautions to protect themselves. We therefore encourage supervisors, departments and institutions to collectively work to minimize these harms by: (1) meeting with all trainees to discuss these guidelines, and maintaining the accessibility of these guidelines (Box 1) and additional resources (Table 1); (2) fostering a department-wide discussion on safety during fieldwork for all researchers; (3) urging supervisors to create and integrate contextualized safety guidelines for researchers in lab, departmental and institutional resources.

A hold harmless recommendation for all

Topics related to identity are inherently difficult to broach, and may involve serious legal components. For example, many supervisors have been trained to avoid references to a researcher’s identity and to ensure that all researchers they supervise are treated equally regardless of their identities.[au] Many institutions codify this practice in ways that conflict with the goals outlined in the previous sentence, as engaging in dialogue with at-risk individuals is viewed as a form of targeting or negative bias. In a perfect world, all individuals would be aware of these risks and take appropriate actions to mitigate them and support individuals at heightened risk. In reality, these topics will likely often arise just as an at-risk individual is preparing to engage in fieldwork, or even during the course of fieldwork. We therefore strongly encourage all relevant individuals and institutions to ‘hold harmless’ any good-faith effort to use this document as a framework for engaging in a dialogue about these core issues of safety and inclusion. Specifically, we recommend that it should never be considered a form of bias or discrimination for a supervisor to offer a discussion on these topics to any individual that they supervise[av][aw][ax][ay]. The researcher or supervisee receiving that offer should have the full discretion and agency to pursue it further, or not. Simply sharing this document[az] is one potential means to make such an offer in a supportive and non-coercive way, and aligns with the goals we have outlined towards making fieldwork safe, equitable and fruitful for all.

Trauma and Design

1. Validating your experience. It’s important to know that workplace trauma can be destabilizing, demoralizing, and dehumanizing. And when it happens in a design-centric organization where there are sometimes shallowly professed[ba] values of human-centeredness, empathy, and the myth of bringing your full, authentic self to work, it can leave you spinning in a dizzying state of cognitive dissonance and moral injury.

A common side effect of workplace abuse is invalidation, which is defined as “the rejection or dismissal of your thoughts, feelings, emotions, and behaviors as being valid and understandable.” Invalidation can cause significant damage or upset to your psychological health and well-being. What’s worse, the ripple effects of these layers of dismissal are traumatic, often happen in isolation, and may lead to passive or more overt forms of workplace and institutional betrayal. If this is (or has been) your experience, it’s important to know that (1) you are not alone and (2) your experience is valid and real.[bb]

2. Seeking safety. Workplace-induced emotional trauma is very real and, unfortunately, on the rise. The research is also clear: continuous exposure to trauma can hurt our bodies and lead to debilitating levels of burnout, anxiety, depression, traumatic stress, and a host of other health issues. Episodic and patterned experiences like micro- and macro-aggressions, bullying, gaslighting, betrayal, manipulation, and other forms of organizational abuse can have both immediate and lasting psychological and physiological effects. So, what can we do?[bc]

• To go to HR and management or not? There is a natural inclination to document and report workplace abuse and to then work within the HR structures that are in place where we work. But many profit- and productivity-driven workplaces are remarkably inept at putting employees (the primary human resource) first[bd][be][bf][bg]. The nauseating effects of this can lead to deeply entrenched incompetent or avoidant behaviors by the very people who we expect to listen to and support us (read: HR). Even with this said, there is value in documenting events as they occur so that you can remember the details and not forget the context later. You may also have a situation so egregious or blatantly illegal that documentation will be necessary.

• Do you need accommodations? Employees need to be cared for in ways that our leaders don’t always recognize, nor value. Workplace trauma, as well as current and past trauma, can get exacerbated resulting in impairing symptoms or a legally protected disability accommodation. Sometimes seeking out accommodations as part of the process can hold your immediate supervisor accountable (as well as their respective leadership chain) to meet your needs. The Job Accommodation Network (JAN) is a source of free, expert, and confidential guidance on job accommodations and disability employment issues. JAN provides free one-on-one practical guidance and technical assistance on job accommodation solutions.

• Do not “manage up.” Many of the avenues that HR systems afford us can lead to empty promises and give us a sense of helplessness and hopelessness. As a result, the harm done can lead to a retraumatization of what you’ve already been enduring. Additionally, as a social worker, it would be disingenuous and unethical for me to suggest that you find ways to “manage up” or gray rock[bh] so that you can temporarily minimize the effects[bi][bj][bk]. Managing up is a popular narrative[bl][bm][bn][bo][bp] that, I believe, just perpetuates how we deal with cycles and patterns of abuse — be it in the workplace or elsewhere. And gray rocking, which can be quite effective to get through in the interim, is not a healthy, long-term solution to what you are enduring.

• Where should I turn? Let me be honest: many HR programs are ill-equipped, lack the knowledge, or are simply unwilling to hold perpetrators of workplace abuse accountable. If this is not the case where you work, congratulations! But if the aforementioned is familiar, it is crucial to practice self-compassion and self-trust and to seek reassurance and psychological and cultural safety with trusted friends or colleagues. Let them know that you may not want or need their advice, but rather their trust and confidence in listening to and witnessing your story.[bq][br] Can this friend or colleague help you assess the risks of staying? You may find it empowering to think this through with them and to also write about it. Writing into the wound[bs], as Roxane Gay has said, may also be a helpful, therapeutic exercise with a licensed professional or in community with others who witness, trust, and respect you. Please remember: your friends and colleagues are just that — friends and colleagues. Sometimes situations are more serious and complex and should be referred to someone who has the cultural and clinical training to help you address the layers of complexity. There may be times when your unresolved trauma, prolonged grief, or more serious and long-lasting symptoms of mental health concerns need to be processed with a licensed professional (more on working with a clinician below).

• Creating your freedom map. There are times when you have exhausted your options; you’ve played by the rules set forth and are caught in a never-ending wait and see. Your current reality might be that leaving is simply not an option. You might be the only paid worker in your family or perhaps you need the insurance or the job market might be too volatile. These are all valid reasons for choosing to not leave — or to not leave just yet. However, if leaving feels scary for other reasons (fear of failing, worried about what people will think, concerned about damage to your professional reputation), consider this: are you willing and able to test the limits of what your body can endure? Sometimes leaving — a radical act in and of itself — is the best option for your health, well-being, and future work. If you’re at this stage, I strongly suggest devising a plan of action for leaving and mapping out your escape plan. Some of the questions to consider might be: When will I leave? What do I need in order to leave? What do I want to do next? How can I take care of myself now and in the future? Who can I rely on as part of my support system? Spelling this out and naming what you need in your freedom map will give you power.

3. Healing in community and finding and talking to a mental health professional. There are enormous benefits to healing with others and working with a licensed clinical mental health professional (i.e., clinician, therapist, psychotherapist, counselor, etc). Therapy can provide a safe space to share and understand the interconnected dots of what you’re going through. Sometimes trauma in the workplace can trigger unresolved childhood traumas and other struggles that we, as a society, have been conditioned to either just deal with or suppress. Have grit! Be resilient! It’s not that bad! [bt][bu][bv][bw]These are white supremacy and productivity narratives that infuriate me. If it were that easy, you wouldn’t still be reading. What’s more, the power of community healing is found in the validation, empowerment, and organizing to challenge fear-based work culture — not to just learn to cope with dysfunction[bx][by]. If you are new to therapy or revisiting it after having a break from it, consider this part of the overall commitment to yourself for lifelong healing and recovery. There is a growing amount of culturally responsive therapy options — many of which did not exist even a few years ago. Below are just a few resources for finding an inclusive, culturally responsive therapist.

5. Learn and understand the language of trauma and what it means to be trauma-informed — especially in the context of design. There is a literacy around trauma[bz][ca] that is missing in our organizations, in ourselves, and in our design work.[cb][cc][cd] Now more than ever, we need to be at least trauma-informed so that we can lead and work within trauma-responsive teams and organizations. Responding to this need is one of the reasons why I started Social Workers Who Design. My own practice and research are committed to being trauma-informed and becoming trauma-responsive in design.

[a]Fieldwork certainly broadens both the types of risks and their severity compared to lab work. There were four work-related employee deaths at UVM during the 25 years I worked there – 3 were field events and 1 was lab related.

[b]It seems like this would still apply broadly to conferences, workshops, collaborations, and other work-related travel for STEM students

[c]Agreed!

[d]ACS has raised this concern for its meetings; I’m sure it’s not being done in a proactive fashion…

[e]certainly a consideration that is applicable to research well beyond fieldwork!

[f]I think that the 21st Century will need to find ways to move beyond colonial science which is part of the reason this article needed to be written.

[g]Or that those researchers will avoid fieldwork that could benefit from their participation

[h]By sharing this burden we can also develop a wider range of ways to minimize these risks.

[i]Even writing and talking about the burden can help make it seen and believed. Probably not quantitated though, which is what modern management wants to do to address complex issues.

[j]This is another challenge in the academic environment, where the research topic and methods are often driven by the investigator rather than their supervisor. Many times we in EHS hear of the field work being conducted only after it is underway and a regulatory question arises.

[k]Interesting choice to say “accessing” rather than “requiring”. On the other hand, can cultural literacy be trained or does it have to be lived?

[l]I think the idea behind the training is to get people to practice thinking differently about situations – an opportunity to simulate this conversation before it needs to be had in real life.

[m]Yes, that’s a more valuable approach to training than an information transfer approach (i.e., women in Afghanistan have to be extra careful this year).

[n]Recognize hazards

[o]Assess risks

[p]Intersectionality is a key element here – even if within ethnic groups, male and female perceptions of risks can be quite different in valid ways.

[q]Yes! Intersectionality is VERY important.

[r]This was one of the deaths at UVM – an anthropologist in Brazil was shot dead by a local.

[s]When I did fieldwork for the USDA, I got a very quick rundown of things from my lab manager. Basically, make sure that your USDA badge is easily accessible just in case anyone questions your presence. However, working for the govt may also be reason for someone to be hostile to you, so basically if a local challenges you, just pack up and leave. Their biggest challenge in the region was being shot at by anti-govt folks.

[t]Good point. We need to learn de-escalation approaches that don’t undermine the point we are trying to make.

[u]Is this in the US or overseas?

[v]Is working alone a methodological advantage in some research settings?

[w]Depending on what you are doing, too much noise can be a problem.

[x]I have heard this from some of my students.

[y]I think about this whenever I hear the “don’t work alone” or “don’t work odd hours” policies. I’ve known many people who did these 2 things in order to avoid someone they have had bad interactions with – including in my own graduate lab.

[z]I wonder how much of those “unhealthy attitudes and behaviors” seen in academia are due to some of these situations; trying to avoid certain people or situations.

[aa]This whole section talks basically about a risk assessment that should be done before the fieldwork begins. I wonder if there are any regulatory bodies that have authority or provide guidelines for doing this. Or, if universities have policies on travel for work, etc. Otherwise, the concern is that if it’s just “you should do this” and not enforced anywhere, people won’t do it, which is maybe why this is a problem in the first place.

[ab]The only “regulatory body” with international jurisdiction is the Dept of State, which tracks political volatility but leaves specific activities up to the individual.

[ac]CSU offers a huge array of assistance to our high-risk and/or international travelers. You MUST register; otherwise these benefits may not be available to you.

[ad]Does supervisor = funder in this context? I know that some parts of the Dept of Defense have much stricter safety expectations than academia.

[ae]For my research group meeting we start with a safety moment and diversity moment where we present something from our culture or background that helps educate us all but not necessarily in predicting risks.

[af]Minimize risks and Prepare for emergencies

[ag]Verbally inquiring? I’m not sure how this would be approached…

[ah]Not to sound like too much of an enthusiastic convert, but this is potentially a space in which applying RAMP could be useful – i.e., if you encourage a general conversation about recognizing hazards in different research settings, you can invite the conversation without making certain individuals feel explicitly called out – and give them a space where it is expected/comfortable to bring up hazards that others may not have considered.

[ai]A great way to do this is to simply ask “How can I best support you in your research efforts?”

[aj]It seems like part of this is the PI themselves identifying the potential risks to their students based on obvious available information or differences in the environment, so this sort of risk analysis could be made available to students without them having to come forward explicitly.

[ak]Jessica and Anthony, those are both great ideas.  Maybe implement RAMP and explain it by saying that we are trying to best support them in their research efforts.

[al]Kali, when I initially read this the idea of trying to predict the situations someone might face in an unfamiliar location seemed impossibly daunting, but the way you phrased it makes it much more achievable!

[am]I think the danger of PIs simply doing this for themselves is exactly what is pointed out in the article. One’s own experiences and biases may blind them to hazards that may seem obvious to others. In some situations it could actually be worse to have a PI think they figured it all out without consulting with their research group members because you then have a top-down approach that those it is enforced upon do not feel invited to discuss or challenge.

[an]This reminded me of the paper that was shared in the slack channel on finding PPE solutions while wearing a hijab and PPE/cleanroom issues with long hair which is important to religions such as Sikhism.

[ao]I like that paper because it took the approach of “what can we do to resolve the issue” rather than “well you are the one creating the problem.”

[ap]I suspect that members of the local community who cooperate with an unwelcome researcher are also at risk.

[aq]Does this increased risk arise from researchers asking culturally insensitive questions in a prying way? Not all cultures will share their thought processes with anyone who asks.

[ar]When you manage people as a group versus individuals, one can create a sense of everyone belonging so that there is no “other.”

[as]This is an interesting comment that I believe I am interpreting another way. I have found that being managed “as a group” is part of the problem. Some managers fail to see the ways individuals in their groups are impacted differently by situations or policies – then if that person questions or challenges it, they are seen as “the problem.”

[at]Are there off-setting opportunities that at-risk populations benefit from?

[au]A question of equality versus equity

[av]This definitely depends on the workplace… if you don’t have the support behind you, you could be risking your job by speaking out or asking about an individual’s identity.

[aw]Or you could be risking your job as a supervisor if you assume your risks are the same as everyone else. Trying to increase safety should not come with a sense of fear.

[ax]Tough one to do well. Offering to the group is easier than to an individual; otherwise you do feel targeted.

[ay]I agree with this. It’s hard when it’s only a single student or small group going through. Perhaps there is a procedure to analyze risk you can use with everyone equally.

[az]I think that this is a good opportunity to raise this concern; particularly with subject matter experts from other disciplines who may have experience in the locations being researched.

[ba]Meaning, it is something the institution says it values but doesn’t follow through on that value?

[bb]This is about acknowledging others’ feelings and letting them feel heard.

[bc]One of the best books I’ve ever read is “Difficult Conversations” by Douglas Stone, Bruce Patton, and Sheila Heen. In that book, the authors highlight the central problem that we can see the intentions behind our own actions, and the impacts of others’ actions, but others’ intentions and the impacts of our own actions are often opaque to us. It may take, dare I say it, grit, to initiate a conversation about behaviors that have caused you pain, but I don’t believe there’s any other way to resolve those situations.

[bd]True

[be]My experience is the Human Resources staff are in a silo of their own, separated from the mission of the larger organization. This means that they have a hard time connecting to an evolving workplace

[bf]Since they get all of the hard personnel cases, they tend to manage to avoid the last bad experience rather than to respond to emerging challenges.

[bg]I also think it is important to keep in mind that HR is hired to protect the company – not individuals. It is why I find it awkward when people automatically just say “take it to HR.” HR are trained to be mediators to try to get things to quietly blow over. They aren’t going to charge in to fight your battles for you.

[bh]Putting the definition here because I had to look it up: The grey rock method is a strategy some people use when interacting with manipulative or abusive individuals. It involves becoming as unresponsive as possible to the abusive person’s behavior.

[bi]not quite clear on these meanings in this context

[bj]Gray rock basically would mean being unresponsive to “defuse” the traumatic situation (or person initiating it).

[bk]https://180rule.com/the-gray-rock-method-of-dealing-with-psychopaths/

[bl]Agree that this is a very misused term. The concept of “managing up” really only works if all actors are well-meaning. If someone is being hostile or abusive, telling the victim of the behavior to “manage up” is really just telling them that no one is going to help them.

[bm]In this context, “managing up” can mean finding allies at your level who can help you identify potential support outside your direct reporting line. The caveat here is that my experience is entirely in academia, which has very thin reporting lines with lots of turnover, so working around specific managers is often a less risky approach than it might be in other settings. I have seen it work in some situations and not in others. Your Mileage May Vary.

[bn]To be honest, the definition you used here does not square with the understanding that I have of “managing up.” What you just described is simply looking for other managers (apart from your own) to deal with the situation. “Managing Up” specifically has to do with your relationship with your manager(s).

[bo]I’ve taken one course on “Managing Up”, and the definition I got from it was understanding that your boss is a fallible human being, and that as you get to know them, you should try to interact with them in a way that accommodates that (e.g. following up by phone or in person if you know they’re bad at reading their email).

[bp]I have understood it as being proactive when it comes to solving problems in a way that helps your manager & makes them look good – while also helping yourself. It requires having a clear understanding of what the goals are for both of you – while also recognizing your manager’s strengths and weaknesses relative to your own, and finding a way to work with that. I don’t see anything wrong with that as advice – but the “weaknesses” to be managed here really shouldn’t be outright abusive behaviors that they do not see & work to correct for themselves. You’re an employee – not a punching bag or a therapist.

[bq]I like the suggestion to explicitly mention that a trusting ear and confidence is what is expected instead of advice. I know I appreciate that distinction personally.

[br]As someone who struggles with active listening, and jumps immediately to problem-solving mode, I wholeheartedly agree!

[bs]Particularly if it helps clarify your thoughts, both for yourself and for potential allies

[bt]That’s a sign of a dysfunctional safety culture; it’s just as bad as playing the blame game.

[bu]This is well described in a recent article in JCHAS “Listening to the Well, Listening to Each Other, and Listening to the Silence—New Safety Lessons from Deepwater Horizon” https://pubs.acs.org/doi/10.1021/acs.chas.1c00050

The authors describe the HR aspects of the Deepwater Horizon as part of the safety system that led to that explosion

[bv]These are more terms that I feel have been misused and abused. Having grit to try multiple ways to solve a legitimate challenge is good to encourage; “having grit” to put up with someone being abusive towards you is NOT okay to encourage.

[bw]We have been seeing more and more survivors of professional sports experiences describe their experiences publicly and how grit was used as a rhetorical device to avoid case. It’s encouraging to see these people “come out”

[bx]This is certainly very difficult – “challenging fear-based work culture”

[by]I believe that this is one of the reasons that unions arose internationally and are still valued in many countries. Unfortunately, the regulatory environment of the US has taken this peer support function away from unions in favor of simple economic bargaining. (I just came from a union meeting at lunch time, where this played out in real time.)

[bz]“Trauma” is a huge word to throw around, as it usually means a severe emotional response to a life-threatening event or series of events that are emotionally disturbing. In my experience, some minorities even find using the word “trauma” offensive. How about depressed, tired, and all those other descriptors?

[ca]There is a cost to identifying one’s status as a victim, not just in social standing, but in personal mental health. It takes a lot of bravery to identify one’s trauma publicly. (See the comments about athletes above.)

[cb]Even after looking at the complete article, I’m confused about what design work this refers to. I can take it two ways:

[cc]1. People who design spaces, web sites, projects(?) with social dynamics in mind?

[cd]2. People who work in design firms (i.e. creative thinkers) who find those firms toxic. I have known several people with that experience. Some creative thinkers can be rather deaf to other people’s feedback.

Safety Culture Transformation – The impact of training on explicit and implicit safety attitudes

On October 27, 2021, the CHAS Journal Club heard from the lead author of the paper “Safety culture transformation—The impact of training on explicit and implicit safety attitudes”. The complete open access paper can be found online at this link. The presentation file used by Nicki Marquardt, the presenting author, includes the graphics and statistical data from the paper.

Comments from the Table Read

On October 20, the journal club did a silent table read of an abridged portion of the article. That abridged version, along with readers’ comments, appears below.

1. INTRODUCTION

Safety attitudes of workers and managers have a large impact on safety behavior and performance in many industries (Clarke, 2006, 2010; Ford & Tetrick, 2011, Ricci et al., 2018). They are an integral part of an organizational safety culture[a] and can therefore influence occupational health and safety, organizational reliability, and product safety (Burns et al., 2006; Guldenmund, 2000; Marquardt et al., 2012; Xu et al., 2014).

There are different forms of intervention for safety culture and safety attitude change; trainings are one of them. Safety trainings seek to emphasize the importance of safety behavior and promote appropriate, safety-oriented attitudes among employees[b][c][d][e][f][g][h][i] (Ricci et al., 2016, 2018).

However, research in the field of social cognition has shown that attitudes can be grouped in two different forms: On the one hand, there are conscious and reflective so-called explicit attitudes and on the other hand, there are mainly unconscious implicit attitudes (Greenwald & Banaji, 1995). Although there is an ongoing debate whether implicit attitudes are unconscious or partly unconscious (Berger, 2020; Gawronski et al., 2006), most researchers affirm the existence of these two structurally distinctive attitudes (Greenwald & Nosek, 2009). Traditionally, researchers have studied explicit attitudes of employees by using questionnaires [j](e.g., Cox & Cox, 1991; Rundmo, 2000). However, increasingly more researchers now focus on implicit attitudes that can be assessed with reaction time measures like the Implicit Association Test[k][l] (IAT; Greenwald et al., 1998; Ledesma et al., 2015; Marquardt, 2010; Rydell et al., 2006). These implicit attitudes could provide better insights into what influences safety behavior because they are considered to be tightly linked with key safety indicators. Unlike explicit attitudes, they are considered unalterable by social desirable responses (Burns et al., 2006; Ledesma et al., 2018; Marquardt et al., 2012; Xu et al., 2014). Nevertheless, no empirical research on whether implicit and explicit safety attitudes are affected by training could be found yet. Therefore, the aim of this paper is to investigate the effects that training may have on implicit and explicit safety attitudes. The results could be used to draw implications for the improvement of safety training and safety culture development.
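The IAT mentioned above infers implicit attitudes from reaction-time differences between "compatible" and "incompatible" sorting blocks. As a rough illustration only (the paper does not publish its scoring procedure, and the full conventional algorithm includes additional trial- and participant-level exclusions), here is a minimal sketch of the D-score idea from Greenwald et al. (2003), using entirely hypothetical reaction times:

```python
import statistics

def iat_d_score(compatible_rts, incompatible_rts, max_rt_ms=10_000):
    """Simplified IAT D-score: the difference in mean reaction time
    between incompatible and compatible blocks, divided by the
    standard deviation pooled over all retained trials.

    Trials slower than max_rt_ms are dropped, following the
    conventional scoring algorithm. Real scoring also excludes
    very fast trials and inattentive participants.
    """
    comp = [rt for rt in compatible_rts if rt <= max_rt_ms]
    incomp = [rt for rt in incompatible_rts if rt <= max_rt_ms]
    # The "inclusive" SD is taken over all retained trials from both
    # blocks together, not a classical pooled-variance estimate.
    pooled_sd = statistics.stdev(comp + incomp)
    return (statistics.mean(incomp) - statistics.mean(comp)) / pooled_sd

# Hypothetical data: slower responses in the incompatible block produce
# a positive D-score, read as a stronger implicit association.
compatible = [620, 580, 640, 600, 590, 610]      # ms
incompatible = [750, 820, 780, 760, 800, 790]    # ms
print(round(iat_d_score(compatible, incompatible), 2))  # → 1.86
```

A larger positive D-score here would indicate a stronger implicit preference for the safety-consistent pairing; scores near zero indicate no measurable implicit association.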

1.1 Explicit and implicit attitudes in safety contexts

Explicit attitudes are described as reflective, meaning a person has conscious control over them[m] (Strack & Deutsch, 2004). In their associative–propositional evaluation (APE) model, Gawronski and Bodenhausen (2006) assume that explicit attitudes are based on propositional processes. These consist of evaluations derived from logical conclusions. In addition, explicit attitudes are often influenced by social desirability[n][o][p][q][r] if the topic is sensitive, such as moral issues (Maass et al., 2012; Marquardt, 2010; Van de Mortel, 2008). This has also been observed in safety research where, in a study on helmet use, the explicit measure was associated with a Social Desirability Scale (Ledesma et al., 2018). Furthermore, it is said that explicit attitudes can be changed faster and more completely than implicit ones (Dovidio et al., 2001; Gawronski et al., 2017).

On the other hand, implicit attitudes are considered automatic, impulsive, and widely unconscious (Rydell et al., 2006). According to Greenwald and Banaji (1995, p. 5), they can be defined as “introspectively unidentified (or inaccurately identified) traces of past experience” that mediate overt responses. Hence, they use the term “implicit” as a broad label for a wide range of mental states and processes such as unaware, unconscious, intuitive, and automatic which are difficult to identify introspectively by a subject. Gawronski and Bodenhausen (2006) describe implicit attitudes as affective reactions that arise when stimuli activate automatic networks of associations. However, although Gawronski and Bodenhausen (2006) do not deny “that certain affective reactions are below the threshold of experiential awareness” (p. 696), they are critical towards the “potential unconsciousness of implicit attitudes” (p. 696). Therefore, they use the term “implicit” predominantly for the aspect of automaticity of affective reactions. Nevertheless, research has shown that people are not fully aware of the influence of implicit attitudes on their thinking and behavior even though they are not always completely unconscious (Berger, 2020; Chen & Bargh, 1997; De Houwer et al., 2007; Gawronski et al., 2006). Many authors say that implicit attitudes remain more or less stable over time and are hard to change (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In line with this, past studies in which attempts were made to change implicit attitudes often failed to achieve significant improvements (e.g., Marquardt, 2016; Vingilis et al., 2015).

1.3 Training and safety attitude change[s][t]

As mentioned in the introduction, the main question of this paper is whether training can change implicit and explicit safety attitudes. Safety training can improve a person’s ability to correctly identify, assess, and respond to possible hazards in the work environment, which in turn can lead to a better safety culture (Burke et al., 2006; Duffy, 2003; Wu et al., 2007). Besides individual safety training, increasingly more industries, such as aviation, medicine, and the offshore oil and gas industry, implement group trainings labeled Crew Resource Management (CRM) training to address shared knowledge and task coordination in dynamic and dangerous work settings (Salas et al., 2006).

There are many different factors that determine the effectiveness of safety trainings (Burke et al., 2006; Ricci et al., 2016), such as the training method (e.g., classroom lectures) and training duration (e.g., 8 h).

As can be seen in Figure 1, associative evaluations[u][v][w][x] (process) can be activated by different safety intervention stimuli such as training (input). These associative evaluations are the foundation for implicit safety attitudes (output) and for propositional reasoning (process), which in turn forms the explicit safety attitudes (output). In addition, associative evaluations and propositional reasoning processes affect each other in many complex conscious and unconscious ways (Gawronski & Bodenhausen, 2006). However, change rates might differ: while the propositional processes adapt very quickly to the input (e.g., safety training), the associative evaluations might need longer periods of time to restructure the associative network (Karpen et al., 2012). Therefore, divergences between the implicit and explicit measures, resulting in inconsistent attitudes (output), can occur (McKenzie & Carrie, 2018).

1.4 Hypotheses and overview of the present studies

Based on the theories and findings introduced above, two main hypotheses are presented. Since previous research describes that explicit attitudes can be changed relatively quickly (Dovidio et al., 2001; Karpen et al., 2012), the first hypothesis states that:

  • H1: Explicit safety attitudes can be changed by training.

Even though implicit attitudes are said to be more stable and harder to change (Dovidio et al., 2001; Gawronski et al., 2017; Wilson et al., 2000), changes in implicit attitudes through training can be expected too, due to changes in the associative evaluation processes (Lai et al., 2013) which affect the implicit attitudes (see EISAC model in Figure 1). Empirical research on implicit attitudinal change through training is scarce (Marquardt, 2016); however, it has been shown that an influence on implicit attitudes is possible[y][z][aa] (Charlesworth & Banaji, 2019; Jackson et al., 2014; Lai et al., 2016; Rudman et al., 2001). Therefore, the second hypothesis states that:

  • H2: Implicit safety attitudes can be changed by training.

However, there is currently a lack of empirical studies on implicit and explicit attitude change using longitudinal designs in different contexts (Lai et al., 2013). Also, in the field of safety training research, studies are needed to estimate training effectiveness over time (Burke et al., 2006). Therefore, to address the issues of time and context in safety attitude change by training, three studies with different training durations and measurement time frames in different safety-relevant contexts were conducted (see Table 1). In the first study, short-term attitude change was measured 3 days before and after a 2-h safety training in a chemical laboratory. In the second study, medium-term attitude change was assessed 1 month before and after a 2-day CRM training for production workers. In the third study, long-term attitude change was measured within an advanced experimental design (12 months between pre- and post-measure) after a 12-week safety ethics training in an occupational psychology student sample.[ab] To make this paper more succinct and to ease comparison of the methods used and the results revealed, all three studies are presented in parallel in the following method, results, and discussion sections. A summary of all three studies can be seen in Table 1.

2. METHODS

Study 1

Fifteen participants (eight female and seven male; mean age = 22.93 years; SD = 2.74) were recruited for the first study. The participants were from different countries, with a focus on East and South Asia (e.g., India, Bangladesh, and China). They were enrolled in one class of an international environmental sciences study program in Germany with a major focus on practical experimental work in chemical and biological laboratories. Participation in regular safety training was mandatory for all participants to be admitted to working in these laboratories. To ensure safe working in the laboratories, the environmental sciences study program traditionally has small classes of 15–20 students. Hence, the sample represents the vast majority of one entire class of this study program. However, due to the lockdown caused by the COVID-19 pandemic, there was no opportunity to increase the sample size in a subsequent study. Consequently, the sample size was very small.

2.1.2 Study 2

A sample of 81 German assembly-line workers of an automotive manufacturer participated in Study 2. The workers were grouped into self-directed teams responsible for gearbox manufacturing. Hence, human error during the production process could threaten the health and safety of the affected workers and also the product safety of the gearbox which in turn affects the health and safety of prospective consumers. The gearbox production unit encompassed roughly 85 workers. Thus, the sample represents the vast majority of the production unit’s workforce. Due to the precondition of the evaluation being anonymous, as requested by the firm’s work council, personal data such as age, sex, and qualification could not be collected.

2.1.3 Study 3

In Study 3, complete data sets of 134 German participants (mean age = 24.14; SD = 5.49; 92 female, 42 male) could be collected. All participants were enrolled in Occupational Psychology and International Business study programs with a special focus on managerial decision making under uncertainty and risks. The sample represents the vast majority of two classes of this study program since one class typically includes roughly 60–70 students. Furthermore, 43 of these students also had a few years of work experience (mean = 4.31; SD = 4.07).

4. DISCUSSION

4.1 Discussion of results

The overall research objective of this paper was to find out about the possibility of explicit and implicit safety attitude changes by training. Therefore, two hypotheses were created. H1 stated that explicit safety attitudes can be changed by training. H2 stated that implicit safety attitudes can be changed by training. Based on the results of Studies 1–3, it can be concluded that explicit safety attitudes can be changed by safety training. In respect of effect sizes, significant small effects (Study 2), medium effects (Study 1), and even large effects (Study 3) were observed. Consequently, the first hypothesis (H1) was supported by all three studies. Nevertheless, compared to the meta-analytic results by Ricci et al. (2016) who obtained very large effect sizes, the effects of training on the explicit safety attitudes were lower in the present studies. In contrast, none of the three studies revealed significant changes in the implicit safety attitudes after the training. Even though there were positive changes in the post-measures, the effect sizes were marginal and nonsignificant. Accordingly, the second hypothesis (H2) was not confirmed in any of these three studies. In addition, it seems that the duration of safety training (e.g., 2 h, 2 days, or even 12 weeks) has no effect on the implicit attitudes[ac][ad][ae][af][ag][ah]. However, the effect sizes of short-term and medium-term training of Studies 1 and 2 were larger than those obtained in the study by Lai et al. (2016), whose effect sizes were close to zero after the follow-up measure 2–4 days after the intervention.
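The small, medium, and large labels above follow Cohen’s conventions (d ≈ 0.2, 0.5, and 0.8, respectively). For pre/post designs like these studies, one common way to compute the effect size is to standardize the mean change by the standard deviation of the change scores. The sketch below illustrates that calculation with hypothetical attitude scores, not data from the paper:

```python
import statistics

def cohens_d_paired(pre, post):
    """Cohen's d for a within-subjects (pre/post) design: the mean
    change score divided by the standard deviation of the change
    scores. Other standardizers (e.g., pre-test SD) are also used
    in the literature and give somewhat different values."""
    diffs = [b - a for a, b in zip(pre, post)]
    return statistics.mean(diffs) / statistics.stdev(diffs)

# Hypothetical explicit-attitude scores (e.g., 1-5 Likert means)
# before and after a safety training.
pre  = [3.1, 3.4, 2.8, 3.0, 3.6, 3.2, 2.9, 3.3]
post = [3.5, 3.3, 3.4, 3.2, 3.7, 3.8, 2.9, 3.6]
d = cohens_d_paired(pre, post)
# d here is about 1.0: a large effect by Cohen's conventions.
```

In practice one would also report a paired-samples significance test alongside d, since a large standardized effect in a very small sample (like Study 1’s n = 15) can still be statistically unreliable.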

The results obtained in these studies differ with regard to effect size. This can partly be explained by the characteristics of the samples. For instance, in Studies 1 and 3, the participants of the training, as well as the control groups (Study 3 only), were students from occupational psychology and environmental sciences degree programs. Therefore, all students—even those of the control groups—are familiar with concepts of health and safety issues, sustainability, and prosocial behavior. Consequently, the degree programs could have had an impact on the implicit sensitization of the students, which might have caused high values in implicit safety attitudes even in the control groups. The relatively high IAT-effects in all four groups before and after the training are therefore an indication of a ceiling effect in the third study (see Table 3). This is in line with the few empirical results gained by previous research in the field of implicit and explicit attitude change by training (Jackson et al., 2014; Marquardt, 2016). Specifically, Jackson et al. (2014) also found a ceiling effect in the favorable implicit attitudes towards women in STEM of female participants, who showed no significant change in implicit attitudes after a diversity training.[ai][aj][ak]

Finally, it seems that the implicit attitudes were mainly unaffected by the training. The IAT data have shown no significant impact in any group comparison or pre- and post-measure comparison. To conclude, based on the current results it can be assumed that when there is a training effect, it manifests itself in the explicit and not the implicit safety attitudes. One explanation might be that implicit safety attitudes are more stable unconscious dispositions which cannot be changed as easily as explicit ones (Charlesworth & Banaji, 2019; Dovidio et al., 2001; Wilson et al., 2000). In respect of the EISAC model (see Section 1.3), unconscious associative evaluations might be activated by safety training, but not sustainably changed. A true implicit safety attitude change would refer to a shift in associative evaluations that persists across multiple safety contexts and over longer periods of time (Lai et al., 2013).[al][am]

5. PRACTICAL IMPLICATIONS AND CONCLUSION

What do the current empirical results mean for safety culture and training development? Based on the assumption that implicit attitudes are harder to change (Gawronski et al., 2017) and thus may require active engagement via the central route of persuasion (Petty & Cacioppo, 1986), this could explain why there was no change in Study 1. This assumption is supported by the meta-analysis of Burke et al. (2006), who found large effect sizes for highly engaging training methods (e.g., behavior modeling, feedback, safety dialog) in general, and by the meta-analysis of Ricci et al. (2016), who obtained large effect sizes on attitudes in particular. However, the more engaging training methods used in Studies 2 (CRM training) and 3 (safety ethics training), such as interactive tutorials, case analyses, cooperative learning phases, role plays, and debriefs (structured group discussions), which have shown strong meta-analytic effects (Ricci et al., 2016), had a significant impact on explicit but not implicit attitude change[an][ao]. In addition, it seems that more intense training with a longer duration (e.g., 12 weeks in Study 3) again has no effect on implicit attitude change. Therefore, other approaches[ap][aq] may be more promising.

To sum up, even though the outlined conclusions are tentative, it could be very useful in the future to design realistic and affect-inducing training simulations via emergency simulators or virtual reality approaches[ar][as][at][au][av] [aw][ax][ay][az][ba](Sacks et al., 2013; Seymour et al., 2002) for all highly hazardous industries. If these simulations are accompanied by highly engaging behavioral (e.g., behavioral modeling; Burke et al., 2006, 2011), social (e.g., debriefs/structured group discussions; Ricci et al., 2016), and cognitive (e.g., implementation intentions; Lai et al., 2016) training methods, then they might facilitate a positive explicit and even implicit safety attitude change and finally a sustainable safety culture transformation.

[a]A theoretical question that occurs to me when reading this is:

Is “an organizational safety culture” the sum of the safety attitudes of workers and management or is there a synergy among these attitudes that creates a non-linear feedback effect?

[b]I would not have thought of this as the purpose of discrete trainings. I would have thought that the purpose of trainings is to teach the skills necessary to do a job safely.

[c]I agree. Safety Trainings are about acquiring skills to operate safely in a specific process…the collective (Total Environment) affects safety behavior.

[d]I think this could go back to the point below about fostering the environment – safety trainings communicating that safety is a part of the culture here.

[e]Safety professionals (myself included) have historically misused the term “training” to refer to what are really presentations.

[f]Agreed. I always say something that happens in a lecture hall with my butt in a chair is probably not a “training.” While I see the point made above, many places have “trainings” simply because they are legally required to have them. It says little to nothing about the safety culture of the whole environment.

[g]Maybe they go more into the actual training types used in the manuscript, but we typically start in a lecture hall and then move into the labs for our trainings, so I would still classify what we have as a training, but I can see what you mean about a training being more like a presentation in some cases.

[h]This is something I struggle with…but I’m trying to refer to the lecture style component as a safety presentation and the actual working with spill kits as a safety training.  It has been well-received!

[i]This is a core question and has been an ongoing struggle ever since I started EHS training in an education-oriented environment.

As a result, over time I have moved my educational objectives from content based (e.g. what is an MSDS?) to awareness based (what steps should you take when you have a safety question). However, the EHS community is sloppy when talking about training and education, which are distinct activities.

[j]Looks like these would be used for more factual items such as evaluating what the researcher did, not how/why they did it

[k]I’m skeptical that IATs are predictive of real-world behavior in all, or even most, circumstances. I’d be more interested in an extension of this work that looks at whether training (or “training”) changes revealed preferences based on field observations.

[l]Yes – much more difficult to do but also much more relevant. I would be more interested in seeing if decision-making behavior changes under certain circumstances. This would tell you if training was effective or not.

[m]This is a little confusing to me but sounds like language that makes sense in another context.

[n]What are the safety-related social desirabilities of chemistry grad students?

[o]I would think these would be tied to not wanting to “get in trouble.”

[p]Also, likely linked to being wrong about something chemistry-related.

[q]What about the opposite? Not wear PPE to be cool?

[r]In my grad student days, I was primarily learning how to “fake it until I make it”. This often led to the imposter syndrome being socially desirable. This probably arose from the ongoing awareness of grading and other judgement systems that the academic environment relies on

[s]Were study participants aware or were the studies conducted blind? If I am an employee and I know my progress will be measured, I may behave differently than if I had not known.

[t]This points back to last week’s article.

[u]What are some other ways to activate our associative evaluations?

[v]I would think it would include things like witnessing your lab mates follow safety guidance, having your PI explicitly ask you about risk assessment on your experiments, having safety issues remedied quickly by your facility. Basically, the norms you would associate with your workplace.

[w]Right, I just wonder if there’d be another way besides the training (input) to produce the intended change in the associative evaluation process we go through to form an implicit attitude. We definitely have interactions on a daily basis which can influence that, but is there some other way to tell our subconscious mind something is important.

[x]In the days before social media, we used social marketing campaigns that were observably successful, but they relied on a core of career lab techs who supported a rotating cast of medical researchers. The lab techs were quite concerned about both their own safety and the quality of their science as a result of the 3 to 6 month rotation of the MD/PhD researchers.

The social marketing campaigns included 1) word of mouth, 2) supporting graphical materials and 3) ongoing EHS presence in labs to be the bad guys on behalf of the career lab techs

[y]This reminds me of leading vs lagging indicators for cultural change

[z]This also makes me think of the arguments around “get the hands to do the right things and the attitudes will follow” which is along the lines of what Geller describes.

[aa]That’s a great comparison. Emphasizes the importance of embedding it throughout the curriculum to be taught over long periods of time

[ab]A possible confounding variable here would have to do with how much that training was reinforced between the training and the survey period. 12 months out (or even 3 months out) a person may not even remember what was said or done in that specific training, so their attitudes are likely to be influenced by what has been happening in the mean time.

[ac]I don’t find this surprising. I would imagine that what was happening in the mean time (outside of the training) would have a larger impact on implicit attitudes.

[ad]I was really hoping to see a comparison using the same attitude time frame for the 3 different training durations. Like a short-term, medium, and long-term evaluation of the attitudes for all 3 training durations, but maybe this isn’t how things are done in these kinds of studies.

[ae]This seems to be the trouble with many of the behavioral sciences papers I read, where you can study what is available not something that lines up with your hypothesis

[af]I really would probably have been more interested in the long-term evaluation for the medium training duration personally to see their attitude over a longer period of time, for example.

[ag]I think this is incredibly hard to get right though. An individual training is rarely impactful enough for people to remember it. And lots of stuff happens in between when you take the training and when you are “measured” that could also impact your safety attitudes. If the training you just went through isn’t enforced by anyone anywhere, what value did it really have? Alternatively, if people already do things the right way, then the training may have just helped you learn how to do everything right – but was it the training or the environment that led to positive implicit safety attitudes? Very difficult to tease apart in reality.

[ah]Yeah, maybe have training follow-ups or an assessment of some sorts to determine if information was retained to kind of evaluate the impact the training had on other aspects as well as the attitudes.

[ai]What effect does this conclusion have on JEDI or DEI training?

[aj]I also found this point to be very interesting. I wonder if this paper discussed explicit attitudes. I’m not sure what explicit vs implicit attitudes would mean in a DEI context because they seem more interrelated (unconscious bias, etc.)

[ak]I am also curious how Implicit Attitude compares to Unconscious Bias.

[al]i.e. Integrated across the curriculum over time?

[am]One challenge I see here is the competing definitions of “safety”. There are chemical safety, personal security, community safety,  social safety all competing for part of the safety education pie. I think this is why many people’s eyes glaze over when safety training is brought up or presented

[an]The authors mention that social desirability is one reason explicit and implicit attitudes can diverge, but is it the only reason, or even the primary reason? I’m somewhat interested in the degree to which that played a role here (though I’m also still not entirely sure how much I care whether someone is a “true believer” when it comes to safety or just says/does all the right things because they know it’s expected of them).

[ao]This is a good point.

[ap]I am curious to learn more about these approaches.

[aq]I believe the author discusses more thoroughly in the full paper

[ar]Would these trainings only be for emergencies or all trainings? I feel that a lot of times we are told what emergencies might pop up and how you would handle them but never see them in action. This reminds me of a thought I had about making a lab safety-related video game that you could “fail” on handling an emergency situation in lab but you wouldn’t have the direct consequences in the real world.

[as]Love that idea, it makes sense that you would remember it better if you got to walk through the actual process. I wonder what the effect of engagement would be on implicit and explicit attitudes.

[at]Absolutely – I think valuable learning moments come from doing the action and it honestly would be safer to learn by making mistakes in a virtual environment when it comes to our kind of safety. The idea reminds me of the  tennis video games I used to play when I was younger and they helped me learn how to keep score in tennis. Now screen time would be a concern, but something like this could be looked at in some capacity.

[au]This idea is central to trying to bring VR into training. Obviously, you can’t actually have someone spill chemical all over themselves, etc – but VR makes it so you virtually could. And there are papers suggesting that the brain “reads” things happening in the VR world as if they really happened. Although one has to be careful with this because that also opens up the possibility that you could actually traumatize someone in the VR world.

[av]I know I was traumatized just jumping into a VR game where you fell through hoops (10/10, don’t recommend falling-based VR games), but maybe less of a VR game and more of a cartoon character that they can customize, so they see the impact that exposure to different chemicals could have without the traumatic experience of being burned themselves, for example.

[aw]In limited time and/or limited funding situations, how can academia utilize these training methodologies? Any creative solutions?

[ax]I’m also really surprised that the conclusion is to focus on training for the worker. I would think that changing attitudes (explicit and implicit) would have more to do with the environment that one works in than it does on a specific training.

[ay]I agree on this. I think the environment one finds themselves plays a part in shaping one’s attitudes and behaviors.

[az]AGREED

[ba]100% with the emphasis on the environment rather than the training

Are employee surveys biased? CHAS Journal club, Oct 13, 2021

Impression management as a response bias in workplace safety constructs

In October 2021, the CHAS Journal Club reviewed the 2019 paper by Keiser & Payne examining the impact of “impression management” on the way workers in different sectors responded to safety climate surveys. The authors attended on October 13 to discuss their work with the group. Below is their presentation file, as well as the comments from the table read held the week before.

Our thanks to Drs. Keiser and Payne for their work and their willingness to talk with us about it!

10/06 Table Read for The Art & State of Safety Journal Club

Excerpts from “Are employee surveys biased? Impression management as a response bias in workplace safety constructs”

Full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753518315340?casa_token=oOShJnb3arMAAAAA:c4AcnB3fwnlDYlol3o2bcizGF_AlpgKLdEC0FPjkKg8h3CBg0YaAETq8mfCY0y-kn7YcLmOWFA

Meeting Plan

  • (5 minutes) Sarah to open meeting
  • (15 minutes) All participants read complete document
  • (10 minutes) All participants use “Comments” function to share thoughts
  • (10 minutes) All participants read others’ Comments & respond
  • (10 minutes) All participants return to their own Comments & respond
  • (5 minutes) Sarah announces next week’s plans & closes meeting

Introduction

The ultimate goal of workplace safety research is to reduce injuries and fatalities on the job.[a] Safety surveys that measure various safety-related constructs, including safety climate (Zohar, 1980), safety motivation and knowledge (Griffin and Neal, 2000), safety participation and compliance (Griffin and Neal, 2000), and outcome indices (e.g., injuries, incidents, and near misses), are the primary way that researchers gather relevant safety data. They are also used extensively in industry. It is quite common to administer self-report measures of both safety predictors and outcomes in the same survey, which introduces the possibility that method biases prevalent in self-report measures contaminate relationships among safety constructs (Podsakoff et al., 2012).

The impetus for the current investigation is the continued reliance by safety researchers and practitioners on self-report workplace safety surveys. Despite evidence that employees frequently underreport injuries (Probst, 2015; Probst and Estrada, 2010), researchers have not directly examined the possibility that employees portray the workplace as safer than it really is on safety surveys[b]. Correspondingly, the current investigation strives to answer the following question: Are employee safety surveys biased? In this study, we focus on one potential biasing variable, impression management, defined as conscious attempts at exaggerating positive attributes and ignoring negative attributes (Connelly and Chang, 2016; Paulhus, 1984). The purpose of this study is to estimate the prevalence of impression management as a method bias in safety surveys based on the extent to which impression management contaminates self-reports of various workplace safety constructs and relationships among them.[c][d][e]

Study 1

Method

This study was part of a larger assessment of safety climate at a public research university in the United States using a sample of research laboratory personnel. The recruitment e-mail was concurrently sent to people who completed laboratory safety training in the previous two years (1841) and principal investigators (1897). Seven hundred forty-six laboratory personnel responded to the survey… To incentivize participation, respondents were given the option to provide their name and email address after they completed the survey in a separate survey link, in order to be included in a raffle for one of five $100 gift cards.

Measures:

  • Safety climate
  • Safety knowledge, compliance, and participation
  • Perceived job risk and safety outcomes
  • Impression management

Study 2

A second study was conducted to:

  1. Further examine impression management as a method bias in self-reports of safety while
  2. Accounting for personality trait variance in impression management scales.

A personality measure was administered to respondents and controlled to more accurately estimate the degree to which self-report measures of safety constructs are susceptible to impression management as a response bias.

Method

A similar survey was distributed to all laboratory personnel at a different university located in Qatar. A recruitment email was sent to all faculty, staff, and students at the university (532 people), which included a link to an online laboratory safety survey. No incentive was provided for participating and no personally identifying information was collected from participants. A total of 123 laboratory personnel responded.[f]

Measures:

  • Same constructs as Study 1, plus
  • Personality

Study 3

A third study addressed two limitations inherent in Study 1 and Study 2: score reliability and generalizability.

Method

A safety survey was distributed to personnel at an oil and gas company in Qatar, as part of a larger collaboration to examine the effectiveness of a safety communication workshop. All employees (∼370) were invited to participate in the survey and 107 responded (29% response rate). Respondents were asked to report their employee identification numbers at the start of the survey, which was used to identify those who participated in the workshop. A majority of employees provided their identifying information (96, 90%).

Measures:

  • Same constructs used in Study 1, plus
  • Risk propensity
  • Safety communication
  • Safety motivation
  • Unlikely virtues

Conclusion[g][h][i][j][k][l]

Safety researchers have provided few direct estimates of method bias[m][n][o][p] in self-report measures of safety constructs. This oversight is especially problematic considering that they rely heavily on self-reports to measure safety predictors and criteria.

The results from all three studies, but especially the first two, suggest that self-reports of safety are susceptible to dishonesty[q][r][s][t][u] aimed at presenting an overly positive representation of safety.[v][w][x][y][z][aa] In Study 1, self-reports of safety knowledge, climate, and behavior appeared to be more susceptible to impression management than self-reports of perceived job risk and safety outcomes. Study 2 provided additional support for impression management as a method bias in self-reports of both safety predictors and outcomes. Further, relationships between impression management and safety constructs remained significant even when controlling for Alpha personality trait variance (conscientiousness, agreeableness, emotional stability). Findings from Study 3 provided less support for the biasing effect of impression management on self-report measures of safety constructs (average VRR=11%). However, the unlikely virtues measure [this is a measure of the tendency to claim uncommon positive traits] did yield more reliable scores than those observed in Study 1 and Study 2, and it was significantly related to safety knowledge, motivation, and compliance. Controlling for the unlikely virtues measure led to the largest reductions in relationships with safety knowledge. A further exploratory comparison of identified vs. anonymous respondents showed that mean scores on the unlikely virtues measure did not differ significantly between the identified and anonymous subsamples; however, unlikely virtues had a larger impact on relationships among safety constructs for the anonymous subsample.
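
As an illustration of what “controlling for” an impression-management measure does to a correlation between two self-reported safety constructs, the sketch below computes a partial correlation on synthetic data. This is a toy example, not the authors’ analysis: the dataset, the 0.7 loading on impression management, and the percent-reduction summary are all invented for illustration.

```python
import numpy as np

# Synthetic data (hypothetical): two self-report safety scores that are both
# inflated by a shared impression-management (IM) tendency.
rng = np.random.default_rng(0)
n = 500
im = rng.normal(size=n)                            # IM score
compliance = 0.3 * rng.normal(size=n) + 0.7 * im   # self-reported compliance
climate = 0.3 * rng.normal(size=n) + 0.7 * im      # self-reported climate

def partial_corr(x, y, z):
    """Correlation of x and y after regressing z out of both."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

raw = np.corrcoef(compliance, climate)[0, 1]
adj = partial_corr(compliance, climate, im)
# Simple percent-of-shared-variance-removed summary (illustrative only;
# not necessarily the VRR formula used in the paper).
reduction = 100 * (raw**2 - adj**2) / raw**2
print(f"raw r = {raw:.2f}, partial r = {adj:.2f}, variance reduced = {reduction:.0f}%")
```

In this contrived case the strong raw correlation is almost entirely an artifact of the shared IM component, which is the contamination (or, in other cases, suppression) pattern these studies probe.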

The argument for impression management as a biasing variable in self-reports of safety relied on the salient social consequences of responding and other costs of providing a less desirable response, including, for instance, negative reactions from management, remedial training, or overtime work[ab][ac]. Findings suggest that the influence of impression management on self-report measures of safety constructs depends on various factors[ad] (e.g., distinct safety constructs, the identifying approach, industry and/or safety salience), rather than supporting the claim that impression management serves as a pervasive method bias.

The results of Study 1 and Study 3 suggest that impression management was most influential as a method bias in self-report measures of safety climate, knowledge, and behavior, compared to perceived risk and safety outcomes. These results might reflect the more concrete nature of these constructs based on actual experience with hazards and outcomes. Moreover, these findings are in line with Christian et al.’s (2009) conclusion that measurement biases are less of an issue for safety outcomes compared to safety behavior. These findings in combination with theoretical rationale suggest that the social consequences of responding are more strongly elicited by self-report measures of safety climate, knowledge, and behavior, compared to self-reports of perceived job risk and safety outcomes. Items in safety perception and behavior measures fittingly tend to be more personally (e.g., safety compliance – “I carry out my work in a safe manner.”) and socially relevant (e.g., safety climate – “My coworkers always follow safety procedures.”).

The results from Study 2, compared to findings from Study 1 and Study 3, suggest that assessments of job risk and outcomes are also susceptible to impression management. The Alpha personality factor generally accounted for a smaller portion of the variance in the relationships between impression management and perceived risk and safety outcomes. The largest effects of impression management on the relationships among safety constructs were for relationships with perceived risk and safety outcomes. These results align with research on injury underreporting (Probst et al., 2013; Probst and Estrada, 2010) and suggest that employees may have been reluctant to report safety outcomes even when they were administered on an anonymous survey used for research purposes.

We used three samples in part to determine if the effect of impression management generalizes. However, results from Study 3 were inconsistent with the observed effect of impression management in Studies 1 and 2. One possible explanation is that these findings are due to industry differences and specifically the salience of safety. There are clear risks associated with research laboratories as exemplified by notable incidents; [ae]however, the risks of bodily harm and death in the oil and gas industry tend to be much more salient (National Academies of Sciences, Engineering, and Medicine, 2018). Given these differences, employees from the oil and gas industry as reflected in this investigation might have been more motivated to provide a candid and honest response to self-report measures of safety.[af][ag][ah][ai][aj] This explanation, however, is in need of more rigorous assessment.

These results in combination apply more broadly to method bias[ak][al][am] in workplace safety research. The results of these studies highlight the need for safety researchers to acknowledge the potential influence of method bias and to assess the extent to which measurement conditions elicit particular biases.

It is also noteworthy that impression management suppressed relationships in some cases; thus, accounting for impression management might strengthen theoretically important relationships. These results also have meaningful implications for organizations because positively biased responding on safety surveys can contribute to the incorrect assumption that an organization is safer than it really is[an][ao][ap][aq][ar][as][at].

The results of Study 2 are particularly concerning and practically relevant as they suggest that employees in certain cases are likely to underreport the number of safety outcomes that they experience even when their survey responses are anonymous. However, these findings were not reflected in results from Study 1 and Study 3. Thus, it appears that impression management serves as a method bias among self-reports of safety outcomes only in particular situations. Further research[au][av][aw] is needed to explicate the conditions under which employees are more/less likely to provide honest responses to self-report measures of safety outcomes.

———————————————————————————————————————

BONUS MATERIAL FOR YOUR REFERENCE:

For reference only, not for reading during the table read

Respondents and Measures

  • Study 1

Respondents:

graduate students (229, 37%),

undergraduate students (183, 30%),

research scientists and associates (123, 20%),

post-doctoral researchers (28, 5%),

laboratory managers (25, 4%),

principal investigators (23, 4%)

329 [53%] female;

287 [47%] male

377 [64%] White;

16 [3%] Black;

126 [21%] Asian;

72 [12%] Hispanic

Age (M=31, SD=13.24)

Respondents worked in various types of laboratories, including:

biological (219, 29%),

animal biological (212, 28%),

human subjects/computer (126, 17%),

chemical (124, 17%),

mechanical/electrical (65, 9%)

Measures:

  • Safety Climate

Nine items from Beus et al.’s (2019) 30-item safety climate measure were used in the current study. The nine-item measure included one item from each of five safety climate dimensions (safety communication, co-worker safety practices, safety training, safety involvement, safety rewards) and two items from the management commitment and safety equipment and housekeeping dimensions. The nine items were identified based on factor loadings from Beus et al. (2019). Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Safety knowledge, compliance, and participation

Respondents completed slightly modified versions of Griffin and Neal’s (2000) four-item measures of safety knowledge (e.g., “I know how to perform my job in the lab in a safe manner.”), compliance (e.g., “I carry out my work in the lab in a safe manner.”), and participation (e.g., “I promote safety within the laboratory.”). Items were completed using a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Perceived job risk and safety outcomes

Respondents completed a three-item measure of perceived job risk (e.g., “I encounter personally hazardous situations while in the laboratory;” 1=almost always untrue, 5=almost always true; Jermier et al., 1989). Respondents also provided safety incident data regarding the number of injuries, incidents, and near misses that they experienced in the last 12 months.

  • Impression Management

Four items were selected from Paulhus’s (1991) 20-item Balanced Inventory of Desirable Responding. These items were selected based on a review of Paulhus’s (1991) full measure and an assessment of those items that were most relevant to and best representative of the full measure (Table 1). Items were completed using a five-point accuracy scale (1=very inaccurate, 5=very accurate). Ideally this survey would have included Paulhus’s (1991) full 20-item measure. However, as is often the case in survey research, we had to balance construct validity with survey length and concerns about respondent fatigue, and for these reasons only a subset of Paulhus’s (1991) measure was included.

  • Study 2

Respondents:

research scientists or post-doctoral researchers (43; 39%)

principal investigators (12; 11%)

laboratory managers and coordinators (12; 11%)

graduate students (3; 3%)

faculty teaching in a laboratory (3; 3%)

one administrator (1%)

Respondents primarily worked in:

chemical (55; 45%)

mechanical/electrical (39; 32%)

uncategorized laboratory (29; 24%)

Measures:

  • Safety Constructs

Respondents completed the same six self-report measures of safety constructs that were used in Study 1: safety climate, safety knowledge, safety compliance, safety participation, perceived job risk, and injuries, incidents, and near misses in the previous 12 months.

  • Impression Management

Respondents completed a five-item measure of impression management from the Bidimensional Impression Management Index (Table 1; Blasberg et al., 2014). Five items from the Communal Management subscale were selected based on an assessment of their quality and the degree to which they represent the 10-item scale. A subset of Blasberg et al.’s (2014) full measure was used because of concerns from management about survey length. Items were responded to on a five-point agreement scale (1=strongly disagree, 5=strongly agree).

  • Personality

Conscientiousness, agreeableness, and emotional stability were assessed using six items from Gosling et al.’s (2003) 10-item personality measure. Four items from the 10-item measure assessing openness to experience and extraversion were not included in this study. Respondents were asked to indicate the degree to which adjectives were representative of them (i.e., conscientiousness – “dependable, self-disciplined;” agreeableness – “sympathetic, warm;” emotional stability – “calm, emotionally stable”; 1=strongly disagree, 7=strongly agree); responses were combined to represent the Alpha personality factor. One conscientiousness item was dropped because it had a negative item-total correlation (“disorganized, careless” [reverse coded]). This was not surprising, as it was the only reverse-scored personality item administered.

  • Study 3

Respondents:

The typical respondent was male (101, 94%) and had no supervisory responsibility (72, 67%); however, some women (6, 6%), supervisors (17, 16%), and managers/senior managers (16, 15%) also completed the survey. The sample was diverse in national origin, with most respondents from India (44, 42%) and Pakistan (25, 24%).

Measures:

  • Safety Constructs

Respondents completed five of the same self-report measures of safety constructs used in Study 1 and Study 2, including safety climate (Beus et al., 2019), safety knowledge (Griffin and Neal, 2000), safety compliance (Griffin and Neal, 2000), safety participation (Griffin and Neal, 2000), and injuries, incidents, and near misses in the previous 6 months. Respondents completed a similar measure of perceived job risk (Jermier et al., 1989) that included three additional items assessing the degree to which physical, administrative, and personal controls

  • Unlikely Virtues

Five items were selected from Weekley’s (2006) 10-item unlikely virtues measure (see also Levashina et al., 2014; Table 1) and were responded to on a 5-point agreement scale (1=strongly disagree; 5=strongly agree). Akin to the previous studies, an abbreviated version of the measure was used because of constraints with survey length and the need to balance research and organizational objectives.

[a]In my mind, this is a negative way to start a safety research project. The ultimate goal of the organization is to complete its mission and injuries and fatalities are not part of the mission. So this puts the safety researcher immediately at odds with the organization.

[b]I wonder if this happens beyond surveys—do employees more generally portray a false sense of safety to co-workers, visitors, employers, trainees, etc? Is that made worse by surveying, or do surveys pick up on bias that exists more generally in the work culture?

[c]Employees always portray things in a better light on surveys because who really knows if it’s confidential

[d]Not just with regard to safety; most employees, I suspect, want to portray their businesses in a positive light. Good marketing…

[e]I think that this depends on the quality of the survey. If someone is pencil whipping a questionnaire, they are probably giving answers that will draw the least attention. However, if the questions are framed in an interesting way, I believe it is possible to have a survey be both a data collection tool and a discussion starter. Surveys are easy to generate, but hard to do well.

[f]In my experience, these are pretty high response rates for the lab surveys (around 20%).

[g]A concern that was raised by a reviewer on this paper was that it leads to a conclusion of blaming the workers. We certainly didn't set out to do that, but I can understand that perspective. I'm curious if others had that reaction.

[h]I had the same reaction and I can see how it could lead to a rosier estimate of safety conditions.

[i]There is an interesting note below where you mention the possible outcomes of surveys that "go poorly" if you will. If the result is that the workers are expected to spend more of their time and energy "fixing" the problem, it is probably no surprise that they will just say that there is no problem.

[j]I am always thinking about this type of thing—how results are framed and who the finger is being pointed at. I can see how this work can be interpreted that way, but I also see it from an even bigger picture—if people are feeling that they have to manage impressions (for financial safety, interpersonal safety, etc) then to me it stinks of a bigger cultural, systemic problem. Not really an individual one.

[k]Well – the "consequences" of the survey are really in the hands of the company or institution. A researcher can go in with the best of intentions, but a company can (and often does) respond in a way that discourages others from being forthright.

[l]Oh for sure! I didn't mean to shoulder the bigger problem on researchers or the way that research is conducted—rather, that there are other external pressures that are making individuals feel like managing people's impressions of them is somehow more vital than reporting safety issues, mistakes, needs, etc. Whether that's at the company, institution, or greater cultural level (probably everywhere), I don't think it's at the individual level.

[m]My first thought on bias in safety surveys had to do more with the survey model rather than method bias. Almost all safety surveys I have taken are based on the same template, and questions generally approach safety from the same angle. I haven't seen a survey that asks the same question several ways in the course of the survey, or seen any control questions to attempt to determine the validity of answers. Perhaps some of the bias comes from the general survey format itself…

[n]I agree. In reviewing multiple types of surveys trying to target safety, there are many confounding variables. Trying to find a really good survey is tough – and I'm not entirely sure that it is possible to create something that can be applied by all. It is one of the reasons I was so intrigued by the BMS approach.

[o]Though—a lot of that work (asking questions multiple ways, asking control questions, determining validity and reliability, etc) is done in the original work that initially develops the survey metric. Just because it's not in a survey that one is taking or administering, doesn't necessarily mean that work isn't there

[p]Agreed – There are a lot of possible method biases in safety surveys. Maybe impression management isn't the most impactful. There just hasn't been much research in this area as it relates to safety measures, but certainly there is a lot out there on method biases more broadly. Stephanie and I had a follow up study (conference paper) looking at blatant extreme responding (using only the extreme endpoints on safety survey items). Ultimately, that too appears to be an issue

[q]In looking back over the summary, I was drawn to the use of the word "dishonesty." That implies intent. I'm wondering whether it is equally likely that people are lousy at estimating risk and generally overestimate their own capabilities (Dunning–Kruger, anyone?). So it is not so much about dishonesty but more about incompetency.

[r]They are more likely scared of retribution.

[s]This is an interesting point and I do think there is a part of the underestimation that has to do with an unintentional miscalibration. But, I think the work in this paper does go to show that some of the underestimation is related to people's proclivity to attempt to control how people perceive them and their performance.

[t]Even so, that proclivity is not necessarily outright dishonesty.

[u]I agree. I doubt that the respondents set out with an intent to be fraudulent or dishonest. Perhaps a milder or softer term would be more accurate?

[v]I wonder how strong this effect is for, say, graduate students who are in research labs under a PI who doesn't value safety

[w]I think it's huge. I know I see a difference in speaking with people in private versus our surveys

[x]Within my department, I know I became very cynical about surveys that were administered by the department or faculty members. Nothing ever seemed to change, so it didn't really matter what you said on them.

[y]I also think it is very significant. We are currently dealing with an issue where the students would not report safety violations to our Safety Concerns and Near Misses database because they were afraid of faculty reprisal. The lab is not especially safe, but if no one reports it, the conclusion might be drawn that no problems exist.

[z]And to bring it back to another point that was made earlier: when you're not sure if reporting will even trigger any helpful benefits, is the perceived risk of retribution worth some unknown maybe-benefit?

[aa]I heard a lot of the same concerns when we tried doing a "Near Miss" project. Even when anonymity was included, I had several people tell me that the details of the Near Miss would give away who they were, so they didn't want to share it.

[ab]Interesting point. It would seem here that folks fear if they say something is amiss with safety in the workplace, it will be treated as something wrong with themselves that must be fixed.

[ac]Yeah I feel like this kind of plays in to our discussion from last week, when we were talking about people feeling like they're personally in trouble if there is an incident

[ad]A related finding has been cited in other writings on surveys – if you give a survey, and nothing changes after the survey, then people catch on that the survey is essentially meaningless and they either don't take surveys anymore or just give positive answers because it isn't worth explaining negative answers.

[ae]There are risks associated with research labs, but I don't know if I would call them "clear". My sense is that "notable incidents" is a catchphrase people are using about academic lab safety to avoid quantitating the risks any more specifically.

[af]This is interesting to think about. On the one hand, if one works in a higher hazard environment, maybe they just NOTICE hazardous situations more and think of them as more important. On the other hand, there is a lot of discussion around the normalization of hazards in an environment that would seem to suggest that they would not report on the hazards because they are normal.

[ag]Maybe they receive more training as well which helps them identify hazards easier. Oil & Gas industry Chemical engineers certainly get more training from my experience.

[ah]Oil and gas workers were also far more likely to participate in the study than the academic groups. I think private industry has internalized safety differently (not necessarily better or worse) than academia. And high-hazard industries like oil and gas have a good feel for the cost of safety-related incidents. That definitely gets passed on to the workforce.

[ai]How does normalization take culture into account? Industries have a much longer history of self-reporting and reporting of accidents in general than do academic institutions.

[aj]Some industries have histories of self-reporting in some periods of time. For example, oil and gas did a lot of soul searching after the Deepwater Horizon explosion (which occurred the day of a celebration of 3 years with no injury reports), but this trend can fade with time. Alcoa in the 1990s and 2000s is a good example of this. For example, I've looked into Paul H. O'Neill's history with Alcoa. He was a safety champion whose work faded soon after he left.

[ak]I wonder if this can be used as a way to normalize the surveys somehow

[al]Hmm, yeah I think you could, but you would also have to take a measure of impression management so that you could remove the variance caused by that from your model.

Erg, but then long surveys…. the eternal dilemma.

[am]I bet there are overlapping biases too that have opposite effects, maybe all you could do is determine to what extent of un-reliability your survey has

[an]In the BMS paper we covered last semester, it was noted that after they started to do the managerial lab visits, the committee actually received MORE information about hazardous situations. They attributed this to the fact that the committee was being very serious about doing something about each issue that was discovered. Once people realized that their complaints would actually be heard & addressed, they were more willing to report.

[ao]and the visits allowed for personal interactions which can be kept confidential as opposed to a paper trail of a complaint

[ap]I imagine that it was also just vindicating to have another human listen to you about your concerns like you are also a human. I do find there is something inherently dehumanizing about surveys (and I say this as someone who relies on them for multiple things!). When it comes to safety in my own workplace, I would think having a human make time for me to discuss my concerns would draw out very different answers.

[aq]Prudent point

[ar]The Hawthorne Effect?

[as]I thought that had to do with simply being "studied" and how it impacts behavior. With the BMS study, they found that people were reporting more BECAUSE their problems were actually getting solved. So now it was actually "worth it" to report issues.

[at]It would be interesting to ask the same question of upper management in terms of whether their safety attitudes are "true" or not. I don't know of any organizations that don't talk the safety talk. Even Amazon includes a worker safety portion to its advertising campaign despite its pretty poor record in that regard.

[au]I wish they would have expanded on this more, I'm really curious to see what methods to do this are out there or what impact it would have, besides providing more support that self-reporting surveys shouldn't be used

[av]That is an excellent point and again something that the reviewers pushed for. We added some text to the discussion about alternative approaches to measure these constructs. Ultimately, what can we do if we buy into the premise that self-report surveys of safety are biased? Certainly one option is to use another referent (e.g., managers) instead of the workers themselves. But that also introduces its own set of bias. Additionally, there are some constructs that would be odd to measure based on anything other than self-report (e.g., safety climate). So I think it's still somewhat of an open question, but a very good one. I'm sure Stephanie will have thoughts on this too for our discussion next week. 🙂 But to me that is the crux of the issue: what do we do with self-reports that tend to be biased?

[aw]Love this, I will have to go read the full paper. Especially your point about safety climate, it will be interesting to see what solutions the field comes up with because everyone in academia uses surveys for this. Maybe it will end up being the same as incident reports, where they aren't a reliable indicator for the culture.

The Art & State of Safety Journal Club: “Mental models in warnings message design: A review and two case studies”

Sept 22, 2021 Table Read

The full paper can be found here: https://www.sciencedirect.com/science/article/abs/pii/S0925753513001598?via%3Dihub

Two case studies in consumer risk perception and exposure assessment, focusing on mothballs and elemental mercury.

For this Table Read, we will only be reviewing the two case studies presented in the paper.

4. Two case studies: mothballs and mercury

Two case studies illustrate the importance of careful adaptation to context[a][b][c]. In the first case, an expert model of consumer-product use of paradichlorobenzene mothballs is enhanced with information from lay users’ mental models, so the model can become more behaviorally realistic (Riley et al., 2006a). In the second case, the mental models elicitation is enhanced with ethnographic methods including participant observation in order to gain critical information about cultural context of mercury use in Latino and Caribbean communities in the New York area (Newby et al., 2006; Riley et al., 2001a, 2001b, 2006b).

Both cases are drawn from chemical consumer products applied in residential uses. The chemicals considered here – paradichlorobenzene and mercury – have a wide variety of consumer and occupational uses that underscore the importance of considering context in order to attain a realistic sense of beliefs about the chemical, exposure behaviors, and resultant risk.

This analysis focuses on what these case studies can tell us about the process of risk communication design[d] in order to take account of the multidimensional aspects of risk perception as well as the overall cultural context of risk. Thus, risk communications may be tailored to the beliefs held by individuals in a specific setting, as well as to the specifics of their situation (physical, social, and cultural factors) that influence perceptions of and decision making about risk.

[e][f][g][h]

4.1. Mothballs

Mothballs are used in homes to control moth infestations in clothing or other textiles. Mothballs are solids (paradichlorobenzene or naphthalene) that sublimate (move from a solid state to a gaseous state) at room temperature. Many are in the shape of balls about 1 in. in diameter, but they are also available as larger cakes or as flakes. The products with the highest exposed surface area (flakes) volatilize more quickly. The product works by the vapor killing adult moths, breaking the insect life cycle.

The primary exposure pathway is inhalation of product vapors, but dermal contact and ingestion may also occur. Cases of ingestion have included children mistaking mothballs for candy and individuals with psychological disorders who compulsively eat household items (Avila et al., 2006; Bates, 2002). Acute exposure to paradichlorobenzene can cause skin, eye, or nasal tissue irritation; acute exposure to naphthalene can cause hemolytic anemia, as well as neurological effects. Chronic exposure to either compound causes liver damage and central nervous system effects. Additional long-term effects of naphthalene exposure include retinal damage and cataracts (USEPA, 2000). Both paradichlorobenzene and naphthalene are classified as possible human carcinogens (IARC Group 2B) (IARC, 1999). Since this classification in 1987, however, a mechanism for cancer development has been identified for both naphthalene and paradichlorobenzene, in which the chemicals block enzymes that are key to apoptosis, the natural die-off of cells. Without apoptosis, tumors may form as cell growth continues unchecked (Kokel et al., 2006).

Indoor air quality researchers have studied mothballs through modeling and experiment (e.g., Chang and Krebs, 1992; Sparks et al., 1991, 1996; Tichenor et al., 1991; Wallace, 1991). Research on this topic has focused on developing and validating models of the fate and transport of paradichlorobenzene or naphthalene in indoor air. Unfortunately, the effects of consumer behavior on exposure were not considered[i][j]. Because consumer behavior strongly influences exposure, it is worth revisiting this work to incorporate realistic approximations of that behavior.

Understanding consumer decisions about purchasing, storage, and use is critical for arriving at realistic exposure estimates as well as effective risk-management strategies and warning content. Consumer decision-making is further based upon existing knowledge and understanding of exposure pathways, mental models of how risk arises (Morgan et al., 2001), and beliefs about the effectiveness of various risk-mitigation strategies. Riley et al. (2001a, 2001b) previously proposed a model of behaviorally realistic exposure assessment for chemical consumer products, in which exposure endpoints are modeled in order to estimate the relative effectiveness of different risk-mitigation strategies, and by extension, to evaluate warnings (refer to Fig. 1). The goal is to develop warnings that provide readers with the information they need to manage the risks associated with a given product, including how hazards may arise, potential effects, and risk-mitigation strategies.

4.1.1. Methods

The idea behind behaviorally realistic exposure assessment is to consider both the behavioral and physical determinants of exposure in an integrated way (Riley et al., 2000). Thus, user interviews and/or observational studies are combined with modeling and/or experimental studies to quantitatively assess the relative importance of different risk mitigation strategies and to prioritize content for the design of warnings, based on what users already know about a product. Open-ended interviews elicit people’s beliefs about the product, how it works, how hazards arise, and how they may be prevented or mitigated. User-supplied information is used as input to the modeling or experimental design in order to reflect how people really interact with a given product. Modeling can be used to estimate user exposure or to understand the range of possible exposures that can result from different combinations of warning designs and reading strategies.

Riley et al. (2006a) recruited 22 adult volunteers[k][l][m][n][o][p][q][r] who had used mothballs from the business district in Northampton, Massachusetts. Interview questions probed five areas: motivation for product use and selection; detailed use data (location, time activity patterns, amount and frequency of use); mental models of how the product works and how hazards may arise; risk perceptions; and risk-mitigation strategies. Responses were analyzed using categorical coding (Weisberg et al., 1996). A consumer exposure model used the participant-supplied data to determine the concentration of paradichlorobenzene in a two-box model (a room or compartment in which moth products are used, and a larger living space).
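The paper excerpt does not reproduce the two-box model itself. A minimal steady-state sketch of such a model (hypothetical function and parameter names, not the study's; emission is treated as mass-transfer limited, so that a tight enclosure throttles release) might look like:

```python
def two_box_steady_state(kA, Csat, Q12, Qout):
    """Steady-state concentrations (C1, C2), e.g. in mg/m^3.

    Box 1 is the enclosure where mothballs sit; box 2 is the living space.
    Emission is treated as mass-transfer limited, E = kA * (Csat - C1), so
    air in a tight enclosure approaches saturation and throttles release.

    Mass balances at steady state:
      box 1:  kA * (Csat - C1) = Q12 * (C1 - C2)
      box 2:  Q12 * (C1 - C2)  = Qout * C2

    kA   : mass-transfer coefficient times exposed product area, m^3/h
    Csat : saturation (vapor-pressure limited) concentration, mg/m^3
    Q12  : airflow between enclosure and living space, m^3/h
    Qout : outdoor air exchange of the living space, m^3/h
    """
    C2 = kA * Csat / (Qout + kA * (1.0 + Qout / Q12))
    C1 = C2 * (1.0 + Qout / Q12)
    return C1, C2

# Illustrative (made-up) numbers: same product, tight vs. leaky enclosure.
tight = two_box_steady_state(kA=2.0, Csat=100.0, Q12=0.5, Qout=50.0)
leaky = two_box_steady_state(kA=2.0, Csat=100.0, Q12=20.0, Qout=50.0)
# The tight enclosure gives the lower living-space concentration (C2).
```

The design choice of making emission depend on enclosure concentration is what captures the qualitative result the study reports: a poorly ventilated enclosure lets its air approach saturation, slowing sublimation and sharply lowering the concentration reaching the living space.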

4.1.2. Uses

Table 1 illustrates the diversity of behavior surrounding the use[s][t][u][v][w] of mothballs in the home. It is clear that many users interact with the product differently than one might assume from reading the directions or warnings on the package label.

65% of participants reported using mothballs to kill or repel moths, the product's intended use. 35% reported other uses for the product, including as an air freshener and to repel rodents outdoors. Such uses imply different use behaviors related to the amount of product used and the location where it is applied. Effective use of paradichlorobenzene as an indoor insecticide requires use in an enclosed space, the more airtight the better. Ventilation is not recommended, and individuals should limit their exposure to the non-ventilated space. In contrast, use as a deodorizer disperses paradichlorobenzene throughout a space by design.

These different behaviors imply different resultant exposure levels. Use as an air freshener might produce higher exposure, because the product sits in the open in one's living space. Exposures might also be lower, as in the reported outdoor use for controlling mammal pests.

A use not reported in this study, perhaps due to the small sample size, or perhaps due to the stigma associated with drug use, is the practice of huffing or sniffing – intentional inhalation in order to take advantage of the physiological effects of volatile chemicals (Weintraub et al., 2000). This use is worth mentioning due to its high potential for injury, even if this use is far less likely than other uses reported here.

The majority of users place mothballs outside of sealed containers in order to control moths, another use that is not recommended by experts or on package labeling[x][y][z][aa][ab][ac][ad][ae]. Even though the product is recommended for active infestations, many users report using the product preventively, increasing the frequency of use and resultant exposure above recommended or expected levels. Finally, the amount used is greater than indicated for a majority of the treatment scenarios reported. These variances from recommended use scenarios underscore the need for effective risk communication, and suggest priority areas for reducing risk.

These results indicate a wide range of residential uses with a variety of exposure patterns. In occupational settings, one might anticipate a similarly broad range of uses. In addition to industrial and commercial uses as mothballs (e.g., textile storage, dry cleaning) and air fresheners (e.g., taxi cabs, restaurants), paradichlorobenzene is used as an insecticide (ants, fruit borers) or fungicide (mold and mildew), as a reagent for manufacturing other chemical products, plastics and pharmaceuticals, and in dyeing (GRR Exports, 2006).

4.1.3. Exposures

Modeling of home uses illustrates the range of possible exposures[af] based on self-reported behavior, comparing high and low exposure scenarios drawn from the self-reports to an “intended use” scenario that follows label instructions exactly.

Table 2 shows the inputs used for modeling and the resultant exposures. The label employed for the intended use scenario advised that one box (10 oz, 65 mothballs) be used for every 50 cubic feet (1.4 cubic meters) of tightly enclosed space. Thus, for a 2-cubic-meter closet, 90 mothballs were assumed for the intended use scenario. The low exposure scenario involved a participant self-report in which 10 mothballs were placed in a closed dresser drawer, and the high exposure scenario involved two boxes of mothballs reportedly placed in the corners of a 30-cubic-meter bedroom.

Results show that placing moth products in a tightly enclosed space significantly reduces the concentration in users’ living space.[ag][ah][ai][aj][ak][al][am][an] The high level of exposure resulting from the usage scenario with a large amount of mothballs placed directly in the living space coincided with reports from the user of a noticeable odor and adverse health effects that the user attributed to mothball use.

4.1.4. Risk perception

There was a wide range of beliefs about the function and hazards[ao][ap] of mothballs[aq][ar][as][at][au][av][aw] among participants, as well as a gap in knowledge between consumer and expert ideas of how the product works. Only 14% of the participants were able to correctly identify an active ingredient in mothballs, while 76% stated that they did not know the ingredients. Similarly, 68% could not correctly describe how moth products work, with 54% of all participants believing that moths are repelled by the unpleasant odor. Two-thirds of participants expressed health concerns related to using moth products[ax][ay]. 43% mentioned inhalation, 38% mentioned poisoning by ingestion, 21% mentioned cancer, and 19% mentioned dermal exposure. A few participants held beliefs that were completely divergent from expert models, for example a belief that mothballs “cause parasites” or “recrystallize in your lungs.”

A particular concern arises from the common belief that moths are repelled by the smell of mothballs. This may well mean that users would want to be able to smell the product to know it is working – when in fact this would be an indication that users themselves were being exposed and possibly using the product incorrectly. Improvements to mothball warnings might seek to address this misconception of how mothballs work, and emphasize the importance of closed containers, concentrating the product near the treated materials and away from people.

4.2. Mercury as a consumer product

Elemental mercury is used in numerous consumer products, where it is typically encapsulated, causing injury only when a product breaks. Examples include thermometers, thermostats, and items containing mercury switches such as irons or sneakers with flashing lights. The primary hazard arises from the fact that mercury volatilizes at room temperature. Because of its tendency to adsorb onto room surfaces, it has long residence times in buildings compared with volatile organic compounds. Inhaled mercury vapor is readily taken up by the body; in the short term it can cause acute effects on the lungs, ranging from cough and chest pain to pulmonary edema and pneumonitis in severe cases. Long-term exposure can cause neurological symptoms including tremors, polyneuropathy, and deterioration of cognitive function (ATSDR, 1999).

The second case study focuses on specific uses of elemental mercury as a consumer product among members of Latino and Caribbean communities in the United States. Mercury is sold as a consumer product in botánicas (herbal pharmacies and spiritual supply stores), for a range of uses that are characterized variously as folkloric, spiritual or religious in nature.

4.2.1. Methods

Newby et al. (2006) conducted participant observation and interviews with 22 practitioners and shop owners[az], seeking to characterize both practices that involved mercury use and perceptions of resulting risks. These practices were compared and contrasted with uses reported in the literature as generally attributable to Latino and Caribbean religious and cultural traditions, in order to distinguish uses that are part of the Santeria religion from uses that belong to other religious practices or are secular in nature. Special attention was paid to the context of Santeria, especially insider–outsider dynamics created by its secrecy, grounded in its histories of suppression by dominant cultures. Because the label Latino is applied to a broad diversity of ethnicities, races, and nationalities, the authors sought to attend to these differences as they apply to beliefs and practices related to mercury.

Uses reported in the literature and reported by participants to Newby et al. (2006) and Riley et al. (2001a, 2001b) were modeled to estimate resulting exposures. The fate and transport of mercury in indoor air is difficult to characterize because of its tendency to adsorb onto surfaces and the importance of droplet-size distributions on overall volatilization rates (Riley et al., 2006b). Nevertheless, simple mass transfer and indoor air quality models can be employed to illustrate the relative importance of different behaviors in determining exposure levels.

4.2.2. Uses

Many uses are enclosed, such as placing mercury in an amulet, gourd, walnut, or cement figure (Johnson, 1999; Riley et al., 2001a, 2001b, 2006b; Zayas and Ozuah, 1996). Other uses are more likely to elevate levels of mercury in indoor air to hazardous levels, including sprinkling of mercury indoors or in cars for good luck or protection, or adding mercury to cleaning compounds or cosmetic products (Johnson, 1999; Zayas and Ozuah, 1996).

Some uses, particularly those attributable to Santeria, are occupational in nature. Santeros and babalaos (priests and high priests) described being paid to prepare certain items that use mercury (Newby et al., 2006). Similarly, botanica personnel described selling mercury as well as creating certain preparations with it (Newby et al., 2006; Riley et al., 2001a, 2001b). One case report described exposure from a santero spilling mercury (Forman et al., 2000). Some of this work occurs in the home, making it both occupational and residential.

Across the U.S. population, including in Latino and Caribbean populations, it is more common for individuals to be exposed to elemental mercury vapor through accidental exposures such as thermometer, thermostat and other product breakage or spills from mercury found in schools and abandoned waste sites (Zeitz et al., 2002). The cultural and religious uses described above reflect key differences in use (including intentional vs. accidental exposure) that require attention in design of risk communications.

4.2.3. Exposures

Riley et al. (2001a, 2001b) solved a single-chamber indoor-air quality model analytically to estimate exposures based on scenarios derived from two interviews with mercury users. The same authors similarly modeled scenarios for sprinkling activities reported elsewhere in the literature, and additionally employed mass transfer modeling combined with indoor air quality modeling to estimate the exposures resulting from the contained uses described in interviews with practitioners (Newby et al., 2006).
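The exact model from Riley et al. is not reproduced in this excerpt, but a generic single-chamber model of this kind has a simple closed-form solution: for a well-mixed room of volume V with constant emission rate E and air-exchange rate λ, the mass balance dC/dt = E/V − λC gives C(t) = (E/(λV))(1 − e^(−λt)) from a clean start. A sketch with illustrative (not the study's) numbers:

```python
import math

def chamber_concentration(E, V, lam, t):
    """Concentration at time t in a single well-mixed chamber, from C(0) = 0.

    The mass balance dC/dt = E/V - lam*C has the analytical solution
    C(t) = (E / (lam * V)) * (1 - exp(-lam * t)).

    E   : emission rate, mg/h
    V   : chamber volume, m^3
    lam : air-exchange rate, 1/h
    t   : time, h
    """
    return (E / (lam * V)) * (1.0 - math.exp(-lam * t))

# Illustrative numbers: 1 mg/h released into a 30 m^3 room at
# 0.5 air changes per hour levels off at E/(lam*V), about 0.067 mg/m^3.
peak = chamber_concentration(E=1.0, V=30.0, lam=0.5, t=48.0)
```

The steady-state value E/(λV) makes the behavioral point explicit: for a fixed room and ventilation rate, exposure scales directly with how much mercury surface area is left emitting.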

Results presented in Table 3 show wide variation in predicted exposures resulting from different behavior patterns in different settings. Contained uses produce the lowest exposures. As long as the mercury remains encapsulated or submerged in other media, it poses little risk. By contrast, uses in open air can result in exposures orders of magnitude greater, depending on amounts and how the mercury is distributed, as droplet size and surface area are key determinants of exposure.
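The role of droplet size follows directly from geometry: splitting a fixed volume of mercury into n equal droplets multiplies the total exposed surface area, and hence an area-limited volatilization rate, by n^(1/3). A quick illustration (hypothetical function name and values, not from the study):

```python
import math

def total_droplet_area(volume_cm3, n_droplets):
    """Total surface area (cm^2) of a mercury volume split into n equal spheres.

    Each droplet has volume v = volume/n, radius r = (3v / (4*pi))**(1/3),
    and area 4*pi*r^2; summing over n droplets, total area scales as n**(1/3).
    """
    v = volume_cm3 / n_droplets
    r = (3.0 * v / (4.0 * math.pi)) ** (1.0 / 3.0)
    return n_droplets * 4.0 * math.pi * r * r

# One 1 cm^3 pool vs. the same mercury scattered as 1000 fine droplets:
# total area (and area-limited emission) grows by 1000**(1/3) = 10x.
pool = total_droplet_area(1.0, 1)
scattered = total_droplet_area(1.0, 1000)
```

This is why sprinkling, which disperses mercury into many small droplets, can produce exposures orders of magnitude above those from a contained or pooled spill.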

4.2.4. Risk perception

Newby et al. (2006) found that participants identified the risks of mercury use as primarily legal in nature.[ba][bb][bc][bd][be][bf] Concerns about getting caught by either police or health officials were strong[bg][bh][bi]. After these concerns, practitioners mentioned the risks of mercury use “backfiring” on a spiritual level, particularly if too much is used.[bj][bk][bl] There was some awareness of potential harmful health effects from mercury use[bm][bn], but the perceptions of mercury’s spiritual power and the perceived legal risks of possession and sale figured more prominently in users’ rationales for taking care in using it and clearly affected the risk-mitigation strategies described (e.g., not discussing use or sales openly, giving people a bargain so they won’t tell authorities).

Newby et al. (2006) discuss at length the insider–outsider dynamics in the study, and their influence on the strength of fears of illegality of mercury. Because of taboos on sharing details of Santeria practice, the authors warn against providing certain details of practice in risk communications designed by outsiders, as it would undercut the credibility of the warning messages.

Mental models of risk perception are critically important in all cases of consumer mercury use, both intentional and unintentional. When a thermostat or thermometer breaks in a home, many users will use a vacuum to clean up the spill[bo][bp][bq][br], based on a mental model of mercury’s hazards that does not include a notion of mercury as volatile. A key gap in people’s knowledge of mercury relates to its volatility; most lay people do not realize that vacuuming mercury will greatly increase its indoor air concentration, causing a greater health hazard than simply leaving mercury on the floor (Schwartz et al., 1992; Zelman et al., 1991). Thus, many existing risk communications about mercury focus on accidental spills and how (or how not) to clean them up.[bs][bt]


[a]Interesting timing for me on this paper- we’re currently working on a scheme to communicate hazards to staff & faculty at a new facility.  We have an ethnic diversity to consider and a number of the spaces will host the general public for special events.  Lots of perspectives to account for…

[b]If you have info on this next week, it would be interesting to hear what challenges you have run into and what you have done to address them.

[c]I’d be game for that.  I’m just getting into the project and was starting to consider different risk perceptions among different audiences.  This paper has given me some food for thought

[d]This is different from the way I have used the term “risk communication” traditionally. Traditionally risk communication is designed to help a variety of stakeholders work through scientific information to come to a shared decision about risk. See https://www.epa.gov/risk-communication for example. However, this paper’s approach sounds more like the public health approach used to collect more accurate information about a risk

[e]I really like the concept of “behaviorally realistic exposure assessment”. Interestingly, EPA has taken over regulation of workplace chemicals from OSHA because OSHA was legally constrained from using realistic exposure assessment (specifically, the assumption that PPE may not be worn correctly all the time)

[f]Wow – that is crazy to hear that OSHA would be limited in that way. One would think actual use would be an incredibly important thing to consider.

[g]OSHA is expected to assume that all of its regulations will be followed as part of its risk assessment. EPA doesn’t believe that. This impacted EPA’s TSCA risk assessment of Methylene Chloride

[h]There is a press story about this at

https://www.cbsnews.com/video/family-of-man-who-died-after-methylene-chloride-exposure-call-epa-decision-step-in-the-right-direction/

[i]I’m wondering if the researchers are in academia or from the company. If from companies which supplied mothballs, I’m surprised that this was not one of the first things that they considered.

[j]That’s an interesting question. Mothballs are a product with a long history and they were well dispersed in the economy before regulatory concerns about them arose. So the vendors probably had an established product line before the regulations arose.

[k]Wondering about the age distribution of the study group- when I think of mothballs, I think of my grandparents who would be about 100 years old now.  Maybe younger people would handle the mothballs differently since they are likely to be unfamiliar with them.

[l]I’m also wondering about how they recruited these volunteers because that could introduce bias, for example only people who already know what they are might be interested

[m]Volunteer recruitment is an interesting thought…what avenue did they use to recruit persons who had this product? Currently wondering who still uses them since I don’t know anyone personally who talks about it…

[n]It sounds like they went into the shopping area outside Smith College and recruited people off the street. Northampton is a diverse social environment, but I suspect mothball users are from a particular segment of society

[o]How the recruitment happened seems like a key method that wasn’t discussed sufficiently here. After re-reading this, it might be that they recruited people without knowing if they had used mothballs or not.

[p]Interesting thought. When I was in my undergraduate studies, one of my professors characterized a substance as “smelling like mothballs,” and me and all of my peers were like “What? What do mothballs smell like…?” Curious as to whether product risk assessment is different between these generational groups.

[q]Did you and your undergraduate peers go grab a box and sniff them to grok the reference?

[r]I certainly did not! But I wonder how many people would have, if there were available at the time!

[s]Would anyone like to share equivalents they have seen in research labs? Researchers using a product in a way different from intended? Did it cause any safety issues? Were the researchers surprised to find that they were not using as intended? Were the researchers wrong in their interpretations and safety assessment? If so, how?

[t]I presume you’re thinking of something with a specific vendor-defined use as opposed to a reagent situation where, for example, a change in the acid use to nitric led to unforeseen consequences.

[u]I agree that this can apply to the use of chemicals in research labs. Human error is why we want to build in multiple controls. In terms of examples, using certain gloves of PPE for improper uses is the first thing that comes to mind.

[v]I have seen hardware store products repurposed in lab settings with unfortunate results, but I can’t recall specifics of these events off the top of my head. (One was a hubcap cleaning solution with HF in it used in the History Dept to restore granite architectural features.)

[w]I have seen antifreeze brought in for Chemistry labs and rock salt brought in for freezing point depression labs…not dangerous, but not what they were intended for.

[x]So is the take-away on this point and the ones that follow in the paragraph that another communication method is needed?  Reading the manual before use is rare (in my experience)- too wordy.  Maybe pictographs or some sort of icon-based method of communications.

[y]This seems like the takeaway to me! Pictures, videos—anything that makes the barrier to content engagement as low as possible. Even making sure that it is more difficult to miss the information when trying to use the product would likely help (i.e., not having to look it up in a manual, not having to read long paragraphs, not having to keep another piece of paper around)

[z]In the complete article, they discuss three hurdles to risk understanding:

1. Cognitive heuristics

2. Information overload

3. Believability and self-efficacy

These all sound familiar from the research setting when discussing risks

[aa]Curious how many people actually read package labeling, and of those people how many take the labeling as best-practice and how many take it as suggested use. I’m also curious how an analogy to this behavior might be made in research settings. It seems to me that there would likely be a parallel.

[ab]I believe that the Consumer Product Safety Commission does research into this question

[ac]Other considerations: is the package labeling comprehensible for people (appropriate language)? If stuff is written really small, how many people actually take the time to read it? Would these sorts of instructions benefit more from pictures rather than words?

[ad]I was watching a satirical Better Call Saul “legal ethics” video yesterday where the instructor said “it doesn’t matter how small the writing is, you just have to have it there”. See https://www.dailymotion.com/video/x7s7223 for that specific “lesson”

[ae]I think we’d see a parallel in cleaning practices, especially if it’s a product that gets diluted different amounts for different tasks. Our undergraduate students for example think all soap is the same and use straight microwash if they see it out instead of diluting.

[af]Notably, even when putting them outside in a wide area, you can still smell them from a distance, which would make them a higher exposure than expected.  Pictures and larger writing on the boxes would definitely help, but general awareness may need to be shared another way.

[ag]Historical use was in drawers with sweaters, linens, etc (which is shown to be the “low exposure”)…were these products inadvertently found to be useful in other residential uses much later?

[ah]I wonder if the “other uses” were things consumers figured out and shared with their networks – but those uses would actually increase exposure.

[ai]It appears so!  Another issue may be the amount of the product used.  Using them outside rather than in a drawer may minimize the exposure some, but that would be relative to exactly how much of the product was used…

[aj]The comment about being able to smell it to “know it is working” is also interesting. It makes me think of how certain smells (lemon scent) are associated with “cleanliness” even if it has nothing to do with the cleanliness.

[ak]I’ve also heard people refer to the smell of bleach as the smell of clean – although if you can smell it, it means you are being exposed to it!

[al]This is making me second guess every cleaning product I use!

[am]It also makes me wonder if added scent has been used to discourage people from overusing a product.

[an]I think that is why some people tout vinegar as the only cleaner you will ever need!

[ao]What do you think would lead to this wide range?

[ap]If they are considering beliefs that were passed down through parents and grandparents this would also correlate with consumers not giving attention to the packaging because they grew up with a certain set of beliefs and knowledge and they have never thought to question it.

[aq]Is it strange to hear this given that the use and directions are explained right on the packaging?

[ar]I don’t think it is that strange. I think a lot of people don’t bother to read instructions or labels closely, especially for a product that they feel they are already familiar with (grow up with it being used)

[as]Ah, per my earlier comment regarding whether or not people read the packaging—I am not actually very surprised by this. I think there is somewhat of an implicit sense that products made easily available are fairly safe for use. That, coupled with a human tendency to acclimate to unknown situations after no obvious negative consequences plus the sheer volume of text meant to protect corporations (re: Terms of Use, etc), I think people sort of ignore these things in their day-to-day.

[at]I agree it seems that people use products they saw used throughout their childhood…believed them to be effective and hence don’t read the packaging.  (Clearly going home to read the packaging myself…as I used them recently to repel skunks from my yard).

[au]Since the effects of exposure to mothball active ingredients are not acute in all but the most extreme cases (like ingestion), it is unlikely that any ill health effects would even be linked to mothballs

[av]I have wondered if a similar pattern happens with researchers at early stages. If the researcher is introduced to a reagent or a process by someone who doesn’t emphasize safety considerations, that early researcher thinks of it as relatively safe – then doesn’t do the risk assessment on their own.

[aw]Yes, exactly. Long-term consequences are much harder for us to grapple with than acute consequences, which may lead to overconfidence and overexposure

[ax]I wonder why with so many having health concerns, only 12% used on the correct “as needed” basis.

[ay]A very, very interesting question. I wonder if it has something to do with a sense that “we just don’t know” along with a need to find a solution to an acute problem. i.e., maybe people are worried about whether or not it is safe in a broad, undefined, and somewhat intractable manner, but are also looking to a quick solution to a problem they are currently facing, and perhaps ignore a pestering doubt

[az]Again here, I’m wondering why more is not described about how they identified participants, because it is a small sample size and there is a possibility for bias

[ba]This is an important finding. Public risk perception and professional risk perception can be quite different. I don’t think regulators consider how they might contribute to this gap because each chemical is regulated in isolation from other risks and exposures.

[bb]It is also related to the idea of how researchers view EHS inspections. Do they see them as opportunities for how they can make their research work safer? Or do they merely see them as annoying exercises that might get them “in trouble” somehow?

[bc]I think that in both Hg case and the research case, there is a power struggle expressed as a culture clash issue. Both the users of Hg for spiritual purposes and researchers are likely to feel misunderstood by mainstream society represented by the external oversight process

[bd]I think this is *such* an important takeaway. The sense as to whether long documents (terms of use and other contracts, etc) and regulatory bodies (EHS etc) are meant to protect the *people/consumers* or whether they are meant to protect the *corporation* I think is a big deal in our society. Contracts, usage information, etc abound, but it’s often viewed (and used) as a means to protect from liability, not as a means to protect from harm. I think people pick up on that.

[be]I am in total agreement – recently I sat through a legal deposition for occupational exposure related mesothelioma, it was unsettling how each representative from each company pushed the blame off in every other possible direction, including the defendant. There are way more legal protections in place for companies than I could have ever imagined.

[bf]There is some discussion of trying to address this problem with software user’s agreements, but I haven’t heard of this concern on the chemical use front.

[bg]This is to say there is a disconnect from the reasoning behind the legal implications? Assuming most are not aware of the purpose of the regulations as protections for people?

[bh]I don’t know of any agency that would recognize use of Hg as a spiritual practice. Some Native Americans have found their spiritual practices outlawed because their use of a material is different scenario from the risk scenario that the regulators base their rules on

[bi]I agree with your comment about a disconnect. Perhaps if they understood more about the reasons for the laws they would be more worried about their health rather than getting in trouble.

[bj]To a practitioner, is a spiritual “backfire” completely different from a health effect, or just a different explanation of the same outcome?

[bk]Good question

[bl]Good point – I thought about this too. I’d love to hear more about what the “spiritual backfire” actually looked like. Makes me think of the movie “The Exorcism of Emily Rose” where they showed her story from the perspective of someone who thinks she is possessed by demons versus someone who thinks she is mentally ill.

[bm]I am curious to find out how risk communication plays a role here cause it seems those using the mercury know about its potential health hazard.

[bn]Agree – It does say “some” awareness so I would be interested to see how bad they think it is for health vs reality. It looks like they are doing a risk analysis of sorts and are thinking the benefits outweigh the risks.

[bo]I’m not sure how to articulate it, but this is very different than the spiritual use of mercury.  Spiritual users can understand the danger of mercury exposure but feel the results will be worth it.  The person wielding a vacuum does not understand how Hg’s hazard is increased through volatilization.  I suspect a label on vacuum cleaners that said ‘NOT FOR MERCURY CLEANUP’ would be fairly effective.

[bp]Would vacuum companies see this as worth doing today? I don’t think I really encountered mercury until I was working in labs – it is much less prevalent today than it used to be (especially in homes), so I wonder if they would not see it as worth it by the numbers.

[bq]Once you list one warning like this, vacuum companies might need to list out all sorts of other hazards that the vacuum is not appropriate for cleanup

[br]Also, Mercury is being phased out in homes but you still see it around sometimes in thermometers especially. Keep in mind this paper is from 2014.

[bs]I don’t understand this statement in the context of the paragraph. Which risk communication messages is she referring to? I know that institutional response to Hg spills has changed a lot over the last 30 years. There are hazmat emergency responses to them in schools and hospitals monthly

[bt]I think this vacuum example is just showing how there is a gap in the risk communications to the public (not practitioners), since they mainly focused on clean up rather than misuse. It would be nice if there was a reference or supporting info here. They may have looked at packaging from different mercury suppliers.

2020-21 CHAS Journal Club Index

During the 2020-21 academic year, an average of 15 to 20 people gathered to review and discuss academic papers relevant to lab safety in academia.

During the fall, we followed the traditional model of a presenter who led the discussion after the group was encouraged to read the paper. In the spring, we moved to a two-step process: first, a table read in which the group silently and collaboratively commented on an abbreviated version of the paper in a shared Google document one week, followed by an oral discussion the second week. The second approach enabled much more engagement by the group as a whole.

The spring papers we discussed were primarily focused on graduate-student-led Lab Safety Teams and included (in reverse chronological order):

The fall papers were focused primarily on the idea of safety culture and included (in reverse chronological order):

  • What Is A Culture Of Safety And Can It Be Changed?
  • Safety Culture & Communication
  • Supporting Scientists By Making Research Safer
  • Perspectives On Safety Culture
  • Making Safety Second Nature In An Academic Lab
We will pick up the Journal Club again in the fall of 2021. We are interested in looking at the psychology of safety with two things in mind:

  • (1) papers with well-done empirical studies, and
  • (2) studies that investigate an issue that is present in academia.

It is likely that papers investigating the psychology of safety have focused primarily on industry (construction, aviation, etc.), so it will be important to identify the specific phenomenon they are investigating and be prepared to translate it to academia. Questions about the CHAS Journal Club can be directed to membership@dchas.org

    Engaging senior management to improve the safety culture

    The Art & State of Safety Journal Club, 05/05/21

    Excerpts from “Engaging senior management to improve the safety culture of a chemical development organization thru the SPYDR (Safety as Part of Your Daily Routine) lab visit program”

    written by Victor Rosso, Jeffery Simon, Matthew Hickey, Christina Risatti, Chris Sfouggatakis, Lydia Breckenridge, Sha Lou, Robert Forest, Grace Chiou, Jonathan Marshall, and Jean Tom

    Presented by Victor Rosso

    Bristol-Myers Squibb

    The full paper can be found here: https://pubs.acs.org/doi/10.1016/j.jchas.2019.03.005 

    INTRODUCTION

    The improvement and enrichment of an organization’s safety culture are common goals throughout both industrial and academic research. As a chemical process development organization that designs and develops safe, efficient, environmentally appropriate and economically viable chemical processes for the manufacture of small molecule drug substances, we continually strive to improve our safety culture. Cultivating and energizing a rich safety culture is critical for an organization whose members are performing a multitude of processes at different scales using a broad spectrum of hazardous chemical reagents as its core activities. While we certainly place an emphasis on utilizing greener materials and safer reagents, the nature of our business requires us to work with all types of hazardous and reactive chemicals and the challenges we face are pertinent to any chemical research organization.

    In our organization of approximately 200 organic and analytical chemists[a] and chemical engineers, we have a Safety Culture Team (SCT) [b][c][d][e][f][g]whose mission is to develop programs to enhance the organization’s safety culture. To make this culture visible, the team developed a key concept, Safety is Part of Your Daily Routine, into a brand with its own logo, SPYDR. To build on this concept, we designed a program known as the SPYDR Lab Visits, shown in Figure 1. The program engages our senior leadership[h][i] by having them interact with our scientists directly at the bench in the laboratory[j][k][l][m] to discuss safety concerns. This program, initiated in 2013, has visibly engaged our senior leaders directly in the organization’s safety culture and brought to our attention a wide range of safety concerns that would not readily appear[n][o][p] in a typical safety inspection. Furthermore, this program provides a mechanism for increased communication between all levels of the organization by arranging meetings between personnel who may not normally interact with one another on a regular basis. The success of this program has led to similar programs across other functional areas in the company.[q]

    A key safety objective for all organizations is to ensure that the entire organization can trust that the leadership is engaged in and supportive of the safety culture. [r][s][t][u]Therefore, this program was designed to (1) emphasize that safety is a top priority from the top of the organization to the bottom[v][w][x][y][z], (2) engage our senior leadership with a prominent role in the safety conversations in the organization, (3) build a closer relationship between our senior leaders and the laboratory occupants, and (4) utilize the feedback obtained from the visits to make the working environment better for our scientists. The program is a supplement to, not a replacement for, the long-standing laboratory inspection program done by the scientists in the organization.

    The program involves assigning the senior leaders to meet with 2–5 scientists in the scientists’ laboratory. There are approximately 40 laboratories in the organization, and over the course of the year, each laboratory will meet with 2–3 senior leaders and each senior leader will visit 4–6 different laboratories. All of this is organized using calendar entries, which inform the senior leaders and scientists of where and when to meet and contain the survey link to collect the feedback.
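As a rough sanity check on the scheduling arithmetic above, the lab and per-lab/per-leader visit counts are taken from the text; the implied visit totals and leader head-count bounds below are our own back-of-envelope illustration, not figures from the paper:

```python
import math

# Scheduling counts taken from the paragraph above:
# ~40 labs; each lab hosts 2-3 senior-leader visits per year;
# each senior leader makes 4-6 lab visits per year.
labs = 40
min_visits_per_lab, max_visits_per_lab = 2, 3
min_visits_per_leader, max_visits_per_leader = 4, 6

# Total visits the labs collectively receive in a year.
min_total = labs * min_visits_per_lab   # 80
max_total = labs * max_visits_per_lab   # 120

# Implied bounds on how many senior leaders are needed to cover them
# (an illustrative estimate; the paper does not state a leader count).
fewest_leaders = math.ceil(min_total / max_visits_per_leader)  # 14
most_leaders = math.ceil(max_total / min_visits_per_leader)    # 30

print(f"{min_total}-{max_total} visits/year, roughly "
      f"{fewest_leaders}-{most_leaders} leaders needed")
```

The estimate suggests the program keeps a pool of a few dozen senior leaders busy with half-hour visits across the year, consistent with the "over 300 laboratory visits" since 2013 reported in the conclusion.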

    As a result of this program, our senior leaders engage our bench scientists in conversations aimed primarily at drawing out the safety concerns of our scientists. However, these conversations can cover anything that concerns our team members[aa][ab]. This can range from safety issues, laboratory operations, and current research work to organizational changes and personal concerns. The senior leadership regularly reminds and encourages the scientists to engage on any topic of their choosing; this creates a collegial atmosphere for laboratory occupants to voice their safety concerns and ideas.

    The laboratory visit program was modeled around the Safety SPYDR, and thus we designed the program to have 8 legs[ac]. The first two legs consist of the program’s goals for the visit. We asked the senior leaders to ensure that they state the purpose of the program: that they are visiting the laboratory to find ways to improve lab safety. The second leg, which is the primary goal, is to ask “what are your safety concerns?”. Often this is met with “we have no safety concerns”, but using techniques common in the interviewing process, the leaders ask deeper probing questions to draw out what the scientists care about, and with additional probing[ad][ae][af][ag], root causes of the safety concerns will emerge. Once the scientists start talking about one safety concern, often multiple concerns will then surface, giving our safety teams an opportunity to deal with them.

    The next two legs of the SPYDR Lab visits consist of observations we ask our senior leaders to make on laboratory clutter and access to emergency equipment[ah]. If the clutter level of a laboratory is deemed unacceptable,[ai][aj][ak][al] the SCT will look to provide support to address root causes of the clutter. Typical solutions have been addition of storage capacity, removal of excess equipment from the work spaces, and alternative workflows. The second observation is to ensure that clear paths exist from the work areas to emergency equipment, should an incident occur. We wanted to make sure a direct line existed to the eyewash station/shower such that the occupant would not be tripping over excess carts, chillers, shelving, or miscellaneous equipment. These observations led to active coaching of our laboratory occupants to ensure safe egress and to modifications of the work environment. For example, relocating many chillers from carts in front of the hood to compartments underneath the hood improved egress in a number of laboratories.

    For the final four legs of the SPYDR Visit, we ask the senior leaders to probe for understanding on various topics[am] that range across personal protective equipment selection, waste handling, reactor setup, and chemical hazards. The visitor is asked to rate these areas as needs improvement, average, high, or very high. Figure 2 compares these ratings from the first year (2013) with the current year (2018). In the first year of the program, there were a few scattered “needs improvement” ratings that resulted in communication with the line management of the laboratory. After the initial year, “needs improvement” ratings became very rare in all cases except clutter. In the current year, we shifted two topics[an] to Laboratory Ergonomics[ao] and Electricity, which uncovered additional opportunities for improvement. We recommend changing the contents of these legs on a regular basis[ap], as it shifts the focus of the discussion and potentially uncovers new safety concerns.

    FEEDBACK MECHANISM

    The SPYDR lab visits are built around a feedback loop, illustrated in Figure 3, that utilizes an online survey to both track completion of the visits and communicate findings back to the SCT. The order of events around a laboratory visit consists of scheduling a half-hour meeting between our senior leaders and the occupants in their laboratories. Once the visit is completed, the visitors fill out the simple online survey (Figure 4) that details their findings for the visit. The SCT meets regularly to review the surveys and take actions based on the occupants’ safety concerns. This often involves following up with the team members in the laboratory to ensure they know their safety concerns were heard[aq][ar].

    Two potential and significant detractors for this program exist. The first is that if the senior visitor does not show up for the visit, it creates a perception that senior management does not embrace safety as a top priority. The second pitfall is if the visitor uncovers a safety concern but does not fill out the survey to report it, or if the SCT is unable to address a reported concern; in either case, there would be a perception that a safety concern was reported to a senior leader and “nothing happened”. To minimize these risks, there is significant emphasis on the senior leaders taking ownership of the laboratory visits[as][at] and on the SCT taking ownership of the action items and ensuring the team members know their voices have been heard.

    DISCUSSION OF SAFETY CONCERNS

    A summary of safety concerns is illustrated in Table 1. By a wide margin, clutter was the predominant safety concern in 2013, as it was noted in 50% of the laboratories visited. Three major safety programs within the department were inspired by early visits in order to reduce clutter in the laboratories. The first involved several rounds of organized general laboratory cleanouts to remove old equipment[au][av]. A second program systematically purged old and/or duplicate chemicals throughout the department.[aw] Most recently, a third program created a systematic long-term chemical inventory management system[ax][ay][az] that was designed to reduce clutter caused by the large number of processing samples stored in the department. This program has returned over 900 sq. feet of storage space to our laboratories and has greatly reduced the amount of clutter in the labs. Although clutter remains a common theme in our visits, the focus is now often related to removal of old instruments and equipment [ba][bb][bc][bd]rather than a gross shortage of storage space.

    In the first year of the program, one aspect of the laboratory visit was to discuss hazards associated with chemical reactions (feedback rate of 28%) and equipment setup (32%). A common thread in these discussions was expectations of collaboration and behavior from “visiting scientists”: colleagues[be][bf][bg][bh] and project team members from other laboratories coming to a specific laboratory to use its specialized equipment (examples: 20 liter reactors, automated reactor blocks). This caused some friction between the visiting scientists and their hosts over safety expectations. The SCT addressed this by convening a meeting between hosts and visiting scientists to discuss the root causes of the friction and produce a list of “best practices”, shown in Figure 5, to improve the work experience for both hosts and visitors; the list is still in use today for specialty labs with shared equipment.[bi][bj]

    The next major category of safety concerns for our laboratory visits was facility repairs, which were present in 24% of our first-year visits. These included items such as leaking roofs, unsafe cabinet doors, or delays in re-energizing hoods after fire drills. These were addressed by connecting our scientists to the appropriate building managers, who were able to evaluate and address the concerns. After the initial year, most of the facility-related concerns transitioned to the addition/removal of storage solutions within specific laboratories. Currently, when new laboratories join the SPYDR Lab Visit program, major facility concerns are quickly reported.

    These visits also brought to light a common problem in the laboratories: the loss of electrical power when circuit breakers tripped because the electrical outlets associated with a laboratory hood were being used at capacity. This identified a need to increase the electrical capacity in the fume hoods, which is now being addressed by an ongoing capital project.

    By the third year of the program, the nature of the safety concerns changed as many of the laboratory-based concerns had been addressed[bk]. Concerns raised now included site issues such as traffic patterns, pedestrian safety, walking in parking lots at night, and training. [bl]Items addressed for the site include the modification of on-site intersections and the movement of a fence line to enable safer crosswalks and improve drivers’ line of sight. A simple question raised about fire extinguisher training and who was permitted to use an AED device led to the expansion of departmental fire extinguisher training to a broader group and the offering of AED/CPR training to the broader organization.

    These safety concerns would not be typically detected by a laboratory safety inspection program and are only accessible by directly asking the occupants what their safety concerns are. [bm][bn][bo]Through the SCT, these issues were resolved over time as the team took accountability to move the issue through various channels (facilities, capital projects, ordering of equipment) to develop and implement the solutions.

    CONCLUSION

    Since 2013, this novel program[bp] has successfully engaged our leadership with laboratory personnel and has led to hundreds of concerns being addressed[bq]. The concerns have arisen from over 300 laboratory visits, and more than a thousand safety conversations with our scientists. Because this is not a safety inspection program, these visits routinely uncover new safety concerns that would not be expected to surface in our typical laboratory inspection program. The SPYDR visit program is a strong supplement to the laboratory inspection program, and has produced a measurable impact on the safety culture.

    A collateral benefit of the program is that it drives social interaction within the department: senior leaders who may not normally interact with certain parts of the organization have a chance to visit those team members in their workplace and learn firsthand what they do in the organization[br][bs][bt][bu].

    [a]Only a bit bigger than some of the bigger graduate chemistry programs in the US.

    [b]How large is the Safety culture Team?

    [c]in the range of 6-10

    [d]Does this fall in the “other duties as assigned” category or more driven by personal interest in the topic?

    [e]Do position descriptions during recruitment include Required or Preferred skills that would add value to inclusion on the SCT?

    [f]I was unable to access the article in its entirety so this question may be answered there….  What is the composition of the SCT- who in the organization participates?

    [g]representatives from various departments and leaders of safety teams

    [h]do some of the senior leadership going to the labs have lab experience?

    [i]yes

    [j]Was this a formal thing or out of the blue visits?

    [k]initially planned as random, unannounced, we had to revert to scheduled in order to ensure scientists were present and available when leaders stopped in

    [l]We had the same thing in academic lab inspections. While unannounced visits seemed more intuitive, the benefit of the visits wasn’t there if the lab workers weren’t available to work with the inspectors. So scheduling visits worked out better in the end

    [m]In terms of compliance inspections, I would think that the benefit of scheduled inspections is that it can motivate people to clean their labs before the inspection. While I get that it would be preferable that they clean their labs more regularly, the announced visit seems like it would guarantee that all labs get cleaned up at least once per year. And maybe they’ll see the benefit of the cleaner lab and be more inspired to keep it cleaner generally – but I realize that might just be wishful thinking.

    [n]So important. We keep running into the issues of experimental safety getting missed by 1-shot inspections.

    [o]Some of that could be addressed with better risk assessment training of research staff.

    [p]concerns are generally wide ranging, most started out as lab centric in the early years then expanded beyond the labs

    [q]Are these other functional areas related to safety or other issues (e.g. quality control, business processes, etc.)

    [r]This seems key but also can be super hard to obtain.

    [s]I think that it requires leadership that is familiar with all of the different kinds of expertise in the organization to say that. Higher ed contains so many different types of expertise, it is difficult for leadership to know what this commitment entails

    [t]And far too often in my experience in academia those in leadership positions have limited management training, which can inhibit good leadership traits.

    [u]Many academics promoted into chair or dean level get stuck on  budgeting arguments rather than more strategic / visionary questions

    [v]I’ve found this expectation to be quite challenging at some higher ed institutions.

    [w]Every time I bring it up to upper management in higher ed, they say “of course safety is #1”, but they don’t want to spend their leadership capital on it.

    [x]the program was designed to give  senior leaders a role specifically designed for them

    [y]@Ralph I completely agree!

    [z]This approach seems to be a way for leadership to get involved without spending a lot of leadership capital.

    I always had my best luck “inspecting” labs when I could lead with science-related questions rather than compliance issues

    [aa]I think it is really cool that this is thought of expansively.

    [ab]Nice to not put bounds on safety concerns going into the conversation.  Reinforced later in the paper thru the identification and mitigation of hazards well outside the lab

    [ac]Are these legs connected to on boarding training for lab employees?

    [ad]This skill would be exceptionally important when discussing such issues with graduate students.

    [ae]Are scientists trained in this technique?  Or does the SCT have individuals selected for that skill set?  When I look around campus at TTU I can see lots of opportunity for collaboration by bringing “non-scientists” into the discussion to get new perspectives and possibly see new problems

    [af]This definitely takes practice, but it can also be learned in workshops and by observing good mentoring. The observation process requires a conscious commitment by both the mentor and the employee, though

    [ag]one thought at least for me, was the interviewing experience senior folks would have and this would be a chance to practice said skill

    [ah]Seems like the process could have some standard topics that can be replaced with new focus areas as the program matures or issues are addressed

    [ai]Lab clutter is an ongoing stress for me. Is the clutter related to current work or a legacy of previous work that hasn’t been officially decommissioned yet? 

    Did your organization develop a set of process decommissioning criteria to maintain lab housekeeping?

    [aj]Part of me feels that all researchers should at some point visit/tour a trace analytical laboratory.  Contamination is always of such concern when looking for things at ppb/ppt/ppq levels, that clutter rarely develops.  But outside of trace analysis laboratory its definitely a continuous problem in most research spaces.

    [ak]This is a good idea. I wonder if Bristol Myers Squibb has a program to rotate scientists among different lab groups to share “cross-cultural” learning?

    [al]@Chris – good point. I started research in a molecular genetics lab. While there were some issues, the benches and hoods were definitely MUCH cleaner and better organized because of concerns over contamination. Also, we have lab colonies of different insects in which things had to be kept very clean in order to keep lab-acquired disease transmission low for the insects. I was FLOORED by what I saw in chemistry labs once I joined my department. We very much had different ideas about what “clean” meant.

    [am]I really like this idea as well. Make sure everyone is on the same page.

    [an]I like the idea of shifting focus once the previous issues have been addressed

    [ao]Great to see emphasis on an often overlooked topic

    [ap]Would reviewing these legs annually be regular enough? Or too often?

    [aq]So important – people are more willing to discuss issues if they feel like someone is really listening and is prepared to actually address the issues.

    [ar]And it demonstrates true commitment to the program and improvements.  Supports the trust built between the different stakeholders.

    [as]Is there some sort of training or prepping done with these senior leaders?

    [at]a short training session occurs to introduce leaders to the purpose of the program

    [au]Thank you! This is a challenge for all laboratory organizations I have worked for

    [av]Agreed!  Too often things are kept even when there is no definitive plan for future use.

    [aw]What % of the chemical stock did this purge represent?

    [ax]I’m always amazed when I learn of a laboratory that attempts to function without a structured chemical management system.  The ones without are often those that duplicate chemical purchases, often in quantities of scale (for price savings) that far exceed their consumption need.

    [ay]I once asked the chem lab manager about this. He said that 80% of his budget is people and 15% chemicals. He’d rather focus his time on managing the 80% than the 15%.

    He had a point, but I think he was passing up an important opportunity with that approach

    [az]@Chris – and grad students waste loads of time looking for the reagents and glassware they need for their experiments. And when they find them, sometimes they have been so poorly stored/ignored that they are contaminated or otherwise useless. Welcome to my lab!

    [ba]is this more of a challenge in academia vs. industry?

    [bb]This is definitely a pretty big issue for us at the university I work at. Constant struggle.

    [bc]One of the things I found frustrating while working at a govt lab is that I found out that we legally weren’t allowed to donate old equipment. I was simultaneously attending a tiny PUI nearby who would have LOVED to take the old equipment off their hands. Now working in an academic lab, I have been able to snag some donated equipment from industry labs.

    [bd]@Jessica as someone presently in government research I share your frustration!  I have to remind myself that the government systems are all too often set up to prevent abuse, rather than be efficient and benevolent.

    [be]Are these other laboratories from within your organization or external partners?

    [bf]visitors from other labs within the department,

    [bg]We had that challenge to some extent, but the bigger issues arose when visitors from other campuses showed up with different safety expectations than we were trying to instill. International visitors were a particularly interesting challenge…

    [bh]@Ralph that was often my experience too, dramatically differing safety expectations now being asked to share research space.

    [bi]I wish this occurred with greater frequency in academia.  Too often folks are too concerned about hurting a colleagues’ feelings or ego than to have a conversation to address safety concerns.

    [bj]I like the best practices approach- less prescriptive and allows researchers some latitude in meeting the requirements.  Provides an opportunity for someone (who is a subject matter expert in their field) to come up with a better solution

    [bk]That’s great, shows a commitment to the program and supports the trust that has been built between the stakeholders.

    [bl]These are important issues in setting the tone of a safety culture for an organization

    [bm]Such an important statement here.

    [bn]Agreed!

    [bo]+1

    [bp]This is a good one!

    [bq]Since I’m sure these were tracked, this is a nice metric- prevalence of a particular concern over time.

    [br]does this go both ways at all? do the research scientists have the chance to ask how their research projects impact the goals of senior leadership/company?

    [bs]there is a social interaction aspect here where scientists get to interact with leaders they normally would not cross paths with; we take this opportunity for our analytical leaders to visit chemists, chemistry leaders to visit engineers, and engineering leads to visit analytical chemists

    [bt]Did business leadership (sales, marketing, etc.) have the opportunity to see this kind of interaction? Or do they have separate interactions with lab staff?

    [bu]In higher ed, it would be interesting to take admissions staff on lab tours to inform them about what is going on there and potentially give feedback about what students and parents are interested in