

Now it's time for Section III.


Section III

I have considered the benefits of an in-depth engagement with the facts of the matter and here I wish to consider the importance of engaging with ‘how the facts matter’ to various stakeholders.

For the purposes of this paper, I will explore one example. Through attending to testimony, autobiography and sociology, the engaged ethicist would come across the issue of workplace stress in laboratories. There is increasing concern about burnout stress, compassion fatigue and moral stress.

Burnout stress is a form of secondary PTSD arising through witnessing suffering. Compassion fatigue is described as a state of psychological and emotional dysfunction resulting from prolonged exposure to compassion stress. Moral stress occurs when one is aware of the ethical principles at stake, but external factors prevent one from doing what one feels is right (Johnson and Smajdor 2019; LaFollette et al. 2020; Newsome et al. 2019).

These harms appear to be exacerbated where:

a) staff feel that animal suffering is not warranted by the results.

b) staff feel that animal welfare could be increased but their recommendations are not heard by their seniors.

c) staff feel that they cannot express moral doubts, grief, sympathy or guilt.

Firstly, these harms are morally significant in themselves and would be increased if, in an attempt to obtain more clinically relevant knowledge, more rodents were used - especially if there were questions regarding when, or indeed whether, such research would lead to substantive benefits.

Secondly, this suggests a possible inertia built into the system. Let me explain.

Clearly, welfare is of high priority in UK institutions. ‘Good science’ itself benefits from reducing stress in laboratory animals, so an awareness of animal suffering is a salient factor for both scientific and welfare reasons (Balcombe, Barnard, and Sandusky 2004). For example, a paper on the welfare of rodents in rheumatoid arthritis studies explains that researchers can tell if a mouse is suffering beyond what is experimentally acceptable by learning to recognise ‘pain faces’ (Hawkins et al. 2015). For technicians, who often deem themselves to be a ‘buffer’ between their animal charges and the requirements of the science (Birke et al. 2007), the issue of welfare will be predominant and is likely to be accompanied by compassion. As former technician Rachel Weiss put it, ‘the only way that I could care for [the chimps] was to care about them’.⁠[1]

However, there is evidence that the educational system and culture of research scientists encourage a less compassion-centred attitude. Bioethicist and former researcher John Gluck explains that during his training he learned to ‘put aside identification with animal pain and suffering and replace it with a passion for advancing scientific knowledge’. Later, when he developed moral doubts, he became aware that even to question the validity of animal models roused rancour among his colleagues (Gluck 2016). Researchers, familiar with and accepting of certain protocols, may not take kindly to being questioned about their methods (Birke et al. 2007; Rollin 1981). Dr. Alysson Muotri, for example, had to ‘muster the courage’[2] to challenge his colleagues about the validity of using mouse brains to explore complex human neurological disorders. Now he uses ‘brain organoids’.

If one returns to the factors that exacerbate workplace stress, (a) and (b) seem specifically related to concerns over whether the 3Rs are being properly implemented, while (c) appears to be a recognition that, even where the balance of harm and benefit is acceptable in a utilitarian framework, there remains a sense that harms to animals are still regrettable. Such concerns and moral qualms, acting, perhaps, as the ‘conscience’ of the institution, may be red flags rather than merely ‘emotional’ or ‘irrational’ intuitions, demonstrating, say, that in a given experiment the harm is not justified by the benefits.

If such voices are silenced, then it is possible that the practice itself may be precluding the stated aims of the guidelines (as outlined in Section II) to reduce reliance on animal models. Recommendations, such as the 3Rs, may not be enough to guarantee the transition from rodent use - even in cases of dubious or diminishing returns. Either structural changes (to encourage open discussion and value evolution) or non-moral incentives or punishments may be required to enable the normative aims of the recommendations to be more consistently pursued. It is notable, for example, that the ban on animal use in cosmetics testing has been ‘a key accelerator in relation to the development of alternative methods’[3].

To be clear, while I believe it is valid ‘to incorporate the messy micro-knowledge, the narrative details and the emotional realities into our ethical deliberations’, I am not saying that there is justification for deriving ‘ethical precepts more directly from this empirical groundwork’ (Parker 2009). So, I am not arguing that emotional responses are always reliable pointers to moral truth. However, in this case, engaging with the lived experience of laboratory staff brings to light information that casts doubt on the efficacy of simply making stronger recommendations. Thus, it does appear that an understanding of how people think, feel and act adds a valuable dimension that can inform ethical deliberation.

This is the second pillar supporting my argument that, in a case such as the one I have outlined, engaged bioethics is better placed than philosophical bioethics to resolve the real-life issue.

