Well, it's not so much a zooming out as a backing off.
You see, I have realised that I need to know what 'equal consideration' means and also what 'moral status' means. If something has equal moral status with something else, then surely one can't harm it with less justification than the other being?
This comes to the fore because David DeGrazia argues that equal consideration (EC) is compatible with unequal moral status. Yes, really. There is such a thing as 'utility-trumping' moral status - which would mean that no matter what the benefit to others, the being with this status could not be sacrificed. Deontologists hold this view about humans - certainly about fit, rational humans. It is just plain wrong to kill a person even if one by that means would save five people.
Under this unequal status argument, a being with lesser cognitive capacities - like a NHA - may be justifiably harmed in certain circumstances. DeGrazia offers this definition of moral status: it is the degree (relative to other beings) of moral resistance to having one's interests - especially one's most important interests - thwarted. To say that a philosopher has higher or greater moral status than a pig is not to say that the philosopher's trivial interests (liking bacon) trump the pig's greater interests (being alive), but it does mean that where one has to choose between philosopher and pig, one chooses the former. Yet, he claims, this does not mean that their interests are not considered equally. He defines equal consideration as requiring equal moral weight, importance or consideration for relevantly similar interests.
He goes on to say that those who acknowledge important moral differences between humans and NHA (for example, the greater cognitive and psychological complexity of a normal adult human versus a squirrel) might want to say that two beings have equal moral status if and only if their relevantly similar interests must be given equal consideration.
This is important because utilitarianism, which holds a privileged place in animal ethics, has a universal principle at its root - the interests of each and all must count, and count equally. It was Jeremy Bentham who brought NHA squarely into the domain of ethics by insisting that “the question is not, Can they reason? nor, Can they talk? but, Can they suffer?”
To cause suffering is bad; so, to cause a NHA to suffer is bad.
But the loophole comes with the 'relevantly similar interests'. The assumption seems to be that while all sentient creatures can feel pain, the greater complexity of higher mammalian brains increases the potential for suffering - and humans' ability to fear for the future and so on makes the potential harm for them greater still. In addition, because humans can project into the future, their preferences and desires are more extensive. And because of these cognitive powers, humans can perhaps experience more and richer pleasures as well. Thus, it may be 'worse' to harm a human than a NHA and 'better' to privilege the potential happiness of a human over a NHA.
All this means that the death of a cognitively normal mature human has greater weight than that of a simpler creature who may have no desires and preferences for the future, with the exception of the desire to continue to live - a desire which, one could argue, incorporates the whole range of stated desires that a human might have for various projects and preferences.
Thus, the NHA don't have lower moral status in utilitarianism; it's 'just' that their interests, which are being equally considered, are not relevantly similar.
On the rights view, as proposed by Tom Regan, the requirement not to harm is more stringent. Rights are like a 'no trespassing' sign and can 'trump' other claims. They have moral force in protecting individuals who are deemed to have 'inherent value' because they are 'subjects of a life'. In his view, unequal interests do not have moral significance in determining the respectful treatment of rights-holders and have little practical significance in moral decision making. Unequal interests only count in 'lifeboat' situations (emergencies - and then, as stated the other day, Regan will save the human every time).
John Rossi disagrees with the claim that equal consideration can be compatible with a sliding scale of moral status. He defines equal consideration like this:
To grant equal consideration to two beings A and B is to not discount or disregard B's interests just because B is not like A, or on the assumption that B's interests are less valuable than A's interests.
Where does this leave us? Truth be told, I don't know...