Egoism

Another brief one.


Psychological egoism is the idea that we are all motivated by self-interest.

Rational egoism is the idea that self-interest provides reasons for our actions.


The difference seems to me rather subtle. But maybe that's just the limitations of my brain.


One of the key arguments used to show that collaborative and collective action will always fail because of our self-interest is the 'tragedy of the commons'. If ten shepherds share a field, each will want to add more sheep, even though the overall grazing declines and none of the sheep fare as well, because each shepherd keeps the whole benefit of an extra sheep while the cost of thinner sheep is shared by everyone. But ultimately there'll be no grass and no one will have any sheep.
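For what it's worth, here's a tiny back-of-the-envelope sketch of that logic in Python - my own toy model, not anything from the original argument, with made-up numbers for the field's capacity and the value of a sheep. Each extra sheep pays its owner in full, while the thinner grass is shared out across everyone, so self-interested shepherds keep adding sheep well past the point that is best for the group as a whole.

```python
# Toy model of the commons argument (illustrative assumptions only):
# the value of each sheep falls linearly as the field gets crowded,
# and every shepherd reasons purely about their own flock.

CAPACITY = 100          # total sheep at which the grass is gone
SHEPHERDS = 10

def value_per_sheep(total):
    """Each sheep is worth less as the common field gets more crowded."""
    return max(0.0, 1.0 - total / CAPACITY)

def my_payoff(my_sheep, total):
    return my_sheep * value_per_sheep(total)

flocks = [1] * SHEPHERDS   # everyone starts with one sheep

# Each round, a shepherd adds a sheep whenever it raises *their own* payoff.
for round_ in range(30):
    anyone_added = False
    for i in range(SHEPHERDS):
        total = sum(flocks)
        if my_payoff(flocks[i] + 1, total + 1) > my_payoff(flocks[i], total):
            flocks[i] += 1
            anyone_added = True
    total = sum(flocks)
    print(f"round {round_:2d}: {total:3d} sheep, "
          f"total value {total * value_per_sheep(total):6.2f}")
    if not anyone_added:
        break

# Individual incentives push the flock far past the collectively optimal
# size (about 50 sheep in this toy set-up), leaving everyone with thinner
# sheep and a much smaller total value than co-operation would give.
```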


Elinor Ostrom thought this was wrong and did empirical research, discovering that in real life collective management does work. She won the Nobel Prize in Economic Sciences in 2009 for her work on the subject. She says collective action works in some circumstances - for it to work, you need: 1) Clearly defined boundaries; 2) Proportional equivalence between benefits and costs; 3) Collective choice arrangements; 4) Monitoring; 5) Graduated sanctions; 6) Fast and fair conflict resolution; 7) Local autonomy; 8) Appropriate relations with other tiers of rule-making authority (polycentric governance). And people work this out for themselves, given the opportunity.


So, sure, we are egoists, but we do - or can - recognise that our actions affect others. This may lead to it being in our self-interest to co-operate, and/or it may lead to care, compassion and collaborative action. What seems most crucial is whether we believe we are all working together towards a shared aim, in a win-win scenario. Then we find ways to work as a unit, not fight as individuals.


What gets me is that many moral theories assume we are utterly atomised, self-interested individuals, and that we therefore have to have rules (or commandments) to make us good. The case of utilitarianism is different. The real utilitarian, impartial and benevolent, seeks to maximise happiness. Thus he, the noble decision-maker, has to work out what consequences will follow if he chooses x rather than y. And you know what he thinks when he, this benevolent, impartial, universalising ethical being, works that out? He thinks that everyone else will behave like an utterly atomised, self-interested individual! How the hell does that work?


Take a look at the tree on the cover. Maybe benevolence is the green part and egoism the gold; maybe, more likely, it's the reverse - but surely we should be coming up with ideas that accept that at least there's a chance for the benevolent part to have some influence?


More to the point, shouldn't we be trying to find a way to encourage the growth and development of the benevolence, rather than just telling people it ain't there at all?
