The effective altruism community is a group of nerds, but instead of nerding out about train engines, Star Wars, or 18th-century swordfighting, they nerd out about one question: Given limited resources and all of humanity’s accumulated knowledge about the social and physical sciences, what is the most cost-effective way to improve the world?
While the community’s focus began with figuring out which charity is the best place to spend your marginal dollar, and much work still centers on that question, it has since expanded to ask how analytic, altruistically minded people should best allocate their time and social capital as well.
People in the community have settled on several possible answers to the question, “Of all the problems to work on, which should members of the EA community focus on most?” Examples include improving animal welfare, reducing global poverty, and improving biosecurity against engineered or accidental pandemics. (Notably, members of the community personally prepared for COVID weeks before their governments enacted emergency orders.)
For years, I’d assumed that differences in cause-area selection are determined solely by people’s prior beliefs. For example, if you believe animals are “moral patients,” in the philosophy lingo, then you’re more likely to prioritize animal welfare; if you believe currently living people are moral patients but people who haven’t yet been born are not, then you’re more likely to prioritize global poverty reduction over, e.g., existential risk reduction.
However, having recently acquired some basic data science skills and a set of anonymized survey data, I hit on an interesting question: Do a person’s personality traits affect which cause area they’re likely to prioritize? And if so, how?
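To make the question concrete, here is one way an analysis like this could be set up. This is only a sketch on synthetic data, not the actual survey or my actual method: it assumes hypothetical Big Five trait scores as predictors and a three-way cause-area label as the outcome, and fits a multinomial logistic regression to ask whether the traits carry any predictive signal.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for anonymized survey data (hypothetical, for illustration).
rng = np.random.default_rng(0)
n = 300

# Hypothetical personality scores: five columns, one per Big Five trait.
traits = rng.normal(size=(n, 5))

# Hypothetical cause-area labels: 0 = animal welfare, 1 = global poverty,
# 2 = biosecurity. The first trait is wired to weakly influence the choice,
# so there is a real (synthetic) signal for the model to find.
logits = np.column_stack([traits[:, 0], -traits[:, 0], np.zeros(n)])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = np.array([rng.choice(3, p=p) for p in probs])

# Multinomial logistic regression: do any traits predict cause-area choice?
model = LogisticRegression(max_iter=1000)
model.fit(traits, labels)
accuracy = model.score(traits, labels)
print(f"In-sample accuracy: {accuracy:.2f} (chance would be ~0.33)")
```

On real survey data one would of course hold out a test set and inspect the per-trait coefficients rather than rely on in-sample accuracy; the point of the sketch is only the shape of the question.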