Is Effective Altruism to Blame for Sam Bankman-Fried?
Although it is wrong to kill one person to save five people, the authors write, “Almost all ethicists agree that these rights and rules are not absolute. If you had to kill one person to save 100,000 others, most would agree that it would be the right thing to do.”
In “exceptional circumstances,” the EAs allow, consequentialism may trump other considerations. And Sam Bankman-Fried might reasonably have considered his own circumstances exceptional. It is highly unusual for a devotee of an altruistic movement to amass a $16 billion fortune, thereby liberating all of that movement’s institutions from cash constraints. If killing one person to save 100,000 is morally permissible, then couldn’t one say the same of scamming crypto investors for the sake of feeding the poor (and/or preventing the robot apocalypse)?
This kind of hypothetical ethics has always bothered me.
I feel like people are looking for some way out, a get-out-of-jail-free card that tells them “Yes, there are circumstances where you can do a bad thing and not have to feel bad about it.”
If you’re faced with a situation where you have to decide upon the least bad option, there is no ethical calculus that will make you feel good about any of those options. In fact, if you are sitting there at the controls of a trolley trying to decide whether to kill one person or to let a bunch of people die, I would suggest that you should feel bad after making the decision, even if you took the least terrible route you could. That bad feeling—that anguish and guilt about whether you made the right choice—is what matters. Even if you chose the least bad option, you are still responsible for it.
If you had some magical ethical calculus that let you off without feeling bad about the choice you made, that seems like evil to me—or at least the start of something really horrifying.