In chapter 10 of The
Fundamentals of Ethics, Russ Shafer-Landau presents several challenges to
consequentialism. Shafer-Landau structures each discussion the same way: he first presents a popular argument against consequentialism, then poses the strongest consequentialist reply, and ends by evaluating whether the objection succeeds. Aside from the problem of ignoring justice, one of the strongest objections to consequentialism is that well-being has no determinate system of measurement. The argument is that since there is no physical scale on which to measure the value of an action, utilitarianism is false (Shafer-Landau 136). The counterargument that
utilitarians present is that by comparing scenarios that cause harm and benefit, actions can be ranked relative to one another, and this does not require assigning numerical values to situations. For instance, Shafer-Landau explains that the harm caused by a cholera outbreak is obviously far greater than the harm caused by a husband cursing his wife (Shafer-Landau 136). Even without a clear numerical value placed on each action, comparing our options against one another lets us determine which action is more favorable. He ends with the point
that although this may be true in some cases, most real-world dilemmas are not so cut and dried. The most optimific act is usually unclear because we may be weighing two options that produce similar benefits at the cost of similar harms. In these situations, identifying the most optimific result becomes a matter of interpretation. Because the net optimific act is then unclear, it can be argued that utilitarianism does not always give a clear answer and does not explain how we should decide when faced with a multifaceted dilemma.
I agree with Shafer-Landau that in complex situations,
consequentialism does not offer any helpful guidance. A utilitarian's goal is to create the greatest amount of net well-being, but this assumes that well-being means the same thing for everyone; my definition of net well-being could be completely different from yours. Suppose that you were given the
chance to donate a million dollars to one and only one charity. According to
utilitarianism, you should donate the money to the organization that will produce the most net well-being. Suppose that you had to choose between two highly regarded organizations with the same goal: finding homes for victims of abuse. The only difference is that one organization finds homes for abused children, while the other finds homes for abused cats.
Furthermore, suppose that if the million dollars does not reach the cats, 50 cats will be put to death because the organization can no longer afford to feed and care for them. The children's organization is not on the verge of bankruptcy and the children's lives are not in danger, but the million dollars would definitely help it spread awareness and significantly increase the number of children who find loving homes. Since utilitarians accept the argument from marginal cases, the belief that a cat's life is worth less than a human's cannot by itself justify your decision. With that in mind, from a utilitarian standpoint, both donations would produce benefits, and both carry the costs described above. What do you do?
In situations such as this, it becomes very difficult to determine net well-being, and turning to utilitarianism for guidance yields no concrete answer. Would preventing the deaths of 10, 20, or 100 cats morally justify donating to the cats' organization over the children's organization? On one hand, you have to consider the benefit to society that one of those children might provide if given the chance to grow up with a loving family. On the other hand, the cats are innocent kittens on the verge of a painful death. The point of this dilemma is to show that utilitarianism gives a correct answer only when we face straightforward problems with an obvious moral hierarchy. Where utilitarianism fails to guide us is in more complex moral dilemmas that mix benefits with harms.
I agree with Joe and Shafer-Landau when they say that consequentialism does not offer any helpful guidance in complex situations. I also see where Joe is coming from when he says that everyone has a different definition of well-being. However, I think that our goal when making moral decisions should not be to create the greatest amount of well-being for ourselves alone; that would be selfish, and I think that is what utilitarians are getting at. When making moral decisions, the results should produce the greatest amount of well-being for everyone, not just ourselves. That is a problem for a lot of people, including myself at times: we make moral decisions based on how they will affect us instead of placing them in a larger picture or greater scheme. We are creatures naturally hardwired to act in our own interest. For example, if someone were trying to choke you to death because your death would save the life of another person who would later go on to cure cancer, the first thing your instincts would do is fight back, even though your death could save millions.
What I am getting at is that one of the main reasons it is hard for us to determine net well-being is that we are sometimes biased. We want the results to benefit ourselves rather than other individuals. I am not saying that we shouldn't be included in the overall amount of well-being. Instead, what I am saying is that we sometimes need to think of others before ourselves. If something does not benefit me as much as it does millions of other people, then so be it; we should still go with the action that benefits the millions.