Reflective Equilibrium

Most of us want to do the right thing at least some of the time. Often it's pretty clear what is right, and maybe even more often what isn't: we shouldn't humiliate other people for the fun of it, steal their stuff, or kick the dog when we get mad. Other times it's not so clear. How should we decide what to do in such cases? More generally, what makes an action right or wrong?

Many people think that you can't reason about moral issues: you either buy into a ready-made Moral Code or rely on feelings. Both of these options are problematic. How do you choose a Moral Code in the first place? As for feelings, sometimes we don't have any, sometimes they're confusing, and sometimes they mislead us.

Ethical issues are controversial, but that doesn't mean we can't reason about them. One way of understanding moral reasoning is as a back-and-forth process in which we consider our feelings or moral "intuitions" in clear cases, try to formulate general principles on the basis of these intuitions, and then test those principles against further cases until we reach a "reflective equilibrium." Suppose, for example, that our intuitions about clear cases lead us to the principle that lying is always wrong; when we test that principle against the case of lying to protect an innocent person from a would-be killer, we may revise the principle rather than abandon our intuition about the case. The following passage from the Stanford Encyclopedia of Philosophy (an online resource for all your philosophical needs at http://plato.stanford.edu) describes this process:

The method of reflective equilibrium consists in working back and forth among our considered judgments (some say our "intuitions") about particular instances or cases [and] the principles or rules that we believe govern them…revising any of these elements wherever necessary in order to achieve an acceptable coherence among them. The method succeeds and we achieve reflective equilibrium when we arrive at an acceptable coherence among these beliefs. An acceptable coherence requires that our beliefs not only be consistent with each other…but that some of these beliefs provide support or provide a best explanation for others. Moreover, in the process we may not only modify prior beliefs but add new beliefs as well.

In practical contexts, this deliberation may help us come to a conclusion about what we ought to do when we had not at all been sure earlier. We arrive at an optimal equilibrium when the component judgments, principles, and theories are ones we are un-inclined to revise any further because together they have the highest degree of acceptability or credibility for us.

The key idea underlying this view of justification is that we "test" various parts of our system of beliefs against the other beliefs we hold, looking for ways in which some of these beliefs support others, seeking coherence among the widest set of beliefs, and revising and refining them at all levels when challenges to some arise from others. For example, a moral principle or moral judgment about a particular action…would be justified if it cohered with the rest of our beliefs about right action…on due reflection and after appropriate revisions throughout our system of beliefs.

Moral reasoning understood in this way is not very different from commonsense and scientific reasoning, where we make observations, formulate hypotheses, and test them against further data until we arrive at principles that are, at least provisionally, satisfactory. In formulating moral principles, the "observations" are our moral intuitions. Like other observations, however, they are incomplete and fallible. So we generalize in order to arrive at principles that can guide us where we have no clear intuitions, we test those principles against further data, and, because we recognize that our intuitions are not infallible, we remain open to the possibility that some of them are misleading.