DECISION MAKING

I MAKING CHOICES

Decision making is very much like problem-solving, and we will note the similarities as they come up.

Reaching a decision can be difficult--any alternative can have many attributes. Also, many decisions involve uncertainty and risk.

We will consider how we select among alternatives that vary on multiple attributes, both what we should do (a normative perspective, i.e., utility theory) and what we actually do (a descriptive perspective, i.e., subjective utility theory).

A.) Compensatory Models

Attractive attributes of an alternative can compensate for less attractive ones--a systematic decision-making procedure has to be followed.

1.) Additive Model:

Scores are assigned to each attribute (+ or -) and then added together. Each alternative (or option or choice) is evaluated completely and independently; the resulting totals are then compared across alternatives. A sketch follows the sub-points below.

a.) Sometimes attributes are 'weighted' to reflect the greater emphasis placed on some attributes.

b.) Sometimes attributes interact so that one can compensate for another's weaknesses.
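
As an illustration, here is a minimal sketch of a weighted additive evaluation in Python; the alternatives, attribute names, weights, and scores are all hypothetical:

    # Weighted additive model: score each alternative independently by
    # summing its weighted attribute scores, then compare the totals.
    alternatives = {
        "Apartment A": {"rent": -2, "location": +3, "size": +1},  # hypothetical scores
        "Apartment B": {"rent": +2, "location": -1, "size": +2},
    }
    weights = {"rent": 2.0, "location": 1.5, "size": 1.0}  # hypothetical emphasis

    def additive_score(attributes):
        """Sum the weighted (+/-) scores of one alternative's attributes."""
        return sum(weights[name] * score for name, score in attributes.items())

    totals = {alt: additive_score(attrs) for alt, attrs in alternatives.items()}
    print(totals)                       # {'Apartment A': 1.5, 'Apartment B': 4.5}
    print(max(totals, key=totals.get))  # Apartment B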

2.) Additive Difference Model:

Scores are assigned to each attribute (+ or -), a difference score is calculated for each attribute across the alternatives, and these difference scores are then added up. This model compares alternatives in an attribute-by-attribute fashion.
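
A minimal sketch of the additive difference model for two hypothetical alternatives (the same hypothetical scores as above):

    # Additive difference model: for each attribute, take the difference
    # between the two alternatives, then sum the differences. A positive
    # total favors the first alternative; a negative total, the second.
    apartment_a = {"rent": -2, "location": +3, "size": +1}  # hypothetical scores
    apartment_b = {"rent": +2, "location": -1, "size": +2}

    def additive_difference(first, second):
        """Sum attribute-by-attribute differences (first minus second)."""
        return sum(first[name] - second[name] for name in first)

    print(additive_difference(apartment_a, apartment_b))  # -1: favors apartment_b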

B.) Noncompensatory Models:

Unattractive attributes result in rejection of the alternative; these models do not require calculations.

1.) Elimination by Aspects (Tversky, 1972)

We may sequentially evaluate the attributes (aspects) of an alternative and eliminate that alternative if some attribute does not meet minimum standards.

Assumes attributes differ in importance, so the final choice may depend on the ordering of attributes, with more critical attributes being evaluated first (hierarchical organization--perhaps based on a schematic evaluation of the importance of attributes).
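
A minimal sketch of elimination by aspects, assuming a hypothetical importance ordering of attributes and hypothetical minimum cutoffs:

    # Elimination by aspects: evaluate attributes in order of importance
    # and eliminate any alternative failing the cutoff on the current
    # attribute; stop when one alternative remains (or aspects run out).
    alternatives = {
        "A": {"price": 7, "quality": 5, "warranty": 2},  # hypothetical scores
        "B": {"price": 6, "quality": 8, "warranty": 4},
        "C": {"price": 3, "quality": 9, "warranty": 5},
    }
    aspects = [("price", 5), ("quality", 6), ("warranty", 3)]  # most important first

    remaining = dict(alternatives)
    for attribute, cutoff in aspects:
        remaining = {name: attrs for name, attrs in remaining.items()
                     if attrs[attribute] >= cutoff}
        if len(remaining) <= 1:
            break

    print(list(remaining))  # ['B']: C fails on price, then A fails on quality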

2.) Conjunctive Model:

Requires that all attributes meet a minimum standard. Therefore, each alternative is completely evaluated before moving on. The first alternative meeting all minimum criteria is selected (i.e., not all alternatives may be evaluated).

This relates to Simon's (1957) idea of a satisficing search: we may not select the best alternative; satisfied with a good one, we may settle on it.

Note the similarity to problem-solving theory: a particular solution/decision may be 'good' without being the 'best'--particularly when one's capacity to consider all options may be exceeded.
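
A minimal sketch of the conjunctive (satisficing) search just described, again with hypothetical alternatives and minimum standards:

    # Conjunctive model: evaluate each alternative completely, in turn,
    # and select the first one meeting every minimum standard. Later,
    # possibly better, alternatives are never examined (satisficing).
    minimums = {"price": 5, "quality": 6, "warranty": 3}  # hypothetical cutoffs
    alternatives = [
        ("A", {"price": 7, "quality": 5, "warranty": 4}),
        ("B", {"price": 6, "quality": 8, "warranty": 4}),
        ("C", {"price": 8, "quality": 9, "warranty": 5}),  # better, but unseen
    ]

    def satisfice(options, standards):
        """Return the first option whose attributes all meet the minimums."""
        for name, attrs in options:
            if all(attrs[a] >= cutoff for a, cutoff in standards.items()):
                return name
        return None

    print(satisfice(alternatives, minimums))  # 'B' is good enough; search stops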

C.) Selecting a Strategy:

Payne (1976) combined these four models into a 2 x 2 matrix, with the two variables defined as intradimensional/interdimensional search and a constant/variable number of attributes examined.

In intradimensional search the decision maker selects an attribute and evaluates it across all alternatives, i.e., additive-difference and elimination-by-aspects models.

In interdimensional search all relevant attributes are evaluated for one alternative before the next alternative is evaluated, i.e., additive and conjunctive models.

There can be a constant number of attributes evaluated: not all attributes need be examined--just the 'important' ones--but the same ones are evaluated for all alternatives, i.e., compensatory models.

There can be a variable number of attributes evaluated: Since some alternatives may be completely eliminated early on, not all attributes for all alternatives need to be evaluated, i.e., noncompensatory models.
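
In matrix form (assembled from the descriptions above):

                          Constant (compensatory)    Variable (noncompensatory)
    Interdimensional      Additive                   Conjunctive
    Intradimensional      Additive difference        Elimination by aspects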

So, Payne let subjects examine sets of attributes for different numbers of alternatives.

He noted the order in which they examined the attributes and asked them to reason out loud; that is, like Newell & Simon, he collected verbal protocols from participants as they made decisions.

He argued that subjects would follow different strategies of searching for information as the demands of the task changed.

The results confirmed his expectations. The tasks differed in the number of alternatives and in the number of attributes (dimensions): subjects shifted from searching a constant number of attributes when there were only two alternatives to searching a variable number as the number of alternatives increased.

When asked to evaluate many alternatives, subjects reduced the complexity of the task by using variable (noncompensatory) procedures to eliminate some alternatives quickly. When only a few alternatives remained, they switched to a cognitively more demanding (compensatory) procedure to make the final evaluation and choice, as sketched below.
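
A minimal sketch of this two-stage shift, with hypothetical alternatives, cutoff, and weights: a cheap noncompensatory pass prunes the field, then a compensatory pass chooses among the survivors.

    # Two-stage adaptive strategy: with many alternatives, first prune
    # with a noncompensatory cutoff (variable search), then apply a
    # weighted additive evaluation (constant search) to the survivors.
    alternatives = {                                  # hypothetical scores
        "A": {"price": 2, "quality": 5}, "B": {"price": 6, "quality": 8},
        "C": {"price": 7, "quality": 4}, "D": {"price": 6, "quality": 7},
    }
    weights = {"price": 1.0, "quality": 2.0}          # hypothetical weights

    # Stage 1 (noncompensatory): reject anything failing a price cutoff.
    survivors = {n: a for n, a in alternatives.items() if a["price"] >= 5}

    # Stage 2 (compensatory): weighted additive model on what remains.
    def score(attrs):
        return sum(weights[k] * v for k, v in attrs.items())

    print(max(survivors, key=lambda n: score(survivors[n])))  # B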

II DECISION MAKING UNDER UNCERTAINTY:

RISKY DECISION-MAKING

Most decisions are made under uncertainty and so you need to determine the probability of particular outcomes.

This is mostly accomplished through the use of heuristics--relatively rapid rules of thumb for decision-making that do not guarantee the 'best' solution.

A.) Availability

Availability is a heuristic which suggests that we evaluate the probability of an event by judging the ease with which examples come to mind.

This method works well when the examples that come to mind correspond to actual instances. Unfortunately, research shows we are prone to biases in retrieving examples.

Tversky & Kahneman (1973) and Slovic, Fischhoff, & Lichtenstein (1976) provided evidence of biased retrieval of examples--judging the frequency of words beginning with a particular letter versus having the letter embedded within the word, and the relative frequency of lethal events, respectively.

Wright & Bower (1992) also showed effects of mood on retrieval of relevant examples of past events--in a sad mood, more negative events are retrieved; in a happy mood, more positive events.

B.) Representativeness

With the representativeness heuristic the focus is on how well an event is typical of other events in its class--this gets back to forming concepts and categories.

Kahneman & Tversky (1972) showed this can be applied to our concept of randomness--most of us believe that only an unsystematic event is random, when in fact a systematic event is equally probable (although there may be fewer systematic possibilities, leading to a not altogether false perception/bias).
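
A quick arithmetic check of the point about randomness (the two sequences are just examples):

    # Any specific sequence of 6 fair coin flips has probability (1/2)**6,
    # whether it looks unsystematic (HTHHTT) or systematic (HHHHHH).
    p = (1 / 2) ** 6
    print(p)  # 0.015625 for HTHHTT -- and the same 0.015625 for HHHHHH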

There are two major reasons why we often make mistakes in estimating the probabilities of events based on representativeness:

1.) Ignoring Sample Size: all things being equal, the larger the sample, the more likely it will reflect accurate population probabilities.

2.) Failure to Account for Prior Probabilities: we tend to take subjective factors into account while failing to obtain statistical information.
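
A worked illustration of why prior probabilities matter, applying Bayes' rule with hypothetical numbers: suppose 30% of a group are engineers, and a personality sketch 'sounds like' an engineer in that it fits 80% of engineers but also 40% of non-engineers.

    # Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E), with P(E) expanded
    # over both hypotheses. All numbers below are hypothetical.
    prior = 0.30          # P(engineer): the base rate people tend to ignore
    hit = 0.80            # P(description | engineer)
    false_alarm = 0.40    # P(description | not engineer)

    evidence = hit * prior + false_alarm * (1 - prior)  # P(description)
    posterior = hit * prior / evidence                  # P(engineer | description)
    print(round(posterior, 3))  # 0.462: under half, despite the 'representative' fit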

III EXPECTED VALUE

Besides taking into account prior probabilities, decision making can be improved by taking into account the consequences of various actions, i.e., their value.

Psychologists have examined the value that people attach to outcomes by examining whether people will engage in gambling based on the probabilities of winning and losing and on the value of each outcome--the amount won or lost.

A.) Calculating Expected Value

The formula for this is:

P(W) x V(W) + P(L) x V(L)

Classic example: "I'm going to roll a die. If a 6 appears you win $5, otherwise you win nothing. It costs $1 to play." ((1/6) x $4) + ((5/6) x -$1) = -$1/6

So you lose about 17 cents each time you play.

Most people fail to account for the V(W) properly, however--forgetting to subtract the dollar it costs to play from the winnings--and this creates the illusion of breaking even:

((1/6) x $5) + ((5/6) x -$1) = 0
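
The two calculations side by side, as a minimal sketch:

    # Correct expected value: the $1 cost of playing is netted out of the
    # win, so V(W) = $5 - $1 = $4 and V(L) = -$1.
    ev_correct = (1/6) * 4 + (5/6) * (-1)
    print(round(ev_correct, 3))  # -0.167: lose about 17 cents per play

    # The common mistake: leaving V(W) at the full $5, which makes the
    # gamble look like a break-even proposition.
    ev_mistaken = (1/6) * 5 + (5/6) * (-1)
    print(ev_mistaken)           # 0.0: the illusion of breaking even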

B.) Subjective Expected Utility:

Instead of looking at value it makes sense to look at utility--the subjective value assigned to an outcome by the decision maker.

The concept of 'utility' helps explain why people gamble even when an overall loss is to be expected--the 'fun' of gambling may itself be worth something to some individuals.

Subjective expected utility is calculated in the same way as expected value, except that the subjective value (utility) is inserted into the equation.

Besides replacing value with utility, the true probability, when it is unknown, can be replaced by a subjective probability--what one suspects the true probability might be.

Again, bias can be a factor here--especially when the subjective probability is calculated based on availability or representativeness.

So now the final subjective expected utility can be calculated as:

(SP(W) x U(W)) + (SP(L) x U(L))
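
A minimal sketch of the calculation, with hypothetical subjective probabilities (the player overestimates the chance of a 6) and a hypothetical $0.25 of 'fun' added to the utility of each outcome:

    # Subjective expected utility: the same form as expected value, but
    # with subjective probabilities SP and subjective utilities U.
    sp_win, sp_lose = 0.20, 0.80   # subjective, not the true 1/6 and 5/6
    u_win = 4 + 0.25               # net $4 win plus the fun of playing
    u_lose = -1 + 0.25             # net $1 loss plus the fun of playing

    seu = sp_win * u_win + sp_lose * u_lose
    print(round(seu, 3))  # 0.25: positive, so playing 'pays' subjectively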

This model is a descriptive model compared to expected value, which is a normative model--it is better at predicting what people actually do.

The drawback is that what people actually do may not reflect the 'best' decision.

What all of these utility/value models have in common is the assumption that we actually calculate these outcomes, either formally or implicitly--an unlikely situation!

IV SCHEMA/IMAGE THEORY

Beach & Mitchell (1990) and Mitchell & Beach (1990) developed a theory that considers a larger framework than simple decision models do--it builds on schema theory.

A.) Images

Images of one's goals and one's self are considered: these are the schemata--organized knowledge structures in each individual's LTM. They are made up of several types of images:

1.) Value Image: Beliefs and values which influence goal selection.

2.) Trajectory Image: Future agenda--where one is going.

3.) Strategic Image: Plans and actions to achieve these--sequences of activities for achieving a goal.

B.) Images as they Influence two Types of Decisions:

1.) Adoption Decision: refers to which decision is selected--affected by:

a.) compatibility - degree of consistency between a course of action and one’s personal values and beliefs.

b.) profitability - comes into play when several alternatives are equally good--it can be assessed using other decision-making strategies (e.g., additive models).

2.) Progress Decision: reevaluation of an initial decision to monitor its progress. This is much like an assessment of subgoals in means/ends analysis.

Schematic models predict that one would use a simpler strategy early on, when many alternatives are possible; but once the number of alternatives is narrowed down, then a more complex strategy can be instituted. This agrees with the work by Payne (1976).