A.) Everyday examples of STM and LTM
1.) STM: a seldom-used phone number--you look the number up in the phonebook, dial it, and then forget it.
2.) LTM: your own name, the way home.
B.) Experimental examples of STM and LTM
1.) Peterson & Peterson (1959)
a.) The paradigm: subjects were presented with 3 letters to remember, then counted backwards by 3s for a varying interval, and were then asked to recall the three letters.
b.) Results: recall declined rapidly as the counting interval increased; by about 18 seconds it was very poor.
c.) Conclusions: rehearsal is necessary for maintenance of items in STM. Loss is by decay.
To retain information in STM for as long as 20 sec you need rehearsal. Counting backwards interrupts rehearsal and causes loss of the items. This contrasts with items in LTM that you don't forget.
2.) Serial Position Curve: an experimental demonstration of both STM and LTM in operation. At immediate recall there is both a primacy and a recency effect. At delayed recall there is only a primacy effect.
a.) Paradigm: list of words is read aloud. There is an immediate recall condition and then a little later there is a delayed recall condition.
b.) Typical results: at immediate recall performance is not constant over the serial position of the words--words at the beginning and end of the list show a higher rate of recall; words in the middle are recalled less well.
c.) Primacy effect: a higher probability of recall for the first few items: these items get the most rehearsal and undivided attention, and so enter LTM.
d.) Recency effect: a higher probability of recall for the last few items: at immediate test these are dumped directly from STM; they fail to get into LTM because the recall phase interferes with rehearsal.
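The interplay of the two effects can be sketched as a toy model. Everything below (the buffer size, the rehearsal schedule, the LTM threshold) is a made-up assumption chosen only to illustrate the logic, not a fitted parameter from any published study:

```python
# A toy rehearsal-buffer sketch of the serial position curve: early items
# get more rehearsal and so reach LTM; at immediate test the last few
# items can still be read out of STM, but at delayed test they are gone.

STM_SPAN = 4       # hypothetical number of end-of-list items still in STM
LTM_THRESHOLD = 3  # hypothetical rehearsals needed to enter LTM

def rehearsals(position):
    # Early items enter an uncrowded buffer and are repeated during later
    # presentations; later items must share rehearsal time (hypothetical).
    return max(0, 5 - position)

def recalled_positions(n_items, delayed):
    ltm = {i for i in range(n_items) if rehearsals(i) >= LTM_THRESHOLD}
    stm = set() if delayed else set(range(n_items - STM_SPAN, n_items))
    return ltm | stm

# Immediate test: primacy (positions 0-2) plus recency (11-14); middle lost.
print(sorted(recalled_positions(15, delayed=False)))
# Delayed test: the recency effect disappears, primacy remains.
print(sorted(recalled_positions(15, delayed=True)))
```

Under these toy assumptions the middle of the list is recalled worst, which is the shape of the empirical curve.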
A.) General characteristics:
1.) transient - temporary encoding.
2.) has a back-and-forth relationship with LTM: STM can contain information from LTM to work on. Often, information in STM enters LTM.
3.) direct access to STM contents.
4.) also called working memory (it holds information for current use only), active memory (its units are in a special, active state), and primary memory (the first stage of memory, as opposed to secondary memory).
B.) Limited capacity - Miller's concept of 7 +/- 2. It is a fundamental limitation on our mental capacity.
C.) Rehearsal maintenance - as long as you rehearse items they can be maintained indefinitely. When you stop rehearsing, memory for the information is often lost.
D.) Chunking of information to increase capacity - the capacity of STM varies with the meaningfulness of the material. A chunk is a memory unit--STM capacity is not limited by a physically defined unit but by a meaningfulness unit.
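Chunking can be made concrete with a toy example (the digit string and its grouping into years are hypothetical, chosen only for illustration): the same 12 digits exceed a 7 +/- 2 digit span when treated as raw digits, but reduce to 3 chunks when re-grouped into familiar dates.

```python
digits = "177618651945"  # 12 digits: beyond a 7 +/- 2 span of single digits

# Read as raw digits: 12 memory units.
as_digits = list(digits)

# Re-grouped as three familiar dates (1776, 1865, 1945): 3 chunks.
as_years = [digits[i:i + 4] for i in range(0, len(digits), 4)]

print(len(as_digits))  # 12 units
print(len(as_years))   # 3 units
```

The physical input is identical in both cases; only the meaningfulness of the grouping changes, which is the point of defining STM capacity in chunks.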
A.) Decay Theory - forgetting should occur simply due to the passage of time, as long as there is no rehearsal.
B.) Interference Theory - memory for other things, or performance of another task interferes with memory and causes forgetting. Interference can be of two types:
a.) Retroactive interference: later occurring information interferes with previously occurring information.
b.) Proactive interference: earlier occurring information interferes with later occurring information, e.g., Keppel & Underwood study where items on trials 1-3 are very similar and then on trial 4 they may again be similar or may be different in nature. When items are very similar they interfere with one another; when they are very different they are more readily recalled.
C.) Waugh & Norman (1965)
1.) Paradigm: subjects were presented with a list of 16 digits. The last digit was a probe digit which had occurred once previously in the list. The task was to report the digit that followed the probe the first time it had been presented. The location of the probe digit was varied from list to list. Rate of presentation of digits was also varied.
2.) Predictions:
a.) Decay theory - performance should be better for faster presentation rates because there is less elapsed time between the end of the list and the probe digit, therefore, there is less time for the information to decay from memory.
b.) Interference theory - performance should deteriorate the further back in the list the probe digit occurred for the first time because there are more digits in between the first and second presentations.
3.) Results: recall probability was somewhat better at the faster presentation rate, but not significantly so across conditions. The effect of the number of interfering digits, however, was highly significant -- recall probability dropped sharply as the number of interfering digits increased.
4.) Conclusion: forgetting attributed to interference is significantly greater than that attributed to decay.
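The probe-digit task itself is simple to state as a procedure; a minimal sketch (with a made-up trial list, not one of Waugh & Norman's) is:

```python
def probe_task_answer(digit_list):
    """Given a Waugh & Norman style list whose final digit is the probe,
    return the digit that followed the probe's earlier occurrence."""
    probe = digit_list[-1]
    earlier = digit_list.index(probe)  # first (earlier) occurrence
    return digit_list[earlier + 1]

# Hypothetical 16-digit trial: the probe (final digit, 5) occurred earlier
# at position 3, so the correct report is the digit at position 4.
trial = [4, 1, 7, 5, 9, 2, 8, 3, 6, 0, 2, 8, 1, 7, 3, 5]
print(probe_task_answer(trial))  # 9
```

Moving the probe's earlier occurrence toward the start of the list increases the number of intervening digits, which is how the interference prediction was tested.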
D.) Release from Proactive Interference
Wickens, Born & Allen (1963) presented subjects with short lists of words all from the same category--obtained the usual decline in recall across trials. For half the subjects they presented words from a different category on the final trial. Recall improved nearly to the level for the first trial.
They interpreted the increasing decline as reflecting the build up of PI and the return to initial recall rates following category change as a release from PI.
E.) Proactive Interference in the Peterson & Peterson Task
Keppel & Underwood (1962) reanalyzed Peterson & Peterson's data and found that performance was relatively intact on the first trial for all delay intervals, but showed a pronounced decline across subsequent trials.
Again, this was interpreted as showing increasing PI across trials, with the effect greatest at the longer intervals.
Baddeley (1992) has developed a model of short-term memory which accounts for many of the experimental findings that the modal model cannot--e.g., modality effects.
It consists of three components:
(1) a phonological loop that processes acoustic information.
(2) a visuospatial sketchpad that processes visual/spatial information.
(3) a central executive that manages the use of working memory--a decision making component.
Sternberg (1966) proposed that short-term memory can be scanned in one of two ways: either all at once, called parallel search, or one item at a time, called serial search.
He also proposed that a search could progress in one of two ways: either stopping as soon as a match was made, called self-terminating search, or continuing through all items being held in STM, with a decision only at the end of the process, called exhaustive search.
Subjects are presented with a short list of digits to memorize (up to 7--capacity of STM) and are next presented with a single ‘probe’ digit.
If the probe is recognized as having been part of the memory set, the response is 'yes'; if not, the response is 'no'.
Speed is emphasized, accuracy is encouraged.
Normally this is a very easy task.
Results: response times increase with memory-set size, thought to reflect increased scan time through STM.
This is a prototypical illustration of the information-processing paradigm's reliance on stages of processing.
Search is serial & exhaustive:
Serial because overall there is an increase in search time as more items are added to the memory set.
Exhaustive because there is no appreciable difference between positive and negative trials (except for a slight tendency for overall faster responses to positive trials which may reflect a response bias).
Why these results?
Making a decision is energy consuming--much like Broadbent's flap mechanism--so it is more efficient to make one overall decision at the end of the scan, rather than one decision after each comparison until a positive match occurs.
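The two serial-search strategies can be sketched directly; this is an illustrative implementation of the logic, not Sternberg's own formalism. Counting comparisons shows why exhaustive search predicts equal positive and negative slopes, while self-terminating search predicts faster positive trials:

```python
def self_terminating(memory_set, probe):
    """Serial self-terminating search: stop at the first match.
    Returns (found, number_of_comparisons)."""
    for i, item in enumerate(memory_set, start=1):
        if item == probe:
            return True, i
    return False, len(memory_set)

def exhaustive(memory_set, probe):
    """Serial exhaustive search: compare against every item, deciding
    only once the full scan is complete."""
    found = False
    comparisons = 0
    for item in memory_set:
        comparisons += 1
        if item == probe:
            found = True  # note the match but keep scanning
    return found, comparisons

memory_set = [5, 2, 8, 1]

# Exhaustive: comparisons depend only on set size, matching the flat
# positive/negative difference in Sternberg's data.
print(exhaustive(memory_set, 5))        # (True, 4)
print(exhaustive(memory_set, 9))        # (False, 4)

# Self-terminating: comparisons depend on where the match sits.
print(self_terminating(memory_set, 5))  # (True, 1)
print(self_terminating(memory_set, 9))  # (False, 4)
```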
Cavanagh (1972) provided evidence for a linear relationship between span and scan rates.
The regularity of the relationship suggests that as items become more complex they require more capacity and therefore have both a shorter span in terms of storage capacity and a slower scan rate in terms of processing capacity.
Cavanagh (1972) suggests the basis for the relationship relates to the number of features per item.
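The form of Cavanagh's relation can be illustrated with simple arithmetic; the constant C and the spans below are made-up numbers, not his estimates. If scan rate per item is proportional to 1/span, then span times scan rate comes out to the same constant for every item type:

```python
# Illustrative arithmetic only: C and the spans are hypothetical values.
# The relation has the form: scan rate (time per item) = C * (1 / span),
# which implies span * scan_rate = C for every class of material.

C = 250.0  # hypothetical constant, in ms

for span in [7.0, 5.0, 3.0]:   # hypothetical spans for three item types
    rate = C / span            # predicted scan time per item
    print(span, round(rate, 1), round(span * rate, 1))
```

Under this form, more complex items (smaller span) necessarily scan more slowly, which is the regularity the text describes.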
Atkinson & Shiffrin (A & S) proposed a structural model of memory, incorporating what was then thought to be the state of knowledge about memory and reflecting the information-processing (I-P) paradigm.
A.) Structural Features:
Relatively stable information-processing sequences over which we have minimal voluntary control and which we use consistently from one situation to the next, regardless of the content of incoming information - refers to the specific "stores".
B.) Control Processes:
These are memory routines or strategies that are selected, constructed, and used at the option of the person, depending on the current situation.
1.) rehearsal - repetition of information.
2.) coding - placing to-be-remembered information in the context of already well-known information, i.e., mnemonics.
3.) imaging - creating visual images to remember verbal information.
C.) Computer Analogy:
Example of a structural feature: a specified DO loop will always be carried out regardless of content. Example of a control process: IF-THEN instructions are only carried out when the symbolic material takes on specific values. The same is true within each memory store.
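The analogy can be sketched in code (hypothetical functions, not anything from A & S): a structural feature behaves like a fixed loop that runs identically on any input, while a control process behaves like a conditional branch whose execution depends on the content and on the person's chosen strategy.

```python
def sensory_sweep(items):
    """Structural feature: every incoming item passes through the same
    fixed sequence, regardless of what the items are (a 'DO loop')."""
    registered = []
    for item in items:           # always executes, content-independent
        registered.append(item)
    return registered

def rehearse_if_important(items, important):
    """Control process: rehearsal is applied only to items the person
    selects (an 'IF-THEN' contingent on content)."""
    rehearsed = []
    for item in items:
        if item in important:    # depends on the content and the strategy
            rehearsed.append(item)
    return rehearsed

stimuli = ["phone number", "ad jingle", "address"]
print(sensory_sweep(stimuli))                                   # all three
print(rehearse_if_important(stimuli, {"phone number", "address"}))
```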
D.) Memory Stores:
Structural features of memory are the memory stores--they preserve information in different formats, for different durations, and for different purposes, and lose information in different ways.
1.) Sensory Register: External information enters via the senses and goes to the sensory register. Separate registers exist for separate sensory modalities.
Information enters passively and is a near-literal record of its sensory image. The store lasts for about 1/2 sec; its contents can decay, dissipate over time, or be written over. It preserves incoming information long enough to be attended to and selectively transmitted further into the memory system.
2.) Short Term Store: Lasts up to 1/2 minute, has a limited capacity of 7+/-2 chunks of information, begins to extract some meaning from input and forgetting is by either decay or interference. Usually further processing is necessary for transfer to LTM.
3.) Long Term Store: Information flows into the LTS from the STS either consciously (as when you memorize a phone number) or unconsciously (as when you recall the number of windows in your house). It seems that the longer information remains in the STS, the more strongly it is represented in the LTS.
LTS characteristics are:
a.) Capacity seems unlimited (perhaps constrained by intelligence?).
b.) Information loss can be by decay, interference, or retrieval failure.
c.) Information is generally coded for meaning (semantically).
d.) Duration is theoretically infinite.
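A minimal sketch of the store pipeline can make the flow concrete. All parameters here (the attention gate, the displacement rule, the strength increments) are illustrative assumptions, not A & S's fitted values:

```python
from collections import deque

STS_CAPACITY = 7  # chunks (Miller's 7 +/- 2)

def present(items, attended):
    """Toy A & S pipeline: everything enters the sensory register;
    unattended traces are lost; attended items enter a limited-capacity
    STS (oldest chunk displaced when full); time spent in the STS
    accumulates strength in the LTS."""
    sts = deque(maxlen=STS_CAPACITY)
    lts_strength = {}
    for item in items:                 # arrival at the sensory register
        if item not in attended:       # unattended traces decay (~1/2 sec)
            continue
        sts.append(item)               # enters STS, may displace a chunk
        for held in sts:               # residence in STS builds LTS strength
            lts_strength[held] = lts_strength.get(held, 0) + 1
    return list(sts), lts_strength

sts, strengths = present(list("ABCDEFGHIJ"), attended=set("ABCDEFGHIJ"))
print(sts)        # only the most recent chunks remain in the STS
print(strengths)  # items held longer in the STS accumulated more strength
```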
E.) Problems with multistore models:
1.) Experimental failures:
a.) Coding - multistore models maintained that only the LTS could hold semantic information. The STS was assumed to only hold acoustic codes and all other information was assumed to be reduced to its acoustic code. There were experiments which suggested, however, that semantic codes were also held in the STS. Furthermore, visual and articulatory codes were demonstrated as well.
i) Conrad (1964) provided evidence for acoustic confusions: when letters were falsely recalled, the errors most often sounded like the target letters, e.g., B, Z, V, D, P, C, T, G might all be confused with one another.
ii) Shulman (1971) provided evidence for semantic confusions: when words were falsely recalled, the errors most often were of words which had a similar meaning, i.e., baby for infant or ship for boat.
iii) Wickens, Born & Allen (1963) can be interpreted as evidence for semantic coding in terms of category membership.
b.) Forgetting functions - Early studies suggested information was lost from the SR in less than a second, and from the STS after 18 seconds if there were no rehearsal.
Later experiments showed estimates of duration of the SR up to 20 sec--overlapping with duration of the STS. Thus retention duration was not a good basis for distinguishing between the two systems.
Studies of the STS showed that there was no forgetting after 15 sec even when rehearsal was prevented. Other studies showed retention greater than 30 seconds.
c.) Capacity - capacity in the sensory registers is generally designated as 'large'--difficult to quantify except for vision and audition, where the limit appears to be 9-12 sensory units.
Studies of STS capacity are highly variable, ranging from 2-20 words--nevertheless this is a fairly limited capacity compared to the LTS and appears to be the best evidence for a truly separate system.
In terms of LTS there do seem to be very real limits linked to intelligence.
2.) Conceptual failures - Multistore models were developed from experiments in list learning. These models do not account for human activities that have meaning at their core.
Thus, the models are limited in their application to such normal, but complex activities as conversation, reading a book or comprehending the intentions of a TV actor.