Unitary theories of memory

From Wikipedia, the free encyclopedia

Unitary theories of memory are hypotheses that attempt to unify the mechanisms of short-term and long-term memory. Early contributions to unitary memory theories can be found in the work of John McGeoch in the 1930s and of Benton Underwood, Geoffrey Keppel, and Arthur Melton in the 1950s and 1960s. Robert Crowder argued against a separate short-term store starting in the late 1980s.[1] James Nairne proposed one of the first unitary theories as a challenge to Alan Baddeley's working memory model,[2] the dominant theory of the functions of short-term memory. Other theories proposed since Nairne's highlight alternative mechanisms that the working memory model initially overlooked.

Background

Overview

Working memory is the system responsible for the transient holding and processing of new and already stored information, an important process for reasoning, comprehension, learning, and memory updating. Working memory is generally used synonymously with short-term memory, but this depends on how the two forms of memory are defined.[3] Working memory includes subsystems that store and manipulate visual images or verbal information, as well as a central executive that coordinates the subsystems. It includes, for example, the visual representation of possible moves in a game, and awareness of the flow of information into and out of memory, all stored for a limited amount of time.[4]

In 1974, Baddeley and Hitch[5] introduced and popularized the multicomponent model of working memory. This theory proposes a central executive that, among other things, is responsible for directing attention to relevant information, suppressing irrelevant information and inappropriate actions, and coordinating cognitive processes when more than one task must be done at the same time. The central executive supervises the integration of information and coordinates two "slave systems" responsible for the short-term maintenance of information. One slave system, the phonological loop (PL), stores phonological information (that is, the sound of language) and prevents its decay by continuously articulating its contents, thereby refreshing the information in a rehearsal loop. It can, for example, maintain a seven-digit telephone number for as long as one repeats the number to oneself again and again. The other slave system, the visuo-spatial sketchpad, stores visual and spatial information. It can be used, for example, for constructing and manipulating visual images and for representing mental maps. The sketchpad can be further broken down into a visual subsystem (dealing with, for instance, shape, colour, and texture) and a spatial subsystem (dealing with location).
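
A toy simulation can make the rehearsal-loop idea concrete. The sketch below is purely illustrative: the decay constant, rehearsal rate, and threshold are assumptions for demonstration, not parameters of Baddeley and Hitch's model, which is verbal rather than computational. Items fade unless inner speech re-articulates them in time, so a short list survives while a long one loses items.

```python
# Toy illustration of a phonological loop: items decay unless rehearsed.
# All constants are illustrative assumptions, not part of the published model.

def simulate_loop(n_items, steps=10, rehearse_per_step=3,
                  decay=0.2, threshold=0.1):
    """Return the indices of items still above threshold after `steps`."""
    activation = [1.0] * n_items
    cursor = 0
    for _ in range(steps):
        # Inner speech can only re-articulate a few items per time step.
        for _ in range(rehearse_per_step):
            activation[cursor % n_items] = 1.0
            cursor += 1
        # Every item fades a little each time step.
        activation = [max(0.0, a - decay) for a in activation]
    return [i for i, a in enumerate(activation) if a > threshold]

print(simulate_loop(7))    # short span: every item is re-rehearsed in time
print(simulate_loop(14))   # long span: some items decay before their turn
```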

In 2000, Baddeley extended the model by adding a fourth component, the episodic buffer, which holds representations that integrate phonological, visual, and spatial information, and possibly information not covered by the slave systems (e.g., semantic information, musical information). The component is episodic because it is assumed to bind information into a unitary episodic representation. The episodic buffer resembles Tulving's concept of episodic memory, but it differs in that the episodic buffer is a temporary store.[6]

Models

Feature model

The feature model was first described by Nairne (1990).[2] Its central idea is that the same cue-driven retrieval process serves both short-term and long-term memory. Cues become associated with a memory at encoding and can later be used to retrieve it from long-term storage. When a memory is formed, the cues associated with it form a constellation of neural networks in which the memory is stored. While the cues are present, the networks in the constellation are active and the memory is being worked on in short-term memory; when the cues are no longer present, the constellation becomes inactive and the memory waits in long-term storage until the cues are once again present. Rather than appealing to decay to explain how memories are forgotten, the feature model relies on item-based interference: other items that use the same or similar constellations of cues compete with, and block, retrieval of the target memory. For example, a fruit is easy to recall from a list of electronics, but recalling one fruit from a list of other fruits proves far more difficult, because the competitors share its retrieval cues. This is the feature model's account of memory loss.
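
A minimal sketch can illustrate cue-driven retrieval and item-based interference. The feature sets and the similarity rule below are illustrative assumptions; Nairne's (1990) model uses weighted feature-mismatch calculations and a Luce-style choice rule, which this sketch only loosely approximates.

```python
# A minimal sketch of cue-driven retrieval with item-based interference.
# Feature sets and the similarity rule are assumptions for illustration.

def similarity(cue, item):
    """Proportion of cue features that the stored item shares."""
    return len(cue & item) / len(cue)

def recall_probability(cue, target, stored_items):
    """The target is sampled in proportion to its similarity among competitors."""
    total = sum(similarity(cue, item) for item in stored_items)
    return similarity(cue, target) / total

apple  = {"red", "round", "sweet", "edible"}
phone  = {"rectangular", "electronic", "glass"}
laptop = {"rectangular", "electronic", "keyboard"}
cherry = {"red", "round", "sweet", "stemmed"}

cue = {"red", "round", "sweet"}  # "which item was the fruit?"

# Among electronics the fruit's cues are nearly unique, so recall is easy.
print(recall_probability(cue, apple, [apple, phone, laptop]))  # 1.0
# Among other fruit the same cues match a competitor: interference.
print(recall_probability(cue, apple, [apple, cherry]))         # 0.5
```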

OSCAR model

The Oscillator-Based Associative Recall (OSCAR) model was proposed by Brown, Preece and Hulme in 2000.[7] OSCAR is another cue-driven model of memory. In this model, cues work as pointers to a memory's position in the mind. Memories themselves are stored as context vectors generated by what Brown calls the oscillator component of the theory. While a memory is not being used in short-term memory, its context vector oscillates further away from the starting position. When a cue becomes present, the context vector that the cue points to is oscillated back to the starting point, where the memory can again be used in short-term memory. There is no decay in this model; to account for memory loss or fading, the theory states that memories drift further and further from the starting point, and retrieval becomes more difficult the further a memory is from that starting position. Furthermore, some memories oscillate faster than others: memories that are frequently accessed and used drift away from the starting point more slowly, since the probability of their being retrieved is relatively high. This contrasts with the feature model described above.
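
The oscillator idea can be sketched as follows. The oscillator frequencies, the cosine-similarity match, and the encoding scheme below are illustrative assumptions; the published OSCAR model is considerably richer. Each item is bound to the state of a bank of oscillators at its study time, and retrieval succeeds to the extent that a cue reinstates a context close to the original one.

```python
# A minimal sketch of oscillator-based context, loosely after Brown,
# Preece & Hulme (2000). All frequencies and the matching rule are
# illustrative assumptions, not the published model's parameters.
import math

FREQS = [0.11, 0.23, 0.41, 0.67, 0.89]  # assumed oscillator frequencies

def context(t):
    """Context vector: the state of every oscillator at time t."""
    return [math.sin(f * t) for f in FREQS]

def match(a, b):
    """Cosine similarity between two context vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Encoding: each item is bound to the context state at its study time.
store = {t: (item, context(t))
         for t, item in enumerate(["cat", "dog", "fox"], start=1)}

# Retrieval: a cue reinstates an approximate study-time context, and the
# item whose stored context best matches the probe is recalled. The
# further the probe drifts from the original context, the harder this is.
probe = context(2.1)  # cue points near the second item's encoding time
best = max(store.values(), key=lambda rec: match(probe, rec[1]))
print(best[0])  # "dog"
```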

Differences

Unlike the standard models of short-term/working memory, unitary theories do not assume a direct connection between activation level and memory success. They also assign no special role to rehearsal, and they focus on item-based interference rather than memory decay.[8] Unitary theories also generally have only one loop, or no loop at all, whereas the working memory model relies on multiple loops to explain the different forms of memory input. Because of its multiple loops, the working memory model has a more difficult time explaining how short-term memory is consolidated into long-term memory, whereas unitary theories generally explain long-term memory with the same single principle they use to explain short-term memory. On the other hand, unitary theories have more trouble explaining phenomena specific to short-term memory, such as the word-length effect,[9] which the working memory model explains more easily.
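
The contrast between decay-based and interference-based forgetting can be made concrete with a toy comparison. Both forgetting rules below, and all of their constants, are illustrative assumptions rather than equations from any published model: the decay account ties recall to elapsed time alone, while the interference account ties it to the number of similar competing items.

```python
# Toy contrast between decay-based and interference-based forgetting.
# Both rules and all constants are illustrative assumptions.

def recall_under_decay(seconds_elapsed, rate=0.05):
    """Decay account: success depends only on elapsed time."""
    return max(0.0, 1.0 - rate * seconds_elapsed)

def recall_under_interference(n_similar_items, strength=1.0):
    """Interference account: success depends only on competing items."""
    return strength / (strength + n_similar_items)

# Decay predicts loss after a silent 10-second delay, even if nothing
# else was studied...
print(recall_under_decay(10))        # 0.5
# ...whereas interference predicts no loss without competitors, but
# steep loss as similar items pile up.
print(recall_under_interference(0))  # 1.0
print(recall_under_interference(9))  # 0.1
```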

Criticisms

The most prevalent criticism of the unitary models is that they oversimplify many complex processes into a single mechanism. Studies have shown that short-term memory and long-term memory are distinct processes that involve different levels of activation across different cortical areas. Furthermore, the rate of decay is much faster in short-term memory than in long-term memory, a phenomenon the OSCAR model in particular does not account for. OSCAR also assumes that memories are always circulating among different areas of the brain, never having a definite cortical location in which the memory is stored. While this transfer of storage is occurring, the rate of decay in the OSCAR model is either constant or non-existent. Both rates are implausible: a constant rate of decay neglects all external factors that may accelerate decay (e.g., stress, or attention focused on other things), while positing no decay at all is even more implausible, as it implies both that memories are never forgotten and that, when information is remembered, it is remembered perfectly every time. While the unitary theories of memory draw attention to factors that the working memory model initially overlooked, new data have since surfaced and that model has adapted to the new findings. It remains the standard model of how short-term memory works, and it will continue to be until new evidence overturns it.

References

  1. ^ Cowan, N. (1995). Attention and memory: An integrated framework. Oxford Psychology Series #26. New York: Oxford University Press.
  2. ^ a b Nairne, J. S. (1990). "A feature model of immediate memory". Memory & Cognition. 18 (3): 251–269.
  3. ^ Cowan, Nelson (2008). "What are the differences between long-term, short-term, and working memory?". Prog. Brain Res. 169 (169): 323–338. doi:10.1016/S0079-6123(07)00020-9. PMC 2657600. PMID 18394484.
  4. ^ Schacter, Daniel (2011) [2009]. Psychology Second Edition. United States of America: Worth Publishers. p. 227. ISBN 978-1-4292-3719-2.
  5. ^ Baddeley, Alan D.; Hitch, Graham (1974). "Working Memory". In Gordon H. Bower (ed.). The Psychology of Learning and Motivation. Vol. 8. Academic Press. pp. 47–89. doi:10.1016/S0079-7421(08)60452-1. ISBN 978-0-12-543308-2. OCLC 777285348.
  6. ^ Baddeley, A. D. (2000). "The episodic buffer: a new component of working memory?" (PDF). Trends Cogn. Sci. 4 (11): 417–423. doi:10.1016/S1364-6613(00)01538-2. PMID 11058819. S2CID 14333234.
  7. ^ Brown, G. D. A.; Preece, T.; Hulme, C. (2000). "Oscillator-based memory for serial order". Psychological Review. 107 (1): 127–181.
  8. ^ Nairne, J. S. (2002). "Remembering over the short-term: The case against the standard model". Annual Review of Psychology. 53: 53–81.
  9. ^ Nairne, J. S.; Neath, I. (1997). "Proactive interference plays a role in the word-length effect". Psychonomic Bulletin & Review. 4 (4): 541–545.