Theories of human learning and memory

Theories of human learning and memory tend to emphasise one of two approaches. Although the majority of theorists would acknowledge the importance of both in the acquisition, storage and retrieval of information, models tend to focus on either the structures or the processes involved in memory.

The structural approach is typified by the multi-store (or modal) model of memory expounded by several theorists (e.g. Atkinson & Shiffrin, 1968, cited in Eysenck and Keane, 1995). This model proposes three types of memory store (sensory, short-term and long-term), with information being transferred from one store to another via the mechanisms of attention and rehearsal. The model is represented in Figure 1, taken from Eysenck and Keane (1995, p. 125).


Figure 1 The multi-store model of memory

Much of the evidence for this model came from serial position studies (e.g. Glanzer & Cunitz, 1966, cited in Eysenck and Keane, 1995) and from studies of memory-impaired patients (e.g. Baddeley & Warrington, 1970, cited in Eysenck and Keane, 1995). Although it has been demonstrated that these memory stores differ from each other in terms of temporal duration, capacity, forgetting mechanisms and the effects of brain damage, as Eysenck and Keane (1995) point out, the model is over-simplified, particularly in its view of the memory stores as unitary and its over-emphasis on the role of rehearsal in the transfer of information from short-term to long-term memory. The approach has also been criticised for its concentration on the structure of memory, with a concomitant lack of attention to the processes involved (Eysenck and Keane, 1995).

In contrast to the above, Craik and Lockhart (1972, cited in Eysenck and Keane, 1995) focused on the processes involved in long-term memory, an approach known as ‘levels of processing’. According to this framework, the depth (or level) of processing conducted on material determines the strength of the memory trace laid down in long-term memory, which in turn determines subsequent recall ability. It is neither the time spent processing the stimuli nor the amount of rehearsal of the information that defines ‘depth’, but rather the meaningfulness of the processing applied to the stimuli. Craik and Lockhart (1972, cited in Eysenck and Keane, 1995) propose different levels of processing, ranging from analysis of the physical attributes of the stimuli (shallow processing) to semantic analysis (deep processing).

The research generated by this framework has, in general, produced findings in line with predictions (e.g. Craik and Lockhart, 1972; Hyde and Jenkins, 1973; Craik and Tulving, 1975, all cited in Eysenck and Keane, 1995). The most commonly used methodological approach has been to engage participants in an incidental memory experiment involving orienting tasks which manipulate the depth of processing required of the material to be learned. For example, participants might be presented with a list of words and asked to state whether they are in upper- or lower-case letters (shallow processing), to provide words that rhyme with the words on the list (intermediate processing), or to state whether the words belong to specific categories (semantic or deep processing). Memory may be assessed by either free recall or recognition tasks.

Several other processing factors, apart from depth, have been identified as important in influencing long-term memory. Craik and Tulving (1975, cited in Eysenck and Keane, 1995) demonstrated that manipulating the amount of elaboration required of participants, by changing the complexity of sentences in a semantic task, significantly affected cued recall. Distinctiveness of material has also been shown to significantly increase recognition scores (Eysenck and Eysenck, 1980, cited in Eysenck and Keane, 1995).

One of the major criticisms of the levels of processing approach has been the circular nature of its argument. Depth of processing determines the strength of the memory trace, which determines ability to remember; but the lack of an independent measure of depth means that the amount remembered can be used to define the depth at which the information has been processed. Craik and Lockhart (1972, cited in Eysenck and Keane, 1995) argued that semantic processing is the deepest level of processing and thus should provide the best memory performance. In contradiction to this, Rogers, Kuiper and Kirker (1977, cited in Eysenck and Keane, 1995) demonstrated that self-referent processing led to significantly higher recall than not only structural and phonemic processing but also semantic processing. However, Klein and Kihlstrom (1986, cited in Eysenck and Keane, 1995) found that the superiority of self-referent over semantic processing was eliminated when the organisation of the material was controlled.

Given the above research findings, whether there is a level of processing deeper than semantic processing remains open to debate. It could be argued that mental imagery might provide such a level of processing: in order to form a mental image of a word, not only would the meaning of the word have to be accessed, but additional information about how the word relates to and differs from other concepts would also have to be available.

The aim of the present study is to investigate the levels of processing framework via an incidental learning paradigm, with the inclusion of an imagery condition. If mental imagery is a deeper level of processing, then participants in this condition should exhibit higher recognition scores than those in a semantic condition, who in turn should perform better than participants completing a rhyming orienting task.

Participants

One hundred and forty-two second year undergraduate psychology students from a research methods class participated in this experiment.

Materials

Each participant was seated in front of a PC running MS-DOS. Ninety-six commonly used words were flashed on the screen; 48 of these words were presented twice.

Design

The experiment used an independent measures design. The independent variable was the condition to which the participant was assigned: shallow, imageability or semantic. The dependent variable was d′ (d prime), a measure of how precisely the participant could differentiate the original words from new words.
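The d′ measure combines the hit rate (old words correctly recognised) and the false-alarm rate (new words wrongly called old) into a single sensitivity score, so that a bias towards responding ‘yes’ does not inflate a participant's score. A minimal sketch of the standard signal-detection computation is given below; the function name, example counts and the log-linear correction are illustrative choices, not details taken from the present study.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Compute d' (sensitivity) from raw recognition counts.

    Uses the common log-linear correction (add 0.5 to each cell)
    so that rates of exactly 0 or 1 do not give infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical participant: 40 of 48 old words recognised,
# 10 of 48 new words falsely called "old".
score = d_prime(40, 8, 10, 38)
```

A participant who cannot discriminate at all (equal hit and false-alarm rates) scores d′ = 0, however often they respond ‘yes’, which is why d′ is preferred over a simple count of correct responses in recognition designs such as this.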

Procedure

The participants were split into three groups, each group undertaking a different condition. The first group were told to categorise the words shown, which constitutes semantic processing; for example, a cat would be categorised as an animal. The second group were told to rate each word on how easily it could be visualised, so cat, for example, would have high imageability. The third group were told to match each word presented on screen with a rhyming word of their own, so cat would rhyme with hat, pat, fat, etc.; this is a form of shallow processing.

After the set of words had been presented for the first time, there was an interval during which participants were asked to evaluate the experiment on an 8-point rating scale. This served as a distractor task before the second part of the experiment.

In the second part of the experiment, the participants were presented with the same set of 48 words, mixed with 48 new words. The aim of the second task was for participants to differentiate the new words from the original set. The new words were included as foils, to prevent participants from simply responding ‘yes’ continuously and thereby obtaining a perfect score.

At the end of the experiment, participants were informed of the design and purpose of the experiment.