






At one time it was believed that thought was instantaneous and therefore attempts to measure its speed were futile. This belief was overthrown in 1850, when Hermann von Helmholtz measured the speed of neural transmission and showed that nerve impulses travel at speeds that are not particularly fast (around 30 m/s, or roughly 110 kilometers per hour). Because the physical substrate of thought is presumably based in the nervous system, Helmholtz's measurement led people to believe that thoughts might also require a measurable amount of time to take place. With his measurement, known as the subtraction method, Helmholtz also introduced an important idea to psychological research: comparisons between carefully selected conditions might provide relatively direct measurements of otherwise inaccessible processes. Although contemporary psychologists now usually prefer a modification of the subtraction method known as the additive factors method^1 (introduced by Ulric Neisser, Saul Sternberg, and other cognitive psychologists in the 1950s and 1960s), the subtraction method still plays a key role in brain imaging studies of psychological functioning.

(^1) The name additive factors method was coined by Sternberg in 1972.
The subtraction method. The basic idea of the subtraction method for reaction time is that a measure of the duration of a particular process can be found by obtaining two measurements of time that include the process and subtracting one from the other. The method applies to situations in which time can be directly measured for the completion of a physically inseparable series of events that each take time but cannot be measured individually. The challenge solved by the subtraction method is to measure the time taken by one or more of these distinct but inseparable events.

For example, deciding which of two racers is the faster runner is not the same as deciding who can run the faster race (the latter is determined by who finishes the race first). The time to finish the race depends on the time for the runner to respond to the starter pistol, get out of the starting blocks, reach top running speed, and then cross the finish line. As competitive racers know, if one racer gets out of the blocks or reaches top running speed more quickly than another, that racer can win the race even if the other person has a higher top running speed. Thus, just timing the race does not provide a pure measure of running speed: the timed race mixes non-running components with running components in a continuous flow of activity. The subtraction method eliminates the non-running components from the total time and allows running speed to be measured with nothing more than a stopwatch, activated when the starter pistol sounds and halted when the runner crosses the finish line.

In the case of the runner, the critical ingredient for the subtraction method is the ability to make two measurements that differ only because of running speed. This can be achieved by measuring the time for the runner to cross a finish line placed at, for example, 90 meters (m), as well as the time for the runner to cross a finish line placed at 100 m. Comparing these two times provides a relatively pure measure of running speed, because the time to reach 90 m differs from the time to reach 100 m only because of the additional distance. Suppose the time to cross the 90 m mark is 9.2 seconds (s), and the time to reach the finish line at 100 m is 10 s. The running speed is the distance covered (100 m - 90 m = 10 m) divided by the time to cover that distance (10 s - 9.2 s = 0.8 s). The resulting measurement (10 m / 0.8 s = 12.5 m/s, or about 28 mph) is the top running speed.

Helmholtz was the first person to use this method to measure the speed of the nerve impulse. He placed a stimulating electrode at one end of a sciatic nerve (dissected from a frog), placed a recording electrode on the nerve about 4 cm away, and recorded the time for the nerve impulse triggered by his stimulation to be detected by the recording electrode. Then he moved the recording electrode a little farther from the stimulating electrode, stimulated the nerve a second time, and recorded the time for this impulse to be detected. By subtracting the detection time when the electrode was 4 cm from the stimulator from the detection time when the electrode was farther away, Helmholtz arrived at a measure of the transmission speed of the nerve impulse: about 30 m/s.

Hermann von Helmholtz (1821-1894)
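To make the arithmetic concrete, here is a short Python sketch of the subtraction logic. The runner numbers are the ones given above; the electrode distances and times in the nerve-impulse example are hypothetical, chosen only to reproduce the roughly 30 m/s figure.

```python
def speed_by_subtraction(d_near, t_near, d_far, t_far):
    """Subtraction method: speed over the extra distance only.

    The shared, individually unmeasurable components (responding to the
    starter pistol, stimulation latency, and so on) cancel out when the
    two timings are subtracted.
    """
    return (d_far - d_near) / (t_far - t_near)

# Runner example from the text: 90 m in 9.2 s, 100 m in 10.0 s
top_speed = speed_by_subtraction(90, 9.2, 100, 10.0)
print(f"Top running speed: {top_speed:.1f} m/s")    # 12.5 m/s (~28 mph)

# Helmholtz-style nerve measurement (hypothetical distances and times,
# in meters and seconds, chosen to give roughly 30 m/s)
v_nerve = speed_by_subtraction(0.04, 0.0020, 0.07, 0.0030)
print(f"Nerve conduction speed: {v_nerve:.0f} m/s")  # 30 m/s
```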
The Dutch physiologist F. C. Donders extended this logic to mental processes, reasoning that the time taken by a mental operation could be found by comparing the times for tasks that do and do not require that operation, in other words, by subtraction. In equation form, Donders proposed the following:

Simple Reaction Time = Signal Perception + Motor Response

Go/No Go Reaction Time = Signal Perception + Stimulus Discrimination + Motor Response

Choice Reaction Time = Signal Perception + Stimulus Discrimination + Response Choice + Motor Response
Subtracting the time for the Go/No Go task from the time for the Choice Reaction Time task, Donders believed, provided a pure measure of the response choice time. Similarly, he believed that subtracting the time for the Simple Reaction Time task from the time for the Go/No Go task gave a pure measure of stimulus discrimination time.
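These subtractions can be written out directly. In the short Python sketch below, the three reaction times are hypothetical values of the sort such an experiment might produce; only the subtraction logic comes from the text.

```python
# Hypothetical mean reaction times (seconds) for Donders' three tasks
simple_rt = 0.200     # signal perception + motor response
go_no_go_rt = 0.250   # adds stimulus discrimination
choice_rt = 0.285     # adds response choice

# Donders' subtractions isolate the added components
discrimination_time = go_no_go_rt - simple_rt   # 0.050 s
choice_time = choice_rt - go_no_go_rt           # 0.035 s

print(f"Stimulus discrimination time: {discrimination_time * 1000:.0f} ms")
print(f"Response choice time:         {choice_time * 1000:.0f} ms")
```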
Problems with subtraction. As a practical matter, Donders’ method ran into a serious problem when some studies found average times for the choice reaction task that were shorter than the average times for the stimulus discrimination task. This result is completely inconsistent with the idea that the choice task incorporates the stimulus discrimination task. The major problem with Donders’ version of the subtraction method appears to be the fact that, with repetition on almost any task, people become experts on it, and expert performance is simply not comparable to the performance of novices. Thus, unless the degree of expertise has been equated for a simple reaction time task and a choice reaction time task, the time for completing one task will bear no meaningful relation to the time for completing the other.
The additive factors method. To get around the problems created by comparing one task to another in the subtraction method, contemporary psychologists have developed the additive factors method. As in Donders' analysis, the choice reaction time task is considered to be a complex task that includes multiple underlying component processes. To measure these processes, however, the additive factors method relies not on different tasks, but on different variations of the choice reaction time task. Thus, the task does not change, but the specific information required for different choices does. The logic of the additive factors method involves three assumptions about measurement.
An illustration of the additive factors method: Visual scanning. Looking for a friend in a crowded hall, we scan our visual field, shifting our attention from face to face, region to region, until we find the target of our search. The time to shift attention from one face to another is an example of an internal process that cannot be directly observed but which can be measured using the additive factors method. To illustrate the method, this section will describe the logic and the steps involved in measuring scanning time, based on a laboratory task in which the subject looks for a target letter in displays of letters that may or may not contain it.

Suppose, for example, that the subject has to report whether an "N" is present in a display of 4 letters, pressing a key marked "Present" if it is found and a key marked "Absent" if it is not. The observed reaction time can be written as the sum of the times for encoding the display, scanning it, deciding which response to make, and making the movement, as shown in Equation 7-1:

RT4 = ET4 + ST4 + DT4 + MT4 (Eq. 7-1)
Now, suppose the task is changed so that the subject has to find an "N" among a set of 16 letters, but that in all other respects the task remains the same as the 4-letter task just described. The subject would scan a display of 16 letters until finding an "N" and pressing the response key marked "Present", or not finding the letter and pressing the key marked "Absent". The observed RT can likewise be calculated from the times for the steps of encoding, scanning, deciding, and making a movement, as shown in Equation 7-2:

RT16 = ET16 + ST16 + DT16 + MT16 (Eq. 7-2)
Common sense leads us to expect that finding an "N" among 16 letters is harder than finding it among 4 letters, hence RT16 is expected to be greater than RT4. But, in terms of these equations, where does the additional time come from? That is, which of the 4 separate steps takes more time when the display contains 16 instead of 4 items? Consider the encoding time. If "encoding time" refers to nothing more than the time to convert the visual display from visual energy to neural activation, then encoding time should not depend on display size, and ET16 = ET4. Similarly, if movement time refers to just the time to initiate a movement, then movement time also should not depend on display size, and MT16 = MT4. Finally, if "decision time" refers to the time to decide what overt response to make, then decision time should not depend on display size either, and DT16 = DT4. According to this argument, the only step that takes more time with a 16-letter display than with a 4-letter display is the scanning time: there are more letters to scan with 16 letters than with 4, and, if scanning a single item takes a fixed amount of time, then more time is needed to scan more items. In other words, if the reaction time for the 4-item display is subtracted from the reaction time for the 16-item display, the difference, RT16 - RT4, is a direct function of the extra time needed to scan 16 as compared to 4 items:

RT16 - RT4 = ST16 - ST4 (Eq. 7-3)
Assuming that the difference, ST16 - ST4, is due simply to the time to scan an additional 12 items in the 16-item display, this equation leads to a direct measure of the time to scan those 12 items:

ST16 - ST4 = ST12 (Eq. 7-4)
Furthermore, assuming that scanning each item takes a fixed amount of time, the time to scan 12 items is simply the scanning time for 1 item multiplied by 12:
ST12 = ST1 x 12 (Eq. 7-5)
Rearranging and dividing by 12 gives the following equation for the time to scan a single item:

ST1 = ST12 / 12 = (RT16 - RT4) / 12 (Eq. 7-6)
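As a worked illustration, the short sketch below carries out this calculation for a pair of hypothetical reaction times; the values are invented for the example, and only the subtraction-and-division logic comes from the equations above.

```python
# Hypothetical mean reaction times (ms) for the two display sizes
rt_4 = 620    # 4-letter displays
rt_16 = 980   # 16-letter displays

# Encoding, decision, and movement times are assumed not to depend on
# display size, so the RT difference reflects scanning 12 extra items.
st_12 = rt_16 - rt_4    # time to scan 12 additional items: 360 ms
st_1 = st_12 / 12       # time to scan a single item: 30 ms

print(f"Scanning time per item: {st_1:.0f} ms")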
This example shows how to measure the scanning time for a single item using two different display sizes. Including more display sizes, such as displays with 8 items, permits additional measurements of scanning time. If displays with 8 letters were included, then, following the argument just presented for displays of 4 and 16 letters, it would be possible to calculate an ST8 from RT16 and RT8 and an ST4 from RT8 and RT4, yielding additional estimates of ST1. Presented with several different estimates of ST1, we need a way to choose the "best" estimate.

Using slope to measure processing speed. As an illustration of how to get a best estimate, reaction time can be plotted as a function of display size; the slope of the straight line that best fits these points provides a measure of scanning time.

Figure 7-3. Schematic diagram of a time accuracy tradeoff.
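In the spirit of the slope idea just named, one way to combine all of the display sizes into a single estimate is to fit a straight line to reaction time as a function of display size and take its slope as the per-item scanning time. In the sketch below the reaction times are hypothetical, and ordinary least squares is just one reasonable choice of fitting method.

```python
import numpy as np

# Hypothetical mean reaction times (ms) for several display sizes
display_size = np.array([4, 8, 16])
mean_rt = np.array([620, 745, 980])

# Least-squares line: RT = slope * display_size + intercept
slope, intercept = np.polyfit(display_size, mean_rt, 1)

print(f"Scanning time per item (slope): {slope:.1f} ms/item")
print(f"Base time (intercept):          {intercept:.0f} ms")
```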
Speed-accuracy tradeoffs. The speed-accuracy tradeoff refers to the fact that everyone can "trade" speed for accuracy or vice versa. When it is important to do a task accurately, we tend to go more slowly, whereas when it is important to complete a task quickly, we are willing to accept some errors. The importance of this common-sense fact is not always fully appreciated when reaction time measures of performance are studied in research. One implication of the possibility of trading speed for accuracy, however, is that comparisons between two conditions on the basis of the time to complete a task are meaningful only when the error rates are the same (consider typing speed). The nature of the tradeoff is illustrated graphically in Figure 7-1, which plots accuracy on the y-axis as a function of time since the onset of a signal stimulus, such as a test probe. For a short initial period, no change in accuracy occurs, because a minimum amount of time is required before any information is available to guide a decision. Once some information has been extracted, accuracy starts to rise fairly quickly as the elapsed time continues to increase, but eventually the level of accuracy reaches a plateau, and further increases become smaller and smaller.
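The shape just described, flat at first, then rising quickly, then leveling off, can be summarized with a shifted exponential curve. The sketch below uses one such function purely to illustrate the shape of the tradeoff; the functional form and all parameter values are illustrative assumptions, not figures from these notes.

```python
import numpy as np

def sat_curve(t, asymptote=0.95, chance=0.5, rate=8.0, intercept=0.25):
    """Hypothetical speed-accuracy tradeoff: accuracy as a function of
    processing time t (s). Accuracy stays at chance until `intercept`,
    then rises toward `asymptote` at a speed set by `rate`."""
    t = np.asarray(t, dtype=float)
    rise = 1.0 - np.exp(-rate * np.maximum(t - intercept, 0.0))
    return chance + (asymptote - chance) * rise

for t in (0.2, 0.3, 0.5, 1.0):
    print(f"t = {t:.1f} s -> accuracy = {sat_curve(t):.2f}")
```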