Effects of Differential Reinforcement on Discriminative Behavior



An experiment was conducted to examine the effects of primary reinforcement on discriminative behavior. The experiment was designed to determine whether behavior in the reinforced condition would increase and behavior in the non-reinforced condition would decrease. Eight rats of the Sprague-Dawley strain were conditioned in Skinner boxes. The rats were habituated on the first day, during which the operant level of bar pressing was recorded. This was followed by 6 days of shaping and, once asymptote was reached, 7 days of discrimination training during which the rats were introduced to the lights. A control session was run on the last day. Several t-tests were run, which found support for the hypothesis, t(7) = 10.97, p < .01. The results can be explained by Thorndike's three laws and are consistent with the findings of Dinsmoor (1950), Smith and Hoy (1954), and Herrick, Myers, and Korotkin (1959).
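The abstract reports a paired comparison across the 8 rats, t(7) = 10.97. As a minimal sketch of how such a paired-samples t statistic is computed (the response rates below are hypothetical illustrations, not the study's data):

```python
import math

# Hypothetical per-rat response rates (responses per minute) for 8 rats;
# the study's raw data are not reproduced here.
sd_rates     = [21.0, 18.5, 24.0, 19.5, 22.0, 20.5, 23.0, 17.5]  # SD (reinforced)
sdelta_rates = [ 4.0,  5.5,  3.0,  6.0,  4.5,  5.0,  2.5,  6.5]  # S-delta (non-reinforced)

def paired_t(xs, ys):
    """Paired-samples t statistic and degrees of freedom (n - 1)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample variance of the within-subject differences
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var_d / n)  # standard error of the mean difference
    return mean_d / se, n - 1

t, df = paired_t(sd_rates, sdelta_rates)
print(f"t({df}) = {t:.2f}")
```

With 8 subjects the test has 7 degrees of freedom, matching the t(7) reported in the abstract; the resulting t value here depends entirely on the invented rates.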


Investigating discriminative behavior in rats has been the topic of many research projects and reports over the years. From the study of infrahuman subjects, much has been generalized and applied to humans and to the betterment of the human condition.

Almost 100 years ago, Thorndike conducted experiments with cats (Keller and Schoenfeld, 1950), in which he used fifteen different forms of a "problem box," each representing a different problem requiring solution. A cat was dropped into the box and was then required to manipulate a release mechanism in order to escape and obtain a small bit of food. The hungry cat operated the release mechanism, thereby gaining freedom and access to food, after which the experimenter would immediately put the cat back in the box for another trial. The first trial often took up to 10 minutes, but after as few as five trials the cat had effectively solved the problem and was able to escape in under 30 seconds. From his experiments, Thorndike generalized three laws. The Law of Effect states that a response made in a situation and closely followed by satisfaction will be more likely to recur, while a response followed by discomfort will be less likely to recur; the animal thus makes a connection between the response and its consequences. The Law of Exercise states that connections between stimulus and response are strengthened through repetition and weakened through disuse. The Law of Readiness states that if an animal is motivated, it will be more likely to respond to the stimulus and provide the required response.

Dinsmoor (1950) conducted an experiment using male rats as subjects, in which the rats were trained to discriminate between a light-on and a light-off condition. The rat was to press a bar for reinforcement during the SD condition and to refrain from pressing during the SΔ condition. SD is defined as the condition in which the operant behavior is reinforced, and SΔ as the condition in which reinforcement is not available. To prevent the subject from being accidentally reinforced immediately after SΔ, SD was not allowed to begin within 30 seconds of a response made during the SΔ condition. Dinsmoor found that responses in the SD condition increased, while responses in the SΔ condition decreased.

Dinsmoor (1952) conducted an experiment to investigate the retention or loss of a learned behavior over time. White rats were given discrimination training in which they were to bar press for food during SD and to refrain from bar pressing during SΔ. After the discrimination training was completed, the rats were given a rest period of 30 weeks. At the end of the rest period, the rats were again reduced to 85% of their normal body weight and tested for retention of the discrimination. There was some loss of discrimination, shown by a decrease in the rate of bar pressing during SD, an increase in the rate of bar pressing during SΔ, and some decline in total responding. Dinsmoor concluded that a well-established discrimination may be largely retained over a period of 30 weeks, and that the discrimination is not lost through the mere passage of time.

In an experiment by Smith and Hoy (1954), discrimination training was given to 24 white rats. They set out to test the hypothesis that, as a discrimination forms, responses shift from SΔ to SD, such that an increase in the rate of response during SD is accompanied by a corresponding decrease in the rate of response during SΔ. They ran the subjects through 27 days of discrimination training and 33 days of reverse discrimination, and found that, as the discrimination formed, the overall rate of responding remained constant: as responses during SΔ periods declined, responses during SD increased.

Herrick, Myers, and Korotkin (1959) conducted an experiment to test the hypothesis presented by Smith and Hoy (1954), who had found that the total number of responses in a discrimination session remained constant. Herrick et al. (1959) conducted discrimination training on 8 male white rats for 40 days. Their results disagreed with those obtained by Smith and Hoy (1954): Herrick et al. (1959) found that the rate of responding doubled over the 40 sessions. From these data they concluded that the rate of responding was not constant but actually increased.

Egger and Miller (1962) conducted an experiment to determine the effects of secondary reinforcement. They hypothesized that, in the presence of two stimuli, the stimulus that is the more reliable predictor of the primary reinforcer would become the secondary reinforcer. To test this hypothesis they conducted an experiment using 88 male rats as subjects. Half the subjects received a stimulus that always preceded the delivery of primary reinforcement, with a second stimulus that was redundant. The other half were conditioned to a stimulus that predicted primary reinforcement only if it was followed by a second stimulus. Their results showed that the second stimulus was a stronger secondary reinforcer when it was informative than when it was redundant.

Based on the above review, it is hypothesized that if a behavior is reinforced in one condition and not reinforced in another condition, then the behavior in the reinforced condition will increase while behavior in the non-reinforced condition will decrease.
