The Yin and Yang of Reinforcement
Nearly every behavior analyst who works to help people change focuses on the use of positive reinforcement to do so. Positive reinforcement is, indeed, one of the premier principles guiding our work (see, e.g., Aubrey Daniels’s important book, Bringing Out the Best in People). Reinforcement starts with delivering some event dependent on a response. If, as a result, the response becomes more likely thereafter, the event is said to be a reinforcer and the outcome defines positive reinforcement.
Missing from this tale is the fact that much of the effect of reinforcers on behavior comes not just from their being reinforcers, but from the context in which the response-reinforcer dependency is created, that is, from the schedule of reinforcement. The same dependency can have quite different effects on behavior depending on whether the reinforcer is scheduled, for example, after fixed or variable numbers of responses or after fixed or variable time intervals. In fixed-interval schedules, for example, periods of no responding (immediately after a response is reinforced) alternate with accelerating response rates as the time for the next response to be reinforced approaches. In variable-interval schedules, by contrast, the same reinforcer produces more regular responding at moderate rates. There are many different schedules of reinforcement, and each produces its own characteristic behavioral effects.
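The scheduling rules just described are procedural enough to sketch in code. The following is an illustration only, not part of the original discussion: the function names and the random spacing used for the variable-interval schedule are my assumptions. Each function returns a checker that decides, response by response, whether that response produces the reinforcer.

```python
import random

def fixed_ratio(n):
    """Reinforce every nth response (FR n); FR 1 is 'continuous reinforcement'."""
    count = 0
    def check(response_time):
        nonlocal count
        count += 1
        if count >= n:
            count = 0
            return True
        return False
    return check

def fixed_interval(t):
    """Reinforce the first response after t time units have elapsed (FI t)."""
    next_time = t
    def check(response_time):
        nonlocal next_time
        if response_time >= next_time:
            next_time = response_time + t
            return True
        return False
    return check

def variable_interval(mean_t, rng=random.Random(0)):
    """Like FI, but each required interval varies around a mean (VI mean_t).

    Exponentially distributed intervals are an assumption for illustration.
    """
    next_time = rng.expovariate(1 / mean_t)
    def check(response_time):
        nonlocal next_time
        if response_time >= next_time:
            next_time = response_time + rng.expovariate(1 / mean_t)
            return True
        return False
    return check
```

Under `fixed_interval`, a response just after reinforcement is never reinforced, which is why responding pauses there; under `variable_interval`, the next reinforcer may become available at any moment, which is consistent with the steadier responding described above.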
Reinforcement schedules also ensure more “bang for the buck.” In many early applications, each instance of the target response was reinforced (technically a fixed-ratio 1 schedule, but more commonly, albeit imprecisely, called “continuous reinforcement”). This proved to be a less than optimal arrangement. Satiation with the reinforcer occurs quickly, rendering it ineffective. Also, practically speaking, unless there is an endless supply, delivering a reinforcer after every instance quickly depletes the stock. Perhaps most importantly from a treatment perspective, discontinuing reinforcement (what is called “extinction”) after continuous reinforcement leads to rapid deterioration of the target behavior, much more rapid than is observed when reinforcement is scheduled intermittently.
It isn’t possible to overstate the importance of the schedule in determining the effect of reinforcement. In fact, talking about reinforcement without talking about how the reinforcement is scheduled is a little like talking about using an expensive fountain pen without considering the ink required to make it work, or talking about plowing a field with a tractor without considering the plow needed to turn the earth. Reinforcement makes a response more likely in the future, but it does so in the context of the circumstances, that is, the schedule, that allow the target response to produce the reinforcer.
Prominent behavior analysts have labeled schedules of reinforcement “fundamental determinants of behavior.” One even called them the “amino acids of behavior,” acknowledging their essential role in understanding how reinforcement has its behavioral effects. Like the complementary and inseparable forces of yin and yang, reinforcers and the schedules by which they are delivered are inextricably linked. It is impossible to imagine one without the other.