Information Processing Theory

As I noted in my discussion of behaviorism, education theory has moved historically through three overriding theories: behaviorism, information processing, and constructivism. The emergence of the information processing model (often called the "cognitive" model) is sometimes referred to as the "cognitive revolution" within the fields of psychology and education. This view was certainly prominent during the time I was in graduate school (the late 1980s), and, in fact, I believe it is still safe to say it is in many ways the overriding view among experimental psychologists today. Within the field of education, however, this model is starting to fall out of favor. (We will talk more about this in the virtual lecture on "constructivism", the third and current metaphor.)

The birth of the cognitive model actually dates back to Edward Tolman's work with rats in the 1920s. Tolman, like most behaviorists of the time, worked with rats (since the view was that rats operate on a stimulus-response system that generalizes easily to humans). In one of his most famous experiments he had three groups of rats run a maze, placing every rat in the maze once a day for sixteen days. In group 1 the rats received food for running the maze correctly, and in group 3 they never received "reinforcement". Consistent with behaviorist theory, group 1 performed much better at running the maze (making almost no mistakes after 10 days). However, the interesting group was group 2. These rats received no "reinforcement" for 10 days; on day 11 they were given food for running the maze correctly. Naturally they did very poorly for the first 11 days, but a day after receiving this reinforcement, not only did they perform as well as the rats that had been reinforced every day, they even performed a little better! This was certainly inconsistent with classic behaviorist theory, in which behavior is thought to be "strengthened" every time it is reinforced, so the rats in the reinforce-every-day group should have performed much better than the group reinforced for only one day. Tolman's explanation for this was, to behaviorists, radical. He suggested that the rats in group 2 had developed a "cognitive map" through their daily exploration of the maze. He had thus used a non-observable process to explain behavior, anathema to the behaviorists, and at that point cognitive psychology was born, though it took many decades before it became the overriding model, replacing behaviorism, in psychology and education.

Not surprisingly, the cognitive or information processing model became much more popular with the advent of computers. At that point psychologists actually had a tool that represented non-observable processes in an observable fashion. The computer program, a clearly reliable measure, could serve to represent what was going on "internally" in the human brain. This is why the traditional information processing model of memory that you'll read about in the text sounds so much like a computer. Sensory memory is sometimes called the "sensory buffer" in that it is like a temporary buffer in a computer that stores information briefly (sort of like the cache in your web browser). Then comes short-term memory, which is very much like a file you're working on before you save it (so short-term memory is like RAM). And long-term memory is, of course, analogous to a hard drive.
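
For those who find it helpful to see the analogy spelled out, here is a minimal sketch in Python of the three-store model as the computer metaphor imagines it. This is purely illustrative: the class, the method names, and the capacity numbers are my own invention for this lecture, not part of the formal model.

# Purely illustrative sketch of the three-store (sensory / short-term / long-term) model.
# All names and numbers here are hypothetical; this is an analogy, not a psychological simulation.
from collections import deque

class ThreeStoreMemory:
    def __init__(self, stm_capacity=7):
        # Sensory register: holds raw input very briefly (like a cache or temporary buffer).
        self.sensory_buffer = deque(maxlen=50)
        # Short-term memory: a small working store (like RAM, or a file before you save it).
        self.short_term = deque(maxlen=stm_capacity)
        # Long-term memory: durable storage (like a hard drive).
        self.long_term = {}

    def perceive(self, stimulus):
        """Raw input lands briefly in the sensory buffer."""
        self.sensory_buffer.append(stimulus)

    def attend(self, stimulus):
        """Attention moves an item from the sensory buffer into short-term memory."""
        if stimulus in self.sensory_buffer:
            self.short_term.append(stimulus)  # the oldest item is displaced if the store is full

    def encode(self, stimulus, meaning):
        """Rehearsal/encoding 'saves' an item from short-term into long-term memory."""
        if stimulus in self.short_term:
            self.long_term[stimulus] = meaning

    def retrieve(self, cue):
        """Retrieval pulls an item from long-term memory back into short-term memory."""
        meaning = self.long_term.get(cue)
        if meaning is not None:
            self.short_term.append(cue)
        return meaning

Notice that in this analogy "forgetting" from short-term memory is simply displacement when the store is full, and "learning" is a save operation. That literalness is exactly what makes the computer metaphor appealing and, as the next paragraph argues, limited.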

Unfortunately, despite the promise of artificial intelligence and the information processing model for describing memory, cognitive scientists have come to recognize that the computer can serve only as a fairly general and loose model of human memory. First of all, of course, the computer is capable of working with an incredibly large amount of data quickly and with incredible accuracy. In fact, I think it's safe to say that, in this sense, computers have really surpassed expectations. Compared to a human, the computer is much more accurate. However, the computer also has interesting limitations when compared to a human. For example, if I type "logut" at the "saucer>" prompt in my Unix account, the computer (which is capable of doing incredibly intricate calculations in its "head" in milliseconds) responds "unknown command". Yet it would not take a very advanced human, having seen the term "logout" as many times as the computer has, to say "oh, I recognize that, he means 'logout'". Or, how about if I program a computer to recognize my face by storing every single infinitesimally tiny portion of my face as data in the computer. It matches that pattern to my face and says "that's Richard Hall". Then I turn slightly to the left and it says "who the heck is that?", whereas, again, it wouldn't take a very advanced human to recognize this was the same person. The reason is that we are efficient: we can make inferences quickly from incomplete data. Researchers within the field of artificial intelligence have struggled with this for decades and are only now beginning to solve it, and it has created a nearly intractable problem for those using the computer as a simulated model of human memory.
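
To make that contrast concrete, here is a small, purely illustrative sketch; the command list, the function names, and the "forgiving" matcher are my own for this example, not how a real Unix shell behaves.

# Illustrative only: an exact matcher (the literal-minded computer) versus an
# approximate matcher that behaves a little more like human inference.
import difflib

KNOWN_COMMANDS = ["logout", "login", "list", "mail", "help"]

def exact_shell(command):
    # Anything that is not an exact match is rejected.
    return command if command in KNOWN_COMMANDS else "unknown command"

def forgiving_shell(command):
    # Accept the closest known command, if any is reasonably similar.
    matches = difflib.get_close_matches(command, KNOWN_COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else "unknown command"

print(exact_shell("logut"))      # -> unknown command
print(forgiving_shell("logut"))  # -> logout

The second function tolerates incomplete or slightly wrong input, which is trivial for a person but surprisingly hard to build into a machine in any general way.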