Between-Subjects
One-Way ANOVA
Description

The one-way Analysis of Variance (ANOVA) is used with one categorical independent variable and one continuous dependent variable. The independent variable can consist of any number of groups (levels).

For example, an experimenter hypothesizes that learning in groups of three will be more effective than learning in pairs or individually. Students are randomly assigned to three groups, and all students study a section of text. Those in group one study the text individually (the control group), those in group two study in pairs, and those in group three study in groups of three. After studying for a set period of time, all students complete a test on the materials they studied. First, note that this is a between-subjects design, since there are different subjects in each experimental condition. Second, notice that, instead of two groups (i.e., levels) of the independent variable, we now have three. The t-test, which is often used in similar experiments with two groups, is appropriate only for situations where there are exactly two levels of one independent variable. When there is a categorical independent variable and a continuous dependent variable, and there are more than two levels of the independent variable and/or more than one independent variable (a case that would require a multi-way, as opposed to one-way, ANOVA), the appropriate analysis is the workhorse of experimental psychology research: the analysis of variance.

In the case where there are more than two levels of the independent variable, the analysis goes through two steps. First, we carry out an overall F test to determine whether there is any significant difference among the means. If this F score is statistically significant, we carry out a second step in which we compare the means two at a time in order to determine specifically where the significant differences lie. Let's say that we have run the experiment on group learning and we recognize that the appropriate analysis is the between-subjects one-way analysis of variance. We use a statistical program and analyze the data with group as the independent variable and test score as the dependent variable.
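Such an analysis can be sketched with SciPy's `f_oneway` function. The scores below are invented for illustration (they are not the study's data), though they were chosen so the three group means come out to 72.30, 86.60, and 86.90, matching the results discussed below:

```python
from scipy.stats import f_oneway

# Hypothetical test scores -- invented for illustration, not the study's real data
individual = [62, 70, 75, 68, 80, 71, 77, 66, 74, 80]  # group 1 (control)
dyad       = [85, 90, 82, 88, 91, 84, 79, 93, 87, 87]  # group 2
triad      = [88, 84, 90, 86, 92, 83, 89, 91, 85, 81]  # group 3

# The overall (omnibus) F test for a between-subjects one-way ANOVA
f_stat, p_value = f_oneway(individual, dyad, triad)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

If `p_value` falls below .05, we proceed to the second step, the pairwise post hoc comparisons.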

Our results might look something like the following:

Source            D.F.    Sum of Squares    Mean Squares    F Ratio    F Prob.
Between Groups      2         1392.47          696.23        11.87      .0002
Within Groups      27         1583.40           58.64
Total              29         2975.87

The "Between Groups" row represents what is often called "explained variance" or "systematic variance": variance that is due to the independent variable, the difference among the three groups. For example, the difference between a person's score in group one and a person's score in group two would represent explained variance. The "Within Groups" variance represents what is often called "error variance": variance within the groups that is not due to the independent variable. For example, the difference between one person in group one and another person in group one would represent error variance. Intuitively, it's important to understand that, at its heart, the analysis of variance and the F score it yields is a ratio of explained variance to error variance. The actual F score (ratio) is in the next-to-last column, and the probability of an F of this magnitude is in the final column. As you can see, this probability (.0002) is well below the .05 cutoff, so we can conclude that the groups are statistically significantly different from one another. But two very important questions remain. First, which means are significantly different from which other means, and second, what were the actual scores of the groups?
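This ratio can be checked directly from the table's own numbers: each mean square is its sum of squares divided by its degrees of freedom, and F is the ratio of the two mean squares.

```python
# Values taken from the ANOVA table above
ss_between, df_between = 1392.47, 2
ss_within, df_within = 1583.40, 27

ms_between = ss_between / df_between   # explained (systematic) variance per df
ms_within = ss_within / df_within      # error variance per df
f_ratio = ms_between / ms_within       # F = explained variance / error variance

print(f"MS between = {ms_between:.2f}, MS within = {ms_within:.2f}, F = {f_ratio:.2f}")
```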

To answer the pairwise comparison question, we run a series of Tukey's post hoc tests, which are like a series of t-tests. The post hoc tests are more stringent than regular t-tests, however, because the more tests you perform, the more likely it is that you will find a significant difference just by chance. Your post hoc tests, which statistical programs often present in a table, might look something like this:

Mean     Group    Grp 1    Grp 2    Grp 3
72.30    Grp 1
86.60    Grp 2      *
86.90    Grp 3      *

This table represents a matrix with the groups listed along each axis. It's important, at this point, to note the way the groups were originally coded (1 = individual; 2 = dyad; 3 = triad). First, note the means. Clearly, the mean for those in group 1 was substantially lower than the means of the other two groups, which were practically identical. When we look at the Tukey's post hoc table, we see that the post hoc tests are consistent with what we observed in the means. Note that the stars are in the boxes that correspond to groups (1 vs. 2) and (1 vs. 3). This means that these mean differences were statistically significant. We can sum these results up by saying something like "Those who studied individually scored significantly lower than those who studied in dyads or triads, while the latter two groups did not differ significantly from one another." In other words, this experiment indicates that studying in a group is more effective than studying individually, but the size of the group (two vs. three members) is not important. The experimenter's original hypothesis that learning in triads is more effective than learning in dyads or individually was not supported. In the case of this experiment, this seems obvious from the means alone, but in many "real world" studies it is not, and the estimates of statistical significance become very important.
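Recent versions of SciPy (1.7 and later) provide `scipy.stats.tukey_hsd` for exactly these pairwise comparisons. The scores below are hypothetical, chosen only so that the group means match the 72.30, 86.60, and 86.90 reported above:

```python
from scipy.stats import tukey_hsd

# Hypothetical scores; group means match the post hoc table (72.30, 86.60, 86.90)
individual = [62, 70, 75, 68, 80, 71, 77, 66, 74, 80]  # group 1
dyad       = [85, 90, 82, 88, 91, 84, 79, 93, 87, 87]  # group 2
triad      = [88, 84, 90, 86, 92, 83, 89, 91, 85, 81]  # group 3

res = tukey_hsd(individual, dyad, triad)
# res.pvalue is a 3x3 matrix of pairwise p-values:
# res.pvalue[0][1] compares group 1 with group 2, and so on
print(res.pvalue)
```

With data like these, the comparisons (1 vs. 2) and (1 vs. 3) come out significant while (2 vs. 3) does not, matching the pattern of stars in the table.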

The analysis of variance is a simple test to do with a computer, but it can get pretty complicated when calculated by hand, even with a small sample size. However, calculating a one-way analysis of variance and the subsequent post hoc tests by hand will give you an appreciation for what the computer is doing and help you better understand the underlying logic.
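That hand calculation can be sketched as follows, using hypothetical scores whose group means match the example above (the within-group spread is invented, so the resulting F will not equal the 11.87 in the table):

```python
# One-way ANOVA "by hand": the same arithmetic you would do on paper.
# Scores are hypothetical; only the group means (72.3, 86.6, 86.9) match the example.
groups = [
    [62, 70, 75, 68, 80, 71, 77, 66, 74, 80],  # individual
    [85, 90, 82, 88, 91, 84, 79, 93, 87, 87],  # dyad
    [88, 84, 90, 86, 92, 83, 89, 91, 85, 81],  # triad
]

scores = [x for g in groups for x in g]
grand_mean = sum(scores) / len(scores)
group_means = [sum(g) / len(g) for g in groups]

# Between-groups SS: squared distance of each group mean from the grand mean,
# weighted by group size (the "explained" variance)
ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, group_means))
# Within-groups SS: squared distance of each score from its own group mean
# (the "error" variance)
ss_within = sum((x - m) ** 2 for g, m in zip(groups, group_means) for x in g)

df_between = len(groups) - 1            # k - 1 = 2
df_within = len(scores) - len(groups)   # N - k = 27
f_ratio = (ss_between / df_between) / (ss_within / df_within)
print(f"SS between = {ss_between:.2f}, SS within = {ss_within:.2f}, F = {f_ratio:.2f}")
```

Because the between-groups sum of squares depends only on the group means and sizes, this calculation reproduces the table's 1392.47 exactly; the within-groups sum of squares, which depends on the individual (here invented) scores, does not.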