Experimental Design Revisited

 Main concepts

Whew! You've made it to the last unit. Even better: no new concepts here. We're revisiting Experimental Design and touching on some of the details that we glossed over before. Our feeling is that these details are more easily comprehended once you've got some real experience under your belt. So enjoy!

• In a controlled experiment, if the treatment groups are “alike” in every respect except the treatment itself, then any differences in the responses after treatment can be attributed to the treatment.

• We have four principles in the design of our experiment: Control, Blocking, Randomization, and Replication.

• Since we want treatment groups to be as similar as possible, we CONTROL any potentially influential variables that we can. For example, if we are interested in the effect of caffeine on pulse rates, and we plan to give cola and caffeine-free cola to the subjects, we need to control the cola temperature, the amount given, the time of day, and other factors that might interfere with our results.

• If we fear that variables we cannot control (or cannot practically control), such as gender or weight, might influence the results of our experiment, we BLOCK. We place our subjects into blocks so that those within a block are similar with respect to these variables. For example, subjects within a block might be of the same gender and of similar weight, e.g. blocks of light-weight females, mid-weight females, heavy-weight females, light-weight males, mid-weight males, and heavy-weight males, as sketched below.
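Here is a minimal Python sketch of that grouping; the subject names, weights, and weight-class cutoffs below are invented purely for illustration.

```python
# Hypothetical subjects: every name, gender, and weight here is made up.
subjects = [
    {"name": "Ana",   "gender": "F", "weight_kg": 52},
    {"name": "Bea",   "gender": "F", "weight_kg": 70},
    {"name": "Cora",  "gender": "F", "weight_kg": 95},
    {"name": "Dan",   "gender": "M", "weight_kg": 58},
    {"name": "Eli",   "gender": "M", "weight_kg": 82},
    {"name": "Frank", "gender": "M", "weight_kg": 104},
]

def weight_class(weight_kg):
    """Assumed cutoffs for light / mid / heavy weight; purely illustrative."""
    if weight_kg < 60:
        return "light"
    elif weight_kg < 85:
        return "mid"
    return "heavy"

# Form blocks: everyone inside a block shares gender and weight class.
blocks = {}
for s in subjects:
    key = (s["gender"], weight_class(s["weight_kg"]))
    blocks.setdefault(key, []).append(s["name"])

for key, members in sorted(blocks.items()):
    print(key, members)
```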

• A paired design is a special case of blocking in which each block is (in most cases) a single subject. For example, a study that measures blood pressure before and after taking a drug treats each subject as his or her own block and bases the analysis on the difference between each subject's before and after scores, as sketched below. This is a paired design. The lead study in the Unit 1 Demonstration is an example of a paired design in which the blocks consist of two different children. Each block (pair) has a child in the Control group (no lead exposure) and a child in the Exposed group (lead exposure).
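A minimal sketch of that before-and-after analysis, using invented blood-pressure readings:

```python
# Hypothetical systolic blood pressure readings (mm Hg) for five subjects,
# measured before and after taking the drug.  All numbers are made up.
before = [138, 145, 130, 150, 142]
after  = [132, 141, 131, 143, 138]

# A paired analysis works with each subject's own difference, not the raw scores.
differences = [b - a for b, a in zip(before, after)]
mean_difference = sum(differences) / len(differences)

print("Differences (before - after):", differences)
print("Mean difference:", mean_difference)
```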

• Since there will always be other variables that may influence our results, we RANDOMIZE within the blocks to even out the variability due to these other outside factors; a sketch of this random allocation appears below. This random allocation of treatments (or random assignment, as some books call it) should not be confused with random selection of subjects from a population. Virtually all experiments (and certainly those whose subjects are people) are conducted on a non-random sample of subjects or experimental units; often it is just those people who volunteer or those lab rats that were most easily obtained.
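A minimal sketch of random allocation within blocks, continuing the hypothetical gender-and-weight blocks above; the subject labels and treatments are again invented.

```python
import random

# Hypothetical blocks: subjects within a block are alike on the blocking variables.
blocks = {
    ("F", "light"): ["Ana", "Bea"],
    ("F", "heavy"): ["Cora", "Dia"],
    ("M", "light"): ["Dan", "Eli"],
    ("M", "heavy"): ["Frank", "Gus"],
}

treatments = ["cola (caffeinated)", "caffeine-free cola"]

# Random ALLOCATION of treatments: within each block, shuffle the subjects
# and deal out the treatments.  This is not random SELECTION -- the subjects
# themselves may well be a convenience sample of volunteers.
assignment = {}
for members in blocks.values():
    shuffled = members[:]
    random.shuffle(shuffled)
    for subject, treatment in zip(shuffled, treatments):
        assignment[subject] = treatment

for subject, treatment in assignment.items():
    print(subject, "->", treatment)
```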

• RANDOMIZATION in an experiment is used for two different purposes. (1) Researchers can select subjects at random, although this is seldom done in practice. (2) Treatments can be assigned to subjects at random (or, equivalently, subjects can be assigned to treatments at random). Random allocation of treatments is what lets us claim a cause-and-effect relationship between the treatment and response variables; without it, we would have only an association. Although randomization in the selection of subjects and randomization in the assignment of treatments are not the same thing, they both rely on chance (impartial) selection rather than human (subjective) selection. To quote author Mario Triola, “Data carelessly collected may be so completely useless that no amount of statistical torturing can salvage them.”

• RANDOM SAMPLING from a population is done in surveys to estimate parameters of interest, such as the proportion of Americans who approve of the President. Random sampling makes extrapolation to a larger population possible. Be aware that in a survey there is no random allocation of treatments because a survey is not an experiment.
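To illustrate random sampling (as opposed to random allocation), here is a minimal sketch that draws a simple random sample from a hypothetical population; the population size, true approval rate, and sample size are all invented.

```python
import random

random.seed(2)  # fixed seed so the example is reproducible

# Hypothetical population of 10,000 people: 1 = approves, 0 = does not.
# The "true" approval rate of 0.55 is made up for illustration.
population = [1] * 5500 + [0] * 4500

# A simple random sample of 400 people from that population.
sample = random.sample(population, 400)
estimate = sum(sample) / len(sample)

print("Sample estimate of the approval proportion:", estimate)
print("Parameter being estimated (known only in this simulation): 0.55")
```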

• We use REPLICATION in experiments, meaning the same treatment on multiple subjects, in order to assess variability -- the magnitude of variability, the sources of variability, and the shape of the variability (e.g. normal?). With only a single observation, you have no means for assessing how different the response variable might be if you were to make a second observation. So, the notion of replication connects to our quest for a large sample.
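The following sketch simulates what replication buys us: a single observation gives us nothing to say about variability, while repeated observations under the same treatment let us describe its magnitude and shape. The mean pulse rate of 72 bpm and spread of 6 bpm are invented for illustration.

```python
import random
import statistics

random.seed(1)  # fixed seed so the example is reproducible

# Simulate pulse rates for subjects who all receive the SAME treatment.
def simulated_pulse():
    return random.gauss(72, 6)  # assumed mean 72 bpm, SD 6 bpm

single = simulated_pulse()
replicates = [simulated_pulse() for _ in range(30)]

# One observation tells us nothing about variability
# (statistics.stdev needs at least two values).
print("Single observation:", round(single, 1))

# Replication lets us assess the magnitude and shape of the variability.
print("Mean of 30 replicates:", round(statistics.mean(replicates), 1))
print("SD of 30 replicates:  ", round(statistics.stdev(replicates), 1))
```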

• You might also hear the word "replication" in the context of assessing the validity of scientific studies. Scientists will often claim that so-and-so's experiment was not "replicable" -- which means they tried it and it didn't work. Or they'll say that they managed to replicate so-and-so's results, which further confirms the theory. While interesting from a philosophy-of-science point of view, it's not really what the AP curriculum is about.