Hypothesis Tests

 Main Concepts

• If the observed value of the statistic is too many standard errors from the value the null hypothesis predicts, we don't believe the null hypothesis.

• p-values are commonly misinterpreted. Please remember that they are conditional probabilities: they are conditioned on the null hypothesis being true. So if the p-value were, say, 0.04, it would be correct to say "The probability of getting a test statistic at least as extreme as the one we observed is, if the null hypothesis is true, 0.04." It is very wrong to say "The probability the null hypothesis is true is 0.04." (See the sketch below.)

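As a concrete illustration, here is a minimal sketch of a one-sample z test in Python (Python, scipy, and all the numbers here are assumptions chosen for this example; the point is only how to read the resulting p-value):

```python
from scipy import stats

# One-sample z test of H0: mu = 100 vs Ha: mu != 100 (numbers made up).
xbar, mu0, sigma, n = 103.0, 100.0, 15.0, 100

z = (xbar - mu0) / (sigma / n ** 0.5)   # test statistic
p_value = 2 * stats.norm.sf(abs(z))     # two-sided p-value

# Correct reading: IF the null hypothesis is true, the probability of a
# test statistic at least this extreme is p_value. It is NOT the
# probability that the null hypothesis is true.
print(f"z = {z:.2f}, p-value = {p_value:.3f}")   # z = 2.00, p ~ 0.046
```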
• Confidence Intervals and Two-sided Hypothesis Tests are equivalent. This means that if you decide to reject the null hypothesis whenever the 95% confidence interval does not contain the null value, then you'll reach the same decision as if you had performed a significance test with alpha = 100% - 95% = 5%. And vice versa: if you reject the null hypothesis in a test with significance level alpha, then the (100% - alpha) confidence interval will not contain the null hypothesis value. (The sketch below illustrates this.)

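The equivalence can be checked directly by simulation. In the sketch below (Python with numpy/scipy assumed, and sigma treated as known so a z procedure applies), the confidence interval and the test are computed from the same data, and the two Boolean answers always agree:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu0, sigma, n, alpha = 100.0, 15.0, 50, 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)        # 1.96 for alpha = 5%

for _ in range(5):
    x = rng.normal(102.0, sigma, n)           # true mean is 102, not mu0
    xbar, se = x.mean(), sigma / np.sqrt(n)
    lo, hi = xbar - z_crit * se, xbar + z_crit * se
    p = 2 * stats.norm.sf(abs((xbar - mu0) / se))
    misses = not (lo <= mu0 <= hi)            # does the CI exclude mu0?
    rejects = p < alpha                       # does the test reject H0?
    print(f"CI misses mu0: {misses}   test rejects H0: {rejects}")
```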
• It is always important to check the conditions before applying a hypothesis test. Hypothesis testing can't save a poorly designed experiment.

• Beware of "fishing expeditions". If you have a set of statements that you want to test, and for all of them the null hypothesis is true, then testing each at the 5% significance level means you would still expect about 5% of these true null hypotheses to be rejected. Some statistically naive researchers will perform test after test, in search of a rejected hypothesis. (The simulation below shows the problem.)

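A quick simulation makes the danger vivid. In the sketch below (Python with numpy/scipy assumed), every null hypothesis is true by construction, yet roughly 5% of the tests reject anyway:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_tests, n, alpha = 200, 30, 0.05

rejections = 0
for _ in range(n_tests):
    x = rng.normal(0.0, 1.0, n)          # H0: mu = 0 is TRUE here
    t_stat, p = stats.ttest_1samp(x, 0.0)
    rejections += p < alpha
print(f"{rejections} of {n_tests} true null hypotheses rejected "
      f"(expect about {alpha * n_tests:.0f})")
```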
• It is cheating to state the null and alternative hypotheses after looking at the results of the test.

• The null and alternative hypotheses must be stated in terms of population parameters. But students need to practice translating "real" situations into formalized statements in terms of parameters.

• Every test statistic (in AP Stats) will take this form: (sample statistic - null hypothesis value of the parameter) / standard error of the statistic. (See the example below.)

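For example, the one-proportion z statistic fits that template exactly (a sketch in Python; the counts are invented):

```python
# Template: (sample statistic - null value of parameter) / standard error.
# One-proportion z test of H0: p = 0.50 with 224 successes in 400 trials.
p_hat, p0, n = 224 / 400, 0.50, 400

se = (p0 * (1 - p0) / n) ** 0.5     # standard error computed under H0
z = (p_hat - p0) / se
print(f"z = {z:.2f}")               # z = 2.40
```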
• You'll hear people talk about a "significant result". They usually mean "statistically significant" and this means that the null hypothesis was rejected.

• Statistical significance is not the same as practical significance: with a large enough sample, even a tiny, unimportant departure from the null hypothesis will be statistically significant. (See the sketch below.)

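The sketch below (Python with numpy/scipy assumed) makes the distinction concrete: a mean shift of about 0.1, negligible for most practical purposes, still produces a tiny p-value once the sample is huge:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# True mean is 100.1 -- a trivial departure from H0: mu = 100 -- but with
# a million observations the test rejects decisively anyway.
x = rng.normal(100.1, 15.0, 1_000_000)
t_stat, p = stats.ttest_1samp(x, 100.0)
print(f"p-value = {p:.2g} (statistically significant; "
      f"the actual shift, about 0.1, may be practically meaningless)")
```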
• Power of a test is affected by several factors. Roughly speaking, the power measures the test's ability to discern differences between the null hypothesis and the true value of the parameter. Increasing the sample size will increase the power. Increasing the significance level (alpha) increases the power, since it enlarges the rejection region. Increasing the population standard deviation decreases the power. Read the paper "On Power" in the Statistics Teacher's Corner at AP Central. (Requires free registration on the web site, if you don't have it already.) A simulation sketch follows.
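Power is easy to estimate by simulation: draw repeated samples from the true distribution and record how often the test rejects. Here is a minimal sketch (Python with numpy/scipy assumed; the parameter values are invented) showing power rising with sample size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mu0, mu_true, sigma, alpha, reps = 100.0, 105.0, 15.0, 0.05, 2000

# Estimated power = fraction of samples from the TRUE distribution in
# which the test rejects H0: mu = mu0. Watch it rise with sample size.
for n in (10, 30, 100):
    rejections = 0
    for _ in range(reps):
        x = rng.normal(mu_true, sigma, n)
        t_stat, p = stats.ttest_1samp(x, mu0)
        rejections += p < alpha
    print(f"n = {n:3d}: estimated power = {rejections / reps:.2f}")
```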