For the following situations you should be able to RECOGNIZE which test should be used, FIND THE FORMULA for the test statistic in Zar or in your notes, and WRITE DOWN the appropriate tabled value for comparison at the 5% level of significance.

 

a.    10 pairs of twin warthogs are randomly selected, and one warthog of each pair is randomly assigned to be treated with wart remover; the other is treated with a placebo (an inert substance).  We wish to test whether the two treatments remove a different mean number of warts per warthog.  Assume initially that our response has a normal distribution.

 

      This is a paired design, so we should use a paired t-test:

      t = dbar / s_dbar, where dbar is the mean of the 10 within-pair differences and s_dbar = s_d / sqrt(n); df = n - 1 = 9.

      Reject if |t| >= t0.05(2),9 = 2.262
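      As a numerical check, here is a minimal sketch of the paired t-test in Python using scipy.stats.ttest_rel; the wart counts below are hypothetical illustrative numbers, not data from the problem.

      import numpy as np
      from scipy import stats

      # Hypothetical wart counts for the 10 twin pairs (illustrative only)
      wart_remover = np.array([12, 9, 14, 10, 8, 11, 13, 9, 10, 12])
      placebo      = np.array([15, 11, 15, 13, 9, 14, 16, 10, 12, 15])

      # Paired t-test on the within-pair differences, df = n - 1 = 9
      t, p = stats.ttest_rel(wart_remover, placebo)
      print(t, p)   # reject H0 at the 5% level if |t| >= 2.262 (equivalently, p < 0.05)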

 

b.    Same as [a] above, except now we have two completely independent random samples of warthogs, with 10 in each group.

      In this case a two-sample t-test is appropriate:

      t = (Xbar1 - Xbar2) / s_(Xbar1-Xbar2), where s_(Xbar1-Xbar2) = sqrt(s_p^2/n1 + s_p^2/n2) and s_p^2 is the pooled variance; df = n1 + n2 - 2 = 18.

      Reject if |t| >= t0.05(2),18 = 2.101
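      A minimal sketch of the two-sample (pooled-variance) t-test with scipy.stats.ttest_ind; the two samples below are hypothetical.

      import numpy as np
      from scipy import stats

      # Hypothetical wart counts for two independent groups of 10 warthogs each
      group_remover = np.array([12, 9, 14, 10, 8, 11, 13, 9, 10, 12])
      group_placebo = np.array([15, 11, 15, 13, 9, 14, 16, 10, 12, 15])

      # equal_var=True gives the pooled-variance t-test, df = n1 + n2 - 2 = 18
      t, p = stats.ttest_ind(group_remover, group_placebo, equal_var=True)
      print(t, p)   # reject H0 at the 5% level if |t| >= 2.101 (equivalently, p < 0.05)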

 

c.    Same as [a] above, except we no longer assume the data are normally distributed.

     

      We need a non-parametric test, in this case a paired non-parametric test: the Wilcoxon signed-rank test.

      T- = sum(ranks with minus sign); T+ = sum(ranks with plus sign)

      Tmin = min(T-, T+)

      Reject if Tmin <= T0.05(2),10 = 8
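      A minimal sketch of the signed-rank test with scipy.stats.wilcoxon, using the same kind of hypothetical paired counts as in [a]; for a two-sided test scipy's reported statistic is the smaller of T- and T+, i.e. the Tmin above.

      import numpy as np
      from scipy import stats

      # Hypothetical paired wart counts (illustrative only)
      wart_remover = np.array([12, 9, 14, 10, 8, 11, 13, 9, 10, 12])
      placebo      = np.array([15, 11, 15, 13, 9, 14, 16, 10, 12, 15])

      # Two-sided Wilcoxon signed-rank test on the paired differences
      tmin, p = stats.wilcoxon(wart_remover, placebo, alternative='two-sided')
      print(tmin, p)   # reject H0 at the 5% level if Tmin <= 8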

 

d.    Same as [b] above, but the data are no longer assumed to be normally distributed.

      We need a non-parametric test, in this case a two-sample non-parametric test: the Mann-Whitney / Wilcoxon rank-sum test.

      R1 = sum(ranks for X1); R2 = sum(ranks for X2)

 

      U1 = n1*n2 + n1(n1+1)/2 - R1; U2 = n1*n2 + n2(n2+1)/2 - R2

      Umax = max(U1, U2); Reject if Umax >= U0.05(2),10,10 = 77
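      A minimal sketch with scipy.stats.mannwhitneyu on hypothetical samples.  Note that scipy defines U slightly differently than Zar, but because U1 + U2 = n1*n2 under either convention, Umax = max(U, n1*n2 - U) is the same either way.

      import numpy as np
      from scipy import stats

      # Hypothetical wart counts for two independent groups of 10 warthogs each
      group_remover = np.array([12, 9, 14, 10, 8, 11, 13, 9, 10, 12])
      group_placebo = np.array([15, 11, 15, 13, 9, 14, 16, 10, 12, 15])

      u, p = stats.mannwhitneyu(group_remover, group_placebo, alternative='two-sided')
      umax = max(u, len(group_remover) * len(group_placebo) - u)   # max(U1, U2)
      print(umax, p)   # reject H0 at the 5% level if Umax >= 77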

 

e.    Same as [b] above, but now we want to test whether the *variability* of response is the same in each of the two groups.

     

      We want to compare the variances of two independent samples from normally distributed populations, so we use the variance-ratio F-test:

      F = s1^2 / s2^2, with the larger sample variance in the numerator; df = (n1 - 1, n2 - 1) = (9, 9).

      Reject if F >= F0.05(2),9,9 = 4.03
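      scipy does not provide this two-sample variance-ratio F-test directly, so this minimal sketch (again with hypothetical samples) computes F by hand and pulls the two-tailed 5% critical value from the F distribution.

      import numpy as np
      from scipy import stats

      # Hypothetical wart counts for two independent groups of 10 warthogs each
      group_remover = np.array([12, 9, 14, 10, 8, 11, 13, 9, 10, 12])
      group_placebo = np.array([15, 11, 15, 13, 9, 14, 16, 10, 12, 15])

      s2_1 = np.var(group_remover, ddof=1)       # sample variances
      s2_2 = np.var(group_placebo, ddof=1)
      F = max(s2_1, s2_2) / min(s2_1, s2_2)      # larger variance in the numerator

      # Two-tailed 5% critical value with 9 and 9 df: F0.05(2),9,9 = f.ppf(0.975, 9, 9), about 4.03
      F_crit = stats.f.ppf(0.975, 9, 9)
      print(F, F_crit)   # reject H0 if F >= F_crit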