An approximate answer to the right question is worth a great deal more than a precise answer to the wrong question.
--The first golden rule of mathematics, sometimes attributed to John Tukey
With many calculations, one can win; with few one cannot. How much less chance of victory has one who makes none at all!
--Sun Tzu 'Art of War'
The T-test may be used to compare the means of a criterion variable for two independent samples, for two dependent samples (ex., before-after studies, matched-pairs studies), or between a sample mean and a known mean (one-sample t-test). In regression analysis, a T-test can be used to test any single linear constraint. Nonlinear constraints are usually tested with a Wald (W), likelihood ratio (LR), or Lagrange multiplier (LM) test, but sometimes an "asymptotic" T-test is encountered: the nonlinear constraint is written with its right-hand side equal to zero, the left-hand side is estimated and then divided by the square root of an estimate of its asymptotic variance to produce the asymptotic T statistic.
For example, here is the formula for testing a mean difference in the case of equal sample sizes, n, in both groups:
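$$
t \;=\; \frac{\bar{x}_1 - \bar{x}_2}{\sqrt{\dfrac{s_1^2 + s_2^2}{n}}},
\qquad df = 2n - 2,
$$

where $\bar{x}_1$ and $\bar{x}_2$ are the two sample means and $s_1^2$ and $s_2^2$ are the two sample variances (the usual pooled two-sample form for equal group sizes).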
Three Different Types of T-test:
(1) One-sample T-tests test whether the mean of one variable differs from a constant (ex., does the mean grade of 72 for a sample of students differ significantly from the passing grade of 70?). When p<.05 the researcher concludes the group mean is significantly different from the constant.
(2) Independent sample T-tests are used to compare the means of two independently sampled groups (ex., do those working in high noise differ on a performance variable from those working in low noise, where individuals are randomly assigned to the high-noise or low-noise groups?). When p<.05 the researcher concludes the two groups are significantly different in their means. This test is often used to compare the means of two groups in the same sample (ex., men vs. women) even though individuals are not (in the case of gender, cannot be) assigned randomly to the two groups. Random assignment would have controlled for unmeasured variables; without it, other variables may either mask or enhance any apparent significant difference in means. That is, the independent sample t-test tests the uncontrolled difference in means between two groups. If a significant difference is found, it may be due not just to gender; control variables may be at work. The researcher will wish to introduce control variables, as in any multivariate analysis.
(3) Paired sample T-tests compare means where the two groups are correlated, as in before-after, repeated measures, matched-pairs, or case-control studies (ex., mean candidate evaluations before and after hearing a speech by the candidate). The algorithm applied to the data differs from the independent sample t-test, but interpretation of output is otherwise the same. A sketch of all three tests follows this list.
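As a minimal sketch, all three tests can be run in Python with scipy.stats; the data below are made-up numbers that parallel the examples above (grades against a passing grade of 70, high- vs. low-noise groups, and before/after candidate evaluations):

```python
# Minimal sketch of the three T-tests using scipy.stats.
# All data values are made up for illustration only.
from scipy import stats

grades = [68, 75, 71, 80, 66, 74, 73, 70, 77, 69]      # sample of student grades
high_noise = [52, 48, 55, 50, 47, 53, 49, 51]          # performance under high noise
low_noise = [58, 61, 57, 60, 63, 59, 62, 58]           # performance under low noise
before = [6.1, 5.8, 7.0, 6.4, 5.9, 6.6]                # evaluations before the speech
after = [6.8, 6.0, 7.4, 6.9, 6.3, 7.1]                 # same respondents, after the speech

# (1) One-sample: does the mean grade differ from the passing grade of 70?
t1, p1 = stats.ttest_1samp(grades, popmean=70)

# (2) Independent samples: do the high- and low-noise groups differ in mean performance?
t2, p2 = stats.ttest_ind(high_noise, low_noise, equal_var=True)

# (3) Paired samples: did mean evaluations change from before to after the speech?
t3, p3 = stats.ttest_rel(before, after)

for label, t, p in [("one-sample", t1, p1), ("independent", t2, p2), ("paired", t3, p3)]:
    print(f"{label}: t = {t:.3f}, p = {p:.4f}")   # conclude a difference when p < .05
```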
Associated Assumptions:
(1) Approximately Normal Distribution of the measure in the two groups is assumed. There are tests for normality. The t-test may be unreliable when the two samples come from widely different shaped distributions (see Gardner, 1975). Moore (1995) suggests data for t-tests should be normally distributed for sample sizes less than 15, and should be approximately normal and without outliers for samples between 15 and 40, but may be markedly skewed when sample size is greater than 40.
(2) Roughly Similar Variances: There is a test for homogeneity of variances, also called a test of homoscedasticity. In SPSS, homogeneity of variances is tested by "Levene's Test for Equality of Variances," which reports an F value and its corresponding significance. There are also other tests for homogeneity of variances. The T-test may be unreliable when the two samples are unequal in size and also have unequal variances (see Gardner, 1975). Both the normality and equal-variance checks are sketched after this list.
(3) Dependent/Independent Samples. The samples may be independent or dependent (ex., before-after, matched pairs). However, the calculation of T differs accordingly. In the one-sample test, it is assumed that the observations are independent.
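A minimal sketch of checking assumptions (1) and (2) before an independent sample T-test, again in Python with scipy.stats and made-up data, using the Shapiro-Wilk test for normality and Levene's test for equality of variances:

```python
# Minimal sketch of checking the normality and equal-variance assumptions
# before an independent sample T-test; the data values are made up.
from scipy import stats

high_noise = [52, 48, 55, 50, 47, 53, 49, 51]
low_noise = [58, 61, 57, 60, 63, 59, 62, 58]

# Shapiro-Wilk test of normality for each group (p < .05 suggests non-normality).
for label, group in [("high noise", high_noise), ("low noise", low_noise)]:
    w, p = stats.shapiro(group)
    print(f"Shapiro-Wilk ({label}): W = {w:.3f}, p = {p:.4f}")

# Levene's test for equality of variances (p < .05 suggests unequal variances).
f_stat, p_levene = stats.levene(high_noise, low_noise)
print(f"Levene: F = {f_stat:.3f}, p = {p_levene:.4f}")

# If the variances look unequal, Welch's t-test (equal_var=False) is a common fallback.
t, p = stats.ttest_ind(high_noise, low_noise, equal_var=(p_levene >= 0.05))
print(f"t-test: t = {t:.3f}, p = {p:.4f}")
```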
One last note: don't confuse a T-test with analyses of a contingency table (Fisher's exact or chi-square test). Use a T-test to compare a continuous variable (e.g., blood pressure or weight). Use a contingency table to compare a categorical variable (e.g., pass vs. fail, viable vs. not viable).
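As a brief sketch of the contingency-table side, with hypothetical pass/fail counts for two groups:

```python
# Minimal sketch: categorical outcomes belong in a contingency table,
# not a T-test; the pass/fail counts below are hypothetical.
from scipy import stats

# Rows: group A, group B; columns: pass, fail.
table = [[30, 10],
         [22, 18]]

chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)     # Fisher's exact test for a 2x2 table

print(f"Chi-square: chi2 = {chi2:.3f}, dof = {dof}, p = {p_chi2:.4f}")
print(f"Fisher's exact: odds ratio = {odds_ratio:.3f}, p = {p_fisher:.4f}")
```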
References:
Gardner, P. L. (1975). Scales and statistics. Review of Educational Research, 45, 43-57. Discusses assumptions of the t-test.
Moore, D. S. (1995). The Basic Practice of Statistics. New York: W. H. Freeman and Co.