Tests for Significance
Once sample data has been gathered through an observational study or experiment, statistical inference allows analysts to assess evidence in favor of some claim about the population from which the sample has been drawn. The methods of inference used to support or reject claims based on sample data are known as tests of significance or tests for statistical significance.
In short, a significance test is the process used to decide whether to reject the null hypothesis in favor of the alternative (research) hypothesis.
The test involves comparing the observed values with the values expected under the null hypothesis. It establishes whether there is a relationship between the variables, or whether pure chance could produce the observed results.
In everyday language, ‘significance’ means that something is extremely important; in statistics, the term has a narrower, technical meaning.
Tests for statistical significance are used to address the question: what is the probability that what we think is a relationship between two variables is really just a chance occurrence?
They tell us the probability that we would be making an error if we conclude that a relationship exists.
We can never be 100% certain that a relationship exists between two variables. There are too many sources of error to control: sampling error, researcher bias, problems with reliability and validity, simple mistakes, and so on.
But using probability theory and the normal curve, we can estimate the probability of being wrong if we conclude that the relationship we found is real. If that probability is small, we say that the observed relationship is a statistically significant finding.
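This "probability of being wrong" can be computed directly from the normal curve. As a minimal sketch (the function names are my own, not from the article), the two-sided probability of seeing a test statistic at least as extreme as an observed z value is:

```python
import math

def normal_cdf(z):
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def two_sided_p_value(z):
    """Probability of a result at least this extreme if no relationship exists."""
    return 2.0 * (1.0 - normal_cdf(abs(z)))

# A test statistic of z = 1.96 corresponds to roughly a 5% chance
# of observing such a result by chance alone.
print(round(two_sided_p_value(1.96), 4))
```

When this probability falls below the chosen error level, the finding is called statistically significant.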
Statistical significance means that there is a good chance that we are right in finding that a relationship exists between two variables. But statistical significance is not the same as practical significance. We can have a statistically significant finding, but the implications of that finding may have no practical application. The researcher must always examine both the statistical and the practical significance of any research finding.
For example, we may find that there is a statistically significant relationship between a citizen’s age and satisfaction with city recreation services. It may be that older citizens are 5% less satisfied than younger citizens with city recreation services. But is 5% a large enough difference to be concerned about?
Often, when differences are small but statistically significant, the cause is a very large sample size; in a smaller sample, the same differences would not be statistically significant.
Steps in Testing for Statistical Significance:
1) State the Research Hypothesis
2) State the Null Hypothesis
3) Select a probability of error level (alpha level)
4) Select and compute the test for statistical significance
5) Interpret the results
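The five steps above can be sketched end to end. This is a minimal worked example using a one-sample z test with hypothetical survey numbers (the data, the claimed mean, and the known sigma are all assumptions for illustration):

```python
import math

# Hypothetical data: mean satisfaction score from 100 citizens,
# tested against a claimed citywide mean of 3.5 (sigma assumed known).
sample_mean, claimed_mean, sigma, n = 3.7, 3.5, 0.8, 100

# Step 1: Research hypothesis - the true mean differs from 3.5.
# Step 2: Null hypothesis - the true mean equals 3.5.
# Step 3: Select a probability of error level.
alpha = 0.05

# Step 4: Compute the test for statistical significance (one-sample z test).
z = (sample_mean - claimed_mean) / (sigma / math.sqrt(n))
p_value = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Step 5: Interpret - reject the null hypothesis if p < alpha.
reject_null = p_value < alpha
```

Here z = 2.5, so the p-value falls below alpha and the null hypothesis is rejected; whether a 0.2-point gap matters in practice is a separate question.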
by Dr. Anil Singhal, MD (Author at Research Methodology)
Posted on April 5th, 2015 at 7:40 pm.