Statistical Significance: The p-value

Many researchers don’t properly understand the concept of “statistical significance”, or the p-value, as it is popularly called. In simple terms, we can understand statistical significance as the probability that the observed relationship (between variables) or the observed difference (between means) in the sample data arose purely by chance, and that no such relationship or difference exists in the population. Alternately, in less technical terms, the statistical significance of a test result tells us the degree to which the result is ‘true’, i.e. representative of the population.

According to Brownlee (1960), the p-value represents a decreasing index of the reliability of a result: the higher the p-value, the less we can believe that the observed relation (or the observed difference in means) between variables in the sample is a reliable indicator of the relation (or difference) between the respective variables in the population. The p-value represents the probability of error involved in accepting our observed result as representative of the population. For example, a p-value of 0.05 indicates that there is a 5% probability that the relation between the variables found in our sample is a fluke. In other words, suppose that in the population there is no relationship between the variables, and we repeat the experiment again and again by drawing new samples from that population. Then we would expect a relation between the variables to appear in approximately 1 of every 20 replications of the experiment. In many areas of research, a p-value of 0.05 is customarily treated as a “borderline acceptable” error level.
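The "1 in 20 replications" claim can be checked by simulation. The sketch below (a minimal illustration, not from the original text; sample size, number of replications, and the use of Pearson correlation are all assumptions chosen for the example) repeatedly draws two independent samples, so that by construction there is no relationship in the population, and counts how often a correlation test nonetheless reports p < 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)  # fixed seed, chosen arbitrarily
n_replications = 2000            # number of repeated "experiments"
sample_size = 30                 # observations per experiment
alpha = 0.05                     # the customary significance threshold

false_positives = 0
for _ in range(n_replications):
    # x and y are drawn independently: no true relationship exists
    x = rng.normal(size=sample_size)
    y = rng.normal(size=sample_size)
    _, p = stats.pearsonr(x, y)
    if p < alpha:
        false_positives += 1

# The fraction of "significant" results should hover near alpha (~0.05),
# i.e. roughly 1 in every 20 replications.
print(f"Fraction significant under the null: {false_positives / n_replications:.3f}")
```

Running this, the reported fraction should come out close to 0.05, matching the expectation that a true null hypothesis is falsely "detected" in about 1 of every 20 replications.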
