- In general, a measure of how confidently an observed event or difference between two or more groups can be attributed to a hypothesized cause. The p-value is the most commonly encountered way of reporting statistical significance. The (frequentist) interpretation of a p-value of 0.05 is that if you repeated an experiment that yielded a particular result 100 times, you would expect that result, or one more extreme, five times by chance alone. More formally, one forms a null hypothesis about the underlying data or relationships. The null hypothesis is typically that something is not present, that there is no effect, or that there is no difference between the experimental group and the controls in an experiment. One then calculates the probability of observing data at least as extreme as those obtained if the null hypothesis were correct, using an appropriate statistical test (the choice of which depends on the shape of the distribution of the sampled variables). If the p-value is small (0.05 is the conventional threshold) the result is said to be 'statistically significant' (i.e. the observed data would be unlikely to arise by chance if the null hypothesis were true). Note that a small p-value is not the probability that the null hypothesis itself is true. Clinical significance and policy significance are entirely different from statistical significance. One can have highly statistically significant estimates of things that are wholly irrelevant clinically, biologically or in terms of public policy. One reason why an estimate may be irrelevant is that an effect can be highly statistically significant but so small in its absolute effect as to be completely uninteresting.
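
  The logic above can be made concrete with a permutation test, which directly enacts the frequentist interpretation: under the null hypothesis of no difference between groups, the group labels are exchangeable, so one can shuffle them many times and count how often a difference at least as extreme as the observed one arises by chance alone. The data below are invented purely for illustration; this is a minimal sketch, not a recommendation of one test over another.

  ```python
  import random
  import statistics

  # Hypothetical outcomes for a control and an experimental group.
  control = [4.8, 5.2, 5.0, 4.9, 5.1, 4.7, 5.3, 5.0]
  treatment = [5.4, 5.6, 5.1, 5.7, 5.3, 5.5, 5.2, 5.6]

  observed = statistics.mean(treatment) - statistics.mean(control)

  # Permutation test: repeatedly shuffle the pooled data (erasing any real
  # group difference, as the null hypothesis asserts) and record how often
  # the shuffled difference is at least as extreme as the observed one.
  pooled = control + treatment
  random.seed(0)
  n_perm = 10_000
  n_extreme = 0
  for _ in range(n_perm):
      random.shuffle(pooled)
      diff = (statistics.mean(pooled[len(control):])
              - statistics.mean(pooled[:len(control)]))
      if abs(diff) >= abs(observed):
          n_extreme += 1

  # The p-value is the fraction of chance-alone replications that look
  # at least as extreme as what was actually observed.
  p_value = n_extreme / n_perm
  print(f"observed difference: {observed:.3f}, p-value: {p_value:.4f}")
  ```

  Note that the observed difference here (about 0.4 units) might be statistically significant yet still clinically trivial, which is exactly the distinction drawn above between statistical and clinical significance.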