The Bayesian method is a way of revising beliefs about probabilities, or about the value of a parameter, as new information is obtained. The old information might, for example, be based on a systematic review, a consensus panel of experts or a straightforwardly subjective judgment. The probability based on this old information is termed the prior probability (or the prior distribution, within which the true value of a parameter is believed to lie). The new information might be obtained from a recently completed trial. The revised probability (or distribution) is called the posterior probability. As Rawlins (2008) nicely relates, bookmakers are instinctive Bayesians: a horse's form book corresponds to the prior; the outcome of the last outing on a race course is the trial; the odds for today's race are the posterior odds.
Suppose there is a population in which a characteristic (like having cancer) is true for a given fraction and untrue for the rest. It is obviously useful to be able to calculate the conditional probability that a particular observation comes from a person truly having the characteristic. This is where Bayesian statistics can help.
It is generally known that 1 per cent of women aged 40 who participate in routine screening have breast cancer (this is the prior). Eighty per cent of women with breast cancer will get positive mammographies (true positives); 9.6 per cent of women without breast cancer will also get positive mammographies (false positives) (these all being data obtained from a clinical trial). Now suppose a woman in this age group has a positive mammography in a routine screening. What is the probability that she actually has breast cancer? It is neither the prior of 1 per cent nor the 80 per cent true positive rate. The correct answer is 7.8 per cent. To see why, these are the steps: out of 10 000 women, 100 have breast cancer; 80 of those 100 have positive mammographies. Of the same 10 000 women, 9900 will not have breast cancer and, of those 9900, about 950 will also get positive tests - but falsely. This makes the total number of women with positive (true and false) tests 950 + 80, or 1030. Of those 1030 women with positive tests, 80 will have cancer. Expressed as a proportion, this is 80/1030, or 0.07767, or 7.8 per cent.
This example of diagnosis illustrates how Bayesian methods allow a prior belief (in this case about the probability of cancer) to be revised in the light of new information from the test results (probability of a test result conditional on having cancer) to form a posterior belief (the probability of cancer conditional on the test results).
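The arithmetic of the mammography example can be sketched directly from Bayes' theorem, using the figures given above (the 1 per cent prior, 80 per cent true positive rate and 9.6 per cent false positive rate):

```python
# Bayes' theorem applied to the mammography example in the text.
prior = 0.01        # P(cancer): 1 per cent of women aged 40
sensitivity = 0.80  # P(positive | cancer): true positive rate
false_pos = 0.096   # P(positive | no cancer): false positive rate

# P(positive) by the law of total probability: the 1030/10 000 in the text
p_positive = sensitivity * prior + false_pos * (1 - prior)

# Posterior: P(cancer | positive), the 80/1030 in the text
posterior = sensitivity * prior / p_positive
print(round(posterior, 3))  # 0.078, i.e. 7.8 per cent
```

The counting argument in the text (80 true positives out of 1030 positives in all) and the formula above are the same calculation, scaled by a population of 10 000.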
The issues raised in considering the relative merits of Bayesian and frequentist approaches to probability arise acutely because of the all-pervading nature of uncertainty in medicine, public health, population health and health economics, where evidence may accumulate over time from a variety of sources. For the purposes of cost-effectiveness analysis there is often uncertainty about the detailed natural history of a disease (for example, the probability that a breast cancer detected in situ by mammography will progress to invasive cancer is not known and, if it does progress, neither is the time between preclinical detectability and symptomatic disease). The character of outcomes beyond the period of a clinical trial is often unknown, as are the distribution of beneficial and harmful outcomes and costs across types of patients, and whether the measured outcomes have been defined appropriately or are correlated with outcomes that are appropriate (construct validity).
In addition, the use of Bayesian methods enables probability statements to be made in the form of the probability of more general hypotheses being true given the evidence, especially hypotheses that are of direct relevance to policy decision-making. For example, in cost-effectiveness analysis it allows statements to be made about the probability of an intervention being cost-effective given the accumulated evidence. However, the often subjective nature of forming Bayesian priors, which may require judgment or a particular interpretation of existing evidence, means that it is important to consider the sensitivity of the posterior results to alternative specifications of the priors.
The approach is named after Thomas Bayes (1702-61), an English Presbyterian minister. An amateur mathematician, he was elected a Fellow of the Royal Society in 1742 even though he had no published works on mathematics and, moreover, published none in his lifetime under his own name.