I was discussing numeracy with a colleague, and Harford's story and conclusion are sobering:
An article published in the Annals of Internal Medicine in March put these questions to a panel of more than 400 doctors with relevant clinical experience. Eighty-two per cent thought they’d been shown evidence that test “A” saved lives – they hadn’t – and of those, 83 per cent thought the benefit was large or very large. Only 60 per cent thought that test “B” saved lives, and fewer than one-third thought the benefit was large or very large – which is intriguing, because of the few people on course to die from cancer, the test saves 20 per cent of them. In short, the doctors simply did not understand the statistics on cancer screening.
The practical implications of this are obvious and worrying. It seems that doctors may need a good deal of help interpreting the evidence they are likely to be exposed to on clinical effectiveness, while epidemiologists and statisticians need to think hard about how they present their discoveries.
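Part of the confusion Harford describes turns on the gap between relative and absolute risk reduction. A minimal sketch of that arithmetic, using hypothetical baseline numbers (not figures from the article), showing how a "saves 20 per cent" relative reduction can correspond to a much smaller absolute one:

```python
# Hypothetical illustration: cancer deaths per 1,000 people over
# some follow-up period, with and without screening. The specific
# numbers are invented for this sketch.
deaths_without_screening = 2.0   # per 1,000 unscreened people
deaths_with_screening = 1.6      # per 1,000 screened people

# Relative risk reduction: the headline-friendly number.
relative_reduction = (
    (deaths_without_screening - deaths_with_screening)
    / deaths_without_screening
)

# Absolute risk reduction: how many deaths are actually averted
# per 1,000 people screened.
absolute_reduction = deaths_without_screening - deaths_with_screening

print(f"Relative risk reduction: {relative_reduction:.0%}")
print(f"Absolute risk reduction: {absolute_reduction:.1f} per 1,000")
```

The same screening programme can honestly be described as cutting cancer deaths by 20 per cent or as sparing 0.4 people per 1,000 screened, and the two framings leave very different impressions, which is exactly the presentation problem Harford says epidemiologists and statisticians need to think hard about.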