But here’s the thing: when the economists were shown both the graph and the detailed numbers, the number of economists getting the answer spectacularly wrong — the number giving an answer of less than 10 — soared. Just working with their eyeballs, 3% of economists got it wrong. Working with the numbers as well, that proportion rose to 61%! And when a third group was given the numbers and no chart at all, fully 72% of them — professional economists all — got the answer badly wrong.
I’m certainly guilty of this kind of thing: I see a paper demonstrating a statistically significant correlation between one variable and another, and I generally assume that if the experiment were repeated, we’d see the same thing again. But that’s not actually true.
And so it’s easy to see, I think, how economists become convinced of things that the rest of us aren’t sure of at all — and how the economists often end up being wrong, while the rest of us were right to be dubious.
What’s more, if economists are bad at this kind of thing, just imagine what other social scientists are like, or even doctors. Next time you see a piece of pop-science talking about interesting findings from some paper or other, bear this in mind. A lot of papers are written; a few of them have interesting findings. Those are the papers which tend to get publicity. But there’s also a very good chance that they don’t actually show what the headlines say that they show."
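The replication point a couple of paragraphs up is worth dwelling on: a single significant result only replicates as often as the study's statistical power allows, and for typical effect sizes and sample sizes that can be close to a coin flip. Here's a quick simulation to that effect (a minimal sketch; the effect size, sample size, and significance threshold are all illustrative assumptions of mine, not taken from the excerpt):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative assumptions: a modest but perfectly real effect
# (Cohen's d = 0.4) and a common sample size of 50 per group.
effect, n, runs = 0.4, 50, 10_000

significant = 0
for _ in range(runs):
    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(effect, 1.0, n)
    p = stats.ttest_ind(treated, control).pvalue
    significant += p < 0.05

print(f"Fraction of runs reaching p < 0.05: {significant / runs:.2f}")
# Prints roughly 0.5: even though the simulated effect is real,
# only about half of identical repeat experiments come out significant.
```

In other words, under these (made-up but typical) conditions, an "original" significant finding has roughly even odds of replicating even when the effect is genuine, and worse odds when it isn't.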
“How eventual is eventual consistency? How consistent is eventual consistency? PBS provides answers to these questions using new techniques and simple modeling. Find out how and play with models in your browser on this page.”
There's a nice HTML5-based adjustable graph with a bunch of knobs for things like tolerable staleness, accuracy, replica configuration, and so on. Tweak away!
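PBS here is Probabilistically Bounded Staleness. For a flavour of the kind of modeling involved, here's a much-simplified Monte Carlo sketch of PBS-style t-visibility: the probability that a read issued t milliseconds after a write is acknowledged sees the new value, in a Dynamo-style system with n replicas, write quorum w, and read quorum r. This is my own toy version with made-up exponential latencies; the real PBS "WARS" model also accounts for write-acknowledgement and read round-trip latencies:

```python
import random

def fresh_read_probability(n=3, r=1, w=2, t_ms=10.0, trials=100_000,
                           delay=lambda: random.expovariate(1 / 5.0)):
    """Toy PBS-style t-visibility estimate (illustrative only).

    A write propagates to each of n replicas after a random delay and is
    acknowledged once w replicas have it. A read issued t_ms after the
    acknowledgement contacts r randomly chosen replicas and returns the
    freshest value it sees; reads are treated as instantaneous here.
    """
    fresh = 0
    for _ in range(trials):
        delays = sorted(delay() for _ in range(n))
        ack_time = delays[w - 1]          # w-th replica received the write
        contacted = random.sample(delays, r)
        if any(d <= ack_time + t_ms for d in contacted):
            fresh += 1
    return fresh / trials

for t in (0.0, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f} ms -> P(fresh read) ≈ {fresh_read_probability(t_ms=t):.3f}")
```

Cranking up t, r, or w pushes the probability toward 1, which is exactly the trade-off the interactive graph lets you explore.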
A checklist would look something like the following:

- Every story on new research should include the sample size and highlight where it may be too small to draw general conclusions.
- Any increase in risk should be reported in absolute terms as well as percentages: for example, a “50 percent increase” in risk or a “doubling” of risk could merely mean an increase from 1 in 1,000 to 1.5 or 2 in 1,000.
- A story about medical research should provide a realistic time frame for the work’s translation into a treatment or cure. It should emphasize what stage the findings are at: if it is a small study in mice, it is just the beginning; if it’s a huge clinical trial involving thousands of people, it is more significant.
- Stories about shocking findings should include the wider context: the first study to find something unusual is inevitably very preliminary; the 50th study to show the same thing may be justifiably alarming.
- Articles should mention where the story has come from: a conference lecture, an interview with a scientist, or a study in a peer-reviewed journal, for example."
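The absolute-versus-relative risk item in that list is simple arithmetic, but it's the one that trips up headlines most often. A trivial helper makes the point concrete (hypothetical code, purely for illustration):

```python
def describe_risk(baseline, relative_increase):
    """Report a risk increase in both relative and absolute terms.

    baseline: base rate (e.g. 0.001 for 1 in 1,000)
    relative_increase: e.g. 0.5 for a "50 percent increase"
    """
    new = baseline * (1 + relative_increase)
    print(f"Relative: +{relative_increase:.0%}")
    print(f"Absolute: {baseline * 1000:.1f} vs {new * 1000:.1f} per 1,000")

describe_risk(0.001, 0.5)   # the "50 percent increase" from the checklist
# Relative: +50%
# Absolute: 1.0 vs 1.5 per 1,000
```

Same finding, two very different-sounding headlines.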