No takebacks!
August 14, 2010 1:21 AM

Tracking retractions as a window into the scientific process.

From the first post, "Why write a blog about retractions?":
So why write a blog on retractions?

First, science takes justifiable pride in the fact that it is self-correcting — most of the time. Usually, that just means more or better data, not fraud or mistakes that would require a retraction. But when a retraction is necessary, how long does that self-correction take? The Wakefield retraction, for example, was issued 12 years after the original study, and six years after serious questions had been raised publicly by journalist Brian Deer. Retractions are therefore a window into the scientific process.

Second, retractions are not often well-publicized. Sure, there are the high-profile cases such as Reuben’s and Wakefield’s. But most retractions live in obscurity in Medline and other databases. That means those who funded the retracted research — often taxpayers — aren’t particularly likely to find out about them. Nor are investors always likely to hear about retractions of basic science papers whose findings may have formed the basis for companies into which they pour dollars. So we hope this blog will form an informal repository for the retractions we find, and might even spur the creation of a retraction database such as the one called for here by K. M. Korpela.

Third, they’re often the clues to great stories about fraud or other malfeasance, as Adam learned when he chased down the Reuben story. The reverse can also be true. The Cancer Letter’s exposé of Potti and his fake Rhodes Scholarship is what led his co-authors to remind The Lancet Oncology of their concerns, and then the editors to issue their expression of concern. And they can even lead to lawsuits for damaged reputations. If highlighting retractions will give journalists more tools to uncover fraud and misuse of funds, we’re happy to help. And if those stories are appropriate for our respective news outlets, you’ll only read about them on Retraction Watch once we’ve covered them there.

Finally, we’re interested in whether journals are consistent. How long do they wait before printing a retraction? What requires one? How much of a public announcement, if any, do they make? Does a journal with a low rate of retractions have a better peer review and editing process, or is it just sweeping more mistakes under the rug?
A few highlights: retracting retractions, monkey business (previously on MetaFilter), and Jesus.
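The "obscurity in Medline" point is partly a tooling problem: PubMed tags retraction notices with their own publication type, so they can be listed programmatically via NCBI's E-utilities. A minimal sketch — the esearch endpoint and the publication-type filter are real PubMed features, but the helper name, date range, and retmax value here are illustrative:

```python
# Build an NCBI E-utilities esearch URL that lists PMIDs of
# retraction notices published in a given date range.
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def retraction_search_url(year_from, year_to, retmax=100):
    """Return an esearch URL for retraction notices in [year_from, year_to]."""
    term = ('"Retraction of Publication"[Publication Type] '
            f'AND {year_from}:{year_to}[dp]')
    return ESEARCH + "?" + urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax})

url = retraction_search_url(2000, 2010)
```

Fetching that URL returns XML listing matching PMIDs; swapping the filter for "Retracted Publication"[Publication Type] would instead find the retracted papers themselves.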
posted by WalterMitty (12 comments total) 21 users marked this as a favorite
 
A most excellent blog, thank you, WalterMitty.
posted by unliteral at 1:29 AM on August 14, 2010


I see that I have a bit of reading to do. Thanks for this.
posted by Splunge at 3:27 AM on August 14, 2010


I hate to be the skeptic, but the retractions identified by this blog are either ones based on obviously outlandish science, or ones where a researcher has admitted a mistake.

With the pressure put on (even tenured) academics at today's universities, I have to question whether the majority of published papers are substantial, or merely a means of keeping the funding entities/deans/scientific colleagues at bay.
posted by clearly at 3:58 AM on August 14, 2010


Actually, scratch that whole "I hate to be the skeptic" line. Skepticism is invaluable to the progress of science, and is needed now more than ever.
posted by clearly at 4:03 AM on August 14, 2010 [1 favorite]


the retractions identified by this blog are either ones based on obviously outlandish science, or ones where a researcher has admitted a mistake.

To be fair I think the blog's mission is a good one. From their first post, outlined above: retractions are not often well-publicized

This is very true, and this post is a case in point. News sources often cover the release of new studies with rigor; retractions, regardless of how outlandish the science, not so much. Of course, the mission is null and void if people don't know about the blog. Science media written for laymen is notoriously crappy, so I suspect this blog will be very useful as time goes on.
posted by IvoShandor at 4:31 AM on August 14, 2010 [1 favorite]


From my soon-to-be-deleted double on Hausergate:

A colleague who had written jokingly in the past about the "Hauser Effect"--"Time and time again, a trait seen only in humans, or only in apes, or only in humans and chimps, a trait that might make a good phylogenetic marker to circumscribe a phylogenetically coherent special class of animals to which we afford a common right, is found in New World Monkeys (or some other group) by experimenters such as Hauser"--responds to the investigation. (Hauser's work was just about everywhere in popular science journalism--his book Moral Minds scored a lengthy NYT review by Richard Rorty.) Another cognitive scientist weighs in on the bigger picture of academic journals and the politics of ideas.
posted by availablelight at 5:28 AM on August 14, 2010


For those interested in scientific fraud, if you missed this the first time around, I highly recommend the book, Plastic Fantastic for a peek into fraud on a massive and almost unimaginable scale.
posted by ssmug at 6:04 AM on August 14, 2010


Awesome, it's like deleted thread for Science!
posted by condour75 at 6:34 AM on August 14, 2010 [4 favorites]


This is a great find, my personal thanks for it.

Consider the opposite: what happens to a fake study when it is detected as fake?
In 3/2009 there was a Scientific American article describing some faked studies (i.e., invented data) on post-operative analgesia. The articles ranged from 2000 to 2008.

But more importantly, the fake articles remained online, with no indication on the online articles themselves that they were already determined to have been fake.

The SA article was from 3/2009. The author formally pleaded guilty in 1/2010, and on that day the articles were still available online (I checked).

If you go online today (8/2010), those articles are finally gone. So for anywhere between 9 and 17 months after SA wrote an article about them being fakes, they were still available for use.

Now you can say they were "pending investigation," but the title of the SA article is "A Medical Madoff: Anesthesiologist Faked Data in 21 Studies." And even if it was pending investigation, should you allow people access to them?

IMO, the weak link is peer review. To everyone who is not a peer reviewer, it sounds like a group of doctors goes over the raw data and checks whether the conclusions are accurate. But peer review is usually not much more than a spell check, and for sure no one ever looks at the raw data. The peers are usually chosen by the author ("suggest three people who could review your paper"), and although it's anonymous, the club is small enough that you know who is who (just like with grants).

And peer review is a function of the feudal journal system. There's no reason we should have them, period. They don't provide any protection from bad science or even fraud, but the average person thinks they do, which is worse.

There's no reason in 2010 all raw data shouldn't go online, including video when appropriate.
posted by TheLastPsychiatrist at 11:57 AM on August 14, 2010 [1 favorite]


IMO, the weak link is peer review. To everyone who is not a peer reviewer, it sounds like a group of doctors goes over the raw data and checks whether the conclusions are accurate. But peer review is usually not much more than a spell check, and for sure no one ever looks at the raw data. The peers are usually chosen by the author ("suggest three people who could review your paper"), and although it's anonymous, the club is small enough that you know who is who (just like with grants).

I disagree wholeheartedly, having gone through the peer review process several times and finding it in general very productive. I'm not sure how your field (Psychiatry?) compares.

I'd say the weak link is publishers who are competing to generate revenue by having the latest and greatest in their issues coupled with researchers who are willing to push nitty-gritty experimental details and boring data to the background behind their sexy results.

Efforts like this blog (and more open-access non-profit journals) promote transparency, which would probably help a lot for cases like the Scientific American example you cite.
posted by beepbeepboopboop at 12:48 PM on August 14, 2010


peer review is usually not much more than a spell check

Seconding beepbeepboopboop, that's way too strong. Peer review has a lot of serious flaws, especially when the review isn't double-blind, and many of those are pretty well documented. But it's pretty rare in my field (cognitive science) to be able to get any paper through without serious arguments with the reviewers. Who are, of course, completely wrong on all counts regarding my manuscript, I should like to add.
posted by mixing at 2:57 PM on August 14, 2010 [3 favorites]


But peer review is usually not much more than a spell check

LOL! Who would have known that the secret to getting published in top journals was a spell check? Whatever did we do before Microsoft Word?

for sure no one ever looks at the raw data

Factually incorrect - I have been asked for (and provided) my raw data before. Ditto my colleagues.

The peers are usually chosen by the author

When journals do this, it is to alleviate the demand for reviewers. Even then, every journal I know that does this selects one reviewer from your suggestions and supplements two more from its own stable.

They don't provide any protection from bad science

OMG! Just like prison guards don't prevent escapes! Get rid of them too!

There's no reason in 2010 all raw data shouldn't go online

Oh sure, if you're studying the boiling point of water. But what of commercially, politically or personally sensitive data?

To be honest, I am not surprised that "the average person" doesn't understand the review process, when utter falsehoods like this are propagated.
posted by Sutekh at 7:17 AM on August 15, 2010




This thread has been archived and is closed to new comments