Our oncologist friends told us that because their field was advancing so rapidly, they seldom cited full papers. To provide up-to-date information, they mostly cited conference abstracts. They anticipated they would soon have to cite data from press conferences.
That made us look rather backward. When the development of therapies for chronic hepatitis B was at its height, we did cite some conference abstracts. The practice has dwindled in the last one to two years, after the data on the major drugs in this field were published as full papers. After all, why should we cite the year-six data of a drug presented at the latest conference when the year-five data have already been published in peer-reviewed journals? The early online publication of accepted papers by biomedical journals also facilitates rapid dissemination of data. Certainly, we are far from the stage of citing the New York Times, except for fun.
A comprehensive review of 29,729 conference abstracts found that only 53% of all abstracts were eventually published as full papers at 9 years. [Scherer RW et al. Cochrane Database Syst Rev 2007 Apr 18;(2):MR000005] There are several possible reasons for this observation. Publication bias plays a part: negative studies are less likely to be published. Some studies have major methodological flaws; while they look all right in abstract form, they cannot withstand the scrutiny of peer reviewers when presented in full. Moreover, some conference abstracts are obviously put up by pharmaceutical companies for promotional purposes and were never intended to be published as full papers.
While it is tempting to cite a reference to support our view, we must remember that a big message does not equal big evidence. We should scrutinize the reliability of data, whether they come from scientific papers, conference abstracts or the New York Times.