Guidelines for Reporting Health Research - A User's Manual


By: David Moher, Douglas Altman, Kenneth Schulz, Iveta Simera, Elizabeth Wager

BMJ Books, 2014

ISBN: 9781118715611

Language: English

344 pages



Foreword


Guides to guidelines


Drummond Rennie, MD

University of California, San Francisco, USA

Introduction


Good patient care must be based on treatments that have been shown by good research to be effective. An intrinsic part of good research is a published paper that closely reflects the work done and the conclusions drawn. This book is about preventing, even curing, a widespread endemic disease: biased and inadequate reporting. Such bias and poor reporting threaten to overwhelm the credibility of research and to ensure that our treatments are based on fiction, not fact.

Over the past two decades, there has been a spate of published guidelines on reporting, ostensibly to help authors improve the quality of their manuscripts. If authors follow the guidelines, their manuscripts will include all the information necessary for an informed reader to be fully persuaded by the paper. At the same time, the articles will be well organized, easy to read, well argued, and self-critical. The guidelines promise to help everyone: from the design phase of the research, when they may serve as a reminder to investigators; through review, when editors and reviewers find it easy to check the facts and to note what facts are missing; all the way through to the reader of the published article, who finds it easy to access the facts, all of them in context.

To which, given the ignorance, ineptitude, inattention, and bias of so many investigators, reviewers, and journal editors, I would add a decisive “Maybe!”

How did it start? How did we get here?


In 1966, 47 years ago, Dr Stanley Schor, a biostatistician in the Department of Biostatistics at the American Medical Association in Chicago, and Irving Karten, then a medical student, published in JAMA the results of a careful examination of a random sample of published reports taken from the 10 most prominent medical journals. Schor and Karten focused their attention on the roughly half of the reports, 149 in number, that they considered to be “analytical studies,” as opposed to reports of cases. They identified 12 types of statistical errors and found that the conclusions were invalid in 73%. “None of the ten journals had more than 40% of its analytical studies considered acceptable; two of the ten had no acceptable reports.” Schor and Karten speculated on the implications for medical practice, given that these defects occurred in the most widely read and respected journals, and they ended presciently: “since, with the introduction of computers, much work is being done to make the results of studies appearing in medical journals more accessible to physicians, a considerable amount of misinformation could be disseminated rapidly.” Boy, did they get that one right!

Better yet, this extraordinary paper also included the results of an experiment: 514 manuscripts submitted to one journal were reviewed by a statistician. Only 26% were “acceptable” statistically. However, the intervention of a statistical review raised the “acceptable” rate to 74%. Schor and Karten's recommendation was that a statistician be made part of the investigator's team and of the editors' team as well [1]. Their findings were confirmed by many others, for example, Gardner and Bond [2].

I got my first taste of editing in 1977 at the New England Journal of Medicine, and first there and then at JAMA, the Journal of the American Medical Association, my daily job has been to try to select the best reports of the most innovative, important, and relevant research submitted to a large-circulation general medical journal. Although the best papers were exciting and solid, they seemed like islands floating in a swamp of paper rubbish. So from the start, the Schor/Karten paper was a beacon. Not only did the authors identify a major problem in the literature, using scientific methods, but they also tested a solution and then made recommendations based on good evidence.

This became a major motivation for establishing the Peer Review Congresses. Exasperated, I wrote in 1986:

One trouble is that despite this system (of peer review), anyone who reads journals widely and critically is forced to realize that there are scarcely any bars to eventual publication [3].

Was the broad literature so bad despite peer review or because of it? What sort of product, in the form of clinical research reports, was the public funding and were we journals disseminating? Only research could find out, and so from the start the Congresses were limited strictly to reports of research.

At the same time, Iain Chalmers and his group in Oxford were struggling to make sense of the entire literature on interventions in health care, using and refining the science of meta-analysis to apply it to clinical reports. This meant that, with Chalmers' inspired creation of the Cochrane Collaboration, a great many bright individuals, such as Altman, Moher, Dickersin, Chalmers, Schulz, and Gøtzsche, were bringing intense skepticism and systematic scrutiny to assessing the completeness and quality of reporting of clinical research and to identifying those essential items whose inadequate reporting was associated with bias. The actual extent of biases, arising, say, from financial conflicts or failure to publish, could be measured, and from that came changes in the practices of journals, research institutions, and individual researchers. Eventually, there even came changes in the law (e.g., requirements to register clinical trials and then to post their results). Much of this research was presented at the Congresses [4–6]. The evidence was overwhelming that poor reporting biased conclusions – usually about recommended therapies [7]. The principles of randomized controlled trials, the bedrock of evidence about therapies, had been established 40 years earlier, and none of it was rocket science. But time and again, investigators had been shown to make numerous simple but crucial mistakes in reporting such trials.

What to do about it?


In the early 1990s, two groups came up with recommendations for reporting randomized trials [8, 9]. These were published but produced no discernible effect. In discussions, David Moher suggested to me that JAMA should publish a clinical trial reported according to the SORT recommendations, which we did [10], calling for comments – which we got in large numbers. It was obvious that one reason the SORT recommendations never caught on was that, while they were the product of a great deal of effort by distinguished experts, no one had actually tried them out in practice. When this was done, the resultant paper was unreadable, as the guidelines allowed no editorial flexibility and broke up the logic and flow of the article.

David and I realized that editors were crucial in this process. Put bluntly, if editors demanded adherence at a time when authors were likely to be in a compliant frame of mind – when acceptance of their manuscript depended on their following orders – then editorial policy would become the standard for the profession.

Owing to the genius, persistence, and diplomacy of David Moher, the two groups got their representatives together, and from this CONSORT was born in 1996 [10–13]. Criticism was drowned in a flood of approval, because the evidence for including items on the checklist was presented and the community was encouraged to comment. The backing of journal editors forced investigators to accept the standards, and the cooperation of editors was made easier when they were reassured, on Doug Altman's suggestion, that different journals would be allowed flexibility in where they asked authors to include particular items, that the guidelines were provisional and were to be studied, and that there was a process for revision as new evidence accumulated.

The acceptance of CONSORT was soon followed by the creation and publication of reporting guidelines in many other clinical areas. The founding of the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network [14] in 2008 was a recognition not only of the success of such guidelines but also of the need to help authors write articles fit for purpose and to provide much-needed resources for all those involved with medical journals. As such, it represents a huge step toward improving the transparency and quality of research reporting.

Are we there yet?


Forty-seven years later, Lang and Altman, referring to the Schor/Karten article that I mentioned at the beginning, write about how little seems to have changed.

Articles with even major errors continue to pass editorial and peer review and to be published in leading journals. The truth is that the problem of poor statistical reporting is long-standing, widespread, potentially serious, concerns mostly basic statistics, and yet is largely unsuspected by most readers of the biomedical literature [15].

Lang and Altman refer to the statistical design and analysis of studies, but a study where these elements are faulty cannot be trusted. The report IS the research, and my bet is that other parts of a considerable proportion of clinical reports are likely to be just as faulty. That was my complaint in 1986, and it is depressing that it is still our beef after all these efforts. I suspect there is more bad research reported simply because every year there are more research reports, but whether things are improving or getting worse is unclear. What it does mean is that we have work to do. This book is an excellent place to start the prevention and cure of a vastly...
