

Indian Journal of Medical Microbiology
Medknow Publications on behalf of Indian Association of Medical Microbiology
ISSN: 0255-0857 EISSN: 1998-3646
Vol. 23, No. 4, October-December 2005, pp. 210-213

Editorial

Better reporting of studies of diagnostic accuracy

*Corresponding author (email: )
Division of Epidemiology (MP), School of Public Health, University of California, Berkeley, USA, and Jhaveri Microbiology Centre (SS), L.V. Prasad Eye Institute, Banjara Hills, Hyderabad-500 034, India


As a leading journal in medical microbiology published from India, the Indian Journal of Medical Microbiology (IJMM) receives many manuscripts reporting evaluations of existing and novel diagnostic tests. Such reports are critically important for microbiologists and clinicians: they contribute to the evidence base of diagnostic test research and enable clinicians and laboratory scientists to make informed decisions on whether or not a test should be used in medical practice.

Unfortunately, in our experience, a substantial proportion of these manuscripts fail to survive the peer review process. At least two major factors account for this high "manuscript mortality rate": poor study quality and poor reporting. It is important to appreciate the difference between the two. Poor study quality pertains to flaws in study design and conduct that lead to invalid (biased) results.[1],[2],[3] Poor reporting, on the other hand, refers to incomplete or inadequate reporting of the design, conduct, analysis and results of a study.[4],[5] A poorly reported study may in fact have been well designed and executed, but it is impossible to know this without contacting the authors for information missing from the published paper.[6]

In epidemiological terms, poor quality studies are those that yield biased estimates of test accuracy. For example, if the interpretation of the index test is influenced by knowledge of the results of the reference standard ("gold standard"), this bias, often called "review bias", can result in overestimation of both sensitivity and specificity.[1],[3] Several other threats to validity exist, and their importance and impact on research findings have been reviewed elsewhere.[1],[2],[3],[5] There is empirical evidence that certain biases are a greater threat to validity than others.[1],[3] Biased results from poorly designed studies can lead to premature adoption of diagnostic tests that offer little or no benefit, with adverse clinical consequences stemming from misleading estimates of test accuracy.
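
To make the direction of review bias concrete, recall the standard definitions from the 2 x 2 table; the worked numbers below are purely illustrative and are not drawn from any of the studies cited here.

\[ \text{Sensitivity} = \frac{TP}{TP + FN}, \qquad \text{Specificity} = \frac{TN}{TN + FP} \]

Suppose 100 culture-positive patients are tested and a blinded reader records TP = 80 and FN = 20, giving a sensitivity of 80/100 = 0.80. If knowledge of the positive cultures leads an unblinded reader to reclassify 10 borderline index-test results as positive, the apparent sensitivity rises to 90/100 = 0.90 even though the test itself is unchanged; the same mechanism can inflate specificity among culture-negative patients.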

What if a study has been designed and conducted well (i.e., with minimal bias), but is poorly reported? Several empirical studies and reviews suggest that this is a major concern with published diagnostic studies.[1],[2],[4],[5],[7],[8] Authors often fail to explicitly report all the critical components of a diagnostic study, making it impossible for peer reviewers and readers to evaluate its scientific validity. For example, in a meta-analysis of nucleic acid amplification (NAA) tests for tuberculous meningitis, 74% of 49 studies did not report whether the NAA test results were interpreted blindly, that is, without knowledge of the results of culture, the reference standard.[8] When the authors of several of these studies were contacted, the proportion with missing information on blinding fell from 74% to 31%.[6],[8] Evidently, some authors had incorporated blinding into their study design but had failed to mention it in their manuscripts. As another example, in a meta-analysis of bacteriophage assays for the diagnosis of tuberculosis, only 2 of 13 (15%) studies reported a blinded comparison of phage assays with the reference standard.[9] Authors also often fail to report the kind of study design employed (e.g., cross-sectional versus case-control), whether the study was prospective or retrospective, whether patients were randomly or consecutively sampled, and whether all patients underwent the reference standard test, irrespective of the results of the index test.

Poorly reported studies are frustrating for editors, peer reviewers, researchers and, ultimately, readers and users of the medical literature. Lack of transparency in reporting makes it hard to judge the validity of a study, and this greatly diminishes its clinical impact and relevance. Poorly reported studies are likely to be rated as poor quality by peer reviewers and editors, increasing the likelihood of rejection. We suspect that poor reporting is one reason why many studies fail to make the cut at high-impact international journals.

What can we do to improve the reporting of diagnostic studies? A recent initiative in this regard is noteworthy. The Standards for Reporting of Diagnostic Accuracy (STARD; pronounced "STAR-D") initiative was launched by an international consortium of investigators to improve the quality of reporting and to encourage authors and editors to use a more standardized and transparent format for reporting diagnostic accuracy studies.[4],[5] The STARD statement has been published simultaneously in several journals, and a few (e.g., JAMA, Annals of Internal Medicine, Lancet, Clinical Chemistry) have already made it mandatory for authors to use the STARD checklist and flow diagram as a template when submitting diagnostic study manuscripts. The STARD initiative follows an earlier effort, CONSORT, designed to improve the reporting of randomized controlled trials.[10] Several journals now require reports of randomized controlled trials to follow the CONSORT guidelines. Similar efforts are underway to improve the reporting of meta-analyses and other types of publications (see http://www.consort-statement.org for an overview of such initiatives).

[Table - 1] reproduces the STARD checklist.[4] This checklist has 25 items covering all the major sections of a well-written manuscript. The rationale and justification for each item can be found elsewhere.[5] Authors could copy these items as subheadings into their manuscript to create a well-structured template. The STARD flow diagram[4] [Figure - 1] can be used to report how patients were recruited, the order in which tests were executed, the number of patients who underwent each test, and the number who had indeterminate, invalid or missing test results.
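
For contributors who find a concrete example useful, the short Python sketch below (hypothetical numbers throughout, not taken from any cited study) illustrates the accounting that a STARD flow diagram enforces: every eligible patient must be traceable to an exclusion, an indeterminate result, a missing reference standard, or a cell of the final 2 x 2 table.

# Illustrative sketch with hypothetical numbers: the participant flow
# that a STARD-style diagram asks authors to account for explicitly.
eligible = 250                      # patients meeting inclusion criteria
excluded = 30                       # did not undergo the index test
index_tested = eligible - excluded  # 220 received the index test
indeterminate = 8                   # indeterminate/invalid index-test results
no_reference = 12                   # never received the reference standard
analysed = index_tested - indeterminate - no_reference  # 200 analysed

# Hypothetical 2 x 2 cross-classification of the 200 analysed patients
tp, fp, fn, tn = 72, 10, 18, 100
assert tp + fp + fn + tn == analysed  # every analysed patient is accounted for

sensitivity = tp / (tp + fn)  # 72/90 = 0.80
specificity = tn / (tn + fp)  # 100/110, approximately 0.91
print(f"analysed={analysed}, sensitivity={sensitivity:.2f}, "
      f"specificity={specificity:.2f}")

Laid out this way, reviewers can verify at a glance that no patients have silently disappeared between recruitment and analysis.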

As members of the IJMM editorial team, we strongly encourage our contributors to read the STARD guidelines and to use the STARD checklist [Table - 1] and flow diagram [Figure - 1] in their submissions to our Journal. The use of the STARD guidelines, we anticipate, will facilitate the peer review process, potentially increase manuscript acceptance rates, and ultimately improve the readability and clinical impact of IJMM articles. In the long run, we also hope that the STARD guidelines will encourage investigators to design better quality studies that are more likely to have a global impact on clinical and laboratory practice. Lastly, we invite feedback from our readers on their practical experiences with the STARD guidelines and on how such guidelines can be adapted for IJMM contributors.

References

1. Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JH, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA 1999;282:1061-6.
2. Whiting P, Rutjes AW, Reitsma JB, Bossuyt PM, Kleijnen J. The development of QUADAS: a tool for the quality assessment of studies of diagnostic accuracy included in systematic reviews. BMC Med Res Methodol 2003;3:25.
3. Whiting P, Rutjes AW, Reitsma JB, Glas AS, Bossuyt PM, Kleijnen J. Sources of variation and bias in studies of diagnostic accuracy: a systematic review. Ann Intern Med 2004;140:189-202.
4. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. Towards complete and accurate reporting of studies of diagnostic accuracy: the STARD initiative. Standards for Reporting of Diagnostic Accuracy. Clin Chem 2003;49:1-6.
5. Bossuyt PM, Reitsma JB, Bruns DE, Gatsonis CA, Glasziou PP, Irwig LM, et al. The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Clin Chem 2003;49:7-18.
6. Pai M, Flores LL, Hubbard A, Riley LW, Colford JM Jr. Quality assessment in meta-analyses of diagnostic studies: what difference does email contact with authors make? XI Cochrane Colloquium, Barcelona, Spain 2003.
7. Pai M, Flores LL, Hubbard A, Riley LW, Colford JM Jr. Nucleic acid amplification tests in the diagnosis of tuberculous pleuritis: a systematic review and meta-analysis. BMC Infect Dis 2004;4:6.
8. Pai M, Flores LL, Pai N, Hubbard A, Riley LW, Colford JM Jr. Diagnostic accuracy of nucleic acid amplification tests for tuberculous meningitis: a systematic review and meta-analysis. Lancet Infect Dis 2003;3:633-43.
9. Kalantri S, Pai M, Pascopella L, Riley L, Reingold A. Bacteriophage-based tests for the detection of Mycobacterium tuberculosis in clinical specimens: a systematic review and meta-analysis. BMC Infect Dis 2005;5:59.
10. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomised trials. Lancet 2001;357:1191-4.

Copyright 2005 - Indian Journal of Medical Microbiology


Images referenced by this document:

[Table - 1] STARD checklist (mb05067t1.jpg)
[Figure - 1] STARD flow diagram (mb05067f1.jpg)