Objectives Published negative studies should meet the same standards of methodological quality as studies with positive findings. However, whether the methodological quality of negative and positive studies differs is not known. The objective was to assess the reported methodological quality of positive versus negative studies published in Indian medical journals.
Design A systematic review (SR) was performed of all comparative studies published between 2011 and 2013 in Indian medical journals with a clinical science focus and an impact factor >1. The methodological quality of randomised controlled trials (RCTs) was assessed using the Cochrane risk of bias tool, and that of observational studies using the Newcastle-Ottawa scale. Results were considered positive if the primary outcome was statistically significant and negative otherwise. When the primary outcome was not specified, we used data on the first outcome reported in the background section followed by the results section. Differences in the various methodological quality domains between positive and negative studies were assessed by Fisher's exact test.
Results Seven journals with 259 comparative studies were included in this SR. 24.3% (63/259) were RCTs, 24.3% (63/259) were cohort studies, and 49.4% (128/259) were case–control studies. 53% (137/259) of studies explicitly reported the primary outcome. Five studies did not report sufficient data to enable us to determine whether the results were positive or negative. Statistical significance was determined by p value in 78.3% (199/254), CI in 2.8% (7/254), both p value and CI in 11.8% (30/254), and descriptive statistics only in 6.3% (16/254) of studies. The overall methodological quality was poor, and no statistically significant differences in the reporting of methodological quality were detected between studies with positive and negative findings.
Conclusions There was no difference in the reported methodological quality of positive versus negative studies. However, the uneven publication of positive versus negative studies (72% vs 28%) suggests publication bias in Indian medical journals with an impact factor >1.
- Methodological quality
- Publication bias
- Clinical trial
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/
Strengths and limitations of this study
This is the first study comparing the methodological quality of research studies performed in India with positive versus negative results.
This study includes all comparative studies (ie, randomised controlled trials and observational studies).
An important limitation includes the restriction of studies to journals with an impact factor >1 and published in India only.
Medical research conducted in accordance with the highest methodological standards in the field is critical for the overall well-being of patients and populations alike. Research using inappropriate or questionable methodology may yield misleading findings which, instead of benefiting patients, can result in harm, as it can favour ineffective interventions, support wrong hypotheses or suppress an effective intervention.1 Poor-quality research can also waste the efforts of investigators, participants and funders. Therefore, the conduct and publication of research to the highest standards in the field is of utmost importance.
Several studies have assessed the overall methodological quality of scientific studies published in biomedical journals and concluded that the methodological quality of published research does not meet accepted standards.2–7 While the overall assessment of the methodological quality of scientific research has been performed, the methodological quality of studies with positive versus negative findings in clinical medicine has not been compared.
Peer-reviewed scientific publications are a good indicator of the academic contributions of a country. Historically, the majority of scientific contributions in the form of peer-reviewed publications have been dominated by developed countries.8 Nevertheless, in the past decade, there has been an unprecedented surge of scientific publications from developing countries, specifically from India.9 However, it is uncertain if the quality of research has kept pace with the quantity of publications. That is, the overall methodological quality of studies published in Indian medical journals has not been explored systematically. Accordingly, the primary aim of this study is to assess the overall methodological quality of studies published in Indian medical journals and compare the methodological quality of positive studies with negative studies.
Materials and methods
All peer-reviewed journals in the field of clinical medicine published in India with an impact factor greater than one were eligible for inclusion in the systematic review. We are aware that the choice of impact factor as a selection criterion may be controversial.10 However, the impact factor metric, despite its strengths and limitations, is the most widely used metric to determine the reach of a journal or article to global audiences.11 Therefore, for operational feasibility, we used an impact factor of >1 as a selection criterion. Journals with a focus on basic science were not eligible for inclusion. Given the spike in scientific publications in recent years,9 the search was limited to studies published in the past 3 years (2011–2013).
Information sources and search
A comprehensive list of all peer-reviewed medical journals published in India with an impact factor was obtained from the Web of Science Journal Citation Report database for the year 2012.12 This database contains citation information from 11 000 technical journals from about 3300 publishers in over 80 countries. For all journals with an impact factor >1, we reviewed the scope and mission document to determine whether a journal had a clinical medicine focus. Two authors (JC and MC) independently reviewed the scope and mission document to assess eligibility. Any discrepancies were resolved by consensus. Relevant articles from all journals meeting the inclusion criteria were downloaded from the individual journal websites.
All research publications regardless of publication type (eg, full article, short communications/brief reports and research letters) addressing a clinical question for any disease with a comparator were included in the final analysis.
Data collection process
All research publications were obtained from the respective journals' websites. Article selection was performed in duplicate by two authors (JC and MC) according to the a priori inclusion/exclusion criteria. All data from included studies were extracted in duplicate by all authors (JC, MC, RJ, RM, TR and AK) using a standardised data extraction form. Two authors (JC and AK) reviewed a randomly selected 50% of the included studies. Data entry and subsequent analyses were performed by two authors (JC and TR).
The following information was extracted from each included study: journal name, title of the article, date of publication, study design, source of funding, information about primary and secondary end points, method for assessment of significance (p values, CI or descriptive statistics), and assessment of risk of bias and risk of random error.
Determination of positive versus negative results
The result from a study was considered positive if the primary outcome was statistically significant and negative otherwise. When the primary outcome was not specified, we used data on the first outcome reported in the background section followed by the results section.
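As a sketch only (the field names, fallback order and the 0.05 threshold below are illustrative assumptions, not taken from the study protocol), the classification rule described above might be expressed as:

```python
# Illustrative sketch of the positive/negative classification rule described
# above. The dataclass fields and the 0.05 threshold are assumptions for
# illustration; the review itself worked from published outcomes.
from dataclasses import dataclass
from typing import Optional

ALPHA = 0.05  # conventional significance threshold (assumed)

@dataclass
class Study:
    primary_outcome_p: Optional[float]   # p value of the stated primary outcome, if any
    first_reported_p: Optional[float]    # fallback: first outcome in background, then results

def classify(study: Study) -> str:
    """Return 'positive', 'negative', or 'unclassifiable'."""
    # Prefer the explicitly stated primary outcome; otherwise fall back to
    # the first outcome reported in the background section, then the results.
    p = (study.primary_outcome_p
         if study.primary_outcome_p is not None
         else study.first_reported_p)
    if p is None:
        return "unclassifiable"  # insufficient data (five studies in this review)
    return "positive" if p < ALPHA else "negative"
```

For example, `classify(Study(primary_outcome_p=0.03, first_reported_p=None))` yields `"positive"`, while a study reporting no usable p value is flagged `"unclassifiable"`.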
Assessment of methodological quality
The methodological quality of randomised controlled trials (RCTs) was assessed using the Cochrane risk of bias assessment tool.13 For observational studies, the risk of bias was assessed using the Newcastle-Ottawa scale.14 The risk of random error was assessed based on the reporting of sample size calculations, α and β error, and effect size.
Statistical analysis
Descriptive statistics were used to report the overall data as frequencies and percentages. All variables were compared between positive and negative studies using Fisher's exact test. SPSS V.22 was used for data analysis.15
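For illustration, the per-domain comparison can be reproduced with any standard statistics package. A minimal sketch using SciPy rather than SPSS, with an invented 2×2 table (the counts below are not from table 2):

```python
# Minimal sketch of comparing the reporting of one methodological quality
# domain between positive and negative studies with Fisher's exact test.
# The counts are invented for illustration only.
from scipy.stats import fisher_exact

# Rows: domain reported adequately / not reported adequately
# Columns: positive studies, negative studies
table = [[8, 2],
         [1, 5]]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```

Fisher's exact test is appropriate here because many of the per-domain cell counts in a review of this size are small, where the χ² approximation would be unreliable.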
Results
A search of the Web of Science Journal Citation Report database for 2012 found 105 peer-reviewed journals published in India. Of the 105 journals, 25 had a clinical medicine focus. Of these, seven met the predetermined inclusion criteria (ie, impact factor >1) and were included in the final analysis. The reasons for exclusion are presented in figure 1. The included journals were Journal of Postgraduate Medicine (JPGM), Indian Pediatrics (IP), Indian Journal of Medical Research (IJMR), Journal of Vector Borne Disease (JVBD), Indian Journal of Dermatology, Venereology and Leprology (IJDVL), Indian Journal of Cancer (IJC) and Neurology India (NI). These seven journals published a total of 259 studies involving a comparator. Of the 259 studies, 63 (24.3%) were RCTs, 63 (24.3%) were cohort studies and 128 (49.4%) were case–control studies.
The characteristics of included studies are summarised in table 1.
Briefly, 74 (28.6%) studies were funded by government agencies, 7 (2.8%) were supported or sponsored by industry, and 8 (3.1%) were funded by other sources such as the authors' institution. Funding information was not reported in 167 (65.7%) studies.
The majority of studies (n=235, 92.5%) were single-centre studies. Fifteen (5.9%) were multicentre national studies and 3 (1.2%) were multicentre international studies. One study did not report this information.
Of the 63 RCTs, 60 (95.2%) used a parallel design and 3 (4.8%) used a factorial design. The comparator was placebo in 11 RCTs, observation/no active treatment in 6, and an active treatment for the disease condition in 46.
Five studies did not report sufficient data to enable us to determine if the results were positive or negative. Fifty-two per cent (132/254) of studies explicitly reported the primary outcome. Statistical significance was determined by p value in 78.3% (199/254), CI in 2.8% (7/254), both p value and CI in 11.8% (30/254), and only the descriptive method in 6.3% (16/254) of studies.
The overall methodological quality and comparison of negative versus positive studies is summarised in table 2.
Briefly, of the 259 studies, 5 (1.9%) could not be categorised as positive or negative; of the remaining 254, findings were positive in 187 (73.6%) and negative in 67 (26.4%).
Comparison of methodological quality according to positive versus negative findings
Randomised controlled trials
The methodological quality domains of random sequence generation (52.3%), allocation concealment (39.6%) and blinding (22.2%) were inadequately reported. Incomplete reporting of data was observed in 44.4% of RCTs. Selective reporting of results occurred in 15.8% of RCTs. Reporting of sample size calculations and their various components was inadequate (see table 2). There was no significant difference between positive and negative clinical trials in the reporting of these methodological parameters.
Observational studies
For cohort studies, the majority of the methodological domains in the Newcastle-Ottawa scale were under-reported; for example, “outcome of interest not present at start” was reported in only 36.5% of studies. In the case–control studies, selection of consecutive cases (41.4%), selection of appropriate controls (32%) and ascertainment of exposure (39.8%) were grossly under-reported. There was no statistically significant difference between positive and negative observational studies in the reporting of any of these parameters (see table 2).
Discussion
To the best of our knowledge, this is the first study assessing the methodological quality of observational studies and RCTs published in Indian medical journals with an impact factor >1. Previous studies have evaluated overall methodological quality in the context of clinical trials.16–18 However, we believe this is the first study comparing the methodological quality of positive versus negative studies. The results show that the overall quality of reporting of methodological parameters was low in articles published in Indian medical journals with an impact factor >1, and that there was no significant difference between positive and negative studies in the reporting of these parameters. Nevertheless, whether the results are an artefact of the quality of reporting or of study conduct cannot be determined. While assessment of publication bias was not the aim of the study, there appears to be substantial publication bias in Indian medical journals with an impact factor >1 in terms of the publication of studies with positive (73.6%) versus negative (26.4%) results, which is a clear violation of the uncertainty principle.19
Our study also has some limitations. We only included journals with an impact factor greater than 1; therefore, the findings may not be generalisable to all journals published in India. Nonetheless, because the impact factor, although controversial, is considered a predictor of journal quality, the extent of poor reporting found here can be generalised to prominent Indian medical journals.20 Whether the methodological quality of studies published in other Indian journals is equal, better or worse needs empirical assessment. Additionally, this study is based on the methodological parameters reported in published articles; it is certainly possible that some parameters were measured by study investigators but not reported because of word limits or other technical reasons. Finally, we only included articles published in the past 3 years (2011–2013), as we aimed to assess the current reporting of methodological quality. There may be a concern that negative results in any study may actually be false-negative results because of poor methodology or an insufficient sample size.17 18 However, the intent of this paper was not to assess the reasons for results being negative or positive, which have been assessed in other studies; our aim was only to compare the methodological quality of studies with negative versus positive findings.17 18
The results from our study are also in accordance with other global studies conducted with similar objectives, indicating that such conduct or under-reporting is not confined to Indian medical journals.16–18 21–24 It is surprising that, despite the availability of reporting guidelines such as the CONSORT statement, the inclusion of important methodological parameters in published clinical trials remains inadequate and needs significant improvement.25 Similarly, the reporting of methodological parameters was inadequate for observational studies, although, unlike the clinical trials, the majority of parameters were reported more than 50% of the time. The results are somewhat reassuring in that journals seem to apply the same quality standards to positive and negative studies in the review process. This is in contrast to a recent review of studies published in nursing journals, in which the investigators found a significantly higher level of methodological quality for negative studies.26 Nevertheless, those authors also reported that positive studies were published more frequently than negative studies (73.6% vs 26.4%), in line with our results. The uneven distribution of positive and negative studies was similar among the observational studies and RCTs in our study. While this study was not designed to detect publication bias, the uneven distribution of positive versus negative studies is highly indicative of its presence.
On the basis of our findings, we conclude that the reported methodological quality of studies published in seven Indian clinical medicine journals with an impact factor >1 is weak. Additionally, there was no significant difference between positive and negative studies with respect to methodological quality parameters. Future studies should include a representative sample of all Indian journals so that the findings are more generalisable. As this is the first study to compare positive versus negative studies, future efforts could target articles published in journals from other countries and in different clinical specialties.
Twitter Follow Ambuj Kumar at @drambuj
Contributors JC, MC, RM, RJ, TR and AK designed the systematic review, performed data collection and extraction, contacted the original authors for missing or confusing information, carried out the statistical analysis and interpretation of the data, and wrote the first draft of the report. JC and MC searched for articles. All authors assessed their eligibility and performed a major revision of this report. When discrepancies occurred, they were resolved by discussion between JC, RM, TR and AK. All authors approved the final version of the manuscript.
Funding This work was supported by Award Number D43TW006793 from the Fogarty International Center.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.