
Original research
Lag times in the publication of network meta-analyses: a survey
Fernanda S Tonin1, Ariane G Araujo1, Mariana M Fachi1, Vinicius L Ferreira1, Roberto Pontarolo2, Fernando Fernandez-Llimos3

1 Pharmaceutical Sciences Postgraduate Programme, Federal University of Paraná, Curitiba, Brazil
2 Department of Pharmacy, Federal University of Paraná, Curitiba, Brazil
3 Laboratory of Pharmacology, Faculty of Pharmacy, University of Porto, Porto, Portugal

Correspondence to Professor Fernando Fernandez-Llimos; fllimos@ff.up.pt

Abstract

Objective We assessed the extent of lag times in the publication and indexing of network meta-analyses (NMAs).

Study design This was a survey of published NMAs on drug interventions.

Setting NMAs indexed in PubMed (searches updated in May 2020).

Primary and secondary outcome measures Lag times were measured as the time between the last systematic search and the article submission, acceptance, online publication, indexing and Medical Subject Headings (MeSH) allocation dates. Time-to-event analyses were performed considering independent variables (geographical origin, Journal Impact Factor, Scopus CiteScore, open access status), using SPSS V.24 and R/RStudio.

Results We included 1245 NMAs. The median time from last search to article submission was 6.8 months (204 days (IQR 95–381)), and to publication was 11.6 months. Only 5% of authors updated their search after first submission. Acceptance (rho=−0.087; p=0.010), online publication (rho=−0.080; p=0.008) and indexing (rho=−0.080; p=0.007) lag times showed slightly decreasing historical trends. Journal Impact Factor influenced the MeSH allocation process, but not the other lag times. The comparison between open access and subscription journals showed negligible differences in acceptance, online publication and indexing lag times.

Conclusion Efforts by authors to update their search before submission are needed to reduce evidence production time. Peer reviewers and editors should ensure authors’ compliance with NMA standards. The accuracy of these findings depends on the accuracy of the metadata used; as we evaluated only NMAs on drug interventions, results may not be generalisable to all types of studies.

  • clinical pharmacology
  • medical journalism
  • statistics & research methods

Data availability statement

Data are available in a public, open access repository at DOI 10.17605/OSF.IO/MD3CU. Additional data are available upon reasonable request.


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Strengths and limitations of this study

  • This study evaluated publication lag times between the last systematic search and article submission, acceptance, publication, indexing and Medical Subject Headings allocation dates of network meta-analyses (NMAs) about drug interventions.

  • Time-to-event analyses were performed considering characteristics of the NMA.

  • Correlations of the publication process lag times with time trends (years) were calculated using Spearman’s rho.

  • Exploratory variables were articles’ geographical origin, Journal Impact Factor, Scopus CiteScore and open access status.

  • The accuracy of the results depends on the accuracy of the available metadata.

Introduction

Syntheses of user-driven evidence in healthcare need to be up to date and integrate recent data.1 2 Systematic reviews with conventional pairwise meta-analyses and network meta-analyses (NMAs) are the gold standard for synthesising evidence from primary studies.3 4 NMAs have the advantage of statistically combining both direct evidence (ie, available in the literature) and indirect evidence (ie, estimated based on common treatment comparators) across several treatments in a single model.5 Over the past years, NMAs have become more widely used, increasing from fewer than 15 NMAs published per year from 2003 to 2008 to approximately 150 NMAs annually after 2014.6 However, although high-quality NMAs can produce a broader body of evidence, this technique is resource intensive and time consuming.7 8

Previous studies have demonstrated that the time between the last systematic search performed by the authors of a systematic review with conventional meta-analysis and the publication of their research is usually greater than 1 year.9 10 Although the rates at which systematic reviews go out of date may differ according to several factors (eg, whether the review addresses a current question, whether relevant new studies have appeared, and the accessibility and use of previous reviews), only a minority of systematic reviews (less than 25% in Cochrane) are updated after 2 years.11–13

In clinical areas with high publishing speed, the results of a study may become quickly outdated and may no longer be useful in real-world settings.14–16 In these cases, it is recommended that authors update their reviews annually, especially because approximately one-tenth of their findings support daily clinical decision-making.17 18 Some authors claim that updating a meta-analysis may be challenging because it involves multiple tests as evidence accumulates and the effect sizes are recalculated at each step, which may increase type I error.19 20 On the other hand, researchers are aware that inconsistent and outdated information may significantly compromise decision-making and research planning.12 15

Nonetheless, authors may not be the only ones responsible for outdated evidence. The time lag of the peer review process of a scientific article (ie, the time between paper submission and publication) also erodes the lifespan of the evidence, because during this period the evidence is not accessible to end users.9 11 21 Additionally, the use of Medical Subject Headings (MeSH) terms in systematic searches enhances the sensitivity of the search strategy by retrieving records that would not have been identified using free-text words only. Thus, the retrieval of relevant literature may be hampered by the long interval between an article’s inclusion in PubMed and its MeSH assignment.14–16

Thus, we aimed to assess the extent of time lags in the publication and indexing of NMAs on drug interventions by performing descriptive and survival analyses.

Methods

Search strategy and selection criteria

The internal protocol for this research (Portuguese version—original) is available in the online supplemental appendix 1. This study was not pre-registered.

All systematic reviews with NMAs evaluating drug treatments indexed in PubMed (https://pubmed.ncbi.nlm.nih.gov/) were compiled. Systematic searches were conducted in PubMed without time or language limits (updated on 1 May 2020). The complete search strategies are available in the online supplemental appendix 2 table A1.

We included studies using NMAs of any type (ie, at least three interventions with open or closed loops) of experimental, quasiexperimental or observational trials, comparing any pharmacological intervention (alone or in combination with other pharmacological interventions), regardless of regimen or dosage, in patients with any clinical condition. Non-NMAs, protocols, studies reporting data on non-pharmacological interventions and studies published in non-Roman scripts were excluded. Study selection was performed independently by two authors (FST and AGA), and discrepancies were resolved by a third author (FF-L).

Data extraction

We used a standardised collection form to extract data about NMA general characteristics (eg, year of publication, journal) and information on the date of the last systematic search reported by the authors. This process was also performed independently by two researchers (FST and AGA), and discussed with a third author (FF-L) when necessary.

Other dates were automatically obtained from PubMed by exporting the metadata in MEDLINE format and extracted from the following MEDLINE fields: PHST (received) (submission date), PHST (accepted) (acceptance date), DEP (online publication date), EDAT (PubMed indexing date) and MHDA (MeSH allocation date). When these data were not available through PubMed, the journal’s website was consulted. When the exact search date was not presented, the 15th day of the month was used (eg, 15 October).
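
To illustrate the extraction described above, the following sketch parses these date fields from a PubMed export saved in MEDLINE format. It is a minimal example, not the script used in this study; the function names are hypothetical, and the handling of a missing day (defaulting to the 15th) mirrors the convention applied to search dates.

```python
from datetime import date

def parse_medline_dates(record_text):
    """Extract publication-stage dates from one MEDLINE-format record.

    Scans for the PHST (received/accepted), DEP, EDAT and MHDA fields,
    as exported by PubMed in MEDLINE ('PubMed') format.
    """
    dates = {}
    for line in record_text.splitlines():
        tag, _, value = line.partition("- ")
        tag, value = tag.strip(), value.strip()
        if tag == "PHST" and "[received]" in value:
            dates["submission"] = _to_date(value.split(" ")[0])
        elif tag == "PHST" and "[accepted]" in value:
            dates["acceptance"] = _to_date(value.split(" ")[0])
        elif tag == "DEP":  # online publication date, formatted YYYYMMDD
            dates["online"] = date(int(value[:4]), int(value[4:6]), int(value[6:8]))
        elif tag == "EDAT":  # PubMed indexing date
            dates["indexing"] = _to_date(value.split(" ")[0])
        elif tag == "MHDA":  # MeSH allocation date
            dates["mesh"] = _to_date(value.split(" ")[0])
    return dates

def _to_date(stamp):
    """Convert 'YYYY/MM/DD' to a date; a missing day defaults to the 15th."""
    parts = [int(p) for p in stamp.split("/")]
    day = parts[2] if len(parts) > 2 else 15  # same convention as for search dates
    return date(parts[0], parts[1], day)
```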

We also extracted data on exploratory variables, including the journal’s country of origin, which was automatically collected from the ‘publisher address’ field available in the Science Citation Index Expanded List (Clarivate Analytics, Philadelphia). When not available, journal origins were manually searched in the National Library of Medicine Catalog (https://www.ncbi.nlm.nih.gov/nlmcatalog). The 2019 Journal Impact Factor list was obtained from the Journal Citation Reports (JCR) available at the Web of Science (Clarivate Analytics), and CiteScore values were obtained from the Scopus CiteScore list updated on 30 April 2020 (https://www.scopus.com) (SCImago Journal Rank, SJR). According to the journal’s subject category, both rankings divide journals into quartiles (1st, 2nd, 3rd and 4th), from highest to lowest impact factor or impact index, respectively. Journals were considered open access when included in the Directory of Open Access Journals (DOAJ) (http://www.doaj.org; extracted 28 March 2020). Journal business models were classified according to the DOAJ journal list as article processing charges (APC) journals or altruistic journals, defined as open access journals without APCs. Journals not included in the DOAJ list were considered subscription journals.
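
The DOAJ-based classification can likewise be scripted. The sketch below assumes the DOAJ journal list has been downloaded as a CSV file; the column names are assumptions (DOAJ has changed its headers between releases), so this illustrates the logic rather than a ready-to-run pipeline.

```python
import csv

# Assumed column names; actual DOAJ CSV headers vary between releases.
ISSN_COLS = ("Journal ISSN (print version)", "Journal EISSN (online version)")
APC_COL = "APC"

def classify_business_model(issn, doaj_csv_path="doaj_journals.csv"):
    """Classify a journal as 'APC', 'altruistic' or 'subscription'.

    A journal listed in the DOAJ is open access: 'APC' if it charges
    article processing charges, otherwise 'altruistic'. Journals absent
    from the DOAJ list are treated as subscription journals.
    """
    with open(doaj_csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            if issn in (row.get(col, "") for col in ISSN_COLS):
                has_apc = row.get(APC_COL, "").strip().lower() == "yes"
                return "APC" if has_apc else "altruistic"
    return "subscription"
```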

Data analyses

The duration of the different stages of the publication process was calculated (in days) as follows: ‘submission lag time’, the time between the date of the last search and the submission date (PHST (received)); ‘acceptance lag time’, the time between the last search date and the acceptance date (PHST (accepted)); ‘online publication lag time’, the time between the last search date and the online publication date (DEP); ‘indexing lag time’, the time between the last search date and the PubMed indexing date (EDAT); and ‘MeSH allocation lag time’, the time between the last search date and the MeSH allocation date (MHDA).
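
These definitions reduce to simple date arithmetic. As a sketch (reusing the hypothetical parse_medline_dates output from the data extraction example above):

```python
from datetime import date

def lag_times(last_search, dates):
    """Compute the five lag times (in days) from the last-search date.

    `dates` maps stage names to datetime.date objects; stages without
    a reported date yield None.
    """
    stages = ("submission", "acceptance", "online", "indexing", "mesh")
    return {s: (dates[s] - last_search).days if s in dates else None
            for s in stages}

# eg, a search on 15 October 2019 and a submission on 6 May 2020
print(lag_times(date(2019, 10, 15), {"submission": date(2020, 5, 6)}))
# {'submission': 204, 'acceptance': None, 'online': None, ...}
```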

Variable normality was assessed with the Shapiro-Wilk (SW) test, with additional visual inspection of Q-Q plots. Descriptive exploratory statistics were used to summarise the data, with absolute and relative frequencies for categorical variables and the median, IQR and minimum and maximum values for continuous (non-normal) variables. The correlations of the publication process lag times with time trends (years) were calculated using Spearman’s rho.
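
The study ran these analyses in SPSS and R; an equivalent sketch in Python with SciPy (the data below are illustrative only):

```python
from scipy import stats

years = [2014, 2015, 2016, 2017, 2018, 2019, 2020]  # illustrative data
lags = [240, 230, 250, 210, 200, 205, 190]          # submission lags (days)

sw_stat, sw_p = stats.shapiro(lags)          # Shapiro-Wilk normality test
rho, p_trend = stats.spearmanr(years, lags)  # historical trend (Spearman's rho)
print(f"SW p={sw_p:.3f}; rho={rho:.3f} (p={p_trend:.3f})")
```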

The survival time (ie, time to event) was estimated as the interval between the date of the last systematic search and the dates of interest (submission, acceptance, online publication, indexing and MeSH allocation). Kaplan-Meier curves were used to graphically represent the results of the survival analysis (ie, the occurrence of the event as a function of time). The data were reported as the median (days) with a 95% CI.22–24 Negative values (ie, last systematic search updated by the authors after article submission) were excluded from the survival analyses. This approach was selected because negative values are not suitable for survival analyses (eg, they can distort the median survival by artificially moving from the longest to the shortest lag). Similarly, recoding negative values as zero days or very short delays could introduce bias, because articles with negative submission lag times initially had a positive lag that only became negative when the search was updated.
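
A minimal Kaplan-Meier sketch with the lifelines Python library, applying the same exclusion of negative lags (illustrative data; the study itself used SPSS and R):

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.utils import median_survival_times

# lag (days) and event indicator (1 = date of interest observed)
df = pd.DataFrame({"lag": [204, -30, 321, 150], "event": [1, 1, 1, 0]})

# exclude negative lags (search updated after submission), as in the study
df = df[df["lag"] >= 0]

kmf = KaplanMeierFitter()
kmf.fit(df["lag"], event_observed=df["event"])
print(kmf.median_survival_time_)                        # median lag (days)
print(median_survival_times(kmf.confidence_interval_))  # 95% CI of the median
```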

To evaluate survival at different times of the study, the log-rank, Gehan-Breslow and Tarone-Ware tests were used. The log-rank test compares the cumulative survival curves between the different categories of the same variable under the null hypothesis that the risk is the same in all strata, weighting all time points equally; it has higher statistical power for comparing curves at the beginning of the follow-up (ie, the first third of the graph). The Gehan-Breslow test weights time points by the number of cases at risk (ie, each case is most event sensitive at the beginning of the observation), which is useful for statistically evaluating the survival curves in the middle of the follow-up period (ie, the second third of the graph). The Tarone-Ware test weights time points by the square root of the number of cases at risk (ie, each case is event sensitive in the middle of the follow-up), which is useful for assessing the end of the follow-up period, represented by the last third of the graph.25 26 All these exploratory statistical analyses were conducted in IBM SPSS Statistics V.24.0 (IBM) and R using the RStudio V.1.2 interface (RStudio, Boston); p values below 0.05 were considered statistically significant.
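
The same three tests can be sketched in Python with lifelines, whose weighted log-rank options include 'wilcoxon' (corresponding to Gehan-Breslow) and 'tarone-ware'; the data are illustrative and the weightings argument requires a recent lifelines version:

```python
from lifelines.statistics import logrank_test

# illustrative lag times (days) and event indicators for two journal groups
lags_a, events_a = [204, 321, 150, 420], [1, 1, 1, 0]
lags_b, events_b = [190, 280, 365, 510], [1, 1, 0, 1]

for weighting, name in [(None, "log-rank"),
                        ("wilcoxon", "Gehan-Breslow"),
                        ("tarone-ware", "Tarone-Ware")]:
    result = logrank_test(lags_a, lags_b,
                          event_observed_A=events_a,
                          event_observed_B=events_b,
                          weightings=weighting)
    print(f"{name}: p={result.p_value:.3f}")
```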

Results

The systematic search yielded 4715 records, of which 1630 were fully appraised and 1245 were selected for final analyses (figure 1) (complete list of included studies is deposited at the Open Science Framework repository and available at DOI 10.17605/OSF.IO/MD3CU).

Figure 1

Flow chart of included network meta-analyses (NMA).

Studies were published between 2003 and 2020 in 505 different journals, with PLoS ONE (n=53), Oncotarget (n=36) and Medicine (n=31) as the three most productive journals. Half of the NMAs were published in 72 journals, and 289 journals published only one NMA. These journals are published in 26 different countries, with the USA (n=603; 48.4%) as the most productive country, followed by the UK (n=363; 29.2%). The median publication year was 2017, with 25% of NMAs published before 2015 and another 25% published since 2019. CiteScore and Impact Factor were obtained for 1198 (96.2%) and 1160 (93.2%) publications, respectively, with medians of 3.02 (IQR 2.22–4.07) and 3.274 (IQR 2.376–5.149). Only 351 articles (28.2%) were published in open access journals, the vast majority in APC journals (n=340/351; 96.9%) and the remaining 3.1% (n=11/351) in altruistic journals (see table 1).

Table 1

NMA characteristics

The date of the last systematic search was reported by 1134 NMAs (91.1%). The PubMed indexing date was recorded for all NMAs; however, submission, acceptance and online publication dates were submitted to PubMed by only 925 (74.3%), 973 (78.2%) and 1199 (96.3%) studies, respectively. MeSH terms were allocated to 802 (64.4%) articles.

The median time from search to submission was 191 days (IQR 84–370; minimum −339 and maximum 1358 days), which represents approximately 6.4 months, with the longest lag reaching 45.3 months. A total of 42 (5.0%) NMAs had their search updated after submission (eg, as requested by editors or reviewers during revision), thus presenting a negative lag time. The remaining 801 articles presented a median submission lag time of 204 days (IQR 95–381), which represents 6.8 months. Journal processing time, from the day of submission until the date of online publication, was approximately 157 days (5.2 months) (online publication lag time: 321 days (IQR 187–498), or around 10 months). After acceptance, articles took approximately 11 days to be indexed in PubMed. The indexing lag time from the last systematic search was 359 days (IQR 218–549; minimum 17 and maximum 1706), which represents approximately 12 months. The median cumulative time for MeSH allocation was 634 days (IQR 439–860; minimum 53 and maximum 2467), or approximately 21 months. All the lag times evaluated were non-normally distributed (SW, p<0.001).

The submission lag time presents an almost flat trend (Spearman’s rho=−0.072; p=0.034) (figure 2), while the acceptance (rho=−0.087; p=0.010), online publication (rho=−0.080; p=0.008) and indexing (rho=−0.080; p=0.007) lag times presented slightly decreasing historical trends. A modest decreasing historical trend was observed for the MeSH allocation lag time (rho=−0.167; p<0.001). Violin plots are provided in the online supplemental appendix 2 figures A1–A4.

Figure 2

Submission lag time (days) according to the historical trend (years).

The results of the survival analyses are presented in table 2. No significant differences in lag times were observed among journals according to their geographical origin (classified as US journals vs other countries) or Scopus CiteScore metrics. However, journals in the first quartile by Journal Impact Factor (ie, the top 25% of journals in the list) presented shorter lag times than the other groups for the MeSH allocation process (log-rank: p=0.023; Gehan-Breslow: p=0.006; Tarone-Ware: p=0.009). The comparison between open access and subscription journals showed negligible differences in acceptance and online publication lag times, with non-significant log-rank (p=0.388; p=0.548) and Tarone-Ware (p=0.076; p=0.115) tests but significant Gehan-Breslow tests (p=0.027; p=0.040). Very small differences of no practical importance were also obtained in indexing lag time between open access and subscription journals (log-rank: p=0.381; Gehan-Breslow: p=0.014; Tarone-Ware: p=0.056) and among the journal business models (ie, altruistic, APC and subscription) (log-rank: p=0.515; Gehan-Breslow: p=0.044; Tarone-Ware: p=0.130). No differences among subgroups were observed in any portion of the survival curves for the submission lag times (see online supplemental appendix 2 figures A5–A29 for Kaplan-Meier curves).

Table 2

Time-to-event analyses of NMA publication lag time (days)

Discussion

We evaluated the lag times in the publication process of 1245 systematic reviews with NMAs on drug interventions published in more than 500 different journals (2003–2020) and found that the median time from the authors’ last search to first online publication is approximately 12 months. Over the past years, slightly decreasing trends in lag times have been observed, with Journal Impact Factor influencing only the MeSH allocation process. Other characteristics of the journals (eg, geographical origin, open access status and business model category) had no influence on publication process lag times.

We found that approximately 10% of the authors did not report when the systematic searches of the review were conducted or updated. In the past, the limited space in printed journals was a major obstacle to fully reporting the study’s methods and results, which led to concise but often incomplete publications.27 However, journal space limitations are disappearing, and in-depth detailed descriptions of research methods and results can be reported through online supplemental files.27–29 Thus, the search date for any synthesis needs to be visible in the report metadata.

The prolonged publication process is a concern among researchers from all scientific areas. Powell reported that researchers are increasingly concerned about the time required to publish their work, especially when considering possible rejections, revisions and resubmissions.30 Studies have shown that the publication lag time (the time between submission and acceptance) is over 100 days,13 with times continuously increasing. For instance, the median review time has grown from 85 to 150 days at Nature and from 37 to 125 days for PLoS ONE.30 31 We found a negligible decrease in publication lag time, which may result from an increasing time devoted to peer reviews32 being offset by a decreasing online publication time due to the use of early-view and ahead-of-print systems.33 34

However, we found that an important part of the lag time in NMA publications is caused by the authors themselves. The median time from their last search to article submission was approximately 7 months, but for some studies this time reached almost 4 years (over 45 months). In systematic reviews with conventional meta-analyses, studies of the time between the authors’ last searches and article publication have reported medians of approximately 8–14 months,10 35 similar to those found in our study for NMAs. In evidence synthesis studies, the time from the publication of the primary studies to the NMA authors’ systematic searches should be added to calculate the total evidence dissemination delay. Previous studies have reported that the median time taken for the results of primary studies to be incorporated into a systematic review ranges from 2.5 to 6.5 years.36 37

Thus, to maximise the novelty of a review, an update of the search is recommended before submission for publication.32 This can be performed by rerunning the searches in all relevant databases days or weeks before submission. For research topics with high publication rates, search strategies should be updated regularly to allow authors to keep track of newly added studies. Additionally, as most NMAs include randomised controlled trials (RCTs) as the primary source of evidence, searches in trial registers should be encouraged, as they may reveal emerging data. The results of studies that are not yet complete (eg, in the pipeline) can also be added to an NMA as long as the authors provide this information in the manuscript.
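
Rerunning a search restricted to records added since the original search date can be scripted against PubMed’s public E-utilities; the sketch below uses the esearch endpoint with an Entrez-date window (the query term is a placeholder, not the strategy used in this study):

```python
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_new_records(term, since, until):
    """Return PMIDs indexed in PubMed between two dates (YYYY/MM/DD)."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "edat",  # restrict by Entrez (indexing) date
        "mindate": since,
        "maxdate": until,
        "retmax": 10000,
        "retmode": "json",
    }
    reply = requests.get(ESEARCH, params=params, timeout=30)
    reply.raise_for_status()
    return reply.json()["esearchresult"]["idlist"]

# eg, records indexed since an original search run on 1 May 2020
new_pmids = pubmed_new_records("network meta-analysis[tiab]",
                               "2020/05/01", "2021/01/15")
```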

We found that fewer than 5% of searches were updated after submission and before publication. A survival analysis of 100 quantitative systematic reviews on drug interventions demonstrated that failing to incorporate new evidence about the risks and benefits of treatments could substantively change a review’s conclusions: this occurred for at least one primary outcome in 25% of the systematic reviews within 2 years of publication, in approximately 15% within 1 year and in approximately 10% at the time of publication.3 11 16 For NMAs, it has been demonstrated that with an update frequency of 6 and 12 months, the median number of new trials to be included is 1 (IQR 0–1) and 2 (IQR 1–4), respectively.38

Updating a systematic review is generally more efficient than starting over when new evidence emerges, not least because it avoids the worldwide redundant production of studies on the same topic.39 A recent study showed that the workload associated with updating an NMA represents only approximately one-tenth of the initial workload.38 Some organisations, such as the Cochrane Collaboration, previously recommended updating systematic reviews every 2 years,40 although this interval may need to be shorter in fields with higher publication output. However, few of the estimated 2500 new English-language systematic reviews indexed annually in MEDLINE are reported as updates.36 41

Although the NMA technique is time consuming and resource intensive, new methods might enable developers to produce a knowledge base more rapidly8 and thus help to improve lag times. We acknowledge that the inclusion of one or more studies after updated NMA searches may require additional statistical comparisons and modifications to multiple sections of the article and its data presentation. However, several tools have been developed to facilitate this task and can potentially help automate or semiautomate the process. Recommendations on how to update search strategies for systematic reviews already exist and show that regular updates, even before article writing, help researchers to monitor the publication of new references. The Cochrane handbook mentions that re-executing the search can be performed by using the last date of the original search as the beginning date for the update.40 Reference management software can also facilitate this process. The literature suggests using two reference manager files: one containing the current results as downloaded from the complete set of databases, and another with the findings from the original search. By subtracting the records found in the original search from the current results (ie, the deduplication feature of the reference manager software), only records that were not previously screened will remain in the library (see the sketch below).32 Some empirical guidance on how to update reviews9 42 and further approaches such as the Grading of Recommendations, Assessment, Development and Evaluations (GRADE), the Ottawa method, the RAND method, statistical prediction tools and value of information analysis should be increasingly used to better disseminate NMA findings.41–43
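
The two-file subtraction described above amounts to a set difference on stable identifiers; a minimal sketch assuming both searches were exported with PMIDs (DOIs would work equally well):

```python
def newly_retrieved(original_ids, current_ids):
    """Records found by the updated search but not previously screened."""
    return set(current_ids) - set(original_ids)

original = {"31111111", "31222222", "31333333"}  # PMIDs, original search
current = {"31111111", "31222222", "31333333",   # PMIDs, updated search
           "32444444", "32555555"}
print(sorted(newly_retrieved(original, current)))  # only the new records
```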

Additionally, the evolution of technology enables the conduct of ‘live cumulative NMAs’ (ie, living meta-analyses), defined as a continual surveillance approach to the literature. This technique produces a global comparison of multiple treatments with ‘real-time updated online summaries’ to provide evidence to users, which can further facilitate informed research prioritisation, decision-making and evidence gap mapping. This ‘evidence synthesis ecosystem’ implies a continuous process built around a clinical question of interest, no longer a small team independently answering a specific clinical question at a single point in time.3 19 38 A recent example of this approach, developed as an international research initiative supported by the WHO and Cochrane in response to the pandemic caused by the SARS-CoV-2 virus, is the so-called ‘COVID-NMA’: a living mapping and living systematic review of COVID-19 trials to inform decision-making. To date, over 3000 trials registered on the WHO platform and more than 300 RCTs with complete data extraction and results on preventive interventions, treatments and vaccines for COVID-19 are available on this platform, which is updated every week (https://covid-nma.com/). These ‘living’ techniques should be further encouraged and supported by funders and other stakeholders.

The main implication of our findings is to draw the attention of authors and publishers to the long lag times between healthcare research and its translation into practice, especially considering the broader body of evidence synthesised by NMAs. Decision-makers are often faced with time-sensitive policy questions, so if NMAs are to be useful (eg, to ground health technology approvals with major impact on patients’ healthcare and economics), they need not only to answer relevant questions but also to be conducted within a time frame that is useful for decision-making processes. Thus, standards for the dates of systematic review searches or updates should be set for submission and publication. We recommend a minimum commitment from authors and editors: a last update of the systematic search less than 90 days before first submission (ie, submission lag <90 days), which represents half of the median lag time found in our study.

Our study has some limitations. We included only NMAs on drug interventions, so the results may not be generalisable to all types of NMAs. Searches in different databases and with other descriptors might yield slightly different results. Dates were evaluated only when provided by the articles and submitted by publishers to PubMed; the accuracy of our results depends on the accuracy of these metadata. As in almost all scientific research, missing data were present in around 10%–20% of variables. We tried to reduce the issues associated with missing data (eg, lower statistical power, biased estimates) by maximising the collection of dates using standardised forms completed by two researchers independently, with additional manual consultation of journals’ websites. We avoided statistical analyses based on single imputation or last observation carried forward, as these approaches risk bias and invalid conclusions; given the relatively small sample in our study and the low rate of missing data, multiple imputation was not used.44 45 The mechanisms and reasons explaining the missing data may vary from study to study. We used different statistical tests in our exploratory analyses, which raises the problem of multiple testing; however, this would at most overestimate subgroup differences that we already considered negligible. We did not perform multivariate analyses; however, the non-significant results obtained in the univariate analyses reduce the need for additional multivariate analyses to assess confounding. We opted to use the DOAJ to classify journals according to their business model because this database covers more than 14 400 journals and has become a standard for open access classification (used by many other bibliographic databases such as Scopus and Web of Science); however, other classifications may yield different results. For the MeSH allocation lag time analyses, aiming to compare like with like, only articles with allocated MeSH terms were included (ie, no censoring), which could introduce a bias for journals with a slower MeSH allocation process; however, the impact of this bias is low considering a median process time of only around 250 days over a 13-year period. The analyses reported here were performed on a data set extracted in May 2020; the final article was submitted in January 2021. The editorial process was longer than expected; however, as this is a meta-research study, updating the findings is not a major requirement.

Conclusions

Publishing an NMA takes more than 6 months after the authors submit the article, a period that adds to the more than 6 months by which authors delay their submission after completing the literature searches. Efforts by both authors and publishers to reduce the time spent producing systematic reviews and NMAs can contribute to the more rapid and accurate production of healthcare evidence, reducing the gap between research evidence and healthcare practice. We suggest a minimum commitment from authors: a last update of the systematic search less than 90 days before first submission. Peer reviewers and editors should ensure authors’ compliance with NMA standards, including requiring the search date in the report metadata. Living NMAs should be further encouraged and supported by funders and other stakeholders.


Ethics statements

Patient consent for publication


Supplementary materials

  • Supplementary Data

    This web only file has been produced by the BMJ Publishing Group from an electronic file supplied by the author(s) and has not been edited for content.

Footnotes

  • Contributors FST: conceptualisation; formal analysis; investigation; methodology; writing—original draft; writing—review and editing. AGA, VLF: data curation; validation; writing—review and editing. MMF: methodology; investigation; writing—review and editing. RP: conceptualisation; project administration; supervision; writing—review and editing. FF-L: conceptualisation; formal analysis; methodology; writing—original draft; writing—review and editing.

  • Funding This work was supported in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior–Brasil (CAPES), Finance Code 001.

  • Disclaimer The funding sources had no role in the study design, data collection, data analyses, data interpretation or writing of the report.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.