Introduction Therapeutic options for type 2 diabetes mellitus (T2DM) have expanded over the last decade with the emergence of novel cardioprotective agents, but comparable data are lacking for older drugs, leaving a critical gap in our understanding of the relative effects of T2DM agents on cardiovascular risk.
Methods and analysis The large-scale evidence generation and evaluation across a network of databases for T2DM (LEGEND-T2DM) initiative is a series of systematic, large-scale, multinational, real-world comparative cardiovascular effectiveness and safety studies of all four major second-line anti-hyperglycaemic agent classes: sodium–glucose co-transporter-2 inhibitors, glucagon-like peptide-1 receptor agonists, dipeptidyl peptidase-4 inhibitors and sulfonylureas. LEGEND-T2DM will leverage the Observational Health Data Sciences and Informatics (OHDSI) community, which provides access to a global network of administrative claims and electronic health record data sources representing 190 million patients in the USA and about 50 million internationally. LEGEND-T2DM will identify all adult patients with T2DM who newly initiate a traditionally second-line T2DM agent. Using an active-comparator, new-user cohort design, LEGEND-T2DM will execute all pairwise class-versus-class and drug-versus-drug comparisons in each data source, producing extensive study diagnostics that assess reliability and generalisability through cohort balance and equipoise to examine the relative risk of cardiovascular and safety outcomes. The primary cardiovascular outcomes include a composite of major adverse cardiovascular events and a series of safety outcomes. The study will pursue data-driven, large-scale propensity adjustment for measured confounding and a large set of negative control outcome experiments to address unmeasured and systematic bias.
Ethics and dissemination The study ensures data safety through a federated analytic approach and follows research best practices, including prespecification and full disclosure of results. LEGEND-T2DM is dedicated to open science and transparency and will publicly share all analytic code from reproducible cohort definitions through turn-key software, enabling other research groups to leverage our methods, data and results to verify and extend our findings.
This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.
Strengths and limitations of this study
The proposal seeks to use health information encompassing millions of patients with type 2 diabetes mellitus (T2DM) in the multinational Observational Health Data Sciences and Informatics (OHDSI) community to determine the real-world comparative effectiveness and safety of traditionally second-line T2DM agents.
The proposed set of studies will be comprehensive, with systematic pairwise comparisons of all sodium–glucose co-transporter-2 inhibitor, glucagon-like peptide-1 receptor agonist, dipeptidyl peptidase-4 inhibitor and sulfonylurea agents at the drug, class and population subgroup levels.
The studies will focus on a broad set of outcomes, including comprehensive measures of adverse cardiovascular events as well as secondary effectiveness and safety outcomes.
The studies use robust methods: an observational, active-comparator, new-user cohort design with a systematic framework that addresses residual confounding, publication bias and p-hacking through data-driven, large-scale propensity adjustment for measured confounding; a large set of negative control outcome experiments to address unmeasured and systematic bias; and prespecification and full disclosure of hypotheses tested and their results. These approaches capitalise on mature OHDSI open-source resources and a large body of clinical and quantitative research that the LEGEND-T2DM investigators originated and continue to drive.
The study will focus on treatment effectiveness and safety without the ability to systematically track adherence to individual agents across cohorts.
Rationale and background
The landscape of therapeutic options for type 2 diabetes mellitus (T2DM) has been dramatically transformed over the last decade.1 The emergence of drugs targeting the sodium–glucose co-transporter-2 (SGLT2) and the glucagon-like peptide-1 (GLP1) receptor has expanded the role of T2DM agents from lowering blood glucose to directly reducing cardiovascular risk.2 A series of large randomised clinical trials designed to evaluate the cardiovascular safety of SGLT2 inhibitors (SGLT2Is) and GLP1 receptor agonists (GLP1RAs) found that use of many of these agents led to a reduction in major adverse cardiovascular events, including myocardial infarction, hospitalisation for heart failure and cardiovascular mortality.3–6 However, other T2DM drugs widely used before the introduction of these novel agents, such as sulfonylureas, did not undergo similarly comprehensive trials to evaluate their cardiovascular efficacy or safety. Moreover, direct comparisons of newer agents with dipeptidyl peptidase-4 (DPP4) inhibitors (DPP4Is), with neutral effects on major cardiovascular outcomes,7–10 have not been conducted. Nevertheless, DPP4Is and sulfonylureas continue to be used in clinical practice and are recommended as second-line T2DM agents in national clinical practice guidelines.
Several challenges remain in formulating T2DM treatment recommendations based on existing evidence.11 First, trials of novel agents did not pursue head-to-head comparisons to older agents and were instead designed as additive treatments on the background of commonly used T2DM agents. Therefore, the relative cardiovascular efficacy and safety of novel agents compared with older agents is not known, and indirect estimates have relied on summary-level data restricted to common comparators12–14 and are less reliable.15 16 Second, trials of novel agents have tested individual drugs against placebo but have not directly compared SGLT2Is with GLP1RAs in reducing adverse cardiovascular event risk. Moreover, there is no evidence to guide the use of individual drugs within each class and across different drug classes, particularly among patients at lower cardiovascular risk than recruited in clinical trials. Third, randomised trials focused on cardiovascular efficacy and safety but were not powered to adequately assess the safety of these agents across a spectrum of non-cardiovascular outcomes. Finally, restricted enrolment across regions, and subgroups of age, sex and race further limits the efficacy and safety assessment that may guide individual patients’ treatment.
Evidence gaps from these trials also pose a challenge in designing treatment algorithms, which rely on comparative effectiveness and safety of drugs. Perhaps, as a result, there is large variation in clinical practice guidelines and in clinical practice with regard to these medications, with many patients initiated on the newer therapies and many others treated with older regimens.17–21 Among the second-line options, there is much variation with respect to the order of drugs used. This lack of consensus about the best approach provides an opportunity for systematic, large-scale observational studies.
To inform critical decisions facing patients with diabetes, their caregivers, clinicians, policy-makers and healthcare system leaders, we have launched the large-scale evidence generation and evaluation across a network of databases for T2DM (LEGEND-T2DM) initiative to execute a series of comprehensive observational studies to compare cardiovascular outcome rates and safety of second-line T2DM glucose-lowering agents. Specifically, these studies aim:
To determine, through systematic evaluation, the comparative effectiveness of traditionally second-line T2DM agents, SGLT2Is and GLP1RAs, with each other and with DPP4Is and sulfonylureas, for cardiovascular outcomes.
To determine, through systematic evaluation, the comparative safety of traditionally second-line T2DM agents among patients with T2DM.
To assess heterogeneity in effectiveness and safety of traditionally second-line T2DM agents among key patient subgroups: using stratified patient cohorts, we will quantify differential effectiveness and safety across subgroups of patients based on age, sex, race, renal impairment and baseline cardiovascular risk.
LEGEND-T2DM will execute three systematic, large-scale observational studies of second-line T2DM agents to estimate the relative risks of cardiovascular effectiveness and safety outcomes.
The Class-versus-class study will provide all pairwise comparisons between the four major T2DM agent classes to evaluate their comparative effects on cardiovascular risk (Objective 1) and patient-centred safety outcomes (Objective 2).
The drug-versus-drug study will furnish head-to-head pairwise comparisons between individual agents within and across classes (both Objectives 1 and 2).
The heterogeneity study will refine these comparisons for patients with T2DM for important subgroups (Objective 3). In contrast to a single comparison approach, LEGEND-T2DM will provide a comprehensive view of the findings and their consistency across populations, drugs and outcomes. We will model each study on our successful collaborative research evaluating the comparative effectiveness of antihypertensives recently published in The Lancet.22
Table 1 lists the four major T2DM agent classes and the individual agents licensed in the USA within each class. We will examine all class-wise comparisons and all ingredient-wise comparisons. For each comparison, we are interested in the relative risk of each of the cardiovascular and safety outcomes described in the Outcomes section.
For each study, we will employ an active-comparator, new-user cohort design.23–25 The new-user cohort design is advocated as the primary design to be considered for comparative effectiveness and drug safety studies.26–28 By identifying patients who start a new treatment course and using therapy initiation as the start of follow-up, the new-user design models a randomised controlled trial (RCT), where treatment commences at the index study visit. Exploiting such an index date allows a clear separation of baseline patient characteristics that occur prior to the index date and are usable as covariates in the analysis, without concern of inadvertently introducing mediator variables that arise between exposure and outcome.29 Excluding prevalent users, defined as those without a sufficient washout period prior to their first exposure occurrence, further reduces bias due to balancing mediators on the causal pathway, time-varying hazards and depletion of susceptibles.28 30 Our systematic framework across studies will further address residual confounding, publication bias and p-hacking using data-driven, large-scale propensity adjustment for measured confounding,31 a large set of negative control outcome experiments to address unmeasured and systematic bias32–34 and full disclosure of hypotheses tested.35 Figure 1 illustrates our design for all studies, which the following sections describe in more detail.
We will execute LEGEND-T2DM as a series of OHDSI network studies. All data partners within OHDSI are encouraged to participate voluntarily and can do so conveniently, because of the community's shared Observational Medical Outcomes Partnership (OMOP) common data model (CDM) and OHDSI tool stack. Many OHDSI community data partners have already committed to participate, and we will recruit further data partners through OHDSI's standard recruitment process, which includes protocol publication on OHDSI's GitHub, an announcement in OHDSI's research forum, presentation at the weekly OHDSI all-hands meeting and direct requests to data holders.
Table 2 lists the 13 already committed data sources for LEGEND-T2DM; these sources encompass a large variety of practice types and populations. For each data source, we report a brief description, the size of the population it represents and its patient capture process and start date. While the earliest patient capture begins in 1989 (Columbia University Irving Medical Center, CUIMC), the vast majority of records come from the mid-2000s to today, providing almost two decades of T2DM treatment coverage. US populations include those commercially and publicly insured, enriched for older individuals (MDCR, VA), lower socioeconomic status (MDCD) and racial diversity (VA >20% Black or African American, CUIMC 8%). The US data sources may capture the same patients across multiple sources. Different views of the same patients are an advantage in capturing the diversity of real-world health events that patients experience. Across Commercial Claims and Encounters (CCAE; commercially insured), MDCR (Medicare) and MDCD (Medicaid), we expect little overlap in terms of the same observations recorded at the same time for a patient; patients can flow between sources (eg, a CCAE patient who retires can opt in to become an MDCR patient), but the enrolment time periods stand distinct. On the other hand, Optum, PanTher, OpenClaims, CUIMC and Yale New Haven Health System may overlap in time with the other US data sources. While it remains against licensing agreements to attempt to link patients between most data sources, Optum reports <20% overlap between their claims and electronic health record (EHR) data sources, which is reassuringly small. All data sources will receive institutional review board approval or exemption for their participation before executing LEGEND-T2DM.
We will include all subjects in a data source who meet inclusion criteria for one or more traditionally second-line T2DM agent exposure cohorts. Broadly, these cohorts will consist of patients with T2DM either with or without prior metformin monotherapy who initiate treatment with one of the 22 drug ingredients that comprise the DPP4I, GLP1RA, SGLT2I and sulfonylurea drug classes (table 1). We do not consider thiazolidinediones, given their known association with a risk of heart failure and bladder cancer.36 37 We describe specific definitions for exposure cohorts for each study in the following sections.
Class-versus-class study comparisons
The class-versus-class study will construct four exposure cohorts for new users of any drug ingredient within the four traditionally second-line drug classes in table 1. Cohort entry (index date) for each patient is their first observed exposure to any drug ingredient for the four second-line drug classes. Consistent with an idealised target trial for T2DM therapy and cardiovascular risk,38 39 inclusion criteria for patients based on the index date will include:
T2DM diagnosis and no type 1 or secondary diabetes mellitus diagnosis before the index date;
At least 1 year of observation time before the index date (to improve new-user sensitivity).
No prior drug exposure to a comparator second-line or other antihyperglycaemic agent (ie, thiazolidinediones, acarbose, acetohexamide, bromocriptine, glibornuride, miglitol and nateglinide) or >30 days insulin exposure before index date.
We will separately construct and compare cohorts of patients with either:
At least 3 months of metformin use before the index date.
No prior metformin use before the index date.
In the first case, 3 months of metformin is consistent with ADA guidelines.40 In the second case, we are interested in relative effectiveness and safety of these traditionally second-line agents in patients who initiate their treatments without first using metformin. We purposefully do not automatically exclude or restrict to patients with a history of myocardial infarction, stroke or other major cardiovascular events, which will allow us to report relative effectiveness and safety for individuals with both low or moderate and high cardiovascular risk. Likewise, we do not automatically exclude or restrict to individuals with severe renal impairment.41 We will use cohort diagnostics, such as achieving covariate balance and clinical empirical equipoise between exposure cohorts (see the Sample size and study power section) and stakeholder input to guide the possible need to exclude other prior diagnoses, such as congestive heart failure, pancreatitis or cancer.41
Online supplemental appendix A.1 reports the complete OHDSI ATLAS cohort description for new users of DPP4 inhibitors with prior metformin use. This description lists the complete specification of cohort entry events, additional inclusion criteria, cohort exit events and all associated standard OMOP CDM concept code sets used in the definition. We generate programmatically equivalent cohort definitions for new users of each drug class with and without prior metformin use. ATLAS then automatically translates these definitions into network-deployable SQL source code. Online supplemental appendix A.2 lists the inclusion criteria modifier for no prior metformin use.
Of note, the inclusion criteria do not directly incorporate quantitative measures of poor glycaemic control, such as one or more elevated serum haemoglobin A1c (HbA1c) measurements; such laboratory values are irregularly captured in large claims and even EHR data sources. Older ADA guidelines (but not those since 2020 for patients with cardiovascular disease (CVD)42) advise escalating to a second-line agent only when glycaemic control is not met with metformin monotherapy, closely mirroring our cohort design for our historical data. We will conduct sensitivity analyses involving available HbA1c measurements to demonstrate their balance between exposure cohorts (described later in the Sample size and study power section). In the unlikely event that balance is not met, we will consider an inclusion criterion of at least two HbA1c measurements ≥7% within 6 months before the index date.39 We will also conduct sensitivity analyses to assess prior insulin use exclusions, bearing in mind difficulties in assessing insulin use end dates.
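The fallback HbA1c criterion can be pictured as a simple check. The following is a hedged Python sketch over an illustrative record format; the function name and tuple layout are assumptions, not the actual OMOP CDM query that ATLAS would generate:

```python
# Hypothetical sketch of the fallback inclusion rule: at least two HbA1c
# measurements >= 7% within the ~6 months before the index date.
# Input format is illustrative, not an OMOP CDM query.
from datetime import date, timedelta

def meets_hba1c_criterion(measurements, index_date):
    """measurements: iterable of (measurement_date, hba1c_percent)."""
    window_start = index_date - timedelta(days=183)  # ~6 months
    qualifying = [v for d, v in measurements
                  if window_start <= d < index_date and v >= 7.0]
    return len(qualifying) >= 2

measurements = [
    (date(2019, 11, 1), 7.4),
    (date(2020, 1, 15), 8.1),
    (date(2019, 2, 1), 9.0),   # outside the 6-month window
]
print(meets_hba1c_criterion(measurements, date(2020, 3, 1)))  # True
```

In the real study, the equivalent logic would be expressed as an ATLAS inclusion rule and translated to SQL against measurement tables.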
For each data source, we will then execute all pairwise class comparisons for which the data source yields ≥1000 patients in each arm. Significantly fewer patients strongly suggest data source-specific differences in prescribing practices that may introduce residual bias, and sufficient sample sizes are required to construct effective propensity score (PS) models.43
Drug-versus-drug study comparisons
The drug-versus-drug study will construct 2×22 exposure cohorts for new users of each drug ingredient in table 1. We will apply the same cohort definition, inclusion criteria and patient count minimum as described in the Class-versus-class study comparisons section.
For each data source, we will then execute all pairwise drug comparisons. While we will publicly report study results for all pairwise comparisons, we will focus primary clinical interpretation and scientific publishing on the within-class comparisons (within DPP4Is, within GLP1RAs, within SGLT2Is and within SUs) that pit drugs of the same class against each other, as well as on across-class comparisons that stakeholders deem pertinent given their experiences.
Online supplemental appendix A.3 reports the complete OHDSI ATLAS cohort description for new users of alogliptin with prior metformin use. Again, we programmatically construct all new-user drug-level cohorts and automatically translate them into SQL.
Heterogeneity study comparisons
The heterogeneity study will further stratify all 237 class-level and drug-level exposure cohorts in the Class-versus-class study comparisons section and the Drug-versus-drug study comparisons section by clinically important patient characteristics that modify cardiovascular risk or relative treatment heterogeneity to provide patient-focused treatment recommendations. These factors will include:
Age (18–44 years/45–64 years/≥65 years at the index date).
Sex.
Race (African American or black).
Cardiovascular risk (low or moderate/high, defined by established CVD at the index date).
Renal impairment (at the index date).
We will define patients at high cardiovascular risk as those who fulfil at index date an established CVD definition that has been previously developed and validated for risk stratification among new users of second-line T2DM agents.44 Under this definition, established CVD means having at least one diagnosis code for a condition indicating CVD, such as atherosclerotic vascular disease, cerebrovascular disease, ischaemic heart disease or peripheral vascular disease, or having undergone at least one procedure indicating CVD, such as percutaneous coronary intervention, coronary artery bypass graft or revascularisation, any time on or prior to the exposure start. Likewise, we will define renal impairment through diagnosis codes for chronic kidney disease and end-stage renal disease, dialysis procedures and laboratory measurements of estimated glomerular filtration rate, serum creatinine and urine albumin.
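The established-CVD subgroup rule reduces to a membership test against diagnosis and procedure code sets anchored at the exposure start. The sketch below is a hedged Python illustration; the concept IDs, record format and function name are placeholders, not the actual OMOP code sets listed in the protocol appendix:

```python
# Hypothetical sketch of the established-CVD subgroup rule: a patient is
# high cardiovascular risk if any qualifying diagnosis or procedure
# appears on or before the exposure start date.
# Concept IDs below are placeholders, not the protocol's real code sets.
from datetime import date

CVD_CONDITION_CONCEPTS = {4329847, 381591}   # placeholder OMOP concept IDs
CVD_PROCEDURE_CONCEPTS = {4336464}           # placeholder OMOP concept IDs

def has_established_cvd(records, index_date):
    """records: iterable of (domain, concept_id, event_date) tuples."""
    for domain, concept_id, event_date in records:
        if event_date > index_date:
            continue  # only events on or before exposure start qualify
        if domain == "condition" and concept_id in CVD_CONDITION_CONCEPTS:
            return True
        if domain == "procedure" and concept_id in CVD_PROCEDURE_CONCEPTS:
            return True
    return False

records = [
    ("condition", 4329847, date(2015, 3, 1)),  # qualifying diagnosis
    ("procedure", 9999999, date(2016, 1, 1)),  # non-qualifying code
]
print(has_established_cvd(records, date(2016, 6, 1)))  # True
print(has_established_cvd(records, date(2015, 1, 1)))  # False
```

The renal impairment subgroup would follow the same pattern, with the addition of laboratory-value thresholds where measurements are available.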
Online supplemental appendix A.4 presents complete OHDSI ATLAS specifications for these subgroups, including all standard OMOP CDM concept codes defining cardiovascular risk and renal disease.
We will validate exposure cohorts and aggregate drug utilisation using comprehensive cohort characterisation tools against both claims and EHR data sources. Chief among these tools is OHDSI's CohortDiagnostics package, available on GitHub. For any cohort and data source mapped to the OMOP CDM, this package systematically generates new-user incidence rates (stratified by age, gender and calendar year), cohort characteristics (all comorbidities, drug use, procedures and health service utilisation) and the actual codes found in the data that trigger the various rules in the cohort definitions. This allows researchers and stakeholders to understand the heterogeneity of source coding for exposures and health outcomes as well as the impact of various inclusion criteria on overall cohort counts (details described in the Sample size and study power section).
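The stratified incidence output that such tooling produces is, at heart, an events-over-person-time summary per stratum. A minimal Python sketch with made-up rows (the real package computes this from OMOP CDM tables in R; the row format here is an assumption):

```python
# Minimal sketch of a stratified incidence summary: new-user counts per
# (age group, gender, calendar year) stratum divided by person-years at
# risk. Rows are illustrative, not real data.
from collections import defaultdict

rows = [  # (age_group, gender, index_year, person_years, is_new_user)
    ("45-64", "F", 2018, 1.0, True),
    ("45-64", "F", 2018, 0.5, False),
    ("65+",   "M", 2019, 2.0, True),
]

def incidence_by_stratum(rows):
    events = defaultdict(int)
    time = defaultdict(float)
    for age, gender, year, py, new_user in rows:
        key = (age, gender, year)
        time[key] += py
        events[key] += int(new_user)
    # incidence rate per 1000 person-years within each stratum
    return {k: 1000.0 * events[k] / time[k] for k in time}

rates = incidence_by_stratum(rows)
print(sorted(rates))
```

Comparing such summaries across data sources is what makes between-source coding heterogeneity visible before any comparative analysis is run.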
Across all data sources and pairwise exposure cohorts, we will assess relative risks of 32 cardiovascular and patient-centred outcomes (table 3). Primary outcomes of interest are:
Three-point major adverse cardiovascular events (MACE), including acute myocardial infarction, stroke and sudden cardiac death.
Four-point MACE that additionally includes heart failure hospitalisation.
Secondary outcomes include:
Individual MACE components.
Acute renal failure.
In data sources with laboratory measurements, secondary outcomes further include:
Measured renal dysfunction.
We will also study second-line T2DM drug side-effects and safety concerns highlighted in the 2018 ADA guidelines40 and from RCTs, including:
Abnormal weight change.
We will employ the same level of systematic rigour in studying outcomes regardless of their primary or secondary label (online supplemental appendix B).
A majority of outcome definitions have been previously implemented and validated in our own work,22 44–48 based heavily on prior development by others (see references in table 3).44–101 To assess across-source consistency and general clinical validity, we will characterise outcome incidence, stratified by age, sex and index year, for each data source.
Contemporary utilisation of drug classes and individual agents
For all cohorts in the three studies, we will describe overall utilisation as well as temporal trends in the use of each drug class and agents within the class. Furthermore, we will evaluate these trends in patient groups by age (18–44 years/45–64 years/≥65 years), gender, race and geographic regions. Since the emergence of novel medications in the management of T2DM in 2014, there has been a rapid expansion in both the number of drug classes and individual agents. These data will provide insight into the current patterns of use and possible disparities. These data are critical to guide the real-world application of treatment decision pathways for the treatment of patients with T2DM.
Specifically, we will calculate and validate aggregate drug utilisation using OHDSI's CohortDiagnostics package against both claims and EHR data sources. The CohortDiagnostics package works in two steps: (1) generate the utilisation results and diagnostics against a data source and (2) explore the generated utilisation and diagnostics in a user-friendly graphical interface (R Shiny app). Through the interface, one can explore patient profiles of a random sample of subjects in a cohort. These diagnostics provide a consistent methodology to evaluate cohort definitions and phenotype algorithms across a variety of observational databases. This will enable researchers and stakeholders to judge the appropriateness of including specific data sources within analyses, exposing potential risks related to heterogeneity and variability in patient care delivery that, when not addressed in the design, could result in errors such as highly correlated covariates in PS matching of a target and a comparator cohort. Thus, the added value of this approach is twofold: exposing data quality issues for a study question and ensuring face validity checks are performed on proposed covariates to be used for balancing PSs.
Relative risk of cardiovascular and patient-centred outcomes
For all three studies, we will execute a systematic process to estimate the relative risk of cardiovascular and patient-centred outcomes between new users of second-line T2DM agents. The process will adjust for measured confounding, control for further residual (unmeasured) bias and accommodate important design choices to best emulate the nearly-impossible-to-execute idealised RCT that our stakeholders envision, across data source populations, comparators, outcomes and subgroups.
To adjust for potential measured confounding and improve the balance between cohorts, we will build large-scale PS models102 for each pairwise comparison and data source using a consistent data-driven process through regularised regression.31 This process engineers a large set of predefined baseline patient characteristics, including age, gender, race, index month/year and other demographics, as well as prior conditions, drug exposures, procedures, laboratory measurements and health service utilisation behaviours, to provide the most accurate prediction of treatment and balance patient cohorts across many characteristics. Condition, drug, procedure and observation covariates include occurrences within 365 days, 180 days and 30 days prior to the index date and are aggregated at several SNOMED (conditions) and ingredient/ATC class (drugs) levels. Other baseline measures include comorbidity risk scores (Charlson, DCSI (diabetes complications severity index), CHADS2 (congestive heart failure, hypertension, age, diabetes and stroke 2) and CHA2DS2-VASc (CHADS2 plus vascular disease history)). In prior work, feature counts have ranged in the 1000s–10 000s, and these large-scale PS models have outperformed high-dimensional PS (hdPS)103 in simulation and real-world examples.31 Given the subcutaneous route of administration of GLP1RAs compared with other drugs administered orally, device codes that represent needles and associated health management encounters will be excluded from PS construction.
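As a rough illustration of the large-scale PS approach, the sketch below fits an L1-penalised logistic regression over many binary presence/absence covariates using scikit-learn on synthetic data. This is an assumption-laden stand-in: the actual LEGEND analyses use OHDSI's R-based tool stack and its regularised regression implementation, and real feature counts run into the thousands:

```python
# Illustrative sketch (not the LEGEND-T2DM pipeline): a propensity model
# fit by L1-regularised logistic regression over binary covariates that
# mark the presence/absence of health records. Data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 2000, 200                        # patients, baseline characteristics
X = rng.binomial(1, 0.1, size=(n, p))   # presence/absence indicators
beta = np.zeros(p)
beta[:5] = 1.0                          # only a few covariates drive treatment
logit = X @ beta - 0.5
treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The L1 penalty shrinks most coefficients to zero, handling the very
# high-dimensional covariate space without manual feature selection.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, treat)
ps = model.predict_proba(X)[:, 1]       # propensity scores in [0, 1]
print(ps.shape)
```

In the study itself, the fitted scores then feed the stratification or variable-ratio matching step described next.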
We will then:
Exclude patients who have experienced the outcome prior to their index date.
Stratify and variable-ratio match patients by PS.
Use Cox proportional hazards models to estimate HRs between alternative target and comparator treatments for the risk of each outcome in each data source.
In addition, we will perform a sensitivity analysis that does not exclude individuals who previously experienced a glycaemic control outcome before the index date. The regression will condition on the PS strata/matching unit with treatment allocation as the sole explanatory variable and censor patients at the end of their time-at-risk (TAR) or data source observation period. We will prefer stratification over matching if both sufficiently balance patients (see the Sample size and study power section), as the former optimises patient inclusion and thus generalisability.
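The rationale for stratification can be seen in a small synthetic example (not study code, and deliberately simplified): grouping patients into PS quintiles sharply shrinks the standardised mean difference of a treatment-associated characteristic between arms, compared with the unstratified population:

```python
# Synthetic sketch of PS stratification: overall, the PS (a stand-in for
# any confounded characteristic) is imbalanced between arms; within PS
# quintiles, balance improves markedly.
import numpy as np

rng = np.random.default_rng(1)
ps = rng.uniform(size=1000)
treat = rng.binomial(1, ps)                  # treatment depends on PS
edges = np.quantile(ps, [0.2, 0.4, 0.6, 0.8])
stratum = np.searchsorted(edges, ps)         # quintile index 0..4

def smd(x, t):
    """Standardised mean difference of covariate x between arms."""
    m1, m0 = x[t == 1].mean(), x[t == 0].mean()
    s = np.sqrt((x[t == 1].var() + x[t == 0].var()) / 2)
    return (m1 - m0) / s

overall = abs(smd(ps, treat))
# mean absolute SMD within strata that contain both arms
within = np.mean([abs(smd(ps[stratum == k], treat[stratum == k]))
                  for k in range(5)
                  if 0 < treat[stratum == k].sum() < (stratum == k).sum()])
print(overall > within)  # stratification improves balance
```

In the actual analyses, this balance check runs over every baseline covariate, and the Cox regression is then conditioned on the stratum (or matched set) rather than adjusting for covariates directly.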
We will execute each comparison using three different TAR definitions, reflecting different and important causal contrasts:
Intent to treat (TAR: index +1 → end of observation) captures both direct treatment effects and (long-term) behavioural/treatment changes that initial assignment triggers.104
On-treatment 1 (TAR: index +1 → treatment discontinuation) is more patient centred105 and captures direct treatment effect while allowing for escalation with additional T2DM agents.
On-treatment 2 (TAR: index +1 → discontinuation or escalation with T2DM agents) carries the least possible confounding with other concurrent T2DM agents.
Our ‘on-treatment’ is often called ‘per-protocol’.106 Systematically executing with multiple causal contrasts enables us to identify potential biases that missing prescription data, treatment escalation and behavioural changes introduce, while preserving the ease of intent-to-treat interpretation and power if the data demonstrate them as unbiased. Online supplemental appendix A.5 reports the modified cohort exit rule for the on-treatment-2 TAR.
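The three TAR definitions above amount to choosing a censoring date. A hedged Python sketch of that date arithmetic (field names and the function are illustrative; the real cohort exit rules live in the ATLAS definitions):

```python
# Sketch of the three time-at-risk (TAR) windows as date arithmetic.
# Names are illustrative, not the ATLAS cohort exit specification.
from datetime import date, timedelta

def tar_window(index_date, obs_end, discontinuation=None, escalation=None,
               contrast="intent-to-treat"):
    """Return (start, end) of the TAR window for a given causal contrast."""
    start = index_date + timedelta(days=1)   # TAR begins at index + 1
    ends = [obs_end]                         # always censor at end of observation
    if contrast in ("on-treatment-1", "on-treatment-2") and discontinuation:
        ends.append(discontinuation)         # censor at treatment discontinuation
    if contrast == "on-treatment-2" and escalation:
        ends.append(escalation)              # also censor at T2DM escalation
    return start, min(ends)

idx, obs = date(2018, 1, 1), date(2020, 1, 1)
stop, esc = date(2019, 6, 1), date(2018, 9, 1)
print(tar_window(idx, obs, contrast="intent-to-treat")[1])   # 2020-01-01
print(tar_window(idx, obs, stop, esc, "on-treatment-2")[1])  # 2018-09-01
```

Running all three contrasts on the same cohorts is what lets discrepancies between them flag missing prescription data or behaviour change after initiation.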
We will aggregate HR estimates across non-overlapping data sources to produce meta-analytic estimates using a random-effects meta-analysis.107 This classic meta-analysis assumes that per-data source likelihoods are approximately normally distributed.108 This assumption fails when outcomes are rare, as we expect for some safety events. Here, our recent research shows that as the number of data sources increases, the non-normality effect grows to the point where coverage of 95% CIs can be as low as 5%. To counter this, we will also apply a Bayesian meta-analysis model109 110 that neither assumes normality nor requires patient-level data sharing, building on composite likelihood methods,111 and enables us to introduce appropriate overlap weights between data sources.
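For the classic random-effects step, a minimal DerSimonian–Laird pooling of per-source log-HR estimates looks as follows. This is a simplified stand-in under the normality assumption the text describes, not the study's actual implementation (which additionally uses the Bayesian model for rare outcomes); the input numbers are invented:

```python
# Minimal DerSimonian-Laird random-effects pooling of per-source log-HRs.
# Inputs are illustrative, not study estimates.
import math

def dersimonian_laird(log_hrs, ses):
    w = [1 / s**2 for s in ses]                    # inverse-variance weights
    k = len(log_hrs)
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)             # between-source variance
    w_re = [1 / (s**2 + tau2) for s in ses]        # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_hrs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2

log_hrs = [math.log(0.85), math.log(0.9), math.log(0.7)]
ses = [0.10, 0.15, 0.20]
pooled, se, tau2 = dersimonian_laird(log_hrs, ses)
print(round(math.exp(pooled), 2))  # pooled HR on the original scale
```

When per-source likelihoods are markedly non-normal, as with rare safety events, this normal-approximation pooling is exactly what breaks down, motivating the Bayesian alternative.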
Residual study bias from unmeasured and systematic sources often remains in observational studies even after controlling for measured confounding through PS adjustment.32 33 For each comparison-outcome effect, we will conduct negative control (falsification) outcome experiments, where the null hypothesis of no effect is believed to be true, using approximately 100 controls. We identified these controls through a data-rich algorithm112 that flags prevalent OMOP condition concept occurrences lacking evidence of association with the exposures in published literature, drug–product labelling and spontaneous reports; candidate controls were then adjudicated by clinical review. We previously validated 60 of the controls in LEGEND for Hypertension (LEGEND-HTN).22 Online supplemental appendix C lists these negative controls and their OMOP condition concept IDs.
Using the empirical null distributions from these experiments, we will calibrate each study effect HR estimate, its 95% CI and the p value to reject the null hypothesis of no differential effect.34 We will declare an HR as significantly different from no effect when its calibrated p <0.05 without correcting for multiple testing. Finally, blinded to all trial results, study investigators will evaluate study diagnostics for all comparisons to assess if they were likely to yield unbiased estimates (see the Sample size and study power section).
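The calibration step can be sketched in simplified form: estimate an empirical null from the spread of negative-control log-HRs, then recompute the p value of a study estimate against that null. This hedged sketch ignores each control's own sampling error, which the real OHDSI empirical calibration method accounts for; all numbers are invented:

```python
# Simplified empirical-null p value calibration: fit a normal null to
# negative-control log-HRs, then test a study estimate against it.
# Real analyses also model per-control sampling error; numbers invented.
import math

def calibrated_p(log_hr, neg_control_log_hrs):
    k = len(neg_control_log_hrs)
    mu = sum(neg_control_log_hrs) / k
    sd = math.sqrt(sum((x - mu) ** 2 for x in neg_control_log_hrs) / (k - 1))
    z = (log_hr - mu) / sd
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return 2 * (1 - phi)                               # two-sided p value

# Negative controls centred slightly above zero suggest systematic bias
controls = [0.05, 0.10, -0.02, 0.08, 0.12, 0.03, 0.07, -0.01]
p = calibrated_p(math.log(1.05), controls)
print(round(p, 2))
```

Because the null is centred on the controls' average bias rather than on zero, an estimate that merely reflects systematic error is no longer declared significant.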
Sensitivity analyses and missingness
Glycaemic control at baseline may confound the relationship between treatment choice and outcomes, and glucose level measurements are limited in administrative claims and some EHR data. To better understand the impact of both on effectiveness and safety estimation, we will perform prespecified sensitivity analyses for all studies within data sources that contain reliable glucose or haemoglobin A1c measurements. Within a study, for each exposure pair, we will first rebuild the PS models with baseline glucose or haemoglobin A1c measurements additionally included as patient characteristics, stratify or match patients under the new PS models that directly adjust for potential confounding by glycaemic control, and then estimate effectiveness and safety HRs.
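A miniature sketch of this sensitivity analysis, on simulated data, might look as follows. The protocol's actual PS models are large-scale regularised regressions over tens of thousands of covariates, so everything here (the five stand-in covariates, the effect sizes, the plain gradient-ascent fit) is a simplified assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x_base = rng.normal(size=(n, 5))                # stand-ins for baseline covariates
hba1c = rng.normal(7.5, 1.2, size=n)            # simulated baseline HbA1c (%)
hba1c_z = (hba1c - hba1c.mean()) / hba1c.std()  # standardise for stable fitting

# Simulated treatment choice that depends on HbA1c and one covariate
p_true = 1 / (1 + np.exp(-(0.4 * hba1c_z + x_base[:, 0])))
treat = rng.binomial(1, p_true)

# Rebuilt PS model: baseline covariates plus HbA1c, fit by logistic
# regression via gradient ascent (plain numpy for brevity)
X = np.column_stack([np.ones(n), x_base, hba1c_z])
beta = np.zeros(X.shape[1])
for _ in range(2000):
    ps = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (treat - ps) / n
ps = 1 / (1 + np.exp(-X @ beta))

# Stratify into PS quintiles; HRs would then be estimated within strata
strata = np.digitize(ps, np.quantile(ps, [0.2, 0.4, 0.6, 0.8]))
```

Comparing HRs estimated under these rebuilt strata against the primary analysis indicates how sensitive the estimates are to unadjusted glycaemic control.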
A limitation of the Cox model is that no doubly robust procedure is believed to exist for estimating HRs, due to their non-collapsibility.113 Doubly robust procedures combine baseline patient characteristic-adjusted outcome and PS models to control for confounding and, in theory, remain unbiased when either (but not necessarily both) model is correctly specified.114 Doubly robust procedures do exist for hazard differences115 and we will validate the appropriateness of our univariable Cox modelling by comparing estimate differences under an additive hazards model116 with and without doubly robust adjustment.117 In practice, however, neither the outcome nor PS model is correctly specified, leading to systematic error in the observational setting.
The missing data of most potential concern are the patient demographics (gender, age and race) used in our inclusion criteria. We will include only individuals whose baseline eligibility can be characterised; this will most notably influence race subgroup assessments in the heterogeneity study. No further missing data can arise in our large-scale PS models, because all features, with the exception of demographics, simply indicate the presence or absence of health records in a given time period. Finally, we limit the impact of missing data relating to the exposure TAR, such as prescription information, by entertaining multiple TAR definitions.29 In all reports, we will clearly tabulate numbers of missing observations and patient attrition.
Sample size and study power
Within each data source, we will execute all comparisons with ≥1000 eligible patients per arm. Blinded to effect estimates, investigators and stakeholders will evaluate extensive study diagnostics for each comparison to assess reliability and generalisability, and will only report risk estimates for comparisons that pass these diagnostics.25 35 The diagnostics will include:
Minimum detectable risk ratio as a typical proxy for power.
Preference score distributions to evaluate empirical equipoise10 and population generalisability.
Extensive patient characteristics to evaluate cohort balance before and after PS adjustment.
Negative control calibration plots to assess residual bias.
Kaplan-Meier plots to examine the proportional hazards assumption underlying HR estimation.
We will define cohorts to stand in empirical equipoise if the majority of patients carry preference scores between 0.3 and 0.7 and to achieve balance if all after-adjustment characteristics return absolute standardised mean differences <0.1.118
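The equipoise and balance criteria above can be expressed directly in code. The preference score follows Walker's transformation of the PS by overall treatment prevalence, and the thresholds (majority of preference scores in 0.3-0.7; absolute standardised mean differences <0.1) are those stated in the protocol; the implementation itself is an illustrative sketch:

```python
import numpy as np

def preference_score(ps, treated_fraction):
    """Walker's preference score: the PS rescaled by the overall market
    share of the treatment, so 0.5 indicates empirical indifference
    between the two exposures regardless of how often each is prescribed."""
    logit = np.log(ps / (1 - ps)) - np.log(treated_fraction / (1 - treated_fraction))
    return 1 / (1 + np.exp(-logit))

def abs_std_mean_diff(x_treat, x_comp):
    """Absolute standardised mean difference for one patient characteristic;
    values <0.1 after PS adjustment are taken to indicate balance."""
    pooled_sd = np.sqrt((np.var(x_treat, ddof=1) + np.var(x_comp, ddof=1)) / 2)
    return abs(np.mean(x_treat) - np.mean(x_comp)) / pooled_sd

def in_empirical_equipoise(pref):
    """True if the majority of patients carry preference scores in [0.3, 0.7]."""
    return np.mean((pref >= 0.3) & (pref <= 0.7)) > 0.5
```

For example, a cohort in which every patient's PS equals the overall treated fraction maps to a preference score of exactly 0.5, the point of complete indifference.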
Strengths and limitations
LEGEND-T2DM is, to the best of our knowledge, the largest and most comprehensive study to provide evidence about the comparative effectiveness and safety of second-line T2DM agents. The LEGEND-T2DM studies will encompass over 1 million patients initiating second-line T2DM agents across at least 13 databases from 5 countries and will examine all pairwise comparisons between the four second-line drug classes against a panel of prespecified cardiovascular and safety outcomes. Through an international network, LEGEND-T2DM seeks to take advantage of disparate health databases drawn from different sources and across a range of countries and practice settings. These large-scale and unfiltered populations better represent real-world practice than the restricted study populations in the prescribed treatment and follow-up settings of RCTs. Our use of the OMOP CDM allows extension of the LEGEND-T2DM experiment to future databases and allows replication of these results on licensable databases that were used in this experiment, while still maintaining patient privacy on patient-level data.
LEGEND-T2DM further advances the statistically rigorous and empirically validated methods we have developed in OHDSI that specifically address bias inherent in observational studies and allow for reliable causal inference. Patient characteristics and their treatment choices are likely to confound comparative effectiveness and safety estimates. Our approach combines active comparator new-user designs that emulate randomised clinical trials with large-scale propensity adjustment for measured confounding, a large set of negative control outcome experiments to address unmeasured and systematic bias, and full disclosure of hypotheses tested.
Each LEGEND-T2DM aim will represent evidence synthesis from a large number of bespoke studies across multiple data sources. Addressing questions one bespoke study at a time is prone to errors arising from multiple testing, random variation in effect estimates and publication bias. LEGEND-T2DM is designed to avoid these concerns through methodologic best practices119 with full study diagnostics and external replication.
Through open science, LEGEND-T2DM will allow any interested investigator to engage as a partner in our work at many levels. We will publicly develop all protocols and analytic code. This invites additional data custodians to participate in LEGEND-T2DM and enables others to modify and reuse our approach for other investigations. We will also host real-time access to all study result artefacts for outside analysis and interpretation. Such an open science framework ensures a feed-forward effect on other scientific contributions in the community. Collectively, LEGEND-T2DM will generate patient-centred, high-quality, generalisable evidence that will transform the clinical management of T2DM through our active collaboration with patients, clinicians and national medical societies. LEGEND-T2DM will spur scientific innovation through the generation of open-source resources in data science.
Even though many potential confounders will be included in these studies, there may be residual bias due to unmeasured or misspecified confounders, such as confounding by indication, differences in physician characteristics that may be associated with drug choice, concomitant use of other drugs started after the index date and informative censoring at the end of the on-treatment periods. To minimise this risk, we will use methods to detect residual bias through a large number of negative and positive controls.
Ideal negative controls carry identical confounding between exposures and the outcome of interest.120 The true confounding structure, however, is unknowable. Instead of attempting to find the elusive perfect negative control, we will rely on a large sample of controls that represent a wide range of confounding structures. If a study comparison proves to be unbiased for all negative controls, we can feel confident that it will also be unbiased for the outcome of interest. In our previous studies,22 25 121 using the active comparator, new-user cohort design we will employ here, we have observed minimal residual bias using negative controls. This stands in stark contrast to other designs such as the (nested) case–control that tends to show large residual bias because of incomparable exposure cohorts implied by the design.122
Observed follow-up times are limited and variable, potentially reducing power to detect differences in effectiveness and safety. Further, misclassification of study variables is unavoidable in the secondary use of health data, so it is possible to misclassify treatments, covariates and outcomes. Based on our previous successful studies on antihypertensives, we do not expect differential misclassification, so any resulting bias will most likely be towards the null. The EHR databases may also be missing care episodes for patients who receive care outside the respective health systems. Such bias, however, will also most likely be towards the null.
Finally, since our studies focus on healthcare datasets, as opposed to vital statistics datasets, the cause of death among those suffering sudden cardiac death in the outpatient setting will not be identified as such.
Ethics and dissemination
LEGEND-T2DM does not involve human subjects research. The project does, however, use human data collected during routine healthcare provision. Most often, the data are de-identified within each data source. All data partners executing the LEGEND-T2DM studies within their data sources will have received institutional review board (IRB) approval or waiver for participation in accordance with their institutional governance prior to execution (see table 4). LEGEND-T2DM executes across a federated and distributed data network, where analysis code is sent to participating data partners and only aggregate summary statistics are returned, with no sharing of patient-level data between organisations.
Management and reporting of adverse events and adverse reactions
LEGEND-T2DM uses coded data that already exist in electronic databases. In these types of databases, it is not usually possible to link (ie, identify a potential causal association between) a particular product and medical event for any specific individual. Thus, the minimum criteria for reporting an adverse event (ie, identifiable patient, identifiable reporter, a suspect product and event) are not available and adverse events are not reportable as individual adverse event reports. The study results will be assessed for medically important findings.
Plans for disseminating and communicating study results
Open science aims to make scientific research, including its data, processes and software, and its dissemination through publication and presentation, accessible to all levels of an inquiring society, amateur or professional,123 and is a governing principle of LEGEND-T2DM. Open science delivers reproducible, transparent and reliable evidence. All aspects of LEGEND-T2DM (except private patient data) will be open, and we will actively encourage other interested researchers, clinicians and patients to participate. This differs fundamentally from traditional studies, which rarely open their analytic tools or share all result artefacts, and which inform the community of hard-to-verify conclusions only at completion.
Transparent and re-usable research tools
We will publicly register this protocol and announce its availability for feedback from stakeholders, the OHDSI community and clinical professional societies. This protocol will link to open-source code for all steps to generate diagnostics, effect estimates, figures and tables. Such transparency is possible because we will construct our studies on top of the OHDSI tool stack of open-source software tools that are community developed and rigorously tested.25 We will publicly host LEGEND-T2DM source code at https://github.com/ohdsi-studies/LegendT2dm, allowing public contribution and review, and free re-use for anyone’s future research.
Continuous sharing of results
LEGEND-T2DM embodies a new approach to generating evidence from healthcare data that overcomes weaknesses in the current process of answering, and publishing (or not), one question at a time. Generating evidence for thousands of research and control questions using a systematic process enables us not only to evaluate that process and the coherence and consistency of the evidence but also to avoid p-hacking and publication bias.35 We will store and openly communicate all these results as they become available using a user-friendly web-based app that serves up all descriptive statistics, study diagnostics and effect estimates for each cohort comparison and outcome. Open access to this app will be through a public-facing LEGEND-T2DM webpage.
Dissemination through scientific meetings and publications
We will deliver multiple presentations annually at scientific venues including the annual meetings of the American Diabetes Association, American College of Cardiology, American Heart Association and American Medical Informatics Association. We will also prepare multiple scientific publications for clinical, informatics and statistical journals.
Dissemination to general public
We believe in sharing our findings that will guide clinical care with the public. LEGEND-T2DM will use social media (Twitter) to facilitate this. With dedicated support from the OHDSI communications specialist, we will deliver regular press releases at key project stages, distributed via the extensive media networks of UCLA, Columbia and Yale.
Patient and public involvement
No patients were involved in the design of our studies.
Patient consent for publication
Twitter @hmkyale, @suchard_group
Contributors RK and MAS conceived the research and drafted the proposal in consultation with MS, YL, AO, RC, GH, PR and HMK, who provided critical feedback on the research proposal.
Funding This protocol was partially funded through the National Institutes of Health grants K23 HL153775, R01 LM006910 and R01 HG006139 and an Intergovernmental Personnel Act agreement with the US Department of Veterans Affairs. The funders had no role in the design and conduct of the protocol; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
Competing interests This protocol is undertaken within Observational Health Data Sciences and Informatics (OHDSI), an open collaboration. RK is a founder of Evidence2Health, and receives grant funding from the US National Institutes of Health. MJS and PBR are employees of Janssen Research and Development and shareholders in Johnson & Johnson. GH receives grant funding from the US National Institutes of Health and the US Food & Drug Administration and contracts from Janssen Research and Development. HMK receives grants from the US Food & Drug Administration, Medtronic and Janssen Research and Development, is co-founder of HugoHealth and chairs the Cardiac Scientific Advisory Board for UnitedHealth. MAS receives grant funding from the US National Institutes of Health, the US Department of Veterans Affairs and the US Food & Drug Administration and contracts from Janssen Research and Development and IQVIA.
Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.
Provenance and peer review Not commissioned; externally peer reviewed.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.