
Paper
Aiming at a moving target: research ethics in the context of evolving standards of care and prevention
Seema Shah1, Reidar K Lie2
1Department of Bioethics, Clinical Center, National Institutes of Health, Bethesda, Maryland, USA
2Department of Philosophy, University of Bergen, Bergen, Norway
Correspondence to Ms Seema Shah, Department of Bioethics, National Institutes of Health, 10 Center Drive, Building 10, Rm 1C118, Bethesda, MD 20892, USA; shahse@mail.nih.gov

Abstract

In rapidly evolving medical fields where the standard of care or prevention changes frequently, guidelines are increasingly likely to conflict with what participants receive in research. Although guidelines typically set the standard of care, there are some cases in which research can justifiably deviate from guidelines. When guidelines conflict with research, an ethical issue only arises if guidelines are rigorous and should be followed. Next, it is important that the cumulative evidence and the conclusions reached by the guidelines do not eliminate the need for further research. Even when guidelines are rigorous and the study still asks an important question, we argue that there may be good reasons for deviations in three cases: (1) when research poses no greater net risk than the standard of care; (2) when there is a continued need for additional evidence, for example, when subpopulations are not covered by the guidelines; and (3) less frequently, when clinical practice guidelines can be justified by the evidence, but practitioners disagree about the guidelines, and the guidelines are not consistently followed as a result. We suggest that procedural protections may be especially useful in deciding when studies in the third category can proceed.

  • Ethics Committees/Consultation
  • Research Ethics
  • Applied and Professional Ethics
  • Codes of/Position Statements on Professional Ethics
  • HIV Infection and AIDS


Introduction

The Starting Antiretroviral Therapy at Three Points in Tuberculosis Therapy (SAPIT) trial has been criticised as deeply ethically flawed. The trial was conducted in South Africa and randomised HIV-positive patients with tuberculosis (TB) to receive antiretroviral treatment (ART) earlier or later after starting TB treatment.1 Sean Philpott and Udo Schüklenk claim the trial violated the Declaration of Helsinki's ethical requirement on standards of care because it did not follow WHO and South African clinical practice guidelines, which recommended starting ART as early as possible. They argue that following these guidelines would have prevented at least 10 deaths among the trial participants.2

The basis of the argument against the SAPIT trial is that research that offers participants less than the best current proven intervention is unethical and violates the principle of clinical equipoise.3–5 Although there has been considerable debate about equipoise and the standard of care in research,6,7 this debate has not directly addressed what to do when clinical practice guidelines conflict with the treatment or prevention interventions offered to research participants or when deviations from these guidelines can be justified. This issue is especially difficult when clinical practice guidelines change while research is ongoing.

Based on published and unpublished data and expert opinion, organisations, governmental bodies, professional societies and international agencies periodically publish treatment and prevention guidelines. Such guidelines are often transparent about their evidentiary basis; some grade the quality of evidence as higher if based on randomised controlled trials and lower if based on clinical observations.8 They aim to establish standards of care and change clinical practice and medical policy. Along with expert testimony, clinical practice guidelines can also inform the legal standard of care.9

In rapidly evolving medical fields, like HIV/TB treatment and prevention, where the standard of care or prevention changes frequently, clinical practice guidelines are increasingly likely to conflict with what participants receive in research.10,11 Is research unethical if it conflicts with clinical practice guidelines? First, an ethical issue only arises if the guidelines are rigorous and should be followed; a recent Institute of Medicine report provides helpful criteria for developing and evaluating clinical practice guidelines.11,12 Next, it is important that the research question under study is still important to answer given the body of evidence supporting the guidelines.

When clinical practice guidelines are rigorous and the study still asks an important question, we argue that there may be good reasons for deviations in three cases: (1) when research poses no greater net risk than the standard of care; (2) when there is a continued need for additional evidence, such as evidence about the use of the intervention in subpopulations; and (3) less frequently, when clinical practice guidelines can be justified by the evidence, but practitioners disagree about the guidelines, and the guidelines are not consistently followed as a result. Because it is more difficult to justify studies in category 3, procedural protections may be especially useful in deciding when studies in this category may proceed. We conclude by arguing that the SAPIT trial has elements of categories 2 and 3. Although the trial's deviation from guidelines was justifiable, deviations in category 3 are relatively difficult to justify, so increased procedural protections may have been helpful.

When can deviations be justified?

Research poses no greater net risk than the standard of care

The research setting can be very different from the real-world context that guidelines address. For instance, guidelines may need to simplify complicated information for practitioners and policymakers, or suggest a standard of care that is feasible on a large scale. These motivations sometimes leave guidelines poorly calibrated for the care and attention that can be provided in a research context. Moreover, because studies offer controlled conditions, close monitoring or interventions that could not be provided outside of the study, research can sometimes offer the same risk to benefit ratio as the standard of care without offering the standard of care itself.

Assuming that the deviation from the standard of care is necessary to answer an important scientific question, when a study protects participants such that they will not be exposed to greater net risk than if the guideline had been followed, the study offers the functional equivalent of the standard of care and may not require a high level of justification. This view is consistent with influential ethics guidelines. The Declaration of Helsinki and the Council for International Organizations of Medical Sciences both permit deviating from the best proven standard of care where there is scientific reason to do so and participants are not at risk of serious or irreversible harm.13,14

One example is a study testing whether HIV-positive pregnant women should receive preventative therapy for TB. In December 2010, WHO issued a strong recommendation that HIV-positive people in TB-endemic settings, regardless of their level of immunosuppression, should receive isoniazid preventative therapy to reduce their risk of developing TB.15 Recognising the limitations of TB programmes in developing countries, WHO did not require a positive tuberculin skin test result before initiating therapy, and indicated that the recommendation included pregnant women.

At the time the WHO recommendation was issued, a study on the safety and toxicity of isoniazid in pregnant women was in development. Isoniazid is associated with increased hepatotoxicity. Given that pregnancy itself is a risk factor for hepatotoxicity, the study was motivated by the concern that pregnant women might face higher risk from isoniazid than the general population. Furthermore, the WHO recommendation might not be favourable for women who had not been exposed to TB. To address these concerns, the study was designed to randomise pregnant women to either immediate or delayed isoniazid (started at 12 weeks post partum). The risks were minimised to protect women in the control arm. The study required that all women with CD4 counts below 350 cells/mm3 be on ART, which would decrease their risk of developing TB, and that each visit include TB screening. If TB was detected, the women and/or their infants would be referred for therapy. Thus, women in the study would not be exposed to greater net risk than if they had received guideline-driven isoniazid. As long as the study protections were the functional equivalent of providing pregnant women with isoniazid, the research could justifiably deviate from the guideline.

Continued need for additional evidence

A second justification for deviating from guidelines is when there is a continued, compelling need for additional evidence. This can be the case if a guideline has insufficient evidence to apply to a particular subpopulation, such as infants or children, and there is reason to suspect that the intervention poses different benefits and risks for that subpopulation. The previous example of isoniazid therapy in pregnant women may fall under this category.

More generally, guidelines may be promulgated without full evidentiary support for a variety of reasons. A recent article found that the Infectious Diseases Society of America practice guidelines are frequently issued based on expert opinion alone, and only 14% are based on evidence from randomised controlled trials.16 In cases like these, research can serve a critical function by filling gaps that remain even after much of the needed information is available, whether or not the results end up supporting what the guideline originally recommended.

For instance, complying with guidelines for treatment of hospital-acquired pneumonia recommending more frequent combination broad-spectrum treatment has been found to significantly increase mortality.17 This was likely because the guidelines should not have applied to all hospitalised patients. As one commentator explained, ‘Because of the fatal flaw in making of an accurate diagnosis of intensive-care unit pneumonia and the inherent inability to separate uninfected colonised patients from infected patients, it is probable that a notable number of uninfected patients received unnecessary broad-spectrum combination therapy’.18 Therefore, a study deviating from these guidelines to test a different strategy would likely have been justified.

Guidelines can be justified by the evidence but are controversial

Finally, if a guideline can be justified based on the evidence, but practitioners disagree about the interpretation of the evidence and therefore do not regularly comply with the guidelines, then it may be important for research to be conducted in certain cases. Two additional criteria for this category are that the research question is relevant to the source of disagreement and answering that question requires deviating from the guidelines. Additionally, this reason for deviation would not apply to cases where the failure to adopt the guideline can be addressed in other ways. If practitioners fail to adopt an intervention out of inertia, irrationality, insufficient reimbursement, lack of knowledge, lack of resources or lack of time, these issues are unlikely to be addressed by more research and cannot justify deviating from guidelines; rather, they may require increased educational efforts or policy changes.

It is very difficult to justify deviations from guidelines of this type. For instance, consider a study that tested whether treating people who are very immunosuppressed as if they had TB (without confirmation that they do have TB) could decrease their risk of death. This study planned to enrol people with CD4 counts below 50 cells/mm3, screen them for active TB and exclude those who screened positive, and randomise the remaining people to receive either standard of care monitoring or full TB treatment. As mentioned previously, WHO recommended isoniazid preventative therapy for HIV-positive people in resource-limited settings regardless of tuberculin skin test status, which was controversial given that some people not at risk of TB would be receiving a drug with side effects. For this reason and because of cost constraints, the WHO recommendations were not being implemented in many places around the world. This study attempted to develop a higher standard of care than what the guidelines required, but doing so required deviating from the guidelines: those in the experimental arm would receive more than WHO required, but those in the standard of care arm would receive less.

Failing to offer the standard of care was difficult to justify because participants with such advanced HIV disease were at significant risk of morbidity and mortality. Additionally, the WHO recommendations were clear and beginning to be implemented, facts that made a placebo-controlled trial much less relevant. Research comparing an experimental intervention against placebo would not help policymakers who were no longer seriously considering the option of not offering anything. The relevant question was whether to provide what WHO recommended or to pick a more aggressive alternative. Therefore, this study would not satisfy the ethical criteria for deviating from the guidelines, and it was in fact redesigned to offer participants in the control arm isoniazid therapy.

Applying the analysis to the SAPIT trial

How does this analysis apply to the SAPIT trial, mentioned at the start of this paper? In 2005, SAPIT began testing when to start antiretroviral therapy for people who were co-infected with HIV and TB. There were three arms: (1) starting ART very soon after starting TB treatment, (2) starting ART after the 2-month intensive phase of TB treatment and (3) starting ART after completing TB treatment. All else being equal, it would seem best to start treatment as soon as possible. Nevertheless, there were several reasons for thinking that it may be better to delay starting ART, including potential risks from drug–drug interactions and immune reconstitution inflammatory syndrome, a condition that can occur when an HIV-positive patient starts treatment that allows the immune system to rebuild. The immune system may then newly recognise microorganisms and generate an exaggerated inflammatory response that causes significant harm to the patient. In the absence of data, clinicians would have to weigh the possible adverse drug interactions from starting ART early in TB treatment against the possibility of increased disease-related mortality from starting it late. Both courses of action were potentially risky. Practitioners in South Africa were not routinely following the guidelines recommending early ART because of concerns about these risks, the high pill burden associated with treating both diseases at once and a lack of coordination in the healthcare infrastructure.1

The interim results of SAPIT showed that participants in the third arm (delayed ART until after completion of TB treatment) had a significantly higher rate of death. This death rate was even higher for people with advanced disease (CD4 count <200 cells/mm3). The Data and Safety Monitoring Board stopped this arm early as a result. As previously mentioned, Philpott and Schüklenk have criticised this study for failing to follow the relevant guidelines.

What were the relevant guidelines? The trial began in 2005. The 2003 WHO guidelines stated: ‘The optimal time to initiate ART in patients with tuberculosis is not known’.19 Critics of the SAPIT study point out that those guidelines also say, ‘Pending current studies, WHO recommends that ART in patients with CD4 cell counts below 200/mm3 be started between 2 weeks and 2 months after the start of TB therapy, when the patient has stabilized on this therapy’.2 By contrast, the 2004 South African guidelines recommended the following: ‘If the patient has a history of WHO stage IV illness, and/or a CD4 count of less than 200 cells/mm3, complete 2 months of TB therapy before commencing ART’.20

Before the SAPIT trial, evidence about treating HIV/TB co-infected individuals was available only from observational studies, not randomised controlled trials. Based on this evidence, WHO's recommendation was ‘provisional’ and ‘pending current studies’.19 In November 2009, citing the SAPIT trial, WHO updated its guidance to recommend that ART be started as soon as possible after starting TB treatment.21 This suggests that the SAPIT trial was not in conflict with WHO guidelines. However, the South African guidelines had stronger language that the trial did not follow when it randomised some participants with CD4 counts below 200 to delay their ART until after they had completed TB therapy.

Participants in the SAPIT trial in the sequential therapy arm with CD4 counts below 200 were at higher risk as a result of their treatment assignment. The crucial questions are whether there was sufficient evidence to draw this conclusion before the trial began and whether the study needed to enrol this subpopulation to answer the study question. Some have argued that by combining observational data available at the time of the trial and extrapolating from randomised controlled trials testing ART in various populations, there was enough evidence to conclude that this population should have started ART immediately.18 However, given that the SAPIT results were a key factor in changing the WHO recommendations so that they were no longer provisional, there is a strong argument that there was genuine disagreement and uncertainty in the field about the specific research questions at the time SAPIT was conducted.

All of this suggests that the SAPIT case has elements of categories 2 and 3 in our classification. There was some justification for the guidelines recommending early ART treatment, but at least one guideline found the evidence to be insufficient. The recommendation was not based on data from randomised controlled trials, and there was disagreement among experts about when to start ART in relation to TB treatment. This disagreement seems sufficient to satisfy the general ethical requirement of clinical equipoise, and suggests that the SAPIT trial could have been justified in deviating from the guidelines.

In situations like these that involve considerable disagreement among experts, however, procedural protections could be very useful. Given that a sponsor and research team are invested in their research, if they are the only ones arguing for a deviation from the guideline, this may seem suspect later. Even the perception of having conflicts of interest can be damaging and difficult to overcome. Moreover, hindsight biases make it difficult to render an objective judgment years after the conflict arose,22,23 particularly in resolving questions such as whether there was clinical equipoise surrounding a guideline at the time of the research. For these reasons, researchers and sponsors planning to conduct a study in this category should consider additional protections like the following: (1) asking the Data and Safety Monitoring Board to consider the research question's relation to clinical practice guidelines and explicitly justify any deviations from guidelines; (2) gathering data from practitioners to understand why they are not complying with the guidelines and whether more data are in fact needed; and/or (3) convening an independent panel of ethicists, policymakers in relevant countries or officials in charge of healthcare implementation, community representatives, and clinicians to evaluate the specific question of whether the deviation from the guideline is justified.

Conclusions

Guidelines can be very important for advancing the standard of care. Researchers and sponsors have obligations to abide by guidelines when they clearly apply to the study in question and are not controversial. Nevertheless, studies that deviate from them can be justified when: (1) research offers the functional equivalent of the standard of care; (2) there is a continued need for more evidence; and (3) more controversially, there is strong disagreement about the guidelines and variable practice. Procedural protections may be especially useful in addressing this third category.

This analysis suggests that it may be helpful for guideline writers to carefully explain their evidence base and important areas for further research, and perhaps even describe research designs that would continue to be acceptable. Given the prevalence of hindsight biases, commentators should tread cautiously when condemning past research that conflicts with guidelines. Ideally, ethicists, sponsors and researchers would work together in advance to determine whether to continue a study in conflict with guidelines. Thoughtful analysis of when deviations from guidelines are justified is critically important in order to protect and respect research participants, to produce socially valuable knowledge, and to manage the uncertainty inherent in the development of new knowledge to improve health.

Acknowledgments

The authors would like to thank Annette Rid, Christine Grady, Joseph Millum, Emily Erbelding, Jing Bao, Patrick Jean-Philippe, anonymous reviewers who offered very helpful comments and the study teams who were willing to share information about the ethical issues they faced with others.

References

Footnotes

  • Funding This research was supported by the Intramural Research Program of the NIH Clinical Center and by the Division of AIDS at the National Institute of Allergy and Infectious Diseases.

  • Disclaimer The opinions expressed are the view of the authors. They do not represent any position or policy of the US National Institutes of Health, the Public Health Service, or the Department of Health and Human Services.

  • Competing interests One of the authors is a US government employee who must comply with the NIH Public Access Policy, and the author or NIH will deposit, or have deposited, in NIH's PubMed Central archive, an electronic version of the final, peer-reviewed manuscript upon acceptance for publication to be made publicly available no later than 12 months after the official date of publication.

  • Provenance and peer review Not commissioned; externally peer reviewed.
