
Analysis

Details matter: predicting when nudging clinicians will succeed or fail

BMJ 2020; 370 doi: https://doi.org/10.1136/bmj.m3256 (Published 15 September 2020) Cite this as: BMJ 2020;370:m3256
Craig R Fox, professor1, Jason N Doctor, professor2, Noah J Goldstein, professor1, Daniella Meeker, assistant professor2, Stephen D Persell, professor3, Jeffrey A Linder, professor3

1Anderson School of Management, University of California, Los Angeles, CA, USA
2University of Southern California, Los Angeles, CA, USA
3Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Correspondence to: C R Fox cfox{at}anderson.ucla.edu

Subtle implementation details can greatly influence the effectiveness of behavioural nudges because of their inherently subjective and social nature, argue Craig Fox and colleagues

Interest has surged in promoting better medical decisions by applying insights from behavioural science research.1 Behavioural interventions break with the traditional assumption that patients and clinicians act purely according to rational self-interest. Instead, they acknowledge the cognitive constraints, biases, and social motivations of clinicians and patients, and typically attempt to “nudge” desired behaviours; that is, to influence actions through subtle modifications of the choice environment without substantially altering financial incentives or restricting options.23

Behavioural interventions directed at healthcare professionals seem especially promising given that clinicians increasingly act through electronic health records, so that decisions are tracked and nudges can be deployed through modifications to standardised workflows. For instance, setting prescriptions to default to generic equivalents, while allowing clinicians to opt out easily, increased the rate of generic prescribing from 40% to 96% within one month.4
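To make the mechanics concrete, here is a minimal sketch (in Python) of the logic of an opt-out default: the generic equivalent is preselected, and reverting to the brand name requires an explicit extra action. Every name in it (Order, GENERIC_EQUIVALENTS, build_order) is hypothetical; real electronic health record systems expose very different interfaces.

```python
# Minimal, hypothetical sketch of an opt-out default for generic
# substitution; not a real electronic health record interface.
from dataclasses import dataclass

# Hypothetical lookup table mapping brand names to generic equivalents.
GENERIC_EQUIVALENTS = {
    "Lipitor": "atorvastatin",
    "Glucophage": "metformin",
}

@dataclass
class Order:
    drug: str
    is_generic: bool

def build_order(requested_drug: str, opt_out: bool = False) -> Order:
    """Default to the generic equivalent unless the clinician opts out."""
    generic = GENERIC_EQUIVALENTS.get(requested_drug)
    if generic and not opt_out:
        # The nudge: the generic is preselected, but the brand name
        # remains available, so no option is removed.
        return Order(drug=generic, is_generic=True)
    return Order(drug=requested_drug, is_generic=False)

print(build_order("Lipitor"))                # defaults to atorvastatin
print(build_order("Lipitor", opt_out=True))  # explicit opt-out keeps the brand
```

The essential design choice is that the clinician’s option set is unchanged; only the effort required to deviate from the default differs.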

Some approaches to nudging have yielded results that seem to contradict one another, leading several observers to question the general efficacy of the behavioural approach to improving healthcare.56 For instance, an article entitled “Is peer pressure’s potential to improve physician performance overrated?” concludes: “People are so eager for behavioural economics, and related fields, to solve healthcare problems, that they get carried away.”5

We believe this scepticism is misplaced because it neglects critical implementation factors, well established in prior research, that determine the effectiveness and reproducibility of behavioural interventions. When attempting to nudge behaviour, small contextual details that are well known to behavioural scientists, but not necessarily apparent from top-line summaries, often have a big effect on success or failure. Four important lessons can be learnt from a critical re-examination of behavioural nudging approaches that have produced ostensibly mixed results.

Nudges operate on a subjective level

Successful behaviour change often requires targeted individuals to notice the intervention, interpret it appropriately, form an intention to behave in the desired way, and then follow through on that intention. Mistakes in design and implementation can undermine the effectiveness of a nudge at any point along this chain, from attention to interpretation to intention to behaviour.

Consider the case of social comparison interventions. Since the early 2000s, studies have shown that peer comparison of clinician performance can be a powerful instrument for changing behaviour.7 For that reason, the disparity in results between a US performance message study89 and a Swiss performance dashboard study10 initially seems curious. The US based study sent primary care physicians monthly emails summarising their rate of inappropriate antibiotic prescribing for acute upper respiratory infections, along with the rate of inappropriate prescribing by their top performing peers. Providing this social feedback reduced inappropriate antibiotic prescribing from 20% at baseline to less than 4% at the end of the 18 month intervention, an effect that persisted a year after the intervention was discontinued. By contrast, the Swiss study evaluated the effect of sending physicians quarterly reports on their rate of antibiotic prescribing over a two year period, along with prescribing information about their peers. This intervention did not significantly change prescribing.

Closer inspection shows that particular differences between the intervention protocols readily explain these contrasting results. Firstly, whereas the performance message in the US based study provided simple email feedback with a subject line designed to draw attention to peer comparison information, the performance dashboard in the Swiss study provided copious feedback that did not direct each participant’s attention specifically to social comparison data. Research has shown that human attention is severely constrained, especially when individuals are busy or distracted.11 Secondly, the performance message study provided feedback more often (monthly) than the dashboard study (quarterly), which may have enhanced the salience and impact of the social comparison information.12 Thirdly, the dashboard provided information on all antibiotic prescriptions, not merely inappropriate ones, giving clinicians licence to judge their own prescriptions as especially appropriate; many studies have shown that people are biased to view their own behaviour in an unrealistically positive light.13 Fourthly, the performance message provided an aspirational social norm in which a clinician’s performance was contrasted with that of “top performers” who had the lowest rate of inappropriate prescribing, which likely increased motivation to improve and join this select group.1415 By contrast, the dashboard provided feedback in the form of a descriptive social norm: the performance of clinicians with the highest rates of antibiotic prescribing at each consultation was compared with that of the average Swiss primary care physician. This may be an overly broad comparison group given the heterogeneity of practices, and descriptive social norms tend to be more impactful when the comparison group is similar to the target group.16 In summary, although the US and Swiss studies both attempted to leverage social comparison information, several design variations in the Swiss study may have undermined its effectiveness by failing to capture and direct attention to the most relevant and influential social information.
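To make these design features concrete, the sketch below (in Python) shows how a single metric, an attention grabbing subject line, and an aspirational top performer comparison might be assembled into a feedback message. All names, wording, and thresholds are hypothetical illustrations, not the cited studies’ actual materials.

```python
# Hypothetical sketch of a peer comparison feedback message embodying
# three design features discussed above: a salient subject line, a single
# focused metric, and an aspirational "top performer" norm.

def peer_comparison_email(clinician: str, own_rate: float,
                          top_performer_rate: float) -> dict:
    """Compose a single metric, aspirational peer comparison message."""
    is_top = own_rate <= top_performer_rate
    # Salience first: the subject line itself carries the social comparison.
    subject = ("You are a Top Performer" if is_top
               else "You are not a Top Performer")
    body = (
        f"Dear {clinician},\n"
        f"Your rate of inappropriate antibiotic prescribing: {own_rate:.0%}.\n"
        # Aspirational norm: compare with the best performing peers,
        # not with the average clinician.
        f"Top performers' rate: {top_performer_rate:.0%}.\n"
    )
    return {"subject": subject, "body": body}

msg = peer_comparison_email("Dr Example", own_rate=0.15,
                            top_performer_rate=0.04)
print(msg["subject"])   # You are not a Top Performer
print(msg["body"])
```

The contrast with the dashboard design is that nothing here competes for attention: one metric, one comparison, one aspirational reference group.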

Moderating variables may be critical

Behavioural researchers constantly seek to determine which psychological aspects of an intervention amplify or dampen its impact. The presence and relative importance of these moderating factors often get lost in the headline results of high impact studies of healthcare delivery systems.

Consider the case of pre-commitment. Social psychologists have long known that explicit commitments to attitudes or behaviours can elicit strong intrapersonal and interpersonal pressure to behave consistently with those commitments.17 Drawing on this insight, one study asked clinicians at five outpatient primary care clinics to display posters in their examination rooms for 12 weeks that stated their commitment to responsible antibiotic prescribing for acute respiratory infections and featured the clinicians’ photographs and signatures. Compared with control clinics, intervention clinics had an adjusted 20% decrease in inappropriate antibiotic prescribing.18

In a similar vein, another study invited 45 primary care physicians to pre-commit to Choosing Wisely recommendations against unnecessary antibiotic prescribing for acute sinusitis and against unnecessary imaging for low back pain and headaches. In this study the pre-commitment consisted of a signed paper that was returned to a study coordinator.19 For the next one to six months, medical assistants placed paper reminders of clinicians’ commitments in the room during visits by relevant patients (that is, patients with low back pain, headaches, or sinus symptoms). Investigators also sent weekly emails to support clinicians’ communication with patients. However, this intervention produced only a small decrease in low value orders for low back pain, which was not sustained at follow-up.

Again, key differences between the protocols may account for the diverging results. Firstly, whereas the poster study displayed commitments publicly, the paper commitments were expressed more privately; commitments are more likely to promote consistent behaviour when they are made publicly.20 Secondly, the visibility of the posters to patients in the poster study may have reduced patients’ expectation of receiving antibiotics in the first place (thereby reducing explicit requests, implicit pressure on clinicians, and possibly clinicians’ incorrect assumptions that patients wanted antibiotics). By neglecting a moderating variable well known to social psychologists (public display of commitments), the researchers in the paper commitment study may have undermined the effectiveness of their intervention.

Nudges often require calibration

Implementing a previously successful behavioural intervention in a new setting may seem straightforward, especially if the original protocol is followed closely. However, differences among the targeted populations, specialties, and clinical contexts may require the intervention to be adjusted accordingly.

Consider the strategy of designating a desired behaviour as the default option (from which individuals can opt out), which has successfully increased employees’ participation in retirement saving plans,2122 promoted energy conservation,23 and may explain large differences in organ donation consent rates across countries with different policies.24 Defaults have also been used to promote adherence to guidelines for patients dependent on ventilators25 and to increase prescribing of generic drugs.426

When manipulating default prescription quantities, the most appropriate and effective number will surely differ between practices and practice areas with different baseline prescribing rates and goals.27 Indeed, a study that manipulated default opioid prescriptions found that orthopaedic surgeons, who perform surgery to relieve pain, were nudged less successfully than other surgical specialties, for whom postoperative pain is merely a side effect of the operation.28

Even when default manipulations succeed overall, they may have adverse effects on some doctors’ decisions. For instance, an emergency department study set the electronic health record to default new opioid prescriptions to 10 tablets, which reduced the median number of tablets prescribed (from 11.3 to 10).27 As expected, this was achieved by dramatically increasing the proportion of the time clinicians prescribed exactly 10 tablets. However, the overall success seems to have come at the expense of some patients, because clinicians were significantly less likely than before to prescribe fewer than 10 tablets and significantly more likely to prescribe more than 20 tablets.

We also note that when a nudge is administered alongside other interventions, the interactive effect may be difficult to predict. For instance, in the aforementioned opioid default study, when clinicians opted out of the study’s primary 10 tablet default, they often landed on a secondary default of 28 tablets established by the health system, likely contributing to the increased rate of prescribing more than 20 tablets.27 Finally, as the use of multiple electronic health record alerts becomes commonplace, overwhelmed clinicians may simply ignore these nudges.29
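Returning to the opioid example, a toy simulation can show how a primary default and a higher secondary default jointly lower the median while fattening the upper tail. All parameters below are invented for illustration; they are not the study’s data.

```python
# Toy simulation (invented parameters) of a 10 tablet primary default with
# a 28 tablet secondary default for clinicians who opt out.
import random

random.seed(0)

def prescribe(baseline: int) -> int:
    """One simulated prescribing decision under the defaults."""
    if random.random() < 0.7:   # assumed: 70% accept the 10 tablet default
        return 10
    if random.random() < 0.3:   # assumed: some opt-outs land on the
        return 28               # secondary 28 tablet default
    return baseline             # the rest enter their own quantity

baselines = [random.randint(5, 20) for _ in range(10_000)]
after = [prescribe(b) for b in baselines]

def median(xs):
    return sorted(xs)[len(xs) // 2]  # toy median (upper middle element)

print("median before:", median(baselines))
print("median after: ", median(after))
print("share >20 before:", sum(b > 20 for b in baselines) / len(baselines))
print("share >20 after: ", sum(a > 20 for a in after) / len(after))
```

Under these assumptions the median falls to exactly the default while the share of prescriptions above 20 tablets rises from zero to roughly 9%, mirroring the pattern described above.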

Behavioural interventions are social interventions

It is important to recognise that the design of the choice environment presented to clinicians (“choice architecture”) is not experienced in a vacuum. It is embedded in a social ecosystem involving an implicit or explicit interaction between targeted individuals and the designer.30 As such, targeted clinicians may try to make sense of why a health system has chosen to present options and supporting information in a particular way. This is especially true when there has been a recent change in workflow or policy,30 and some nudges can backfire if the implementing administrator is distrusted by clinicians.31 Administrators contemplating changes in choice architecture should therefore consider engaging a broader group of clinicians in the design and implementation, and reflect on how changes are announced so that targeted clinicians are less suspicious of new interventions.

Moving forward

While we remain optimistic about the potential of nudge interventions to improve health, this optimism comes with a caveat. It is tempting to think of nudges as standardised preparations, like drugs, that are delivered in defined doses with predictable effects. It is better to think of them as inductive rules of thumb about which kinds of interventions tend to have which kinds of effects on people in various situations. Researchers, clinicians, electronic health record designers, quality improvement professionals, and health system leaders should therefore review the relevant behavioural science literature and consider collaborating with behavioural science experts. Those with expertise in the behavioural sciences may be more sensitive to the critical design details that influence attention and subjective understanding, the moderating variables that may operate, and how the social context will affect outcomes.

Because human behaviour is so complex, the behavioural science literature can, at best, provide good educated guesses about the impact of an intervention on clinician behaviour. Applications of new nudges, or of established nudges in new clinical settings, should therefore follow standard practice in implementation science: they should be piloted before being scaled up. Piloting helps calibrate an intervention and surfaces unexpected moderating factors. Rather than asking whether nudging in general “works,” it may be more productive to ask: “What kind of nudges would work best in this particular setting?” And if a nudge does not replicate, the first question should be: “What were the critical differences in implementation and context?”
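As a sketch of what evaluating such a pilot might look like, the code below compares the proportion of inappropriate prescriptions between a nudged group and usual care using a two proportion z-test. The counts are invented for illustration; a real evaluation would also need to account for clustering by clinic and baseline differences.

```python
# Hypothetical pilot evaluation: compare inappropriate prescribing rates
# between a nudged group and usual care with a two proportion z-test.
from math import erf, sqrt

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Return the z statistic and two sided p value for p1 - p2 = 0."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two sided p value from the standard normal distribution.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 60/400 inappropriate prescriptions with the nudge
# versus 90/400 under usual care.
z, p = two_proportion_z(60, 400, 90, 400)
print(f"z = {z:.2f}, p = {p:.4f}")  # z is about -2.72, p about 0.007
```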

Ideally, randomised trials comparing variations of nudges can provide empirical data about critical design features. Otherwise, we suggest re-evaluating the appropriateness of nudges to the problem and context, carefully reviewing design features, and measuring key aspects of implementation.3233 Ultimately, details matter when designing, applying, and evaluating nudges. These details can inform not only health science but also the behavioural science that inspired the intervention in the first place.

Key messages

  • Attempts to influence clinician behaviour using behavioural “nudges” are inherently subjective interventions; as such, they require a targeted clinician’s attention and appropriate interpretation of the information

  • Subtle implementation details of a particular nudge—for example, simplicity in feedback, orienting clinicians to aspirational goals, and publicising commitments—can affect targeted clinicians’ experience of the intervention and therefore have an outsized impact on its success or failure

  • Nudge approaches should therefore be carefully calibrated to new populations and new contexts, and piloted before being scaled up

  • Nudging entails an implicit social interaction between targeted clinicians and the choice architect; thus, trust between clinicians and administrators may be critical for nudges to succeed

Footnotes

  • Contributors and sources: CF and NG are experts in behavioural decision making and social influence, respectively; JD and DM are experts in health policy; JL and SP are physician researchers with experience studying primary care practices and clinician behaviours. The team has been collaborating for 10 years on behavioural health interventions. The work draws on the published literature examining interventions to address clinician decision making and behaviour change in healthcare settings, as well as literature from applied behavioural science. The premise of this article was conceived by CF. All other authors contributed to the paper’s outline. CF prepared the first draft, and all other authors contributed critical edits and revisions.

  • Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References