
Physicians’ attitudes towards the media and peer-review selection of the ‘best cancer doctor’: comparison of two different selection methods
  1. Dong Wook Shin1,2,3,
  2. Juhee Cho4,5,
  3. Hyung Kook Yang6,
  4. So Young Kim6,7,
  5. Soohyeon Lee8,
  6. Eun Joo Nam6,
  7. Joo Seop Chung9,
  8. Jeong-Soo Im10,
  9. Keeho Park6,
  10. Jong Hyock Park6,11
  1. Department of Family Medicine, Samsung Medical Center, Seoul, South Korea
  2. Supportive Care Center, Samsung Comprehensive Cancer Center, Seoul, South Korea
  3. Department of Digital Health, SAIHST, Sungkyunkwan University, Seoul, South Korea
  4. Department of Health, Behavior and Society & Epidemiology, Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland, USA
  5. Cancer Education Center, Samsung Comprehensive Cancer Center, SAIHST and School of Medicine, Sungkyunkwan University, Seoul, South Korea
  6. Division of Health Policy and Management, National Cancer Control Institute, National Cancer Center, Goyang, South Korea
  7. Department of Public Health and Preventive Medicine, Chungbuk National University Hospital, Cheongju, South Korea
  8. Division of Medical Oncology, Department of Internal Medicine, Yonsei University College of Medicine, Seoul, South Korea
  9. Department of Hemato-Oncology, Pusan National University Hospital, Busan, South Korea
  10. Department of Preventive Medicine, Gachon Medical School, Incheon, South Korea
  11. College of Medicine/Graduate School of Health Science Business Convergence, Chungbuk National University, Cheongju, South Korea
  Correspondence to Dr Jong Hyock Park; jonghyock@gmail.com

Abstract

Objectives The choice of doctor is an important issue for patients with cancer, and the doctor’s reputation is the single most important factor in that choice. The media provide information about the ‘best cancer doctor’, but they vary widely in their selection methodology. We investigated cancer physicians’ attitudes towards the selection of the ‘best cancer doctor’ by the media by comparing two different selection methodologies: selection by media personnel and selection through a peer-review system.

Design Nationwide, cross-sectional survey.

Setting National Cancer Center and 12 Regional Cancer Centers across Korea.

Participants A total of 680 cancer care physicians participated in the survey (75.5% participation rate); two were excluded due to incomplete responses.

Main outcome measures Physicians’ opinions on the credibility, fairness and validity of the two selection methods, their helpfulness to patients, physicians’ intention to use the information, and the methods’ potential to improve the quality of cancer care.

Results Only a few physicians believed that selection of the ‘best cancer doctor’ by media personnel was credible (9.1%), fair (6.1%) or valid (10.0%). In contrast, the majority agreed that the peer-selection method is credible (74.7%), fair (64.7%) and valid (67.4%). More physicians believed the latter method would be useful for patients when selecting their doctor (38.5% vs 82.2%) and might lead to improvement of the quality of cancer care from the perspective of the healthcare system (12.6% vs 59.8%). The need to ensure objectivity and transparency was also raised.

Conclusion Physicians showed different attitudes towards the two selection methods. Regulations or guidelines for selecting the ‘best cancer doctor’ and for disclosing the information should be considered in order to control the quality of the information and to protect consumers.

  • best cancer doctor
  • selection
  • reputation
  • oncology
  • media

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/


Strengths and limitations of this study

  • This study was performed as part of a nationwide survey of cancer physicians from all national and regional cancer centres across all the administrative regions of Korea, ensuring its representativeness.

  • This study covered a rarely investigated topic: cancer physicians’ attitudes towards the selection of the ‘best cancer doctor’.

  • We were unable to evaluate many different forms of selecting the ‘best cancer doctors’.

  • Our findings might be context-specific to Korea, where patients have a high degree of freedom to choose among many specialists reimbursed by a single national health insurance provider.

Introduction 

When a person is diagnosed with cancer, it is natural for the patient and caregivers to seek information about the place and person that can provide the best cancer care for them.1 2 Many patients believe that the quality of care differs greatly between providers and that their choice of doctor may have a critical impact on their health outcomes.3 A substantial proportion of patients reportedly seek information about doctors through friends and family, their primary care doctor and the internet, and actively choose a doctor.4 Presumably, the choice of doctor would be even more important for patients with cancer, as cancer is a potentially life-threatening disease that usually requires long-term treatment and long-term, or even life-long, follow-up.5 A considerable number of patients travel to another city in order to receive treatment from their preferred physician.6

The reputation of the doctor may be the single most important factor for patients when they choose a doctor. Patients reportedly considered the surgeon’s reputation the most important factor in their decision.4 However, the exact manner in which the reputations of doctors are formed and their influence on patient decisions has not been sufficiently assessed, particularly in cancer care. While online physician quality ratings by consumers are gaining popularity,7 8 patients’ experiences with oncology care are affected by many aspects in addition to the physicians themselves, and such ratings may hence not be a reliable source of information.9 Therefore, the best method for patients to obtain information regarding the ‘best cancer doctor’ remains unclear.

The media have quickly responded to this need, but their methodologies vary widely across the world. In the USA, one of the most notable examples is Best Doctors’ ‘Best doctor in America,’ which is based on a survey asking physicians whom they would choose in a hypothetical situation wherein one of their family members is ill.10 Castle Connolly identifies top doctors through a nomination process; nominees are then reviewed by its physician-led research team, and the resulting information is disclosed over the internet.11 The Consumers’ Research Council of America selects its top doctors based on a point system that considers experience, training, membership in professional associations and board certification.12 13 In Japan, TV documentary programmes such as ‘This is world’s super doctor’ and ‘Fighting doctors, this is Japanese best doctor’ select a specialist in a field and provide relevant healthcare information together with the physician’s background.14 15 These programmes do not have, or do not publicise, a methodology for selecting ‘top doctors’; the selection is made by the media personnel themselves. In Korea, a similar TV documentary programme, ‘best doctor’, exists, and similar approaches are commonly used by many newspapers and magazines. Recently, in Korea, the MK news media, in cooperation with Deloitte, began to select the ‘best cancer doctor and hospital’ through a survey based on endorsements from physicians in the same field but affiliated with other institutions.16 In sum, two approaches are commonly used in the media for selecting the ‘best doctors’: selection by media personnel without specific criteria, and systematic selection by peer review and peer endorsement.

Many hospitals use this information as advertisement,17 18 which may greatly affect patients and families in their selection of a cancer doctor. Theoretically, if this ‘best cancer doctor’ information is reliable, it will help patients visit the highest quality cancer doctors, leading to better care, improved health outcomes and possibly reduced costs from the perspective of the healthcare system. However, its reliability, helpfulness and potential impact on health outcomes have not been studied in detail. As physicians involved in cancer care have sufficient medical knowledge and have often had experience of, or interaction with, some of the ‘best doctors’ featured on TV, they may provide fairer insight into this issue than patients or the public; we therefore decided to explore cancer physicians’ attitudes towards the selection of the ‘best cancer doctor’.

This study aims to evaluate physicians’ attitudes towards the two different methods of selecting the ‘best cancer doctor’ in terms of credibility, fairness, validity, helpfulness to patients, physicians’ intention to use the information and potential to improve the quality of cancer care.

Methods

Study design and subjects

This study was performed as part of a nationwide survey conducted every year by the Cancer Policy Branch of the National Cancer Center (NCC) to explore unmet needs and issues related to cancer control.19–21 The study was administratively supported by the Ministry of Health and Welfare, and 13 cancer centres (the NCC and 12 Regional Cancer Centers designated for the national cancer control programme) across all the administrative regions of Korea participated in the survey.

Physicians were eligible for this survey if they were board-certified physicians involved in cancer care. Study coordinators at each participating cancer centre recruited physicians by attending faculty meetings or contacting potentially eligible physicians individually and explained the study purpose and details. Once a physician agreed to participate, a paper-based questionnaire was given to the physician, who self-administered the survey and returned it together with informed consent. The study coordinators were guided to recruit physicians of various specialties so that the sample would be representative of cancer care clinicians. Of the 901 eligible physicians contacted by the study coordinators, 680 (75.5%) agreed to participate in the study, and the 678 (75.3%) physicians who completed the survey were included in this study (figure 1). The study was approved by the institutional review board of the NCC.

Measures

Given the paucity of relevant research, we developed a questionnaire based on a literature review and discussion within an expert group. The expert group comprised three researchers in healthcare services and management, three supportive care oncologists and one behavioural scientist with expertise in patient education, and it held several meetings to develop and finalise the questionnaire. In addition, a pilot study was conducted with five physicians at the NCC. They all agreed that the questionnaire was clearly worded and understandable, and as none of them had difficulties, no revision was made.

To compare attitudes towards different methods of selecting the ‘best cancer doctor,’ we selected two examples at opposite ends of the spectrum (table 1). The first is the peer-selection system, which is used for ‘best doctor in America’ in the USA and ‘best cancer doctor and hospital’ in Korea. The second is selection by media personnel without an open and specified methodology, which is used for ‘This is world’s super doctor’ in Japan and ‘best doctor’ in Korea. As study participants might not be familiar with these selection methodologies, especially the peer-selection system, which was not common in Korea, we provided details of the methodologies by presenting examples (online supplementary appendix). The questionnaire included questions regarding the physicians’ opinions on the credibility, fairness and validity of each method, its helpfulness to patients, their intention to use the information and its potential to improve the quality of cancer care from the perspective of the healthcare system. The survey also recorded age, sex, specialty and years since board certification.

Supplemental material

Table 1

Two ‘best cancer doctor’ selection methodologies compared in this study

Statistical analysis

Descriptive statistics were used to assess the responses to the questions. Differences in responses to the two methods of selecting the ‘best cancer doctor’ were compared using the McNemar test for matched samples. All statistical analyses were conducted using Stata V.12.0 (StataCorp), and a p value of <0.05 was considered statistically significant.
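As an illustration only (not part of the original analysis, which was performed in Stata), the paired comparison described above can be reproduced with a McNemar test; the minimal sketch below uses Python’s statsmodels, and the cell counts are hypothetical placeholders rather than data from this study.

    # Minimal sketch of a McNemar test for paired responses; counts are hypothetical.
    from statsmodels.stats.contingency_tables import mcnemar

    # 2x2 table of paired agreement from the same physicians (placeholder numbers):
    # rows = agreed that media-personnel selection is credible (yes / no)
    # cols = agreed that peer selection is credible (yes / no)
    table = [[40, 22],     # agreed with both / agreed with media-personnel selection only
             [467, 149]]   # agreed with peer selection only / agreed with neither

    result = mcnemar(table, exact=False, correction=True)  # chi-square approximation
    print(f"McNemar statistic = {result.statistic:.2f}, p value = {result.pvalue:.4f}")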

Results

Characteristics of participants

The mean age of the cancer care physicians was 42.7 years, and the mean time since board certification was 11.7 years. Approximately 75% of the study participants were male. The sample comprised surgical oncologists (41.9%), medical oncologists (27.9%), radiation oncologists (4.6%) and physicians who provided clinical support for cancer care (25.7%; eg, 51 radiologists, 42 pathologists, 26 pain specialists, 17 laboratory medicine physicians, 10 psychiatrists, 7 nuclear medicine physicians, 6 cardiologists, 6 rehabilitation specialists and 9 others) (table 2).

Table 2

Characteristics of the respondents (n=678)

Perceived reliability of two different methods of selecting the ‘best cancer doctor’

The physician respondents generally had a negative opinion of the selection of the ‘best cancer doctor’ by media personnel: only a few believed that the selection method was credible (9.1%), fair (6.1%) or valid (10.0%). However, they were more positively disposed towards the peer-selection method: the majority agreed or strongly agreed with the statements that this method is credible (74.7%), fair (64.7%) and valid (67.4%) (table 3).

Table 3

Responses to two different methods for selecting the ‘best cancer doctor’ (n=678)

Perceived usefulness for doctor selection and impact on the quality of cancer care

Only a minority of the participants believed that the ‘best cancer doctor’ information obtained via the media personnel selection method would be useful for patients when selecting their doctor (38.5%) or reported that they would consider that information if their own family member were affected by cancer (22.3%). In contrast, most respondents reported that information obtained by the peer-selection method would be useful for patients when selecting their doctor (82.2%) and that they would consider that information if their own family member were affected by cancer (75.8%).

With regard to the impact on the quality of cancer care, although very few (12.6%) believed that the ‘best cancer doctor’ information obtained by the media personnel selection method would be helpful, more than half (59.8%) believed that the peer-selection system would be helpful (table 3).

Table 4

Free comments regarding two different selection methods

Free comments

There were a total of 44 and 42 free comments for the media personnel and peer-selection methods, respectively. Comments with similar themes were grouped and are provided in table 4. Most comments involved negative feedback on each method of selecting the ‘best cancer doctor’. Regarding the media personnel selection method, many were concerned that such selection would not be objective, that it would be susceptible to lobbying and that media-friendly doctors would be likely to be selected. Some described personal experiences in which doctors were selected as the ‘best doctor’ by a programme even though they did not agree with the selection. Regarding the peer-evaluation system, there were concerns that personal relationships would play a large role and that this method might favour senior doctors in high-volume hospitals.

Discussion

Taking an active role in treatment decision-making can have a positive effect on patients’ health and quality of life. The choice of doctor is one of the patients’ and family members’ first decisions as active participants in the care process.

At present, the media have a profound impact on the choices of consumers, particularly in the healthcare system. The majority of the general US population rated the healthcare information they read or heard from the media as fair to good in quality, and stated that the information influenced their health-seeking behaviour.22 Healthcare information from the media influences consumers’ decisions on whether to visit a doctor, whether to obtain a second opinion and how to treat their illness.23 Therefore, we can hypothesise that such information may have a significant influence on the selection of a physician and/or institution. In addition, while many patients generally rely on their primary care physicians to choose a cancer doctor for them,24 25 it is unlikely that the primary care physician has adequate comparative data regarding which doctor is ‘best’ for a specific type of cancer; moreover, referring physicians may feel more relieved and confident when they refer their patients to a specialist with a good reputation. Therefore, ‘best cancer doctor’ information would likely influence patients’ choice of cancer doctors, both directly through consumer awareness and indirectly through referring physicians’ referral preferences.26

In the present study, almost all the physicians answered that the selection of ‘best cancer doctor’ by the media personnel without specified methodology was not credible, fair or valid. Many physicians showed a high degree of distrust regarding its reliability, and some reported that they could not agree with this method after observing that an individual who did not deserve the ‘best doctor’ reputation, in their opinion, was given that status on a TV programme. There were also concerns that media-friendly doctors (eg, outgoing and confident doctors with friendly demeanours), rather than the actual ‘best’ and most highly qualified cancer doctors, would be selected for participation on the programme, and that this method is susceptible to lobbying by hospitals’ media teams. These responses were expected, as TV programme producers and journalists are often limited by a lack of training in scientific methodology,27 and are likely to sensationalise the contents to attract the attention of the viewers, readers or listeners.27

In contrast, the physicians believed that the selection of the ‘best cancer doctor’ by peers, with a clear methodology, was a relatively credible, fair and valid method. Peer physicians may have observed the candidate’s practices while training or working together, and, otherwise, may have interacted with the candidate at conferences or workshops, and thus had an opportunity to indirectly evaluate the candidate’s skill or knowledge. Accordingly, Best Doctors declared that they believe that physicians are the most qualified to evaluate the experience and skill sets of other physicians, and many physician respondents in our survey seemed to agree with this idea. However, a substantial minority seemed to disagree with the reliability of this evaluation system. Some respondents stated that doctors do not necessarily know how well other doctors perform, and that it is difficult to evaluate individuals they are not familiar with. Accordingly, they were concerned that the results could be biased by personal relationships, and that this system may be more advantageous to senior doctors in high-volume centres.

Most participants believed that the ‘best cancer doctor’ information obtained by the peer-selection method would be helpful to the patients, and they stated that they intended to use that information. Conversely, the ‘best cancer doctor’ information obtained by the media personnel selection method was perceived as much less helpful. However, one interesting finding was that a substantial portion of the respondents answered that such information would be helpful to the patients and might even serve as a reference for them, while they disagreed with its reliability. For example, in the case of media personnel selection, the agreement rates were 36.1% for perceived helpfulness and 21.7% for intention to use the information personally, although they were 10% or less for the credibility, fairness and validity items. The reasons for this discrepancy could not be clearly elucidated by our study; however, they may reflect the notion that this information is ‘better than nothing’. Some comments from the respondents included: ‘just one of the references’ and ‘could serve as a reference, but is not useful beyond that,’ suggesting that they thought that this kind of information might be helpful for the patients to select a doctor with confidence, although they doubted whether it would actually help them find the ‘best cancer doctor.’

The peer-evaluation system was much more likely to be perceived positively compared with the media personnel selection system in terms of the impact on the quality of cancer care. Public reporting of doctor and/or hospital performance generally results in improvements in the quality of healthcare, mainly due to a change in provider behaviours in an attempt to improve their own performance and reputation.28 Such effects would likely apply to the ‘best cancer doctor’ selection and disclosure as well, and the effects would theoretically be larger when the selection results are considered more reliable.

The most common comments in our survey concerned the objectivity of the selection methods, and many suggested that objective criteria such as patient volume, mortality and research output should be considered in the selection. Recently, it has been reported that the outcomes of cancer surgery or treatment vary between hospitals and between individual surgeons,29–32 and the release of such comparative outcome data to the public is becoming increasingly common.26 33 While most respondents in this study regarded the peer-selection method of the ‘best cancer doctor’ positively, incorporation of objective data may be important, as actual outcomes have been shown to be poorly associated with peer identification of the ‘best doctor,’ with training at prestigious institutions or with a long practice record.34 35 As ‘best doctor’ is a complex construct with multiple attributes, peer selection of the ‘best cancer doctors’ should be accompanied by risk-adjusted performance data as a prerequisite for candidate nomination, as well as an evaluation by patients or caregivers as an additional selection criterion.

Another important issue, identified from the free comments in the present study, was transparency of the selection process. Several respondents mentioned that selection of the ‘best cancer doctor’ by the media is susceptible to lobbying or political influence. Some believed that hospitals’ media teams influence media personnel in the selection of the ‘best cancer doctors,’ as it is an effective advertisement method. Doctors who are selected can obtain personal benefits such as increased research funding or financial incentives,36 and some respondents suspected that some doctors with political power would therefore lobby for this. Recently, in the USA, there was an incident in which a ‘doctor’ who had not worked as a dentist for even a single day was listed as one of ‘America’s top dentists’ after paying for a wall plaque.13 Accordingly, we believe that a regulatory mechanism is necessary to ensure that the selection process is transparent (ie, subject to audits from third parties) and free of conflicts of interest (ie, not paid for by the candidates or hospitals).

This study has certain limitations. First, we were unable to evaluate the many different forms of selecting the ‘best cancer doctors.’ Various methods are currently used in different countries, but there is no established methodology for selecting the ‘best cancer doctor’ yet. Moreover, as our study participants were not familiar with all the different methods, we compared these two methods, which could be presented with details of real-world examples. Second, our findings might be context-specific to Korea, where patients have a high degree of freedom to choose among many specialists reimbursed by a single national health insurance provider. Therefore, the implications of this study would be more applicable to market-based healthcare systems such as those of the USA, Japan or Taiwan, where patients are expected to make more active choices in order to be treated by high-quality providers.24 37 However, patient choice has recently gained importance in a number of European countries as well, where patients have previously not been encouraged to actively choose their healthcare providers.

Despite the trend towards public disclosure of risk-adjusted performance data, patients and their family members still rely largely on the reputation of doctors when deciding where and from whom to obtain cancer treatment.4 Our study shows that media selection of the ‘best cancer doctor’ is likely to be biased, and such programmes might provide misinformation to patients and their families. Assuming that selection of the ‘best cancer doctor’ by the media remains popular, it is critical to ensure that the selection process is credible, fair and valid. In the present study, physicians perceived selection of the ‘best cancer doctor’ by peer endorsement as relatively credible, fair and valid; thus, this method may guide patients to make more informed and better choices, and may lead to improvement of the quality of cancer care from the perspective of the healthcare system. The need to ensure objectivity and transparency was also raised. Thus, regulations or guidelines for selecting the ‘best cancer doctor’ and for disclosing the information are necessary in order to control the quality of the information and to protect consumers.

Acknowledgments

The following 13 Korean institutions (National Cancer Center and Regional Cancer Centers) participated in this study and data collection (in alphabetical order): National Cancer Center (Goyang), Busan Regional Cancer Center, Chungbuk Regional Cancer Center, Daegu-Gyeongbuk Regional Cancer Center, Daejeon Regional Cancer Center, Gangwon Regional Cancer Center, Gyeonggi Regional Cancer Center, Gyeongnam Regional Cancer Center, Incheon Regional Cancer Center, Jeju Regional Cancer Center, Jeonbuk Regional Cancer Center, Jeonnam Regional Cancer Center and Ulsan Regional Cancer Center. We also thank Dr Soojin Kim for her search for Japanese website information.

References


Footnotes

  • Contributors Study concept and design: DWS, JHP and SYK. Acquisition of data: JHP, HKY, JSC, J-SI, EJN and SYK. Analysis and interpretation of data: JHP, SYK, DWS, JC and HKY. Drafting of the manuscript: DWS, JC, SYK and HKY. Statistical analysis: JHP, SYK and DWS. Obtained funding: JHP. Administrative, technical or material support: JHP, J-SI, JSC, KP, HKY and SL. Critical review and revision of the manuscript: DWS, JC, HKY, SYK, SL, EJN, JSC, J-SI, KP and JHP.

  • Funding This work was supported by a grant from the National R&D Program for Cancer Control (No 1210150), a research grant from Chungbuk National University in 2014 and a National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No 2016R1A2B4011045). The funding agreement ensured the authors’ independence in designing the study, interpreting the data, and writing and publishing the report.

  • Competing interests None declared.

  • Patient consent Next of kin consent obtained.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data sharing statement The datasets generated during and/or analysed during the current study are not publicly available due to other ongoing research projects using the material but are available from the corresponding author on reasonable request.