The paper by Ioannidis is fatally flawed by the assumption that citation counts and Twitter activity correlate with scientific quality or with health and policy impact. No hypothesis has been presented to support this assumption. Opportunities to publish at scale are influenced by length of career and by other academic commitments, and papers may be highly cited as much to challenge them or to seek clarification as to commend or build on them. This point has been demonstrated in a peer-reviewed paper (1) which contrasted two examples: the publication of contentious views that attracted high citations, and a research finding that was quickly applied in practice but appeared in only a single publication. The highly cited author eventually had his licence to practise withdrawn, while the author whose work led to many lives being saved globally had no citation profile. Ioannidis seems to confuse noise and twittering with good grounding and integrity of evidence.
(1) Rigby M. Citation Analysis in Health Care Sciences - Innovative Investigation or Seductive Pseudo-science? Methods Inf Med 2014;53(06):459-463. DOI: 10.3414/ME14-05-0004
I thank Gorski, Dahly, and Pimenta for their criticism and Yamey and Bak-Coleman for their second round of comments. As already stated, I signed neither GBD nor JSM, my study did not aim to elevate or downgrade one or the other narrative, and I congratulate all GBD and JSM signatories. The 443 signatories from GBD include 4 scientists with whom I have co-authored, and 3 with Stanford affiliation. The respective first 443 signatories of JSM include 5 scientists with whom I have co-authored, and 15 with Stanford affiliation. I have co-authored COVID-19 scientific papers with both GBD and JSM signatories (more with the latter). I have more close ongoing collaborators and friends in JSM than GBD. According to Scopus I have 6590 co-authors and probably >200 have signed GBD or JSM. I have learned from both JSM and GBD colleagues and I thank them all for sharing their wisdom.
As I did in my original paper, I applaud Pimenta again for his amazing work. Additional studies of engagement, impressions and reach would be very useful. Pimenta fervently but needlessly defends some of the main JSM points, since my paper attacked neither JSM nor GBD. It only showed that both lists include many stellar scientists and that JSM had an overwhelming Twitter presence. This is also emphatically obvious in the Twitter reception of my paper.
Gorski apparently submitted his rapid response and his 7591-word blog post on his sciencebasedmedicine.org website before seeing my response to previous comments. I refer him to it. The Twitter presence of many signatories is loud (even if laudable) regardless of whether the number of followers is expressed as an absolute count, k-index, log10, square root, or sin φ. Twitter influence on public perception of science, media, and policy is large, an elephant in the room that needs better study. An elephant is an elephant regardless of whether one presents its weight in kilograms or in pounds.
Bak-Coleman additionally wants more focus on statistical testing, but this is superfluous in a descriptive design. I agree with him that the paper “establishes that many key signatories of GBD did not identifiably use Twitter at the time of data collection” and I have already acknowledged that some Twitter accounts may not be easily retrievable from Google searches, but the difference is so stark that it is unlikely to be a data artifact. “Addresses, family members, places of work, and identities of political dissidents or victims of hate crime” were not the focus of data collection here. Anyone can search and report numbers of Twitter followers; it would be absurd to have to request permission for this. Also, should influential advocates who publicly aim to regulate the lives of billions of people be incognito or invisible?
Yamey falsely uses the term “lobbying”, citing a BuzzFeed News story based on a spuriously selected sample of my e-mails. However, even the BuzzFeed News story clearly contradicts the “lobbying” label. How could I ever “lobby” against lockdowns when those in favor of lockdown at that time included me (as testified by interviews I gave in multiple countries[1-3]) and most other scientists in our group? What we sought was better data for addressing this major crisis in the long term. We never saw the president or anyone in the Task Force, and, worse, the quest for reliable evidence was ridiculed with tragic consequences. The interpretation that my “controversial studies claim that the coronavirus isn’t that big a threat” is contradicted by many of my published studies, which can be found in the folder Projects > COVID-19 published work at https://profiles.stanford.edu/john-ioannidis?tab=research-and-scholarship. Conversely, at about the same time, Yamey argued in interviews that this virus was “less fatal than previous epidemics” and claimed that “if I get coronavirus …the chances of my dying are zero percent…” (https://www.france24.com/en/video/20200227-when-it-comes-to-other-viruse...). Yamey also falsely claims that I predicted in a March 2020 editorial[4] that roughly 10,000 Americans could die from Covid-19. This is not what that editorial says; Yamey seems to be reading only tweets about it. My calculation estimated on average 10,000 deaths for each 1% of the population infected, a surprisingly accurate estimate given the large uncertainty at that time. I discuss this stunning distortion of my work in [5]. Then Yamey quotes another BuzzFeed News paragraph that dwells on conservative commentaries. The truth is that, being adamant about sticking to the evidence and not espousing any political narrative, I was attacked from right and left, with variable predilection in each country. For example, in Greece, the same BuzzFeed News material was re-used in an attack launched against me by an alt-right politician who wanted immigrants dispatched to uninhabited islands.
We need tolerance and dialogue with people who have different views in order to find ground for the common good. Demonizing does not help. Yamey and Gorski have fiercely attacked GBD and even AIER. I am in favor of full transparency of potential conflicts, but I think that their specific attack was inappropriate and justifiably led the BMJ to correct their relevant blog[6]. I am not affiliated with any political party and I feel strongly that science should not be kidnapped by politics. I respect people regardless of what party they vote for and I worry when political affiliation (conservative, liberal, libertarian, or progressive) is invoked to erode credibility of opponents in scientific matters.
Calling scientists stellar, recording their great impact, and eventually congratulating all of them for their sense of social responsibility is not “an exercise in humiliating”. Conversely, calling some of the most accomplished scientists in the world “fringe” or “unethical” and loading them with alleged conflicts that they do not have is demeaning, regardless of whether these scientists signed GBD, JSM, or neither. I am concerned about the increasing use of belittling cartoons, inflammatory tweets with crude ad hominem material, and overt ridicule instead of scientific data. Both Yamey and Gorski are talented communicators, and they can offer much to restore courteous dialogue, to elevate science and the public health effort, and to empower healing and reconciliation. Conversely, divisiveness eventually hinders wide endorsement of life-saving measures like vaccines.
Finally, I share with Dahly the same choice about the most important words ever written about our field: the eloquent phrase of the late Doug Altman about research. Doug was a close friend of mine and I miss our interaction. I wonder how he would have responded to this major global crisis with his calm and generous spirit.
References
1. Ioannidis JPA. Interview given in March 2020 and published April 1, 2020 in Der Spiegel: “Ist das Coronavirus weniger tödlich als angenommen?” (Is the coronavirus less deadly than assumed?). “Die drakonischen Maßnahmen … sind natürlich sinnvoll und auch gerechtfertigt, weil wir noch zu wenig über das Virus wissen” (the draconian measures … of course make perfect sense and they are appropriate, because we still know too little about the virus). In: https://www.spiegel.de/consent-a-?targetUrl=https%3A%2F%2Fwww.spiegel.de....
2. Ioannidis JPA. Interview given at the Stanford University studio on March 24, 2020, uploaded as a detailed 1-hour video, Perspectives on the Pandemic, episode 1, https://www.dailymotion.com/video/x7ubcws. “I am perfectly happy to be in a situation of practically lockdown in California more or less, with shelter-in-place, but I think very soon we need to have that information to see what we did with that and where do we go next”. The video had over 1 million views but was censored in May by YouTube.
3. Ioannidis JPA. Interview given on March 22, 2020 to in2life.gr (“The lockdown is the correct decision, but…”): “the stringent measures that are taken at this phase are necessary because it seems that we have lost control of the epidemic”. https://www.in2life.gr/features/notes/article/1004752/ioannhs-ioannidhs-...
4. Ioannidis JP. A fiasco-in-the-making? As the Coronavirus pandemic takes hold we are making decisions without reliable data. STAT (2020) https://www.statnews.com/2020/03/17/a-fiasco-in-the-making-as-the-corona...
5. Ioannidis JP, Tanner M, Cripps S. Forecasting for COVID-19 has failed. Int J Forecasting 2022 Apr-Jun;38(2):423–438. Published online 2020 Aug 25. doi: 10.1016/j.ijforecast.2020.08.004
6. Gorski D, Yamey G. COVID-19 and the new merchants of doubt. blogs.bmj.com/bmj/2021/09/13/covid-19-and-the-new-merchants-of-doubt/
Time for the science Kanyes to stop harassing the science Kardashians
This study doesn't adequately measure what it seeks to measure and may instead harmfully discourage people from engaging with the stakeholders of science.
We share many of the concerns presented in other rapid responses, but we contribute to the discussion by critiquing this study from a science communication perspective, which we would argue is the most appropriate domain for this kind of study.
The study aims to measure whether scientific citations or social media metrics drive the apparent perception that one group of scientists' policy responses to COVID-19 is better supported by credible scientists than another group's.
We could point to concerns about the precision and measurement of the outcomes under investigation. The outcomes of interest, "perceptions" (presumably an attitudinal concept) and the "dominant" or “prevailing narrative” (presumably a relative measure of message prevalence), are never defined or measured in this paper. Instead, there is an assumption that the number of Twitter followers axiomatically leads to such attitudinal or message prevalence outcomes. As science communication researchers, we wish it were so easy.
While one reviewer queried this assumption, it is disappointing that it was not fully addressed. As noted by that reviewer, "The author attempted to examine a group of scientists' academic and social media impact, but this has not been linked to health-related outcomes, nor COVID-19 measures."1 We agree. Yet, this is not our main concern.
Likewise, we could point to concerns about the main explanatory variable, the Kardashian Index, computed by dividing the number of Twitter followers a given author has by the number of followers expected from that author's citation count in the scientific literature.2 As stated by its inventor, the measure is satirical in nature, intended to poke fun at "academia's obsessions with metrics”.3 Accordingly, many subsequent authors have used the index for satirical purposes. For example, Khan et al.4 noted its limited scientific value when examining the relationship between cardiologists’ Twitter followers and their scientific citations, and confirmed that its use was for satire.5 A similar approach, with a caution against taking the index too seriously, was used when examining the Twitter use of electrophysiologists.6 However, we are concerned that other authors have taken the index more seriously, for example when examining these metrics for interventional neuroradiologists7 and educational leadership researchers.8
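For concreteness, a minimal sketch of the calculation as we read it from Hall's paper (the constants 43.3 and 0.32 are Hall's fitted values; the follower and citation counts below are invented for illustration, not data from the study under discussion):

```python
def kardashian_index(followers: int, citations: int) -> float:
    """K-index per Hall (2014): actual Twitter followers divided by the
    follower count 'expected' from a scientist's citation record."""
    expected_followers = 43.3 * citations ** 0.32  # Hall's fitted curve
    return followers / expected_followers

# Invented example numbers, purely illustrative.
k = kardashian_index(followers=20000, citations=1500)
print(round(k, 1))  # Hall's tongue-in-cheek threshold for a "Science Kardashian" is K > 5
```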
While the author does acknowledge issues with the face validity and construct validity of this measure, it is worth detailing why these are more than limitations that should simply be recognised. We argue that these are flaws that render the findings unusable. Yet this is also not our main concern.
The use of the Kardashian index means the study only focuses on one social media platform, a choice that was not justified in the study. This decision is important because it excludes many other, more frequently used social media platforms, such as Facebook, YouTube, WhatsApp and Instagram. Yet, this is not our main concern.
Describing this count as a measure of "social media visibility" is false. Visibility could be measured by analysing the engagements on tweets and posts made by these authors on this particular topic, on Twitter and other social media platforms, not by counting followers. Just because an author has a higher number of followers, it does not follow that any tweets on this topic were viewed, nor is it accurate to describe this as a "Twitter footprint." The study also describes this variable as measuring "social media presence", which incorrectly seems to imply that it captures whether or not the authors have registered an account (or are present) across any of these platforms, rather than the number of the authors' followers on just one platform. Yet these too are not our main concerns.
To get to our biggest concern, we have to ask why someone would write something like this and why editors would choose to publish it without ethical approval. Our main concern is that the argument built on these methodological flaws may lead to harms among the academic community. For example, by overstating the influence that a Twitter follower count has, this study may encourage some scientists and medical professionals to misdirect time and effort into building a long social media follower list, with the unrealistic expectation that they may relatively easily achieve public policy influence. As the study correctly notes, "massive misinformation and despicable behaviour may still generate huge follower lists."
Most harmfully, this paper may lead other scientists and medical professionals to avoid social media and other communication media to meaningfully engage the public as they wish to avoid this kind of mocking and personal derision in the academic literature. Particularly during a pandemic, we should instead encourage those who wish to take steps towards meaningful engagement between the science community and the stakeholders of science. In short, there is nothing inherently wrong with being a science Kardashian. Indeed we would argue that the world needs more of them.
Understanding the relationship between science communication activities, perceived scientific legitimacy and public policy outcomes is legitimate and worthy, but this study does not deliver meaningfully on this enterprise.
Like Kim Kardashian's former husband, we note that the author is not currently active on Twitter. Maybe like him, it is time for the science Kanyes to pledge to stop harassing the science Kardashians.9
Matthew S Nurse
Will J Grant
1. BMJ Open. Peer review history: BMJ Open, 2022.
2. Hall N. The Kardashian index: a measure of discrepant social media profile for scientists. Genome Biology 2014;15(7) doi: 10.1186/s13059-014-0424-0
3. Hall N. Only if they haven’t read the paper! the tells that the entire premise is satire could not be made more obvious. But is (sic) has had a ~180 citations so maybe I should have tried harder. In: @neilhall_uk, ed., 2022.
4. Khan MS, Shahadat A, Khan SU, et al. The Kardashian Index of Cardiologists. JACC: Case Reports 2020;2(2):330-32. doi: 10.1016/j.jaccas.2019.11.068
5. Michos Erin D, Khan Muhammad S, Kalra A. Reply. JACC: Case Reports 2020;2(6):983-84. doi: 10.1016/j.jaccas.2020.05.006
6. Linz D, Garcia R, Guerra F, et al. Twitter for professional use in electrophysiology: practical guide for #EPeeps. EP Europace 2021;23(8):1192-99. doi: 10.1093/europace/euab048
7. Vilanilam GK, Wadhwa V, Purushothaman R, et al. The Kardashian index of interventional neuroradiologists: measuring discrepant social media influence. The Neuroradiology Journal 2020;33(6):525-27. doi: 10.1177/1971400920950928
8. Eacott S. Educational leadership research, Twitter and the curation of followership. Leadership, Education, Personality: An Interdisciplinary Journal 2020;2(2):91-99. doi: 10.1365/s42681-020-00016-z
9. West K. I’ve learned that using all caps makes people feel like I’m screaming at them. I’m working on my communication. I can benefit from a team of creative professionals, organisers, mobilisers and community leaders. Thank everybody for supporting me. I know sharing screen shots was jarring and came off as harassing Kim. I take accountability. I’m still learning in real time. I don’t have all the answers. To be good leader is to be a good listener. In: https://www.instagram.com/kanyewest/, ed. Newsweek. New York, USA, 2022.
I am grateful to John Ioannidis for replying to my rapid response, but I do not agree with him when he says that I have misrepresented his study. I also believe he has failed to address my two original concerns: (1) along with a group of GBD signatories, he lobbied the Trump administration, yet he failed to declare this competing interest, and (2) his paper demeans, belittles, and humiliates named scientists, yet he did not seek ethics review. I am also very concerned that the author has not described the GBD's competing interests (he merely says, "GBD leaders have repeatedly denied conflicts of interest"). He simply takes the GBD, with whom he is allied, at face value.
I'd like to more fully explain my three concerns.
1. THE AUTHOR'S LOBBYING EFFORTS, ALONG WITH GBD SIGNATORIES
It is a matter of public record that Prof Ioannidis worked with three GBD signatories, one of whom was an author of the GBD, to lobby the Trump Administration, as described in a March 2020 investigative report titled "An Elite Group Of Scientists Tried To Warn Trump Against Lockdowns In March." [1]
The author of the investigation, Stephanie Lee, writes that: "John Ioannidis’s controversial studies claim that the coronavirus isn’t that big a threat. Before the Stanford scientist did any of them, he wanted to take that message to the White House."
Lee notes:
"Stanford University scientist John Ioannidis has declared in study after study that the coronavirus is not that big of a threat, emboldening opponents of economic shutdowns — and infuriating critics who see fundamental errors in his work. But even before the epidemiologist had any of that data in hand, he and an elite group of scientists tried to convince President Donald Trump that locking down the country would be the real danger."
This elite group of scientists included David Katz (True Health Initiative), Michael Levitt (Stanford), and Jay Bhattacharya (Stanford), all of whom signed the Great Barrington Declaration. Jay Bhattacharya was one of the 3 authors of the GBD.
In a now infamous March 17 2020 editorial, Ioannidis predicted, based on data from the quarantined Diamond Princess cruise ship, that roughly 10,000 Americans could die from Covid-19; he said that "this sounds like a huge number." He argued that lockdowns could endanger “billions, not just millions” of lives [2].
Lee notes that:
"Over the following days, Ioannidis grew more vocal in a flurry of interviews and scientific commentary. And his Stat op-ed caught the eye of many conservative commentators, from Ann Coulter to Fox News personality Lisa Boothe. Bret Stephens cited it in a New York Times column titled “It’s Dangerous to Be Ruled by Fear.” It also circulated among West Wing aides, Bloomberg reported. But Ioannidis wanted to make his case to the president directly, according to the emails. Starting around March 23, he began rounding up a cohort of vocal and influential lockdown skeptics to help him do so. “I was told that they can arrange for the President to meet with 5-7 top scientists,” Ioannidis wrote in one email, with the subject line “meeting with the President in D.C.” He added, “I think you can make a huge difference in this critical time.”
I feel strongly that Ioannidis should have declared that he worked with GBD signatories to lobby the President. I am unclear why the author omitted this information from his competing interests statement. Surely his lobbying efforts, and his close work with the GBD, are absolutely critical in understanding why he published a paper that aims to demean and humiliate critics of the GBD?
2. THE DUBIOUS ETHICS OF THIS STUDY
I remain deeply troubled that Professor Ioannidis conducted a study that aimed to humiliate named individuals, without their consent. He uses an index, the Kardashian Index, which itself was supposed to be satire, to argue that his GBD colleagues are proper scientists whereas those who criticized his GBD colleagues are "Kardashians." He did not seek IRB approval or informed consent to this exercise in humiliating scientists who have been critical of the GBD.
I think it is reckless to argue, as the author does, that research about people's social media use does not require IRB approval. Does the author believe that humiliating critics of the GBD is free of ethical concerns? I believe he should have asked an IRB to review the protocol. As Bak-Coleman notes, "Professor Ioannidis argues that IRB approval should not be required to publish publicly searchable information in a deanonymized context. This is a dangerous position to take. A researcher using this standard could publish, for instance, the sleuthed addresses, family members, places of work, and identities of political dissidents or victims of hate crime. " [3]
3. THE GBD'S COMPETING INTERESTS
I have written at length in the BMJ about the support that the American Institute for Economic Research, a libertarian, Koch-funded, free market think tank, has provided to the GBD [4,5]. The AIER wined and dined the GBD, and provided accommodation, web services, social media support, marketing, and other forms of support. The AIER also helped to shape and edit the GBD. Again, this is crucial in understanding the motivations of the GBD, which opposes widespread vaccination, masks, test/trace/isolate/support, workplace protections and school protections. As I have previously argued, "Determining the scope of AIER’s involvement in the publication and dissemination of the GBD is crucial to understanding its ideological underpinnings and is a matter of national importance given that Martin Kulldorff, Jay Bhattacharya, and Sunetra Gupta, the authors of the GBD, met with the Secretary of Health and Human Services, Alex Azar, on 5 October 2020, the day the document was published." [5]
I am reminded of Doug Altman's seminal paper, The Scandal of Poor Medical Research, published in the British Medical Journal almost 30 years ago. It begins:
"We need less research, better research, and research done for the right reasons."
I believe these are the most important words ever written about our field. They are also the most ignored. If asked for evidence of the latter, I would start by highlighting this paper, published under that same BMJ banner, and from which nothing of scientific substance can be learned. The bar remains entirely too low.
BMJ Open has published a 'paper' by a close friend and supporter of the Great Barrington authors that attempts to measure 'social media influence' by Twitter followers (ignoring an entire field of data science that studies engagement, impressions and reach), correlate it with scientific impact (using an equally flawed citation count metric), and combine the two into an index that was published, and later confirmed, to be an actual joke. It's called the Kardashian Index. In order to say what, exactly?
That letting millions of people become infected with COVID-19 when a vaccine was only a month or two away was actually a good idea? That shutting away approximately 30% of the population for an indeterminate amount of time was both feasible and ethical? That natural infection would confer lasting immunity (it didn't)? Or that variants wouldn't arise as a consequence of widespread infection (they did)? And all of these good ideas would have been gladly received by the global scientific community and its decision makers, nearly 100% of whom completely ignored these ideas as the nonsense they were, if not for a group of 30 scientists on Twitter?
Is that really what the BMJ, the British Medical Journal, thought was a scientific and academically rigorous concept? Did they even read any of the references? Such as when the Kardashian Index's author referred to his own work as "just a bit of fun"?
These discussions were around the deaths of millions of individuals, in the midst of the worst global crisis since World War II. To be labelled as 'non-serious' because of a joke index, distorting science, is appalling and, to be perfectly frank, an utterly idiotic notion. It feels weird to have to castigate so many of my literal and figurative elders in the scientific community, but seriously, grow up. If I were the BMJ, I would want to remove my good name from this 'journal'.
It is very puzzling how someone of Prof. Ioannidis' stature could so lower himself as to write such a methodologically flawed manuscript. However, to me its methodological flaws, which have been well covered by other Rapid Responses, are completely overshadowed by a far more glaring problem, a conceptual one at the heart of the very premise of the manuscript. The Kardashian index was conceived as satire. If you do not believe me, look no further than Neil Hall himself, who, having been tagged in tweets about Prof. Ioannidis' article, took to Twitter to say that the Kardashian index was "a dig at metrics not Kardashians. It’s like taking a quiz to see what character from Game of Thrones you are and finding out you’re Joffrey Baratheon. It doesn’t matter - it’s not a real test. Thankfully," adding that "the tells that the entire premise is satire could not be made more obvious." (https://twitter.com/neilhall_uk/status/1492259823114723329)
1. "I had intended to collect more data but it took a long time and I therefore decided 40 would be enough to make a point. Please don’t take this as representative of my normal research rigor."
2. "While aware that the analysis is flawed and lacks statistical rigor, it is a relief to see that there is some kind of positive trend in scientific value when compared with celebrity."
3. "I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone’s 140 character wisdom, it can also be an incentive - if your K-index gets above 5, then it’s time to get off Twitter and write those papers."
As an aside, I took Hall's admonition to heart and calculated my K-index using his methods. It was 118. I did not, however, include it in my Twitter profile (@gorskon).
Levity aside, throughout his paper Prof. Ioannidis treats the K-index as if it were a valid bibliometric measure, all in the service, apparently, of portraying the signatories of the John Snow Memorandum (JSM) as more "science Kardashians" than the signatories of the Great Barrington Declaration (GBD). One can only speculate why he ever thought that such an exercise was worth pursuing or why BMJ Open thought it was worth publishing.
I am left to conclude one of two things. Either Prof. Ioannidis did not realize at the time he conceived this analysis that the K-index was always intended as satire and, by using it as though it were an actual, even somewhat valid, measure of anything, proved Dr. Hall's point about the obsession of some scientists with metrics. Alternatively, Prof. Ioannidis' peculiar exercise in casting critics of the GBD as "science Kardashians" compared to the GBD signatories is as much satire as the K-index. Unfortunately, if the latter explanation is accurate, then I must confess that Prof. Ioannidis' satire is far too subtle for me. Indeed, it is so subtle that it is impossible to recognize as satire! Perhaps it's parody.
I welcome Professor Ioannidis’ engagement on these issues. The response however falls short on several points and raises additional concerns.
Ethical: Professor Ioannidis is correct that many papers make use of Twitter and other publicly available sources of data. However, he is mistaken in the assertion that, as a result, such work does not need to be reviewed by an IRB (my own research is a case in point). IRBs exist to safeguard the rights of human subjects, and the decision about whether IRB review is required must, critically, rest with an IRB rather than with the individual scientists conducting the research.
Professor Ioannidis argues that IRB approval should not be required to publish publicly searchable information in a deanonymized context. This is a dangerous position to take. A researcher using this standard could publish, for instance, the sleuthed addresses, family members, places of work, and identities of political dissidents or victims of hate crime. In this particular case, Dr. Ioannidis indirectly ascertained non-use of Twitter, a decision that was made in a reasonably private context. No reasonable definition of consent to participate in the Twitter portion of the study could be applied to these signatories. Given the inflamed nature of the discussions around these issues, the potential for signatories of either document to experience further negative attention on social media as a result of their Twitter accounts being publicly identified and linked to in this fashion is not trivial. Whether this represents a “reasonable expectation of harm” is a decision that should be made by an IRB rather than an individual scientist.
Conflicts of interest: It is unclear how having many conflicts of interest relieves one of the need to explicitly describe the nature of those individual conflicts (professional, financial, etc.).
Statistical: Regardless of their location in the manuscript, standards suggest that “rather than reporting isolated P values, articles should include effect sizes and uncertainty metrics” (Chavalarias, Wallach, Li, Ioannidis 2016). Certainly stating which test was used would be a given. These are basic reporting standards upheld broadly by journals and scientists alike. It is not clear how the reader would be able to infer, for instance, that a Mann-Whitney U test was used, and the choice matters given the sensitivity of that test in this context to the zero inflation introduced by zero-imputing missing data. There are many more appropriate options that take into account missing data or zero inflation. Similarly, several options exist for 2x2 tables as well, and a reader should be able to evaluate whether those choices are appropriate.
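To make the zero-imputation point concrete, here is a minimal sketch (the follower counts, the grouping, and the use of scipy are invented assumptions for illustration, not the study's actual data or code) of how that choice can move a Mann-Whitney U test:

```python
from scipy.stats import mannwhitneyu

# Invented follower counts; None marks signatories with no identifiable account.
group_a = [150, 420, None, None, 35, None, 2200, None, None, 90]
group_b = [5200, 15400, 800, None, 31000, 1200, 410, 9400, 2600, 700]

def compare(a, b, zero_impute):
    """Two-sided Mann-Whitney U test, either treating missing accounts
    as zero followers or dropping them from the comparison."""
    if zero_impute:
        clean = lambda xs: [0 if x is None else x for x in xs]
    else:
        clean = lambda xs: [x for x in xs if x is not None]
    return mannwhitneyu(clean(a), clean(b), alternative="two-sided")

print("zero-imputed:   ", compare(group_a, group_b, zero_impute=True))
print("missing dropped:", compare(group_a, group_b, zero_impute=False))
```

With these made-up numbers the two choices give noticeably different U statistics and p-values, which is precisely why the imputation decision needs to be reported.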
Conceptual: I hope Dr. Ioannidis is aware that the Kardashian Index paper is a work of satire, as indicated by the final section in that paper “on a serious note”, implying the paper itself is not serious. A core issue is that Twitter use, follower counts, and citation counts are correlated with age and other demographic factors. The fact that the paper has been cited elsewhere is not a guarantee that the metric is appropriate for the stated purpose—which was "to examine whether the prevailing narrative that GBD is a minority view among experts is true". Whatever the merits of Kardashian Indices, they are surely immaterial to this question for these and other reasons. Generously, this paper simply establishes that many key signatories of GBD did not identifiably use Twitter at the time of data collection.
I thank Sheldrick, Bak-Coleman and Yamey for their constructive criticism. As stated clearly in the paper, the main analysis focuses on the key signatories of both documents: All the key signatories are included without random sampling. As the paper already explains in detail, since thousands of additional people signed each document, a few randomly selected signatories from the longer lists were also explored. Random numbers were generated in Excel. There was no power calculation for this secondary analysis. The paper already explains that this secondary analysis deserves caution, since only 443 GBD signatories were listed by name when the two documents were accessed online in April 2021.
The Twitter data represent information readily retrievable by anyone through a Google search. The notion of requiring IRB approval to report the results of searching Google or free publicly available databases (e.g. citation databases) contradicts the practice of hundreds of thousands of published papers reporting on such searches without IRB approval. Moreover, contrary to what Bak-Coleman asserts (“failure to disclose the author's well-documented history of interaction (co-authorship, affiliations, debate, etc..) with the named signatories—positive and negative”), my disclosures clarified explicitly that “The author has signed neither of the two documents and has many friends, collaborators and other people who he knows and he admires among those who have signed each of them.” To track scientists’ co-authorships and affiliations, information is available in public view. Lists of all scientists’ papers exist in multiple public databases. My COVID-19-related papers are available in the folder Projects > COVID-19 published work at https://profiles.stanford.edu/john-ioannidis?tab=research-and-scholarshi...; perusal of that folder would allow a more accurate appraisal of my positions. Unfortunately, my writings are occasionally distorted in some media and social media; such distortions happen to the positions and writings of many scientists who struggle to remain neutral and objective in the current charged environment. Among the thousands of scientists listed by name on the two documents’ sites in April 2021, I have probably co-authored and share affiliations with more scientists in JSM than GBD. My disclosure statement stated: “JPI congratulates all the thousands of signatories (of both documents) for their great sense of social responsibility.” I stand by this statement and remain thankful to all JSM and GBD signatories for their commitment to help during a crisis.
I devoted several paragraphs to discussing the limitations of both citation indices and Twitter followers as measures of impact. This does not mean that citation and Twitter follower data cannot be used. Thousands of scientific publications have already done so. The K-index has already been widely used in published analyses. The article that introduced it1 has been cited 180 times, and the number of mentions/uses is much larger if one also includes citations to other papers that further popularized it, often with explicitly listed names of scientists, e.g. in Science.2 I should correct that the original, semi-serious implementation by Hall1 used Web of Knowledge (not Google Scholar), but Scopus citation counts (used in my analysis) are even closer to Web of Knowledge counts. I have been repeatedly critical of over-trusting single indices of any sort and have strongly advocated for more comprehensive approaches that appreciate multiple types of contributions with a broad perspective.3,4 However, here the presented data on Twitter followers are so extreme that they stand out regardless of whether the K-index or any other index is preferred. I admire people who use social media platforms for accurate science communication. Science communicators are my heroes, but it is important to recognize and to study dispassionately the extravagant influence of social media on the public discourse of science and on policy.
My key analyses are purely descriptive; thus, no P-values appear in the abstract. P-values are used secondarily for some comparisons based on routine tests (exact test for 2x2 tables, Mann-Whitney U test for two groups). All the data used are indeed provided in the manuscript. References were already provided in the paper5-7 that link to the repository where the citation metrics are publicly available. A direct link to the version used is https://elsevier.digitalcommonsdata.com/datasets/btchxktzyw/2. The referenced papers describe in detail the underlying methods. A reference was also provided8 to the widely used scientific field classification method (Science-Metrix); the classification system is free to download at https://www.science-metrix.com/classification/.
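As a purely illustrative sketch of the kind of 2x2 exact test mentioned here (the counts are invented and the use of scipy's fisher_exact is an assumption about tooling, not the code actually used in the paper):

```python
from scipy.stats import fisher_exact

# Hypothetical counts only: two groups of key signatories,
# cross-classified by whether an identifiable Twitter account was found.
table = [[30, 13],   # group 1: with account, without account
         [9, 34]]    # group 2: with account, without account

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2g}")
```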
One of the reasons I chose BMJ Open for submitting this paper is its excellent tradition of also publishing the peer reviews. Apparently the comments of the three peer reviewers were not released initially due to a technical error on the part of BMJ Open. I am grateful that the editors have now posted the peer-review comments.
Finally, in contrast to what Yamey mentions, I did not "lobby" against a circuit-breaker lockdown in March 2020. I am not even among the probably millions of Americans who have met or seen in person any president of the United States. In fact, in several interviews I gave at that time in the USA and elsewhere, I agreed with the lockdown because the situation was uncertain, and I urged that more reliable data be obtained promptly to deal with this major crisis. As stated even in the BuzzFeed News feature cited by Yamey, the scientists in that group had varying perspectives regarding lockdown, but I believe we all shared the wish to obtain better data to guide the most efficient public health response. Sadly, this perspective was not heard. Yamey clearly misrepresents the current study when he says that “the study tries to show the GBD signatories were somehow superior to the scientists who signed the JSM”. My paper clearly states that both documents were supported by many stellar scientists. I continue to believe that both GBD and JSM were supported by stellar scientists, including Yamey.
In composing my response, I made a typo regarding the Kardashian Index: when someone does not use Twitter, it goes to zero, not infinity.