Most conclusions in psychology are based on the results of null hypothesis significance testing (NHST; Cumming et al., 2007; Hubbard & Ryan, 2000; Sterling, 1959; Sterling, Rosenbaum, & Weinkam, 1995). Therefore, it is important that NHST is performed correctly and that NHST results are reported accurately. However, there is evidence that many reported p-values do not match their accompanying test statistic and degrees of freedom (Bakker & Wicherts, 2011; Bakker & Wicherts, 2014; Berle & Starcevic, 2007; Caperos & Pardo, 2013; Garcia-Berthou & Alcaraz, 2004; Veldkamp, Nuijten, Dominguez-Alvarez, Van Assen, & Wicherts, 2014; Wicherts, Bakker, & Molenaar, 2011). These studies highlighted that roughly half of all published empirical psychology articles using NHST contained at least one inconsistent p-value and that around one in seven articles contained a gross inconsistency, in which the reported p-value was significant and the computed p-value was not, or vice versa.

This alarmingly high error rate can have large consequences. Reporting inconsistencies could affect whether an effect is perceived to be significant or not, which can influence substantive conclusions. If a result is inconsistent, it is often impossible (in the absence of raw data) to determine whether the test statistic, the degrees of freedom, or the p-value was incorrectly reported. If the test statistic is incorrect and it is used to calculate the effect size for a meta-analysis, this effect size will be incorrect as well, which could affect the outcome of the meta-analysis (Bakker & Wicherts, 2011; in fact, the misreporting of all kinds of statistics is a problem for meta-analyses; Gotzsche, Hrobjartsson, Maric, & Tendal, 2007; Levine & Hullett, 2002). Incorrect p-values could affect the outcome of tests that analyze the distribution of p-values, such as the p-curve (Simonsohn, Nelson, & Simmons, 2014) and p-uniform (Van Assen, Van Aert, & Wicherts, 2014). Moreover, Wicherts et al. (2011) reported that a higher prevalence of reporting errors was associated with a failure to share data upon request.

Even though reporting inconsistencies can be honest mistakes, they have also been categorized as one of several fairly common questionable research practices (QRPs) in psychology (John, Loewenstein, & Prelec, 2012). Interestingly, psychologists’ responses to John et al.’s survey fitted a Guttman scale reasonably well. This suggests that a psychologist’s admission to a QRP that is less often admitted to by others usually implies his or her admission to QRPs with a higher admission rate in the entire sample. Given that rounding down p-values close to .05 was one of the QRPs with relatively low admission rates, the frequency of misreported p-values could provide information on the frequency of more common QRPs. The results of John et al. would therefore imply that a high prevalence of reporting errors (or, more specifically, of p-values incorrectly rounded down to below .05) can be seen as an indicator of the use of other QRPs, such as failing to report all dependent variables, collecting more data after checking whether results are significant, failing to report all conditions, and stopping data collection after achieving the desired result. In contrast to many other QRPs on John et al.’s list, misreported p-values that bear on significance can be readily detected from an article’s text.

Previous research found a decrease in negative results (Fanelli, 2012) and an increase in reporting inconsistencies (Leggett, Thomas, Loetscher, & Nicholls, 2013), suggesting that QRPs are on the rise. On the other hand, it has been found that the number of published corrections to the literature did not change over time, suggesting no change in QRPs over time (Fanelli, 2013, 2014). Studying the prevalence of misreported p-values over time could shed light on possible changes in the prevalence of QRPs.

Besides possible changes in QRPs over time, some evidence suggests that the prevalence of QRPs may differ between subfields of psychology. Leggett et al. (2013) recently studied reporting errors in two main psychology journals in 1965 and 2005. They found that the increase in reporting inconsistencies over the years was higher in the Journal of Personality and Social Psychology (JPSP), the flagship journal of social psychology, than in the Journal of Experimental Psychology: General (JEPG). This is in line with the finding of John et al. (2012) that social psychologists admit to more QRPs, find them more applicable to their field, and find them more defensible as compared to other subgroups in psychology (but see also Fiedler & Schwarz, 2015, on this issue). However, the number of journals and test results in Leggett et al.’s study was rather limited and so it is worthwhile to consider more data before drawing conclusions with respect to differences in QRPs between subfields in psychology.

The current evidence for reporting inconsistencies is based on relatively small samples of articles and p-values. The goal of our current study was to evaluate reporting errors in a large sample of more than a quarter million p-values retrieved from eight flagship journals covering the major subfields in psychology. Because manually checking errors is time-consuming work, we present and validate an automated procedure in the R package statcheck (Epskamp & Nuijten, 2015). The validation of statcheck is described in Appendix A.

We used statcheck to investigate the overall prevalence of reporting inconsistencies and compare our findings to findings in previous studies. Furthermore, we investigated whether there has been an increase in inconsistencies over the period 1985 to 2013, and, on a related note, whether there has been any increase in the number of NHST results in general and per paper. We also documented any differences in the prevalence and increase of reporting errors between journals. Specifically, we studied whether articles in social psychology contain more inconsistencies than articles in other subfields of psychology.

Method

“statcheck”

To evaluate the prevalence of reporting errors, we used the automated procedure statcheck (version 1.0.1; Epskamp & Nuijten, 2015). This freely available R package (R Core Team, 2014) extracts statistical results and recalculates p-values based on reported test statistics and their degrees of freedom. Roughly, the underlying procedure executes the following four steps.

  • Step 1: First, statcheck converts a PDF or HTML file to a plain text file. The conversion from PDF to plain text can sometimes be problematic, because some journal publishers use images of signs such as “<”, “>”, or “=”, instead of the actual character. These images are not converted to the text file. HTML files do not have such problems and typically render accurate plain text files.

  • Step 2: From the plain text file, statcheck extracts t, F, r, χ2, and Z statistics, with the accompanying degrees of freedom (df) and p-value. Since statcheck is an automated procedure, it can only search for prespecified strings of text. Therefore, we chose to let statcheck search for results that are reported completely and exactly in APA style (American Psychological Association, 2010). A general example would be “test statistic (df1, df2) =/</> …, p =/</> …”. Two more specific examples are “t(37) = −4.93, p < .001” and “χ2(1, N = 226) = 6.90, p < .01.” statcheck takes different spacing into account and also reads results that are reported as nonsignificant (ns). On the other hand, it does not read results that deviate from the APA template. For instance, statcheck overlooks cases in which a result includes an effect size estimate between the test statistic and the p-value (e.g., “F(2, 70) = 4.48, MSE = 6.61, p < .02”) or in which two results are combined into one sentence (e.g., “F(1, 15) = 19.9 and 5.16, p < .001 and p < .05, respectively”). These restrictions usually also imply that statcheck will not read results in tables, since these are often incompletely reported (see Appendix A for a more detailed overview of what statcheck can and cannot read).

  • Step 3: statcheck uses the extracted test statistics and degrees of freedom to recalculate the p-value. By default, all tests are assumed to be two-tailed. We compared p-values recalculated by statcheck in R version 3.1.2 and in Microsoft Office Excel 2013 and found that the results of both programs were consistent up to the tenth decimal place. This indicates that the underlying algorithms used to approximate the distributions are not specific to the R environment.

  • Step 4: Finally, statcheck compares the reported and recalculated p-values. Whenever the reported p-value is inconsistent with the recalculated p-value, the result is marked as an inconsistency. If the reported p-value is inconsistent with the recalculated p-value and the inconsistency changes the statistical conclusion (assuming α = .05), the result is marked as a gross inconsistency. To take one-sided tests into account, statcheck scans the whole text of the article for the words “one-tailed,” “one-sided,” or “directional.” If a result is initially marked as inconsistent, but the article mentions one of these words and the result would have been consistent if it were one-sided, then the result is marked as consistent. Note that statcheck does not take into account p-values that are adjusted for multiple testing (e.g., a Bonferroni correction). P-values adjusted for multiple comparisons that are higher than the recalculated p-value can therefore erroneously be marked as inconsistent. However, when we automatically searched our sample of 30,717 articles, we found that only 96 articles reported the string “Bonferroni” (0.3 %) and nine articles reported the string “Huynh-Feldt” or “Huynh Feldt” (0.03 %). We conclude from this that corrections for multiple testing are rarely used and will not materially distort the conclusions of our study. A minimal code sketch illustrating steps 2 through 4 is given after this list.
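The sketch below illustrates steps 2 through 4 for a single t-test result in base R. It is a simplified stand-in rather than statcheck’s actual implementation: the regular expression, the handling of the comparison sign, and the gross-inconsistency check are reduced to their essentials, and the variable names are ours.

```r
# Simplified illustration of steps 2-4 for a single t-test
# (not statcheck's actual implementation)
txt <- "The effect was reliable, t(37) = -4.93, p < .001, as predicted."

# Step 2: extract the degrees of freedom, test statistic, comparison sign,
# and reported p-value with a (much simplified) regular expression
m <- regmatches(txt, regexec(
  "t\\(([0-9]+)\\) *= *(-?[0-9.]+), *p *([<>=]) *([0-9.]+)", txt))[[1]]
df       <- as.numeric(m[2])   # 37
t_stat   <- as.numeric(m[3])   # -4.93
comp     <- m[4]               # "<"
p_report <- as.numeric(m[5])   # .001

# Step 3: recompute the two-tailed p-value from the test statistic
p_calc <- 2 * pt(abs(t_stat), df = df, lower.tail = FALSE)

# Step 4: compare the reported and recomputed p-values
consistent <-
  if (comp == "<") p_calc < p_report else
  if (comp == ">") p_calc > p_report else
  round(p_calc, nchar(m[5]) - 1) == p_report   # "=": match reported precision

# Gross inconsistency: the error changes the conclusion at alpha = .05
reported_sig <- comp %in% c("<", "=") & p_report <= .05
computed_sig <- p_calc <= .05
gross <- !consistent & (reported_sig != computed_sig)

list(recomputed_p = p_calc, inconsistent = !consistent, gross = gross)
```

In the package itself, the same logic is applied to t, F, r, χ2, and Z statistics, and the recomputed p-value is treated as a range rather than a point to accommodate rounding of the reported test statistic, as described next.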

Similar to Bakker and Wicherts (2011), statcheck takes numeric rounding into account. Consider the following example: t(28) = 2.0, p < .05. The recalculated p-value that corresponds to a t-value of 2.0 with 28 degrees of freedom is .055, which appears to be inconsistent with the reported p-value of < .05. However, a reported t-value of 2.0 could correspond to any rounded value between 1.95 and 2.05, with a corresponding range of p-values between .0498 and .0613, which means that the reported p < .05 is not considered inconsistent.
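This rounding check can be reproduced in a few lines of base R; the bounds 1.95 and 2.05 are simply the values that round to the reported t-value of 2.0.

```r
# Range of p-values consistent with a reported t(28) = 2.0 once rounding of
# the test statistic is taken into account
df <- 28
t_bounds <- c(2.05, 1.95)                             # values that round to 2.0
p_bounds <- 2 * pt(t_bounds, df = df, lower.tail = FALSE)
round(p_bounds, 4)                                    # approximately 0.0498 0.0613
# Part of this range lies below .05, so "p < .05" is not flagged as inconsistent
```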

Furthermore, statcheck considers p-values reported as p = .05 to be significant. We inspected 10 % of the 2,473 instances in our sample in which a result was reported as “p = .05” and checked whether these p-values were interpreted as significant. In cases where multiple p-values from the same article were selected, we only included the p-value that was drawn first, to avoid dependencies in the data. Our final sample consisted of 236 instances in which “p = .05” was reported, and 94.3 % of these p-values were interpreted as significant. We therefore decided to count p-values reported as “p = .05” as indicating that the authors presented the result as significant.

The main advantage of statcheck is that it enables searching for reporting errors in very large samples, which would be infeasible by hand. Furthermore, manual checking is subject to human error, which statcheck eliminates. The disadvantage of statcheck is that it is not as comprehensive as a manual procedure: it will miss results that deviate from standard reporting and results in tables, and it does not take adjustments of p-values into account. Consequently, statcheck will miss some reported results and will incorrectly earmark some correct p-values as reporting errors. Even though it is not feasible to create an automated procedure that is as accurate as a manual search in verifying the correctness of results, it is important to exclude the possibility that statcheck yields a biased depiction of the true inconsistency rate. To avoid bias in the estimated prevalence of reporting errors, we performed a validity study of statcheck, in which we compared statcheck’s results with those of Wicherts, Bakker, and Molenaar (2011), who performed a manual search for and verification of reporting errors in a sample of 49 articles.

The validity study showed that statcheck read 67.5 % of the results that were manually extracted. Most of the results that statcheck missed were either reported with an effect size between the test statistic and the p-value (e.g., F(2, 70) = 4.48, MSE = 6.61, p < .02; 201 instances in total) or reported in a table (150 instances in total). Furthermore, Wicherts et al. found that 49 of 1,148 p-values were inconsistent (4.3 %) and ten of 1,148 p-values were grossly inconsistent (.9 %), whereas statcheck (with automatic one-tailed test detection) found that 56 of 775 p-values were inconsistent (7.2 %) and eight of 775 p-values were grossly inconsistent (1.0 %). The higher inconsistency rate found by statcheck was mainly due to our decision to count p = .000 as incorrect (a p-value cannot be exactly zero), whereas this was counted as correct by Wicherts et al. If we do not include these 11 inconsistencies due to p = .000, statcheck finds an inconsistency percentage of 5.8 % (45 of 775 results), 1.5 percentage points higher than in Wicherts et al. This remaining difference arose because statcheck did not take into account 11 corrections for multiple testing, whereas Wicherts et al. did. The inter-rater reliability between the manual coding in Wicherts et al. and the automatic coding in statcheck was .76 for the inconsistencies and .89 for the gross inconsistencies. Since statcheck slightly overestimated the prevalence of inconsistencies in this sample of papers, we conclude that statcheck can render slightly different inconsistency rates than a search by hand. Therefore, the results of statcheck should be interpreted with care. For details of the validity study and an explanation of all discrepancies between statcheck and Wicherts et al., see Appendix A.

Sample

A pilot study of social science journals in the Web of Science citation database showed that few journals outside psychology use APA reporting style; we therefore limited our sample to psychology journals. As explained above, statcheck cannot always read results from articles in PDF format due to problems in the conversion from PDF to plain text. These problems do not occur in articles in HTML format. Therefore, to obtain the most reliable statcheck results, we restricted our sample to articles that were available in HTML format. The time span over which we downloaded articles depended on the year a journal started to publish articles in HTML. We collected the data in 2014, so we included articles up until 2013 to ensure complete sets of articles for each year. Via EBSCOhost, we manually downloaded all articles in HTML from 1985 to 2013 from five flagship psychology journals that represent five main subdisciplines: Journal of Applied Psychology (JAP; Applied Psychology), Journal of Consulting and Clinical Psychology (JCCP; Clinical Psychology), Developmental Psychology (DP; Developmental Psychology), Journal of Experimental Psychology: General (JEPG; Experimental Psychology), and Journal of Personality and Social Psychology (JPSP; Social Psychology). These journals are published by the APA and follow the APA reporting guidelines. Furthermore, we manually downloaded all articles in HTML from two journals in general psychology: Psychological Science (PS; 2003–2013) and Frontiers in Psychology (FP; 2010–2013). In this manual download we did not include retractions, errata, or editorials. Finally, we automatically downloaded all HTML articles with the subject “psychology” from the Public Library of Science (PLoS; 2000–2013), using the rplos R package (Chamberlain, Boettiger, & Ram, 2014). In this automatic process we did not exclude retractions, errata, or editorials. The final sample consisted of 30,717 articles. The number of downloaded articles per journal is given in Table 1. To obtain reporting error prevalences for each subfield and for psychology as a whole, statcheck was run on all downloaded articles.
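As a rough sketch of this final step, the snippet below shows how a folder of downloaded HTML articles could be screened. The directory path is a placeholder, and the directory-level helper and the output column names are taken from the statcheck documentation for the version we used; they should be checked against the installed version.

```r
# install.packages("statcheck")
library(statcheck)

# Core function: extract and check APA-style NHST results from raw text
statcheck("t(37) = -4.93, p < .001")

# Directory-level helper for a folder of HTML articles; returns one row per
# extracted NHST result (path is a placeholder)
results <- checkHTMLdir("articles/html/", subdir = TRUE)

# Tabulate inconsistent and grossly inconsistent results
# (column names as documented for version 1.0.1; may differ in other releases)
table(results$Error)
table(results$DecisionError)
```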

Table 1 Specifications of the years from which HTML articles were available, the number of downloaded articles per journal, the number of articles with APA-reported null-hypothesis significance testing (NHST) results, the number of APA-reported NHST results, and the median number of APA-reported NHST results per article

Statistical analyses

Our population of interest is all APA-reported NHST results in the full text of the articles from the eight selected flagship journals in psychology from 1985 until 2013. Our sample includes this entire population, so inferential statistics, which serve to generalize from a sample to a larger population, are not needed. We therefore restricted ourselves to descriptive statistics; every documented difference or trend is a difference or trend in the entire population or in subpopulations defined by journal. For linear trends we report unstandardized regression weights and percentages of variance explained to aid interpretation.
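For concreteness, a linear trend of this kind can be summarized with an ordinary least-squares regression of the yearly percentage on publication year; the percentages in the sketch below are made-up values, used only to show where b and R2 come from.

```r
# Illustrative only: summarizing a yearly percentage with an unstandardized
# regression weight (b) and R^2 (the percentages below are made-up values)
trend <- data.frame(
  year    = 1985:1990,
  percent = c(52.1, 53.4, 55.0, 54.2, 56.8, 57.5)
)
fit <- lm(percent ~ year, data = trend)
b   <- unname(coef(fit)["year"])    # unstandardized regression weight
r2  <- summary(fit)$r.squared       # proportion of variance explained
c(b = b, R2 = r2)
```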

Results

We report the prevalence of reporting inconsistencies at different levels. We document the general prevalence of NHST results and present the percentage of articles that use NHST per journal and over the years. Because only the five APA journals provided HTML articles for all years from 1985 to 2013, the overall trends are reported for the APA journals only and do not include results from Psychological Science, PLoS, and Frontiers, which cover only recent years. Reporting inconsistencies are presented both at the level of the article and at the level of the individual p-value: the percentage of articles with at least one inconsistency and the average percentage of p-values within an article that are inconsistent, respectively. We also describe differences between journals and trends over time.

Percentage of articles with null-hypothesis significance testing (NHST) results

Overall, statcheck detected NHST results in 54.4 % of the articles, but this percentage differed per journal. The percentage of articles with at least one detected NHST result ranged from 24.1 % in PLoS to 85.1 % in JPSP (see Table 1). This can reflect a difference in the number of null-hypothesis significance tests performed, but it could also reflect a difference in the rigor with which the APA reporting standards are followed or in how often tables are used to report results. Figure 1 shows the percentage of downloaded articles that contained NHST results over the years, averaged over all APA journals (DP, JCCP, JEPG, JPSP, and JAP; dark gray panel) and split up per journal (light gray panels for the APA journals and white panels for the non-APA journals). All journals showed an increase in the percentage of articles with APA-reported NHST results over the years, except for DP, in which this rate remained roughly constant, and FP, in which it declined. Appendix B lists the number of articles with NHST results over the years per journal.

Fig. 1
figure 1

The percentage of articles with American Psychological Association (APA)-reported null-hypothesis significance testing (NHST) results over the years, averaged over all APA journals (Developmental Psychology (DP), Journal of Consulting and Clinical Psychology (JCCP), Journal of Experimental Psychology: General (JEPG), Journal of Personality and Social Psychology (JPSP), and Journal of Applied Psychology (JAP); dark gray panel), and split up per journal – light gray panels for the APA journals and white panels for the non-APA journals (Psychological Science (PS), Frontiers in Psychology (FP), and Public Library of Science (PLoS)). For each trend we report the unstandardized linear regression coefficient (b) and the coefficient of determination (R2) of the linear trend

Number of published NHST results over the years

We inspected the development of the average number of APA-reported NHST results per article, given that the article contained at least one detectable NHST result (see Fig. 2). Note that in 1985 the APA manual already required statistics to be reported in the manner that statcheck can read (American Psychological Association, 1983). Hence, any change in retrieved NHST results over time should reflect the actual change in the number of NHST results reported in articles rather than any change in the capability of statcheck to detect results.

Fig. 2
figure 2

The average number of American Psychological Association (APA)-reported null-hypothesis significance testing (NHST) results per article that contains NHST results over the years, averaged over all APA journals (Developmental Psychology (DP), Journal of Consulting and Clinical Psychology (JCCP), Journal of Experimental Psychology: General (JEPG), Journal of Personality and Social Psychology (JPSP), and Journal of Applied Psychology (JAP); dark gray panel), and split up per journal (light gray panels for the APA journals and white panels for the non-APA journals – Psychological Science (PS), Frontiers in Psychology (FP), and Public Library of Science (PLoS)). For each trend we report the unstandardized linear regression coefficient (b) and the coefficient of determination (R2) of the linear trend

Across all APA journals, the number of NHST results per article increased over this 29-year period (b = .25, R2 = .68), with the strongest increases in JEPG and JPSP. These journals went from an average of around 10–15 NHST results per article in 1985 to as many as around 30 results per article in 2013. The mean number of NHST results per article remained relatively stable in DP, JCCP, and JAP; over the years, the articles with NHST results in these journals contained an average of ten NHST results. It is hard to say anything definite about trends in PS, FP, and PLoS, since we have only a limited number of years for these journals (the earliest years for which we have information are 2003, 2010, and 2004, respectively). Both the increase in the percentage of articles that report NHST results and the increased number of NHST results per article show that NHST is increasingly popular in psychology. It is therefore important that the results of these tests are reported correctly.

General prevalence of inconsistencies

Across all journals and years 49.6 % of the articles with NHST results contained at least one inconsistency (8,273 of the 16,695 articles) and 12.9 % (2,150) of the articles with NHST results contained at least one gross inconsistency. Furthermore, overall 9.7 % (24,961) of the p-values were inconsistent, and 1.4 % (3,581) of the p-values were grossly inconsistent. We also calculated the percentage of inconsistencies per article and averaged these percentages over all articles. We call this the “(gross) inconsistency rate.” Across journals, the inconsistency rate was 10.6 % and the gross inconsistency rate was 1.6 %.
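The distinction between the pooled percentage of inconsistent p-values (9.7 %) and the inconsistency rate (10.6 %) is subtle: the former pools all p-values, whereas the latter weights every article equally. The toy example below, with hypothetical data, shows how the two measures can diverge.

```r
# Hypothetical data: article "a" has four p-values (one inconsistent),
# article "b" has two p-values (none inconsistent)
res <- data.frame(
  article      = c("a", "a", "a", "a", "b", "b"),
  inconsistent = c(TRUE, FALSE, FALSE, FALSE, FALSE, FALSE)
)

# Percentage of inconsistent p-values, pooled over all articles
mean(res$inconsistent) * 100                                # 16.7

# Inconsistency rate: average of the within-article percentages
mean(tapply(res$inconsistent, res$article, mean)) * 100     # (25 + 0) / 2 = 12.5
```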

Prevalence of inconsistencies per journal

We calculated the prevalence of inconsistencies per journal at two levels. First, we calculated the percentage of articles with NHST results per journal that contained at least one (gross) inconsistency. Second, we calculated the inconsistency rate per journal. The top panel of Fig. 3 shows the average percentage of articles with at least one (gross) inconsistency, per journal. The journals are ordered from the highest to the lowest percentage of articles with an inconsistency. JPSP showed the highest prevalence of articles with at least one inconsistency (57.6 %), followed by JEPG (54.8 %). The journals with the lowest percentages of articles with an inconsistency were PS and JAP (39.7 % and 33.6 %, respectively). JPSP also had the highest percentage of articles with at least one gross inconsistency (15.8 %), this time followed by DP (15.2 %). PS had the lowest percentage of articles with gross inconsistencies (6.5 %).

Fig. 3
figure 3

The average percentage of articles within a journal with at least one (gross) inconsistency and the average percentage of (grossly) inconsistent p-values per article, split up by journal. Inconsistencies are depicted in white and gross inconsistencies in grey. For the journals Journal of Personality and Social Psychology (JPSP), Journal of Experimental Psychology: General (JEPG), Developmental Psychology (DP), Frontiers in Psychology (FP), Public Library of Science (PLoS), Journal of Consulting and Clinical Psychology (JCCP), Psychological Science (PS), and Journal of Applied Psychology (JAP), respectively, the number of articles with null-hypothesis significance testing (NHST) results is 4,346, 821, 2,607, 702, 2,487, 2,413, 1,681, and 1,638, and the average number of NHST results in an article is 23.4, 23.0, 14.4, 14.5, 12.7, 11.4, 9.3, and 9.2

The inconsistency rate shows a different pattern from the percentage of articles with at least one inconsistency. PLoS showed the highest percentage of inconsistent p-values per article overall, followed by FP (14.0 % and 12.8 %, respectively). Furthermore, whereas JPSP was the journal with the highest percentage of articles with inconsistencies, it had one of the lowest probabilities that a given p-value in an article was inconsistent (9.0 %). This discrepancy is caused by a difference between journals in the number of p-values per article: articles in JPSP contain many p-values (see Table 1, right column). Hence, notwithstanding the low probability of any single p-value in an article being inconsistent, the probability that an article contained at least one inconsistent p-value was relatively high. The gross inconsistency rate was quite similar across journals, except for JAP, in which it was relatively high (2.5 %).

Prevalence of inconsistencies over the years

If gross inconsistencies are indicative of QRPs and QRPs have increased over the years, we would expect an increase of gross inconsistencies over the years (see also Leggett et al., 2013). To study this, we inspected the gross inconsistency rate in journals over time. The results are shown in Fig. 4.

Fig. 4
figure 4

Average percentage of inconsistencies (open circles) and gross inconsistencies (solid circles) in an article over the years averaged over all American Psychological Association (APA) journals (Developmental Psychology (DP), Journal of Consulting and Clinical Psychology (JCCP), Journal of Experimental Psychology: General (JEPG), Journal of Personality and Social Psychology (JPSP), and Journal of Applied Psychology (JAP); dark gray panel) and split up per journal (light gray panels for the APA journals and white panels for non-APA journals – Psychological Science (PS), Frontiers in Psychology (FP), and Public Library of Science (PLoS)). The unstandardized regression coefficient b and the coefficient of determination R2 of the linear trend are shown per journal for both inconsistencies (incons) and gross inconsistencies (gross) over the years

The number of (gross) inconsistencies has decreased or remained stable over the years across the APA journals. In DP, JCCP, JEPG, and JPSP the percentage of inconsistencies in an article has decreased over the years. For JAP there is a positive (but very small) regression coefficient for year, indicating an increasing error rate, but the R2 is close to zero. The same pattern held for the prevalence of gross inconsistencies over the years: DP, JCCP, and JPSP have shown a decrease in gross inconsistencies, whereas in JEPG and JAP the R2 is very small and the prevalence seems to have remained practically stable. The trends for PS, FP, and PLoS are hard to interpret given the limited number of years of coverage. Overall, it seems that, contrary to the evidence suggesting that the use of QRPs could be on the rise (Fanelli, 2012; Leggett et al., 2013), neither the inconsistencies nor the gross inconsistencies have increased over time. If anything, the current results reflect a decrease in reporting error prevalences over the years.

We also looked at the development of inconsistencies at the article level. More specifically, we looked at the percentage of articles with at least one inconsistency over the years, averaged over all APA journals (DP, JCCP, JEPG, JPSP, and JAP; dark gray panel in Fig. 5) and split up per journal (light gray panels for the APA journals and white panels for the non-APA journals in Fig. 5). The results show that the percentage of articles with NHST results that have at least one inconsistency has increased in JEPG and JPSP, which is again associated with the increase in the number of NHST results per article in these journals (see Fig. 2). In DP and JCCP, there was a decrease in articles with an inconsistency. For JAP there is no clear trend; the R2 is close to zero. A more general trend is evident in the prevalence of articles with gross inconsistencies: in all journals except PS and PLoS, the percentage of articles with NHST results that contain at least one gross inconsistency has been decreasing. Note that the trends for PS, FP, and PLoS are unstable due to the limited number of years for which we have data. Overall, it seems that, even though the prevalence of articles with inconsistencies has increased in some journals, the prevalence of articles with gross inconsistencies has declined over the studied period.

Fig. 5
figure 5

Percentage of articles with at least one inconsistency (open circles) or at least one gross inconsistency (solid circles), split up by journal. The unstandardized regression coefficient b and the coefficient of determination R2 of the linear trend are shown per journal for both inconsistencies (incons) and gross inconsistencies (gross) over the years. APA American Psychological Association, DP Developmental Psychology, JCCP Journal of Consulting and Clinical Psychology, JEPG Journal of Experimental Psychology: General, JPSP Journal of Personality and Social Psychology, JAP Journal of Applied Psychology, PS Psychological Science, FP Frontiers in Psychology, PLoS Public Library of Science

Prevalence of gross inconsistencies in results reported as significant and nonsignificant

We inspected the gross inconsistencies in more detail by comparing the percentage of gross inconsistencies in p-values reported as significant and p-values reported as nonsignificant. Of all p-values reported as significant, 1.56 % were grossly inconsistent, whereas only .97 % of all p-values reported as nonsignificant were grossly inconsistent, indicating that a p-value reported as significant is more likely to be a gross inconsistency than a p-value reported as nonsignificant. We also inspected the prevalence of gross inconsistencies in significant and nonsignificant p-values per journal (see Fig. 6). In all journals, the prevalence of gross inconsistencies is higher in significant p-values than in nonsignificant p-values (except for FP, in which the prevalence is equal in the two types of p-values). This difference in prevalence is highest in JCCP (1.03 percentage points), JAP (.97 percentage points), and JPSP (.83 percentage points), followed by JEPG (.51 percentage points) and DP (.26 percentage points), and smallest in PLoS (.19 percentage points) and FP (.00 percentage points).

Fig. 6
figure 6

The percentage of gross inconsistencies in p-values reported as significant (white bars) and nonsignificant (gray bars), split up by journal. For the journals Journal of Applied Psychology (JAP), Journal of Consulting and Clinical Psychology (JCCP), Developmental Psychology (DP), Public Library of Science (PLoS), Psychological Science (PS), Frontiers in Psychology (FP), Journal of Personality and Social Psychology (JPSP), and Journal of Experimental Psychology: General (JEPG), respectively, the total number of significant p-values was 11,654, 21,120, 29,962, 22,071, 12,482, 7,377, 78,889, and 14,084, and the total number of nonsignificant p-values was 3,119, 5,558, 6,698, 9,134, 2,936, 2,712, 17,868, and 4,407

It is hard to interpret the percentages of gross inconsistencies in significant and nonsignificant p-values substantively, since they depend on several factors, such as the specific p-value: a p-value of .06 seems more likely to be reported as smaller than .05 than a p-value of .78 is. Because journals may differ in the distribution of specific p-values, we should also be careful in comparing gross inconsistencies in p-values reported as significant across journals. Furthermore, without the raw data it is impossible to determine whether it is the p-value that is erroneous, or the test statistic or degrees of freedom. As an example of the latter case, a simple typographical error such as “F(2,56) = 1.203, p < .001” instead of “F(2,56) = 12.03, p < .001” produces a gross inconsistency without the p-value being incorrect. Although we cannot interpret the absolute percentages and their differences, the finding that gross inconsistencies are more likely in p-values presented as significant than in p-values presented as nonsignificant could indicate a systematic bias and is reason for concern.

Figure 7 shows the prevalence of gross inconsistencies in significant (solid line) and nonsignificant (dotted line) p-values over time, averaged over all journals. The size of the circles represents the total number of significant (open circles) and nonsignificant (solid circles) p-values in that particular year. Note that we only have information for PS, FP, and PLoS since 2003, 2010, and 2004, respectively. The prevalence of gross inconsistencies in significant p-values seems to decline slightly over the years (b = −.04, R2 = .65). The prevalence of gross inconsistencies in nonsignificant p-values does not show any change (b = .00, R2 = .00). In short, the potential systematic bias leading to more gross inconsistencies in significant results seems to be present in all journals except FP, but there is no evidence that this bias is increasing over the years.

Fig. 7
figure 7

The percentage of gross inconsistencies in p-values reported as significant (solid line) and nonsignificant (dotted line), over the years, averaged over journals. The size of the open and solid circles represents the number of significant and nonsignificant p-values in that year, respectively

To investigate the consequence of these gross inconsistencies, we compared the percentage of significant results in the reported p-values with the percentage of significant results in the computed p-values. Averaged over all journals and years, 76.6 % of all reported p-values were significant. However, only 74.4 % of all computed p-values were significant, which means that the percentage of significant findings in the investigated literature is overestimated by 2.2 percentage points due to gross inconsistencies.
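This comparison boils down to the difference between two proportions; a minimal sketch with hypothetical significance flags per result:

```r
# Hypothetical flags per NHST result: significance as reported vs. as recomputed
reported_sig <- c(TRUE, TRUE, TRUE, FALSE)
computed_sig <- c(TRUE, TRUE, FALSE, FALSE)

# Overestimation of the percentage of significant findings, in percentage points
(mean(reported_sig) - mean(computed_sig)) * 100   # 25 in this toy example
```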

Prevalence of inconsistencies as found by other studies

Our study can be considered a large replication of several previous studies (Bakker & Wicherts, 2011; Bakker & Wicherts, 2014; Berle & Starcevic, 2007; Caperos & Pardo, 2013; Garcia-Berthou & Alcaraz, 2004; Veldkamp et al., 2014; Wicherts et al., 2011). Table 2 shows the prevalence of inconsistent p-values as determined by our study and previous studies.

Table 2 Prevalence of inconsistencies in the current study and in earlier studies

Table 2 shows that the estimated percentage of inconsistent results can vary considerably between studies, ranging from 4.3 % of the results (Wicherts et al., 2011) to 14.3 % of the results (Berle & Starcevic, 2007). The median rate of inconsistent results is 11.1 % (1.4 percentage points higher than the 9.7 % in the current study). The percentage of gross inconsistencies ranged from .4 % (Garcia-Berthou & Alcaraz, 2004) to 2.3 % (Caperos & Pardo, 2013), with a median of 1.1 % (.3 percentage points lower than the 1.4 % found in the current study). The percentage of articles with at least one inconsistency ranged from as low as 10.1 % (Berle & Starcevic, 2007) to as high as 63.0 % (Veldkamp et al., 2014), with a median of 46.7 % (2.9 percentage points lower than the estimated 49.6 % in the current study). Finally, the lowest percentage of articles with at least one gross inconsistency is 2.6 % (Berle & Starcevic, 2007) and the highest is 20.5 % (Veldkamp et al., 2014), with a median of 14.3 % (1.4 percentage points higher than the 12.9 % found in the current study).

Some of the differences in prevalence could be caused by differences in inclusion criteria. For instance, Bakker and Wicherts (2011) included only t, F, and χ2 values; Wicherts et al. (2011) included only t, F, and χ2 values for which the reported p-value was smaller than .05; Berle and Starcevic (2007) included only exactly reported p-values; and Bakker and Wicherts (2014) included only completely reported t and F values. Furthermore, two studies evaluated p-values in the medical field (Garcia-Berthou & Alcaraz, 2004) and in psychiatry (Berle & Starcevic, 2007) rather than in psychology. Lastly, there can be differences in which p-values are counted as inconsistent. For instance, the current study counts p = .000 as incorrect, whereas this was not the case in, for example, Wicherts et al. (2011; see also Appendix A).

Based on Table 2 we conclude that our study corroborates earlier findings. The prevalence of reporting inconsistencies is high: almost all studies find that roughly one in ten results is erroneously reported. Even though the percentage of results that is grossly inconsistent is lower, the studies show that a substantial percentage of published articles contain at least one gross inconsistency, which is reason for concern.

Discussion

In this paper we investigated the prevalence of reporting errors in eight major journals in psychology using the automated R package statcheck (Epskamp & Nuijten, 2015). Over half of the articles across these journals contained NHST results that statcheck was able to retrieve. Notwithstanding the many debates on the downsides of NHST (see, e.g., Fidler & Cumming, 2005; Wagenmakers, 2007), the use of NHST in psychology appears to have increased from 1985 to 2013 (see Figs. 1 and 2), although this increase may also reflect an increase in adherence to APA reporting standards. Our findings show that the prevalence of reporting inconsistencies in these journals is substantial. Roughly half of all articles with NHST results contained at least one inconsistency, and about 13 % contained a gross inconsistency that may have affected the statistical conclusion. At the level of individual p-values, we found that on average 10.6 % of the p-values in an article were inconsistent and 1.6 % were grossly inconsistent.

Contrary to what one would expect based on the suggestion that QRPs have been on the rise (Leggett et al., 2013), we found no general increase in the prevalence of inconsistent p-values in the studied journals from 1985 to 2013. When focusing on inconsistencies at the article level, we found an increase in the percentage of articles with NHST results containing at least one inconsistency only for JEPG and JPSP. Note that this was associated with clear increases in the number of reported NHST results per article in these journals. Furthermore, we did not find an increase in gross inconsistencies in any of the journals. If anything, we saw that the prevalence of articles with gross inconsistencies has been decreasing since 1985, albeit only slightly. We also found no increase in the prevalence of gross inconsistencies in p-values reported as significant relative to gross inconsistencies in p-values reported as nonsignificant. This is at odds with the notion that QRPs in general, and reporting errors in particular, have been increasing in recent decades. On the other hand, the stability or decrease in reporting errors is in line with research showing no trend in the proportion of published errata, which implies that there is also no trend in the proportion of articles with (reporting) errors (Fanelli, 2013).

Furthermore, we found no evidence that inconsistencies are more prevalent in JPSP than in other journals. The (gross) inconsistency rate was not highest in JPSP, and the prevalence of (gross) inconsistencies has been declining in JPSP, as it has in other journals. We did find that JPSP showed a higher prevalence of articles with at least one inconsistency than other journals, but this was associated with the higher number of NHST results per article in JPSP. Hence, our findings are not in line with the previous finding that JPSP shows a higher (increase in) inconsistency rate (Leggett et al., 2013). Since statcheck cannot distinguish between p-values pertaining to core hypotheses and p-values pertaining to, for example, manipulation checks, it is hard to interpret the differences in inconsistencies between fields and the implications of these differences. To warrant such a conclusion, the inconsistencies would have to be analyzed manually within the context of the papers containing them.

We also found that gross inconsistencies are more prevalent in p-values reported as significant than in p-values reported as nonsignificant. This could suggest a systematic bias favoring significant results, potentially leading to an excess of false positives in the literature. The higher prevalence of gross inconsistencies in significant p-values versus nonsignificant p-values was highest in JCCP, JAP, and JPSP, and lowest in PLOS and FP. Note again that we do not know the hypotheses underlying these p-values. It is possible that in some cases a nonsignificant p-value would be in line with a hypothesis and thus in line with the researcher’s predictions. Our data do not speak to the causes of this over-representation of significant results. Perhaps these p-values are intentionally rounded down (a practice that 20 % of the surveyed psychological researchers admitted to; John et al., 2012) to convince the reviewers and other readers of an effect. Or perhaps researchers fail to double check significantly reported p-values, because they are in line with their expectations, hence leaving such reporting errors more likely to remain undetected. It is also possible that the cause of the over-representation of falsely significant results lies with publication bias: perhaps researchers report significant p-values as nonsignificant just as often as vice versa, but in the process of publication, only the (accidentally) significant p-values get published.

There are two main limitations in our study. Firstly, by using the automated procedure statcheck to detect reporting inconsistencies, our sample did not include NHST results that were not reported exactly according to APA format or results reported in tables. However, based on the validity study and on earlier results (Bakker & Wicherts, 2011), we conclude that there does not seem to be a difference in the prevalence of reporting inconsistencies between results in APA format and results that are not exactly in APA format (see Appendix A). The validity study did suggest, however, that statcheck might slightly overestimate the number of inconsistencies. One reason could be that statcheck cannot correctly evaluate p-values that were adjusted for multiple testing. However, we found that these adjustments are rarely used. Notably, the term “Bonferroni” was mentioned in a meager 0.3 % of the 30,717 papers. This finding is interesting in itself; with a median number of 11 NHST results per paper, most papers report multiple p-values. Without any correction for multiple testing, this suggests that overall Type I error rates in the eight psychology journals are already higher than the nominal level of .05. Nevertheless, the effect of adjustments of p-values on the error estimates from statcheck is expected to be small. We therefore conclude that, as long as the results are interpreted with care, statcheck provides a good method to analyze vast amounts of literature to locate reporting inconsistencies. Future developments of statcheck could focus on taking into account corrections for multiple testing and results reported in tables or with effect sizes reported between the test statistic and p-value.

The second limitation of our study is that we chose to limit our sample to only a selection of flagship journals from several sub disciplines of psychology. It is possible that the prevalence of inconsistencies in these journals is not representative for the psychological literature. For instance, it has been found that journals with lower impact factors have a higher prevalence of reporting inconsistencies than high impact journals (Bakker & Wicherts, 2011). In this study we avoid conclusions about psychology in general, but treat the APA-reported NHST results in the full text of the articles from journals we selected as the population of interest (which made statistical inference superfluous). All conclusions in this paper therefore hold for the APA-reported NHST results in the eight selected journals. Nevertheless, the relatively high impact factors of these journals attest to the relevance of the current study.

There are several possible solutions to the problem of reporting inconsistencies. Firstly, researchers can check their own papers before submitting, either by hand or with the R package statcheck. Editors and reviewers could also make use of statcheck to quickly flag possible reporting inconsistencies in a submission, after which the flagged results can be checked by hand. This should reduce erroneous conclusions caused by gross inconsistencies. Checking articles with statcheck can also prevent such inconsistencies from distorting meta-analyses or analyses of p-value distributions (Simonsohn et al., 2014; Van Assen et al., 2014). This solution would be in line with the notion of Analytic Review (Sakaluk, Williams, & Biernat, 2014), in which a reviewer receives the data file and syntax of a manuscript to check if the reported analyses were actually conducted and reported correctly. One of the main concerns about Analytic Review is that it would take reviewers a lot of additional work. The use of statcheck in Analytic Review could reduce this workload substantially.

Secondly, the prevalence of inconsistencies might decrease if co-authors check each other’s work, a so-called “co-pilot model” (Wicherts, 2011). In recent research (Veldkamp et al., 2014) this idea has been investigated by relating the probability that a p-value was inconsistent to six different co-piloting activities (e.g., multiple authors conducting the statistical analyses). Veldkamp et al. did not find direct evidence for a relation between co-piloting and reduced prevalence of reporting errors. However, the investigated co-pilot activities did not explicitly include the actual checking of each other’s p-values, hence we do not rule out the possibility that reporting errors would decrease if co-authors double checked p-values.

Thirdly, it has been found that reporting errors are related to a reluctance to share data (Wicherts et al., 2011). Although a causal relation cannot be established, a solution might be to require open data by default, allowing exceptions only when explicit reasons are available for not sharing. Researchers would then know that their data could be checked and might feel more inclined to double-check the Results section before publishing the paper. Besides a possible reduction in reporting errors, sharing data has many other advantages. Sharing data, for instance, facilitates aggregating data for better effect size estimates, enables reanalysis of published articles, and increases the credibility of scientific findings (see also Nosek, Spies, & Motyl, 2012; Sakaluk et al., 2014; Wicherts, 2013; Wicherts & Bakker, 2012). The APA already requires data to be available for verification purposes (American Psychological Association, 2010, p. 240), many journals explicitly encourage data sharing in their policies, and the journal Psychological Science has started to award badges to papers for which the data are publicly available. Despite these policies and encouragements, raw data are still rarely available (Alsheikh-Ali, Qureshi, Al-Mallah, & Ioannidis, 2011). One objection that has been raised is that due to privacy concerns data cannot be made publicly available (see, e.g., Finkel, Eastwick, & Reis, 2015). Even though this can be a legitimate concern for some studies with particularly sensitive data, these are exceptions; the data of most psychology studies could be published without risk (Nosek et al., 2012).

To find a successful solution to the substantial prevalence of reporting errors, more research is needed on how reporting errors arise. It is important to know whether reporting inconsistencies are mere sloppiness or whether they are intentional. We found that the large majority of inconsistencies were not gross inconsistencies around p = .05, but inconsistencies that did not directly influence any statistical conclusion. Rounding a p-value of, say, .38 down to .37 does not seem to be in the direct interest of the researcher, suggesting that the majority of inconsistencies are accidental. On the other hand, we did find that the large majority of grossly inconsistent p-values were nonsignificant p-values reported as significant, rather than vice versa. This seems to indicate a systematic bias that causes an over-representation of significant results in the literature. Whatever the cause of this over-representation might be, there seems to be too much focus on obtaining “perfect,” significant results (see also Giner-Sorolla, 2012). Considering that the ubiquitous significance level of .05 is arbitrary, and that there is a vast amount of critique of NHST in general (see, e.g., Cohen, 1994; Fidler & Cumming, 2005; Krueger, 2001; Rozeboom, 1960; Wagenmakers, 2007), it should be clear that it is more important that p-values are reported accurately than that they are below .05.

There are many more interesting aspects of the 258,105 collected p-values that could be investigated, but these are beyond the scope of this paper. In another paper, the nonsignificant test results from this dataset are examined for false negatives (Hartgerink, van Assen, & Wicherts, 2015); the method used there indicates that two out of three papers with nonsignificant test results might contain false-negative results. This is only one of many possibilities, and we publicly share the anonymized data on our Open Science Framework page (https://osf.io/gdr4q/) to encourage further research.

Our study illustrates that science is done by humans, and humans easily make mistakes. However, the prevalence of inconsistent p-values in eight major journals in psychology has generally been stable over the years, or even declining. Hopefully, statcheck can contribute to further reducing the prevalence of reporting inconsistencies in psychology.