“We publish this Note at this time because in the years that have passed since our original publication, and even now, the 1959 note has been quoted as a worthy piece of evidence. We ask to be spared the further embarrassment of having that earlier work cited in the reputable literature, and we hope we can spare other authors the labors of attempting to rationalize our aberrant data.” (Goldstein & Eastwood, 1966)
The withdrawal note above reflects an issue that remains relevant: once published, a scientific article continues to exert influence on the scientific community even if it is later proven to contain errors that cast doubt on its conclusions, invalidating them totally or partially.
What is most likely the first documented withdrawal of a scientific work appeared in 1756 (Wilfon, 1756), followed over 150 years later by that of Whelden (Whelden, 1927) and, over time, by many others:
- unreproducible research of questionable scientific value persisted for decades before disappearing from the literature, although it was demonstrated to be wrong shortly after its publication (Deichmann & Müller-Hill, 1998);
- spectacular results were based on either a laboratory error or an attempt by a researcher to report positive preliminary results (Culliton, 1974);
- discoveries presented in an article with over 1000 citations in Google Scholar were questioned one month after publication, following an inspection that found severe problems of research methodology and that today would lead to very rapid rejection or even make the article impossible to publish (Maddox et al., 1988);
- for the first time, in addition to the retraction of published papers, the perpetrator of a scientific fraud (the Poehlman case) was sentenced to prison (Tilden, 2010).
Research misconduct and questionable research practices (QRP) in various forms have been reported within scientific communities at widely varying rates (from 1.97% up to 72%) by several studies (Baerlocher et al., 2010; Fanelli, 2009; Godecharle et al., 2018; Titus et al., 2008).
The costs generated by QRP can reach considerable amounts: they were conservatively estimated in 2010 at over 100 million USD for the cases investigated by the ORI in the United States alone (Michalek et al., 2010), or 400,000 USD per article in 2014 (Stern et al., 2014).
Indirect costs can also increase if no action is taken, such as the funding of research in which contaminated cell lines are being used (Buehring et al., 2004).
Apart from the specific situations in which the scientific value of some articles is being questioned, there is an area of questionable publication practices (QPP) in which the reasons for retraction are based instead on non-compliance with ethical standards (ethical writing) and legal regulations (copyright issues): plagiarism, overlap, and authorship issues (Harriman & Patel, 2014; Horbach & Halffman, 2019; Roig, 2015; Scanlon, 2007). While plagiarism is rejected and considered a form of scientific misconduct, for text re-use/recycling/self-plagiarism the acceptable quantity and type of recycled material are still under debate (Horbach & Halffman, 2019), and the decision to retract a scientific paper is mainly an editorial one.
If the mechanisms that should prevent the generation, perpetuation, and dissemination of QRP/QPP in biomedical research do not work correctly, harm can be caused to patients, the scientific community, research institutions, funding bodies, publishers, and the scientific journals in which the results are published (Relman, 1983; Parrish, 1999; DeMets et al., 2017; Selvam, 2021).
Numerous articles have addressed the subject of retractions in recent years. Similar to papers that used other databases, those that used PubMed/Medline as their primary source of data (Nath et al., 2006; Cokol et al., 2007; Cokol et al., 2008; Redman et al., 2008; Steen, 2011; Foo, 2011; Wager & Williams, 2011; Fang et al., 2012; Decullier et al., 2013; Singh et al., 2014; Azoulay et al., 2015; Madlock-Brown & Eichmann, 2015; Mongeon & Larivière, 2016; Rosenkrantz, 2016; Pantziarka & Meheus, 2019; Rapani et al., 2020) have shown an increasing trend in the number of retracted papers, a diversification of retraction reasons (recently including image manipulations (Bucci, 2018)), and an increased interest of journals and publishers in correcting the scientific literature (Table 1).
Table 1. Summary of papers studying PubMed/Medline retractions.
Although they represent a small proportion of the total volume indexed in PubMed, the articles withdrawn from the scientific literature are a problem not only because of the questions they raise about the integrity of scientific research as a whole, but also because of their impact on the scientific community, which may use or invest in ideas, methods, or data invalidated a few years after publication.
Taking into account the absence of recent studies, we decided to perform an exhaustive exploratory analysis of human health-related papers withdrawn from the literature indexed in PubMed/Medline and published in the period 2009-2020, focused mainly on:
- the dynamics of retracted articles and retraction notes for the period 2009-2020;
- retraction reasons, with particular attention to image issues;
- countries producing QRP/QPP articles;
- the scientific impact of retracted papers.
At the beginning of this project, as far as we know, there was no exhaustive study of retracted articles indexed in PubMed (in the meantime, such a study has been published: it provides information on the evolution over time of the number of withdrawn articles and a ranking by country, without analyzing the reasons for withdrawal or the number of citations (Bhatt, 2021)).
Most articles published until now address either a limited number of withdrawn articles or less recent periods.
There is scarce information regarding the dynamics of retraction notes, the dynamics of retractions for image problems, and the scientific impact (citations) of withdrawn articles.
Considering these elements, we think that a snapshot of the withdrawals from the last 12 years using freely available alternative information sources could provide helpful information to those interested in the evolution and changes over time in this field.
3. Materials and Methods
3.1. Information Sources
- PubMed: https://pubmed.ncbi.nlm.nih.gov
- Google Scholar (citations): https://scholar.google.com
Several elements determined the selection of PubMed as the primary source of information:
- focus on biomedical journals;
- unrestricted access;
- the existence of a dedicated keyword for withdrawn articles, Retracted Publication [PT];
- the link between the withdrawn article and the withdrawal note, Retraction of Publication [PT];
- the possibility of exporting the data in CSV format into our database for the individual analysis of each article and the corresponding retraction note.
We used Google Scholar because it is free and offers the best coverage of citations (Martín-Martín et al., 2020).
We used the Dimensions database because it is freely accessible and provides, in addition to the total number of citations, the number of citations from the last two years (relative to the current date).
SCOPUS (Elsevier) was used to obtain journal publishers and the journal impact factor (CiteScore) information. Because CiteScore is built based on the number of citations from the last four years, we decided to analyze a time interval that would cover three segments of 4 years each in terms of citations (2009-2020 interval, composed of 2009-2012, 2013-2016 and 2017-2020). This allows us to develop further directions of analysis of the data set obtained.
3.2. Articles Retrieval and Extracted Information
Withdrawn articles were identified in PubMed using the "Retracted Publication [PT]" search without date restrictions. The data were downloaded in CSV format and imported into an application developed by the author for analysis.
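As a minimal sketch of this retrieval step, the helper below filters a PubMed-style CSV export by publication year; the column names (`PMID`, `Publication Year`) mirror PubMed's CSV export but should be treated as assumptions, as are the sample records, which are fabricated for illustration.

```python
import csv
import io

def filter_by_publication_year(csv_text, start=2009, end=2020):
    """Keep rows whose 'Publication Year' falls in [start, end].
    Column names are assumed to mirror PubMed's CSV export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader
            if start <= int(row["Publication Year"]) <= end]

# Fabricated sample rows for illustration only (not real PMIDs):
sample = """PMID,Title,Publication Year
10000001,Example paper A,2008
10000002,Example paper B,2015
10000003,Example paper C,2021
"""
kept = filter_by_publication_year(sample)  # keeps only the 2015 record
```

The same filter, applied to the real export, would implement the 2009-2020 inclusion window described below.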
Data analysis period: 20.07.2020-31.05.2021
Last date of data import from PubMed: 30.01.2021
The period analyzed: 01.01.2009-31.12.2020 (taking into account the year of publication as recorded in PubMed).
The processing date was noted for each item.
Inclusion criteria: The field of study or the subject studied is related to, or may have an impact on, human health (as mentioned in the text of the article).
Exclusion criteria: The field of study or the studied subject is not related to human health (chemistry, agriculture, veterinary medicine, industrial products, or ecology without any mention in the article of implications for human health); proceedings volumes with no specific retractions mentioned; clinical practice guidelines withdrawn for updates or unspecified reasons; misclassified retractions.
Each item and its associated retraction note were analyzed. The extracted data are grouped into four sections: authors and countries, retraction details (including those involved in the retraction, Table 6), collateral damage, and citations info (Tables 2-5).
1) Authors and countries.
Table 2. Author and country-related variables.
2) Retraction details.
Table 3. Retraction-related variables (RN = Retraction Note; IRB = Institutional Review Board; ORI = Office of Research Integrity).
3) Collateral damage.
Table 4. Collateral damage variables.
4) Citations info.
Table 5. Citations-related variables.
5) Who was involved/requested the retraction.
Table 6. Who was involved in the retraction process?
In order to define a flexible taxonomy for the main retraction reasons categories and subcategories (see Table 7 and Table 8), we have considered several previously published articles (Azoulay et al., 2015; Benson, 2016; Decullier et al., 2013; Wager & Williams, 2011; Zhang et al., 2020).
7) Retraction reasons.
Table 7. Main retraction reasons.
8) Secondary reasons.
Table 8. Secondary retraction reasons.
The data were exported from the application and analyzed in IBM SPSS (IBM SPSS Statistics for Windows 2018).
4. Results
4.1. Retracted Articles and Retraction Notes
A total of 5619 retracted papers were retrieved by a PubMed search for the period 2009-2020. Of these, 775 were excluded and 4844 analyzed. The distribution for the period 2009-2020 of the withdrawn articles and the withdrawal notes is presented in Figure 1 and Table 9.
Figure 1. Retracted articles and retraction notes.
Table 9. Retracted articles and retraction notes by year (based on PubMed publication year value and PubMed reported retraction note date; 20 articles present date errors because the date of the electronic publication was far ahead of the print publication/journal date. These errors are not reflected in the ET calculations).
4.2. Retraction Reasons
Out of the 4844 analyzed retractions, 4251 (87.76%) have a single retraction reason, and 593 (12.24%) have multiple retraction reasons (see Table 10).
Table 10. Retraction reasons (Column 1 displays the number of papers with a single reason as the basis for retraction. Columns 2, 3, 4 contain the number of papers with 2, 3 and 4 concurrent reasons as the basis for retraction).
Secondary retraction reasons are presented in Table 11.
There were 229 instances of data fabrication, 217 in the “Mistakes/Inconsistent data” category, and 12 in other categories.
We identified 286 instances of duplicate publication, of which 123 were editorial/publisher mistakes and 163 were duplicate submissions.
Within the Images category, we found 253 instances of image overlap and 94 instances of plagiarized images. When image overlap and image plagiarism are taken into account, the total number of overlap articles (text and images) is 809 and the total number of plagiarism articles (text and images) is 757. The Images category retains 741 cases when image overlap and image plagiarism are moved into the Overlap and Plagiarism categories.
Fraudulent peer review was found in 350 instances (main category: Fraud).
In 293 cases, the authors were unable to provide the raw data or the raw data could not be retrieved.
Table 11. Secondary retraction reasons.
4.3. Citations
Citations were counted for all retracted papers (see Table 12). Overall, 140,810 citations were retrieved in Google Scholar and 96,000 in Dimensions (a 68% Dimensions/Google Scholar ratio).
Table 12. Citations for all 4844 retracted papers.
4.4. Exposure Time
Exposure time (time difference between the most recent retraction date and the earliest publication date, expressed in months) was collected for all articles included in the study (Table 13). The average exposure time was 28.89 months with a median value of 19 months.
Table 13. Exposure time for all 4844 retracted papers.
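The exposure time calculation can be sketched as a whole-month difference between the two dates; the day-level rounding convention is an assumption of this sketch, since the exact rule is not stated in the text.

```python
from datetime import date

def exposure_time_months(published: date, retracted: date) -> int:
    """Exposure time: months between the earliest publication date and
    the most recent retraction date (whole-month difference; the exact
    rounding convention is an assumption of this sketch)."""
    return (retracted.year - published.year) * 12 \
        + (retracted.month - published.month)

et = exposure_time_months(date(2009, 3, 1), date(2011, 10, 1))  # 31 months
```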
4.5. Retraction Reasons, Exposure Time, Average Citations/Article, the Average Number of Authors, and Average CiteScore (Table 14)
When analyzing the exposure time, the average number of citations, the average number of authors, and the average CiteScore from the year of publication, depending on the reason for withdrawal, there are some interesting aspects:
- articles withdrawn due to images have a much higher ET than other articles, more citations, the highest number of authors, and the third-highest CiteScore;
- articles withdrawn for ethical reasons have the second-highest ET but far fewer citations and a below-average CiteScore;
- articles withdrawn for mistakes/inconsistent data have the third-highest ET (though below average), above-average citation and author counts, and the third-highest CiteScore.
Table 14. Main retraction reason categories (all instances): exposure time (ET), average Google Scholar citations (G), average Dimensions citations (D), the average number of authors (A), and the average value of CiteScore (CS).
The order and characteristics of retractions remain almost the same when we analyze articles with a single retraction reason (Table 15). However, we observe two exceptions. Articles retracted solely for ethical reasons represent only 54% of all ethics-related retractions, suggesting that ethics is often part of more complex retractions involving multiple reasons. Authorship displays the same pattern, with only 29% of cases having it as the single retraction reason, which could also mean that authorship issues appear more frequently in multiple-reason retractions, being only one of the factors contributing to article withdrawal.
Table 15. Main retraction reasons (papers with one retraction reason only. N = 4251): exposure time (ET), average Google Scholar citations (G average), average Dimensions citations (D average), the average number of authors (Authors), and the average value of CiteScore (CS).
4.6. Number of Authors
The average number of authors for the 4844 articles is 5.83 (5.73 - 5.93) with a median of 5, mode 4, IQR 4, and range 36. The distribution of articles according to the number of authors is displayed in Figure 2.
Figure 2. The number of retracted papers (n = 4844) by the number of authors.
Of the 4844 articles analyzed, 4103 (84.7%) were published by authors from a single country and 741 (15.3%) by authors from two or more countries. Two hundred fifty-five (5.26%) articles have only one author; 4589 (94.74%) have two or more.
The average number of authors (Figure 3) varied between 5.48 (2020) and 6.27 (2018).
Figure 3. The average number of authors by year.
4.7. Retracted Articles and Retraction Notes by Country of the First Author
The 4844 articles had first authors from 92 countries. The top 30 countries account for 4592 retractions (94.79% of total retractions).
The number and evolution of retractions and retraction notes for the top 30 countries are presented in Table 16.
Table 16. Yearly distribution of retracted papers (R) & retraction notes (RN) by country of the first author.
4.8. Retraction Reasons for Top 10 Countries, First Author Country
When we look at the proportions represented by the reasons for withdrawal, we notice notable differences between countries, probably due to different stages of development both at the institutional level and of national policies (Table 17).
Inconsistent retraction notes affect all countries equally, while editorial errors, at a non-negligible rate, seem to affect the UK and Germany.
Table 17. Retraction reasons for top 10 countries (first author affiliation considered).
Retraction reasons for the top 10 countries, considering the countries of all authors, are presented in Table 18.
Table 18. Retraction reasons for top 10 countries (all author countries).
4.9. Impact of Retracted Articles
In order to estimate the impact of retracted research, we calculated for each country the number of articles and percentage from total retractions (%A), the number of Google Scholar citations and percentage from all citations (%G), the number of Dimensions citations and percentage from all Dimensions citations (%D), the ET and CiteScore average (see Table 19 for proposed grading).
Seven categories were defined: four marked green (positive or stationary evolution) and three red (negative evolution).
High impact was defined as a difference between %G or %D and %A greater than +25% (with steps at >25%, >50%, and >75%).
Low impact was defined as a difference between %G or %D and %A below −25% (with the same steps).
Values between −25% and +25% were considered stationary/neutral.
The impact assessment was made for the four combinations presented in Tables 20-23 (first author vs. all authors; all retraction reasons vs. excluding editorial reasons).
Table 19. Impact grading.
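The seven grading categories can be expressed as a small classifier. The sketch below assumes the thresholds apply to the relative difference between a country's citation share (%G or %D) and its article share (%A); this relative reading is an interpretation, since the exact formula is not spelled out in the text.

```python
def impact_grade(citation_share: float, article_share: float) -> str:
    """Classify retracted-research impact for a country.
    citation_share: %G or %D; article_share: %A (both in percent of totals).
    The relative-difference reading of the thresholds is an assumption."""
    diff = (citation_share - article_share) / article_share * 100.0
    if -25.0 < diff < 25.0:
        return "stationary"
    direction = "high impact" if diff > 0 else "low impact"
    magnitude = abs(diff)
    if magnitude > 75.0:
        step = ">75%"
    elif magnitude > 50.0:
        step = "50-75%"
    else:
        step = "25-50%"
    return f"{direction} ({step})"

grade = impact_grade(8.0, 5.0)  # 60% more citations than articles
```

Under this reading, a country holding 5% of retracted articles but 8% of citations would be graded as a moderate high-impact case.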
Table 20. Retracted research impact for top 30 countries (first author, all retraction reasons). Legend: stationary = −25% to +25%; negative impact steps: −25% to −50%, −50% to −75%, below −75%; positive impact steps: +25% to +50%, +50% to +75%, above +75%; CS = average CiteScore (3933 records with available information); ET = average Exposure Time.
Table 21. Retracted research impact for top 30 countries (first author, without editorial retraction reasons). Legend as in Table 20.
All author countries and impact of retracted papers.
Table 22. Retracted research impact for top 30 countries (all authors, all retraction reasons). Legend as in Table 20.
Table 23. Retracted research impact for top 30 countries (all authors, without editorial retraction reasons). Legend as in Table 20.
5. Discussion
The number of articles withdrawn from the biomedical literature is increasing, a fact consistently noted in the articles studying this subject (Bordino et al., 2020; Budd et al., 1998; Cokol et al., 2007; Redman et al., 2008; Samp et al., 2012; Singh et al., 2014; Wager & Williams, 2011). Therefore, we analyzed the evolution of retractions over time.
We have not identified any signs of a slowdown. On the contrary, 2020 seems to be a record year for retraction notes, with 878 (18.6% of the total) already registered in PubMed by January 31st, 2021. Almost half of the 2020 retraction notes (423/878) were issued for articles whose first author was affiliated with a Chinese institution, and 135 for authors affiliated with US institutions. Considering the period 2009-2020, five of the top 10 countries recorded their highest number of withdrawals in 2020: China, the United States, India, Japan, and the United Kingdom.
The process of correcting the biomedical literature now seems continuous and consistent, reaching back ten years or more: 11% of the retraction notes issued in 2020 and 15.8% of those issued in 2019 concern papers published in 2009-2012 (see Table 9).
5.2. Countries and International Cooperation
More than 50% of the retracted articles come from China (1588; 32.76%) and the United States (918; 18.95%), followed by India (351; 7.24%), Japan (212; 4.37%), and Italy (182; 3.75%). These figures probably reflect a mix of scientific production volume and the control policies implemented at institutional and national levels. Recent results seem to support this hypothesis (Bhatt, 2021).
The top 30 countries account for 94.7% of all withdrawn articles.
The data do not change radically when considering the country of origin of any author of the article. Compared with first-author counts, the United States, United Kingdom, Germany, and Canada show more pronounced growth, probably reflecting a systemic penchant for international cooperation (Table 23). Retracted articles involving international cooperation represent 15.3% (741 articles), while articles written by a single author account for 5.26% (255 papers).
5.3. Retraction Reasons
The withdrawal of a scientific article is a complex process in which several reasons can justify the retraction decision. In our series, we found that 12.24% of the articles had multiple retraction reasons.
Mistakes and data inconsistencies represent 32.06% (first place), the most affected countries being the United States, the United Kingdom, and Japan.
There is a substantial increase in scientific articles retracted because of image-related issues; the rate we found (22.46%) is lower than the one estimated on a PubMed sample by Bucci (Bucci, 2018), but still significant. Since the paper published by Rossner in 2004 (Rossner & Yamada, 2004), much progress has been made (Irwin et al., 2012), but we can hypothesize that only recent years' advances in image processing and analysis have led publishers and journals to adopt technologies able to discover image-related QRP/QPP that were not easy to identify on a large scale a few years ago. The dynamics of retraction notes seem to support such a hypothesis (Table 24). Countries with high percentages of image retractions are the United States, Italy, and India.
Table 24. Evolution of retraction notes for image reasons.
Plagiarism and overlap continue to represent a problem, accounting for more than 25% of the retraction causes (plagiarism and overlap excluding images) and more than 30% when image overlap and image plagiarism are added. India, Italy, and Iran have disproportionately large percentages (around 50%) of their retractions in this category.
Fifth place goes to fraud, predominantly represented by fraudulent peer review (350 out of 393 cases). Iran (32.7%), China (18.2%), and South Korea (16.9%) have notable percentages of their retractions in this category. As the peak years were 2015 (118 retraction notes), 2016 (75), and 2017 (134), we have reason to think that either publishers and journals have fixed their vulnerabilities or new exploits of the publishing systems have not yet been discovered.
Ethics also represents an important retraction reason, accounting for 7.43% of the retracted articles. Interestingly, countries we did not expect show substantial percentages of their withdrawn articles in this category: Germany (20.9%), Japan (15.6%), and South Korea (12%). Also, only one country in the top 10 is below 5% (Iran, 4.9%), and the number of retractions for ethical reasons is continuously increasing (Table 25). This leads us to consider the possibility of greater attention from publishers and journals to research ethics and/or publication ethics.
Table 25. Evolution of retraction notes for ethical reasons.
Authorship is the retraction reason for 5.78% of the papers; editorial mistakes account for 3.73%, property & legal concerns for 2.5%, and other reasons for 1.3%. We must also mention that a relatively high number of papers (241; 5.1%) have no clear reason mentioned as the cause of retraction. Details about the distribution by the top 10 countries are given in Table 17.
The data do not change very much when the countries of all authors are taken into account (Table 18).
5.4. Number of Authors
Several published reports on the number of authors in the scientific literature (Gu et al., 2017; Jang et al., 2016; Larivière et al., 2015; Sacco & Milana, 1984) mention a continuous increase in the average number of authors. With an average of 5.83, a median of 5, and a mode of 4 authors per retracted article published between 2009 and 2020, we did not find an upward trend in the average number of authors. Single-author papers account for 5.3% of the total, confirming the decreasing trend reported previously (Larivière et al., 2015).
5.5. Citations, Journal Impact, and Exposure Time
Citations and citation-based indicators are still considered to reflect the impact and relevance of scientific work.
During the period in which an article is available both in search results and on the journal's website without any mention of withdrawal, it is considered valid by many researchers and can influence decisions related to research projects or, in the least pessimistic scenario, be cited in the bibliography of another article.
Not all articles are cited, but there are enough situations in which critical analysis of the content is weakened by the journal's prestige, the authors' affiliations, or other factors. Therefore, to better capture this multi-dimensional picture, we extracted the citations of withdrawn articles and the journal's impact indicator (CiteScore) for the year of publication of each article (information available for the period 2011-2020). The exposure time in months was also calculated for each article, as described in the Materials and Methods section.
For all 4844 articles, the total number of citations in Google Scholar is 140,810 (mean = 29.07/median = 11), and the total number of citations in Dimensions database is 96,000 (19.82/7).
We found a 68% coverage of the Dimensions database relative to Google Scholar, slightly higher than a recent report (Martín-Martín et al., 2020).
The most cited articles were those retracted for image reasons (average 44.6 on Google Scholar / 32.9 on Dimensions), followed by mistakes (35.5/24.6) and plagiarism (27.8/16.1). Less cited were those retracted for editorial reasons (6.9/4.1), property and legal concerns (10.4/6.9), and authorship (11.4/7.2) (see Table 14 for details).
5.7. Journal Impact (CiteScore)
There is a long history and many controversies behind the indicators that were initially used to ease the purchasing decisions of academic libraries (Brodman, 1944; Gross & Gross, 1927) or to assess the quality of scientific literature (Garfield, 1955), and that nowadays give a measure of prestige to scientific journals (Garfield, 1999). Currently, there are two leading indicators of journal impact: the JIF (Journal Impact Factor, dominating the academic market since 1960) and CiteScore (launched in 2016) (Teixeira da Silva & Memon, 2017). We extracted the (publicly available) journal CiteScore (CS) for all retracted articles. CS information was available for 3933 articles published between 2011 and 2020.
The average CiteScore for the 3933 articles is 6.03 (5.81 - 6.26).
Articles with the retraction reasons 'Other', 'Mistakes/inconsistent data', and 'Images' had the largest average CS values. In contrast, articles retracted for plagiarism, fraud, overlap, or authorship were published in lower-CS journals (see Table 14 for details).
5.8. Exposure Time
Exposure time averaged 28.89 months (median = 19). Previous studies reported article lifespans between 26 and 44 months (Decullier et al., 2013; Fang et al., 2012; Foo, 2011; Nath et al., 2006; Pantziarka & Meheus, 2019; Rosenkrantz, 2016; Samp et al., 2012), except for the paper by Singh (Singh et al., 2014), which reports an 18-month lifespan for papers published between 2004 and 2013 (12 months for 1695 papers published between 2009 and 2013). Our findings show a moderate decrease in article lifespan for 2009-2020 compared to the previously reported data. For the 2125 papers published between 2009 and 2013, the lifespan is 41 months, a difference (compared with Singh) that late retractions for images/other reasons could explain.
Articles retracted due to image issues have a much longer average exposure time (49.21 months) than items withdrawn for any other reason (Table 14). This long exposure time, together with an average CS significantly higher than the average of the whole group and an average number of authors also higher than the average (the highest of all reasons for withdrawal), could explain why this group has the highest number of citations/article (44.6 in Google Scholar, 32.9 in Dimensions).
Articles withdrawn due to errors or inconsistent data have a CS of 8.06 (the highest) and an average of 6.4 authors (the second-highest) but an average exposure time of only 27 months. In this case, there are 35.9 Google Scholar citations/article and 24.9 Dimensions citations/article.
The other causes have exposure times significantly below the average (except 'Ethics', with 31.7 months). All have fewer authors on average and, except for 'Other', a significantly lower CS.
5.9. Impact of Retracted Research
In this article, we tried to formulate a representation of the impact (citations received) of retracted research (all retraction reasons, without editorial reasons, country of the first author, countries of all authors) for the top 30 countries (Tables 20-23).
5.9.1. First Author Only, without Editorial Reasons (Table 21)
Our findings suggest that retracted scientific papers from 9 out of the first 30 countries have a higher impact (United States, India, United Kingdom, Spain, Egypt, Brazil, Netherlands, Singapore, Ireland) while retracted research from other countries has a lower than expected impact (Pakistan, China, Turkey, Malaysia, Russian Federation, South Korea, Iran, Australia, Saudi Arabia, Poland, and Greece).
5.9.2. All Authors, without Editorial Reasons (Table 23)
High impact for 13 countries: Spain, Sweden, United States, United Kingdom, Netherlands, Singapore, Switzerland, India, Italy, Germany, France, Egypt, Brazil;
Low impact (countries & regions): Pakistan, Turkey, Malaysia, China, South Korea, Iran, Australia, Chinese Taipei, Poland.
Some outliers can introduce modifications. For example, Spain has an article with more than 4000 citations in Google Scholar, and Australia has a highly cited article in Dimensions whose citation count in Google Scholar is small. However, these do not change the direction of the impact (Spain gets a single red point instead of three; Australia loses one green point and becomes stable in only one of the four classifications).
Various factors may explain these differences in the impact of retracted research when considering the originating country: scientific tradition and strategies, funding of research, faster or easier access to better journals, scientific networking, better international cooperation, journal or institution prestige. When one or several of these factors concur in the direction of a negative impact of questionable research or publication practices, questions could be raised about the safety mechanisms of the scientific process.
The ecosystem of withdrawn articles is a complex one. The matrix in which they are framed reflects cultural differences and differences in the development and organization of the scientific research systems of the countries that contribute to the shared heritage and advancement of biomedical knowledge. Our study is only a snapshot of a short period; its exploratory nature and inherent weaknesses (lack of unambiguous wording in some withdrawal notes, especially for ethical reasons; missing or unclear content in over 5% of the withdrawal notes; online absence of some articles) lead us to proceed with caution in formulating conclusions.
6. Conclusions
Mistakes and inconsistent data (including data fabrication) are the main retraction reasons for articles published from 2009-2020 and indexed in PubMed.
Images (22.46%) and ethical issues (7.43%) are retraction reasons that have grown in recent years and account for almost 30% of all retractions.
Plagiarism and overlap still represent a significant problem (>30% of the total when images are included), especially in lower impact journals.
The number of fraudulent peer review cases (350) shows the need to strengthen these processes, making them less vulnerable to circumvention attempts. As with plagiarism and overlap, the journals affected are those with a low CiteScore.
Major publishers and biomedical journals are engaged in retrospective verification (going back at least 12 years), resulting in a steady increase in withdrawals, and the process does not appear to be slowing down. Unfortunately, inconsistent or unclear reporting in 5.1% of retraction notes, together with editorial errors in 3.7%, signals severe deficiencies at some publishers and journals.
The withdrawal of articles appears to be a technology-dependent process (image analysis and anti-plagiarism software). Increasing the accessibility of these technologies could help combat QRP and QPP more effectively.
The citation counts of retracted articles show a high impact for papers published by authors from certain countries, most of them scientifically advanced. This suggests a need to improve verification processes at the level of countries, institutions, publishers, and biomedical journals. The number of retracted articles per country does not always accurately reflect the scientific impact of QRP and QPP papers. Authors from countries whose articles have a negative scientific impact published in journals with a high CiteScore: Switzerland (15.42), Spain (12.35), and the United States (10.04) are some examples. Conversely, countries in the top 10 but with low scientific impact published in journals with much lower CiteScores: Iran (3.5), India (4.43), and China (4.45). One may also ask whether access to high-impact journals is easier for authors from some countries and more difficult for others.
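The country-level comparison above rests on a simple aggregation: grouping retracted articles by country, then summarizing the CiteScore of the publishing journals and the accumulated citations. A minimal sketch of that aggregation is shown below; the records, field layout, and the `impact_by_country` helper are hypothetical illustrations, not the study's actual data or pipeline.

```python
from statistics import mean

# Hypothetical records of retracted articles:
# (country of origin, CiteScore of the publishing journal, citations received).
# The numbers are illustrative placeholders, not the study's data.
records = [
    ("Switzerland", 15.42, 310),
    ("Spain", 12.35, 4100),
    ("United States", 10.04, 220),
    ("Iran", 3.5, 12),
    ("India", 4.43, 8),
    ("China", 4.45, 15),
]

def impact_by_country(rows):
    """Aggregate mean journal CiteScore and total citations per country."""
    grouped = {}
    for country, citescore, citations in rows:
        scores, total = grouped.setdefault(country, ([], 0))
        scores.append(citescore)
        grouped[country] = (scores, total + citations)
    return {
        country: {"mean_citescore": round(mean(scores), 2), "citations": total}
        for country, (scores, total) in grouped.items()
    }

summary = impact_by_country(records)
print(summary["Spain"])   # mean CiteScore and accumulated citations for Spain
```

A dictionary keyed by country keeps the sketch dependency-free; a real analysis over thousands of records would more naturally use a dataframe grouped by country.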
The country distribution of retraction reasons reveals structural problems in the organization and quality control of scientific research, which take different forms depending on geographical location, economic development, and cultural model.
Further research on retracted articles could focus on the equitable distribution of responsibility among the authors of withdrawn articles or on the early detection of policy changes that lead to better research. Other directions could be the evaluation of citation dynamics and the prevention of post-retraction citations, or the implementation of more effective policies for reporting and disseminating retraction notes at the publisher or journal level. Last but not least, a stable classification would allow the differentiation between QRP and QPP articles and, therefore, the rapid rejection of invalid research and the appropriate correction of defective scientific reporting practices without the annulment of scientifically valid results.
 Baerlocher, M. O., O’Brien, J., Newton, M., Gautam, T., & Noble, J. (2010). Data Integrity, Reliability and Fraud in Medical Research. European Journal of Internal Medicine, 21, 40-45.
 Bordino, M., Ravizzotti, E., & Vercelli, S. (2020). Retracted Articles in Rehabilitation: Just the Tip of the Iceberg? A Bibliometric Analysis. Archives of Physiotherapy, 10, Article No. 21.
 Buehring, G. C., Eby, E. A., & Eby, M. J. (2004). Cell Line Cross-Contamination: How Aware Are Mammalian Cell Culturists of the Problem and How to Monitor It? In Vitro Cellular & Developmental Biology-Animal, 40, 211-215.
 Decullier, E., Huot, L., Samson, G., & Maisonneuve, H. (2013). Visibility of retractions: A Cross-Sectional One-Year Study. BMC Research Notes, 6, Article No. 238.
 DeMets, D. L., Fleming, T. R., Geller, G., & Ransohoff, D. F. (2017). Institutional Responsibility and the Flawed Genomic Biomarkers at Duke University: A Missed Opportunity for Transparency and Accountability. Science and Engineering Ethics, 23, 1199-1205.
 Fanelli, D. (2009). How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data. PLoS ONE, 4, Article ID: e5738.
 Fang, F. C., Steen, R. G., & Casadevall, A. (2012). Misconduct Accounts for the Majority of Retracted Scientific Publications. Proceedings of the National Academy of Sciences of the United States of America, 109, 17028-17033.
 Foo, J. Y. A. (2011). A Retrospective Analysis of the Trend of Retracted Publications in the Field of Biomedical and Life Sciences. Science and Engineering Ethics, 17, 459-468.
 Godecharle, S., Fieuws, S., Nemery, B., & Dierickx, K. (2018). Scientists Still Behaving Badly? A Survey Within Industry and Universities. Science and Engineering Ethics, 24, 1697-1717.
 Gu, A., Almeida, N., Cohen, J. S., Peck, K. M., & Merrell, G. A. (2017). Progression of Authorship of Scientific Articles in the Journal of Hand Surgery, 1985-2015. The Journal of Hand Surgery, 42, 291.e1-291.e6.
 Irwin, R. S., Augustyn, N., French, C. T., Rice, J., & Welch, S. J. (2012). Spread the Word about the Journal in 2012: From Impact Factor to Plagiarism and Image Falsification Detection Software. Chest, 141, 1-4.
 Larivière, V., Gingras, Y., Sugimoto, C. R., & Tsou, A. (2015). Team Size Matters: Collaboration and Scientific Impact since 1900. Journal of the Association for Information Science and Technology, 66, 1323-1332.
 Martín-Martín, A., Thelwall, M., Orduna-Malea, E., & Delgado López-Cózar, E. (2020). Google Scholar, Microsoft Academic, Scopus, Dimensions, Web of Science, and OpenCitations’ COCI: A Multidisciplinary Comparison of Coverage via Citations. Scientometrics, 126, 871-906.
 McHugh, U. M., & Yentis, S. M. (2019). An Analysis of Retractions of Papers Authored by Scott Reuben, Joachim Boldt and Yoshitaka Fujii. Anaesthesia, 74, 17-21.
 Michalek, A. M., Hutson, A. D., Wicher, C. P., & Trump, D. L. (2010). The Costs and Underappreciated Consequences of Research Misconduct: A Case Study. PLoS Medicine, 7, Article ID: e1000318.
 Mongeon, P., & Larivière, V. (2016). Costly Collaborations: The Impact of Scientific Fraud on Co-Authors’ Careers. Journal of the Association for Information Science and Technology, 67, 535-542.
 Nath, S. B., Marcus, S. C., & Druss, B. G. (2006). Retractions in the Research Literature: Misconduct or Mistakes? The Medical Journal of Australia, 185, 152-154.
 Rapani, A., Lombardi, T., Berton, F., Del Lupo, V., Di Lenarda, R., & Stacchi, C. (2020). Retracted Publications and Their Citation in Dental Literature: A Systematic Review. Clinical and Experimental Dental Research, 6, 383-390.
 Scanlon, P. M. (2007). Song From Myself: An Anatomy of Self-Plagiarism. Plagiary, 2, 57-66.
 Selvam, T. (2021). Plagiarism and Its Consequences. In C. Thanavathi (Ed.), Advanced Educational Research and Statistics: Plagiarism and Its Consequences (1st ed., Vol. 1, pp. 33-38). ESN Publications.
 Singh, H. P., Mahendra, A., Yadav, B., Singh, H., Arora, N., & Arora, M. (2014). A Comprehensive Analysis of Articles Retracted between 2004 and 2013 from Biomedical Literature—A Call for Reforms. Journal of Traditional and Complementary Medicine, 4, 136-139.
 Stern, A. M., Casadevall, A., Steen, R. G., & Fang, F. C. (2014). Financial Costs and Personal Consequences of Research Misconduct Resulting in Retracted Publications. ELife, 3, Article ID: e02956.
 Tilden, S. J. (2010). Incarceration, Restitution, and Lifetime Debarment: Legal Consequences of Scientific Misconduct in the Eric Poehlman Case. Commentary on: “Scientific Forensics: How the Office of Research Integrity Can Assist Institutional Investigations of Research Misconduct during Oversight Review”. Science and Engineering Ethics, 16, 737-741.
 Wager, E., & Williams, P. (2011). Why and How Do Journals Retract Articles? An Analysis of Medline Retractions 1988-2008. Journal of Medical Ethics, 37, 567-570.
 Wilfon, B. (1756). CVI. A Retractation, by Mr. Benjamin Wilson, F. R. S. of His Former Opinion, Concerning the Explication of the Leyden Experiment. Philosophical Transactions of the Royal Society of London, 49, 682-683.
 Zhang, Q., Abraham, J., & Fu, H.-Z. (2020). Collaboration and Its Influence on Retraction Based on Retracted Publications during 1978-2017. Scientometrics, 125, 213-232.