Image manipulation – we can tell when you’re faking it.

Maverick continues its series on Research Integrity in scholarly publishing with a new whitepaper from Affiliate Senior Associate Matthew Salter, Ph.D., that examines the issue of image manipulation in research. A downloadable version of the whitepaper is available here.

In recent months the issue of image manipulation has returned to prominence in mainstream news media, publishing industry blogs, and social media. The reasons for this renewed interest are unclear but are likely due to a combination of the availability of better tools for detecting violations, a sharper focus by the research community and publishers on spotting infringements, and a greater willingness to pursue cases more vigorously.

Another possible explanation is the wider availability of powerful and inexpensive image editing software in recent years, which has made it easier for bad actors to try their luck, leading to more inexpert attempts to defraud. In addition, recent events have pushed the issues of plagiarism and image manipulation up many people's agendas, prompting some publishers to issue warnings about the dangers.[1] These events include adjustments to working practices as a result of the COVID-19 pandemic and the explosion of ChatGPT and other AI tools in late 2022, which some commentators believe make it easier for unscrupulous actors to cheat.

So what constitutes image manipulation, how big a problem is it, and how can the community combat it?

When is a violation not a violation?

First, it should be acknowledged that, as with plagiarism and other forms of scientific misconduct, not all instances of image manipulation are deliberate. Some come from a genuine desire on the part of researchers to present their data in a way that is clear, impactful, and easy for readers to interpret. Others may just be the result of sloppy technique, which, while undesirable, is hardly criminal. However, in the process of adjusting the contrast on a TEM image here or reordering the lanes on a Western blot there, it is all too easy for researchers to stray over the line and into the realm of inappropriate manipulation. An article flagged as potential fakery, or rejected outright, causes embarrassment, can trigger a painful misconduct investigation, and wastes the time of editors, reviewers, and research colleagues.

The good news is that unintentional image manipulation can be mitigated by better training and mentorship of early career scientists at the research group and institutional levels and by following journal submission guidelines[2], which typically stipulate minimal processing of images. Most publishers accept that a reasonable degree of image processing is allowable or even unavoidable in some cases, but are clear that “the final image must correctly represent the original data and conform to community standards.”[3] It is imperative that authors retain their raw image data in the event that editors or reviewers request to inspect it during peer review, as failure to do so promptly will hold up the process and may even lead to the article being rejected. Further advice comes in the form of guidelines issued by the Council of Science Editors.[4] In the face of the reputational risk that comes with inappropriate image manipulation, some research institutions are recruiting scientific integrity officers to check manuscripts produced by their researchers before they are submitted to journals with the dual aims of reassuring honest authors and discouraging those who might be tempted to act in less than ethical ways.[5]

Overzealous presentation and less-than-stellar lab practice are undesirable for sure, but if an author is determined to commit deliberate and outright forgery, then there is little that can be done to root out misconduct prior to submission. In this case, the last lines of defense are the editorial staff in the journal publisher's office, who are trained to spot and flag irregularities in images included as part of a submitted manuscript. These efforts increasingly include the use of AI tools to detect such issues, especially duplication, in preflight manuscript checks.[6]

Peer reviewers, of course, also play a vital role in flagging anything that looks suspicious for further investigation. This approach undoubtedly catches many problematic images before the articles containing them are published, but despite the best efforts of publishers and reviewers, many more make it through the screening process and end up in the final paper. When this happens, and fraudulent graphics and images have entered the scholarly record, they can be very difficult to expunge. Of course, problematic articles can be corrected or retracted, but unlike cases where authors discover honest errors or unreliable data post-publication and voluntarily take the scientifically responsible decision to clean up the scientific record, in cases of deliberate fraud it can take a long time for journals to act, if they act at all.

How big is the problem and is it getting worse?

Assessing the extent of image manipulation and fabrication is notoriously difficult due to the need not only to identify problematic images, but also to demonstrate that any manipulations were carried out with the deliberate intent to deceive. The acknowledged expert in image integrity analysis is former Utrecht University and Stanford microbiologist Dr. Elisabeth Bik,[7] who transitioned from frontline research to become a scientific misconduct expert in 2013 when she discovered that one of her own publications had been plagiarized. Since becoming a full-time scientific integrity consultant in 2016, Dr. Bik has led the charge against image manipulation, appearing in both the mainstream media[8],[9] and at scholarly conferences worldwide to raise awareness of the problem and deliver advice for addressing it.

Before going any further, it is important to distinguish between types of scientific misconduct, specifically the difference between fabrication and falsification. In the former, an unscrupulous researcher simply makes data up, be it a line on a graph, a panel in an image containing fake data, or a report of experiments that were never carried out at all. In the case of falsification, the underlying data is real but may be edited or displayed in a way that gives a misleading impression of the research outcome, for example, by omitting data points, reusing parts of one data set in another context, or duplicating information and changing its appearance. It is generally acknowledged that image manipulation can take a number of forms, but it typically falls into one of the following three categories:

Category I – Simple duplication: Where the same figure is used more than once in the same paper, or a different paper, to represent different sets of results. Although potentially serious, this type of manipulation could also occur accidentally as a result of sloppy data curation.

Category II – Duplication and repositioning: In this category, an image is reused in another figure but is shifted, rotated, or reversed to look like a different data set. Such manipulation suggests deliberate intent to deceive, as it is difficult to claim that changes of this kind came about as the result of a simple mistake.

Category III – Duplication with alteration: Here, images are duplicated and then altered, e.g., by cutting out certain parts of the image, inserting other visual elements, and/or rotating or mirroring parts of the image. Sometimes a section of the image is repeated within the figure or is partly obscured by the introduction of a plain background color. This type of manipulation strongly indicates a deliberate attempt to create fraudulent data.
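The first two categories above map naturally onto automated screening heuristics. As a purely illustrative sketch, and not the method used by Bik's team or any tool mentioned in this paper, the following Python compares a simple "average hash" fingerprint of two images. The function names and the tiny 2D-list image representation are invented for the example; real screening tools work on actual image files, resize them to a standard size first, and use far more robust fingerprints.

```python
# Hypothetical sketch of duplicate-image screening. An image is modeled as a
# 2D list of grayscale pixel values (0-255); all names here are illustrative.

def average_hash(pixels):
    """Fingerprint an image: 1 bit per pixel, set where the pixel is above the mean."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if v > mean else 0 for v in flat)

def hamming(h1, h2):
    """Number of differing bits between two fingerprints."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_duplicated(img_a, img_b, threshold=2):
    """Category I check: near-identical fingerprints suggest a reused image."""
    return hamming(average_hash(img_a), average_hash(img_b)) <= threshold

def rotations_and_flips(pixels):
    """All eight rotated/mirrored variants of a square image."""
    def rot90(p):  # rotate 90 degrees clockwise
        return [list(row) for row in zip(*p[::-1])]
    out, cur = [], pixels
    for _ in range(4):
        out.append(cur)
        out.append([row[::-1] for row in cur])  # mirrored copy
        cur = rot90(cur)
    return out

def looks_transformed_duplicate(img_a, img_b, threshold=2):
    """Category II check: compare every transform of one image against the other."""
    h_b = average_hash(img_b)
    return any(hamming(average_hash(t), h_b) <= threshold
               for t in rotations_and_flips(img_a))
```

Note that a rotated reuse defeats the plain Category I comparison, which is why the Category II check must enumerate transforms; this mirrors the escalating difficulty of detection across the categories described above.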

Based on these categories, in 2016 Bik and coworkers visually screened images in over 20,500 articles published in 40 biomedical journals between 1995 and 2014.[10] They concluded that on average, 3.8% of the articles examined contained at least one problematic image and that of these, approximately half appeared to be the result of a deliberate attempt to deceive.[11] The rate of suspect image occurrence remained fairly steady between 1995 and 2002, increased sharply in 2003, and has held at that higher level since. Interestingly, the higher the impact factor of the journal, the lower the rate of problematic images.

It should also be acknowledged that the 3.8% figure only applies to biomedical papers and that opportunities for manipulation probably vary by discipline. Furthermore, considerable variation in incidence rates was seen across the sample, ranging from 0.3% to 12.4%. Other investigators have estimated the incidence of image manipulation across a wider range of disciplines at approximately 1.9%,[12] while other studies suggest an incidence of as much as 5.7%.[13] Crucially, Bik et al. compared rates of inappropriately manipulated images by year with corresponding annual retraction rates and concluded that the significant jump in incidences of image fakery beginning in 2003 is not due solely to increased detection.

How long has this been going on?

Concerns about the authenticity of images used in scholarly publishing are nothing new.[14] The history of research is littered with accounts of questionable research that may or may not have involved fraud, but the lack of availability of methods to detect fraud at the time coupled with a reluctance to acknowledge the existence of a problem on the part of the community likely means that many cases went undiscovered or at least unreported. Many instances appear to originate in the biomedical sciences, but no research discipline is immune, and culprits can range from poorly trained graduate students to Nobel prizewinners.[15]

Community-based initiatives such as Pubpeer[16] and Retraction Watch[17] offer important opportunities for surfacing and discussing potential instances of suspicious image manipulation as well as other problematic publishing practices. Such forums have often been starting points for cases that eventually result in wider investigations and even retractions. Additionally, agencies such as the Office for Research Integrity, part of the US Department of Health and Human Services, regularly report the outcomes of research misconduct investigations of the recipients of government funding.[18]

Some instances of image fabrication and forgery are so egregious that they spill out of the confines of their native research community and receive mainstream coverage. The Schön scandal[19] hit the headlines in 2002: German materials science researcher Jan Hendrik Schön was found to have fabricated graphs and figures – essentially creating fake data sets out of thin air – and to have reused data from one experiment in an unrelated context, in at least 16 papers published in leading international journals. The incident stunned the materials science research community and led to mass retractions of papers, the firing of Schön, and the revocation of his Ph.D.

In the biomedical field, "Hwang-gate"[20] first came to light in 2004 and was recently the subject of a Netflix documentary[21] and a New York Times guest essay by the journalist who exposed him.[22] The internationally celebrated Korean stem cell scientist Hwang Woo-Suk was found guilty of jaw-dropping bioethics violations and embezzlement of research funds, resulting in his firing from the prestigious Seoul National University and a two-year suspended prison sentence. Hwang is best known for his unethical acquisition of experimental materials as well as for his financial crimes, but it is often forgotten that his downfall was originally precipitated by suspicions about duplication of research data and manipulation of images in papers published in a leading international journal.

While these two examples of research misconduct are arguably among the most extreme, cases of image manipulation that garner less widespread attention are numerous. Worse still are the instances that lie dormant for years – or are never detected at all – and continue to distort the literature for extended periods, frustrating attempts by unsuspecting researchers to reproduce or build upon published data. Such cases of image manipulation are particularly concerning in the context of the “replication crisis” that most leading scientists believe is currently impacting the research enterprise.[23]

A replication crisis (also known as a "reproducibility crisis") occurs when researchers repeat experiments, protocols, or procedures from the literature but obtain different results from those reported, leading to the published work being deemed unreproducible. Whilst this phenomenon is most commonly associated with the life sciences, no field is immune, and researchers in the physical sciences and engineering routinely encounter problems replicating published work. The reasons behind the replication crisis are certainly multiple and complex, but the introduction of fake or falsified images into the scientific literature does nothing to help solve a problem that wastes researcher time, impedes career progression, and has been estimated to cost up to $28 billion[24] annually in the biomedical sector alone.

Why do people do it?

Trust and a professional reputation take years if not decades to build but can be erased in an instant, never to be regained. Past misdemeanors can cast long shadows.[1] Given the nature of scientific research, which seeks to establish absolute and unambiguous truth and protect the integrity of the scholarly record above everything, the thought of researchers committing deliberate fraud hits a particularly sour note and can evoke feelings akin to betrayal among other members of the community. Furthermore, ethics investigations, even if they result in the exoneration of the accused, are lengthy, painful, and embarrassing. Likewise, retraction of compromised manuscripts is time-consuming and humiliating. So why would anybody want to risk everything they have worked for just to get a paper published in a prestige journal?

Some blame the “publish or perish” culture of modern academic research, which places overwhelming pressures to produce on highly trained and talented researchers competing for dwindling sources of financial support and ever smaller numbers of tenure-track and faculty positions. This can result in corners being cut, vetting of data being overlooked, and best practices being compromised until researchers find themselves committing acts of scientific misconduct that once would have been unthinkable.

Others point to unbalanced recruitment and research assessment protocols that place undue importance on publishing in a small number of marquee journals for researchers to get a foot on and climb up the academic promotional ladder. Back in his days as a research team leader at a top Japanese university, the author recalls a friend and colleague telling him that if he could only get on a paper in [insert name of an internationally renowned journal here] then an academic position back in his home country would be assured. While that colleague eventually achieved professional success using entirely legitimate means, others are not so scrupulous or immune to temptation.

How can the scholarly community deal with the problem?

It is clear that inappropriate manipulation of images in scholarly research is a problem that all stakeholders in the research enterprise – researchers, publishers, libraries, institutions, and funders alike – have a vested interest in combatting. Robust mentoring and ethics training right from the beginning of a young researcher’s career is of course vital, as is the establishment of rigorous data tagging and archiving practices at the laboratory level. However, these measures alone will not solve the problem.

Publishers and reviewers have a large role to play in setting the standard for the scrutiny of submissions, and some publications such as the Journal of Cell Biology took an early lead in calling attention to the problem and mandating the systematic and thorough inspection of images in manuscripts it receives.[25] That only 0.3% of the images inspected by Bik's team in this journal were found to be problematic, compared to the average of almost 4%, suggests that careful screening works. However, not everybody is so eagle-eyed, and sometimes technology has to step in to close the gap.

Scholarly publishers, keen to uphold quality and maintain the integrity and reputation of their journals, use image analysis software such as ImageJ[26] and certain features in Adobe Photoshop to assist in screening submissions. While there is currently no single automated solution for unambiguously identifying problematic images,[27] renewed scrutiny of the issue has focused efforts among publishing solution vendors to develop image analysis tools to sit alongside other research integrity products already in their product range. Ironically, some of these tools make use of the same AI and machine learning technologies that are causing concern in other areas of scholarly misconduct.

As most stakeholders acknowledge, solving the problem will take a multidisciplinary approach with increased coordinated action from vendors, publishers, funders, and the scientific research community to develop tools and protocols for detecting problematic images. In addition, the race to deal with image manipulation in scholarly publishing will be a marathon, rather than a sprint, because however the issue is approached, it is a serious problem that will likely be around for the foreseeable future.

Maverick offers a program of research integrity services, including training, to help publishers achieve and maintain best practices. It helps publishers operationalize research integrity to ensure safeguards are integrated throughout the workflow, from manuscript submission and peer review to publication and data management. For a free consultation, contact your Maverick representative or email

By Matthew Salter, Ph.D.
Affiliate Senior Associate
Matthew Salter is a multilingual publishing executive with over 20 years of experience in scholarly publishing and research communications. He has worked in senior roles at leading commercial and learned society publishers, including NPG Nature Asia-Pacific (now part of Springer Nature), where he headed up MacMillan Science Communication, the custom-publishing arm of Nature Publishing Group, and IOP Publishing. Matthew holds a BSc and Ph.D. in chemistry (Imperial College). He is a former Japan Society for the Promotion of Science (JSPS) Research Fellow (Tohoku University), Lecturer in Organic Chemistry (King's College London), and a JST ERATO staff member at the University of Tokyo. He is fluent in Japanese and proficient in Mandarin Chinese.

Download the Image Manipulation in Scholarly Publishing whitepaper

Further Reading

When good intentions drive bad behavior

Answer These Five Questions to Ensure an Effective Research Integrity Strategy

Download Maverick’s Research Integrity service sheet.














