Compromising image integrity in scientific papers can have serious consequences. Inappropriate duplication and fabrication of images are two common forms of such misconduct. The rapid development of artificial-intelligence technology has produced powerful image-generation models capable of synthesizing realistic fake images. In this article, Jinjin Gu and colleagues show that such advanced generative models threaten academic publishing, because they can be used to generate fake scientific images that cannot be effectively identified. The authors demonstrate the disturbing potential of these models for synthesizing fake images, plagiarizing existing images, and deliberately modifying images. Because of the unique way generative models process images, their outputs are very difficult to identify through visual inspection, image-forensic tools, or dedicated detection tools. This Perspective reveals these risks and calls for vigilance in the scientific community toward fake scientific images generated by artificial-intelligence (AI) models.