Adobe Research has recently been working on a new artificial intelligence solution designed to reliably identify doctored, manipulated, or otherwise tampered-with digital images. The San Jose, California-based firm said it embarked on its newly publicized project in order to help improve people's trust in online media, i.e. the authenticity thereof.
The project is led by Adobe Senior Research Scientist Vlad Morariu, who has been working on image tampering detection under DARPA's Media Forensics program since 2016 and has years of experience in the computer vision field. Edge sharpness, noise distribution, lighting inconsistencies, and a wide variety of other pixel-level cues can already be used to manually determine whether a particular image has been tampered with, and metadata can often support the same conclusions. The main challenge Adobe's project seeks to overcome is training an AI system to leverage that knowledge and flag suspicious images more effectively than a human could. The current version of Mr. Morariu's solution focuses on three types of manipulation: removal, i.e. eliminating an object from an image and filling in the gap; copying, which is effectively clone-stamping and can also serve as a variant of the first technique; and splicing, or stitching multiple images together.
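Adobe has not published the details of its model, but one of the forensic cues mentioned above, inconsistent noise distribution, is easy to illustrate. The sketch below is purely illustrative and not Adobe's method: it builds a synthetic image, "splices" in a patch with a different noise level, and flags image patches whose high-frequency residual is a statistical outlier. All names, parameters, and thresholds here are hypothetical choices for the demo.

```python
import numpy as np

def patch_noise_map(img, patch=16):
    """Estimate local noise as the std-dev of a high-pass residual per patch.

    The residual subtracts a crude local mean (average of the four
    neighbors), which suppresses image content and leaves mostly noise.
    """
    residual = img - 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                             + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    rows, cols = img.shape[0] // patch, img.shape[1] // patch
    noise = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = residual[i * patch:(i + 1) * patch,
                             j * patch:(j + 1) * patch]
            noise[i, j] = block.std()
    return noise

# Synthetic "camera" image with low sensor noise, plus a noisier spliced region.
rng = np.random.default_rng(0)
img = rng.normal(0.5, 0.02, (128, 128))
img[32:64, 32:64] = rng.normal(0.5, 0.10, (32, 32))  # spliced-in patch

noise = patch_noise_map(img)
# Flag patches whose noise level is a >2-sigma outlier (arbitrary demo threshold).
flagged = noise > noise.mean() + 2 * noise.std()
print(np.argwhere(flagged))  # patches covering the spliced region stand out
```

A real forensic pipeline would of course estimate noise far more carefully (and combine it with other cues such as edge sharpness and metadata), but the underlying idea of localizing statistically anomalous regions is the same.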
While a trained expert can identify a doctored image from cues such as noise patterns, such a process usually takes hours because it requires pixel-by-pixel analysis, whereas Adobe's new experimental technology can achieve comparable results in a matter of seconds. The platform's machine learning component learned to recognize manipulated images from datasets containing tens of thousands of known doctored files. It's still unclear whether the Photoshop creator intends to commercialize its latest AI technology or otherwise make it available to the general public.