Adobe develops AI tool to detect fake images

Machine learning tool able to spot common image manipulation techniques

Tags: Adobe Systems Incorporated, Counterfeit, DARPA - Defense Advanced Research Projects Agency (www.darpa.mil), USA
By Mark Sutton | Published June 26, 2018

Adobe has developed an AI-supported tool to detect image tampering.

An Adobe researcher developed the tool, which can detect whether an image has had objects removed or pasted in, or whether it is two images spliced together. Detection takes seconds, compared with the hours the same analysis normally takes a skilled forensics expert.

The tool was developed as part of the Media Forensics program run by the US defence research agency, DARPA.

Vlad Morariu, senior research scientist at Adobe, applied his fourteen-plus years of expertise in the field to teaching a machine learning system to recognise tell-tale elements of some of the most popular forms of image fakery.

"We focused on three common tampering techniques - splicing, where parts of two different images are combined; copy-move, where objects in a photograph are moved or cloned from one place to another; and removal, where an object is removed from a photograph, and filled-in," Morariu said in a blog post.

"Each of these techniques tend to leave certain artefacts, such as strong contrast edges, deliberately smoothed areas, or different noise patterns."

Although these artefacts are not usually visible to the human eye, they are much more easily detectable through close analysis at the pixel level, or by applying filters that highlight the changes. The project showed that AI can successfully identify which images have been manipulated, determine the type of manipulation used, and highlight the specific area of the photograph that was altered.
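The kind of pixel-level analysis described here can be illustrated with a minimal sketch. This is not Adobe's method, just a classic high-pass (Laplacian) filter applied to a toy grayscale image: a pasted-in patch that does not match its surroundings produces strong responses along its border, the "strong contrast edges" Morariu mentions.

```python
# Minimal sketch (not Adobe's system): a 3x3 Laplacian high-pass filter
# highlights strong contrast edges, one artefact that splicing or
# paste-in tampering can leave behind.

def laplacian(image):
    """Apply a 3x3 Laplacian kernel; return absolute responses."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = image[y][x]
            out[y][x] = abs(
                4 * centre
                - image[y - 1][x] - image[y + 1][x]
                - image[y][x - 1] - image[y][x + 1]
            )
    return out

# A flat 8x8 "photo" with a brighter 3x3 patch pasted in.
img = [[10] * 8 for _ in range(8)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 200

edges = laplacian(img)
# Responses are zero inside flat regions and large along the patch border.
```

A real detector works on far subtler differences, but the principle is the same: the filter suppresses the image content and leaves the discontinuities.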

"Using tens of thousands of examples of known, manipulated images, we successfully trained a deep learning neural network to recognize image manipulation, fusing two distinct techniques together in one network to benefit from their complementary detection capabilities," Morariu explained.

The first technique uses an RGB stream (changes to the red, green and blue colour values of pixels) to detect tampering. The second uses a noise-stream filter. Image noise is random variation in the colour and brightness of an image, produced by the sensor of a digital camera or as a byproduct of software manipulation; it looks a little like static. Many photographs and cameras have unique noise patterns, so it is possible to detect noise inconsistencies between authentic and tampered regions, especially when imagery has been combined from two or more photos.
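The noise-inconsistency idea can be sketched simply. This is not Adobe's noise-stream network; it is a generic illustration in which the "noise residual" is estimated by subtracting a local median from each pixel, and the residual variance is compared between two regions. A region spliced in from a different camera, with a different noise level, stands out.

```python
# Minimal sketch (not Adobe's system): estimate a noise residual by
# subtracting a 3x3 local median, then compare residual variance between
# regions. Spliced content from another camera tends to carry a
# different noise level than the authentic surroundings.
import random
import statistics

def noise_residual(image):
    """Pixel minus local median approximates the noise component."""
    h, w = len(image), len(image[0])
    res = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            res[y][x] = image[y][x] - statistics.median(window)
    return res

def region_variance(res, ys, xs):
    return statistics.pvariance([res[y][x] for y in ys for x in xs])

random.seed(0)
# Left half: quiet sensor (noise within +/-1); right half: "spliced"
# content with much stronger noise (within +/-20).
img = [[100 + (random.uniform(-1, 1) if x < 8 else random.uniform(-20, 20))
        for x in range(16)] for y in range(16)]

res = noise_residual(img)
authentic = region_variance(res, range(1, 15), range(1, 7))
tampered = region_variance(res, range(1, 15), range(9, 15))
# The spliced half shows a clearly higher residual variance.
```

Real forensic noise models are far more sophisticated, but the same contrast, consistent noise in authentic regions versus inconsistent noise in tampered ones, is what the noise stream exploits.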

Morariu said that while the techniques are still being perfected, he hopes that in future the algorithm can be extended to detect other forms of image manipulation.
