13 August 2020 | Press releases

Computer Scientists at TU Braunschweig Develop Defence Against Image Manipulation

What can be done against the manipulation of resized images

Whether uploaded to a website, sent via messenger, posted on social media or processed by artificial intelligence – digital images are often reduced in size using algorithms. A year ago, Chinese scientists discovered that images can be manipulated inconspicuously when they are scaled down. A research team from the Institute of System Security at Technische Universität Braunschweig has now studied this attack technique in depth and developed a defence. They present their results today, 13 August 2020, at the USENIX Security Symposium, one of the world’s most important security conferences.

When reducing the size of a digital image, scaling algorithms do not treat all image points (pixels) equally. Depending on the image size and the algorithm, many pixels contribute little or nothing to the reduced image. This is where attackers can strike, changing only those pixels that are relevant to the scaling. “Visually, this is almost unnoticeable; there is only a slight noise in the image. If the image is then reduced in size, only the manipulated points remain and produce a new image that the attacker can freely determine,” explains Professor Konrad Rieck, head of the Institute of System Security.
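The principle can be illustrated with a small Python sketch (our own toy example, not the researchers’ code): with simple nearest-neighbour downscaling, each output pixel is copied from a single source pixel, so an attacker who overwrites only those sampled pixels, here fewer than two percent of the image, completely controls what the reduced image shows. Real attacks target common interpolation methods and keep the changes far less visible.

    import numpy as np

    def nearest_neighbour_downscale(img, out_h, out_w):
        # Each output pixel is copied from exactly one source pixel;
        # all other source pixels are ignored by the scaling.
        h, w = img.shape[:2]
        rows = np.arange(out_h) * h // out_h
        cols = np.arange(out_w) * w // out_w
        return img[np.ix_(rows, cols)]

    rng = np.random.default_rng(0)
    source = rng.integers(0, 256, size=(512, 512), dtype=np.uint8)  # image a human inspects
    target = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)    # image the attacker wants

    # Overwrite only the pixels that the scaler actually reads
    # (4,096 of 262,144 pixels, about 1.6 percent).
    rows = np.arange(64) * 512 // 64
    cols = np.arange(64) * 512 // 64
    attacked = source.copy()
    attacked[np.ix_(rows, cols)] = target

    # The full-size image is barely changed, but its reduced version
    # is exactly the image chosen by the attacker.
    assert np.array_equal(nearest_neighbour_downscale(attacked, 64, 64), target)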

Threat to learning-based systems

Such attacks are a particular threat to learning-based systems that work with artificial intelligence (AI): scaling images is a very common preprocessing step when images are analysed with machine learning. “With this attack technique, humans see a different image than the learning process. The human sees the original image, while the artificial intelligence processes the scaled-down, manipulated image and learns from it,” says Rieck.
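A typical machine-learning pipeline contains a preprocessing step along the lines of the following sketch (file name and target size are hypothetical): the model never sees the original image, only the small version produced by the scaling.

    from PIL import Image
    import numpy as np

    # A human reviewing the data set looks at the full-resolution file,
    # but the learning procedure is only ever fed the 224x224 version.
    img = Image.open("stop_sign.jpg")               # hypothetical training image
    small = img.resize((224, 224), Image.BILINEAR)  # the scaling step targeted by the attack
    features = np.asarray(small, dtype=np.float32) / 255.0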

An example: to train an AI system that is supposed to recognise road signs, a human presents the learning procedure with various images of, say, stop signs. If these images have been manipulated, the scaling inside the AI system produces a completely different image, for example a right-of-way sign. The system learns a false association and later fails to recognise stop signs. Such attacks threaten all security-relevant applications in which images are processed: the image analysis can be sabotaged unnoticed and lead to false predictions.

Defence made in Braunschweig

But how can you protect yourself against such attacks? Attackers exploit the fact that not all pixels are equally involved in the image reduction. “This is exactly where our defence comes in: we have developed a method that ensures that all pixels are used equally for the reduction,” says Konrad Rieck. “Our method determines which pixels are relevant for scaling and cleverly incorporates the rest of the image as well. Visually, the change is imperceptible, yet it makes an attack impossible.” The defence can easily be integrated into existing AI systems, because it requires no changes to either the image processing or the learning process. “So far, no cases of attack have been reported. We hope that our analysis and defence will help to prevent this from happening in the future,” says Rieck.
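The exact method is described in the team’s USENIX paper; the sketch below is only a simplified illustration of the underlying idea, averaging over whole blocks so that every pixel contributes equally to the reduced image, and not the institute’s actual implementation.

    import numpy as np

    def area_downscale(img, out_h, out_w):
        # Average over complete source blocks so that every pixel contributes
        # to the result (assumes the source dimensions are exact multiples of
        # the output dimensions, e.g. 512x512 -> 64x64).
        h, w = img.shape[:2]
        bh, bw = h // out_h, w // out_w
        blocks = img[:out_h * bh, :out_w * bw].reshape(out_h, bh, out_w, bw)
        return blocks.mean(axis=(1, 3))

    # Against such scaling, modifying the roughly 1.6 percent of pixels from
    # the attack sketch above shifts each output value only slightly; the
    # attacker can no longer dictate the content of the reduced image.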