Picture of the Month: Dreamy Distortions of Artificial Intelligence Submitted by the Institute for System Security
Klimt, Hundertwasser, or who else created this psychedelic picture? In fact, the photo, in which the BRICS building can be made out in the background, comes from the Institute for System Security at TU Braunschweig. It offers an exciting insight into the world of artificial intelligence.
In the photo, the scientists have amplified the patterns that an artificial neural network “sees” internally. To classify an image as a building, a dog or a street sign, for example, the network first extracts fine block structures, which have been amplified throughout the photo. Deeper in the network, these are processed into more complex patterns, easily recognizable here as dog snouts. The example shows visually how we can better understand artificial neural networks before they are deployed, for instance in autonomous vehicles.
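One way such pattern amplification can be realized is activation maximization, the technique behind “deep dream” images: the input image is adjusted by gradient ascent so that a chosen internal activation grows stronger. The numpy-only sketch below illustrates the idea with a single hand-written 3×3 filter standing in for a learned network layer; the filter, image size and step size are illustrative assumptions, not the institute’s actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# A fixed 3x3 edge filter standing in for one learned "feature" of a
# network (illustrative only -- not a real trained model).
filt = np.array([[-1.0, -1.0, -1.0],
                 [-1.0,  8.0, -1.0],
                 [-1.0, -1.0, -1.0]])

def responses(img):
    """Filter response at every valid 3x3 position (no padding)."""
    h, w = img.shape
    return sum(filt[i, j] * img[i:h - 2 + i, j:w - 2 + j]
               for i in range(3) for j in range(3))

def objective(img):
    """Mean ReLU response of the filter -- the activation to amplify."""
    return np.maximum(responses(img), 0.0).mean()

def gradient(img):
    """Gradient of the objective with respect to every input pixel."""
    h, w = img.shape
    mask = (responses(img) > 0).astype(float) / ((h - 2) * (w - 2))
    g = np.zeros_like(img)
    for i in range(3):
        for j in range(3):
            g[i:h - 2 + i, j:w - 2 + j] += filt[i, j] * mask
    return g

img = rng.random((32, 32))
before = objective(img)
for _ in range(50):                 # gradient ascent on the input image
    img += 0.1 * gradient(img)
after = objective(img)              # the pattern the filter "sees" grows
```

In a real network the same ascent is run against the activations of a deep layer, which is why complex motifs such as dog snouts emerge rather than simple edges.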
How does AI adapt to new threats?
As part of their research at the Institute for System Security, the scientists are investigating the use of artificial intelligence in security-critical applications. They study how intelligent systems can be trained on data and thus adapt to new threats. In this way, the researchers can, for example, develop new methods for detecting malware.
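How such a data-driven detector learns from examples can be sketched in miniature with logistic regression; the feature vectors below (think counts of suspicious API calls) are synthetic placeholders, not real malware telemetry or the institute’s method.

```python
import numpy as np

rng = np.random.default_rng(2)

def train(X, y, steps=500, lr=0.5):
    """Fit logistic regression by gradient ascent on the log-likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted malware probability
        w += lr * X.T @ (y - p) / len(y)     # move toward the labels
    return w

# Synthetic "behavior features" (illustrative assumption): benign samples
# cluster around -1 in each feature, malicious samples around +1.
X = np.vstack([rng.normal(-1.0, 0.3, size=(50, 4)),
               rng.normal(+1.0, 0.3, size=(50, 4))])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = train(X, y)
pred = (1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5
acc = float(np.mean(pred == y))    # the detector is learned from data alone
```

Because the decision rule is learned rather than hand-coded, retraining on fresh samples is what lets such a system adapt to previously unseen threats.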
However, today’s artificial intelligence methods are themselves vulnerable. Attackers can either manipulate the learning process itself or provoke a desired output through targeted manipulation of the input. It has been shown, for example, that by affixing stickers to road signs it is possible to deliberately mislead the sign recognition of autonomous vehicles. Because of this, the institute’s scientists are also investigating the security of artificial intelligence itself.
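The sticker attack is an instance of an adversarial example: a small, targeted perturbation of the input that flips a classifier’s decision. The numpy sketch below shows the idea in the spirit of the fast gradient sign method on a toy linear “sign classifier”; the weights, input dimension and step size `eps` are illustrative assumptions, not a real recognition model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear "road-sign classifier": random weights stand in for a
# trained model (illustrative only).
w = rng.standard_normal(64)

def predict(x):
    """Probability that input x is recognized as, say, a stop sign."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# A clean input the model recognizes confidently (probability > 0.5).
x = 0.05 * np.sign(w)

# FGSM-style perturbation: step against the sign of the gradient of the
# class score, which for this linear model is simply sign(w).
eps = 0.1
x_adv = x - eps * np.sign(w)

p_clean, p_adv = predict(x), predict(x_adv)
# Each component changes by at most 0.1, yet the decision flips.
```

The per-pixel change is tiny, which is why such perturbations can be inconspicuous to humans, like a sticker on a sign, while reliably fooling the model.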
A key issue here is a better understanding of the learned relationships themselves – similar to how an exam reveals what students have actually taken away from a lecture.