31 August 2021 | Magazine

Defending AI against pixel forgers
Student Competition: The Deep Learning Lab 2021

For machines to recognize images, they categorize each pixel individually. Nowadays, this is done with artificial intelligence methods. But what happens when a cyberattack distorts the input image pixel by pixel? At the Institute for Communications Technology at TU Braunschweig, students learn more than just the theory of how to counter this: for the fourth time, the Deep Learning Lab offered a hands-on challenge in machine learning. This year, the task tackled so-called noise patterns.


This year’s challenge in one image: top left, the image attacked with a noise pattern; top right, how the artificial intelligence would color-code the pixels without the noise pattern; bottom left, the semantic segmentation result corrupted by the noise pattern; bottom right, how the Deep Learning Lab winners reduced the noise and came close again to the optimum at top right. Image credit: Cityscapes Dataset, IfN/TU Braunschweig

“Semantic segmentation” is the name of the process behind the colorful “pseudo-labels” in the cover image. In this process, an artificial intelligence evaluates each pixel of an incoming image and marks it with a color code: vehicles in blue, pedestrians in red, green spaces in green. For applications in automated road traffic or medical imaging, automatic image recognition must be absolutely reliable. Accordingly, everything should be recognized correctly even in fog or under camera noise. In extreme cases, the software even has to hold up against noise patterns introduced from outside. Defending against such noise patterns is both a research topic and sought-after know-how in industry.
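A minimal sketch of this per-pixel classification in Python, assuming a pretrained torchvision segmentation model and a toy three-entry color map; the lab’s actual networks and the full Cityscapes color scheme are not shown here:

```python
# Sketch of semantic segmentation: every pixel gets a class index,
# which is then mapped to a color for visualization.
import torch
import torchvision

# Illustrative choice of network; not the model used in the lab.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

# Toy color map: class index -> RGB (colors chosen for illustration only).
PALETTE = torch.tensor([
    [0, 0, 0],      # background: black
    [0, 0, 255],    # vehicles: blue
    [255, 0, 0],    # people: red
], dtype=torch.uint8)

def segment(image: torch.Tensor) -> torch.Tensor:
    """image: float tensor of shape (3, H, W), values in [0, 1]
    (in practice the input should also be normalized to the model's statistics)."""
    with torch.no_grad():
        logits = model(image.unsqueeze(0))["out"]       # (1, num_classes, H, W)
    labels = logits.argmax(dim=1).squeeze(0)            # (H, W): one class per pixel
    # Clamp the indices so the toy palette covers every predicted class.
    return PALETTE[labels.clamp(max=len(PALETTE) - 1)]  # (H, W, 3) color image
```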

“In this year’s Deep Learning Lab, we wanted to delve into this aspect of machine learning,” says Professor Tim Fingscheidt. “As a practical consolidation after the theory lectures, we divided our program into three practical phases: first, an introduction to the Python programming language. After that, we give the participants step by step more complex tasks on topics from the lecture. They learn, for example, how to work with compute clusters and how to optimize machine learning models. The third step is the actual challenge, in which the students have to solve a demanding problem on their own. This year, nine teams of three mastered this final exercise.”

Training with the real thing

Deliberately introduced noise patterns are particularly challenging. These attacks compute exactly the perturbation that tricks the artificial intelligence most effectively. To the human eye they are inconspicuous: only a few colored pixels show up in the image (“Attacked” in the cover image). For the machine, however, the original image is completely distorted (“Baseline” in the cover image). An automated vehicle, for example, would then no longer see other road users.
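One common recipe for such a perturbation is the fast gradient sign method (FGSM): follow the gradient of the segmentation loss with respect to the input pixels, but cap every change at a tiny epsilon so the image still looks unchanged. The article does not say which attack was used in the lab, so the following is only an illustrative sketch; it reuses the torchvision-style model(...)["out"] output convention from the snippet above:

```python
# Sketch of an FGSM-style noise pattern (illustrative, not the lab's attack).
import torch
import torch.nn.functional as F

def fgsm_noise(model, image, labels, epsilon=2 / 255):
    """Compute a barely visible perturbation that maximizes the segmentation loss.

    image:   float tensor (N, 3, H, W) with values in [0, 1]
    labels:  long tensor (N, H, W), correct class per pixel
    epsilon: maximum change per pixel; small enough to stay inconspicuous
    """
    image = image.clone().requires_grad_(True)
    logits = model(image)["out"]             # (N, num_classes, H, W)
    loss = F.cross_entropy(logits, labels)   # averaged per-pixel loss
    loss.backward()
    # Step in the direction that increases the loss, limited to +/- epsilon.
    noise = epsilon * image.grad.sign()
    attacked = (image + noise).clamp(0, 1).detach()
    return attacked, noise
```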

The nine student teams in the Deep Learning Lab were tasked with protecting their networks from the damaging effects of such noise patterns. To do this, they tried out several approaches. “Adversarial training”, for example, in which the network already encounters noise patterns during training, produced good results. Alexandre Castadere, Yufeng Du and Tim Kayser (“Winner Team” in the cover image) were particularly successful: they improved their network over several training stages, with the network from the previous stage generating the attacks for the following stage. In this way, the network’s robustness to noise patterns increased continuously. The team won first place in the Deep Learning Lab, endowed with 450 euros. Second place (300 euros) went to Aziz Hakiri, Nils Hartmann and Jonas Koll.
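A rough sketch of such a staged adversarial-training loop, under the same assumptions as the snippets above; the optimizer, the number of rounds and the make_attack helper (for instance the fgsm_noise function sketched earlier) are illustrative choices, not the winners’ actual code:

```python
# Staged adversarial training: in each round, a frozen copy of the previous
# network produces the noise patterns the next network is trained against.
import copy
import torch

def staged_adversarial_training(model, train_loader, make_attack,
                                rounds=3, epochs_per_round=5):
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(rounds):
        # Freeze the current network; this copy only generates attacks.
        attacker = copy.deepcopy(model).eval()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
        for _ in range(epochs_per_round):
            for images, labels in train_loader:
                # Attack the batch with the *previous* round's network,
                # e.g. via the fgsm_noise sketch above.
                attacked, _ = make_attack(attacker, images, labels)
                optimizer.zero_grad()
                logits = model(attacked)["out"]
                loss_fn(logits, labels).backward()
                optimizer.step()
    return model
```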

All participants of the Deep Learning Lab at the award ceremony. Photo credit: Institute for Communications Technology/TU Braunschweig

Reaching the goal with few resources

Julian Bartels, Timo Prescher and Jannik von der Heide won two prizes with their team. On the one hand, their network achieved the third-best result overall (150 euros in prize money). On the other hand, they needed only 880 hours of GPU computing time and thus received the environmental prize (50 euros in prize money). After all, training artificial intelligence requires large amounts of computing power and therefore energy. By comparison, the runners-up needed almost four times as much computing time.

Several companies supported the Deep Learning Lab and donated the prize money. The award ceremony also included some networking opportunities for the students. The practically trained graduates of the Challenge are sought-after specialists in the field of machine learning.