22 August 2025 | Magazine

From the black box to explainable AI

Students make artificial intelligence explainable in the Deep Learning Lab

How can the decisions made by artificial intelligence (AI) be made comprehensible? This question was addressed by bachelor’s and master’s students at TU Braunschweig during this summer semester as part of the eighth edition of the Deep Learning Lab. Working in teams, the participants developed innovative methods for explaining deep learning models in order to make neural networks, which are often criticised as ‘black boxes’, more transparent.

The visualisation of a bus (light blue), seen from the front, shows in red that the winning team’s AI model recognises the ‘bus’ class based in particular on the contours, the windscreen and the front apron. Image credits: PascalVOC dataset, IfN/TU Braunschweig

Artificial intelligence has become indispensable in many areas of life, from medical image analysis to autonomous driving. However, how a neural network arrives at its decision often remains a mystery to users. This is precisely where the research field of ‘explainable AI’ comes in: the aim is to make the decision-making processes of AI systems comprehensible to humans.

“Trust in AI can only develop if we can understand its decisions,” emphasises the Deep Learning Lab’s organisational team.

The challenge: making explanations visible

This year’s challenge focused on developing explanatory models for deep neural networks. The students worked with the PASCAL VOC 2012 image dataset and were asked to generate so-called ‘saliency maps’ – heat maps that show which areas of an image were particularly important for the model’s decision. The explanations were evaluated against two criteria. First, they should be as similar as possible to the explanations a human would give. Second, they should not only be plausible but also faithful: they should actually reflect how the AI arrived at its decision. In addition, the efficiency of the models played a role: a special ‘environmental prize’ was awarded for a project that achieved good results with low computing requirements.
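To illustrate the underlying technique: a widely used way to produce such saliency maps is Grad-CAM, which weights a network’s internal feature maps by the gradient of the class score. The following sketch is purely illustrative and makes several assumptions not taken from the article – a pretrained torchvision ResNet-50 as the classifier, standard ImageNet preprocessing and a placeholder image file name – since the teams’ actual models and tooling are not described.

```python
# Minimal Grad-CAM sketch, for illustration only: the pretrained ResNet-50,
# the preprocessing and the file name "bus.jpg" are assumptions.
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

# Capture the feature maps of the last convolutional block and their gradients.
store = {}
def fwd_hook(module, inputs, output):
    store["act"] = output            # (1, C, H, W)
def bwd_hook(module, grad_in, grad_out):
    store["grad"] = grad_out[0]      # gradient w.r.t. the feature maps

target_layer = model.layer4[-1]
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = Image.open("bus.jpg").convert("RGB")   # placeholder image path
x = preprocess(image).unsqueeze(0)

# Backpropagate the score of the predicted class.
logits = model(x)
cls = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, cls].backward()

# Grad-CAM: weight each feature map by its average gradient, combine, and
# rescale the result to the input resolution.
weights = store["grad"].mean(dim=(2, 3), keepdim=True)           # (1, C, 1, 1)
cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))  # (1, 1, H, W)
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                    align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)         # saliency map in [0, 1]
```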

The winning team of the Deep Learning Lab 2025. From left to right: Thomas Graave (IfN), Jennifer Ly (winner), Fabian Kollhoff (winner), Niklas Hahn (Robert Bosch GmbH), Aruhan (winner), Michael Hufnagel (Siemens Mobility GmbH), Björn Möller (IfN), Prof. Tim Fingscheidt (IfN). Photo credits: Andreas Gudat/TU Braunschweig.

Innovative approaches and team spirit

The teams used different methods to open the black box. The challenge was to find a balance between explainability, model accuracy and computing power. “Solving the challenge was a combination of frustration and fun. I learned a lot!” reported one participant. In the end, the team of students Fabian Kollhoff, Jennifer Ly and Aruhan won the €600 grand prize with an innovative combination of a vision transformer and a modified Grad-CAM variant. The €450 environmental prize for the lowest GPU computing time combined with very good performance went to Mohammad Rezaei Barzani and Nils-André Forjahn.
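The article does not detail the winning team’s modified Grad-CAM variant, but the sketch below shows one common way the idea is carried over to a vision transformer: the patch tokens of an encoder block are reshaped back into a spatial grid and then weighted by their gradients, just like convolutional feature maps. The torchvision vit_b_16 model and the choice of hooked layer are assumptions for illustration, not the team’s actual implementation.

```python
# Sketch of Grad-CAM carried over to a vision transformer (one common approach;
# the winning team's variant is not described). Model and hooked layer are assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1).eval()

store = {}
def fwd_hook(module, inputs, output):
    store["act"] = output            # (1, 197, 768): class token + 14x14 patch tokens
def bwd_hook(module, grad_in, grad_out):
    store["grad"] = grad_out[0]

# Hook the second-to-last encoder block: in this architecture only the class
# token feeds the classifier head, so the patch tokens of the very last block
# would receive no gradient at all.
block = model.encoder.layers[-2]
block.register_forward_hook(fwd_hook)
block.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)      # stand-in for a preprocessed VOC image
logits = model(x)
cls = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, cls].backward()

# Drop the class token, reshape the 196 patch tokens into a 14x14 grid, and
# apply the usual Grad-CAM weighting over the channel dimension.
act = store["act"][0, 1:, :].reshape(14, 14, -1).permute(2, 0, 1)    # (768, 14, 14)
grad = store["grad"][0, 1:, :].reshape(14, 14, -1).permute(2, 0, 1)
weights = grad.mean(dim=(1, 2), keepdim=True)                        # (768, 1, 1)
cam = F.relu((weights * act).sum(dim=0))                              # (14, 14)
cam = F.interpolate(cam[None, None], size=(224, 224), mode="bilinear",
                    align_corners=False)[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)              # saliency map in [0, 1]
```

Which layer to attach to, and how to turn token gradients into a meaningful heat map, is exactly the kind of design decision the teams had to weigh against accuracy and GPU time.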

The winners of the Deep Learning Lab 2025 environmental award. From left to right: Thomas Graave (IfN), Nils-André Forjahn (winner), Mohammad Rezaei Barzani (winner), Michael Hufnagel (Siemens Mobility GmbH), Niklas Hahn (Robert Bosch GmbH), Björn Möller (IfN), Prof. Tim Fingscheidt (IfN). Photo credits: Andreas Gudat/TU Braunschweig.

Outlook

The Deep Learning Lab’s closing event on 11 July 2025 gave participants the opportunity to present their results and exchange ideas with sponsors and industry experts. A buffet dinner provided an opportunity not only to celebrate, but also to discuss the future of explainable AI. The organising team was enthusiastic about the students’ commitment and creativity: “We were really positively surprised by some of the solutions! All in all, the challenge showed how important and exciting the topic of explainable AI is.”

Text: Thomas Graave/IfN