AI in an inclusive working world
Prof. Jochen Steil on the use of artificial intelligence for more participation in working life
As of 2020, almost eight million people in Germany were living with an impairment, a third of them of working age. The publication “Mit KI zu mehr Teilhabe in der Arbeitswelt” (“With AI towards more participation in the working world”) by the Plattform Lernende Systeme sheds light on how artificial intelligence (AI) can support these people in entering and participating in the working world. We spoke with Professor Jochen Steil, head of the Institute of Robotics and Process Control at TU Braunschweig and lead author of the publication, about the potential and possible applications of AI technologies, as well as the challenges that arise in designing an inclusive working world.
What contribution can AI make in an inclusive working world?
AI technologies can make the working world more accessible and inclusive. Robotics and exoskeletons, a kind of supportive corset, can already assist people with impairments in physically demanding tasks. AI can also make communication barrier-free, for example by converting complex texts into plain language and thus making information accessible to all, or by translating sign language into text or speech in real time, which makes it easier for deaf and hearing people to communicate with each other. The prerequisite is that the systems are tailored to people’s individual needs, so that they make it easier to participate in working life or open up new work activities altogether.
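To make the plain-language example more concrete, here is a minimal sketch assuming the Hugging Face transformers library and a general-purpose instruction-tuned model; the model name is only a stand-in, and a production system would need a model trained specifically for plain language (e.g. German Leichte Sprache):

```python
# Minimal sketch: rewriting a complex sentence in plainer language with a
# general-purpose text-to-text model. The model choice is an assumption for
# illustration, not a recommendation for accessible communication in practice.
from transformers import pipeline

simplifier = pipeline("text2text-generation", model="google/flan-t5-base")

complex_text = (
    "Employees are required to submit their vacation requests no later than "
    "four weeks prior to the intended commencement of leave."
)

result = simplifier(
    "Rewrite in plain, easy-to-read language: " + complex_text,
    max_new_tokens=60,
)
print(result[0]["generated_text"])
```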
What are the necessary conditions for the development and use of AI applications in an inclusive world of work?
In order to expand or strengthen the individual competences of people with impairments through AI technologies, it is essential to actively involve them in the development and inclusive design of the systems. To arrive at technologies that fit their needs as closely as possible, we have to keep the relevant target groups in mind and integrate people with impairments into the development process as early as possible.
How can AI systems be designed to accommodate the needs of people with different impairments?
Adaptive learning and assistance systems that can dynamically adjust to different individual and situational needs are a promising solution. However, this adaptation cannot be purely data-driven, because specific needs rarely follow general patterns and it is difficult to collect sufficient amounts of realistic data. Personalised methods are therefore also required. These combine “one-shot learning” (a machine-learning approach in which a model learns from a single example instead of extensive training data), classical adaptation through control loops, and AI-enhanced perception in the assisting systems. Such adaptive systems hardly exist at present, so there is considerable need for research.
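To illustrate the one-shot idea mentioned above, here is a minimal, simplified sketch: it assumes the assistance system already turns sensor input into fixed-length feature vectors (an assumption made for illustration) and learns a new user-specific command from a single demonstration by storing it as a prototype and classifying later input by nearest neighbour.

```python
# Minimal sketch of one-shot personalisation. A single demonstration defines a
# class prototype; recognition assigns the label of the closest prototype.
import numpy as np

class OneShotCommandRecognizer:
    def __init__(self):
        self.prototypes = {}  # label -> feature vector

    def learn_from_single_example(self, label, features):
        # One-shot step: one demonstration is enough to define the class.
        self.prototypes[label] = np.asarray(features, dtype=float)

    def recognize(self, features):
        # Nearest-neighbour classification in feature space.
        x = np.asarray(features, dtype=float)
        return min(
            self.prototypes,
            key=lambda label: np.linalg.norm(self.prototypes[label] - x),
        )

# Usage: one demonstration per personalised command, then immediate recognition.
recognizer = OneShotCommandRecognizer()
recognizer.learn_from_single_example("grip_open", [0.9, 0.1, 0.0])
recognizer.learn_from_single_example("grip_close", [0.1, 0.8, 0.2])
print(recognizer.recognize([0.85, 0.15, 0.05]))  # -> "grip_open"
```

In a real assistance system, this prototype step would be combined with the control loops and perception methods mentioned above; the sketch only shows why a single example can already be useful for personalisation.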
How can AI systems be prevented from reinforcing existing prejudices?
To ensure fair and non-discriminatory AI, it is crucial that the training data for AI systems are varied and representative, reflecting the diversity of the people who will use them. This requires careful selection and preparation of the data, which can be ensured through transparent and responsible data practices.
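As a small illustration of what “representative” can mean in practice, here is a sketch that counts how often each group appears in a dataset and flags groups that fall below a chosen share; the attribute name "impairment_type" and the threshold are purely hypothetical.

```python
# Minimal sketch: report group counts and shares in a labelled dataset and
# flag under-represented groups. Field names and thresholds are illustrative.
from collections import Counter

def representation_report(records, attribute, min_share=0.05):
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = {
            "count": n,
            "share": round(share, 3),
            "under_represented": share < min_share,
        }
    return report

data = [
    {"impairment_type": "none"}, {"impairment_type": "none"},
    {"impairment_type": "hearing"}, {"impairment_type": "visual"},
    {"impairment_type": "none"},
]
print(representation_report(data, "impairment_type", min_share=0.25))
```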
How can a balanced approach between data use and data protection be ensured, especially in the context of AI applications in an inclusive working world?
The data on which AI systems are trained touch on sensitive personal areas such as health data or learning progress. The protection of this data and of personal rights is therefore a key challenge. For example, data collected to evaluate learning behaviour and progress in order to provide individualised learning content must not be used to compare employees with one another or to favour those who learn more quickly. The development and implementation of these systems therefore requires a constant balancing act between optimising the usability of the systems and protecting the data and privacy rights of users.
How can trust be fostered regarding the protection of sensitive data?
It is important that it is clear to employees at all times what data is collected by the respective technology, how and where this data is processed, and who has access to it and in what form, for example anonymised. At the same time, efforts to ensure data protection at the technical level, known as privacy-by-design, should be supported and further developed.
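One way to picture privacy-by-design is the following minimal sketch, in which personal identifiers are replaced by salted hashes before any learning data is processed further and only an aggregated progress value is reported; the field names and the salt handling are illustrative assumptions, not a complete data-protection concept.

```python
# Minimal privacy-by-design sketch: pseudonymise identifiers, pass on only
# aggregated values instead of individual records.
import hashlib
import statistics

SALT = b"rotate-this-secret-regularly"  # illustrative; manage via a secrets store

def pseudonymise(employee_id: str) -> str:
    return hashlib.sha256(SALT + employee_id.encode("utf-8")).hexdigest()[:16]

def aggregate_progress(events):
    # events: [{"employee_id": ..., "progress": float between 0 and 1}, ...]
    pseudonymised = [
        {"user": pseudonymise(e["employee_id"]), "progress": e["progress"]}
        for e in events
    ]
    # Only a group-level average leaves this function, not individual records.
    return {
        "mean_progress": statistics.mean(e["progress"] for e in pseudonymised),
        "n": len(pseudonymised),
    }

print(aggregate_progress([
    {"employee_id": "emp-001", "progress": 0.6},
    {"employee_id": "emp-002", "progress": 0.8},
]))
```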
What other challenges arise in the use of AI technologies in an inclusive working world?
The use of AI technologies must not lead to increased exclusion, especially if tasks at work become more interdisciplinary and complex and qualification requirements rise. Adapting to new circumstances in the working world through further training can be considerably harder for people with learning difficulties or mental impairments. The same applies to a working world that relies more heavily on interdisciplinary communication: people with impairments in social and communication skills may face greater difficulties here than their colleagues. It is therefore crucial that we take these aspects into account when implementing AI technologies.
Do you see more risks or opportunities?
Despite all the risks associated with the introduction of AI, we see above all the opportunities for an inclusive working world! The key to success lies in developing suitable technologies and creating the right framework conditions, whether in corporate culture, in the promotion and regulation of technologies, or in barrier-free education with adapted curricula. After all, participation in the working world begins with inclusive education.