Explainable Artificial Intelligence: Establishing comprehensible AI (ExplAIn)

From image recognition to virtual assistants, machine learning (ML) is already being used by companies in a variety of ways. The added value: production processes can be adapted autonomously, or even fully pre-planned and regulated, using machine learning methods. However, some companies do not trust artificial intelligence because of situational errors, a lack of traceability, and a lack of control. These reservations are not weaknesses of artificial intelligence itself, but weaknesses in how its applications are controlled. In many cases, there is a lack of practical control mechanisms that make existing and new ML systems more comprehensible and secure.

The aim of the project ‘Explainable artificial intelligence for secure and trustworthy industrial applications (ExplAIn)’ is therefore to make existing and new machine learning methods more comprehensible and secure. To this end, a system based on Explainable Artificial Intelligence (XAI) is being developed. XAI comprises processes and methods that enable users to understand and trust the results generated by machine learning. The XAI system is thus intended to establish safe and traceable AI applications in practice.
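The project description does not name specific XAI techniques, so as a minimal illustration of the idea, the sketch below uses permutation feature importance (via scikit-learn, an assumed tooling choice) on a synthetic dataset: each input feature is shuffled in turn, and the resulting drop in model accuracy indicates how strongly the model relies on that feature, making the model's behaviour more traceable.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for an industrial dataset (hypothetical example).
X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the drop in accuracy; a larger drop
# means the model depends more heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```

Rankings like these are one way an XAI system can surface which inputs drive a model's decisions, giving operators a concrete handle for checking and trusting automated results.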