
Introducing SAFEXPLAIN: Trustworthy AI through Deep Learning solutions that meet European Safety Standards for Industry

NEWS
Thu 21 Dec 2023

Autonomous systems require advanced software functionalities, like object detection, tracking and prediction, which can only be handled by deep learning (DL) solutions. However, these solutions are inherently at odds with functional safety requirements because they lack transparency and explainability.

SAFEXPLAIN digitizes and tailors DL solutions to the certification and qualification needs of different industrial domains to ensure the safety of these new solutions and to facilitate their uptake. The project integrates explainable DL solutions into a toolset and platform that are then tested and evaluated by the project's automotive, rail and space case studies.

In the video, Jaume Abella, SAFEXPLAIN coordinator, and CAOS Group Manager at the Barcelona Supercomputing Center-Centro Nacional de Supercomputación, highlights the project's commitment to ensuring that DL solutions align with the specific certification and qualification requirements of diverse industries. He emphasizes:

‘customizing and digitizing deep learning solutions lets us guarantee that our solutions are explainable and trustworthy, and that they meet the strict safety standards for use in European industries’.