Recently, the trend to implement Machine Learning (ML) algorithms in various engineering disciplines has significantly increased, e.g., in computer vision, robotics, and biomedical engineering. A key enabler of these ML algorithms is the vast abundance of data (e.g., picture, video, and sensor data) together with the increase in computational power (e.g., the use of GPUs), which can be used to train complex ML models. Despite the ability of these ML algorithms to model highly complex functions and perform classification tasks with high accuracy, the lack of validation methods for ML algorithms remains a hindering factor in their application to safety-related functions, e.g., highly automated driving functions.
First, the difficulty of explaining the classification decisions of ML algorithms makes validation difficult, as these algorithms are mostly seen as black-box methods. Moreover, training ML algorithms is in general non-deterministic, which creates the necessity to obtain high-quality training data to ensure that the ML algorithm learns meaningful representations and generalises correctly. In addition, since ML methods rely purely on data, existing physical models are ignored. To address this, the abundance of both data-based and model-based methods enables engineers to fuse the outputs of independently optimised algorithms to create confidence intervals or plausibility envelopes around the outputs of ML algorithms. With these quality metrics and plausibility claims of ML algorithms, we aim to create methods which can be used to validate safety-related functions in highly automated driving.
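The fusion idea above can be sketched minimally: compare an ML output against an independently computed model-based estimate and accept it only if it lies inside a tolerance band around that estimate. The function names, the simple kinematic model, and the tolerance values below are illustrative assumptions for this sketch, not the method developed in this work.

```python
# Illustrative sketch (assumptions, not the paper's method): a plausibility
# envelope that checks an ML estimate against an independent model-based value.

def physics_model_estimate(speed_mps: float, dt_s: float) -> float:
    """Hypothetical model-based prediction: distance travelled at
    constant speed over a short time step (simple kinematics)."""
    return speed_mps * dt_s

def plausibility_envelope(model_value: float, rel_tol: float = 0.15,
                          abs_tol: float = 0.5) -> tuple[float, float]:
    """Build an interval around the model-based value; the tolerances
    are assumed placeholders, not calibrated safety bounds."""
    margin = max(abs(model_value) * rel_tol, abs_tol)
    return model_value - margin, model_value + margin

def is_plausible(ml_value: float, model_value: float) -> bool:
    """Accept the ML output only if it lies inside the envelope
    derived from the independent model-based estimate."""
    lo, hi = plausibility_envelope(model_value)
    return lo <= ml_value <= hi

# Example: ML distance estimate vs. kinematic model over 0.1 s at 20 m/s.
model_d = physics_model_estimate(20.0, 0.1)   # 2.0 m, envelope [1.5, 2.5]
print(is_plausible(2.1, model_d))             # inside the envelope -> True
print(is_plausible(4.0, model_d))             # clearly outside    -> False
```

In a real system the envelope width would be derived from validated model and sensor uncertainties rather than fixed tolerances; the sketch only shows the fusion pattern of bounding one estimator with another.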