Reality is a complex environment whose state can change unpredictably at any time. Within it, automated vehicles must perceive their surroundings and keep track of static and dynamic objects under all environmental conditions to ensure passenger safety. However, correct perception is often not possible due to external influences that degrade sensor performance. Such influences, which lead to degradation in sensory and algorithmic performance, include fog, rain, and snow. Small to microscopic water droplets in the air scatter light rays before they reach the sensor, resulting in blurred contours and a loss of contrast in camera images. Moreover, rain leaves roads wet and reflective, while snow changes scene geometry and occludes objects. All of these effects severely degrade the quality of camera-based object detection.
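The contrast loss caused by scattering is commonly described with the Koschmieder atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)). The following sketch applies this model to a clean image; the parameter names and values (scattering coefficient `beta`, airlight `A`) are illustrative assumptions, not values from this thesis.

```python
import numpy as np

def apply_fog(image, depth, beta=0.08, airlight=0.9):
    """Attenuate a clean image with a homogeneous fog layer.

    Illustrative sketch of the Koschmieder scattering model:
        I = J * t + A * (1 - t),  t = exp(-beta * depth)

    image:  float array in [0, 1], shape (H, W) or (H, W, 3)
    depth:  per-pixel scene depth in metres, shape (H, W)
    beta:   scattering coefficient (denser fog = larger beta)
    """
    t = np.exp(-beta * depth)          # transmission along each viewing ray
    if image.ndim == 3:
        t = t[..., None]               # broadcast over colour channels
    return image * t + airlight * (1.0 - t)

# Image contrast (here: standard deviation) drops as fog thickens:
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
depth = np.full((64, 64), 30.0)        # flat scene 30 m from the camera
foggy = apply_fog(clean, depth, beta=0.1)
assert foggy.std() < clean.std()
```

With increasing `beta` the transmission approaches zero and every pixel converges towards the airlight value, which is exactly the washed-out, low-contrast appearance described above.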
To tackle this problem, large amounts of real-world data are collected to improve AI-based object detection algorithms. However, the collected data are biased towards good visibility conditions in which objects are clearly visible in the scene. This is because (extreme) adverse weather conditions are rare and cannot be generated in a controllable and reproducible way. The resulting data bias causes object detection algorithms to perform worse, and unpredictably, when exposed to poor visibility conditions. To enable automated driving, it is therefore essential either to develop new weather-robust sensors or to increase the robustness of perception algorithms so that they can handle even extreme weather conditions.
The aim of the planned thesis is to push the boundaries of camera-based object detection algorithms in poor visibility conditions. This goal is to be achieved by augmenting the training space of AI-based object detection algorithms with synthetic training data, so that they learn more robust features that allow detection even under limited visibility.