Editor's note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them.

Autonomous vehicles don't just need to detect the moving traffic that surrounds them; they must also be able to tell what isn't in motion.

At first glance, camera-based perception may seem sufficient to make these determinations. However, low lighting, inclement weather or conditions where objects are heavily occluded can affect cameras' vision. This means diverse and redundant sensors, such as radar, must also be capable of performing this task. However, additional radar sensors that leverage only traditional processing may not be enough.

In this DRIVE Labs video, we show how AI can address the shortcomings of traditional radar signal processing in distinguishing moving and stationary objects to bolster autonomous vehicle perception.

Traditional radar processing bounces radar signals off of objects in the environment and analyzes the strength and density of the reflections that come back. If a sufficiently strong and dense cluster of reflections comes back, classical radar processing can determine this is likely some kind of large object. If that cluster also happens to be moving over time, then that object is probably a car.

While this approach can work well for inferring a moving vehicle, the same may not be true for a stationary one. In that case, the object produces a dense cluster of reflections but doesn't move. According to classical radar processing, this means the object could be a railing, a broken-down car, a highway overpass or some other object; the approach often has no way of distinguishing which.

One way to overcome this limitation is with AI in the form of a deep neural network (DNN). Specifically, we trained a DNN to detect moving and stationary objects, as well as to accurately distinguish between different types of stationary obstacles, using data from radar sensors.
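The classical-processing logic described above (dense, strong cluster → large object; cluster displacing over time → probably a moving car; otherwise → ambiguous stationary object) can be sketched roughly as follows. This is a minimal illustration, not NVIDIA's actual pipeline: the `Reflection` type, the thresholds, and the function names are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical minimal model of a single radar return: position plus strength.
@dataclass
class Reflection:
    x: float
    y: float
    strength: float

def is_large_object(cluster: List[Reflection],
                    min_points: int = 5,
                    min_strength: float = 0.5) -> bool:
    """Classical-style test: a sufficiently dense and strong cluster of
    reflections is treated as some kind of large object."""
    if len(cluster) < min_points:
        return False
    avg = sum(r.strength for r in cluster) / len(cluster)
    return avg >= min_strength

def centroid(cluster: List[Reflection]) -> Tuple[float, float]:
    n = len(cluster)
    return (sum(r.x for r in cluster) / n, sum(r.y for r in cluster) / n)

def classify(prev_frame: List[Reflection],
             curr_frame: List[Reflection],
             move_thresh: float = 0.5) -> str:
    """If the cluster's centroid displaces between frames, infer a moving
    vehicle; otherwise all we know is 'some large stationary object' --
    classical processing cannot tell a railing from a stalled car."""
    if not is_large_object(curr_frame):
        return "no object"
    px, py = centroid(prev_frame)
    cx, cy = centroid(curr_frame)
    displacement = ((cx - px) ** 2 + (cy - py) ** 2) ** 0.5
    if displacement > move_thresh:
        return "moving vehicle"
    return "stationary object (ambiguous: railing? stalled car? overpass?)"
```

The ambiguity returned in the stationary branch is exactly the gap the DNN approach targets: the network learns to separate those stationary-obstacle classes from patterns in the radar data that the hand-written thresholds above cannot capture.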