Most driverless vehicles rely on a clutch of sensors — radar, cameras and especially a 3D, 360-degree viewing technology called lidar, which is that big spinning thing you see atop test vehicles.

Why it matters: Beyond making the sensors better and cheaper, a big problem has been unifying all their feeds into a single stream of information and acting reliably on what they indicate. Fusing them in a sensible way could compensate for the flaws of each individual sensor.

Ronny Cohen, CEO of Israel-based VayaVision, is combining the feeds from all the sensors and determining, based on machine learning, what obstacles surround the vehicle or lie ahead...

  • "We combine all of them into a unified image. We want to know where each pixel is in its space and its velocity," Cohen tells Axios.
