When it comes to robocars, new LIDAR products were the story of CES 2018. Far more companies showed off LIDAR products than can succeed, with a surprising variety of approaches. CES is now the 5th largest car show, with almost the entire north hall devoted to cars. In coming articles I will look at other sensors, software teams and non-car aspects of CES, but let's begin with the LIDARs.
Tetravue showed off their radically different time of flight technology. So far there have been only a few methods to measure how long it takes for a pulse of light to go out and come back, thus learning the distance.
The classic method is basic sub-nanosecond timing. To get 1cm accuracy, you need to measure the time at roughly 67 picosecond accuracy (1cm of range is 2cm of round trip). Circuits are getting that good. This can be done either with scanning pulses, where you send out a pulse and then look in precisely that direction for the return, or with "flash" LIDAR, where you send out a wide, illuminating pulse and then have an array of detector/timers which measure how long the light at each pixel took to get back. This method works at almost any distance.
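The round-trip arithmetic above can be sketched in a few lines. This is a minimal illustration of the math, not any vendor's implementation; the function name and numbers are mine:

```python
# Speed of light in m/s
C = 299_792_458.0

def tof_distance(round_trip_s: float) -> float:
    """Distance from a round-trip pulse time (out and back, so divide by 2)."""
    return round_trip_s * C / 2.0

# Timing accuracy needed for 1 cm range resolution:
# 1 cm of range is 2 cm of round-trip path, about 67 ps of light travel.
resolution_m = 0.01
jitter_s = 2 * resolution_m / C
print(f"{jitter_s * 1e12:.0f} ps")  # prints 67 ps
```

A 1 microsecond round trip, for example, corresponds to roughly 150m of range, which is why this method scales to highway distances.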
The second method is to use phase. You send out a continuous beam but you modulate it. When the return comes back, it will be out of phase with the outgoing signal. How much out of phase depends on how long it took to come back, so if you can measure the phase, you can measure the time and distance. This method is much cheaper, but because the phase wraps around every modulation period, ranges become ambiguous beyond a short distance, and it tends to only be useful out to about 10m.
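The phase-to-distance conversion, and the wrap-around limit it implies, can be sketched as follows. The 15 MHz modulation frequency is an assumption chosen to reproduce the ~10m figure, not a number from any particular product:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def phase_distance(phase_rad: float, mod_freq_hz: float) -> float:
    """Distance from the measured phase shift of a modulated beam.

    The phase wraps every full modulation period, so ranges are only
    unambiguous out to C / (2 * f).
    """
    wavelength = C / mod_freq_hz  # one modulation wavelength in meters
    return (phase_rad / (2 * math.pi)) * wavelength / 2.0

# Unambiguous range at an assumed 15 MHz modulation frequency:
print(C / (2 * 15e6))  # ≈ 10 m
```

A half-cycle phase shift (pi radians) at 15 MHz comes out to about 5m; anything past one full cycle aliases back to a nearer reading, which is the core limitation of this approach.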
Tetravue offers a new method. They send out a flash, and put a decaying (or opening) shutter in front of an ordinary camera-style sensor. Depending on when the light arrives, it is attenuated a certain amount by the shutter. The amount it is attenuated tells you when it arrived.
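The shutter-attenuation idea can be sketched as a toy calculation. Tetravue's actual shutter profile is not public; this assumes, purely for illustration, a shutter whose transmission ramps linearly from fully open to fully closed, and an ungated reference pixel used to cancel out target reflectivity:

```python
def arrival_time_from_shutter(gated: float, ungated: float,
                              shutter_close_s: float) -> float:
    """Estimate photon arrival time from a linearly-closing shutter.

    Assumption: transmission falls linearly from 1.0 to 0.0 over
    shutter_close_s, so light arriving at time t is attenuated to
    (1 - t / shutter_close_s) of full brightness. Dividing the gated
    reading by an ungated reference cancels the target's reflectivity.
    """
    transmission = gated / ungated
    return (1.0 - transmission) * shutter_close_s

# A pixel attenuated to 25% of its reference brightness, with a
# 100 ns shutter ramp, arrived 75 ns after the flash:
t = arrival_time_from_shutter(25.0, 100.0, 100e-9)
print(t)  # prints 7.5e-08
```

Converting that 75 ns back to range with the usual round-trip division gives about 11m, which shows why sensor dynamic range, not timing circuitry, becomes the limiting factor for this design.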
I am interested in this because I played with such designs myself back in 2011, but did not feel you could get enough range, instead proposing the technique for a new type of flash camera with even illumination. Indeed, Tetravue only claims a maximum range of 80m, which is limiting -- it's not enough for highway driving or even local expressway driving, but could be useful for lower speed urban vehicles.
The big advantages of this method are cost -- it uses mostly commodity hardware -- and resolution. Their demo was a 1280x720 camera, and they said they were making a 4K camera. That's actually too much resolution for most neural networks, but digital crops from within it could be very good, and could make for some of the best object recognition results available, even on more distant targets. This might be a great tool for recognizing things like pedestrian body language and more.
At present the Tetravue uses light in the 800nm band. Silicon detectors receive that wavelength more efficiently, but there is also more ambient sunlight in this band to interfere.