TetraVue is the leading innovator of 4D LIDAR video cameras that digitally capture the richness and depth of our fast-paced world in high definition. TetraVue's groundbreaking technology combines the resolution of high-definition video with the range of LIDAR to accurately capture motion and depth over wide fields of view and long distances. The company is headquartered in Vista, California, with a Northern California development site in San Jose.
Unlike traditional LiDAR systems, which use an array of lasers to gather only a few object points, TetraVue's 4D High Definition Solid State Flash LiDAR can both determine the distance of objects and identify them.
It can see the difference between a car, a tree, a child, a dog, a paper bag and a mouse!
4D : Depth + Motion
Object Detection at 200+ meters
Solid State Flash
Low Energy Consumption
TetraVue's LiDAR captures 60 million bytes of data per second.
That's 100x faster than traditional LiDAR!
“TetraVue’s LiDAR gives cars perfect vision and allows them to make better decisions. Imagine a car being able to instantly determine if a black spot is a rock or a plastic bag.”
— Connie Sheng, founding managing director at Nautilus Venture Partners
“We strongly believe that LiDAR is the key enabler for the next generation of autonomous vehicles. AGC’s zero infrared absorption automotive glass now offers to Tetravue new possibilities for its 4D LiDAR video camera integration.”
— Michel Meyers, Mobility Business Development Office Director, AGC Automotive Europe
Robert Bosch Venture Capital GmbH (RBVC), the corporate venture capital company of the Bosch Group, has completed an investment in DeepMap Inc., a US start-up based in Palo Alto, California. DeepMap is a software company focused on solving the mapping and localization challenge for autonomous vehicles. “Maps explicitly designed to be read by machines are a critical enabling technology for safe autonomy. DeepMap fills a vacuum in the market. The company’s approach to mapping, which leverages embedded software on the vehicle, is very compelling and relevant for highly automated as well as autonomous driving, within Bosch and the whole automotive industry,” says RBVC Managing Director Dr. Ingo Ramesohl. “DeepMap builds upon the growing RBVC portfolio of technologies addressing the autonomous vehicle vertical, next to start-ups like AImotive and Tetravue.”
This design adds a new dimension: time. Twenty times per second, it flashes the road ahead with an infrared light pulse, capturing data at 60 megabits per second. The reflected 2.0-megapixel image is recorded on an optical camera chip in raw form and in an infrared-modulated form. The two images are compared, and the degree to which any pixel in the modulated image is dimmer than its unmodulated counterpart indicates that photon’s time of flight, from which the distance to that object is calculated. Many competing flash systems integrate computational circuitry on the receiving sensor board, which diminishes resolution, but here every pixel counts. Range is said to be 100-plus meters, the optimal viewing angle is 54 degrees, visible light levels don’t affect it, and rain, fog, and dust merely curtail a bit of the system’s ultimate range. The resulting image is detailed enough that object detection could be integrated into the device itself, simplifying the sensor-fusion task of the host vehicle. Cost in high-volume production is expected to be around $200. Several OEMs have expressed interest.
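The per-pixel arithmetic described above can be sketched as follows. TetraVue's actual modulation function is proprietary, so this sketch assumes a hypothetical linearly decaying shutter (a photon arriving at the start of the gate passes fully; one arriving at the end is fully blocked); the function name, gate length, and ramp shape are all illustrative assumptions, not the company's implementation.

```python
import numpy as np

C = 3.0e8  # speed of light, m/s


def depth_from_ratio(raw, modulated, gate_open_s=0.0, gate_len_s=667e-9):
    """Recover a per-pixel depth map from a raw and a modulated exposure.

    Assumes a linear attenuation ramp over the gate window: the dimmer a
    pixel is in the modulated image relative to the raw one, the later its
    light arrived, and hence the farther away the reflecting surface is.
    """
    raw = np.asarray(raw, dtype=float)
    modulated = np.asarray(modulated, dtype=float)
    # Attenuation ratio in [0, 1]; guard against divide-by-zero pixels.
    ratio = np.divide(modulated, raw, out=np.zeros_like(raw), where=raw > 0)
    ratio = np.clip(ratio, 0.0, 1.0)
    # Linear ramp: ratio 1.0 -> arrival as the gate opens, 0.0 -> as it closes.
    arrival_s = gate_open_s + (1.0 - ratio) * gate_len_s
    # Light makes a round trip, so halve for one-way distance.
    return C * arrival_s / 2.0


# A pixel dimmed to half brightness arrived mid-gate: with a 667 ns gate,
# that corresponds to roughly 50 m of one-way distance.
depth = depth_from_ratio([1.0], [0.5])
```

Because every pixel of the 2.0-megapixel sensor carries its own ratio, the result is a dense depth image rather than the sparse point cloud of a scanning LiDAR.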
Tetravue showed off its radically different time-of-flight technology. So far there have been only a few methods of measuring how long it takes a pulse of light to go out and come back, and thus the distance... ...Tetravue offers a new method. It sends out a flash and puts a decaying (or opening) shutter in front of an ordinary camera-style sensor. Depending on when the light arrives, it is attenuated by a certain amount by the shutter, and that amount tells you when it arrived... ...The big advantages of this method are cost -- it uses mostly commodity hardware -- and resolution. The demo was a 1280x720 camera, and they said they were making a 4K camera. That's actually more resolution than most neural networks can use, but digital crops from within it could be very good, and make for the best object-recognition results to be found, even on more distant targets. This might be a great tool for recognizing things like pedestrian body language and more.
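The "digital crop" idea above can be illustrated with a short sketch: a 4K sensor has far more pixels than a typical recognition network ingests, so instead of downsampling the whole frame you can cut a full-resolution patch around a distant target, keeping every native pixel on the object. The function name, the 224-pixel patch size (a common CNN input), and the dummy frame are all illustrative assumptions, not Tetravue's pipeline.

```python
import numpy as np


def crop_for_recognition(frame, center_xy, size=224):
    """Cut a fixed-size patch around a target from a high-resolution frame.

    The window is clamped so it always lies fully inside the frame, which
    keeps the output shape constant even for targets near the image edge.
    """
    h, w = frame.shape[:2]
    cx, cy = center_xy
    half = size // 2
    # Clamp the top-left corner so the whole patch stays in bounds.
    x0 = min(max(cx - half, 0), w - size)
    y0 = min(max(cy - half, 0), h - size)
    return frame[y0:y0 + size, x0:x0 + size]


frame = np.zeros((2160, 3840), dtype=np.uint8)  # dummy 4K-sized frame
patch = crop_for_recognition(frame, (3600, 100))  # target near the top-right
```

Each such patch hands a recognition network a distant pedestrian or obstacle at full sensor resolution, which is what makes high pixel count useful even when the network itself is small.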