LAS VEGAS – TetraVue, a specialist in high-definition 4-D Lidar technology, announces partnerships with NVIDIA, CVedia and AGC/Wideye focused on the development of advanced autonomous vehicle systems.
In today’s market, Lidar systems are widely considered to be one of the crucial technologies needed to make autonomous driving a practical reality. However, TetraVue opted for a very different approach when it developed a way to merge aspects of Lidar systems and digital video.
TetraVue’s solid-state 4-D Lidar cameras can capture multi-megapixel images at up to 30 frames per second. Delivering over 100 times more spatial and motion data about the surrounding environment than existing low-resolution scanning Lidar systems, the cameras offer a richer Lidar dataset than any competitor in the industry, enabling self-driving cars to make faster decisions with greater confidence, says Paul Banks, TetraVue’s founder, president and chief technical officer.
Conventional Lidar systems measure the time it takes for an emitted pulse of light to return to a receptor. Because that round trip happens at the speed of light, the electronics needed to time it precisely have become notably complicated and expensive, Banks says.
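The timing problem Banks describes comes down to simple physics. A back-of-the-envelope sketch (an illustration of the general time-of-flight principle, not any vendor's implementation) shows why the electronics must resolve nanoseconds:

```python
# Back-of-the-envelope time-of-flight math: a Lidar pulse returning from
# an object 100 m away makes a 200 m round trip at the speed of light,
# so the sensor must resolve timing on the order of nanoseconds.
C = 299_792_458.0  # speed of light, m/s


def round_trip_time(distance_m: float) -> float:
    """Seconds for a pulse to reach a target and return."""
    return 2.0 * distance_m / C


t = round_trip_time(100.0)
print(f"{t * 1e9:.1f} ns")  # prints 667.1 ns
```

Resolving distance to within a few centimeters at that range means timing the return to within a fraction of a nanosecond, which is what drives the cost Banks points to.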
But instead of measuring the time-of-flight of a light pulse electronically, as most other Lidar systems do, TetraVue developed a process that encodes the time-of-flight data into an optical signal, Banks says.
With the timing information encoded optically, TetraVue’s system can detect the data with a typical imaging chip, “the same kind of imaging chip that’s in your cell phone camera,” Banks says.
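One common way to encode arrival time optically is a gate whose transmission ramps during the return window, so that a pixel's measured intensity ratio stands in for the pulse's arrival time. The sketch below illustrates that general principle only; the gate function, window, and recovery math are hypothetical and not TetraVue's proprietary design:

```python
# Conceptual sketch of optical time-of-flight encoding (general principle,
# not TetraVue's design): a gate whose transmission ramps linearly over the
# return window attenuates light by arrival time, so the ratio of gated to
# ungated intensity at a pixel encodes when the pulse came back.
C = 299_792_458.0  # speed of light, m/s


def gate_transmission(t: float, t_start: float, t_end: float) -> float:
    """Linear ramp from 0 to 1 across the gate window."""
    if t <= t_start:
        return 0.0
    if t >= t_end:
        return 1.0
    return (t - t_start) / (t_end - t_start)


def encoded_distance(i_gated: float, i_ref: float,
                     t_start: float, t_end: float) -> float:
    """Recover distance from the gated/reference intensity ratio."""
    ratio = i_gated / i_ref  # equals gate_transmission at arrival time
    t_arrival = t_start + ratio * (t_end - t_start)
    return C * t_arrival / 2.0


# A target at 75 m inside a 1-microsecond gate window (~150 m of range):
t0, t1 = 0.0, 1e-6
t_arr = 2 * 75.0 / C
ratio = gate_transmission(t_arr, t0, t1)
print(encoded_distance(ratio, 1.0, t0, t1))  # ≈ 75.0
```

Because every pixel of the imager records its own intensity ratio simultaneously, a single flash yields a full frame of distances, which is the property the next paragraphs quantify.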
He says “billions of dollars” already are being spent by other high-tech developers in their efforts to further enhance the resolution and quality of standard optical chips.
Bringing its optical sensor imaging to consumers via standard imaging chips has been the primary goal of TetraVue’s team from the company’s beginning, he says. “We’ve been able to successfully do that,” with funding secured, at least initially, from programs such as the National Science Foundation and NASA – the latter of which detects rocks on Mars using criteria similar to those the auto industry uses to detect rocks or other obstacles on a road.
Current Velodyne Lidar units don’t gather enough information, Banks says. “It has 64 beams. It measures 64 points. It makes a movement – it measures 64 points again,” he says. “And so it adds up, over time, all of these points to get a more interesting view of the scene.”
By contrast, Banks says the TetraVue system can collect 60 million data points per second, versus a few hundred thousand with current scanning systems.
“We’re using a megapixel or two-megapixel sensor, so we get two-million distance measurements in one moment in time, in a flash,” he says. “At video rates, you get another two million data points in 30 frames-per-second. The amount of detail you can look at is dramatically different.”
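The figures Banks quotes work out directly: two million distance measurements per flash, captured at video rate, yield the 60 million points per second he cites. A simple sketch of that arithmetic:

```python
# Data-rate arithmetic behind the figures quoted above.
flash_points_per_frame = 2_000_000  # 2-megapixel sensor, one range per pixel
frames_per_second = 30              # video rate

flash_points_per_second = flash_points_per_frame * frames_per_second
print(flash_points_per_second)      # prints 60000000 — the "60 million" figure

# A 64-beam scanning unit, by contrast, measures 64 ranges per firing and
# accumulates points over time as it sweeps, reaching only a few hundred
# thousand per second.
```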
Here at CES, TetraVue is staging live demonstrations of its 4-D Lidar. Attendees will witness the processing power of the NVIDIA Drive AI platform, which combines deep learning, sensor fusion and surround vision to accurately create a full 360-degree environment and produce a robust representation of static and dynamic objects surrounding the vehicle.
CVedia also will display its SynCity driving simulator for autonomous applications as seen by the TetraVue camera. AGC/Wideye, with its new infrared-transparent glass, will show the future of Lidar sensor integration, replacing costly and impractical external installation with mounting behind the windshield.
“It’s great to see that more and more people and organizations across all industries are recognizing the immense value of truly high-definition 4-D – megapixels, not kilopixels – and we look forward to working closely with our partners to integrate this value with their products and services,” says Banks.