Module Deep Dive #2: Lidar Fusion - your key to useful Lidar data

Skip the noise. Our Lidar Fusion module gives you clean, ready-to-use point clouds.

Stefan Seltz-Axmacher
November 7, 2023
At Polymath Robotics, we make it radically simple to automate off-highway vehicles. We’re releasing a suite of 40 autonomy modules that make it easier than ever to build your autonomous navigation stack. Each module can be used individually or together as a full autonomy solution. This post is a deep dive into our Lidar Fusion module.

Lidars are invaluable tools for autonomous vehicles, but their raw data output is noisy and hard to use. Today, we’re launching our Lidar Fusion module to all customers, offering you a better way to wrangle your Lidar data into something useful. 

Lidar Fusion is a foundational component of our stack, and now you can use it on its own or together with our full autonomous navigation solution.

Introducing Polymath’s Lidar Fusion module

Lidar lets robots perceive their environment by creating point clouds of their surroundings. Yet the raw data is noisy and needs extensive processing to be useful.

The challenge of raw Lidar data

Generating point clouds from Lidar scans may sound straightforward, but the raw data a Lidar produces is basically useless on its own. Think of it like reading the numbers off a credit card: a necessary step, but far from a completed purchase. Turning this raw data into a format that's ready for robotics applications is where the real effort begins.

With Lidar, all you initially get is a collection of points in space. These points lack context, making it hard to discern the shape or identity of objects in the environment. When you're up close to an object, you may have enough points to understand what it is, but as objects move further away, the points grow sparser and the data becomes harder to read.
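To see why distant objects get harder to read, consider a simple back-of-the-envelope calculation. The sketch below assumes a generic spinning Lidar with a hypothetical 0.2° horizontal angular resolution (the function name and numbers are illustrative, not from any specific sensor):

```python
import math

def point_spacing(distance_m, angular_res_deg):
    """Approximate spacing between adjacent Lidar returns at a given range.

    Assumes a generic spinning Lidar with uniform horizontal angular
    resolution; real sensors vary widely.
    """
    return 2 * distance_m * math.tan(math.radians(angular_res_deg) / 2)

# With an assumed 0.2 degree horizontal resolution:
for d in (2, 20, 100):
    print(f"{d:>3} m -> ~{point_spacing(d, 0.2) * 100:.1f} cm between points")
```

At 2 m the returns land under a centimeter apart; at 100 m the same beam spacing leaves tens of centimeters between points, so a person-sized object may be hit by only a handful of returns.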

The journey of Lidar fusion & filtering

Refining raw Lidar data involves multiple complicated steps:

Running the Lidar: The Lidar itself must be in working order, with proper electrical connections and data transmission. This is no small feat, as Lidars are data-intensive, and a single Lidar unit can flood a Gigabit network with information.

Data Interpretation: Once you have the data, you need to decipher it. Are the points indicative of noise, the ground, an obstacle, or empty space? 

Filtering: Lidar data is prone to noise, reflections, and the influence of external factors like dust and rain. Effective filtering is critical for accurate data.

Ground Segmentation: Identifying the ground is crucial, and various algorithms are used for this purpose. However, each approach comes with its own set of advantages and disadvantages. For example, one common method (RANSAC) involves repeatedly fitting a plane to random sets of points, but it has its limitations, especially on sloped terrain or close to objects.

Object Detection: Recognizing objects, such as people, is a challenge. You need to determine if the points represent a single, moving object or just random noise.

Tracking: Even if you detect an object, how do you ensure it's consistently tracked as it moves? Has it changed position, or is it just data noise?

Data Clustering: Points must be grouped into coherent clusters, and all of these steps have to be repeated for every point in every scan, making the process resource-intensive.
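To make the ground-segmentation step above concrete, here is a minimal sketch of RANSAC-style plane fitting in NumPy, run on a synthetic scan. Everything here (function name, thresholds, the synthetic cloud) is illustrative, not Polymath's actual implementation:

```python
import numpy as np

def ransac_ground_plane(points, n_iters=100, dist_thresh=0.05, rng=None):
    """Fit a ground plane to an (N, 3) point cloud by repeatedly sampling
    three points, fitting a plane through them, and keeping the fit with
    the most inliers. Returns a boolean mask: True = ground point."""
    rng = rng or np.random.default_rng(0)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        normal /= norm
        dists = np.abs((points - p0) @ normal)  # point-to-plane distances
        inliers = dists < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

# Synthetic scan: a slightly noisy flat ground plus a box-shaped obstacle
rng = np.random.default_rng(1)
ground = np.column_stack([rng.uniform(-10, 10, (500, 2)),
                          rng.normal(0, 0.02, 500)])
obstacle = rng.uniform([2, 2, 0.5], [3, 3, 1.5], (100, 3))
cloud = np.vstack([ground, obstacle])
mask = ransac_ground_plane(cloud)
# Most ground points are marked ground; obstacle points are not
print(mask[:500].mean(), mask[500:].mean())
```

Even this toy version shows the limitation mentioned above: a single global plane only works when the world is roughly flat, which is rarely true off-highway.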

Only after tackling all these details can you create a "costmap," a simpler 2D map for navigation and planning.
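The final costmap step can also be sketched in a few lines: project the surviving obstacle points down into a 2D occupancy grid. This is a deliberately simplified illustration (the function name, grid size, and resolution are assumptions, and real costmaps typically carry inflation and cost gradients rather than binary occupancy):

```python
import numpy as np

def points_to_costmap(obstacle_points, resolution=0.1, size_m=20.0):
    """Project (N, 3) obstacle points into a 2D occupancy grid ("costmap").

    Simplifying assumptions: the vehicle sits at the grid centre, each
    cell is `resolution` metres on a side, and any cell containing at
    least one obstacle point is marked occupied (1)."""
    n_cells = int(size_m / resolution)
    grid = np.zeros((n_cells, n_cells), dtype=np.uint8)
    # Shift coordinates so the vehicle (origin) lands mid-grid
    ij = ((obstacle_points[:, :2] + size_m / 2) / resolution).astype(int)
    valid = ((ij >= 0) & (ij < n_cells)).all(axis=1)  # drop off-grid points
    grid[ij[valid, 1], ij[valid, 0]] = 1              # row = y, col = x
    return grid

pts = np.array([[2.0, 3.0, 0.8],
                [-4.0, 1.0, 1.2],
                [50.0, 0.0, 1.0]])   # last point falls outside the grid
grid = points_to_costmap(pts)
print(grid.sum())  # 2 cells occupied
```

Planners then search this 2D grid instead of the raw 3D cloud, which is what makes downstream navigation tractable.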

In short, plugging in a Lidar and expecting clean and ready-to-use data is like thinking that taking a picture is equivalent to mastering machine learning.

That’s why we built Lidar Fusion – the Lidar data whisperer for your autonomous vehicle.

What makes Lidar Fusion so powerful?

Save valuable time: Lidar Fusion does away with the need for extensive data wrangling. It automatically converts raw Lidar data into clean and usable point clouds.

Improve data quality: By reducing the noise and enhancing the quality of Lidar data, Lidar Fusion ensures more accurate perception of the environment.

Simplify calibration: When using multiple Lidar units, Lidar Fusion handles the calibration process to ensure consistent and accurate data from different vantage points.

Enhance compatibility: Lidar Fusion works seamlessly with a range of Lidar technologies, including 4D Lidar and 3D Lidar with RGB data, making it adaptable to various scenarios and environments.

Easy to incorporate: Like all our modules, Lidar Fusion can be used on its own or as part of our full autonomous navigation system.

This image shows real Lidar data from a Polymath customer vehicle, and how much more useful it becomes after applying our Lidar Fusion module.

Stop fighting with your Lidar data

Lidar Fusion simplifies, accelerates, and streamlines the Lidar data process. It's your key to clean, ready-to-use point clouds and more efficient robotics applications. With Lidar Fusion, you can spend less time wrangling sensor noise and more time on the tasks that truly matter, all while ensuring a higher level of accuracy in your robotic systems.

Reach out to our sales team to explore the potential of Lidar Fusion, simplify your complex data, and make off-highway autonomy more accessible than ever before.

ABOUT THE AUTHOR
Stefan Seltz-Axmacher
Stefan is CEO & Co-Founder of Polymath. He formerly co-founded Starsky Robotics, the first company to drive an unmanned truck on a public highway. He's been featured in CBS Good Morning, 60 Minutes, WSJ, and Forbes 30 Under 30.
