
feat(hesai): add filtered pointcloud counter function #247

Open · wants to merge 14 commits into main

Conversation

ike-kazu
Contributor

PR Type

  • Improvement

Description

This PR makes it possible to watch the filtered point cloud count for each filter, such as distance, FoV, and so on. It is useful for finding filtering errors while investigating the causes of LiDAR point cloud errors.

Review Procedure

Remarks

Pre-Review Checklist for the PR Author

PR Author should check the checkboxes below when creating the PR.

  • Assign PR to reviewer

Checklist for the PR Reviewer

Reviewers should check the checkboxes below before approval.

  • Commits are properly organized and messages follow the guideline
  • (Optional) Unit tests have been written for new behavior
  • PR title describes the changes

Post-Review Checklist for the PR Author

PR Author should check the checkboxes below before merging.

  • All open points are addressed and tracked via issues or tickets

CI Checks

  • Build and test for PR: Required to pass before the merge.


codecov bot commented Dec 20, 2024

Codecov Report

Attention: Patch coverage is 98.03922% with 1 line in your changes missing coverage. Please review.

Project coverage is 26.06%. Comparing base (97959dd) to head (92db91d).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
...s/nebula_decoders_hesai/decoders/hesai_decoder.hpp 98.03% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #247      +/-   ##
==========================================
- Coverage   26.07%   26.06%   -0.02%     
==========================================
  Files         101      104       +3     
  Lines        9232     9420     +188     
  Branches     2213     2248      +35     
==========================================
+ Hits         2407     2455      +48     
- Misses       6436     6578     +142     
+ Partials      389      387       -2     
Flag Coverage Δ
differential 26.06% <98.03%> (?)
total ?


Collaborator

@mojomex mojomex left a comment


Thanks for the PR! Here is the review so far. Performance looks good but there are some more counters and naming changes I'd like to request 🙇

NebulaPointCloud point_timestamp_start;
NebulaPointCloud point_timestamp_end;

void clear()
Collaborator

Please make sure that all fields are reset (e.g. timestamp_counter is missing)
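The reset the reviewer asks for can be sketched as follows. This is a hypothetical illustration, not the PR's actual struct: the field names follow the reviewer's later naming suggestions, and `timestamp_counter` stands in for the field noted as missing from `clear()`.

```cpp
#include <cstdint>
#include <limits>

// Hypothetical diagnostics struct; names are illustrative, not the PR's code.
struct MinMaxInfo
{
  float cloud_distance_min_m = std::numeric_limits<float>::infinity();
  float cloud_distance_max_m = -std::numeric_limits<float>::infinity();
  std::uint64_t packet_timestamp_min_ns = std::numeric_limits<std::uint64_t>::max();
  std::uint64_t packet_timestamp_max_ns = 0;
  std::uint64_t timestamp_counter = 0;  // the kind of field that must not be forgotten

  // Reset *every* field so no stale value leaks into the next scan
  void clear()
  {
    cloud_distance_min_m = std::numeric_limits<float>::infinity();
    cloud_distance_max_m = -std::numeric_limits<float>::infinity();
    packet_timestamp_min_ns = std::numeric_limits<std::uint64_t>::max();
    packet_timestamp_max_ns = 0;
    timestamp_counter = 0;
  }
};
```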

Comment on lines 45 to 54
float distance_start = 0;
float distance_end = 0;
float raw_azimuth_start = 0;
float raw_azimuth_end = 0;
std::uint32_t packet_timestamp_start = 0;
std::uint32_t packet_timestamp_end = 0;
NebulaPointCloud point_azimuth_start;
NebulaPointCloud point_azimuth_end;
NebulaPointCloud point_timestamp_start;
NebulaPointCloud point_timestamp_end;
Collaborator

  • Please rename from start/end to min/max.
  • Please also add unit suffixes like _ns for nanoseconds, _rad for radians, _m for meters etc.
  • packet_timestamp_min/max should probably have type uint64_t (uint32_t cannot represent absolute timestamps in nanoseconds)

Instead of point_, please rename to cloud_ so that it is clear that those values are among the points that were not filtered.
I would suggest replacing raw_ with packet_ as well, so we have packet_ (before filtering) vs. cloud_ (after filtering).
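Putting the reviewer's three naming requests together, the renamed fields might look like the sketch below (the struct name and exact field set are illustrative assumptions):

```cpp
#include <cstdint>

// Sketch of the requested renaming: start/end -> min/max, unit suffixes,
// raw_ -> packet_ (before filtering) vs cloud_ (after filtering), and
// 64-bit packet timestamps so absolute nanosecond stamps fit.
struct ScanBounds
{
  float packet_distance_min_m = 0;   // was: float distance_start
  float packet_distance_max_m = 0;   // was: float distance_end
  float packet_azimuth_min_rad = 0;  // was: float raw_azimuth_start
  float packet_azimuth_max_rad = 0;  // was: float raw_azimuth_end
  std::uint64_t packet_timestamp_min_ns = 0;  // was: uint32_t packet_timestamp_start
  std::uint64_t packet_timestamp_max_ns = 0;  // was: uint32_t packet_timestamp_end
};
```

A `uint32_t` nanosecond counter overflows after about 4.3 seconds, which is why the reviewer asks for `uint64_t` for absolute timestamps.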

Collaborator

@mojomex mojomex left a comment


@ike-kazu Thanks for the changes, I have a few more small requests to polish everything!

After implementing the changes, could you also provide a self-evaluation (running Nebula with different parameters for min_range/max_range, cloud_min_angle, cloud_max_angle, and dual_return_distance_threshold, and showing the output JSON diagnostics)?

Thank you!

float cloud_distance_max_m = 0;
float cloud_azimuth_min_rad = 0;
float cloud_azimuth_max_rad = 0;
uint64_t packet_timestamp_min_ns = 0;
Collaborator

For easier readability, please make these deg instead of rad and convert accordingly in the get_minmax_info function below.
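The conversion the reviewer asks for is a one-liner; a minimal sketch, assuming a helper like this (the name `rad2deg` is illustrative, not from the PR):

```cpp
#include <cmath>

// Convert stored radian bounds to degrees before reporting them.
// M_PI is provided by <cmath> on POSIX toolchains (MSVC needs _USE_MATH_DEFINES).
inline float rad2deg(float rad)
{
  return rad * 180.0f / static_cast<float>(M_PI);
}
```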

@ike-kazu
Contributor Author

🟢 Evaluation

I tested this by changing each parameter in turn.

Normal

Screenshot from 2025-01-10 14-36-03

FoV (0, 270)

Screenshot from 2025-01-10 14-57-41

Distance (0, 5)

Screenshot from 2025-01-10 15-00-20

It works well. I confirmed that the filtered point cloud count increases in each case compared to the normal run.

Collaborator

@mojomex mojomex left a comment


Thanks for your updates. There were still some unaddressed issues from previous comments, and some small style changes I'd like to see after your self-evaluation.

Please double-check whether the outputs in your evaluation match what you expect (e.g. name: "invalid" as a top-level entry should not be there).

Thanks 🙇

uint64_t total_kept_point_count = 0;
uint64_t invalid_packet_count = 0;
float cloud_distance_min_m = std::numeric_limits<float>::infinity();
float cloud_distance_max_m = std::numeric_limits<float>::lowest();
Collaborator

While also okay, lowest() represents the lowest finite number, so for consistency, and to signal that the initial value is not a valid one, let's change this to -infinity() instead:

Suggested change
float cloud_distance_max_m = std::numeric_limits<float>::lowest();
float cloud_distance_max_m = -std::numeric_limits<float>::infinity();
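The reason -infinity() works as an initial value: it is the identity element for `std::max` over floats, so the very first real sample always replaces it, and an untouched -inf unambiguously signals "no valid sample seen yet" (whereas `lowest()` is a finite, representable value). A toy illustration, not the actual Nebula code:

```cpp
#include <algorithm>
#include <initializer_list>
#include <limits>

// Running-max tracker seeded with -infinity: the first comparison
// always keeps the real sample, so no special-casing is needed.
inline float running_max(std::initializer_list<float> samples)
{
  float max_m = -std::numeric_limits<float>::infinity();
  for (float v : samples) {
    max_m = std::max(max_m, v);
  }
  return max_m;  // still -inf if no samples were seen
}
```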

float cloud_distance_min_m = std::numeric_limits<float>::infinity();
float cloud_distance_max_m = std::numeric_limits<float>::lowest();
float cloud_azimuth_min_deg = std::numeric_limits<float>::infinity();
float cloud_azimuth_max_rad = std::numeric_limits<float>::lowest();
Collaborator

Suggested change
float cloud_azimuth_max_rad = std::numeric_limits<float>::lowest();
float cloud_azimuth_max_rad = -std::numeric_limits<float>::infinity();

float cloud_azimuth_min_deg = std::numeric_limits<float>::infinity();
float cloud_azimuth_max_rad = std::numeric_limits<float>::lowest();
uint64_t packet_timestamp_min_ns = std::numeric_limits<uint64_t>::max();
uint64_t packet_timestamp_max_ns = std::numeric_limits<uint64_t>::min();

[[nodiscard]] nlohmann::ordered_json to_json() const
{
nlohmann::json distance_j;
Collaborator

Please use ordered_json throughout your code to preserve the field ordering when printing.

nlohmann::json invalid_j;
invalid_j["filter"] = "invalid";
invalid_j["name"] = "invalid";
invalid_j["invalid_point_count"] = invalid_point_count;
invalid_j["invalid_packet_count"] = invalid_packet_count;
Collaborator

Invalid points and packets are different concepts:

  • while invalid points are normal (the sensor sends them when there was no object hit, or when an object is too close),
  • invalid packets are an error (the size does not match our expectations)

So, please make the invalid points as part of the filter pipeline, and move invalid packets to the top level.
Also see this previous comment.

Comment on lines +97 to +99
j["azimuth_deg"] = pointcloud_bounds_azimuth_j,
j["distance_m"] = pointcloud_bounds_distance_j,
j["timestamp_ns"] = pointcloud_bounds_timestamp_j,
Collaborator

This does not work as-is (see the 0/1/2 instead of azimuth_deg/distance_m/timestamp_ns in your self-evaluation). Please check the nlohmann json documentation for how to specify JSON {"key": value} pairs.

Comment on lines +109 to +110
cloud_azimuth_min_deg = std::min(cloud_azimuth_min_deg, point.azimuth);
cloud_azimuth_max_rad = std::max(cloud_azimuth_max_rad, point.azimuth);
Collaborator

Please implement this comment.

Comment on lines +418 to +423
for (const auto & [key, value] : j.items()) {
std::cout << key << ": " << std::endl;
for (const auto & [k, v] : value.items()) {
std::cout << k << ": " << v << std::endl;
}
}
Collaborator

You could do:

Suggested change
for (const auto & [key, value] : j.items()) {
std::cout << key << ": " << std::endl;
for (const auto & [k, v] : value.items()) {
std::cout << k << ": " << v << std::endl;
}
}
j.dump(2);

to get a pretty-printed version of the whole JSON with indent of 2 per nesting level.
