Support SofastFixed point light #66
Conversation
braden6521 commented on Apr 2, 2024
- Updated the SofastFixed unit test data to use data captured with a point light near the center origin dot when using a printed, fixed pattern.
- Added more unit tests to the sofast/lib/image_processing module, which contains the blob detection functions.
- Added an example that creates a measurement object from an image, finds the origin blob, and saves it to an HDF5 file (a condensed sketch of this workflow is shown after this list).
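For orientation, the following is a hedged, minimal sketch of that example's flow. It uses only OpenCV calls; the file path and tuning values are placeholders, and the final MeasurementSofastFixed step is left as a comment because its exact constructor signature is not shown in this conversation.

```python
import cv2 as cv
import numpy as np

# Load a single-channel uint8 image of the printed fixed pattern (placeholder path).
image = cv.imread('fixed_pattern_with_point_light.png', cv.IMREAD_GRAYSCALE)

# Blob detection tuned for small printed dots (illustrative values, echoing the review below).
params = cv.SimpleBlobDetector_Params()
params.minDistBetweenBlobs = 2
params.filterByArea = True
params.minArea = 3
params.maxArea = 30

# The bright point light is found by detecting blobs in the inverted image,
# mirroring the "inverse" detection used in the example discussed below.
detector = cv.SimpleBlobDetector_create(params)
keypoints = detector.detect(255 - image)
origin = np.array(keypoints[0].pt) if keypoints else None

# The example in this PR then wraps the image and origin in a MeasurementSofastFixed
# object and saves it to HDF5; that API is part of OpenCSP and is not reproduced here.
```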
Force-pushed from 13177e7 to f4a7ed1
I left a few comments. Feel free to take them or leave them.
@@ -422,8 +424,8 @@ def rectangle_loop_from_two_points(p1: Vxy, p2: Vxy, d_ax: float, d_perp: float)
     # Calculate axial and perpendicular directions
     v_axial = (p2 - p1).normalize()

-    R = np.array([[0, 1], [-1, 0]])
-    v_perp = v_axial.rotate(R)
+    rot_mat_90 = np.array([[0, 1], [-1, 0]])
🥳
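As a side note on the renamed matrix: rot_mat_90 rotates the axial direction by 90 degrees to obtain the perpendicular direction. A quick numpy check (plain arrays here, whereas the code above uses Vxy.rotate):

```python
import numpy as np

rot_mat_90 = np.array([[0, 1], [-1, 0]])
v_axial = np.array([1.0, 0.0])

v_perp = rot_mat_90 @ v_axial
print(v_perp)                    # [ 0. -1.] -> rotated by 90 degrees
print(float(v_axial @ v_perp))   # 0.0 -> perpendicular to the axial direction
```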
Parameters
----------
image : np.ndarray
    Input image, uint8
Presumably this should be a 2d image with a single color channel? AKA:
2D input image, single color channel, NxM or NxMx1, uint8
Good idea! Changed!
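As an aside, a check enforcing the clarified docstring contract could look like the following; this helper is illustrative and is not claimed to be what the PR implements.

```python
import numpy as np

def _as_single_channel_uint8(image: np.ndarray) -> np.ndarray:
    """Accept an NxM or NxMx1 uint8 array and return an NxM view (illustrative helper)."""
    if image.dtype != np.uint8:
        raise TypeError(f'Expected a uint8 image, got {image.dtype}')
    if image.ndim == 3 and image.shape[2] == 1:
        image = image[:, :, 0]
    if image.ndim != 2:
        raise ValueError(f'Expected a single color channel, got shape {image.shape}')
    return image
```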
@@ -228,6 +235,50 @@ def test_calculate_active_pixel_pointing_vectors(self):
     # Test
     np.testing.assert_allclose(data['u_pixel_pointing_facet'], u_pixel_pointing_optic)

+    def test_detect_blobs(self):
unit tests! 🥳
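For context, here is a hedged sketch of a self-contained blob-detection test on a synthetic image; it calls OpenCV directly and is not the test added in this PR, which exercises the image_processing functions.

```python
import cv2 as cv
import numpy as np

def test_detect_single_dot():
    # Synthetic image: one dark filled dot on a white background
    image = np.full((100, 100), 255, dtype=np.uint8)
    cv.circle(image, (50, 50), 5, color=0, thickness=-1)

    # Default SimpleBlobDetector parameters look for dark blobs
    detector = cv.SimpleBlobDetector_create()
    keypoints = detector.detect(image)

    assert len(keypoints) == 1
    np.testing.assert_allclose(keypoints[0].pt, (50, 50), atol=1)
```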
lt.logger(join(dir_save, 'log.txt'), lt.log.INFO)

# 1. Load image
# =============
I hadn't considered also including this style of visual separator. Makes this really easy to read!
# =============
file_meas = join(opencsp_code_dir(), 'test/data/sofast_fixed/data_measurement/measurement_facet.h5')
measurement_old = MeasurementSofastFixed.load_from_hdf(file_meas)
image = measurement_old.image
I know it's silly, but it might make this more useful as an example if there were some commented out code for loading the image using opencv or pillow, as if you have an image jpg file instead of a measurement h5 file.
# image = cv2.imread(file_jpg, cv2.IMREAD_GRAYSCALE)
I like it! Added!
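Spelled out, the two loading options mentioned above might look like this; the path is a placeholder and only one of the two libraries is needed in practice.

```python
import cv2 as cv
import numpy as np
from PIL import Image

file_jpg = 'point_light_measurement.jpg'  # placeholder path

# Option 1: OpenCV loads directly as a single-channel uint8 array
image = cv.imread(file_jpg, cv.IMREAD_GRAYSCALE)

# Option 2: Pillow, converting to grayscale ('L' mode) and wrapping in numpy
image = np.array(Image.open(file_jpg).convert('L'))
```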
# ================================

# Find point light
params = cv.SimpleBlobDetector_Params()
Idea time! It would be nice if you could retrieve these parameters from image_processing with some values such as filterByCircularity set for you:

params = ip.dot_target_blob_params(minDistBetweenBlobs=2, minArea=3, maxArea=30)
import cv2 as cv  # assumed OpenCV import, matching the cv alias used in the example

def dot_target_blob_params(
    minDistBetweenBlobs=2,
    filterByArea=True,
    minArea=3,
    maxArea=30,
    filterByCircularity=True,
    minCircularity=0.8,
    filterByConvexity=False,
    filterByInertia=False,
):
    params = cv.SimpleBlobDetector_Params()
    params.minDistBetweenBlobs = minDistBetweenBlobs
    params.filterByArea = filterByArea
    params.minArea = minArea
    params.maxArea = maxArea
    params.filterByCircularity = filterByCircularity
    params.minCircularity = minCircularity
    params.filterByConvexity = filterByConvexity
    params.filterByInertia = filterByInertia
    return params
To be clear, not necessary for this PR, and not necessary generally. Just an idea.
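To show how the proposed helper would slot into the example, a short usage sketch; dot_target_blob_params is the hypothetical function suggested above (not an existing image_processing API), and image is assumed to be a single-channel uint8 array.

```python
import cv2 as cv

# Hypothetical helper from the suggestion above
params = dot_target_blob_params(minDistBetweenBlobs=2, minArea=3, maxArea=30)

# Hand the configured parameters to OpenCV's blob detector
detector = cv.SimpleBlobDetector_create(params)
keypoints = detector.detect(255 - image)  # invert so the bright point light reads as a dark blob
```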
This is an interesting idea! There are other inputs that SimpleBlobDetector_Params takes that I haven't typically used. Perhaps have a default_blob_detector_params() function or something? I'll have to think about how best to implement it, but I do really like it. Will probably wait until a future PR.
# =======================================
image_annotated = ip.detect_blobs_inverse_annotate(image, params)

figure_control = rcfg.RenderControlFigure(tile_array=(1, 1), tile_square=True)
Unified render control style! 👍
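For readers without the OpenCSP render-control setup, a minimal matplotlib-only sketch of viewing and saving the annotated image; it stands in for, but is not equivalent to, the RenderControlFigure flow above.

```python
import matplotlib.pyplot as plt

# image_annotated comes from ip.detect_blobs_inverse_annotate(image, params) above
plt.figure(figsize=(6, 6))
plt.imshow(image_annotated, cmap='gray')
plt.title('Detected origin point light')
plt.axis('off')
plt.savefig('blob_annotation.png', dpi=200)
plt.show()
```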
@bbean23, thanks for the review! I implemented some of your suggestions, ready for re-review!
Approved!