
82 spot analysis for peak intensity correction #87

Conversation

bbean23
Collaborator

@bbean23 bbean23 commented Apr 17, 2024

Note: this ticket got unwieldy. Splitting between here and #91

Add spot analysis code to analyze peak intensities of a heliostat. The necessary features include:

  • area of interest cropping [af]
  • over-exposure detection & warnings [ag]
  • tagging images as NULL (aka dark, aka not exposed to the heliostat) or active (aka exposed to the heliostat) [ah]
  • averaging multiple images [u]
  • image subtraction, to get flux relative to a NULL image [a]
  • filters (gaussian and box) [ai]
  • centroid/peak value computation [e, aj]
  • logarithmic intensity scaling [ak] (a rough sketch of a few of these steps is included below)
  • false color visualization [al]
  • over-exposed pixels visualization [am]

Additional features that might be helpful include:

  • directory watcher image source, to introduce new images to the pipeline as they are added to the source directory
  • image translation correction based on tower fiducials
  • pixel intensity to actual flux correction [s]
  • image scale tagging (how many meters is a pixel?)
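
As a rough illustration of how a few of the listed steps (gaussian filtering, centroid/peak computation, and logarithmic scaling) might compose on a single grayscale image, here is a standalone numpy/scipy sketch; the function name, defaults, and return format are made up for the example and are not OpenCSP APIs.

import numpy as np
from scipy import ndimage

def analyze_spot(image: np.ndarray, sigma: float = 3.0) -> dict:
    """Gaussian filter, centroid/peak computation, and logarithmic intensity
    scaling for a single grayscale image (illustrative only)."""
    # Smooth to suppress sensor noise before locating the peak.
    smoothed = ndimage.gaussian_filter(image.astype(float), sigma=sigma)

    # Brightest pixel and intensity-weighted centroid, both as (row, col).
    peak_rc = np.unravel_index(np.argmax(smoothed), smoothed.shape)
    centroid_rc = ndimage.center_of_mass(smoothed)

    # Logarithmic scaling compresses the dynamic range for visualization;
    # log1p avoids taking log(0) on dark pixels.
    log_scaled = np.log1p(smoothed)
    if log_scaled.max() > 0:
        log_scaled = log_scaled / log_scaled.max()

    return {
        "peak_value": float(smoothed[peak_rc]),
        "peak_rc": peak_rc,
        "centroid_rc": centroid_rc,
        "log_scaled": log_scaled,
    }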

@bbean23 bbean23 force-pushed the 82-spot-analysis-for-peak-intensity-correction branch 2 times, most recently from a0512b8 to 8da78ac on April 24, 2024 17:35
@bbean23 bbean23 requested review from e10harvey and braden6521 April 24, 2024 17:36
@bbean23 bbean23 self-assigned this Apr 24, 2024
@bbean23 bbean23 added the enhancement (New feature or request) label Apr 24, 2024
@bbean23 bbean23 linked an issue (14 tasks) Apr 24, 2024 that may be closed by this pull request
@bbean23
Collaborator Author

bbean23 commented Apr 24, 2024

@e10harvey @braden6521 I added both of you as reviewers in case one of you really wants to do this review, not because I think you both need to do it. I suggest adding a comment that you're claiming the review.

@bbean23 bbean23 marked this pull request as ready for review April 24, 2024 17:46
@e10harvey
Collaborator

@e10harvey @braden6521 I added both of you as reviewers in case one of you really wants to do this review, not because I think you both need to do it. I suggest adding a comment that you're claiming the review.

Thank you, @bbean23. This is a very large diff, and it is impossible for me to review it as-is. Can you split this into multiple PRs in a logical manner? Perhaps on a per-class or per-related-tests basis. It would also be helpful to the reviewers to provide a summary of the changes in the PR description. The longer this PR sits, the harder it will be to merge. Please try to rebase on top of develop on a weekly basis to avoid high cognitive load when resolving conflicts.

@bbean23
Collaborator Author

bbean23 commented Apr 25, 2024

Thank you, @bbean23. This is a very large diff, and it is impossible for me to review it as-is. Can you split this into multiple PRs in a logical manner? Perhaps on a per-class or per-related-tests basis. It would also be helpful to the reviewers to provide a summary of the changes in the PR description. The longer this PR sits, the harder it will be to merge. Please try to rebase on top of develop on a weekly basis to avoid high cognitive load when resolving conflicts.

Sorry about this PR being so big. As you can see from the description, I actually started this PR because I realized the ticket was getting so big in the first place.

  • The summary of changes was on the ticket, and I've now copied that summary here.
  • Braden let me know that he started reviewing this. I've asked him if it will be helpful for me to split it into multiple pieces.
  • Your comment about rebasing it frequently is very insightful and well received. I will absolutely do this.

braden6521
braden6521 previously approved these changes Apr 26, 2024
Collaborator

@braden6521 braden6521 left a comment

@bbean23, some of this was over my head to be honest. But the code looks well written and well documented. I have some simple suggestions, but feel free to take or leave them.

Comment on lines 27 to 29
pixels_to_meters : Callable[[p2.Pxy], v3.Vxyz], optional
    Conversion function to get the physical point in space for the given x/y position information. Used in the
    default self.scale implementation. Defaults to 1 meter per pixel.

Collaborator

Some questions, although this is not necessarily wrong:

  1. Does this assume that the camera is normal to the fiducials?
  2. Is distortion taken into account?
  3. Will different pixels ever have different scales?

Collaborator Author

These are really good questions, and I'm glad that you brought them up. I've updated the docstring to try and answer them:

"""
pixels_to_meters : Callable[[p2.Pxy], v3.Vxyz], optional
    Conversion function to get the physical point in space for the given x/y position information. Used in the
    default self.scale implementation. A good implementation of this function will correct for many factors such
    as relative camera position and camera distortion. For extreme accuracy, this will also account for
    non-uniformity in the target surface. Defaults to a simple 1 meter per pixel model.
"""

Collaborator

I don't think I have the background to fully understand how this works. But generally, this code looks good.

Collaborator Author

Thanks! I don't think you're missing any background, you're just not in my head. :)

In reality, understanding this class requires understanding the SpotAnalysis class, which will hopefully get easier to do with some examples.

Comment on lines 16 to 35
class AbstractFiducials(ABC):
    def __init__(self, style=None, pixels_to_meters: Callable[[p2.Pxy], v3.Vxyz] = None):
        """
        A collection of markers (such as an ArUco board) that is used to orient the camera relative to observed objects
        in the scene. It is suggested that each implementing class be paired with a complementary locator method or
        SpotAnalysisImageProcessor.

        Parameters
        ----------
        style : RenderControlPointSeq, optional
            How to render this fiducial when using the default render_to_plot() method. By default rcps.default().
        pixels_to_meters : Callable[[p2.Pxy], v3.Vxyz], optional
            Conversion function to get the physical point in space for the given x/y position information. Used in the
            default self.scale implementation. Defaults to 1 meter per pixel.
        """

Collaborator

We should probably get in the habit of adding docstrings to all our classes just so it's easier to add these new classes to the docstring test later. Feel free to do this at a later date too. I noticed a few classes with no docstrings here but didn't point them all out.

Suggested change
class AbstractFiducials(ABC):
    def __init__(self, style=None, pixels_to_meters: Callable[[p2.Pxy], v3.Vxyz] = None):
        """
        A collection of markers (such as an ArUco board) that is used to orient the camera relative to observed objects
        in the scene. It is suggested that each implementing class be paired with a complementary locator method or
        SpotAnalysisImageProcessor.

        Parameters
        ----------
        style : RenderControlPointSeq, optional
            How to render this fiducial when using the default render_to_plot() method. By default rcps.default().
        pixels_to_meters : Callable[[p2.Pxy], v3.Vxyz], optional
            Conversion function to get the physical point in space for the given x/y position information. Used in the
            default self.scale implementation. Defaults to 1 meter per pixel.
        """
class AbstractFiducials(ABC):
    """
    A collection of markers (such as an ArUco board) that is used to orient the camera relative to observed objects
    in the scene. It is suggested that each implementing class be paired with a complementary locator method or
    SpotAnalysisImageProcessor.

    Parameters
    ----------
    style : RenderControlPointSeq, optional
        How to render this fiducial when using the default render_to_plot() method. By default rcps.default().
    pixels_to_meters : Callable[[p2.Pxy], v3.Vxyz], optional
        Conversion function to get the physical point in space for the given x/y position information. Used in the
        default self.scale implementation. Defaults to 1 meter per pixel.
    """

    def __init__(self, style=None, pixels_to_meters: Callable[[p2.Pxy], v3.Vxyz] = None):

Collaborator Author

I wasn't sure where to put these docstrings, since they seem like they belong both on the class description and on the init. It looks like realpython suggests splitting it, putting the description on the class and the parameters on the method. I've updated the code to follow that approach. What do you think?

https://realpython.com/documenting-python-code/#class-docstrings
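
For reference, the split that the link describes looks roughly like the sketch below; this is a trimmed-down illustration of the convention, not the actual updated AbstractFiducials code.

from abc import ABC
from typing import Callable

class AbstractFiducials(ABC):
    """
    A collection of markers (such as an ArUco board) that is used to orient
    the camera relative to observed objects in the scene.
    """

    def __init__(self, style=None, pixels_to_meters: Callable = None):
        """
        Parameters
        ----------
        style : RenderControlPointSeq, optional
            How to render this fiducial when using the default render_to_plot()
            method. By default rcps.default().
        pixels_to_meters : Callable[[p2.Pxy], v3.Vxyz], optional
            Conversion function to get the physical point in space for the
            given x/y position information. Defaults to 1 meter per pixel.
        """
        self.style = style
        self.pixels_to_meters = pixels_to_meters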

Collaborator

Yes, that sounds like a good idea! I like that standard.

Comment on lines 44 to 46
@property
def orientation(self) -> v3.Vxyz:
    return v3.Vxyz([0, 0, 0])

Collaborator

Is this "orientation" supposed to have a rotation component too, or is this a pointing direction?

Collaborator Author

It's intended to be an absolute orientation in relation to the camera. Do we also need a pointing direction for that?

Collaborator

If you want a full six degree of freedom orientation, then you would need a rotation and translation, but I'm not sure that's what you're trying to do here.

Comment on lines 45 to 48
def orientation(self) -> v3.Vxyz:
    """The orientation(s) of this instance, in radians. This is relative to
    the source image, where x is positive to the right, y is positive down,
    and z is positive in (away from the camera)."""

Collaborator

Is this supposed to be "radians"? It looks like it outputs a 3D vector. If it is radians, did you consider using a scipy.spatial.transform.Rotation object?

Collaborator Author

That is a brilliant idea! Thanks! 🧠

If you don't mind, please take a look at the updated description (2568e2d) and let me know what you think:

"""
The orientation of the normal vector(s) of this instance.
This is relative to the orthorectified source image, where x is positive
to the right, y is positive down, and z is positive in (away from the
camera).

This can be used to describe the forward transformation from the
camera's perspective. For example, an aruco marker whose origin is in
the center of the image and is facing towards the camera could have the
orientation::

    Rotation.from_euler('y', np.pi)

If that same aruco marker was also placed upside down, then its
orientation could be::

    Rotation.from_euler(
        'yz',
        [ [np.pi, 0],
            [0,     np.pi] ]
    )
"""

Collaborator

@braden6521 braden6521 May 20, 2024

I think that description is overall good! I'm not sure I understand what is meant by:

relative to the orthorectified source image

Is this the image on the camera sensor? If so, why is this orthorectified? If it is the camera or the target coordinates, I would just expect "camera" or "target."

Also, I would add that this is just a rotation, not a translation. When I think of an orientation, I usually think of rotation and translation (six degrees of freedom, or six DOF). Maybe consider renaming this to rotation?

Collaborator

This is nice

Collaborator Author

Thanks! It's really handy for this sort of situation where:

  • you're going to be using lots of random classes
  • some of them share the same name, so this makes it easy to tell which package each one came from

opencsp/common/lib/geometry/Vxy.py (resolved)
opencsp/common/lib/geometry/Vxyz.py (resolved)
opencsp/common/lib/tool/file_tools.py (outdated, resolved)
@bbean23 bbean23 force-pushed the 82-spot-analysis-for-peak-intensity-correction branch 2 times, most recently from 1a35c4a to 2568e2d on April 26, 2024 21:13

Collaborator Author

@bbean23 bbean23 left a comment

Thank you for your thoughtful and considerate review, as always! I've made some commits to try and improve the code based on your comments. Please let me know what you think of these.

class NullImageSubtractionImageProcessor(AbstractSpotAnalysisImagesProcessor):
    """
    Subtracts the NULL supporting image from the primary image, if there is an associated NULL image.

Collaborator Author

It's an image of a target without a beam on it. (Thanks for prompting me to add descriptions 62e77ae)

I wasn't sure what to call this type of image. Do you have a good idea?
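
For anyone reading along, a rough standalone sketch of the kind of operation this processor performs; the dtype handling and clipping here are illustrative choices, not necessarily what NullImageSubtractionImageProcessor actually does.

import numpy as np

def subtract_null_image(primary: np.ndarray, null: np.ndarray) -> np.ndarray:
    """Remove the ambient background (the NULL, no-beam image) from the primary.

    Working in a signed dtype and clipping at zero avoids the unsigned
    underflow that direct uint8/uint16 subtraction would produce.
    """
    diff = primary.astype(np.int64) - null.astype(np.int64)
    return np.clip(diff, 0, None).astype(primary.dtype)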

@property
def orientation(self) -> v3.Vxyz:
    # TODO untested
    return np.zeros((3, self.points.x.size))

Collaborator Author

I do like that more. Done!
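
The comment this replies to is not shown here, but given the scipy Rotation suggestion earlier in the thread, the reworked default might look roughly like the sketch below; ExampleFiducial and its attributes are hypothetical stand-ins, not the OpenCSP classes.

import numpy as np
from scipy.spatial.transform import Rotation

class ExampleFiducial:
    """Illustrative stand-in for a fiducial with N tracked points."""

    def __init__(self, points_xy: np.ndarray):
        self.points_xy = np.asarray(points_xy, dtype=float)  # shape (N, 2)

    @property
    def orientation(self) -> Rotation:
        # Default: one identity rotation per point, i.e. no tilt relative to
        # the orthorectified image frame.
        return Rotation.identity(len(self.points_xy))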

@bbean23 bbean23 mentioned this pull request May 1, 2024
braden6521
braden6521 previously approved these changes May 20, 2024
Collaborator

@braden6521 braden6521 left a comment

@bbean23, thanks for answering my questions. Feel free to take or leave my new comments. I still need some more exposure to all this to fully wrap my head around it. I tried to do my best with this large PR, but in general the code looks good.

@e10harvey
Collaborator

@bbean23: I restarted the windows2022 CI.

@e10harvey
Collaborator

@bbean23: I believe restarting is using the previous version of develop, sans the commits from #95. Please rebase this on top of develop.

bbean23 added a commit to bbean23/OpenCSP that referenced this pull request May 29, 2024
@bbean23 bbean23 force-pushed the 82-spot-analysis-for-peak-intensity-correction branch from 4341257 to e2d5055 on May 29, 2024 16:48
@bbean23
Collaborator Author

bbean23 commented May 29, 2024

@bbean23: I believe restarting is using the previous version of develop, sans the commits from #95. Please rebase this on top of develop.

I think you're right about this. I just rebased and pushed. I'll ping you again to approve once the builds are finished.

Collaborator

@braden6521 braden6521 left a comment

Approved!

bbean23 added 27 commits May 29, 2024 15:36
… add initial_min and initial_max to PopulationStatisticsImageProcessor
@bbean23 bbean23 force-pushed the 82-spot-analysis-for-peak-intensity-correction branch from e2d5055 to 918a09b on May 29, 2024 21:36
@bbean23 bbean23 merged commit e432934 into sandialabs:develop May 29, 2024
4 checks passed