
Read pyuvdata-formatted beamfits beam models #301

Open · wants to merge 16 commits into master from uvbeam_support
Conversation

@rlbyrne (Contributor) commented Mar 24, 2023

This PR allows the user to specify the beam model by supplying the path to a *.beamfits file.
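As a sketch of the user-facing behavior this adds (hypothetical helper in Python for illustration only; FHD's actual logic lives in IDL, in fhd_core/beam_modeling), the beam source can be selected by checking whether the supplied beam model is a path to a *.beamfits file:

```python
import os

def select_beam_source(beam_model):
    """Return 'beamfits' when the user supplies a *.beamfits path,
    otherwise fall back to the built-in instrument model.
    Hypothetical helper sketching the dispatch, not FHD's IDL code."""
    if isinstance(beam_model, str) and beam_model.lower().endswith(".beamfits"):
        if not os.path.exists(beam_model):
            raise FileNotFoundError(beam_model)
        return "beamfits"
    return "builtin"
```

In pyuvdata terms, a file of this kind is one written by `UVBeam.write_beamfits`, but the dispatch above is only a stdlib illustration of the interface.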

@rlbyrne (Contributor, Author) commented Mar 29, 2023

@nicholebarry (Contributor) left a review

Here's an initial run-through of changes. Once they're implemented, I'll run some checks to confirm it doesn't break existing beams.

Review comments on:
- fhd_core/beam_modeling/fhd_struct_init_antenna.pro
- fhd_core/beam_modeling/import_az_el_beam.pro
@rlbyrne force-pushed the uvbeam_support branch 3 times, most recently from 5c307c0 to ca206f3 (April 25, 2023)
@nicholebarry (Contributor) commented
I've implemented a memory reduction for the imported beamfits, matching what is done for normal beam generation (a two-line code change).

Memory usage is now below 80 GB for a typical MWA run; a normal MWA run of this type stays below 60 GB. However, the job's memory-usage profile shows a spike during beam generation. If I can track that down, this path can be just as efficient as a normal run.

[Screenshot: memory-usage profile of the job, 24 May 2023]
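The spike-hunting described above can be sketched with Python's stdlib `tracemalloc`, which reports peak allocation during a call (FHD's beam generation is IDL, so this only illustrates the profiling pattern, not the actual pipeline):

```python
import tracemalloc

def peak_memory_of(fn, *args, **kwargs):
    """Run fn and return (result, peak_bytes) for allocations made
    during the call. Illustrative only: shows how to localize a
    transient allocation spike to one step."""
    tracemalloc.start()
    try:
        result = fn(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    return result, peak

def build_big_temporary():
    # A step that briefly allocates a large temporary list
    temp = [0.0] * 1_000_000
    return sum(temp)
```

Wrapping each candidate step this way narrows down which one produces the spike.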

The resulting image is wrong, however. I'm using a beamfits of the MWA produced by pyuvdata, and it looks like the beam (right) is all 1's within the horizon (data on the left). This could be a problem with the beamfits I've produced, or with the code; more investigation is required (advice welcome).

[Screenshot: dirty image (left) and imported beam (right), 25 May 2023]
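The "all 1's within the horizon" symptom can be checked numerically rather than by eye. A hedged, stdlib-only sketch, where `za_deg` and `beam` are hypothetical flat lists of zenith angles and beam power values (real beams live in FITS HDUs, not flat lists):

```python
import math

def beam_is_flat(za_deg, beam, tol=1e-6):
    """Return True if every beam value within the horizon (za < 90 deg)
    equals 1 to within tol -- the failure mode described above.
    Hypothetical diagnostic helper, not part of FHD or pyuvdata."""
    vals = [b for za, b in zip(za_deg, beam) if za < 90.0]
    return bool(vals) and all(math.isclose(b, 1.0, abs_tol=tol) for b in vals)
```

A True result would point at the beamfits contents (or the reader) rather than the gridding downstream.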

@rlbyrne (Contributor, Author) commented May 30, 2023

@nicholebarry Thank you for the memory speedup!

I'm afraid I don't understand the images you posted. How can you tell the images are wrong? I wouldn't expect the pyuvdata beam to exactly match the MWA beam built into FHD.
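One way to make "wrong" vs. "expectedly different" quantitative is a worst-case fractional comparison of the two beams sampled on the same grid (stdlib sketch; `beam_a`/`beam_b` are hypothetical matched samples of the FHD-internal and pyuvdata beams):

```python
def max_fractional_difference(beam_a, beam_b, floor=1e-3):
    """Largest |a - b| / max(|a|, floor) over matched samples.
    The floor keeps near-zero far-sidelobe values from dominating
    the ratio. Hypothetical comparison helper, not part of FHD."""
    return max(abs(a - b) / max(abs(a), floor)
               for a, b in zip(beam_a, beam_b))
```

A few percent of difference between beam models might be expected; order-unity differences inside the main lobe would indicate a real bug.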

@rlbyrne (Contributor, Author) commented Jun 26, 2023

Unfortunately the "Reduce mem" commit seems to have introduced a bug. Here is an example of an image made without that commit:

[Image: 458_peeled_calibrated_natural_Dirty_I_no_beam_speedup]

and with the commit:

[Image: 458_peeled_calibrated_natural_Dirty_I_with_beam_speedup]

I'm going to undo that commit for now, but we can revisit whether a similar kind of memory speedup is possible later.

@nicholebarry (Contributor) commented Aug 28, 2023

Fixed the problem with the memory reduction. There is now some push towards combining some of the beamfits code, since pyuvdata_beam_import is quite small. @rlbyrne, can you check that the change does not affect LWA data?

I need to rerun because the beam is ever so slightly off, but my run didn't include the recent commits. I'll set that up now and should have results soon.

@rlbyrne (Contributor, Author) commented Sep 6, 2023

@nicholebarry Thank you. I will run an LWA test. Please let me know what you find with your test.

@rlbyrne (Contributor, Author) commented Sep 14, 2023

I just tested the latest version of the branch on LWA data and the results look good. @nicholebarry are there any more tests we need to run?
