Use context array for exposure time calculation #1384
Comments
I agree that there's a similar approach that is something like:
that would save a drizzle. That's a good idea. More broadly, I have the impression that there is a large amount of repeated computation among the different drizzles: the data and the various variance arrays are all drizzled with the same kernels and WCSes. I feel like there's another win there if we could tell drizzle to "drizzle all of the following arrays in the same way."
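A minimal sketch of the "drizzle all of the following arrays in the same way" idea, assuming the overlap mapping (which input pixel contributes to which output pixel, with what weight) can be computed once from the shared kernel/WCS and then reused. The function and argument names here are hypothetical, not the real drizzle API:

```python
import numpy as np

def drizzle_many(inputs, idx_in, idx_out, weights, out_shape):
    """Resample several input arrays with ONE precomputed pixel mapping.

    idx_in / idx_out / weights describe which (flattened) input pixel
    contributes to which (flattened) output pixel with what overlap
    weight.  That mapping depends only on the kernel and WCSes, so it
    can be computed once and applied to the data, each variance array,
    a unit image for exposure time, etc.  (Hypothetical interface.)
    """
    outputs = []
    for arr in inputs:
        flat = np.zeros(int(np.prod(out_shape)))
        # scatter-add each weighted input contribution onto the output
        np.add.at(flat, idx_out, weights * arr.ravel()[idx_in])
        outputs.append(flat.reshape(out_shape))
    return outputs
```

The expensive geometry (overlap computation) is done once; each extra array costs only a scatter-add.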
Oh, a subtlety here is that for small pixfrac and large oversampling, some input pixels won't touch some output pixels. We use pixfrac = 1 here to avoid that issue: https://github.com/spacetelescope/romancal/pull/959/files#diff-f87241f9a890ac31d6813379a09b6eb7e1052e3ab53039449a8f0df91aa030e1R453
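The pixfrac subtlety can be illustrated with a 1-D toy model (an illustrative sketch, not romancal code): shrink each input pixel to a footprint of width `pixfrac` and measure how much of each oversampled output pixel it covers. With `pixfrac < 1` and oversampling, some output pixels receive zero weight:

```python
import numpy as np

def overlap_1d(pixfrac, oversample, n_in=4):
    """Fraction of each fine output pixel covered by any shrunken
    ("pixfrac") input pixel, in a 1-D toy geometry with aligned grids."""
    n_out = n_in * oversample
    edges = np.linspace(0, n_in, n_out + 1)        # fine output grid edges
    cov = np.zeros(n_out)
    for i in range(n_in):
        c = i + 0.5                                # input pixel center
        lo, hi = c - pixfrac / 2, c + pixfrac / 2  # shrunken footprint
        # overlap length of [lo, hi] with each output cell
        cov += np.clip(np.minimum(edges[1:], hi)
                       - np.maximum(edges[:-1], lo), 0, None)
    return cov / np.diff(edges)

# pixfrac=1 covers every output pixel; pixfrac=0.5 with 4x oversampling
# leaves output pixels that no input pixel touches.
```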
Doesn't hard-coding pixfrac mean that the calculated exposure times won't match the actual exposure times for the mosaic if a non-1.0 pixfrac is used for resampling?
It depends on what is meant by the "actual exposure time." For these values we want that quantity to mean "how much time did Roman spend pointed at this part of the sky," i.e., we want the metadata to support people asking questions like "give me the parts of the sky where Roman has spent the most time looking." We don't want to exclude regions just because we chose to process them with a different pixfrac / oversampling, which means fewer input pixels contribute to each output pixel, when ultimately the same total depths and sensitivities are possible.
Thanks for the clarification! Given that information, I don't think the context array will work. I'll close this issue, but as you noted, consolidating the resampling should make the extra resampling of the exposure time less costly.
Sounds good. FWIW, the exposure time calculation could still be sped up: e.g., we don't really care about the details of the boundaries and would be satisfied filling everything between the corners with a constant, so it's still the case that drizzling is overkill. But I didn't want to write and test custom code here when I could "just" do another drizzle.
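The "fill everything between the corners with a constant" shortcut could look something like this sketch (a hypothetical helper, assuming a convex footprint given by counterclockwise corner coordinates in the output pixel frame; not the romancal implementation):

```python
import numpy as np

def add_exptime(expmap, corners, exptime):
    """Add a constant exposure time over the convex projected footprint
    of one input image, instead of drizzling a unit image.  Cheap, and
    ignores sub-pixel boundary detail, which is fine for this purpose.

    corners: list of (x, y) vertices in counterclockwise order.
    """
    ny, nx = expmap.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    inside = np.ones(expmap.shape, dtype=bool)
    n = len(corners)
    for i in range(n):
        # half-plane test: pixel centers left of each CCW edge are inside
        (x0, y0), (x1, y1) = corners[i], corners[(i + 1) % n]
        inside &= (x1 - x0) * (yy - y0) - (y1 - y0) * (xx - x0) >= 0
    expmap[inside] += exptime
```

Summing over all inputs this way replaces one full drizzle per exposure with a vectorized point-in-polygon test.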
#959 improved the level 3 exposure time calculation by adding an additional resampling step.
Although I don't yet fully understand the details, I have a hunch that the context array (generated during the science data resampling) would be sufficient to perform the calculation.
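For reference, the drizzle context array stores, for each output pixel, a bitmask of which input images contributed, spread over 32-bit planes (bit b of plane k set means input image 32*k + b contributed). Counting contributing exposures per pixel from it might look like this sketch (illustrative only; per the discussion, this count would miss time from inputs excluded by pixfrac/oversampling choices):

```python
import numpy as np

def exposures_per_pixel(context):
    """Count how many input images contributed to each output pixel,
    given a drizzle context array of shape (nplanes, ny, nx) where
    bit b of plane k encodes input image 32*k + b."""
    counts = np.zeros(context.shape[1:], dtype=int)
    for plane in context.astype(np.uint32):
        for bit in range(32):
            counts += (plane >> bit) & 1   # add 1 where this input hit
    return counts
```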