Added splice module #152
base: master
Conversation
…into dev_ldsantos
Testing would be more robust if you upload an input file and a "truth" output file. Then you could compare all data values. I'm concerned that testing one flux value might not be sensitive to changes in all of your code's logic.
Improved the test suite of …
Leo, will most users want to use …? I'm wondering if …
…he other `stistools` modules
Docstrings should follow the numpy/scipy docstring format.

Current:

    Parameters
    ----------
    unique_spectra_list (``list``):
        List of unique spectra.
    merged_pair_list (``list``):
        List of merged overlapping pair spectra.
    ...

Suggested:

    Parameters
    ----------
    unique_spectra_list : list
        List of unique spectra.
    merged_pair_list : list
        List of merged overlapping pair spectra.
    ...

Note that the colons won't currently render properly on RTD, but we're working on that in a separate PR.
stistools/splice.py
Outdated
    dq_weights_interp = np.zeros_like(dq_interp)
    # And then for each acceptable dq, if the element of the dq array is one
    # of the acceptable flags, we set its dq weight to one
    for adq in acceptable_dq_flags:
Should probably check bitwise, i.e.:

    np.where(dq_ref & adq)

and

    np.where(dq_interp & adq)

in case more than one flag is present at a particular location. Perhaps this doesn't occur with the default flags, but could with other options provided by the user.
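To see why the bitwise check matters, here is a small sketch with hypothetical DQ values (the flag values 512 and 2048 are just examples; the real arrays come from the FITS data):

```python
import numpy as np

# Hypothetical DQ array: the last pixel carries two flags at once (512 | 2048).
dq_ref = np.array([0, 512, 2048, 512 | 2048], dtype=np.uint16)
adq = 512  # one acceptable flag

# Equality only matches pixels whose DQ is exactly 512 and misses the
# combined-flag pixel; the bitwise AND catches both.
eq_hits = np.where(dq_ref == adq)[0]
bit_hits = np.where(dq_ref & adq)[0]
```

Here `eq_hits` is `[1]` while `bit_hits` is `[1, 3]`: the equality test silently drops the pixel where 512 is combined with another flag.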
In fact, you can probably skip the loop over the various `acceptable_dq_flags` values and compare to the scalar value that is the bitwise-or of the `adq` flags (a sum, in this case).
Ok, so wrapping it up together:

    from functools import reduce
    acceptable_dq_flags = reduce(np.bitwise_or, acceptable_dq_flags)  # scalar value
    dq_weights_ref[np.where(dq_ref & acceptable_dq_flags)] = 1
    dq_weights_interp[np.where(dq_interp & acceptable_dq_flags)] = 1

(Should be tested.)
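A tested version of that sketch, with assumed flag values (the real `acceptable_dq_flags` come from the splice module's defaults):

```python
from functools import reduce
import numpy as np

acceptable_dq_flags = [64, 1024]  # assumed example flags
dq_ref = np.array([0, 64, 2048, 64 | 2048], dtype=np.uint16)

flags = reduce(np.bitwise_or, acceptable_dq_flags)  # scalar 64 | 1024
dq_weights_ref = np.zeros_like(dq_ref)
dq_weights_ref[np.where(dq_ref & flags)] = 1
# Note: a pixel with DQ == 0 ends up with weight 0 here, which is the
# gap raised in the follow-up comments.
```

The resulting weights are `[0, 1, 0, 1]`: the combined-flag pixel is handled in one pass, but the unflagged (DQ = 0) pixel is missed.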
It seems that this code snippet catches all the good flags, except for 0. But I should be able to add a line or two that catches it.
Good catch! We don't typically flag acceptable pixels, so that slipped by. Perhaps:

    bad_dq_flags = ~reduce(np.bitwise_or, acceptable_dq_flags)  # scalar value
    dq_weights_ref = np.ones(..., dtype=...)
    dq_weights_interp = np.ones(..., dtype=...)
    dq_weights_ref[dq_ref & bad_dq_flags] = 0
    dq_weights_interp[dq_interp & bad_dq_flags] = 0

Looking at it now, we probably don't need the `np.where` either.
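A concrete version of this inverted-mask approach, with assumed flag values. DQ = 0 ("no flags set") now correctly keeps weight 1, and boolean indexing replaces `np.where`; casting through `np.uint16` keeps the bitwise NOT well-defined for 16-bit DQ arrays:

```python
from functools import reduce
import numpy as np

acceptable_dq_flags = [64, 1024]  # assumed example flags
dq_ref = np.array([0, 64, 2048, 64 | 2048], dtype=np.uint16)

# Invert the combined acceptable flags to get the "bad" bit pattern.
bad_dq_flags = ~np.uint16(reduce(np.bitwise_or, acceptable_dq_flags))

# Start from all-ones weights, then zero out any pixel carrying a bad bit.
dq_weights_ref = np.ones_like(dq_ref)
dq_weights_ref[(dq_ref & bad_dq_flags) != 0] = 0
```

The weights come out `[1, 1, 0, 0]`: the unflagged pixel and the acceptably-flagged pixel are kept, while any pixel carrying a bad bit (even combined with a good one) is zeroed.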
…into dev_ldsantos
@ladsantos - …
Hi @sean-lockwood, no worries! It is good to go for sanity checks and merge.
@ladsantos - cc: @Jackie-Brown
I'm noticing near the spikes there are a lot of … Might be indicative of a problem.
Also, the DQ flag …
The spikes on the right have DQ = (512 | 2048), whereas the ones on the left are 2^15. I think making the current DQ 2^15 regions be (2^15 | 2048) makes the most sense, as we'd be able to filter all the spikes using the prior definition of SDQFLAGS (preserving behavior in legacy code) and your distinction using 2^15 would still be present. (Pending discussion with the team.)
I'm still looking at the DQ behavior in more detail, but here are a few issues I noticed today.
    # We interpolate the lower-SNR spectra to the wavelength bins of the higher
    # SNR spectrum.
    max_sens_idx = np.where(avg_sensitivity == np.nanmax(avg_sensitivity))[0][0]
    overlap_ref = overlap_sections.pop(max_sens_idx)
This has the side effect of modifying the input variable. You should probably make a copy inside the function instead.
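A toy illustration of the side effect, with hypothetical names (`pick_ref` stands in for the function above):

```python
# list.pop inside a function mutates the caller's list.
def pick_ref(sections):
    return sections.pop(0)

secs = ['g140l', 'g230l']
ref = pick_ref(secs)
# secs has shrunk to ['g230l'] as a side effect.

# Safer variant: shallow-copy first, so the caller's list is untouched.
def pick_ref_safe(sections):
    sections = list(sections)
    return sections.pop(0)

secs2 = ['g140l', 'g230l']
ref2 = pick_ref_safe(secs2)
# secs2 is still ['g140l', 'g230l'].
```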
    if truncate_edge_right is not None:
        for sk in spectrum:
            sk['data_quality'][-truncate_edge_right:] = 4096
    elif truncate_edge_left is not None:
`elif` -> `if` allows both edges to be truncated with default values. Is that the desired behavior?
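A minimal sketch of the suggested `if`/`if` form, simplified to operate on a bare DQ array rather than the spectrum dicts from the snippet above (`truncate_edges` is a hypothetical name):

```python
import numpy as np

def truncate_edges(dq, truncate_edge_left=None, truncate_edge_right=None):
    # Two independent `if`s, so both edges can be flagged in one call.
    if truncate_edge_right is not None:
        dq[-truncate_edge_right:] = 4096
    if truncate_edge_left is not None:
        dq[:truncate_edge_left] = 4096
    return dq

dq = truncate_edges(np.zeros(6, dtype=np.uint16),
                    truncate_edge_left=1, truncate_edge_right=2)
```

With the original `elif`, only the right edge would have been flagged here; with two `if`s both edges get DQ = 4096.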
    # Merge the overlapping spectral sections
    merged_pairs = [
        merge_overlap(overlap_pair_sections[k], acceptable_dq_flags)
Should the `weight` parameter be passed here too?
    overlap_s_u = np.copy(overlap_sections[i]['uncertainty'])
    overlap_s_n = np.copy(overlap_sections[i]['net'])
    overlap_s_dq = overlap_sections[i]['data_quality']
    where_dq_bad = np.where(overlap_s_dq & bitwise_or_acceptable_dq_flags)
If `bitwise_or_acceptable_dq_flags` are the flags that we're ok ignoring, shouldn't we use the inverse here to find bad locations?

    unacceptable_dq_flags = int(~np.uint16(bitwise_or_acceptable_dq_flags))
    ...
    where_dq_bad = (overlap_s_dq & unacceptable_dq_flags) != 0
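A runnable version of that suggestion with assumed values (the real `bitwise_or_acceptable_dq_flags` and `overlap_s_dq` come from the splice module):

```python
import numpy as np

bitwise_or_acceptable_dq_flags = 64 | 1024  # assumed example value
overlap_s_dq = np.array([0, 64, 512, 512 | 64], dtype=np.uint16)

# Invert within 16 bits, then find pixels carrying any unacceptable bit.
unacceptable_dq_flags = int(~np.uint16(bitwise_or_acceptable_dq_flags))
where_dq_bad = (overlap_s_dq & unacceptable_dq_flags) != 0
```

The mask comes out `[False, False, True, True]`: unflagged and acceptably-flagged pixels pass, while 512 is caught even when combined with an acceptable flag.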
I tried this, and it does not remove the spikes on the right. I think there's something we are missing with the logic here (see my comment below).
The 2^15 DQ flag represents pixels that were co-added from good pixels (those with acceptable flags). This is odd, because 2048 and 512 are not acceptable DQ flags, so it shouldn't be co-adding the DQ = (512 | 2048) pixels. Or am I getting this logic wrong?
Added the `splice` module, which is based on the standalone code `stissplice`, including a simple test suite and documentation.