
mags.py tests #148

Open · 6 tasks
PK0207 opened this issue Mar 30, 2022 · 2 comments

PK0207 commented Mar 30, 2022

Describe the science task
Conduct tests on mags.py to see how varying its parameters affects its performance.

Proposed Direction

  • Is mags.py performance better than the PTF mags calculations? (More open-ended.)
  • Does varying the exposure time in aperture photometry affect the magnitude calculation? (It should not, so a dependence would point to a flaw in the logic.)
  • Investigate how varying the aperture size affects the results (a first step toward optimizing the aperture).
  • How does the time taken to run mags.py scale with more data points? (See the timing sketch after this list.)
  • Run normalization with individual reference stars, and see how the performance of the normalization varies.
  • Find another variable star to run the pipeline on to catch any formatting issues (coordinate formats and filter transformations).
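
As a rough illustration of the data-size question above, here is a minimal, self-contained timing sketch. compute_mags is a hypothetical stand-in for the actual mags.py entry point (its real interface is not shown in this issue), so only the sweep structure is meant to carry over.

import time
import numpy as np

def compute_mags(n_points, aperture_radius=5.0, exptime=60.0):
    # Hypothetical stand-in for the mags.py call; swap in the real function to test it.
    inst = np.random.normal(15.0, 0.5, n_points)      # fake instrumental magnitudes
    return inst + 2.5 * np.log10(exptime / 60.0)      # placeholder exposure scaling

for n in (100, 1_000, 10_000):
    start = time.perf_counter()
    compute_mags(n)
    print(f"{n:>6} points: {time.perf_counter() - start:.4f} s")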

Desired Results
We want mags.py to perform efficiently, and the script to be optimized to run well on any target star, variable or non-variable.

Resources
Any applicable papers/resources/data/etc that should be consulted/used.
^^ PK will add PTF papers


EdgarMao self-assigned this Mar 30, 2022
Samuel-Salters commented

Error Propagation

The current error function propagates error through the fit function to find the error in each target magnitude output by mags.py. The fit takes in an instrumental magnitude and corrects it using the equation of the fit line, which is essentially of the form m*x + b. The current method of error propagation uses the standard formula

sigma = sqrt(sum((df/dx_i * sigma_x_i)^2))

where the x_i are the variables in the fit function and sigma_x_i their errors. As our function is m*x + b, the variables are the target instrumental magnitude x and the slope of the line m, so taking the partial derivatives gives the final equation

sigma = sqrt((x*sigma_m)^2 + (m*sigma_x)^2)

This is the error on the target magnitude when it is returned by mags.py. The function that achieves this breaks the equation down into steps, for ease of troubleshooting and to ensure accuracy in the propagation. The function itself is below.

First the fit is performed and the slope and the parameter errors are defined.

coeff, cov = np.polyfit(x, y, 1, cov=True)       # linear fit with covariance matrix
parameter_err = np.sqrt(np.diag(cov))            # 1-sigma errors on [slope, intercept]
fit_err = parameter_err[0]                       # error on the slope
int_err = parameter_err[1]                       # error on the intercept (used later)
fit_slope = np.abs(coeff[0])                     # slope of the fit line

The error propagation then breaks the equation into two parts and carries out each step. This happens within the fitting function, so two values are computed for each magnitude passed through the fit function.

def mag_err1(targetmag):
    # first term: x * sigma_m
    return fit_err * targetmag

def mag_err2(in_err):
    # second term: m * sigma_x
    return fit_slope * in_err

err_1 = mag_err1(target_mag)
err_2 = mag_err2(in_magerr)

def err_prop1(e_1):
    return e_1**2

def err_prop2(e_2):
    return e_2**2

prop_1 = err_prop1(err_1)
prop_2 = err_prop2(err_2)

where prop_1 is the first part of the equation, (x*sigma_m)^2, and prop_2 is the second, (m*sigma_x)^2. They are then summed and square-rooted in a for loop, creating an array the same size as the array of magnitudes returned by the fit function, with the ith element being the error corresponding to the ith magnitude.

error = np.zeros(len(prop_2))
for i in range(0, len(prop_2)):
    error[i] = np.sqrt(prop_1[i] + prop_2[i])
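
For reference, the same propagation can be collapsed into a single vectorized expression; this is just a sketch using the arrays already defined above, not part of the current script.

# equivalent one-line propagation: sqrt((x*sigma_m)^2 + (m*sigma_x)^2)
error = np.sqrt((target_mag * fit_err)**2 + (fit_slope * in_magerr)**2)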

Now, once passed through phase_diagram.py, the error bars in the light curve are produced.
(Figure 1: light curve produced by phase_diagram.py, showing the propagated error bars)
The errors are consistent across the graph and vary in a way that makes sense given how they are propagated, and looking through the data file there are no absurd values. However, the errors are clearly far too large, which I believe is down to the magnitude being multiplied by the error, creating an unreasonably large number. After consulting the Taylor textbook on error analysis, there appears to be a better way to propagate the error through a product than the one above, using the sum of the relative uncertainties. That method will hopefully keep the uncertainties from being unreasonably large.
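
For a simple product f = m*x, Taylor's rule works in terms of relative uncertainties:

sigma_f / |f| = sqrt((sigma_m / m)^2 + (sigma_x / x)^2)

so the absolute uncertainty is |m*x| times that combined relative term, which stays proportional to the size of the measurement instead of growing with the raw magnitude times the error.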


Samuel-Salters commented May 11, 2022

The new error function follows the recommendation of the Taylor textbook on error analysis to work with relative uncertainties in the case of simple sums and products, as this better approximates the error. It is also a much simpler way of computing the errors. The new error propagation method is as follows.
The first step computes the relative uncertainties for m and x in the m*x + b of the fit; the next step sums them in quadrature, giving the final relative uncertainty. That relative uncertainty is then multiplied by the value of m*x to return the uncertainty in m*x. This uncertainty and the uncertainty in the intercept b are then summed in quadrature to produce the error.
The script of this function is:

def relative_u(sigma, x):
    return sigma / np.abs(x)

slope_u = relative_u(fit_err, fit_slope)          # relative uncertainty in m
mag_u = relative_u(in_magerr, target_inst_mag)    # relative uncertainty in x
rel_u = np.sqrt(mag_u**2 + slope_u**2)            # combined relative uncertainty, per point
error = rel_u * (target_mag - fit_intercept)      # uncertainty in m*x
target_magerr = error
err = []
for i in range(0, len(target_magerr)):
    # add the intercept uncertainty in quadrature
    e = np.sqrt(target_magerr[i]**2 + int_err**2)
    err.append(e)

The error bars produced are smaller than those from the first method under preliminary testing.
(Figure: light curve with the new, smaller error bars)

The new version of the error calculation is:

def relative_u(sigma, x):
    return sigma / np.abs(x)

slope_u = relative_u(fit_err, fit_slope)                    # relative uncertainty in m
mag_u = np.mean(relative_u(in_magerr, target_inst_mag))     # mean relative uncertainty in x

def err(mag, slope):
    # combine the relative uncertainties in quadrature
    return np.sqrt(mag**2 + slope**2)

mx_err = err(mag_u, slope_u)

def corr(err, slope, mag):
    # scale back to an absolute uncertainty in m*x
    return err * slope * mag

part_err = corr(mx_err, fit_slope, target_inst_mag)

def add(err):
    # add the intercept uncertainty in quadrature
    b = int_err
    return np.sqrt(err**2 + b**2) / 2

final_err = add(part_err)
target_magerr.append(final_err)
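
For context, here is a self-contained sketch showing how the relative-uncertainty propagation discussed above could be exercised end to end on made-up numbers. It applies the propagation per measurement and skips the np.mean and the division by two used in the snippet above, so it illustrates the approach rather than reproducing the exact function; all of the input values are fabricated for the example.

import numpy as np

# Fake calibration data: instrumental vs. catalog magnitudes for reference stars.
rng = np.random.default_rng(0)
inst = np.linspace(12.0, 16.0, 20)
cat = 1.02 * inst + 0.35 + rng.normal(0.0, 0.02, inst.size)

coeff, cov = np.polyfit(inst, cat, 1, cov=True)
fit_slope, fit_intercept = coeff
fit_err, int_err = np.sqrt(np.diag(cov))              # 1-sigma errors on slope and intercept

# Target-star instrumental magnitudes and their errors.
target_inst_mag = np.array([14.1, 14.3, 14.0])
in_magerr = np.array([0.03, 0.04, 0.03])

slope_u = fit_err / abs(fit_slope)                    # relative uncertainty in m
mag_u = in_magerr / np.abs(target_inst_mag)           # relative uncertainty in x
mx_err = np.sqrt(mag_u**2 + slope_u**2) * np.abs(fit_slope * target_inst_mag)
target_magerr = np.sqrt(mx_err**2 + int_err**2)       # intercept error added in quadrature
print(target_magerr)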
