Log cosh #421
Conversation
While waiting for https://github.com/JuliaStats/Distributions.jl/pull/1157
For a 0.14.8 release
For a 0.14.9 release
For a 0.15 release
For 0.15.1 release
This reverts commit 9939b92.
Codecov Report
@@ Coverage Diff @@
## dev JuliaAI/MLJBase.jl#421 +/- ##
=======================================
Coverage 81.83% 81.83%
=======================================
Files 38 38
Lines 2703 2703
=======================================
Hits 2212 2212
Misses 491 491
Continue to review full report at Codecov.
function (log_cosh::LogCosh)(ŷ::Vec{<:Real}, y::Vec{<:Real})
    check_dimensions(ŷ, y)
    return sum(log.(cosh.(ŷ - y))) / length(y)
end
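As a side note, `cosh` overflows in `Float64` for arguments beyond roughly 710, while `log(cosh(x))` itself stays moderate. A numerically stable sketch (an assumption for illustration, not part of this PR) can use the identity `log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2)`:

```julia
# Hypothetical numerically stable log-cosh (not the PR's implementation):
# log(cosh(x)) = |x| + log1p(exp(-2|x|)) - log(2), valid for all real x.
stable_logcosh(x) = abs(x) + log1p(exp(-2abs(x))) - log(2)

stable_logcosh(1.0)      # matches log(cosh(1.0)) ≈ 0.43378
stable_logcosh(1000.0)   # ≈ 999.307, while log(cosh(1000.0)) overflows to Inf
```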
Suggestion (see C below): return a vector of per-observation losses: return log.(cosh.(ŷ-y)) / length(y)
oops, I meant to drop the /length(y) as well.
The sum is missing in this definition
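Taken together, the suggestion amounts to something like the following standalone sketch (the name `logcosh_per_obs` is hypothetical, and the `check_dimensions` guard is omitted for brevity):

```julia
# Hypothetical per-observation variant of the loss discussed above:
# no `sum` and no `/length(y)` — one loss value per observation.
logcosh_per_obs(ŷ, y) = log.(cosh.(ŷ .- y))

logcosh_per_obs([1, 2, 3], [2, 3, 4])   # three entries, each log(cosh(1)) ≈ 0.43378
```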
@ven-k Thanks for this contribution and for taking the time to wrap your head around the API!
C
As the loss is just the mean of a per-observation loss, can I suggest the changes above? By default this loss will be aggregated in resampling using the mean (aggregation(...) = Mean()). In this way the per-observation losses will be available, for example, to hyper-parameter optimisation strategies that use this information (some Bayesian methods).
So then the code will look basically the same as l1, which has this behaviour:
julia> l1([1,2,3], [2,3,4])
3-element Array{Int64,1}:
1
1
1
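For the record, aggregating such per-observation losses with the mean recovers the scalar value computed by the original definition (the names below are assumed for illustration only):

```julia
using Statistics  # for mean

# Hypothetical per-observation form, as discussed above
logcosh_per_obs(ŷ, y) = log.(cosh.(ŷ .- y))

ŷ, y = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0]
scalar = sum(log.(cosh.(ŷ .- y))) / length(y)   # the scalar definition in this PR
mean(logcosh_per_obs(ŷ, y)) ≈ scalar            # true
```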
src/measures/continuous.jl
Log-Cosh loss:

``\\text{Log-Cosh} = m^{-1}∑ᵢ log(cosh(ŷᵢ-yᵢ))``
One more thing: update the docstring to reflect the new behavior.
Done
@OkonSamuel Yeah, it is a bit confusing. Some measures that could be implemented as per-observation losses are not, e.g., mae (which I guess essentially duplicates l1, which is a per-observation loss).
@ven-k For the record, measures is overdue for a thorough review and migration out of MLJ: https://github.com/alan-turing-institute/MLJBase.jl/issues/299.
@ven-k Now live (see above)
Thanks! @ablaom and @OkonSamuel I am following JuliaAI/StatisticalMeasures.jl#17 and #416. I am interested in StatisticalMeasures and a possible visualisation package.
log_cosh = sum(log.(cosh.(yhat .- y))) / n