Add Mean Absolute Percentage Error #248
Conversation
Hello @pranjaldatta! Thanks for updating this PR. There are currently no PEP 8 issues detected in this Pull Request. Cheers! 🍻 Comment last updated at 2021-06-09 23:23:39 UTC
Codecov Report
@@ Coverage Diff @@
## master #248 +/- ##
==========================================
+ Coverage 96.75% 96.78% +0.03%
==========================================
Files 92 94 +2
Lines 3053 3084 +31
==========================================
+ Hits 2954 2985 +31
Misses 99 99
Hey @SkafteNicki, I was working on #235 and have just added the source code, but I started running into an issue with pytest that I couldn't figure out. (Please ignore the docs/formatting test failures; I will fix them before the final PR!)
Issue
When I run the tests, I get:
================================= test session starts =================================
platform linux -- Python 3.7.1, pytest-6.1.2, py-1.9.0, pluggy-0.13.1 -- /home/pranjal/miniconda3/envs/generalPT/bin/python
cachedir: .pytest_cache
rootdir: /home/pranjal/OSS/metrics, configfile: setup.cfg
collected 68 items
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredError-mean_squared_error-mean_squared_error-preds0-target0-_single_target_sk_metric] PASSED [ 1%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredError-mean_squared_error-mean_squared_error-preds1-target1-_multi_target_sk_metric] PASSED [ 2%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanAbsoluteError-mean_absolute_error-mean_absolute_error-preds0-target0-_single_target_sk_metric] PASSED [ 4%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanAbsoluteError-mean_absolute_error-mean_absolute_error-preds1-target1-_multi_target_sk_metric] PASSED [ 5%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredLogError-mean_squared_log_error-mean_squared_log_error-preds0-target0-_single_target_sk_metric] PASSED [ 7%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanSquaredLogError-mean_squared_log_error-mean_squared_log_error-preds1-target1-_multi_target_sk_metric] PASSED [ 8%]
tests/regression/test_mean_error.py::TestMeanError::test_mean_error_class[True-True-MeanAbsolutePercentageError-mean_absolute_percentage_error-mean_absolute_percentage_error-preds0-target0-_single_target_sk_metric]
It stalls at the mean_absolute_percentage_error test, and I have to KeyboardInterrupt to exit the stall. I have no clue why this is happening. I would really appreciate any help or pointers on how to solve this issue!
Implementation verification
You can verify that the metric is implemented correctly and gives results comparable to scikit-learn by running the following script from the project root:
import torch
from sklearn.metrics import mean_absolute_percentage_error as sk_mape
# Import the function itself; importing the module and calling it would raise a TypeError.
from torchmetrics.functional.regression.mean_absolute_percentage_error import mean_absolute_percentage_error as mape
from torchmetrics.regression import MeanAbsolutePercentageError as MAPE

y_true = torch.tensor([1.0, 10.0, 1e6])
y_pred = torch.tensor([0.9, 15.0, 1.2e6])
print(f"pred: {y_pred}")
print(f"Y: {y_true}")

reg_mape = MAPE()
print("-------Functional Test-------")
print("Func_ans: ", mape(y_pred, y_true))
print("Module_ans: ", reg_mape(y_pred, y_true))
print("sk_ans: ", sk_mape(y_true, y_pred))
print("--------------------")
Hi @pranjaldatta,
The PR is looking really good. I am going to investigate the failing test in the next few days, but here are some comments. In addition, please add the following:
- References in the docs
- An entry in the changelog (a minimal sketch follows this list)
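For contributors unfamiliar with the layout, a changelog entry for a new metric typically looks like the following minimal sketch; the section heading and the link format are assumptions based on common changelog conventions, not taken from this thread:
### Added
- Added `MeanAbsolutePercentageError` metric and `mean_absolute_percentage_error` functional ([#248](https://github.com/PyTorchLightning/metrics/pull/248))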
Sure, I'll resolve them right away!
Hi @pranjaldatta,
The PR looks done to me and is ready to be merged, but if you want to work more on it, you can always think about adding the …
Hey @SkafteNicki, thank you so much for your help! It works fine now! Regarding …
@pranjaldatta then let's not implement …
Head branch was pushed to by a user without write access
Hey @Borda, I have added the deprecation warning, I hope it's okay!
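For context, the usual Python pattern for this kind of deprecation warning is sketched below; the shim signature and the message text are my own illustration, not the actual code from this PR:
import warnings

from torchmetrics.functional import mean_absolute_percentage_error as _mape

def mean_absolute_percentage_error(preds, target):
    # Hypothetical shim kept at the old location: warn, then delegate to the new implementation.
    warnings.warn(
        "Importing `mean_absolute_percentage_error` from this location is deprecated"
        " and will be removed in a future release; use `torchmetrics.functional` instead.",
        DeprecationWarning,
    )
    return _mape(preds, target)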
@pranjaldatta thank you for your patience! 🐰
Always a pleasure to contribute to this community in whatever small way possible!
What does this PR do?
Fixes #235.
Before submitting
PR review
Anyone in the community is free to review the PR once the tests have passed.
If we didn't discuss your PR in GitHub issues, there's a high chance it will not be merged.
Did you have fun?
Make sure you had fun coding 🙃