
Fix evaluate() printed values being incorrect by default #20799

Closed
wants to merge 1 commit

Conversation

@Carath Carath commented Jan 23, 2025

Temporary fix for most users affected by this issue, which was not present in previous versions of Keras.

This changes the default verbosity level of the model's evaluate() method to 2, so that it no longer prints incorrect values for e.g. the loss and accuracy (values mismatching the correct ones printed during training, or computed "by hand").

This does not fix the underlying problem of why verbose=1 makes evaluate() print incorrect values while still returning correct ones when return_dict=True is passed.
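
For anyone hit by this in the meantime, here is a minimal sketch of the workaround described above (the toy model and data are hypothetical, purely for illustration): call evaluate() with verbose=2, or use the values it returns with return_dict=True, instead of relying on the final progress-bar line printed under verbose=1.

```python
import numpy as np
import keras

# Hypothetical toy model and data, for illustration only.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(256, 4).astype("float32")
y = (np.random.rand(256) > 0.5).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

# verbose=2 skips the per-batch progress bar whose final line is the one
# reported as incorrect; return_dict=True gives the correct aggregated values.
results = model.evaluate(x, y, batch_size=32, verbose=2, return_dict=True)
print(results)  # e.g. {'loss': ..., 'accuracy': ...}
```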


google-cla bot commented Jan 23, 2025

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.


codecov-commenter commented Jan 23, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 82.04%. Comparing base (90568da) to head (46355bb).

Additional details and impacted files
@@            Coverage Diff             @@
##           master   #20799      +/-   ##
==========================================
+ Coverage   82.01%   82.04%   +0.03%     
==========================================
  Files         557      557              
  Lines       52016    52016              
  Branches     8037     8037              
==========================================
+ Hits        42659    42675      +16     
+ Misses       7403     7387      -16     
  Partials     1954     1954              
Flag Coverage Δ
keras 81.86% <ø> (+0.03%) ⬆️
keras-jax 64.25% <ø> (+0.03%) ⬆️
keras-numpy 58.97% <ø> (+0.01%) ⬆️
keras-openvino 29.89% <ø> (ø)
keras-tensorflow 64.82% <ø> (+0.03%) ⬆️
keras-torch 64.20% <ø> (+0.03%) ⬆️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@fchollet
Collaborator

If you want to investigate the reason for the discrepancy between the final value printed by evaluate() in the progress bar and the values returned by evaluate() (which are the correct ones), you are welcome to do so. I'm thinking it might be the progress bar callback being delayed by one batch, or something like that.

However, disabling logging is not the way to do it.
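
For reference, a rough, self-contained sketch of one way to start investigating (again with a hypothetical toy model and data): run evaluate() with verbose=1 and compare its final progress-bar line against the returned values and a loss computed "by hand" on the same data.

```python
import numpy as np
import keras

# Hypothetical toy setup, for illustration only.
model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

x = np.random.rand(256, 4).astype("float32")
y = (np.random.rand(256) > 0.5).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)

# The last line of this progress bar is the value reported as incorrect.
returned = model.evaluate(x, y, batch_size=32, verbose=1, return_dict=True)

# Binary cross-entropy computed "by hand" with numpy; it should be close to
# returned["loss"], whereas the progress-bar value reportedly differs.
preds = model.predict(x, verbose=0)[:, 0]
manual_loss = -np.mean(y * np.log(preds + 1e-7) + (1.0 - y) * np.log(1.0 - preds + 1e-7))

print("returned:", returned)
print("manual loss:", float(manual_loss))
```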

@fchollet fchollet closed this Jan 26, 2025