
Add multi-precision support #23

Open
d-monnet wants to merge 23 commits into main

Conversation

d-monnet

No description provided.

@d-monnet d-monnet requested a review from farhadrclass October 24, 2023 19:09
src/utils.jl

Tests FluxNLPModel loader and NN device consistency.
"""
function test_devices_consistency(chain_ANN::Vector{T}, data_train, data_test) where {T <: Chain}
Collaborator

Why do we need this? The loader and the training can run on different machines or with different types; we just need to adjust them at run time.
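The run-time adjustment the reviewer describes could look roughly like the following, a minimal pure-Julia sketch (no Flux dependency; `adjust_precision` is a hypothetical helper name, not part of the package): rather than requiring the loader and the model to agree on an element type up front, convert the parameter arrays to the target precision when training starts.

```julia
# Hedged sketch: convert a collection of parameter arrays to a target
# floating-point precision at run time, instead of enforcing that the
# data loader and the model were built with the same element type.
adjust_precision(ps, ::Type{T}) where {T <: AbstractFloat} = map(p -> T.(p), ps)

ps64 = [rand(Float64, 3), rand(Float64, 2, 2)]  # e.g. parameters loaded in Float64
ps16 = adjust_precision(ps64, Float16)          # cast down for low-precision training

eltype(ps16[1])  # Float16
```

Broadcasting `T.(p)` copies each array into the new element type, so the original Float64 parameters are left untouched.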

@@ -0,0 +1,78 @@
# test example taken from Flux quickstart guide (https://fluxml.ai/Flux.jl/stable/models/quickstart/)
Collaborator

I suggest adding a couple more tests that run one or two training iterations and update the weights.
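The suggested test could be sketched like this, assuming a minimal stand-in problem (a tiny least-squares objective with a hand-written gradient step, not the package's actual training loop): run two update iterations and check that the weights moved.

```julia
# Hedged sketch of the suggested test: two gradient-descent steps on a
# tiny least-squares problem, then check the weights actually changed.
w = [0.0, 0.0]
X = [1.0 2.0; 3.0 4.0]
y = [1.0, 2.0]
for _ in 1:2
    g = 2 .* (X' * (X * w .- y))  # gradient of ‖Xw − y‖²
    w .-= 0.01 .* g               # plain SGD update
end
@assert w != [0.0, 0.0]  # weights were updated by training
```

In the real test suite the same idea would apply with the model's own optimizer and loss in place of the hand-written step.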

Collaborator

I think we can add this to my PR; I will have it ready soon. The tests look good, though.

@farhadrclass
Collaborator

PR #25 should supersede this PR; however, I want to keep this one as a branch in case it outperforms #25 on larger or smaller models on GPU (to be tested).

5 participants