Implement hessian functions with different minibatch #36

Open
lrsantos11 opened this issue Nov 5, 2024 · 0 comments
As I mentioned in Zulip,

I would like to use FluxNLPModels.jl, and I need second-order information. According to Flux.jl, one can apply gradient a second time to compute the Hessian.

In fact, I would like the option of using a different minibatch to compute the Hessian than the one used for the gradient.

I saw that in FluxNLPModels.jl the NLPModels functions grad and objgrad are implemented.

  • Should we implement the Hessian-related functions as well?
  • If so, is using Zygote (as Flux.jl suggests) preferable?
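To illustrate the idea, here is a minimal sketch (not FluxNLPModels.jl API — the loss and batches are made up for illustration) of evaluating the gradient on one minibatch and the Hessian on a different one, using Zygote:

```julia
using Zygote

# Illustrative least-squares-style loss over a minibatch given as a matrix.
loss(w, batch) = sum(abs2, batch * w)

w = [1.0, 2.0]
grad_batch = [1.0 0.0; 0.0 1.0]   # minibatch used for the gradient
hess_batch = [2.0 0.0; 0.0 3.0]   # a different minibatch for the Hessian

# First-order information on one batch...
g = Zygote.gradient(w -> loss(w, grad_batch), w)[1]

# ...and second-order information on another batch.
# Zygote.hessian differentiates the gradient a second time, as Flux.jl suggests.
H = Zygote.hessian(w -> loss(w, hess_batch), w)
```

An NLPModels-style `hess` / `hprod` for FluxNLPModels.jl could follow the same pattern, taking the minibatch for the second-order evaluation as a separate argument (for large models, a Hessian-vector product via forward-over-reverse would likely be preferable to forming the full Hessian).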