Hey Clay! Great work on v1.
I've been running some experiments with the model.
Do you know if anyone on your end has been able to get float16 precision working on CPU or GPU?
I tried both using model.half() and feeding in float16 tensors, but I get an error indicating that an incompatible float32 array is being used somewhere in the model (I think it might be traced to the metadata part?). In any case, setting torch.set_default_dtype(torch.float16) didn't seem to do the trick. I will keep trying.
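To make the failure mode concrete, here is a self-contained toy reproduction. ToyEncoder is a made-up stand-in, not Clay's actual code; I'm only guessing that the metadata path does something similar with a hard-coded float32 tensor:

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Stand-in for the suspected failure mode: a module that builds a
    float32 tensor internally, regardless of the parameters' dtype."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, 16)

    def forward(self, x):
        # Explicit float32 here means this tensor follows neither
        # model.half() nor torch.set_default_dtype(torch.float16).
        pos = torch.arange(x.shape[1], dtype=torch.float32).unsqueeze(-1)
        # x (float16) + pos (float32) promotes the sum to float32, which
        # then mismatches the float16 Linear weights and raises a
        # Half/Float dtype error.
        return self.proj(x + pos)

model = ToyEncoder().half().eval()
x = torch.randn(2, 8, 16, dtype=torch.float16)

with torch.no_grad():
    model(x)  # RuntimeError: dtype mismatch inside the Linear
```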
I am not sure if it is possible to use float16 with Clay. That is probably a question @srmsoumya can answer best. But could you explain why float32 is not an option for you? Is it to optimize GPU usage?
Thank you! Mostly we were curious whether we could further optimize the model's runtime (e.g., by running inference with larger batches of images).
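For what it's worth, torch.autocast looks like a safer route than a hard model.half() cast: the weights and any internally created float32 tensors stay in float32, and only ops with half-precision kernels run in reduced precision. A minimal sketch, reusing the same made-up ToyEncoder stand-in from above rather than Clay's actual code:

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    # Same stand-in as above: internally creates a float32 tensor.
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, 16)

    def forward(self, x):
        pos = torch.arange(x.shape[1], dtype=torch.float32).unsqueeze(-1)
        return self.proj(x + pos)

device = "cuda" if torch.cuda.is_available() else "cpu"
# CPU autocast is best supported with bfloat16; float16 on GPU.
amp_dtype = torch.float16 if device == "cuda" else torch.bfloat16

model = ToyEncoder().eval().to(device)  # weights stay float32
x = torch.randn(32, 8, 16, device=device)  # larger batch, float32 input

# Autocast casts the inputs of eligible ops (e.g. the Linear's matmul)
# to reduced precision itself, so the hard-coded float32 tensor no
# longer causes a Half/Float mismatch.
with torch.no_grad(), torch.autocast(device_type=device, dtype=amp_dtype):
    out = model(x)

print(out.dtype)  # reduced precision for the autocast-eligible matmul
```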