Commit
fix: cast torch model output dtype to match input
dodamih authored and supersergiy committed Jan 15, 2025
1 parent e2939d6 commit c1cc05f
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion zetta_utils/convnet/utils.py
@@ -123,5 +123,5 @@ def load_and_run_model(path, data_in, device=None, use_cache=True):  # pragma: n
     with torch.inference_mode():  # uses less memory when used with JITs
         with torch.autocast(device_type=autocast_device):
             output = model(tensor_ops.convert.to_torch(data_in, device=device))
-    output = tensor_ops.convert.astype(output, reference=data_in)
+    output = tensor_ops.convert.astype(output, reference=data_in, cast=True)
     return output
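Context for the fix: under `torch.autocast`, a model can emit a lower-precision output (e.g. float16 or bfloat16) even when the input was float32, so converting the output back with the input as a dtype reference requires an actual cast, not just a dtype check. The sketch below illustrates the idea with a hypothetical NumPy helper; the names `astype_like` and its `cast` flag are assumptions for illustration, and zetta_utils' real `tensor_ops.convert.astype` may behave differently.

```python
import numpy as np


def astype_like(output, reference, cast=False):
    # Hypothetical sketch: return `output` with the dtype of `reference`.
    # Without cast=True, a dtype mismatch is treated as an error, which is
    # what autocast-produced float16 output would trigger.
    if output.dtype == reference.dtype:
        return output
    if not cast:
        raise TypeError(
            f"dtype mismatch: {output.dtype} vs {reference.dtype}; pass cast=True"
        )
    return output.astype(reference.dtype)


data_in = np.zeros((2, 2), dtype=np.float32)
output = data_in.astype(np.float16)   # stand-in for an autocast-downcast model output
result = astype_like(output, data_in, cast=True)
assert result.dtype == np.float32
```

With `cast=True`, the output is coerced back to the caller's input dtype instead of raising, matching the intent of the one-line change above.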
