Commit 9721379
Fix (brevitas_examples/llm): remove unnecessary checkpointing (#1161)
Giuseppe5 authored Jan 15, 2025
1 parent a7efcba commit 9721379
Showing 1 changed file with 0 additions and 4 deletions.
src/brevitas_examples/llm/main.py: 0 additions & 4 deletions
@@ -517,10 +517,6 @@ def quantize_llm(args):
         apply_bias_correction(model, calibration_loader)
         print("Bias correction applied.")
 
-    if args.checkpoint_name is not None:
-        print(f"Saving checkpoint to {args.checkpoint_name}")
-        torch.save(model.state_dict(), args.checkpoint_name)
-
     if args.eval and not args.no_quantize:
         print("Model eval...")
         with torch.no_grad(), quant_inference_mode(model):
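For context, the deleted block was a plain state-dict save of the model. A minimal standalone sketch of the same pattern, assuming you hold a reference to the (quantized) model yourself; the `checkpoint.pt` path and the `nn.Linear` stand-in module are hypothetical, not part of the brevitas example script:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the quantized LLM; any nn.Module saves the same way.
model = nn.Linear(8, 8)

checkpoint_name = "checkpoint.pt"  # hypothetical path, analogous to args.checkpoint_name
print(f"Saving checkpoint to {checkpoint_name}")
torch.save(model.state_dict(), checkpoint_name)

# Restoring later: load the state dict back into an identically constructed module.
model.load_state_dict(torch.load(checkpoint_name))
```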
