Fixed typos for Tutorials and Guides docs (#3274)
henryhmko authored Dec 6, 2024
1 parent aa16d69 commit cb8b7c6
Showing 4 changed files with 6 additions and 6 deletions.
docs/source/basic_tutorials/migration.md (1 addition & 1 deletion)
@@ -145,7 +145,7 @@ Set the mixed precision type to use in the [`Accelerator`], and then use the [`~
```diff
+ accelerator = Accelerator(mixed_precision="fp16")
+ with accelerator.autocast():
- loss = complex_loss_function(outputs, target):
+ loss = complex_loss_function(outputs, target)
```

## Save and load
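For context, the corrected line sits inside Accelerate's autocast region. A minimal runnable sketch of the pattern, with a toy `torch.nn.Linear` model and `CrossEntropyLoss` standing in for the tutorial's `complex_loss_function` (fp16 autocast assumes a GPU; on CPU, construct `Accelerator()` without `mixed_precision`):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(mixed_precision="fp16")

model = torch.nn.Linear(8, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(16, 8), torch.randint(0, 2, (16,)))
dataloader = DataLoader(dataset, batch_size=4)
loss_fn = torch.nn.CrossEntropyLoss()  # stand-in for complex_loss_function

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, target in dataloader:
    outputs = model(inputs)
    # Only the loss computation runs under autocast here; note the
    # corrected line has no trailing colon
    with accelerator.autocast():
        loss = loss_fn(outputs, target)
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```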
docs/source/basic_tutorials/notebook.md (1 addition & 1 deletion)
@@ -327,7 +327,7 @@ def training_loop(mixed_precision="fp16", seed: int = 42, batch_size: int = 64):
# Build dataloaders
train_dataloader, eval_dataloader = get_dataloaders(batch_size)

- # Instantiate the model (you build the model here so that the seed also controls new weight initaliziations)
+ # Instantiate the model (you build the model here so that the seed also controls new weight initializations)
model = create_model("resnet50d", pretrained=True, num_classes=len(label_to_id))

# Freeze the base model
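The comment being fixed refers to seeding new weight initializations. A minimal sketch of why building the model after seeding matters, using `accelerate.utils.set_seed` and a stand-in `torch.nn` model rather than the tutorial's timm `resnet50d`:

```python
import torch
from accelerate.utils import set_seed

def build_model(seed: int = 42) -> torch.nn.Module:
    # Seeding *before* construction makes the randomly initialized
    # weights reproducible across runs and across processes
    set_seed(seed)
    return torch.nn.Sequential(
        torch.nn.Linear(32, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    )

# Two models built with the same seed start from identical weights
assert torch.equal(build_model(42)[0].weight, build_model(42)[0].weight)
```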
docs/source/usage_guides/quantization.md (1 addition & 1 deletion)
@@ -135,4 +135,4 @@ Note that you don’t need to pass `device_map` when loading the model for train

### Example demo - running GPT2 1.5b on a Google Colab

- Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GTP2 model. The GPT2-1.5B model checkpoint is in FP32 which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.
+ Check out the Google Colab [demo](https://colab.research.google.com/drive/1T1pOgewAWVpR9gKpaEWw4orOrzPFb3yM?usp=sharing) for running quantized models on a GPT2 model. The GPT2-1.5B model checkpoint is in FP32 which uses 6GB of memory. After quantization, it uses 1.6GB with 8-bit modules and 1.2GB with 4-bit modules.
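As a rough sketch of the memory numbers quoted above, the 1.5B checkpoint can be loaded in 8-bit through the transformers bitsandbytes integration (assuming `gpt2-xl` as the 1.5B checkpoint name; a CUDA GPU and the `bitsandbytes` package are required):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# gpt2-xl is the ~1.5B-parameter GPT2 checkpoint
model_8bit = AutoModelForCausalLM.from_pretrained(
    "gpt2-xl",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
# Should report roughly 1.6GB, versus ~6GB for the fp32 checkpoint
print(model_8bit.get_memory_footprint())
```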
docs/source/usage_guides/tracking.md (3 additions & 3 deletions)
@@ -15,7 +15,7 @@ rendered properly in your Markdown viewer.

# Experiment trackers

- There are a large number of experiment tracking API's available, however getting them all to work with in a multi-processing environment can oftentimes be complex.
+ There are a large number of experiment tracking APIs available, however getting them all to work in a multi-processing environment can oftentimes be complex.
Accelerate provides a general tracking API that can be used to log useful items during your script through [`Accelerator.log`]

## Integrated Trackers
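The [`Accelerator.log`] API mentioned in the context above is small enough to show end to end. A minimal sketch, assuming `wandb` is installed and `"my_project"` is a placeholder project name:

```python
from accelerate import Accelerator

# log_with selects one or more of the integrated trackers
accelerator = Accelerator(log_with="wandb")

# Starts the run and records the hyperparameters alongside it
accelerator.init_trackers("my_project", config={"learning_rate": 1e-3})

for step in range(10):
    loss = 1.0 / (step + 1)  # placeholder for a real training loss
    accelerator.log({"training_loss": loss}, step=step)

# Lets every tracker finish cleanly (required by wandb)
accelerator.end_training()
```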
@@ -75,7 +75,7 @@ my_model, my_optimizer, my_training_dataloader = accelerator.prepare(my_model, m
device = accelerator.device
my_model.to(device)

- for iteration in config["num_iterations"]:
+ for iteration in range(config["num_iterations"]):
for step, batch in enumerate(my_training_dataloader):
my_optimizer.zero_grad()
inputs, targets = batch
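This change is a genuine bug fix rather than a typo: `config["num_iterations"]` is an `int`, which is not iterable, so the old line raises `TypeError` at runtime; wrapping it in `range()` yields the intended sequence of iteration indices.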
@@ -184,7 +184,7 @@ wandb_tracker = accelerator.get_tracker("wandb")
From there you can interact with `wandb`'s `run` object like normal:

```python
- wandb_run.log_artifact(some_artifact_to_log)
+ wandb_tracker.log_artifact(some_artifact_to_log)
```
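The fix aligns the variable name with the `wandb_tracker` defined in the hunk header above; the object returned by `get_tracker` is Accelerate's process-aware wrapper, which forwards calls such as `log_artifact` to the underlying run on the main process. If the raw `wandb` run itself is needed, recent Accelerate versions also accept an `unwrap` flag (a sketch under that assumption):

```python
wandb_tracker = accelerator.get_tracker("wandb", unwrap=True)
# The unwrapped run is no longer process-aware, so guard manually
if accelerator.is_main_process:
    wandb_tracker.log_artifact(some_artifact_to_log)
```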

<Tip>
