chore: remove redundant words in comment (#20510)
Signed-off-by: withbest <[email protected].>
withbest authored Jan 6, 2025
1 parent 9177ec0 commit afe5708
Showing 4 changed files with 5 additions and 5 deletions.
4 changes: 2 additions & 2 deletions docs/source-pytorch/tuning/profiler_intermediate.rst
@@ -55,7 +55,7 @@ The profiler will generate an output like this:
Self CPU time total: 1.681ms
.. note::
- When using the PyTorch Profiler, wall clock time will not not be representative of the true wall clock time.
+ When using the PyTorch Profiler, wall clock time will not be representative of the true wall clock time.
This is due to forcing profiled operations to be measured synchronously, when many CUDA ops happen asynchronously.
It is recommended to use this Profiler to find bottlenecks/breakdowns, however for end to end wall clock time use
the ``SimpleProfiler``.
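For end-to-end wall clock numbers, the note above points to the ``SimpleProfiler``. A minimal sketch (not part of this commit) of wiring it into a ``Trainer``; the ``filename`` value is an arbitrary placeholder:

    from lightning.pytorch import Trainer
    from lightning.pytorch.profilers import SimpleProfiler

    # SimpleProfiler reports plain wall clock time per hook/action.
    profiler = SimpleProfiler(filename="perf_logs")
    trainer = Trainer(profiler=profiler, max_epochs=1)
    # Then call trainer.fit(...) with your own LightningModule and data as usual.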
@@ -142,7 +142,7 @@ This profiler will record ``training_step``, ``validation_step``, ``test_step``,
The output above shows the profiling for the action ``training_step``.

.. note::
- When using the PyTorch Profiler, wall clock time will not not be representative of the true wall clock time.
+ When using the PyTorch Profiler, wall clock time will not be representative of the true wall clock time.
This is due to forcing profiled operations to be measured synchronously, when many CUDA ops happen asynchronously.
It is recommended to use this Profiler to find bottlenecks/breakdowns, however for end to end wall clock time use
the ``SimpleProfiler``.
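A companion sketch for the per-operator breakdowns described in this hunk, using the ``PyTorchProfiler`` (keeping in mind the caveat above that its wall clock figures are not representative); the ``sort_by_key`` value is just one plausible choice:

    from lightning.pytorch import Trainer
    from lightning.pytorch.profilers import PyTorchProfiler

    # Records per-op timings for training_step, validation_step, test_step, predict_step.
    profiler = PyTorchProfiler(sort_by_key="cpu_time_total")
    trainer = Trainer(profiler=profiler, max_epochs=1)
    # Then call trainer.fit(...) with your own LightningModule and data as usual.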
2 changes: 1 addition & 1 deletion src/lightning/fabric/strategies/deepspeed.py
@@ -144,7 +144,7 @@ def __init__(
nvme_path: Filesystem path for NVMe device for optimizer/parameter state offloading.
optimizer_buffer_count: Number of buffers in buffer pool for optimizer state offloading
- when ``offload_optimizer_device`` is set to to ``nvme``.
+ when ``offload_optimizer_device`` is set to ``nvme``.
This should be at least the number of states maintained per parameter by the optimizer.
For example, Adam optimizer has 4 states (parameter, gradient, momentum, and variance).
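A minimal sketch (not part of this commit) of how the documented arguments combine when offloading optimizer state to NVMe with Fabric; the path and device count are placeholders:

    from lightning.fabric import Fabric
    from lightning.fabric.strategies import DeepSpeedStrategy

    strategy = DeepSpeedStrategy(
        stage=3,
        offload_optimizer=True,
        offload_optimizer_device="nvme",
        nvme_path="/local_nvme",   # placeholder filesystem path for the NVMe device
        optimizer_buffer_count=4,  # at least the optimizer's states per parameter (Adam: 4)
    )
    fabric = Fabric(strategy=strategy, accelerator="cuda", devices=8)
    # fabric.launch() and fabric.setup(...) follow as in any Fabric script.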
2 changes: 1 addition & 1 deletion src/lightning/pytorch/core/module.py
@@ -979,7 +979,7 @@ def configure_optimizers(self) -> OptimizerLRScheduler:
# `scheduler.step()`. 1 corresponds to updating the learning
# rate after every epoch/step.
"frequency": 1,
- # Metric to to monitor for schedulers like `ReduceLROnPlateau`
+ # Metric to monitor for schedulers like `ReduceLROnPlateau`
"monitor": "val_loss",
# If set to `True`, will enforce that the value specified 'monitor'
# is available when the scheduler is updated, thus stopping
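A minimal sketch of a ``configure_optimizers`` implementation that uses the dictionary fields shown in this snippet; the optimizer, learning rate, and logged metric name are placeholders:

    import torch
    from torch.optim.lr_scheduler import ReduceLROnPlateau

    def configure_optimizers(self):  # method on your LightningModule
        optimizer = torch.optim.Adam(self.parameters(), lr=1e-3)
        return {
            "optimizer": optimizer,
            "lr_scheduler": {
                "scheduler": ReduceLROnPlateau(optimizer, mode="min"),
                "frequency": 1,         # update the learning rate after every epoch/step
                "monitor": "val_loss",  # metric logged elsewhere via self.log("val_loss", ...)
            },
        }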
2 changes: 1 addition & 1 deletion src/lightning/pytorch/strategies/deepspeed.py
@@ -166,7 +166,7 @@ def __init__(
nvme_path: Filesystem path for NVMe device for optimizer/parameter state offloading.
optimizer_buffer_count: Number of buffers in buffer pool for optimizer state offloading
- when ``offload_optimizer_device`` is set to to ``nvme``.
+ when ``offload_optimizer_device`` is set to ``nvme``.
This should be at least the number of states maintained per parameter by the optimizer.
For example, Adam optimizer has 4 states (parameter, gradient, momentum, and variance).
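The Trainer-side ``DeepSpeedStrategy`` documents the same arguments; a short sketch mirroring the Fabric example above, again with placeholder values:

    from lightning.pytorch import Trainer
    from lightning.pytorch.strategies import DeepSpeedStrategy

    trainer = Trainer(
        accelerator="cuda",
        devices=8,
        strategy=DeepSpeedStrategy(
            stage=3,
            offload_optimizer=True,
            offload_optimizer_device="nvme",
            nvme_path="/local_nvme",   # placeholder
            optimizer_buffer_count=4,  # >= optimizer states per parameter
        ),
    )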
