Fix many spelling errors with tool codespell.
This patch is imported from the Debian package.
cdluminate committed Aug 13, 2016
1 parent 7bf5a99 commit b78d6d6
Showing 10 changed files with 15 additions and 15 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -22,7 +22,7 @@ restrictions:
[mailing-list](http://groups.google.com/forum/#!forum/torch7)).

* Please **do not** open issues regarding the code in a torch package
outside the core. For example dont open issues about the
outside the core. For example don't open issues about the
REPL in the nn issue tracker, use the trepl issue tracker for that.

<a name="bugs"></a>
2 changes: 1 addition & 1 deletion ClassSimplexCriterion.lua
@@ -64,7 +64,7 @@ function ClassSimplexCriterion:__init(nClasses)
end

-- handle target being both 1D tensor, and
-- target being 2D tensor (2D tensor means dont do anything)
-- target being 2D tensor (2D tensor means don't do anything)
local function transformTarget(self, target)
if torch.type(target) == 'number' then
self._target:resize(self.nClasses)
2 changes: 1 addition & 1 deletion Container.lua
@@ -22,7 +22,7 @@ end

-- Check if passing arguments through xpcall is supported in this Lua interpreter.
local _, XPCALL_ARGS = xpcall(function(x) return x ~= nil end, function() end, 1)
local TRACEBACK_WARNING = "WARNING: If you see a stack trace below, it doesn't point to the place where this error occured. Please use only the one above."
local TRACEBACK_WARNING = "WARNING: If you see a stack trace below, it doesn't point to the place where this error occurred. Please use only the one above."
-- module argument can be retrieved with moduleIndex, but code is cleaner when
-- it has to be specified anyway.
function Container:rethrowErrors(module, moduleIndex, funcName, ...)
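As an aside, here is a minimal sketch of the feature-detection idea used in the probe line above: checking whether this interpreter's `xpcall` forwards extra arguments (Lua 5.2+ or LuaJIT with 5.2 extensions) or silently drops them (plain Lua 5.1). The `protectedCall` helper and its fallback path are illustrative only, not part of this patch.

```lua
-- The probe function sees 1 only if xpcall forwards extra arguments.
local _, xpcallPassesArgs = xpcall(function(x) return x ~= nil end, function() end, 1)

local unpack = unpack or table.unpack

-- Hypothetical helper: run f(...) under xpcall on either interpreter flavour.
local function protectedCall(f, ...)
  if xpcallPassesArgs then
    return xpcall(f, debug.traceback, ...)
  end
  -- Older interpreters: capture the arguments in a closure instead.
  local args = {...}
  return xpcall(function() return f(unpack(args)) end, debug.traceback)
end

print(protectedCall(function(a, b) return a + b end, 2, 3))  -- prints: true  5
```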
2 changes: 1 addition & 1 deletion LookupTable.lua
@@ -125,7 +125,7 @@ function LookupTable:renorm(input)
if not self.maxNorm then
return
end
-- copy input into _input, so _input is continous.
-- copy input into _input, so _input is continuous.
-- The copied _input will be modified in the C code.
self._input:resize(input:size()):copy(input)
local row_idx = self._input
2 changes: 1 addition & 1 deletion SpatialDropout.lua
@@ -19,7 +19,7 @@ function SpatialDropout:updateOutput(input)
end
self.noise:bernoulli(1-self.p)
-- We expand the random dropouts to the entire feature map because the
-- features are likely correlated accross the map and so the dropout
-- features are likely correlated across the map and so the dropout
-- should also be correlated.
self.output:cmul(torch.expandAs(self.noise, input))
else
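For context, a standalone sketch of the technique the comment above describes: one Bernoulli draw per feature map, expanded across the spatial dimensions so that whole maps are kept or dropped together. The tensor shapes below are made up for illustration and are not part of this patch.

```lua
require 'torch'

local p = 0.5                                   -- dropout probability
local input = torch.rand(4, 8, 16, 16)          -- batch x featureMaps x H x W
-- One Bernoulli sample per feature map, kept with probability 1-p.
local noise = torch.Tensor(4, 8, 1, 1):bernoulli(1 - p)
-- Expanding the singleton spatial dims applies each sample to its whole map.
local output = torch.cmul(input, torch.expandAs(noise, input))
```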
4 changes: 2 additions & 2 deletions Sum.lua
@@ -36,8 +36,8 @@ end

function Sum:updateGradInput(input, gradOutput)
local dimension = self:_getPositiveDimension(input)
-- zero-strides dont work with MKL/BLAS, so
-- dont set self.gradInput to zero-stride tensor.
-- zero-strides don't work with MKL/BLAS, so
-- don't set self.gradInput to zero-stride tensor.
-- Instead, do a deepcopy
local size = input:size()
size[dimension] = 1
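A tiny sketch, with made-up shapes, of why the deep copy mentioned above is needed: `expand` produces a zero-stride view, which MKL/BLAS-backed operations cannot consume, so a dense copy is materialised instead. This is an illustration of the comment, not code from the patch.

```lua
require 'torch'

local input = torch.rand(4, 5)
local summed = input:sum(2)                          -- 4x1, one value per row
-- A zero-stride view repeats that column along dimension 2 without copying.
local expanded = summed:expand(input:size())         -- stride 0 along dim 2
-- Materialise a dense copy before handing it to BLAS-backed operations.
local gradInput = torch.Tensor():resizeAs(input):copy(expanded)
```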
2 changes: 1 addition & 1 deletion VolumetricDropout.lua
@@ -19,7 +19,7 @@ function VolumetricDropout:updateOutput(input)
end
self.noise:bernoulli(1-self.p)
-- We expand the random dropouts to the entire feature map because the
-- features are likely correlated accross the map and so the dropout
-- features are likely correlated across the map and so the dropout
-- should also be correlated.
self.output:cmul(torch.expandAs(self.noise, input))
else
6 changes: 3 additions & 3 deletions doc/simple.md
@@ -598,7 +598,7 @@ end
module = nn.Copy(inputType, outputType, [forceCopy, dontCast])
```

This layer copies the input to output with type casting from `inputType` to `outputType`. Unless `forceCopy` is true, when the first two arguments are the same, the input isn't copied, only transfered as the output. The default `forceCopy` is false.
This layer copies the input to output with type casting from `inputType` to `outputType`. Unless `forceCopy` is true, when the first two arguments are the same, the input isn't copied, only transferred as the output. The default `forceCopy` is false.
When `dontCast` is true, a call to `nn.Copy:type(type)` will not cast the module's `output` and `gradInput` Tensors to the new type. The default is false.
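A hypothetical usage sketch of the casting behaviour described above (not part of this diff; assumes the `nn` package is installed):

```lua
require 'nn'

-- Cast double-precision input to single precision on the way through.
local caster = nn.Copy('torch.DoubleTensor', 'torch.FloatTensor')
local input = torch.rand(3, 4)              -- DoubleTensor by default
local output = caster:forward(input)
print(torch.type(output))                   -- torch.FloatTensor
```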

<a name="nn.Narrow"></a>
@@ -1432,10 +1432,10 @@ gpustr = torch.serialize(gpu)
```

The module is located in the __nn__ package instead of __cunn__ as this allows
it to be used in CPU-only enviroments, which are common for production models.
it to be used in CPU-only environments, which are common for production models.

The module supports nested table `input` and `gradOutput` tensors originating from multiple devices.
Each nested tensor in the returned `gradInput` will be transfered to the device its commensurate tensor in the `input`.
Each nested tensor in the returned `gradInput` will be transferred to the device its commensurate tensor in the `input`.

The intended use-case is not for model-parallelism where the models are executed in parallel on multiple devices, but
for sequential models where a single GPU doesn't have enough memory.
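A hypothetical sketch of that sequential, memory-limited use-case (made-up layer sizes and device ids; assumes `cunn` and at least two CUDA devices are available):

```lua
require 'nn'

-- Each half of the network is pinned to its own device.
local net = nn.Sequential()
   :add(nn.GPU(nn.Linear(10000, 10000), 1))
   :add(nn.GPU(nn.Linear(10000, 10000), 2))

if pcall(require, 'cunn') then
   net:cuda()                                 -- move each half to its device
   local input = torch.CudaTensor(16, 10000):uniform()
   local output = net:forward(input)          -- forward pass spans both GPUs
end
```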
4 changes: 2 additions & 2 deletions lib/THNN/generic/SpatialUpSamplingNearest.c
@@ -14,7 +14,7 @@ void THNN_(SpatialUpSamplingNearest_updateOutput)(
int yDim = input->nDimension-1;

// dims
int idim = input->nDimension; // Gauranteed to be between 3 and 5
int idim = input->nDimension; // Guaranteed to be between 3 and 5
int osz0 = output->size[0];
int osz1 = output->size[1];
int osz2 = output->size[2];
@@ -80,7 +80,7 @@ void THNN_(SpatialUpSamplingNearest_updateGradInput)(
int yDim = gradInput->nDimension-1;

// dims
int idim = gradInput->nDimension; // Gauranteed to be between 3 and 5
int idim = gradInput->nDimension; // Guaranteed to be between 3 and 5
int isz0 = gradInput->size[0];
int isz1 = gradInput->size[1];
int isz2 = gradInput->size[2];
4 changes: 2 additions & 2 deletions test.lua
@@ -6306,9 +6306,9 @@ function nntest.addSingletonDimension()
local resultArg = torch.Tensor()
local resultR = nn.utils.addSingletonDimension(resultArg, tensor, dim)
mytester:eq(resultArg:size():totable(), resultSize,
'wrong content for random singleton dimention '..
'wrong content for random singleton dimension '..
'when the result is passed as argument')
mytester:eq(resultArg, result, 'wrong content for random singleton dimention '..
mytester:eq(resultArg, result, 'wrong content for random singleton dimension '..
'when the result is passed as argument')

mytester:eq(resultR == resultArg, true,
