Fix typos
taehoonlee committed Jun 14, 2017
1 parent dd1bc6b commit ed668cd
Showing 16 changed files with 20 additions and 20 deletions.
6 changes: 3 additions & 3 deletions Documentation/Documents/Configuration Files.md
Original file line number Diff line number Diff line change
@@ -685,7 +685,7 @@ The following parameters can be used to customize the behavior of the reader:

- **randomize**\[{Auto}, None, \#\] the randomization range (number of records to randomize across) for randomizing the input. This needs to be an integral factor of the epochSize and an integral multiple of minibatch size. Setting it to Auto will let CNTK find something that works.

- **minibatchMode**\[{Partial},Full\] the mode for minibatchs when the end of the epoch is reached. In partial minibatch mode, if the remaining records are less than a full minibatch, only those read will be returned (a partial minibatch). I Full minibatch mode, no partial minibatches will be returned, instead those records will be skipped.
- **minibatchMode**\[{Partial},Full\] the mode for minibatches when the end of the epoch is reached. In partial minibatch mode, if the remaining records are less than a full minibatch, only those read will be returned (a partial minibatch). In Full minibatch mode, no partial minibatches will be returned; instead, those records will be skipped.

Each of the data record sub-sections has the following parameters:

@@ -717,7 +717,7 @@ For training and evaluation the following need to be defined:

- **framemode**\[{true}, false\] whether the reader reads frames or whole utterances

- **minibatchMode**\[{Partial},Full\] the mode for minibatchs when the end of the epoch is reached. In partial minibatch mode, if the remaining records are less than a full minibatch, only those read will be returned (a partial minibatch). I Full minibatch mode, no partial minibatches will be returned, instead those records will be skipped.
- **minibatchMode**\[{Partial},Full\] the mode for minibatches when the end of the epoch is reached. In partial minibatch mode, if the remaining records are less than a full minibatch, only those read will be returned (a partial minibatch). In Full minibatch mode, no partial minibatches will be returned; instead, those records will be skipped.

- **readAhead**\[true,{false}\] have the reader read ahead in another thread. NOTE: this feature has some known issues.

@@ -893,7 +893,7 @@ The parameters used for the binary reader are quite simple as most of the requir

The following parameters can be used to customize the behavior of the reader:

- **minibatchMode**\[{Partial},Full\] the mode for minibatchs when the end of the epoch is reached. In partial minibatch mode, if the remaining records are less than a full minibatch, only those read will be returned (a partial minibatch). I Full minibatch mode, no partial minibatches will be returned, instead those records will be skipped.
- **minibatchMode**\[{Partial},Full\] the mode for minibatches when the end of the epoch is reached. In partial minibatch mode, if the remaining records are less than a full minibatch, only those read will be returned (a partial minibatch). In Full minibatch mode, no partial minibatches will be returned; instead, those records will be skipped.

- **file** – array of file paths to load. Each file may contain one or more datasets. The dataset names used when the file was created will be used when the file is read.

2 changes: 1 addition & 1 deletion Source/CNTKv2LibraryDll/DistributedLearnerBase.cpp
@@ -37,7 +37,7 @@ namespace CNTK

void DistributedLearnerBase::PrepaireZeroGradients(std::unordered_map<Parameter, NDArrayViewPtr>& gradientValues, MinibatchInfo& info)
{
// Need to intialize gradients to 0 in case when it is an empty minibatch.
// Need to initialize gradients to 0 in case when it is an empty minibatch.
for (auto& g : gradientValues)
{
auto weights = g.first.Value();
4 changes: 2 additions & 2 deletions Source/Common/DataReader.cpp
@@ -240,7 +240,7 @@ size_t DataReader::GetCurrentSamplePosition()
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
bool DataReader::GetMinibatch(StreamMinibatchInputs& matrices)
{
/**
@@ -273,7 +273,7 @@ bool DataReader::GetMinibatch(StreamMinibatchInputs& matrices)
// latticeinput - lattice for each utterance in this minibatch
// uids - labels stored in a size_t vector instead of an ElemType matrix
// boundary - phone boundaries
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
bool DataReader::GetMinibatch4SE(std::vector<shared_ptr<const msra::dbn::latticepair>>& latticeinput, vector<size_t>& uids, vector<size_t>& boundaries, vector<size_t>& extrauttmap)
{
bool bRet = true;
2 changes: 1 addition & 1 deletion Source/Common/Include/ASGDHelper.h
@@ -14,7 +14,7 @@ namespace Microsoft { namespace MSR { namespace CNTK {
// Provides options for DataParallelASGD training, so that every node
// can adjust the learning rate every minibatch during the first N epochs.
// -----------------------------------------------------------------------
// TODO: We can removed these options once we can adjust learning rate at minibatchs level
// TODO: We can remove these options once we can adjust the learning rate at the minibatch level
enum class AdjustLearningRateAtBeginning : int
{
None = 0, // default, don't adjust learning rate
2 changes: 1 addition & 1 deletion Source/Common/Include/DataReader.h
@@ -442,7 +442,7 @@ class DataReader : public IDataReader, protected Plugin, public ScriptableObject
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
virtual bool GetMinibatch(StreamMinibatchInputs& matrices);
virtual bool GetMinibatch4SE(std::vector<shared_ptr<const msra::dbn::latticepair>>& latticeinput, vector<size_t>& uids, vector<size_t>& boundaries, vector<size_t>& extrauttmap);
virtual bool GetHmmData(msra::asr::simplesenonehmm* hmm);
2 changes: 1 addition & 1 deletion Source/Common/Include/latticestorage.h
@@ -13,7 +13,7 @@
#include <stdint.h>
#include <cstdio>

#undef INITIAL_STRANGE // [v-hansu] intialize structs to strange values
#undef INITIAL_STRANGE // [v-hansu] initialize structs to strange values
#define PARALLEL_SIL // [v-hansu] process sil on CUDA, used in other files, please search this
#define LOGZERO -1e30f

2 changes: 1 addition & 1 deletion Source/ComputationNetworkLib/RecurrentNodes.cpp
@@ -235,7 +235,7 @@ template<class ElemType, int direction>
// - matrix column indices into the initial state
// - if initial-state sequence has >1 steps, then index from back
// - if 1 step, then broadcast that to all
// - or -1 for non-boundary entires
// - or -1 for non-boundary entries

// our own output MB layout
let& outMBLayout = GetMBLayout();
2 changes: 1 addition & 1 deletion Source/EvalDll/EvalReader.h
@@ -112,7 +112,7 @@ class EvalReader : public DataReaderBase
// TryGetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
virtual bool TryGetMinibatch(StreamMinibatchInputs& matrices)
{
// how many records are we reading this time
2 changes: 1 addition & 1 deletion Source/Readers/BinaryReader/BinaryReader.cpp
@@ -249,7 +249,7 @@ bool BinaryReader<ElemType>::CheckEndDataset(size_t actualmbsize)
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
template <class ElemType>
bool BinaryReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
{
2 changes: 1 addition & 1 deletion Source/Readers/DSSMReader/DSSMReader.cpp
@@ -318,7 +318,7 @@ void DSSMReader<ElemType>::StoreLabel(ElemType& labelStore, const LabelType& lab
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
template <class ElemType>
bool DSSMReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
{
2 changes: 1 addition & 1 deletion Source/Readers/HTKMLFReader/HTKMLFReader.cpp
@@ -933,7 +933,7 @@ bool HTKMLFReader<ElemType>::GetHmmData(msra::asr::simplesenonehmm* hmm)
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
// TODO: Why do we have two read functions? Is one not a superset of the other?
template <class ElemType>
bool HTKMLFReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
2 changes: 1 addition & 1 deletion Source/Readers/Kaldi2Reader/HTKMLFReader.cpp
@@ -844,7 +844,7 @@ void HTKMLFReader<ElemType>::StartMinibatchLoopToWrite(size_t mbSize, size_t /*e
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
template <class ElemType>
bool HTKMLFReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
{
2 changes: 1 addition & 1 deletion Source/Readers/SparsePCReader/SparsePCReader.cpp
@@ -205,7 +205,7 @@ void SparsePCReader<ElemType>::StartMinibatchLoop(size_t mbSize, size_t /*epoch*
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
template <class ElemType>
bool SparsePCReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
{
4 changes: 2 additions & 2 deletions Source/Readers/UCIFastReader/UCIFastReader.cpp
@@ -763,7 +763,7 @@ void UCIFastReader<ElemType>::StoreLabel(ElemType& labelStore, const LabelType&
// GetMinibatch - Get the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
template <class ElemType>
bool UCIFastReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
{
@@ -825,7 +825,7 @@ bool UCIFastReader<ElemType>::TryGetMinibatch(StreamMinibatchInputs& matrices)
// GetMinibatchImpl - The actual implementation of getting the next minibatch (features and labels)
// matrices - [in] a map with named matrix types (i.e. 'features', 'labels') mapped to the corresponding matrix,
// [out] each matrix resized if necessary containing data.
// returns - true if there are more minibatches, false if no more minibatchs remain
// returns - true if there are more minibatches, false if no more minibatches remain
template <class ElemType>
bool UCIFastReader<ElemType>::GetMinibatchImpl(StreamMinibatchInputs& matrices)
{
2 changes: 1 addition & 1 deletion Source/SGDLib/SGD.cpp
@@ -3262,7 +3262,7 @@ SGDParams::SGDParams(const ConfigRecordType& configSGD, size_t sizeofElemType)
#endif
m_isAsyncBufferEnabled = configDataParallelASGD(L"UsePipeline", false);
m_isSimulateMA = configDataParallelASGD(L"SimModelAverage", false); // using parameter server-based version of ModelAveragingSGD
if (configDataParallelASGD.Exists(L"AdjustLearningRateAtBeginning")) // adjust learning rate per m_adjustNumInBatch minibatchs until to original one,
if (configDataParallelASGD.Exists(L"AdjustLearningRateAtBeginning")) // adjust learning rate per m_adjustNumInBatch minibatches until it reaches the original one,
// this option can be used to tackle the instability of DataParallelASGD
{
const ConfigRecordType & configAdjustLearningRateAtBeginning(configDataParallelASGD(L"AdjustLearningRateAtBeginning", ConfigRecordType::Record()));
2 changes: 1 addition & 1 deletion Tutorials/CNTK_103C_MNIST_MultiLayerPerceptron.ipynb
@@ -67,7 +67,7 @@
"- Data reading: We will use the CNTK Text reader \n",
"- Data preprocessing: Covered in part A (suggested extension section). \n",
"\n",
"There is a high overlap with CNTK 102. Though this tutorial we adapt the same model to work on MNIST data with 10 classses instead of the 2 classes we used in CNTK 102.\n"
"There is a high overlap with CNTK 102. In this tutorial, we adapt the same model to work on MNIST data with 10 classes instead of the 2 classes we used in CNTK 102.\n"
]
},
{
