Don't worry, deep learning isn't that hard! Follow these resources in order and you'll be good to go!
Don't be scared by these hard words. Let's see them one by one:
The optimization technique dictates how the loss function is optimized. In informal words: it's basically how your network's weights get tweaked during training. See Pros and Cons of different Techniques.
The loss function is a single function that measures the error of the whole network. The goal of the optimization technique is to minimize the value of this loss function. See Usage of different Loss Functions.
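To make these two ideas concrete, here is a toy sketch in plain Python (no deep learning library assumed): a one-weight linear model, a mean-squared-error loss function, and plain SGD as the optimization technique that tweaks the weight to minimize that loss.

```python
# Toy data: y = 2x, so the ideal weight is 2.0.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0   # the single weight of our toy network
lr = 0.1  # learning rate

def mse_loss(w):
    # Loss function: average squared error over the whole dataset.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

for _ in range(100):
    # Optimization technique (plain SGD): step against the gradient of the loss.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # prints 2.0 -- the optimizer drove the loss toward zero
```

Real frameworks do exactly this, just with millions of weights and automatic gradient computation.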
This is the number that controls the speed of learning. For example, 0 means no learning and 1 means full-paced learning. See Choosing the right learning rate.
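You can see the effect of the learning rate on the same kind of toy problem. This sketch (plain Python, loss `(w - 2)**2` chosen just for illustration) shows the three regimes: zero means no learning, a moderate value converges, and a too-large value makes the weight blow up.

```python
def step(w, lr):
    # One gradient step on the toy loss (w - 2)**2; its gradient is 2*(w - 2).
    return w - lr * 2 * (w - 2)

def train(lr, steps=50):
    w = 0.0
    for _ in range(steps):
        w = step(w, lr)
    return w

print(train(0.0))            # 0.0  -> lr = 0: the weight never moves
print(round(train(0.1), 3))  # 2.0  -> moderate lr: converges to the optimum
print(abs(train(1.5)) > 1e6) # True -> too-large lr: the updates diverge
```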
As you know, the weights of a network must be initialized before training, and the weight initialization technique dictates how they get initialized. See Pros and Cons of different Techniques.
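As a sketch of what such techniques look like, here are two well-known schemes, Glorot (Xavier) uniform and He normal, written in plain Python. The formulas (`limit = sqrt(6 / (fan_in + fan_out))` and `std = sqrt(2 / fan_in)`) are the standard ones; the function names here are just illustrative.

```python
import math
import random

def glorot_uniform(fan_in, fan_out):
    # Glorot/Xavier uniform: sample from [-limit, limit],
    # limit = sqrt(6 / (fan_in + fan_out)).
    limit = math.sqrt(6 / (fan_in + fan_out))
    return [[random.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

def he_normal(fan_in, fan_out):
    # He normal: std = sqrt(2 / fan_in), a common choice for ReLU layers.
    std = math.sqrt(2 / fan_in)
    return [[random.gauss(0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = glorot_uniform(128, 64)  # weight matrix for a 128 -> 64 dense layer
print(len(W), len(W[0]))     # prints: 128 64
```

The key point is that both scale the random values by the layer size, which keeps signals from exploding or vanishing as they pass through many layers.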
Epoch is the number that dictates how many times training should run over the training data. For example, epochs = 3 means: train on the training data 3 times.
See Pros and Cons of different Epochs.
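In code, an epoch is simply the outer loop of training. This toy sketch (plain Python, same one-weight model as before) runs 3 epochs and prints the loss after each full pass over the data, so you can watch it shrink.

```python
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w, lr, epochs = 0.0, 0.05, 3   # epochs = 3: pass over the data 3 times

for epoch in range(epochs):
    # One epoch = one full pass over the training data.
    for x, y in zip(xs, ys):
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    loss = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
    print(f"epoch {epoch + 1}: loss {loss:.4f}")
```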
Validation data is basically data held out from the training data. It can be used to see whether the model is overfitting or underfitting.
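A common way to get validation data is to hold out a fraction (often 10 to 20 percent) of the training set. A minimal sketch, assuming the data is already shuffled:

```python
data = list(range(100))       # stand-in for 100 labeled training examples

split = int(len(data) * 0.8)  # hold out the last 20% for validation
train_data, val_data = data[:split], data[split:]

print(len(train_data), len(val_data))  # prints: 80 20
```

During training you evaluate the loss on `val_data` too: if training loss keeps falling while validation loss starts rising, the model is overfitting.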
| Optimizer | Convergence Speed | Convergence Quality |
|---|---|---|
| SGD | # | ### |
| Adagrad | ### | # |
| RMSprop | ### | ## or ### |
| Adam | ### | ## or ### |
| Nadam | ### | ## or ### |
| AdaMax | ### | ## or ### |

(# = poor, ### = good)
You should always choose an optimizer according to the available data and your time constraints.
Actually, it's a little hard to select just the right learning rate for your project. Feel free to experiment with different learning rates and see which one gives the best results.
Our advice is to start with a smaller learning rate and gradually increase it until you find the best one.
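That search can be automated. A toy sketch of the idea (plain Python, using the illustrative loss `(w - 2)**2` again): train briefly with each candidate rate, starting small and increasing, then keep whichever rate ends with the lowest loss.

```python
def final_loss(lr, steps=30):
    # Train a one-parameter toy model briefly and report its final loss.
    w = 0.0
    for _ in range(steps):
        w -= lr * 2 * (w - 2)  # gradient of the toy loss (w - 2)**2
    return (w - 2) ** 2

# Start small and gradually increase the learning rate.
candidates = [1e-4, 1e-3, 1e-2, 1e-1]
best_lr = min(candidates, key=final_loss)
print(best_lr)  # prints: 0.1 -- the largest still-stable rate wins here
```

On a real model you would do the same thing with short training runs and the validation loss instead of this toy formula.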
Guessing the right number of epochs also requires experience, but you can start with fewer epochs and gradually increase them until you find the best number for your project.
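One common way to automate this is early stopping, a technique the advice above leads naturally into: keep training until the validation loss stops improving for a few epochs in a row. A sketch with a made-up validation-loss curve (the numbers below are hypothetical, just for illustration):

```python
# Hypothetical validation-loss curve: improves, then starts overfitting.
val_losses = [0.9, 0.6, 0.45, 0.40, 0.42, 0.47, 0.55]

patience = 2  # stop after 2 epochs in a row without improvement
best, wait, stop_epoch = float("inf"), 0, len(val_losses)
for epoch, loss in enumerate(val_losses, start=1):
    if loss < best:
        best, wait = loss, 0   # new best: reset the patience counter
    else:
        wait += 1
        if wait >= patience:   # ran out of patience: stop training here
            stop_epoch = epoch
            break
print(stop_epoch, best)  # prints: 6 0.4
```

Most frameworks ship this as a built-in callback, so in practice you rarely write the loop yourself.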