Commit

Add resources res
Atcold committed Sep 12, 2016
1 parent 45f1dda commit 8c777c1
Showing 10 changed files with 158 additions and 7 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -6,35 +6,35 @@ This aims to be a growing collection of introductory video tutorials on the [*T
*Torch* is one of the fastest and most flexible frameworks in existence for Machine and Deep Learning.
And yes, that flexibility used to come with an intimidating learning curve... until now.

-Enjoy the view of these videos and code transcripts (which will be added soon).
+Enjoy these videos, transcripts, and quizzes (which you can find in the [`res`](res) folder, together with some notes about how I made these videos).


## 1 - Get the basics straight

-### 1.0 - An overview on *Lua*
+### 1.0 - An overview of *Lua* ([slides](res/1.0/slides.pdf))

[![Practical 1.0 - Lua](http://img.youtube.com/vi/QLYLOPeI92g/0.jpg)](http://www.youtube.com/watch?v=QLYLOPeI92g)

-### 1.1 - An overview on *Torch*’s `Tensor`s
+### 1.1 - An overview of *Torch*’s `Tensor`s ([slides](res/1.1/slides.pdf))

[![Practical 1.1 - Torch](http://img.youtube.com/vi/o3aRgD1uzsc/0.jpg)](http://www.youtube.com/watch?v=o3aRgD1uzsc)

-### 1.2 - An overview on *Torch*’s `image` package
+### 1.2 - An overview of *Torch*’s `image` package ([slides](res/1.2/slides.pdf))

[![Practical 1.2 - image package](http://img.youtube.com/vi/dEjvydjcwOE/0.jpg)](http://www.youtube.com/watch?v=dEjvydjcwOE)


## 2 - Artificial Neural Networks

-### 2.0 - Neural Networks – feed forward (inference)
+### 2.0 - Neural Networks – feed-forward (inference) ([slides](res/2.0/slides.pdf), [quiz](res/2.0/quiz.tex))

[![Practical 2.0 – NN forward](http://img.youtube.com/vi/hxA0wxibv8g/0.jpg)](http://www.youtube.com/watch?v=hxA0wxibv8g)

-### 2.1 - Neural Networks – back propagation (training)
+### 2.1 - Neural Networks – back-propagation (training) ([slides](res/2.1/slides.pdf))

[![Practical 2.1 - NN backward](http://img.youtube.com/vi/VaQUx7m3oR4/0.jpg)](http://www.youtube.com/watch?v=VaQUx7m3oR4)

-### 2.2 - Neural Networks – An overview on *Torch*’s `nn` package
+### 2.2 - Neural Networks – An overview of *Torch*’s `nn` package ([slides](res/2.2/slides.pdf), [script](res/2.2/script.lua))

[![Practical 2.2 - nn package](http://img.youtube.com/vi/atZYdZ8hVCw/0.jpg)](http://www.youtube.com/watch?v=atZYdZ8hVCw)

Binary file added res/1.0/slides.pdf
Binary file added res/1.1/slides.pdf
Binary file added res/1.2/slides.pdf
5 changes: 5 additions & 0 deletions res/2.0/compile.sh
@@ -0,0 +1,5 @@
# Builds 35 different quizzes, naming them testXX, XX = 01 -> 35
# (quiz.tex seeds pgf's RNG from \pdfrandomseed, so each build differs)

for i in 0{1..9} {10..35}; do
rubber --pdf --jobname test$i quiz.tex
done
37 changes: 37 additions & 0 deletions res/2.0/quiz.tex
@@ -0,0 +1,37 @@
\documentclass{article}
\usepackage{pgf}
\usepackage{amsmath,amssymb,bm}
% Random int
\pgfmathsetseed{\number\pdfrandomseed}
\newcommand\rint{\pgfmathparse{random(10)}\pgfmathresult}
\begin{document}

\title{Quiz: forward propagation dimensionality}
\author{BME595 DeepLearning}
\date{\today}
\maketitle

Given a network with the size vector $\bm{s}=(\rint, \rint, \rint, \rint)^\top$, determine $n$, $K$, $L$ and write the dimensionality of:
%
\begin{align*}
\bm{x} \\
\bm{\hat x} \\
\bm{a}^{(1)} \\
\bm{\hat a}^{(1)} \\
\bm{\Theta}^{(1)} \\
\bm{z}^{(2)} \\
\bm{a}^{(2)} \\
\bm{\hat a}^{(2)} \\
\bm{\Theta}^{(2)} \\
\bm{z}^{(3)} \\
\bm{a}^{(3)} \\
\bm{\hat a}^{(3)} \\
\bm{\Theta}^{(3)} \\
\bm{z}^{(4)} \\
\bm{a}^{(4)} \\
\bm{\hat a}^{(4)} \\
h_{\bm{\Theta}}(\bm{x}) \\
\bm{y}
\end{align*}

\end{document}
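A possible answer sketch (my assumptions: hatted vectors are bias-augmented, $\hat{\bm{a}}^{(l)} = (1, \bm{a}^{(l)\top})^\top$, and $\bm{z}^{(l+1)} = \bm{\Theta}^{(l)} \hat{\bm{a}}^{(l)}$, the convention also used in `script.lua` below). Here $L = 4$ is the number of layers, $n = s_1$ the input size, and $K = s_4$ the output size, so for $l = 1, \dots, L$:

\begin{align*}
\bm{x} = \bm{a}^{(1)} &\in \mathbb{R}^{n}, &
\bm{\hat x} = \bm{\hat a}^{(1)} &\in \mathbb{R}^{n+1}, \\
\bm{z}^{(l)}, \bm{a}^{(l)} &\in \mathbb{R}^{s_l}, &
\bm{\hat a}^{(l)} &\in \mathbb{R}^{s_l + 1}, \\
\bm{\Theta}^{(l)} &\in \mathbb{R}^{s_{l+1} \times (s_l + 1)} \; (l < L), &
h_{\bm{\Theta}}(\bm{x}) = \bm{a}^{(L)}, \; \bm{y} &\in \mathbb{R}^{K}.
\end{align*}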
Binary file added res/2.0/slides.pdf
Binary file added res/2.1/slides.pdf
109 changes: 109 additions & 0 deletions res/2.2/script.lua
@@ -0,0 +1,109 @@
-- Recording script, not a runnable script

require 'nn';
lin = nn.Linear(5, 3)
lin
{lin}
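-- In trepl, wrapping a module in a table, {lin}, prints its internal
-- fields (weight, bias, gradWeight, gradBias, ...) rather than its summary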
lin.weight
lin.bias
Theta_1 = torch.cat(lin.bias, lin.weight, 2) -- New Tensor
Theta_1
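-- (Note) Theta_1 packs the bias as its first column, so that
-- Theta_1 * [1; x] = lin.weight * x + lin.bias, used in the forward pass below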
lin:zeroGradParameters()

sig = nn.Sigmoid()
{sig}
require 'gnuplot';
z = torch.linspace(-10, 10, 21)
gnuplot.plot(z, sig:forward(z))
-- Forward pass
x = torch.randn(5)
a1 = x
h_Theta = sig:forward(lin:forward(x)):clone()
z2 = Theta_1 * torch.cat(torch.ones(1), x, 1)
a2 = z2:clone():apply(function (z) return 1/(1 + math.exp(-z)) end)
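-- Sanity check (sketch): the manual pass should match nn's output
print((h_Theta - a2):abs():max()) -- expect a value close to 0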

-- Backward pass
loss = nn.MSECriterion()
? nn.MSECriterion
loss
loss.sizeAverage = false
y = torch.rand(3)
-- forward(input, target)
E = loss:forward(h_Theta, y)
E
(h_Theta - y):pow(2):sum()

dE_dh = loss:updateGradInput(h_Theta, y):clone()
dE_dh
2 * (h_Theta - y)
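-- With sizeAverage = false, E = sum_i (h_i - y_i)^2, so dE/dh = 2 (h - y):
-- the two expressions above should print the same values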

delta2 = sig:updateGradInput(z2, dE_dh)
dE_dh:clone():cmul(a2):cmul(1 - a2)
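-- Chain rule through the sigmoid: delta2 = dE/dh .* sigma'(z2), and
-- sigma'(z) = sigma(z) * (1 - sigma(z)) = a2 .* (1 - a2), hence the check above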

lin:accGradParameters(x, delta2)
{lin}
lin.gradWeight
lin.gradBias
delta2:view(-1, 1) * torch.cat(torch.ones(1), x, 1):view(1, -1)
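-- A linear layer's gradient is the outer product delta2 * [1, x^T]:
-- its first column is gradBias, the remaining columns gradWeight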

lin_gradInput = lin:updateGradInput(x, delta2)
lin.weight:t() * delta2
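-- Backprop through the linear layer: gradInput = W^T * delta2
-- (the bias does not contribute to the input gradient)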

net = nn.Sequential()
net:add(lin);
net:add(sig);
net

-- Inside the training loop ("while true"): one full forward/backward/update
pred = net:forward(x)
pred
h_Theta
err = loss:forward(pred, y)
err
E
gradCriterion = loss:backward(pred, y)
gradCriterion
dE_dh
net:zeroGradParameters()
net:get(1)
torch.cat(net:get(1).gradBias, net:get(1).gradWeight, 2)

oldWeight = net:get(1).weight:clone()
oldBias = net:get(1).bias:clone()
eta = 0.01
net:updateParameters(eta)
net:get(1).weight
oldWeight - 0.01 * net:get(1).gradWeight
net:get(1).bias
oldBias - 0.01 * net:get(1).gradBias
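-- Vanilla gradient descent, theta <- theta - eta * dE/dtheta: each pair of
-- lines above should therefore print identical values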

-- X: design matrix
-- Y: labels / targets matrix / vector
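-- (Sketch) Two hand-rolled training loops follow: per-sample SGD, then
-- mini-batch SGD; m, X, Y, batchSize and learningRate are assumed defined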

for i = 1, m do
   local pred = net:forward(X[i])
   local err = loss:forward(pred, Y[i])
   local gradCriterion = loss:backward(pred, Y[i])
   net:zeroGradParameters()
   net:backward(X[i], gradCriterion)
   net:updateParameters(learningRate)
end

for i = 1, m, batchSize do
   net:zeroGradParameters()
   for j = 0, batchSize - 1 do
      if i + j > m then break end
      local pred = net:forward(X[i+j])
      local err = loss:forward(pred, Y[i+j])
      local gradCriterion = loss:backward(pred, Y[i+j])
      net:backward(X[i+j], gradCriterion)
   end
   net:updateParameters(learningRate)
end
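-- Gradients are zeroed once per batch and net:backward() accumulates them,
-- so updateParameters() applies the gradient summed over the mini-batch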

dataset = {}
function dataset:size() return m end
for i = 1, m do dataset[i] = {X[i], Y[i]} end
trainer = nn.StochasticGradient(net, loss)
trainer:train(dataset)
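-- (Optional sketch) nn.StochasticGradient exposes its hyper-parameters as
-- fields, which could be set before calling train(), e.g.:
-- trainer.learningRate = 0.01
-- trainer.maxIteration = 25 -- number of epochs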

Binary file added res/2.2/slides.pdf
