- Weight pruning: set individual weights in the weight matrix to zero, which corresponds to deleting connections as in the figure above. To achieve a sparsity of k%, we rank the individual weights of a weight matrix W by magnitude (absolute value) and set the smallest k% to zero.
- Unit/Neuron pruning: set entire columns of the weight matrix to zero, in effect deleting the corresponding output neuron. To achieve a sparsity of k%, we rank the columns of a weight matrix by their L2 norm and zero out the smallest k%. The dataset used in this analysis is Fashion-MNIST.
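The two schemes above can be sketched in NumPy as follows. This is a minimal illustration, not the repository's actual implementation; the function names `weight_prune` and `unit_prune` are ours, and `k` is the sparsity percentage described in the bullets.

```python
import numpy as np

def weight_prune(W, k):
    """Weight pruning: zero out the k% of weights with the smallest magnitude."""
    n_prune = int(W.size * k / 100)
    # Indices of the n_prune smallest weights by absolute value, over the flat array.
    idx = np.argsort(np.abs(W), axis=None)[:n_prune]
    flat = W.flatten()
    flat[idx] = 0.0
    return flat.reshape(W.shape)

def unit_prune(W, k):
    """Unit/neuron pruning: zero out the k% of columns with the smallest L2 norm."""
    W = W.copy()
    n_prune = int(W.shape[1] * k / 100)
    norms = np.linalg.norm(W, axis=0)          # L2 norm of each column
    prune_cols = np.argsort(norms)[:n_prune]   # columns with the smallest norms
    W[:, prune_cols] = 0.0
    return W

W = np.array([[1.0, -2.0, 3.0],
              [4.0,  0.5, -6.0]])
print(weight_prune(W, 50))  # zeros the 3 smallest-magnitude weights (0.5, 1.0, -2.0)
print(unit_prune(W, 50))    # zeros the column with the smallest L2 norm (column 1)
```

Note the difference in granularity: weight pruning produces unstructured sparsity scattered across the matrix, while unit pruning removes whole columns, so the corresponding output neurons can be dropped from the layer entirely.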
MotiBaadror/Pruning_in_neural_net
About: Understanding weight and neuron pruning in the neural network.