Apr 26, 2013

Grumbling a little

I keep discovering how easy it is to make a mistake when implementing forward/backward propagation and weight updates for neural network layers — wrong offsets or wrong iteration counts, for example. In the best case I get a cuda-memcheck error, which lets me identify the problem and fix it right away.

In other cases the network works almost fine. For example, if I accidentally set the iteration count to a value lower than required, I might simply never update some of the weights. The network accommodates such accidental restrictions and still learns pretty fast — until I run into a network schema where the bug affects too many weights and learning slows to a crawl. *sigh*