- Added the "Upper bound on L2 norm of the incoming weight vector for each output neuron" regularization
- ROC-type results now work for multi-class output types
- Added the rotate_band and noise_data_transformer data transformers
- Dropout is now applied per input neuron instead of per input feature map, a more robust option
- Minor fixes
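The L2-norm upper bound mentioned above is commonly known as a max-norm constraint: after each update, any output neuron whose incoming weight vector exceeds the bound is scaled back down to it. A minimal NumPy sketch (illustrative names, not nnForge's actual API):

```python
import numpy as np

def apply_max_norm(weights, max_norm):
    """Clip the L2 norm of each output neuron's incoming weight vector.

    weights: array of shape (n_outputs, n_inputs); row i holds the
             incoming weights of output neuron i.
    """
    norms = np.linalg.norm(weights, axis=1, keepdims=True)
    # Scale down only the rows whose norm exceeds the bound;
    # rows already within the bound are multiplied by 1.0.
    scale = np.minimum(1.0, max_norm / np.maximum(norms, 1e-12))
    return weights * scale

w = np.array([[3.0, 4.0],   # norm 5 -> rescaled to norm 2
              [0.6, 0.8]])  # norm 1 -> left untouched
w_clipped = apply_max_norm(w, 2.0)
```

Unlike an L2 penalty added to the loss, this constraint leaves weights inside the ball unchanged and only projects offending rows back onto it.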
Jul 27, 2013
nnForge v1.0.5
Just published nnForge v1.0.5:
Jun 23, 2013
nnForge v1.0.4
Hi,
Here is nnForge 1.0.4. It features:
- Rectified linear, soft rectified linear and softmax layers implemented in CPU and GPU backends
- On-the-fly distortion
- ann_snapshot command (weights visualization)
- Minor improvements and bug-fixes
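For reference, the three activations named in the list above can be sketched as follows in plain NumPy (not nnForge code; reading "soft rectified linear" as the softplus function is my assumption):

```python
import numpy as np

def relu(x):
    # Rectified linear: max(x, 0), applied element-wise.
    return np.maximum(x, 0.0)

def soft_relu(x):
    # Softplus, log(1 + exp(x)): a smooth approximation of ReLU.
    return np.log1p(np.exp(x))

def softmax(x):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()
```

Softmax is typically used as the output layer for multi-class problems, since it produces a probability distribution over the classes.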
May 31, 2013
nnForge v1.0.3
Hi,
I made a number of improvements to the code and decided it was time to mark them with a version number, nnForge v1.0.3:
- Ability to validate and test with multiple samples per entry (averaging results)
- Max Subsampling layer in CUDA backend (2D only)
- Flipping image option added to the toolset
- Additional constructor with fixed seed for random generator
- preparing_data command split into preparing_training_data and preparing_testing_data
- A couple of minor bug-fixes
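Validating with multiple samples per entry means running the network on several variants of the same entry (for example, an original and a flipped image) and averaging the outputs. In outline it might look like this (illustrative Python, not nnForge's actual API):

```python
import numpy as np

def predict_averaged(predict_fn, samples):
    """Run the network on several samples of one entry and average.

    predict_fn: maps one sample to an output vector.
    samples:    list of samples derived from the same entry,
                e.g. the original image plus distorted copies.
    """
    outputs = np.stack([predict_fn(s) for s in samples])
    return outputs.mean(axis=0)

# Toy model: an identity "network" over two samples of one entry.
avg = predict_averaged(lambda s: np.asarray(s, dtype=float),
                       [[1.0, 0.0], [0.0, 1.0]])
```

Averaging over distorted copies usually reduces the variance of the prediction at the cost of running the forward pass once per sample.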
Apr 28, 2013
nnForge v1.0.2
I have finally published nnForge v1.0.2.
This release contains a single major feature: performance tuning for Kepler GK110 (GeForce Titan, Tesla K20). I have also improved performance on Fermi cards.
What about Kepler GK104 (Tesla K10, GeForce 680, 670, etc.)? Almost all the optimizations I applied for GK110 should apply to GK104 as well, though I haven't tested them: I don't have a GK104 card, so I haven't even run the code on it.
Initially I planned to add support for 1D convolutional layers, but I ended up adding it for testers and hessian calculators only. The reason is simple: it is better to have an example against which I can test new functionality; otherwise I might just add a lot of code which doesn't work.
Apr 26, 2013
Grumbling a little
I constantly find that it is rather easy to make a mistake when implementing forward/backward propagation and weight updates for neural network layers: mistakes with offsets or iteration counts, for example. In the best case I get a cuda-memcheck error, which lets me identify the problem and fix it right away.
In other cases the network will work almost fine. For example, if I accidentally set the iteration count to a value smaller than required, I might simply fail to update some weights. The network will adapt to such an accidental restriction and still learn pretty fast, until I encounter a network schema where this bug affects too many weights and the network starts learning too slowly. *sigh*
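The failure mode described above is easy to reproduce: if the update loop stops one element short, the trailing weights silently keep their initial values while the rest of the network compensates. A toy illustration (hypothetical code, not taken from nnForge):

```python
def sgd_update(weights, grads, lr, count):
    """Plain SGD step over the first `count` weights.

    Bug being illustrated: if count < len(weights), the trailing
    weights are silently never updated.
    """
    for i in range(count):
        weights[i] -= lr * grads[i]
    return weights

w = [1.0, 1.0, 1.0]
g = [1.0, 1.0, 1.0]
# Off-by-one iteration count: the last weight keeps its initial value.
w = sgd_update(w, g, lr=0.5, count=len(w) - 1)
```

Nothing crashes and the loss still goes down, which is exactly why this class of bug survives until a schema comes along where the untouched weights matter.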
Apr 6, 2013
NVIDIA GeForce Titan
Just bought a GeForce Titan. I will play BioShock Infinite first, then proceed with optimizing nnForge for GK110 :)
Mar 3, 2013
nnForge v1.0.1
Hi,
I published v1.0.1 of the nnForge library. It contains a single yet important enhancement: the library now fully supports input neurons with 'float' data type; before that, the only supported type for input neurons was (unsigned) byte.
With this improvement the user can now feed the system input neurons of any range and suitable precision, so the cumbersome scaling feature is no longer needed and has been removed.