
Dec 27, 2013

Moved to blogger

Moved the blog from Zoho to the Blogger platform for a number of reasons, including better uptime, design, and the ability to edit posts in place. All posts have been copied to the new platform.

Nov 23, 2013

nnForge v1.1.0

I've just published a new nnForge release, v1.1.0, with a lot of new functionality and fixes:
  • Squared hinge loss error function added
  • Hessian and updater implementations for the local contrast subtractive layer added to both CPU and GPU backends
  • Maxout layer added, with CPU and GPU backends implemented
  • Tester functionality for the rgb_to_yuv_convert layer added to the CUDA backend
  • Learning rate decay for tail iterations added
  • Fixed:
    • Functionality bug in the L2 incoming weights regularizer
    • Functionality bug in rectangular local contrast subtractive
    • Recovered snapshot_invalid functionality
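Of the additions above, the maxout layer is the most interesting conceptually: it takes the maximum across small groups of feature maps instead of applying a fixed nonlinearity. A minimal NumPy sketch of the forward pass (function and parameter names are illustrative, not nnForge's API):

```python
import numpy as np

def maxout_forward(x, group_size):
    """Maxout activation: max over groups of consecutive feature maps.

    x has shape (feature_maps, height, width); feature_maps must be
    divisible by group_size. The result has feature_maps // group_size maps.
    """
    fm, h, w = x.shape
    assert fm % group_size == 0
    return x.reshape(fm // group_size, group_size, h, w).max(axis=1)

x = np.arange(24, dtype=np.float32).reshape(4, 2, 3)  # 4 feature maps, 2x3 each
y = maxout_forward(x, group_size=2)
print(y.shape)  # (2, 2, 3)
```

The learned convolution weights feeding the grouped maps make the unit a piecewise-linear approximator, which is why maxout pairs so well with dropout.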

Oct 23, 2013

Convolutional Neural Networks talk

I just gave a presentation on convolutional neural networks at the Computer Vision meet-up at Yandex. Here are the slides (in Russian).

Sep 22, 2013

nnForge v1.0.7

I released the latest commits to nnForge under the tag v1.0.7. The major improvements are support for regression-type models and a 3D convolutional layer implemented in the CUDA backend. Here is the complete list:
  • supervised_data_reader now naturally inherits from unsupervised_data_reader: code is simplified
  • supervised_transformed_output_data_reader and unsupervised_transformed_input_data_reader added
  • Stats for readers (max, min, avg, std_dev) implemented
  • normalize_data_transformer added
  • Regression output type added
  • Convolutional 3D layer implemented in CUDA backend
  • Max subsampling 3D layer implemented in CUDA backend

Aug 21, 2013

nnForge v1.0.6

I've just published v1.0.6 release of nnForge. It contains:
  • Dropout support extended to all layers
  • Data transformers simplified; deterministic mode of noise removed
  • Sanity check for MSE added, in order to drop ANNs with broken weights during training
  • Fixed the plain (CPU) backend for rectangular convolutional and subsampling layers
  • CUDA exceptions now include the filename and line number
  • Minor fixes and improvements
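The MSE sanity check above amounts to discarding any trained network whose error is non-finite or absurdly large. A sketch of the idea (the threshold and all names here are mine, not nnForge's):

```python
import math

def is_broken(mse, worst_acceptable_mse):
    """Treat a network as broken if its training MSE is NaN, infinite,
    or far above what any sane set of weights would produce."""
    return not math.isfinite(mse) or mse > worst_acceptable_mse

# Hypothetical MSE values for three trained candidate networks.
candidates = {"ann_0": 0.042, "ann_1": float("nan"), "ann_2": 7.3e12}
survivors = {name: mse for name, mse in candidates.items()
             if not is_broken(mse, worst_acceptable_mse=1.0e3)}
print(sorted(survivors))  # ['ann_0']
```

A check like this is cheap and catches weight blow-ups (diverged learning rates, uninitialized memory) before a broken network wastes further training time.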

Jul 27, 2013

Facial Expression Recognition Challenge

By the way, I managed to get the first public result with nnForge: 3rd place in Challenges in Representation Learning: Facial Expression Recognition Challenge at Kaggle.


This contest and two others formed the basis for the ICML 2013 Workshop on Challenges in Representation Learning; all the results are covered and analyzed in "Challenges in Representation Learning: A report on three machine learning contests", Ian Goodfellow et al., arXiv:1307.0414.

nnForge v1.0.5

Just published nnForge v1.0.5:
  • Regularization "Upper bound on L2 norm of the incoming weight vector for each output neuron" added
  • ROC-type results now work correctly for multi-class output types
  • rotate_band and noise data transformers added
  • Dropout is now applied per input neuron instead of per input feature map, which is a more robust option
  • Minor fixes
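The dropout change is worth spelling out: per feature map, one coin flip zeroes an entire map at once, while per input neuron every single value gets its own independent mask. An illustrative NumPy comparison (not nnForge code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.ones((4, 3, 3))  # (feature_maps, height, width)
p = 0.5                 # dropout probability

# Per feature map: one Bernoulli draw per map, broadcast over the whole map,
# so entire maps survive or die together.
map_mask = rng.random((4, 1, 1)) >= p
per_map = x * map_mask

# Per input neuron: an independent draw for every value, so the surviving
# fraction concentrates much more tightly around 1 - p.
neuron_mask = rng.random(x.shape) >= p
per_neuron = x * neuron_mask

print(per_map.reshape(4, -1).any(axis=1))  # each map is all-on or all-off
```

With only a few feature maps, per-map dropout has high variance (a layer can lose most of its input in one unlucky draw), which is presumably what "more robust" refers to.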

Jun 23, 2013

nnForge v1.0.4

Hi,

Here is nnForge 1.0.4. It features:
  • Rectified linear, soft rectified linear and softmax layers with CPU and GPU backends implemented
  • On-the-fly distortion
  • ann_snapshot command (weights visualization)
  • Minor improvements and bug-fixes

May 31, 2013

nnForge v1.0.3

Hi,

I did a number of improvements to the code and decided it was time to mark them with a version number, nnForge v1.0.3:
  • Ability to validate and test with multiple samples per entry (averaging results)
  • Max Subsampling layer in CUDA backend (2D only)
  • Flipping image option added to the toolset
  • Additional constructor with fixed seed for random generator
  • preparing_data command split into preparing_training_data and preparing_testing_data
  • A couple of minor bug-fixes
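Validating and testing with multiple samples per entry (for example, several crops or flips of the same image) and averaging the outputs is test-time augmentation in the standard sense. A minimal sketch with a toy model (all names are illustrative):

```python
import numpy as np

def predict_averaged(predict, samples):
    """Average a model's output scores over several samples of one entry."""
    return np.mean([predict(s) for s in samples], axis=0)

# Toy "model": two class scores that depend only on mean intensity.
def toy_predict(sample):
    m = sample.mean()
    return np.array([m, 1.0 - m])

samples = [np.full((2, 2), 0.2), np.full((2, 2), 0.4)]  # e.g. two crops
out = predict_averaged(toy_predict, samples)
print(out)  # [0.3 0.7]
```

Averaging smooths out the model's sensitivity to the exact crop or flip, which typically lowers validation error at the cost of running inference once per sample.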

Apr 28, 2013

nnForge v1.0.2

I have finally published nnForge v1.0.2.

This release contains one major feature: performance tuning for Kepler GK110 (GeForce Titan, Tesla K20). I have also improved performance for Fermi cards.

What about Kepler GK104 (Tesla K10, GeForce 680, 670, etc.)? Almost all the optimizations I applied for GK110 are applicable to GK104, though I didn't test them: I don't have a GK104 card, so I haven't even run the code on it.

Initially I planned to add support for 1D convolutional layers, but ended up adding it for testers and hessian calculators only. The reason is simple: it is better to have an example on which I can test new functionality; otherwise I might just add a lot of code that doesn't work.

Apr 26, 2013

Grumbling a little

I constantly find that it is rather easy to make a mistake when implementing forward/backward propagation and weight updates for neural network layers, for example mistakes with offsets or iteration counts. In the best case I get a cuda-memcheck error and am thus able to identify the problem and fix it right away.

In other cases the network will work almost fine. For example, if I accidentally set the iteration count to a value smaller than required, I might end up just not updating some weights. The network will accommodate such accidental restrictions and still learn pretty fast, until I encounter a network schema where this bug affects too many weights and the network starts learning too slowly. *sigh*
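A toy version of that failure mode: an update loop that stops one weight short still trains without any error, it just silently never touches the last weight (pure illustration, not nnForge code):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=5)   # weights of a tiny layer
grad = np.ones(5)        # pretend gradient
lr = 0.1

w_before = w.copy()
for i in range(len(w) - 1):   # BUG: should be range(len(w))
    w[i] -= lr * grad[i]

# Every weight moved except the last one -- nothing crashes, nothing warns,
# so the network "almost" learns and the bug stays hidden.
print(w_before[-1] == w[-1])  # True
```

This is exactly why such bugs only surface on schemas where the skipped fraction of weights becomes large enough to visibly slow down learning.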

Apr 6, 2013

NVIDIA GeForce Titan

Just bought a GeForce Titan. I will play BioShock Infinite first, then proceed with optimizing nnForge for GK110 :)


Mar 3, 2013

nnForge v1.0.1

Hi,

I published v1.0.1 of the nnForge library. It contains a single yet important enhancement: the library now fully supports data with input neurons of 'float' data type; before that, the only supported type for input neurons was (unsigned) byte.

With that improvement the user is now able to feed the system input neurons of any range and suitable precision, so the cumbersome scaling feature is not needed anymore and has been removed.
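In practice this moves range handling into the user's data preparation: byte data can be converted and normalized to whatever range suits the problem before being fed in. An illustrative NumPy fragment (not nnForge's reader API):

```python
import numpy as np

raw = np.array([0, 64, 128, 255], dtype=np.uint8)  # original byte inputs

# With float inputs, the user normalizes however suits the problem,
# e.g. scale to [0, 1] and then center around zero:
zero_one = raw.astype(np.float32) / 255.0
centered = zero_one - zero_one.mean()

print(float(zero_one.max()), round(float(centered.sum()), 5))  # 1.0 0.0
```

Any other convention (unit variance, per-channel means, physical units) works just as well, which is the point of dropping the built-in scaling.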

Feb 24, 2013

nnForge v1.0.0 is published

Hi,

I published the initial version of the neural network library nnForge on GitHub. It has two backends implemented: CPU and GPU (CUDA).

I also added a GTSRB example application to the package and plan to add more later.

You will find news and updates here in the blog, stay tuned.