Nov 30, 2016

nnForge v2.3.0

I have added multi-GPU support to nnForge! Both training and inference can now be done on multiple GPUs; only single-node configurations are supported. Training is parallelized with a data-parallel approach, where each mini-batch is split across the GPUs.
The framework has moved to C++11: you will need gcc 4.7 or newer to build the library, and MS VS 2013 on Windows.
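The data-parallel scheme above can be sketched in a few lines: each device computes the gradient over its shard of the mini-batch, and the per-device gradients are averaged before the weight update. This is a minimal NumPy illustration of the idea, not nnForge's actual C++/CUDA implementation; `grad_fn` and the toy least-squares gradient are hypothetical stand-ins.

```python
import numpy as np

def data_parallel_step(batch_x, batch_y, weights, grad_fn, num_gpus):
    """Split a mini-batch across simulated devices, compute per-shard
    gradients, and average them (data-parallel training sketch)."""
    shards_x = np.array_split(batch_x, num_gpus)
    shards_y = np.array_split(batch_y, num_gpus)
    grads = [grad_fn(x, y, weights) for x, y in zip(shards_x, shards_y)]
    return np.mean(grads, axis=0)  # averaged gradient, applied to all replicas

def lsq_grad(x, y, w):
    """Toy gradient of mean squared error for a linear model x @ w."""
    return x.T @ (x @ w - y) / len(x)
```

With equal-sized shards, the averaged gradient matches the full-batch gradient, which is why splitting the mini-batch this way leaves the training dynamics unchanged.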

Jul 5, 2016

nnForge v2.2.0

Hi, nnForge v2.2.0 is published!
  • Convolutional layer
    • strides added
    • w/out bias option added
  • check_gradient command added
  • ImageNet: reproduced ResNet50 result (7.5% Top5, single crop)
  • Average subsampling layer allows specifying output size instead of subsampling window sizes
  • Added profiling to CUDA backend
  • Max subsampling layer:
    • round_up mode added
    • strides added
  • Step learning rate decay policy added
  • Added update_bn_weights action (though calculating mean and invsigma during training already works well)
  • Spatial Transformer:
    • affine_grid_generator_layer added
    • linear_sampler layer added
  • Utilizing cudnnFindConvolution*AlgorithmEx functions to get maximum performance (cuDNN v5 is required for this)
  • Added strides to sparse convolution layer
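The step learning-rate decay policy mentioned above is the standard scheme of multiplying the base rate by a fixed factor every N epochs. A minimal sketch, with illustrative parameter names that are not necessarily nnForge's:

```python
def step_decay_lr(base_lr, epoch, step, gamma):
    """Step decay: scale base_lr by gamma once every `step` epochs.
    E.g. base_lr=0.1, step=10, gamma=0.5 gives 0.1 for epochs 0-9,
    0.05 for epochs 10-19, 0.025 for epochs 20-29, and so on."""
    return base_lr * (gamma ** (epoch // step))
```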

Feb 21, 2016

nnForge v2.1.0

Two months have passed since the last release, and this one is pretty big: a number of layers were added, and the functionality of existing layers was extended. Here is the full list of changes in nnForge v2.1.0:
  • New layers added: Concat, Reshape, CDFMax, PrefixSum, Upsampling, Add (element-wise), CDF2PDF, EntryConvolution
  • Average and Max subsampling layers are now capable of subsampling in feature map and entry directions
  • MSE Layer reworked into generic LError layer (L2 by default)
  • Max subsampling can do MIN as well
  • Optional scale parameter for AverageSubsampling layer added
  • Detailed info on layers in the schema dumped
  • Dumping graph with layer configs in debug mode
  • Added dumping data in CSV format
  • Runtime layer replacement with data layers
  • Bug fixes
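The generalization of the MSE layer into an LError layer amounts to computing the element-wise p-th power of the absolute error, with p=2 recovering the L2 (squared-error) case. A minimal sketch; the exact normalization and naming in nnForge may differ:

```python
import numpy as np

def l_error(pred, target, p=2.0):
    """Generic L^p error: mean of |pred - target|^p over all elements.
    p=2 gives the familiar mean squared error; p=1 gives mean absolute error."""
    return np.mean(np.abs(pred - target) ** p)
```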