Feb 7, 2014
nnForge v1.1.2
I brushed up the parameters for the nnForge toolset. I also changed default values for some of them; if you run GTSRB you will probably need to update your config file. Here is the full change list:
- Deterministic transformer added for testing and validating
- Snapshots are made on ANNs from the batch directory
- Toolset parameters changed:
  - learning_rate_decay_rate is exposed as a command-line parameter
  - training_speed parameter renamed to learning_rate; training_speed_degradation is dropped
  - training_iteration_count renamed to training_epoch_count
  - train command now does batch training; the batch_train command is removed
  - validate and test now work in batch mode; validate_batch and test_batch are removed
  - mu_increase_factor is set to 1.0 by default
  - max_mu is set to 1.0 by default
- Bug fixes
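The learning_rate_decay_rate parameter exposed in this release can be pictured with a small sketch. The schedule below (decay only over the final "tail" epochs, multiplicative decay) is an assumption for illustration; the function name and defaults are hypothetical and not nnForge's actual code:

```python
def decayed_learning_rate(base_rate, epoch, epoch_count,
                          tail_length=5, decay_rate=0.5):
    """Keep base_rate until the tail epochs, then decay it multiplicatively.

    Illustrative only: nnForge's exact decay schedule may differ.
    """
    tail_start = epoch_count - tail_length
    if epoch < tail_start:
        return base_rate
    # Each tail epoch multiplies the rate by decay_rate once more.
    return base_rate * decay_rate ** (epoch - tail_start + 1)
```

With base_rate 0.1 and a 5-epoch tail out of 10 epochs, the rate stays at 0.1 through epoch 4 and then halves each epoch.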
Jan 11, 2014
nnForge v1.1.1
I've just published a new nnForge release, v1.1.1:
- Space-filling curve now used for all the convolutional updaters, testers and hessians in the CUDA backend, improving training performance for large networks
- Improved concurrency of training and loading/processing input data across all stages by loading data in a separate host thread, CUDA backend only
- In-memory supervised data reader added
- Added NVTX profiling for reading input data, CUDA backend only
- Fixed:
  - Binding a texture to a too-large linear buffer
  - Average subsampling backprop in the CUDA backend was wrong for non-even configs
  - Performance in Windows with the WDDM driver
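The space-filling-curve scheduling mentioned above is commonly done with Z-order (Morton) indexing, which keeps 2D-adjacent work items close together in a 1D traversal and so improves memory locality. A minimal sketch of Morton indexing follows; the function name and bit width are illustrative assumptions, not nnForge's actual CUDA code:

```python
def morton_interleave(x, y, bits=16):
    """Interleave the bits of x and y into a Z-order (Morton) index.

    Illustrative sketch of the space-filling-curve idea; not nnForge code.
    """
    code = 0
    for i in range(bits):
        code |= ((x >> i) & 1) << (2 * i)      # bits of x go to even positions
        code |= ((y >> i) & 1) << (2 * i + 1)  # bits of y go to odd positions
    return code
```

Walking coordinates in increasing Morton order visits each 2x2 block before moving on, which is why nearby (x, y) pairs tend to map to nearby indices.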
Dec 27, 2013
Moved to blogger
Moved the blog from Zoho to the Blogger platform for a number of reasons, including better uptime, better design, and the ability to edit posts in place. All posts have been copied to the new platform.
Nov 23, 2013
nnForge v1.1.0
I've just published a new nnForge release, v1.1.0, with a lot of new functionality and fixes:
- Squared Hinge Loss error function added
- Local contrast subtractive layer hessian and updater implementations added to both the CPU and GPU backends
- Maxout layer added, with CPU and GPU backends implemented
- Added tester functionality for the rgb_to_yuv_convert layer in the CUDA backend
- Learning-rate decay functionality for tail iterations added
- Fixed:
  - Functionality bug in the L2 incoming-weights regularizer
  - Functionality bug for rectangular local contrast subtractive layers
  - Recovered snapshot_invalid functionality
Oct 23, 2013
Convolutional Neural Networks talk
I just gave a presentation on convolutional neural networks at a Computer Vision meet-up at Yandex. Here are the slides (in Russian).
Sep 22, 2013
nnForge v1.0.7
I released the latest commits to nnForge under the tag v1.0.7. The major improvements are support for regression-type models and a 3D convolutional layer implemented in the CUDA backend. Here is the complete list:
- supervised_data_reader now naturally inherits from unsupervised_data_reader: code is simplified
- supervised_transformed_output_data_reader and unsupervised_transformed_input_data_reader added
- Stats for readers (max, min, avg, std_dev) implemented
- normalize_data_transformer added
- Regression output type added
- Convolutional 3D layer implemented in CUDA backend
- Max subsampling 3D layer implemented in CUDA backend
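The reader stats (max, min, avg, std_dev) mentioned above can be computed in a single pass over the data; a common way is Welford's online algorithm. This sketch is illustrative only (the function name and return shape are assumptions, not nnForge's API):

```python
import math

def stream_stats(values):
    """One-pass max/min/mean/std_dev using Welford's online algorithm."""
    count, mean, m2 = 0, 0.0, 0.0
    vmin, vmax = float("inf"), float("-inf")
    for v in values:
        count += 1
        vmin, vmax = min(vmin, v), max(vmax, v)
        # Welford update: m2 accumulates the sum of squared deviations.
        delta = v - mean
        mean += delta / count
        m2 += delta * (v - mean)
    std_dev = math.sqrt(m2 / count) if count else 0.0
    return vmax, vmin, mean, std_dev
```

The one-pass formulation matters for a data reader: it avoids holding the whole dataset in memory and avoids the numerical issues of the naive sum-of-squares approach.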
Aug 21, 2013
nnForge v1.0.6
I've just published the v1.0.6 release of nnForge. It contains:
- Dropout support extended to all layers
- Data transformers simplified; the deterministic noise mode is removed
- Added a sanity check on MSE in order to drop ANNs with broken weights during training
- Fixed the plain (CPU) backend for rectangular convolutional and subsampling layers
- CUDA exceptions now include the filename and line number
- Minor fixes and improvements
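The MSE sanity check above guards against networks whose weights have diverged (producing NaN, infinite, or exploding error). A minimal sketch of such a check; the function name and threshold are hypothetical, not nnForge's actual code:

```python
import math

def mse_is_sane(mse, upper_bound=1e6):
    """Reject NaN/inf or absurdly large MSE, a typical sign of diverged weights.

    Illustrative sketch; the bound and name are assumptions.
    """
    return math.isfinite(mse) and 0.0 <= mse < upper_bound
```

A check like this lets a batch-training run discard broken ANNs instead of carrying them forward.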