Sep 22, 2013
nnForge v1.0.7
I released the latest commits to nnForge under the tag v1.0.7. The major improvements are adjustments for regression-type models and a 3D convolutional layer implemented in the CUDA backend. Here is the complete list:
- supervised_data_reader now naturally inherits unsupervised_data_reader: code is simplified
- supervised_transformed_output_data_reader and unsupervised_transformed_input_data_reader added
- Stats for readers (max, min, avg, std_dev) implemented
- normalize_data_transformer added
- Regression output type added
- Convolutional 3D layer implemented in CUDA backend
- Max subsampling 3D layer implemented in CUDA backend
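The per-reader statistics mentioned above (max, min, avg, std_dev) can be accumulated in a single pass over the data. A minimal sketch, using a hypothetical `running_stats` helper rather than nnForge's actual reader API, based on Welford's online algorithm:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>

// Hypothetical helper, not nnForge's actual API: accumulates
// max, min, mean and standard deviation in one streaming pass.
struct running_stats
{
    double mn = std::numeric_limits<double>::infinity();
    double mx = -std::numeric_limits<double>::infinity();
    double mean = 0.0;
    double m2 = 0.0;   // running sum of squared deviations from the mean
    unsigned long long count = 0;

    void push(double x)
    {
        mn = std::min(mn, x);
        mx = std::max(mx, x);
        ++count;
        double delta = x - mean;
        mean += delta / count;          // Welford's incremental mean update
        m2 += delta * (x - mean);
    }

    double std_dev() const
    {
        return count > 0 ? std::sqrt(m2 / count) : 0.0;
    }
};
```

The single-pass form matters here because a data reader streams entries and cannot cheaply revisit them to compute the deviation after the mean is known.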
Aug 21, 2013
nnForge v1.0.6
I've just published the v1.0.6 release of nnForge. It contains:
- Dropout support is extended to all layers
- Data transformers simplified; removed deterministic mode of noise
- Added a sanity check on MSE in order to drop ANNs with broken weights during training
- Fixed plain (CPU) backend for rectangular convolutional and subsampling layers
- CUDA exceptions now go with filename and line number
- Minor fixes and improvements
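Attaching a filename and line number to an exception, as the CUDA backend now does, follows the usual pattern of a macro capturing `__FILE__` and `__LINE__` at the call site. A plain-C++ sketch with illustrative names (not nnForge's actual exception class):

```cpp
#include <sstream>
#include <stdexcept>
#include <string>

// Illustrative name, not nnForge's actual class: an exception that
// carries the source location where the error was detected.
class located_error : public std::runtime_error
{
public:
    located_error(const std::string& message, const char* file, int line)
        : std::runtime_error(build(message, file, line))
    {
    }

private:
    static std::string build(const std::string& message, const char* file, int line)
    {
        std::ostringstream ss;
        ss << message << " (" << file << ":" << line << ")";
        return ss.str();
    }
};

// The macro expands at the call site, so __FILE__ / __LINE__ point at
// the failing call -- the same trick a CUDA backend uses to report
// which runtime call returned an error.
#define throw_located(msg) throw located_error((msg), __FILE__, __LINE__)
```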
Jul 27, 2013
Facial Expression Recognition Challenge
By the way, I managed to get the first public result with nnForge: the 3rd place in the Challenges in Representation Learning: Facial Expression Recognition Challenge at Kaggle.
This contest and two others were the basis for the ICML 2013 Workshop on Challenges in Representation Learning; all the results are covered and analyzed in "Challenges in Representation Learning: A report on three machine learning contests", Ian Goodfellow et al., arXiv:1307.0414.
nnForge v1.0.5
Just published nnForge v1.0.5:
- Regularization "Upper bound on L2 norm of the incoming weight vector for each output neuron" added
- ROC-type result now works fine for multi-class output types
- rotate_band and noise data transformers added
- Dropout is now done per input neuron instead of per input feature map, a more robust option
- Minor fixes
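The "upper bound on L2 norm" regularization listed above is a max-norm constraint: after a weight update, if an output neuron's incoming weight vector exceeds the bound, it is rescaled back onto the bound. A minimal sketch with an illustrative function name (not nnForge's actual API):

```cpp
#include <cmath>
#include <vector>

// Illustrative name, not nnForge's actual API: project the incoming
// weight vector of one output neuron back onto the L2 ball of radius
// upper_bound if it has grown beyond it.
void apply_max_norm(std::vector<float>& incoming_weights, float upper_bound)
{
    float sum_sq = 0.0f;
    for (float w : incoming_weights)
        sum_sq += w * w;
    float norm = std::sqrt(sum_sq);
    if (norm > upper_bound)
    {
        float scale = upper_bound / norm;   // shrink uniformly, preserving direction
        for (float& w : incoming_weights)
            w *= scale;
    }
}
```

Unlike L2 weight decay, this leaves weights untouched until they cross the bound, which combines well with dropout.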
Jun 23, 2013
nnForge v1.0.4
Hi,
Here is nnForge 1.0.4. It features:
- Rectified linear, soft rectified linear and softmax layers implemented with CPU and GPU backends
- On-the-fly distortion
- ann_snapshot command (weights visualization)
- Minor improvements and bug-fixes
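The new activation layers apply simple element-wise formulas; the CPU and GPU backends compute the same math. A plain-C++ sketch of the three (rectified linear, soft rectified linear a.k.a. softplus, and a numerically shifted softmax), with illustrative free-function names rather than nnForge's layer classes:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Element-wise activations (illustrative names, not nnForge's API).
inline float rectified_linear(float x)      { return std::max(x, 0.0f); }
inline float soft_rectified_linear(float x) { return std::log(1.0f + std::exp(x)); } // softplus

// Softmax over one output vector, shifted by the max for numerical
// stability (exp never sees a large positive argument).
std::vector<float> softmax(const std::vector<float>& v)
{
    float mx = *std::max_element(v.begin(), v.end());
    float sum = 0.0f;
    std::vector<float> out(v.size());
    for (std::size_t i = 0; i < v.size(); ++i)
    {
        out[i] = std::exp(v[i] - mx);
        sum += out[i];
    }
    for (float& o : out)
        o /= sum;   // normalize so the outputs form a probability distribution
    return out;
}
```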
May 31, 2013
nnForge v1.0.3
Hi,
I made a number of improvements to the code and decided it was time to mark them with a version number, nnForge v1.0.3:
- Ability to validate and test with multiple samples per entry (averaging results)
- Max Subsampling layer in CUDA backend (2D only)
- Flipping image option added to the toolset
- Additional constructor with fixed seed for random generator
- preparing_data command split into preparing_training_data and preparing_testing_data
- A couple of minor bug-fixes
Apr 28, 2013
nnForge v1.0.2
I have finally published nnForge v1.0.2.
This release contains a single major feature: performance tuning for Kepler GK110 (GeForce Titan, Tesla K20). I have also improved performance for Fermi cards.
What about Kepler GK104 (Tesla K10, GeForce 680, 670, etc.)? Almost all the optimizations I applied to GK110 are applicable to GK104, though I didn't test them: I don't have a GK104 card, so I didn't even run the code on it.
Initially I planned to add support for 1D convolutional layers, but ended up adding it for testers and hessian calculators only. The reason is simple: it is better to have an example against which I can test new functionality; otherwise I might just add a lot of code that doesn't work.