Releases: milakov/nnForge

v2.3.0

30 Nov 18:53
  • Multi-GPU training and inference (single node only)
  • Moved the library to C++11
  • A few bugs fixed

v2.2.0

05 Jul 18:55
  • Convolutional layer
    -- Strides added
    -- Option to omit the bias added
  • check_gradient command added
  • ImageNet: reproduced the ResNet-50 result (7.5% Top-5 error, single crop)
  • Average subsampling layer allows specifying output size instead of subsampling window sizes
  • Added profiling to CUDA backend
  • Max subsampling layer:
    -- round_up mode added
    -- Strides added
  • Step learning rate decay policy added
  • Added update_bn_weights action (though calculating mean and invsigma during training works well)
  • Spatial Transformer:
    -- affine_grid_generator_layer added
    -- linear_sampler layer added
  • Utilizing cudnnFindConvolution*AlgorithmEx functions to get maximum performance; cuDNN v5 is required for this (see the sketch after this list)
  • Added strides to sparse convolution layer
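
To illustrate the algorithm-selection item above: a minimal sketch of how the exhaustive-search call cudnnFindConvolutionForwardAlgorithmEx (cuDNN v5+) can be used to pick the fastest forward algorithm. The descriptor and buffer names (x_desc, d_x, and so on) are placeholders, not nnForge's actual code.

```cpp
// Minimal sketch: selecting the fastest forward-convolution algorithm with
// cudnnFindConvolutionForwardAlgorithmEx (cuDNN v5+). Descriptors and device
// buffers (x_desc, d_x, ...) are assumed to be set up elsewhere; error
// checking is omitted for brevity.
#include <cudnn.h>
#include <cstddef>
#include <vector>

cudnnConvolutionFwdAlgo_t pick_fastest_fwd_algo(
    cudnnHandle_t handle,
    cudnnTensorDescriptor_t x_desc, const void * d_x,
    cudnnFilterDescriptor_t w_desc, const void * d_w,
    cudnnConvolutionDescriptor_t conv_desc,
    cudnnTensorDescriptor_t y_desc, void * d_y,
    void * d_workspace, size_t workspace_size)
{
    const int requested_algo_count = 8;
    int returned_algo_count = 0;
    std::vector<cudnnConvolutionFwdAlgoPerf_t> perf(requested_algo_count);
    // Unlike the heuristic cudnnGetConvolutionForwardAlgorithm, the *Ex
    // variant actually times each algorithm on the real buffers provided.
    cudnnFindConvolutionForwardAlgorithmEx(
        handle,
        x_desc, d_x,
        w_desc, d_w,
        conv_desc,
        y_desc, d_y,
        requested_algo_count, &returned_algo_count, perf.data(),
        d_workspace, workspace_size);
    // Results come back sorted by execution time: perf[0] is the fastest
    // algorithm that ran successfully within the provided workspace.
    return perf[0].algo;
}
```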

v2.1.0

21 Feb 14:18
  • New layers added: Concat, Reshape, CDFMax, PrefixSum, Upsampling, Add (element-wise), CDF2PDF, EntryConvolution
  • MSE layer reworked into a generic LError layer (L2 by default; see the sketch after this list)
  • Average and Max subsampling layers are now capable of subsampling in feature map and entry directions
  • Max subsampling can do MIN as well
  • Optional scale parameter for AverageSubsampling layer added
  • Detailed info on layers is included in the schema dump
  • Dumping graph with layer configs in debug mode
  • Added dumping data in CSV format
  • Runtime layer replacement with data layers
  • Bug fixes
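
For the LError item above: a minimal sketch of what a generic Lp error and its gradient compute, with p = 2 recovering the MSE-style default. The exact normalization nnForge applies is not shown in the notes, so this is illustrative only.

```cpp
// Minimal sketch of a generic Lp error ("LError"), of which the L2/MSE case
// is p = 2. Normalization details are an assumption, not nnForge's code.
#include <cmath>
#include <cstddef>

// E = sum_i |actual_i - target_i|^p
double l_error(const float * actual, const float * target, size_t n, double p = 2.0)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i)
        sum += std::pow(std::fabs(static_cast<double>(actual[i]) - target[i]), p);
    return sum;
}

// dE/dactual_i = p * |d_i|^(p-1) * sign(d_i), where d_i = actual_i - target_i
void l_error_gradient(const float * actual, const float * target, float * grad, size_t n, double p = 2.0)
{
    for (size_t i = 0; i < n; ++i)
    {
        double d = static_cast<double>(actual[i]) - target[i];
        double s = (d > 0.0) - (d < 0.0); // sign of the difference
        grad[i] = static_cast<float>(p * std::pow(std::fabs(d), p - 1.0) * s);
    }
}
```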

v2.0.2

19 Dec 20:53
  • Gradient modifier layer added
  • structured_data_constant_reader added
  • Error functions accept an optional third input layer: a mask
  • ADAM training algorithm implemented; use --momentum_type adam. The learning rate should generally be much smaller than for other methods (see the sketch after this list)
  • Changed default value for cuda_fixed_working_buffers_ratio to 0.4
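
For the ADAM item above, a sketch of the standard update rule (Kingma & Ba, 2014) helps explain the note about the rate: the per-weight step magnitude is roughly the rate itself, independent of gradient scale, so values around 0.001 are typical where plain momentum might use much larger rates. This is the textbook formulation, not nnForge's actual code.

```cpp
// Minimal sketch of the standard ADAM update. The effective per-weight step
// is roughly +/- rate, which is why the rate is typically set much smaller
// than for plain or Nesterov momentum.
#include <cmath>
#include <cstddef>

struct adam_state
{
    float m; // first-moment (mean) estimate
    float v; // second-moment (uncentered variance) estimate
};

void adam_update(
    float * weights, const float * grads, adam_state * state, size_t n,
    int t,                  // 1-based step counter, for bias correction
    float rate = 0.001F,
    float beta1 = 0.9F, float beta2 = 0.999F, float eps = 1.0e-8F)
{
    const float bc1 = 1.0F - static_cast<float>(std::pow(beta1, t));
    const float bc2 = 1.0F - static_cast<float>(std::pow(beta2, t));
    for (size_t i = 0; i < n; ++i)
    {
        adam_state & s = state[i];
        s.m = beta1 * s.m + (1.0F - beta1) * grads[i];
        s.v = beta2 * s.v + (1.0F - beta2) * grads[i] * grads[i];
        float m_hat = s.m / bc1; // bias-corrected moments
        float v_hat = s.v / bc2;
        weights[i] -= rate * m_hat / (std::sqrt(v_hat) + eps);
    }
}
```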

v2.0.1

23 Nov 21:39
  • Multiple improvements to reduce total buffer sizes, allowing larger chunks to be run (3x for ImageNet):
    • Taking buffer sizes into account when coloring the graph (see the sketch after this list)
    • Maxout, ReLU, and MaxSubsampling layers consume much less memory in CUDA backend
    • Action graph is optimized to exclude unnecessary concurrency
  • Migrated to cuDNN v3
  • Reusing CUDA streams
  • Allocating a chunk of memory for fixed working buffers, which improves performance
  • A few bug fixes
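
The graph-coloring item above refers to a standard buffer-reuse technique: buffers whose lifetimes in the execution order don't overlap can be assigned the same allocation, and accounting for sizes lets small buffers reuse large slots without growing them. A minimal greedy sketch of that idea, purely illustrative and not nnForge's allocator:

```cpp
// Illustrative sketch of buffer reuse via lifetime analysis: two buffers may
// share one allocation ("color") when their lifetimes don't overlap. Assumes
// the requests are sorted by first_use. Not nnForge's actual allocator.
#include <cstddef>
#include <vector>

struct buffer_req
{
    int first_use; // index of first action touching the buffer
    int last_use;  // index of last action touching the buffer
    size_t size;   // bytes
};

// Returns the slot index assigned to each buffer and fills slot_sizes with
// each shared allocation's final size.
std::vector<int> assign_slots(const std::vector<buffer_req> & reqs, std::vector<size_t> & slot_sizes)
{
    std::vector<int> slot_of(reqs.size());
    std::vector<int> slot_free_at; // step at which each slot's occupant dies
    slot_sizes.clear();
    for (size_t i = 0; i < reqs.size(); ++i)
    {
        int chosen = -1;
        for (size_t s = 0; s < slot_free_at.size(); ++s)
        {
            // Reusable if the previous occupant is dead before we start;
            // taking sizes into account, prefer a slot that already fits.
            if (slot_free_at[s] < reqs[i].first_use
                && (chosen == -1 || slot_sizes[s] >= reqs[i].size))
                chosen = static_cast<int>(s);
        }
        if (chosen == -1)
        {
            chosen = static_cast<int>(slot_free_at.size());
            slot_free_at.push_back(0);
            slot_sizes.push_back(0);
        }
        slot_free_at[chosen] = reqs[i].last_use;
        if (slot_sizes[chosen] < reqs[i].size)
            slot_sizes[chosen] = reqs[i].size;
        slot_of[i] = chosen;
    }
    return slot_of;
}
```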

v2.0.0

07 Nov 07:57
  • The model is now an arbitrary DAG
  • Running independent actions in multiple streams in the CUDA backend (see the sketch below)
  • Memory buffers are heavily reused
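
A minimal CUDA C++ sketch of the multi-stream item above: two actions with no data dependency are issued on separate streams so the GPU may overlap them. The kernels are placeholders, not nnForge code.

```cpp
// Minimal sketch (compiled with nvcc) of issuing independent actions on
// separate CUDA streams so the hardware can overlap them. do_action_a and
// do_action_b stand in for any two kernels with no data dependency.
#include <cuda_runtime.h>

__global__ void do_action_a(float * data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0F;
}

__global__ void do_action_b(float * data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0F;
}

void run_independent_actions(float * d_a, float * d_b, int n)
{
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    // Two independent branches of the action DAG: issued on separate
    // streams, they may execute concurrently on the GPU.
    do_action_a<<<blocks, threads, 0, s1>>>(d_a, n);
    do_action_b<<<blocks, threads, 0, s2>>>(d_b, n);
    cudaStreamSynchronize(s1);
    cudaStreamSynchronize(s2);
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
}
```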

v1.2.0

30 Apr 17:22
  • Improvements to supervised_image_stream_reader
  • Model schema is now stored in Protobuf format. Use convert_schema to convert schemas from the old binary format to the new one
  • Input and output data normalizers are now stored in Protobuf format. Use convert_input_normalizer and convert_output_normalizer to convert existing binary normalizers to the new format
  • Nesterov momentum added (see --momentum_type option)
  • ROC result now outputs accuracy, precision, recall, and F-score in addition to AUC (see the sketch after this list)
  • snapshot_invalid now saves images, including the binary classifier case
  • uniform_intensity_data_transformer added
  • Momentum data is kept between epochs (it is saved and restored as well)
  • embed_data_transformer added
  • Schema and data are now considered compatible if their non-empty layers match; layers without data no longer matter
  • OverFeat functionality added (see the tiling option of the max subsampling layer, and the untile layer)
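
For the ROC item above, a minimal sketch of the standard definitions of the added metrics, computed from the confusion counts at a given threshold (whether nnForge reports F1 specifically is an assumption; guards against empty classes are omitted for brevity):

```cpp
// Standard classification metrics from binary confusion counts; these are
// the textbook definitions, not a dump of nnForge's code.
struct confusion_counts
{
    unsigned long tp, fp, tn, fn;
};

struct roc_metrics
{
    double accuracy, precision, recall, f_score;
};

roc_metrics compute_metrics(const confusion_counts & c)
{
    roc_metrics m;
    double total = static_cast<double>(c.tp + c.fp + c.tn + c.fn);
    m.accuracy = (c.tp + c.tn) / total;
    m.precision = c.tp / static_cast<double>(c.tp + c.fp);
    m.recall = c.tp / static_cast<double>(c.tp + c.fn);
    // F1: harmonic mean of precision and recall
    m.f_score = 2.0 * m.precision * m.recall / (m.precision + m.recall);
    return m;
}
```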

v1.1.13

26 Mar 20:38
  • Data transformers:
    -- Stretch added to the distort sampler transformer
    -- Perspective distortions added to the distort_2d transformer
    -- reshape_data_transformer added
    -- elastic_deformation_2d_data_transformer added
  • Mixture of models:
    -- Added --test_validate_save_output and --test_validate_load_output options
    -- Running testing and validation from a mixture of output_values
  • Readers:
    -- supervised_shuffle_entries_data_reader is made deterministic
    -- Deterministic image data reader is extended to the sampler
  • Layers:
    -- Parametric ReLU added, with CPU and GPU backends (see the sketch after this list)
    -- Average subsampling is reverted to native implementation (3D and 4D support)
  • Others:
    -- Taking ReLUs into account when initializing weights
    -- validate_progress_network_data_pusher is extended with frequency parameter
    -- Quasi-random training data randomization is dropped
    -- Memory consumption reduced during testing
    -- Resume training (-R) can now be applied with multiple ANNs training (-N)
    -- VS2013 projects and solution added (using CUDA 7.0)
    -- Fixed fancy backprop for analyzer
    -- Bug-fixes
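
For the Parametric ReLU item above, a minimal CPU sketch of the activation (He et al., 2015): positive inputs pass through, negative inputs are scaled by a learned slope. One slope per feature map is assumed here; nnForge's exact parameter layout may differ.

```cpp
// Minimal sketch of Parametric ReLU: f(x) = x for x > 0, f(x) = a * x
// otherwise, where the slope a is learned. One slope per feature map is an
// assumption, not necessarily nnForge's layout.
#include <cstddef>

void prelu_forward(
    const float * input, float * output,
    const float * slopes, // one learned slope per feature map
    size_t feature_map_count, size_t elems_per_map)
{
    for (size_t fm = 0; fm < feature_map_count; ++fm)
    {
        float a = slopes[fm];
        const float * in = input + fm * elems_per_map;
        float * out = output + fm * elems_per_map;
        for (size_t i = 0; i < elems_per_map; ++i)
            out[i] = (in[i] > 0.0F) ? in[i] : a * in[i];
    }
}
```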

v1.1.12

21 Jan 18:58
  • Using cuDNN for many layers now; Fermi is no longer supported
  • New transformers added: convert_to_polar_data_transformer, negate_data_transformer
  • New readers added: supervised_shuffle_entries_data_reader, and image-related readers (reading from stored raw JPEGs)
  • Dropout functionality is moved into its own layer, with better randomization (see the sketch below)
  • Soft rectified linear layer removed
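
For the dropout item above, a minimal sketch of a standalone dropout layer using the C++11 <random> facilities; the "inverted" scaling at training time is the common formulation and an assumption here, not necessarily nnForge's exact scheme.

```cpp
// Minimal sketch of a standalone dropout layer. The "inverted" scaling
// (divide by keep probability at train time) is the common formulation and
// an assumption here.
#include <cstddef>
#include <random>

void dropout_forward_train(
    const float * input, float * output, float * mask, size_t n,
    float dropout_rate, std::mt19937 & gen)
{
    std::bernoulli_distribution keep(1.0 - dropout_rate);
    float scale = 1.0F / (1.0F - dropout_rate);
    for (size_t i = 0; i < n; ++i)
    {
        mask[i] = keep(gen) ? scale : 0.0F; // saved for backprop
        output[i] = input[i] * mask[i];
    }
}

// Backward pass reuses the saved mask: gradients flow only through kept units.
void dropout_backward(const float * grad_out, const float * mask, float * grad_in, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        grad_in[i] = grad_out[i] * mask[i];
}
```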

v1.1.11

30 Nov 10:03
  • Padding added to sparse convolutional layers
  • Sparse convolutional layers implemented in GPU backend (Kepler+ only)
  • Fixed a bug with dropout when the error function is fused with the last activation function
  • Array of random numbers extended to 256K elements (for dropout)