Count-Sketch Optimizers

A compressed adaptive optimizer for training large-scale deep learning models using PyTorch.

Compressing Gradient Optimizers via Count-Sketches

An ICML 2019 paper by Ryan Spring, Anastasios Kyrillidis, Vijai Mohan, and Anshumali Shrivastava

BERT-Large Training Results

Trained with activation checkpointing and mixed-precision (FP16) training on NVIDIA V100 DGX-1 servers.

| BERT-Large      | Adam  | Count-Min Sketch (CMS) RMSprop |
| --------------- | ----- | ------------------------------ |
| Time (days)     | 5.32  | 5.52                           |
| Size (MB)       | 7,097 | 5,133                          |
| Test perplexity | 4.04  | 4.18                           |

[Figure] Convergence rate: Adam vs. CMS-RMSprop
[Figure] Faster convergence rate with a larger batch size: CMS-RMSprop

Instructions

  1. Install the requirements listed below
  2. Add the optimizers folder to $PYTHONPATH (see the sketch after this list)
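
For step 2, here is a minimal sketch of making the optimizers folder importable at runtime, as an alternative to exporting $PYTHONPATH in the shell. The path is a placeholder, and the import of the repository's optimizer class is left as a commented hint because the exact module and class names should be taken from the optimizers folder (e.g. dense_exp_cms.py); a stock torch.optim.RMSprop stands in so the snippet runs as-is.

```python
import sys
import torch
import torch.nn as nn

# Make the repository's optimizers folder importable at runtime
# (equivalent to adding it to $PYTHONPATH). Placeholder path.
sys.path.insert(0, "/path/to/Count-Sketch-Optimizers/optimizers")

# Hypothetical import -- check the optimizers folder (e.g. dense_exp_cms.py)
# for the actual module and class names:
# from dense_exp_cms import <CountMinSketchOptimizerClass>

model = nn.Linear(1024, 1024)

# Presumably the count-sketch optimizers are constructed like any
# torch.optim optimizer, e.g.
#   optimizer = <CountMinSketchOptimizerClass>(model.parameters(), lr=1e-3)
# A stock RMSprop stands in here so the snippet is runnable.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)

# One standard training step.
x = torch.randn(32, 1024)
loss = model(x).pow(2).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```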

Requirements

  1. torch
  2. torchvision
  3. cupy
  4. pynvrtc

Examples

  1. ImageNet - ResNet-18
  2. LM1B - Transformer / LSTM
  3. Wikitext-2 - LSTM

Dense Layer Support

We support compressing the dense layers of the neural network without update sparsity. During training, we update the auxiliary variables and perform the gradient update for each parameter in a single fused CUDA kernel. The dense kernel is equivalent to the sparse kernel; the main difference is that we explicitly avoid materializing the auxiliary variables for the dense layers in global memory. Instead, we access them inside the shared memory of the GPU Streaming Multiprocessor. Without this key feature, our approach would not save any GPU memory for the dense layers. In the sparse case, we assume that the set of non-zero gradient updates is significantly smaller than the auxiliary variables. (See dense_exp_cms.py for more details.)
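
For intuition, here is a minimal, self-contained Python sketch of the idea behind the compressed auxiliary state: a count-min sketch that approximates per-parameter second-moment statistics (as an RMSprop-style update would need) using a small hashed table instead of one entry per parameter. It is purely illustrative; the class and function names are made up here, and it does not reproduce the fused CUDA kernel or the shared-memory layout described above.

```python
import torch

class CMSSecondMoment:
    """Illustrative count-min sketch that approximates per-parameter
    second-moment statistics in O(depth * width) memory instead of O(n).
    Placeholder names; not the repository's fused CUDA implementation."""

    def __init__(self, width, depth=3, device="cpu"):
        self.width, self.depth = width, depth
        self.table = torch.zeros(depth, width, device=device)
        gen = torch.Generator().manual_seed(0)
        # Random parameters for a simple universal hash per row.
        self.a = torch.randint(1, 2**31 - 1, (depth,), generator=gen).to(device)
        self.b = torch.randint(0, 2**31 - 1, (depth,), generator=gen).to(device)
        self.prime = 2**31 - 1

    def _buckets(self, idx):
        # idx: LongTensor of parameter indices -> (depth, nnz) bucket ids.
        return ((self.a[:, None] * idx[None, :] + self.b[:, None]) % self.prime) % self.width

    def update(self, idx, sq_grad, beta=0.99):
        # Exponential moving average of squared gradients, accumulated
        # into the sketch; hash collisions can only add mass.
        self.table.mul_(beta)
        buckets = self._buckets(idx)
        for d in range(self.depth):
            self.table[d].index_add_(0, buckets[d], (1.0 - beta) * sq_grad)

    def query(self, idx):
        # Count-min estimate: the minimum across rows is the tightest bound.
        buckets = self._buckets(idx)
        rows = torch.stack([self.table[d][buckets[d]] for d in range(self.depth)])
        return rows.min(dim=0).values

# Toy usage: an RMSprop-style update on one flattened parameter tensor.
param = torch.randn(10_000, requires_grad=True)
sketch = CMSSecondMoment(width=1_000, depth=3)
loss = (param ** 2).sum()
loss.backward()
idx = torch.arange(param.numel())
sketch.update(idx, param.grad.detach() ** 2)
denom = sketch.query(idx).sqrt().add_(1e-8)
with torch.no_grad():
    param -= 1e-2 * param.grad / denom
```

In this toy version, hash collisions can only overestimate the second moment, so the resulting per-parameter step sizes err on the conservative side relative to an exact RMSprop update.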

References

  1. Transformer Architecture - Nvidia Megatron Language Model
  2. Compressing Gradient Optimizers via Count-Sketches (ICML 2019)
