Amazon DSSTNE: Deep Scalable Sparse Tensor Network Engine

DSSTNE (pronounced "Destiny") is an open source software library for training and deploying recommendation models with sparse inputs, fully connected hidden layers, and sparse outputs. Models whose weight matrices are too large for a single GPU can still be trained on a single host. DSSTNE has been used at Amazon to generate personalized product recommendations for our customers at Amazon's scale. It is designed for production deployment of real-world applications that need to emphasize speed and scale over experimental flexibility.

DSSTNE was built with a number of features for production recommendation workloads (see the network definition sketch after this list):

  • Multi-GPU Scale: Training and prediction both scale out to use multiple GPUs, spreading out computation and storage in a model-parallel fashion for each layer.
  • Large Layers: Model-parallel scaling enables larger networks than are possible with a single GPU.
  • Sparse Data: DSSTNE is optimized for fast performance on sparse datasets, common in recommendation problems. Custom GPU kernels perform sparse computation on the GPU, without filling in lots of zeroes.
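To make the sparse-in/sparse-out structure concrete, here is a minimal sketch of a network definition in the JSON format DSSTNE uses, modeled on the definitions in the repository's examples. The layer names, hidden size, and dataset names (gl_input, gl_output) are illustrative assumptions; check the User Guide for the authoritative schema.

    {
        "Version" : 0.7,
        "Name" : "SparseRecommender",
        "Kind" : "FeedForward",
        "ErrorFunction" : "ScaledMarginalCrossEntropy",
        "Layers" : [
            { "Name" : "Input",  "Kind" : "Input",  "N" : "auto", "DataSet" : "gl_input",  "Sparse" : true },
            { "Name" : "Hidden", "Kind" : "Hidden", "Type" : "FullyConnected", "N" : 128, "Activation" : "Sigmoid" },
            { "Name" : "Output", "Kind" : "Output", "Type" : "FullyConnected", "N" : "auto", "DataSet" : "gl_output", "Activation" : "Sigmoid", "Sparse" : true }
        ]
    }

The input and output layers are marked Sparse so the custom GPU kernels operate directly on the non-zero entries, while the fully connected hidden layer is dense.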

Benchmarks

  • See Scaling up for benchmark results

License

  • See License for licensing details

Setup

  • Follow Setup for step-by-step instructions on installing and setting up DSSTNE

User Guide

  • Check the User Guide for detailed information about the features in DSSTNE

Examples

  • Check the Examples to get started building your first neural network models with DSSTNE; a sketch of the command-line workflow follows
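As a taste of the workflow the examples walk through, DSSTNE's three command-line utilities are typically run in sequence: generateNetCDF to convert raw data into the NetCDF format DSSTNE consumes, train to fit a network from a JSON definition, and predict to emit top-k recommendations. The flags below are recalled from the shipped examples, and the file names (data.txt, config.json, network.nc) are placeholders; treat this as a sketch and consult the Examples page for the exact invocations.

    # Convert raw text data into NetCDF input and output datasets
    generateNetCDF -d gl_input  -i data.txt -o gl_input.nc  -f features_input  -s samples_input -c
    generateNetCDF -d gl_output -i data.txt -o gl_output.nc -f features_output -s samples_input -c

    # Train the network described in config.json (batch size 256, 10 epochs)
    train -c config.json -i gl_input.nc -o gl_output.nc -n network.nc -b 256 -e 10

    # Write the top 10 predictions per sample from the trained network
    predict -b 256 -d gl -i features_input -o features_output -k 10 -n network.nc -f data.txt -s recs -r data.txt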

Q&A

  • See the FAQ for answers to common questions
