# Deep learning development tools using Docker
- Install Docker (version 19.03 or newer)
- Install the NVIDIA Container Toolkit: https://github.com/NVIDIA/nvidia-docker
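To confirm that GPUs are reachable from containers, you can run `nvidia-smi` inside a CUDA base image; the image tag below is illustrative, pick one matching your driver:

```sh
# Sanity check: the GPU should be listed from inside a container.
# The image tag is illustrative; any CUDA base image will do.
docker run --rm --gpus all nvidia/cuda:12.1.1-base-ubuntu22.04 nvidia-smi
```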
Add these lines to your `.bashrc` or `.zshrc`; they let the containers run as your own user:
```sh
export GID=$(id -g)
export USER=$(id -u -n)
export HOST_GROUP=$(id -g -n)
export HOST_UID=$(id -u)
```
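Reload your shell configuration (or open a new terminal) so the variables are visible to docker-compose:

```sh
# Pick the file matching your shell.
source ~/.bashrc   # or: source ~/.zshrc
```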
Set NVIDIA as the default container runtime for Docker in `/etc/docker/daemon.json`; this is required for running the images through docker-compose:
```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```
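After editing `daemon.json`, restart the Docker daemon and check that the runtime is active (the commands assume a systemd-based host):

```sh
# Restart Docker so the new default runtime takes effect,
# then confirm that "nvidia" shows up among the runtimes.
sudo systemctl restart docker
docker info | grep -i runtime
```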
To keep development portable and deployable to the cloud, it is recommended to run the IDE inside the container. The legacy solution installed the IDE into the image itself, which inflated image sizes with packages useless at runtime. Best practice is instead to mount your IDE into the container (specify the mount in the docker-compose file or in the `docker run` command) and start it from inside the container; a sketch follows below. GUI access is granted on Intel, AMD, and NVIDIA GPUs by default. This keeps code portable and makes team collaboration easier.
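A minimal sketch of mounting a host-installed IDE and forwarding X11 for the GUI; the image name, IDE path, and launcher binary are placeholders for your own setup:

```sh
# Hypothetical example: mount an IDE installed on the host and launch it
# from inside the container. Paths and the image name are illustrative.
docker run --rm -it \
    --gpus all \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v $HOME/opt/my-ide:/opt/my-ide \
    xmindai/cuda-python-development \
    /opt/my-ide/bin/ide
```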
Supported CUDA versions: `11.8.0` and `12.1.1` in all images. Following best practice, the `latest` tag is not used.
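Pulling by a pinned version tag keeps builds reproducible; the tag scheme below is an assumption, check the registry for the exact names:

```sh
# Hypothetical pull with a pinned CUDA version tag (tag naming is illustrative).
docker pull xmindai/cuda-cudnn-opengl:12.1.1
```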
Contents:
- CUDA-dev
- cuDNN-dev
- CUDA OpenGL-dev
Built on `xmindai/cuda-cudnn-opengl`:
- User layer: lets an arbitrary user log in
- Development layer from the `general-development` folder, supporting the complete graphical development life-cycle in a single container
Built on `xmindai/cuda-cpp`. Adds CUDA Python libraries via `pyenv`:
- PyTorch (GPU)
- TensorFlow (GPU)
- CUDA RAPIDS
- Many more, defined in `cuda-python-development/requirements.txt`
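Once inside a container, a quick way to confirm the frameworks listed above actually see the GPU:

```sh
# Inside the container: both commands should report an available GPU.
python -c "import torch; print(torch.cuda.is_available())"
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```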
Start an interactive session:

```sh
docker-compose -f xmind-development/docker-compose.yml run dev
```
Start the ssh service (in the background):

```sh
docker-compose -f xmind-development/docker-compose.yml up -d ssh
```
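With the service running, you can connect over ssh; the port depends on the mapping in the compose file (2222 below is a placeholder):

```sh
# Hypothetical connection; replace 2222 with the port published in the compose file.
ssh -p 2222 $USER@localhost
```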
Stop the ssh service (`docker-compose down` takes no service argument, so use `stop`):

```sh
docker-compose -f xmind-development/docker-compose.yml stop ssh
```
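To see which services are currently up:

```sh
docker-compose -f xmind-development/docker-compose.yml ps
```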