Deprecation notice

This repository is no longer updated, since NVIDIA now provides its own Docker images based on Ubuntu 20.04 as of JetPack SDK 5.0.1 (Link to images).
Therefore, the Dockerfiles in this repository should only be used as a reference if CUDA 10.2 is a requirement.
Nvidia Jetson images are based on Ubuntu 18.04. However, many applications and projects use libraries specific to Ubuntu 20.04. Therefore, this repository provides Docker images based on ubuntu:focal that are able to take full advantage of the Jetson hardware (Nano, Xavier NX, Xavier AGX and TX2). All images come with full CUDA support (passthrough from the host), including TensorRT (with Python bindings) and VisionWorks.
Furthermore, the scripts folder includes system-installable run scripts to quickly iterate on the build process and set the relevant Docker flags at runtime.
The following images can be directly pulled from DockerHub without needing to build the containers yourself:
Image | L4T Version | DockerHub image
---|---|---
l4t-ubuntu20-base | R32.7.1 | timongentzsch/l4t-ubuntu20-base:latest
l4t-ubuntu20-opencv | R32.7.1 | timongentzsch/l4t-ubuntu20-opencv:latest
l4t-ubuntu20-pytorch | R32.7.1 | timongentzsch/l4t-ubuntu20-pytorch:latest
l4t-ubuntu20-ros2-base | R32.7.1 | timongentzsch/l4t-ubuntu20-ros2-base:latest
l4t-ubuntu20-ros2-desktop | R32.7.1 | timongentzsch/l4t-ubuntu20-ros2-desktop:latest
l4t-ubuntu20-zedsdk | R32.7.1 | timongentzsch/l4t-ubuntu20-zedsdk:latest
l4t-ubuntu20-crosscompile | R32.7.1 | timongentzsch/l4t-ubuntu20-crosscompile:latest
note: make sure to run the container on the intended L4T host system. Running on older JetPack releases (e.g. R32.6.1) can cause driver issues, since the L4T drivers are passed into the container.

note: the pytorch image also includes the Python 3.8 TensorRT bindings.
To download and run one of these images, you can use the included run script from the repo:
```
$ scripts/docker_run timongentzsch/l4t-ubuntu20-base:latest
```
For other configurations, below are the instructions to build and test the containers using the included Dockerfiles.
To enable access to the CUDA compiler (`nvcc`) during `docker build` operations, add `"default-runtime": "nvidia"` to your `/etc/docker/daemon.json` configuration file before attempting to build the containers:
```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
```
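Before restarting Docker, it can be worth validating the edited file, since a syntax error in daemon.json will keep the Docker daemon from starting at all. A minimal sketch of such a check, run here against a temporary copy for illustration (on a real Jetson you would point it at /etc/docker/daemon.json):

```shell
# Write the sample config to a temporary file for illustration;
# on the device this would be /etc/docker/daemon.json.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "default-runtime": "nvidia"
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON, so this
# catches typos before they take the daemon down.
if python3 -m json.tool "$CONF" > /dev/null; then
    RESULT="daemon.json OK"
else
    RESULT="daemon.json INVALID"
fi
echo "$RESULT"
```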
You will then want to restart the Docker service or reboot your system before proceeding.
To rebuild the containers from a Jetson device, first clone this repo via Git LFS:

```
$ git clone https://github.com/timongentzsch/Jetson_Ubuntu20_Images.git
$ cd Jetson_Ubuntu20_Images
```
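Since the repo is cloned via Git LFS, it is worth verifying that the git-lfs extension is installed first; without it, LFS-tracked files are checked out as small pointer stubs instead of their real content. A quick check (the suggested package name is the usual Ubuntu one, an assumption here):

```shell
# Check for the git-lfs extension before cloning; without it,
# LFS-tracked files come down as pointer stubs, not real content.
if command -v git-lfs >/dev/null 2>&1; then
    LFS_STATUS="git-lfs present"
else
    LFS_STATUS="missing - install with: sudo apt-get install git-lfs && git lfs install"
fi
echo "$LFS_STATUS"
```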
You may want to install the provided scripts to build, run and restart containers with the right set of Docker flags:

```
$ sudo scripts/setup_launchscripts.sh
```

This will enable you to quickly iterate on your build process and application. Sudo is needed to create the dummy_root_SSH_config and enable SSH access for root users.
After that you can use the following commands globally: `dbuild`, `drun`, `dstart`.

These ensure that the Docker environment feels as native as possible by enabling the following features by default:
- USB hot plug
- sound
- network
- bluetooth
- GPU/CUDA
- X11
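As a rough sketch of how those defaults map onto plain docker run options (these are standard Docker flags, but their use here is an assumption; the installed drun script is the source of truth):

```shell
# Hypothetical reconstruction of the flags a wrapper like drun
# might pass; check the installed script for the actual set.
IMAGE="timongentzsch/l4t-ubuntu20-base:latest"    # example image

FLAGS="--runtime nvidia"                          # GPU/CUDA passthrough
FLAGS="$FLAGS --network host"                     # host network + bluetooth stack
FLAGS="$FLAGS --privileged -v /dev:/dev"          # USB hot plug, sound devices
FLAGS="$FLAGS -e DISPLAY=$DISPLAY"                # X11 display variable
FLAGS="$FLAGS -v /tmp/.X11-unix:/tmp/.X11-unix"   # X11 socket

# Dry run: print the assembled command instead of executing it.
echo docker run -it --rm $FLAGS "$IMAGE"
```

Treat this as illustrative only; the installed scripts set the real flags.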
note: refer to `--help` for the syntax.