Isaac ROS 0.20.0 (DP2)
jaiveersinghNV committed Oct 19, 2022
1 parent 0235d5d commit 883d965
Showing 29 changed files with 669 additions and 256 deletions.
14 changes: 14 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,14 @@
# Isaac ROS Contribution Rules

Any contribution that you make to this repository will
be under the Apache 2 License, as dictated by that
[license](http://www.apache.org/licenses/LICENSE-2.0.html):

> **5. Submission of Contributions.** Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

Contributors must sign off each commit by adding a `Signed-off-by: ...`
line to commit messages to certify that they have the right to submit
the code they are contributing to the project according to the
[Developer Certificate of Origin (DCO)](https://developercertificate.org/).
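
For example, `git commit -s` adds the sign-off line automatically from your configured Git identity (the commit message below is illustrative):

```bash
# -s appends "Signed-off-by: Your Name <you@example.com>" to the commit message
git commit -s -m "Fix CenterPose decoder shutdown handling"
```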

[//]: # (202201002)
266 changes: 201 additions & 65 deletions LICENSE


88 changes: 65 additions & 23 deletions README.md


39 changes: 28 additions & 11 deletions docs/centerpose.md
@@ -1,10 +1,18 @@
# Tutorial for CenterPose Inference using Triton

<div align="center"><img src="../resources/centerpose_rviz.png" width="600px"/></div>

## Overview

This tutorial walks you through a pipeline to estimate the 6DOF pose of a target object using [CenterPose](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation) with Triton. It uses input monocular images from a rosbag.
> **Warning**: These steps will only work on `x86_64` and **NOT** on `Jetson`.

## Tutorial Walkthrough

1. Complete steps 1-5 of the quickstart [here](../README.md#quickstart).
2. Select a CenterPose model from the model collection linked on the official [CenterPose GitHub](https://github.com/NVlabs/CenterPose) repository [here](https://drive.google.com/drive/folders/1QIxcfKepOR4aktOz62p3Qag0Fhm0LVa0). The model is assumed to be downloaded to `~/Downloads` outside the Docker container. This example uses `shoe_resnet_140.pth`, which should be copied into `/tmp/models` inside the Docker container:
> **Note**: This command should be run outside the container.
```bash
cd ~/Downloads && \
docker cp shoe_resnet_140.pth isaac_ros_dev-x86_64-container:/tmp/models
```

@@ -13,59 +21,68 @@ This tutorial is for using CenterPose with Triton.
> **Warning**: The models in the root directory of the model collection listed above will *NOT WORK* with our inference nodes because they have custom layers supported by neither TensorRT nor Triton. Make sure to use the PyTorch weights that have the string `resnet` in their file names.

3. Create a model repository with version `1`:

```bash
mkdir -p /tmp/models/centerpose_shoe/1
```

4. Create a configuration file for this model at path `/tmp/models/centerpose_shoe/config.pbtxt`. Note that the name in this file must match the model repository name. An example is provided at `isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt`; copy it into place:

```bash
cp /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/test/models/centerpose_shoe/config.pbtxt /tmp/models/centerpose_shoe/config.pbtxt
```

5. To run the TensorRT engine plan, convert the PyTorch model to ONNX first. Export the model into an ONNX file using the script provided under `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py`:

```bash
python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_centerpose/scripts/centerpose_pytorch2onnx.py --input /tmp/models/shoe_resnet_140.pth --output /tmp/models/centerpose_shoe/1/model.onnx
```

6. To get a TensorRT engine plan file with Triton, export the ONNX model into a TensorRT engine plan file using the built-in TensorRT converter `trtexec`:

```bash
/usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/centerpose_shoe/1/model.onnx --saveEngine=/tmp/models/centerpose_shoe/1/model.plan
```
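
At this point, the model repository should look roughly like this (an illustrative listing; `model.onnx` and `model.plan` come from steps 5 and 6):

```bash
# List the model repository contents (output shown as comments, illustrative)
find /tmp/models/centerpose_shoe
# /tmp/models/centerpose_shoe
# /tmp/models/centerpose_shoe/config.pbtxt
# /tmp/models/centerpose_shoe/1
# /tmp/models/centerpose_shoe/1/model.onnx
# /tmp/models/centerpose_shoe/1/model.plan
```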

7. Inside the container, build and source the workspace:

```bash
cd /workspaces/isaac_ros-dev && \
colcon build --symlink-install && \
source install/setup.bash
```
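
As an optional check, confirm the package is visible to ROS 2 after sourcing:

```bash
# Should print isaac_ros_centerpose if the build and source succeeded
ros2 pkg list | grep isaac_ros_centerpose
```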

8. Start `isaac_ros_centerpose` using the launch file:

```bash
ros2 launch isaac_ros_centerpose isaac_ros_centerpose.launch.py model_name:=centerpose_shoe model_repository_paths:=['/tmp/models']
```

Then open **another** terminal, and enter the Docker container again:

```bash
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
./scripts/run_dev.sh
```

Then, play the ROS bag:

```bash
ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/centerpose_rosbag/
```
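
Optionally, inspect the bag before playing it to confirm its topics (an optional check using the path from this tutorial):

```bash
# Show the topics and message counts recorded in the bag
ros2 bag info src/isaac_ros_pose_estimation/resources/rosbags/centerpose_rosbag/
```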

9. Open another terminal window and attach to the same container. You should be able to get the poses of the objects in the images through `ros2 topic echo`:

In a **third** terminal, enter the Docker container again:

```bash
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
./scripts/run_dev.sh
```

```bash
source install/setup.bash && \
ros2 topic echo /object_poses
```

10. Launch `rviz2`. Click on the `Add` button, select "By topic", and choose `MarkerArray` under `/object_poses`. Set the fixed frame to `centerpose`. You should see the cuboid marker representing the detected object's pose, as shown in the screenshot at the top of this page.
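
rviz2 can be started from inside the container (a minimal sketch, assuming the workspace has been sourced):

```bash
# Launch RViz2 to visualize the MarkerArray output
rviz2
```
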
50 changes: 40 additions & 10 deletions docs/dope-triton.md
@@ -1,29 +1,49 @@
# Tutorial for DOPE Inference

<div align="center"><img src="../resources/dope_rviz2.png" width="600px"/></div>

## Overview

This tutorial walks you through a pipeline to estimate the 6DOF pose of a target object using [DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation) with different backends. It uses input monocular images from a rosbag. The different backends shown are:

1. PyTorch and ONNX
2. TensorRT Plan files with Triton
3. PyTorch model with Triton

> **Note**: The DOPE converter script only works on `x86_64`, so the resulting `onnx` model from these steps must be copied to the Jetson.

## Tutorial Walkthrough

1. Complete steps 1-6 of the quickstart [here](../README.md#quickstart).
2. Make a directory called `Ketchup` inside `/tmp/models`, which will serve as the model repository. This will be versioned as `1`. The downloaded model will be placed here:

```bash
mkdir -p /tmp/models/Ketchup/1 && \
mv /tmp/models/Ketchup.pth /tmp/models/Ketchup/
```

3. Now select a backend. The PyTorch and ONNX options **MUST** be run on `x86_64`:
- To run ONNX models with Triton, export the model into an ONNX file using the script provided under `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py`:

```bash
python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format onnx --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.onnx --input_name INPUT__0 --output_name OUTPUT__0
```

- To run `TensorRT Plan` files with Triton, first copy the generated `onnx` model from the previous step to the target platform (e.g. a Jetson or an `x86_64` machine). The model is assumed to be copied to `/tmp/models/Ketchup/1/model.onnx` inside the Docker container. Then use `trtexec` to convert the `onnx` model to a `plan` model:

```bash
/usr/src/tensorrt/bin/trtexec --onnx=/tmp/models/Ketchup/1/model.onnx --saveEngine=/tmp/models/Ketchup/1/model.plan
```
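
If the `onnx` model was generated on a separate `x86_64` machine, one way to copy it to the Jetson is over `scp` (a sketch; the user and hostname are placeholders):

```bash
# Copy the exported ONNX model to the Jetson (user/hostname are illustrative)
scp /tmp/models/Ketchup/1/model.onnx nvidia@jetson.local:/tmp/models/Ketchup/1/model.onnx
```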

- To run a PyTorch model with Triton (**PyTorch model inference is supported on `x86_64` only**), the model must be saved using `torch.jit.save()`, but the downloaded DOPE model was saved with `torch.save()`. Export the DOPE model using the script provided under `/workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py`:
```bash
python3 /workspaces/isaac_ros-dev/src/isaac_ros_pose_estimation/isaac_ros_dope/scripts/dope_converter.py --format pytorch --input /tmp/models/Ketchup/Ketchup.pth --output /tmp/models/Ketchup/1/model.pt
```
4. Create a configuration file for this model at path `/tmp/models/Ketchup/config.pbtxt`. Note that the name must match the model repository name. Depending on the backend selected in the previous step, a slightly different `config.pbtxt` file must be created: `onnxruntime_onnx` (`.onnx` file), `tensorrt_plan` (`.plan` file) or `pytorch_libtorch` (`.pt` file):
```log
name: "Ketchup"
platform: <insert-platform>
max_batch_size: 0
@@ -47,52 +67,62 @@
}
}
```
The `<insert-platform>` part should be replaced with `onnxruntime_onnx` for `.onnx` files, `tensorrt_plan` for `.plan` files and `pytorch_libtorch` for `.pt` files.
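
For example, for the ONNX backend the placeholder can be filled in directly (a sketch; assumes you kept the `<insert-platform>` text from the snippet above):

```bash
# Set the Triton backend for an ONNX model in the config file
sed -i 's/platform: <insert-platform>/platform: "onnxruntime_onnx"/' /tmp/models/Ketchup/config.pbtxt
```
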
> **Note**: The DOPE decoder currently works with the output of a DOPE network that has a fixed input size of 640 x 480, which are the default dimensions set in the script. In order to use input images of other sizes, make sure to crop or resize using ROS2 nodes from [Isaac ROS Image Pipeline](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_image_pipeline) or similar packages.
> **Note**: The model name must be `model.onnx`.

5. Rebuild and source `isaac_ros_dope`:
```bash
cd /workspaces/isaac_ros-dev
colcon build --packages-up-to isaac_ros_dope && source install/setup.bash
```
6. Start `isaac_ros_dope` using the launch file:
```bash
ros2 launch isaac_ros_dope isaac_ros_dope_triton.launch.py model_name:=Ketchup model_repository_paths:=['/tmp/models'] input_binding_names:=['INPUT__0'] output_binding_names:=['OUTPUT__0'] object_name:=Ketchup
```
> **Note**: `object_name` should correspond to one of the objects listed in the DOPE configuration file, and the specified model should be a DOPE model that is trained for that specific object.

7. Open **another** terminal, and enter the Docker container again:
```bash
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
./scripts/run_dev.sh
```
Then, play the ROS bag:
```bash
ros2 bag play -l src/isaac_ros_pose_estimation/resources/rosbags/dope_rosbag/
```
8. Open another terminal window and attach to the same container. You should be able to get the poses of the objects in the images through `ros2 topic echo`:
In a **third** terminal, enter the Docker container again:
```bash
cd ~/workspaces/isaac_ros-dev/src/isaac_ros_common && \
./scripts/run_dev.sh
```
```bash
ros2 topic echo /poses
```
> **Note**: We are echoing `/poses` because we remapped the original topic `/dope/pose_array` to `poses` in the launch file.
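
As a quick sanity check, the remapped topic should appear in the topic list (output will vary):

```bash
# Confirm that /poses is being published
ros2 topic list | grep poses
```
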
Now visualize the pose array in rviz2:
```bash
rviz2
```
Then click on the `Add` button, select `By topic` and choose `PoseArray` under `/poses`. Finally, change the display to show axes by updating `Shape` to `Axes`, as shown in the screenshot at the top of this page. Make sure to update the `Fixed Frame` to `camera`.
> **Note:** For best results, crop/resize input images to the same dimensions your DNN model is expecting.
21 changes: 15 additions & 6 deletions isaac_ros_centerpose/CMakeLists.txt
@@ -1,10 +1,19 @@
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0

cmake_minimum_required(VERSION 3.5)
project(isaac_ros_centerpose LANGUAGES PYTHON)
18 changes: 18 additions & 0 deletions isaac_ros_centerpose/config/decoder_params.yaml
@@ -1,3 +1,21 @@
%YAML 1.2
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
---
centerpose_decoder_node:
ros__parameters:
camera_matrix: [616.078125, 0.0, 325.8349304199219, 0.0, 616.1030883789062, 244.4612274169922, 0.0, 0.0, 1.0]
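
For reference, these nine values appear to be the row-major entries of a 3x3 pinhole intrinsic matrix (an interpretation based on the values, not stated in the file):

```latex
% camera_matrix laid out as a 3x3 intrinsic matrix K (row-major)
K = \begin{bmatrix}
  f_x & 0   & c_x \\
  0   & f_y & c_y \\
  0   & 0   & 1
\end{bmatrix}
= \begin{bmatrix}
  616.078 & 0       & 325.835 \\
  0       & 616.103 & 244.461 \\
  0       & 0       & 1
\end{bmatrix}
```
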
18 changes: 18 additions & 0 deletions isaac_ros_centerpose/config/decoder_params_test.yaml
@@ -1,3 +1,21 @@
%YAML 1.2
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
---
isaac_ros_test:
centerpose_decoder_node:
ros__parameters:
27 changes: 19 additions & 8 deletions isaac_ros_centerpose/isaac_ros_centerpose/CenterPoseDecoder.py
@@ -1,10 +1,19 @@
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0

from isaac_ros_centerpose.CenterPoseDecoderUtils import Cuboid3d, CuboidPNPSolver, \
merge_outputs, nms, object_pose_post_process, tensor_to_numpy_array, \
@@ -109,7 +118,7 @@ def decode_impl(hm, wh, kps, hm_hp, reg, hp_offset, obj_scale, K)
min_dist = np.expand_dims(min_dist, -1)
min_ind = np.broadcast_to(np.reshape(min_ind, (num_joints, K, 1, 1)),
(batch, num_joints, K, 1, 2))

# make hm_kps and min_ind writable
hm_kps.setflags(write=1)
min_ind.setflags(write=1)
@@ -332,7 +341,9 @@ def main(args=None):
pass
finally:
node.destroy_node()
# only shut down if context is active
if rclpy.ok():
rclpy.shutdown()


if __name__ == '__main__':
isaac_ros_centerpose/isaac_ros_centerpose/CenterPoseDecoderUtils.py
@@ -1,10 +1,19 @@
# SPDX-FileCopyrightText: NVIDIA CORPORATION & AFFILIATES
# Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# SPDX-License-Identifier: Apache-2.0
from enum import IntEnum

import cv2
