Release Notes for FoundationPoseROS2

Version: 1.0.0

Release Date: November 18, 2024


Introduction

This is the first official release of FoundationPoseROS2, a ROS2 framework for 6D pose estimation and tracking of novel objects. Built on the FoundationPose architecture and the Segment Anything Model 2 (SAM2), it provides real-time, model-based multi-object pose estimation and tracking on modest hardware (a single 8GB NVIDIA GPU).

This release is aimed at developers, researchers, and integrators working on robotics, computer vision, and real-time object tracking in automation settings.


Key Features

  1. ROS2 Integration

    • Real-time framework fully compatible with ROS2 (a minimal subscriber sketch follows this list).
    • Operates efficiently on an 8GB NVIDIA GPU.
  2. SAM2-Based Object Segmentation

    • Automatic segmentation of objects in color and depth images.
    • Supports real-time adjustments.
  3. Multi-Object Pose Estimation and Tracking

    • Seamless assignment of object models to segmented masks.
    • Handles multiple objects concurrently.
  4. Interactive GUI for Object Selection

    • User-friendly GUI to assign and reorder object models (.obj or .stl formats).
  5. 3D Visualization

    • Displays object poses with bounding boxes and axes.
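
To make the ROS2 integration concrete, here is a minimal sketch of a node that subscribes to the RealSense color and aligned-depth streams, which are the inputs the segmentation and pose-estimation pipeline consumes. This is an illustration rather than the released node: the class name, topic names, and logging are assumptions based on the realsense2_camera defaults and may need remapping for your launch configuration.

    # minimal_rgbd_listener.py -- illustrative sketch, not part of the release.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    class RGBDListener(Node):
        """Subscribes to the color and aligned-depth topics used by the pipeline."""

        def __init__(self):
            super().__init__('rgbd_listener')
            self.bridge = CvBridge()
            # Topic names assume the realsense2_camera defaults; remap as needed.
            self.create_subscription(
                Image, '/camera/color/image_raw', self.on_color, 10)
            self.create_subscription(
                Image, '/camera/aligned_depth_to_color/image_raw', self.on_depth, 10)

        def on_color(self, msg):
            # BGR frame; in the full pipeline this is what SAM2 segments.
            color = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
            self.get_logger().info(f'color frame: {color.shape}')

        def on_depth(self, msg):
            # The RealSense driver publishes 16-bit depth in millimeters.
            depth = self.bridge.imgmsg_to_cv2(msg, desired_encoding='passthrough')
            self.get_logger().info(f'depth frame: {depth.shape}')

    def main():
        rclpy.init()
        rclpy.spin(RGBDListener())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()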

Setup Highlights

  1. Prerequisites

    • Operating System: Ubuntu
    • Hardware: Intel RealSense Camera, NVIDIA GPU (min. 8GB)
    • CUDA: 12.x
  2. Environment Setup

    • Miniconda-based Python environment with ROS2 compatibility.
    • CUDA environment optimization included in the build process.
  3. Compatibility

    • Python versions align with ROS2 distributions (e.g., Python 3.8 for Foxy, Python 3.10 for Humble); a quick environment check is sketched below.
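
As a quick check of the prerequisites above, a short script can report the Python version, CUDA availability, and GPU memory. This is a hedged sketch, not part of the package; it assumes PyTorch is installed in the conda environment, which FoundationPose itself already requires.

    # check_env.py -- illustrative sanity check for the prerequisites above.
    import sys

    import torch

    def main():
        print(f'Python: {sys.version.split()[0]}')  # expect 3.8 (Foxy) or 3.10 (Humble)
        if not torch.cuda.is_available():
            print('CUDA not available; check the NVIDIA driver and CUDA 12.x install.')
            return
        props = torch.cuda.get_device_properties(0)
        vram_gb = props.total_memory / (1024 ** 3)
        print(f'GPU: {props.name}, {vram_gb:.1f} GB VRAM, CUDA {torch.version.cuda}')
        if vram_gb < 8:
            print('Warning: FoundationPoseROS2 targets GPUs with at least 8GB of memory.')

    if __name__ == '__main__':
        main()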

Improvements Over Previous Frameworks

  • Unlike isaac_ros_foundationpose, this release operates efficiently on an 8GB GPU, lowering hardware barriers.
  • Fully automated segmentation and pose estimation using the SAM2 framework.
  • Enhanced compatibility with modern GPU architectures by upgrading the build standard from C++14 to C++17.

Demonstration and Usage

  1. Run RealSense2 Camera Node

    ros2 launch realsense2_camera rs_launch.py enable_rgbd:=true enable_sync:=true align_depth.enable:=true enable_color:=true enable_depth:=true pointcloud.enable:=true
    
  2. Launch FoundationPoseROS2

    conda activate foundationpose_ros && source /opt/ros/<ROS_DISTRO>/setup.bash && python ./FoundationPoseROS2/foundationpose_ros_multi.py
    
  3. Rosbag Playback for Demos

    • Play back recorded rosbag data to test object tracking without a live camera.
  4. Support for Novel Objects

    • Import .obj or .stl mesh files into the designated folder for custom object tracking (see the mesh-loading sketch below).
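
The snippet below sketches how custom meshes could be listed and inspected before being assigned in the GUI. It is illustrative only: the folder path is a placeholder for the designated mesh folder, and trimesh is used as an assumed, commonly available loader rather than the package's own mesh-handling code.

    # inspect_meshes.py -- illustrative only; the path and loader are assumptions.
    from pathlib import Path

    import trimesh

    MESH_DIR = Path('./meshes')  # placeholder for the designated mesh folder

    def load_meshes(mesh_dir: Path = MESH_DIR):
        """Load every .obj/.stl file so it can later be assigned to a segmented mask."""
        meshes = {}
        for mesh_path in sorted(mesh_dir.iterdir()):
            if mesh_path.suffix.lower() not in {'.obj', '.stl'}:
                continue
            mesh = trimesh.load(mesh_path, force='mesh')
            # Meshes authored in millimeters can be rescaled to meters with
            # mesh.apply_scale(0.001).
            meshes[mesh_path.stem] = mesh
            print(f'{mesh_path.name}: {len(mesh.vertices)} vertices, extents {mesh.extents}')
        return meshes

    if __name__ == '__main__':
        load_meshes()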

Acknowledgements

This release is supported by funding from the EU Commission Recovery and Resilience Facility under the Science Foundation Ireland Future Digital Challenge Grant (Grant Number: 22/NCF/FD/10929).


Contributions and Support

We welcome contributions and suggestions from the community. For issues, feature requests, or general queries, please raise a ticket in the Issues section of this repository.


FoundationPoseROS2 — Redefining 6D pose estimation in ROS2 environments.