
SLAM with Obstacle Avoidance


MAE 148 Final Project

Team 7 Winter 2024

Table of Contents

  1. Team Members
  2. Abstract
  3. What We Promised
  4. Accomplishments
  5. Challenges
  6. Final Project Videos
  7. Software
  8. Hardware
  9. Gantt Chart
  10. Course Deliverables
  11. Project Reproduction
  12. Acknowledgements
  13. Contacts

Team Members

Winston Chou - MAE Ctrls & Robotics (MC34) - Class of 2026 - LinkedIn

Amir Riahi - ECE - UPS Student

Rayyan Khalid - MAE Ctrls & Robotics (MC34) - Class of 2025


Abstract

The project's goal is to develop a robotic system capable of mapping a new enclosed environment and determining a path from a specified starting point to a desired destination while avoiding obstacles along the way. This involves integrating sensors for environmental perception, implementing mapping and localization algorithms, designing path planning and obstacle avoidance strategies, and creating a robust control system for the robot's navigation.

The robot utilizes the ROS2 Navigation 2 stack, integrating LiDAR for SLAM (Simultaneous Localization and Mapping) along with the OAK-D Lite depth camera's point cloud for real-time obstacle avoidance.


What We Promised

Must Have

  • Integrate LiDAR sensor(s) into the ROS2 system. Utilize the ROS2 Navigation 2 stack to perform SLAM using LiDAR data.

Nice to Have

  • Integrate the OAK-D Lite depth camera and develop algorithms within ROS2 to process the point cloud data it generates for real-time obstacle detection. (Detection only, for now)
  • Move the robot from a given location A to a desired location B using the ROS2 Navigation 2 stack and the integrated sensors. (Not yet implemented)

Accomplishments

  • SLAM development accomplished
    • Enables the robot to map an unknown environment and localize itself within it.
    • Seeed IMU setup for better localization (Extended Kalman Filter).
  • Obstacle Avoidance
    • Used the OAK-D Lite's depth sensing to generate a point cloud representation of the environment (visualized in Rviz2 and Foxglove Studio)
    • Simple obstacle detection algorithm

Challenges

  • Nav2 Stack is a complex but useful system for developing an autonomous robot.
  • Further Actions:
    • PointCloud Dynamic Obstacle Detection:
      • Develop an algorithm to record the positions of obstacle clusters and add them to the Nav2 obstacle layer
    • Nav2 Path Planning & ROS 2 Control:
      • Develop the path planning server and its communication with the ROS 2 control system

Final Project Videos

Click any of the clips below to open the corresponding video.

Mapping

Localization

PCL Obstacle Detection

Odom Frame Demo

Scan Correction Demo


Software

Overall Architecture

The project was successfully completed using the Slam-Toolbox and ROS2 Navigation 2 Stack, with a significant adaptation to the djnighti/ucsd_robocar container. The adaptation allowed for seamless integration and deployment of the required components, facilitating efficient development and implementation of the robotic system.

SLAM (Simultaneous Localization and Mapping)

  • The Slam Toolbox proved indispensable in our project, enabling us to integrate the LD19 Lidar – firmware-compatible with the LD06 model – into the ROS2 framework. This integration allowed us to implement SLAM, empowering our robot to autonomously map its environment while concurrently determining its precise location within it. Additionally, we enhanced this capability by incorporating nav2 amcl localization, further refining the accuracy and dependability of our robot's localization system. By combining these technologies, our robot could navigate confidently, accurately mapping its surroundings and intelligently localizing itself within dynamic environments.

  • The Online Async Node from the Slam Toolbox is a crucial component that significantly contributes to the creation of the map_frame in the project. This node operates asynchronously, meaning it can handle data processing tasks independently of other system operations, thereby ensuring efficient utilization of resources and enabling real-time performance. The map_frame is a fundamental concept in SLAM, representing the coordinate frame that defines the global reference frame for the environment map being generated. The asynchronous online node processes Lidar data, and fuses this information together to construct a coherent and accurate representation of the surrounding environment.
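A minimal launch sketch for bringing up the online async node is shown below. It assumes the mapper_params_online_async.yaml from this repo has been placed in the ucsd_robocar_nav2_pkg config directory (the package/path choice is an assumption; adjust it to wherever your params file lives). The executable name matches the async_slam_toolbox_node seen in the log output later in this README.

    # Minimal sketch: launch slam_toolbox's online async node with a params file.
    # The package/config path below is an assumption; point it at your own copy
    # of mapper_params_online_async.yaml.
    import os
    from ament_index_python.packages import get_package_share_directory
    from launch import LaunchDescription
    from launch_ros.actions import Node

    def generate_launch_description():
        params = os.path.join(
            get_package_share_directory('ucsd_robocar_nav2_pkg'),
            'config', 'mapper_params_online_async.yaml')
        return LaunchDescription([
            Node(
                package='slam_toolbox',
                executable='async_slam_toolbox_node',
                name='slam_toolbox',
                output='screen',
                parameters=[params],
            ),
        ])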

  • The VESC Odom Node plays a pivotal role in supplying vital odometry frame data within the robotics system. This node is responsible for gathering information from the VESC (Vedder Electronic Speed Controller), and retrieves essential data related to the robot's motion, such as wheel velocities and motor commands. The odometry frame, often referred to as the "odom_frame," is a critical component in localization and navigation tasks. It represents the robot's estimated position and orientation based on its motion over time. This information is crucial for accurately tracking the robot's trajectory and determining its current pose within the environment. By utilizing the data provided by the VESC Odom Node, the system can update the odometry frame in real-time, reflecting the robot's movements and changes in its position. This dynamic updating ensures that the odometry frame remains synchronized with the robot's actual motion, providing an accurate representation of its trajectory.
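As a rough illustration of how speed and steering values turn into an odometry estimate, the sketch below integrates a simple bicycle model over time. This is only a conceptual sketch, not the actual vesc_to_odom implementation, which converts ERPM and servo values using calibrated gains and offsets; the wheelbase and timestep values are assumptions.

    # Conceptual sketch: dead-reckoning with a bicycle model, showing how speed
    # and steering angle can be integrated into an odometry pose estimate.
    # Not the actual vesc_to_odom node; wheelbase and timestep are assumptions.
    import math

    def update_odom(x, y, yaw, speed, steering_angle, wheelbase, dt):
        """Integrate the pose over one timestep dt (seconds)."""
        x += speed * math.cos(yaw) * dt
        y += speed * math.sin(yaw) * dt
        yaw += (speed / wheelbase) * math.tan(steering_angle) * dt
        return x, y, yaw

    # Example: 1 m/s forward with 10 degrees of steering for one second.
    pose = (0.0, 0.0, 0.0)
    for _ in range(50):
        pose = update_odom(*pose, speed=1.0, steering_angle=math.radians(10.0),
                           wheelbase=0.325, dt=0.02)
    print(pose)  # x, y, yaw after 1 s of motion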



  • The URDF Publisher is a tool used to generate and publish Unified Robot Description Format (URDF) models within the ROS 2 ecosystem.


URDF Model of the robot

Physical Robot

  • The Seeed IMU Node is used to publish IMU data from the Seeed Studio XIAO nRF52840 Sense. By integrating the XIAO nRF52840 Sense's 6-Axis IMU and implementing an Extended Kalman Filter (not yet done), the robot gains improved localization accuracy and reduced odometry drift. The IMU provides orientation and acceleration data, complementing other sensors like wheel encoders and GPS. The Extended Kalman Filter fuses IMU and odometry measurements, dynamically adjusting uncertainties to mitigate noise and inaccuracies, resulting in enhanced navigation performance and reliability.
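A minimal rclpy sketch of an IMU publishing node is shown below. The serial reading/parsing from the XIAO nRF52840 Sense is omitted, and the topic and frame names are assumptions; the actual node follows the razorIMU_9dof layout and its Seeed_imu.yaml configuration.

    # Minimal sketch of an IMU publisher. Filling the message from the XIAO
    # nRF52840 Sense serial stream is omitted; topic/frame names are assumptions.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import Imu

    class SeeedImuPublisher(Node):
        def __init__(self):
            super().__init__('seeed_imu_publisher')
            self.pub = self.create_publisher(Imu, 'imu/data_raw', 10)
            self.timer = self.create_timer(0.02, self.tick)  # ~50 Hz

        def tick(self):
            msg = Imu()
            msg.header.stamp = self.get_clock().now().to_msg()
            msg.header.frame_id = 'imu_link'
            # Populate orientation, angular_velocity and linear_acceleration
            # from the parsed serial data here.
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(SeeedImuPublisher())

    if __name__ == '__main__':
        main()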

  • The Scan Correction Node becomes particularly useful when there are specific sections of Lidar data that we wish to exclude from being collected by SLAM. This node allows us to define undesired ranges within the Lidar data and effectively filter them out, ensuring that only relevant and accurate information is utilized in the SLAM process. This capability enhances the overall quality and reliability of the generated map by preventing erroneous or irrelevant data from influencing the mapping and localization algorithms.


Before filtering


After filtering
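A minimal sketch of this kind of scan filter is shown below: ranges that fall inside configured angle intervals are replaced with inf so SLAM ignores them. The topic names and the hard-coded interval are assumptions for illustration; the actual scan_correction node reads its undesired ranges from scan_correction.yaml.

    # Minimal sketch of a scan-correction filter: ranges inside undesired angle
    # intervals are set to inf before republishing. Topic names and the example
    # interval are assumptions; the real node is configured via scan_correction.yaml.
    import math
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import LaserScan

    UNDESIRED = [(-0.35, 0.35)]  # example sector (radians) to drop, e.g. blocked by a mount

    class ScanCorrection(Node):
        def __init__(self):
            super().__init__('scan_correction')
            self.pub = self.create_publisher(LaserScan, '/scan_filtered', 10)
            self.sub = self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

        def on_scan(self, msg: LaserScan):
            for i in range(len(msg.ranges)):
                angle = msg.angle_min + i * msg.angle_increment
                if any(lo <= angle <= hi for lo, hi in UNDESIRED):
                    msg.ranges[i] = math.inf  # discard this reading
            self.pub.publish(msg)

    def main():
        rclpy.init()
        rclpy.spin(ScanCorrection())

    if __name__ == '__main__':
        main()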

Obstacle Avoidance

We utilized the OAK-D Lite depth camera to implement obstacle avoidance functionality within the ROS2 framework. Leveraging its depth sensing capabilities, we used the camera to generate a point cloud representation of the environment. The program logic is straightforward: the robot detects obstacles by identifying points with a height of less than 2 meters (customizable) in front of it. If an object is detected within this threshold, the robot dynamically adjusts its trajectory to avoid collision, typically by making a turn. This simple yet effective approach allows the robot to navigate safely through its environment, reacting to potential obstacles in real time to ensure smooth, obstacle-free movement.
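A minimal sketch of that logic is shown below, using sensor_msgs_py (installed in the reproduction steps) to iterate over the /oak/points cloud. The axis convention (camera z pointing forward), the point-count cutoff, and the exact threshold handling are assumptions for illustration; our node's actual criterion and its adjustable parameters live in the team_7_obstacle_detection launch file.

    # Minimal sketch of the simple obstacle check: count cloud points that lie
    # directly ahead of the camera and within the threshold, then report them.
    # Axis convention, topic, threshold default and cutoff are assumptions.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import PointCloud2
    from sensor_msgs_py import point_cloud2

    class ObstacleDetection(Node):
        def __init__(self):
            super().__init__('obstacle_detection')
            self.declare_parameter('threshold', 2.0)  # metres, customizable
            self.sub = self.create_subscription(PointCloud2, '/oak/points', self.on_cloud, 10)

        def on_cloud(self, msg: PointCloud2):
            threshold = self.get_parameter('threshold').value
            close = 0
            for x, y, z in point_cloud2.read_points(msg, field_names=('x', 'y', 'z'), skip_nans=True):
                # Keep points roughly straight ahead (small lateral offset) and near enough.
                if 0.0 < z < threshold and abs(x) < 0.3:
                    close += 1
            if close > 50:  # crude noise rejection
                self.get_logger().info(f'Obstacle ahead: {close} points within {threshold} m')

    def main():
        rclpy.init()
        rclpy.spin(ObstacleDetection())

    if __name__ == '__main__':
        main()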


PointCloud Visualization with Rviz2


PointCloud Visualization with Foxglove Studio

We integrated the DepthAI ROS package into our ROS2 setup to enable object detection functionality. Within the package, we utilized the provided YOLO (You Only Look Once) neural network setup for object detection. This configuration allowed our robot to detect objects in its environment in real-time using deep learning techniques. By leveraging the YOLO neural network, our robot could accurately identify and classify various objects, enhancing its perception and autonomy for effective navigation in dynamic environments.


yolo_v3_tf_object_detection
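A minimal sketch of consuming those detections is shown below. The topic name and the Detection2DArray message type are assumptions (the exact interface depends on the DepthAI ROS launch file used), so verify them with ros2 topic list and ros2 topic info before relying on this.

    # Minimal sketch of listening to the YOLO detections published by the
    # DepthAI ROS pipeline. Topic name and message type are assumptions;
    # check them with `ros2 topic list` / `ros2 topic info` on your setup.
    import rclpy
    from rclpy.node import Node
    from vision_msgs.msg import Detection2DArray

    class DetectionLogger(Node):
        def __init__(self):
            super().__init__('detection_logger')
            self.sub = self.create_subscription(
                Detection2DArray, '/oak/nn/detections', self.on_detections, 10)

        def on_detections(self, msg: Detection2DArray):
            self.get_logger().info(f'{len(msg.detections)} object(s) detected')

    def main():
        rclpy.init()
        rclpy.spin(DetectionLogger())

    if __name__ == '__main__':
        main()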


Hardware

  • 3D Printing: Camera Stand, Jetson Nano Case, GPS Plate, Lidar Mount
  • Laser Cut: Base plate to mount electronics and other components.

Parts List

  • Traxxas Chassis with steering servo and sensored brushless DC motor
  • Jetson Nano
  • WiFi adapter
  • 64 GB Micro SD Card
  • Adapter/reader for Micro SD Card
  • Logitech F710 controller
  • OAK-D Lite Camera
  • LD19 Lidar (LD06 Lidar)
  • VESC
  • Point One GNSS with antenna
  • Anti-spark switch with power switch
  • DC-DC Converter
  • 4-cell LiPo battery
  • Battery voltage checker/alarm
  • DC Barrel Connector
  • XT60, XT30, MR60 connectors

Additional Parts used for testing/debugging

  • Car stand
  • USB-C to USB-A cable
  • Micro USB to USB cable
  • 5V, 4A power supply for Jetson Nano

Mechanical Design Highlight

Base Plate

Camera Stand

The camera stand components were designed so that both the angle and the height are adjustable. This design feature offers versatility and adaptability, ensuring optimal positioning of the camera to capture desired perspectives and accommodate various environments or setups.


GPS Plate

Circuit Diagram

Our team made use of a select range of electronic components, primarily focusing on the OAK-D Lite camera, Jetson NANO, a GNSS board / GPS, and an additional Seeed Studio XIAO nRF52840 Sense (for IMU usage). Our circuit assembly process was guided by a circuit diagram provided by our class TAs.


Gantt Chart


Course Deliverables

Here are our autonomous laps as part of our class deliverables:

Team 7's weekly project status updates and final presentation:


Project Reproduction

If you are interested in reproducing our project, here are a few steps to get you started with our repo:

  1. Follow the instructions in the UCSD Robocar Framework Guidebook,
    pull devel image on your JTN: docker pull djnighti/ucsd_robocar:devel
  2. sudo apt update && sudo apt upgrade
    (make sure you upgrade the packages, or else it won't work; this link may be helpful if you run into a GPG error: https://askubuntu.com/questions/1433368/how-to-solve-gpg-error-with-packages-microsoft-com-pubkey)
    check if slam_toolbox is installed and launchable:
    sudo apt install ros-foxy-slam-toolbox
    source_ros2
    ros2 launch slam_toolbox online_async_launch.py
    
    Output should be similar to:
    [INFO] [launch]: All log files can be found below /root/.ros/log/2024-03-16-03-57-52-728234-ucsdrobocar-148-07-14151
    [INFO] [launch]: Default logging verbosity is set to INFO
    [INFO] [async_slam_toolbox_node-1]: process started with pid [14173]
    [async_slam_toolbox_node-1] 1710561474.218342 [7] async_slam: using network interface wlan0 (udp/192.168.16.252) selected arbitrarily from: wlan0, docker0
    [async_slam_toolbox_node-1] [INFO] [1710561474.244055467] [slam_toolbox]: Node using stack size 40000000
    [async_slam_toolbox_node-1] 1710561474.256172 [7] async_slam: using network interface wlan0 (udp/192.168.16.252) selected arbitrarily from: wlan0, docker0
    [async_slam_toolbox_node-1] [INFO] [1710561474.517037334] [slam_toolbox]: Using solver plugin solver_plugins::CeresSolver
    [async_slam_toolbox_node-1] [INFO] [1710561474.517655574] [slam_toolbox]: CeresSolver: Using SCHUR_JACOBI preconditioner.
    
  3. Since we upgraded all existing packages, we need to rebuild the VESC pkg under /home/projects/sensor2_ws/src/vesc/src/vesc
    cd /home/projects/sensor2_ws/src/vesc/src/vesc
    git pull
    git switch foxy
    

    Make sure you are on the foxy branch.
    Then, build for the first time under sensor2_ws/src/vesc/src/vesc:
    colcon build
    source install/setup.bash
    
    Then, build a second time under sensor2_ws/src/vesc:
    cd /home/projects/sensor2_ws/src/vesc
    colcon build
    source install/setup.bash
    
    Now, run ros2 pkg xml vesc and check that the VESC pkg version has come to 1.2.0
  4. Install Navigation 2 package, and related packages:
    sudo apt install ros-foxy-navigation2 ros-foxy-nav2* ros-foxy-robot-state-publisher ros-foxy-joint-state-publisher
  5. Clone this repository,
    cd /home/projects/ros2_ws/src
    git clone --recurse-submodules https://github.com/WinstonHChou/winter-2024-final-project-team-7.git
    cd winter-2024-final-project-team-7/
    
    There is a Replace_to_ucsd_robocar_nav2 folder, which includes several files to replace/place into ucsd_robocar_nav2_pkg:
    1. scan_correction.yaml, mapper_params_online_async.yaml, node_config.yaml, node_pkg_locations_ucsd.yaml
      should be placed in /home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/config/
    2. sensor_visualization.rviz
      should be placed in /home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/rviz/
    3. ucsdrobocar-148-07.urdf
      should be placed in /home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/urdf/
      (you can edit the URDF if you want to: https://docs.ros.org/en/foxy/Tutorials/Intermediate/URDF/URDF-Main.html)
    4. urdf_publisher_launch.launch.py
      should be placed in /home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/launch/
    5. package.xml
      should be placed in /home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/

    Next, modify setup.py in /home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/,
    and add (os.path.join('share', package_name, 'urdf'), glob('urdf/*.urdf')) to its data_files list (see the setup.py sketch after this list)
    Then,
    build_ros2
    ros2 launch ucsd_robocar_nav2_pkg all_nodes.launch.py
    
    If functional, the pre-setup for SLAM is done. Note: the scan_correction and urdf_publisher nodes can now be launched by all_nodes.launch.py. Remember to toggle the settings for them in node_config.yaml.
    • scan_correction.yaml defines the undesired lidar ranges, which are filtered out by the scan_correction node
    • Follow this instruction if you only want to do SLAM: Easy SLAM Instruction Video on ROS 2 Foxy
    • Since you might adjust the VESC pkg settings for vesc_odom, here is an additional resource: f1tenth calibrating VESC Odom
    • Change the odom direction by:
      • adjusting vesc_to_odom.cpp line 100:
        double current_speed = -1 * (-state->state.speed - speed_to_erpm_offset_) / speed_to_erpm_gain_; (adding a negative sign)
      • adjusting vesc_to_odom.cpp line 107 (if you invert steering_angle in joy_teleop.yaml):
        -1 * (last_servo_cmd_->data - steering_to_servo_offset_) / steering_to_servo_gain_; (adding a negative sign)
    • If you made changes in vesc_to_odom.cpp, you must repeat Step 3 to rebuild the VESC pkg
  6. Setting up the Seeed IMU: follow the instructions. In src/winter-2024-final-project-team-7/team_7_external/config/, you may adjust settings in Seeed_imu.yaml (equivalent to razor.yaml in razorIMU_9dof) and Seeed_imu_config.yaml.
    Then,
    build_ros2
    ros2 launch team_7_external Seeed_imu.launch.py
    
  7. DepthAI ROS & team_7_obstacle_detection Installation
    1. Install Depthai and related packages,
      sudo apt install ros-foxy-depthai* ros-foxy-sensor-msgs-py
    2. If you're using an OAK-D Lite,
      • adjust camera.yaml,
        nano /opt/ros/foxy/share/depthai_ros_driver/config/camera.yaml
        Disable imu and ir

      • adjust pcl.yaml,
        nano /opt/ros/foxy/share/depthai_ros_driver/config/pcl.yaml
        Disable imu and ir, and comment out "oak:"

    3. Open an additional terminal,
      Run ros2 launch depthai_ros_driver pointcloud.launch.py to publish the /oak/points ROS 2 topic.
    4. Open an additional terminal,
      Run ros2 launch team_7_obstacle_detection obstacle_detection.launch.py. Now you are able to detect a simple obstacle using the height < 2 meters check. (Adjustable in the launch file)
  8. Foxglove Studio, using rosbridge_server
    Download Foxglove Studio and follow the instructions at https://docs.foxglove.dev/docs/introduction/
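For reference, here is a sketch of how the data_files list in the ucsd_robocar_nav2_pkg setup.py might look after the edit described in step 5. Only the urdf entry comes from this guide; the surrounding entries are assumptions based on a typical package layout, so keep whatever your setup.py already contains.

    # Sketch of the data_files list in ucsd_robocar_nav2_pkg/setup.py after step 5.
    # Only the urdf line comes from this guide; the other entries are assumptions.
    import os
    from glob import glob

    package_name = 'ucsd_robocar_nav2_pkg'

    data_files = [
        ('share/ament_index/resource_index/packages', ['resource/' + package_name]),
        (os.path.join('share', package_name), ['package.xml']),
        (os.path.join('share', package_name, 'launch'), glob('launch/*.launch.py')),
        (os.path.join('share', package_name, 'config'), glob('config/*.yaml')),
        (os.path.join('share', package_name, 'rviz'), glob('rviz/*.rviz')),
        (os.path.join('share', package_name, 'urdf'), glob('urdf/*.urdf')),  # <- line added in step 5
    ]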

That's it! Most of the settings are covered above. If you need any assistance on how to use this repo, you may create a new issue on this GitHub repo, or contact [email protected] if needed.


Acknowledgements

Special thanks to Professor Jack Silberman and TA Arjun Naageshwaran for delivering the course!
Thanks to Raymond from Triton AI for giving suggestions on our project!
Thanks to Nikita from Triton AI for providing support with the razorIMU_9dof repo for IMU usage!

Programs Reference:

README.md format referenced from spring-2023-final-project-team-5


Contacts
