- Team Members
- Abstract
- What We Promised
- Accomplishments
- Challenges
- Final Project Videos
- Software
- Hardware
- Gantt Chart
- Course Deliverables
- Project Reproduction
- Acknowledgements
- Contacts
Winston Chou - MAE Ctrls & Robotics (MC34) - Class of 2026 - LinkedIn
Amir Riahi - ECE - UPS Student
Rayyan Khalid - MAE Ctrls & Robotics (MC34) - Class of 2025
The project's goal is to develop a robotic system capable of mapping a new enclosed environment and determining a path from a specified starting point to a desired destination while avoiding obstacles along the way. This involves integrating sensors for environmental perception, implementing mapping and localization algorithms, designing path planning and obstacle avoidance strategies, and creating a robust control system for the robot's navigation.
The robot uses the ROS2 Navigation 2 stack and integrates LiDAR for SLAM (Simultaneous Localization and Mapping), along with the OAK-D Lite depth camera's point cloud for real-time obstacle avoidance.
- Integrate LiDAR sensor(s) into the ROS2 system. Utilize the ROS2 Navigation 2 stack to perform SLAM using LiDAR data.
- Integrate the OAK-D Lite depth camera, and develop algorithms within ROS2 to process the point cloud data generated by the OAK-D Lite for real-time obstacle detection. (Detection only, for now)
- Move the robot from a given location A to a desired location B using the ROS2 Navigation 2 stack and integrated sensors. (Not implemented yet)
- SLAM development accomplished
- Enables the robot to map an unknown environment and to locate its own position within it.
- Seeed IMU setup for better localization (Extended Kalman Filter).
- Obstacle Avoidance
- Used the camera's depth sensing capabilities to generate a point cloud representation of the environment (visualized in RViz2 and Foxglove Studio).
- Simple obstacle detection algorithm
- Nav2 Stack is a complex but useful system for developing an autonomous robot.
- Further Actions:
- PointCloud Dynamic Obstacle Detection:
- Develop an algorithm to record the positions of obstacle clusters and add them to the Nav2 obstacle layer
- Nav2 Path Planning & ROS 2 Control:
- Develop the path planning server and its communication with the ROS 2 Control system
Click any of the clips below to open the video.
The project was successfully completed using the Slam-Toolbox and ROS2 Navigation 2 Stack, with a significant adaptation to the djnighti/ucsd_robocar container. The adaptation allowed for seamless integration and deployment of the required components, facilitating efficient development and implementation of the robotic system.
- The Slam Toolbox proved indispensable in our project, enabling us to integrate the LD19 Lidar (firmware-compatible with the LD06 model) into the ROS2 framework. This integration allowed us to implement SLAM, empowering our robot to autonomously map its environment while concurrently determining its precise location within it. Additionally, we enhanced this capability by incorporating Nav2 AMCL localization, further refining the accuracy and dependability of our robot's localization system. By combining these technologies, our robot could navigate confidently, accurately mapping its surroundings and intelligently localizing itself within dynamic environments.
- The Online Async Node from the Slam Toolbox is a crucial component that creates the map_frame in the project. This node operates asynchronously, meaning it can handle data processing tasks independently of other system operations, ensuring efficient use of resources and enabling real-time performance. The map_frame is a fundamental concept in SLAM, representing the coordinate frame that serves as the global reference frame for the environment map being generated. The asynchronous online node processes incoming Lidar scans and fuses them with odometry to construct a coherent and accurate representation of the surrounding environment.
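  Below is a minimal Python launch sketch of how this async node could be started with our mapper parameter file. The parameter-file path assumes `mapper_params_online_async.yaml` has already been copied into `ucsd_robocar_nav2_pkg` (see Project Reproduction); in practice we launch it through the package's existing launch files, so treat this only as an illustration.

  ```python
  # Minimal sketch of launching the slam_toolbox async node with our mapper
  # parameters. The params path assumes the config file from this repo has
  # already been copied into ucsd_robocar_nav2_pkg.
  import os
  from launch import LaunchDescription
  from launch_ros.actions import Node

  def generate_launch_description():
      params_file = os.path.join(
          '/home/projects/ros2_ws/src/ucsd_robocar_hub2',
          'ucsd_robocar_nav2_pkg', 'config', 'mapper_params_online_async.yaml')
      return LaunchDescription([
          Node(
              package='slam_toolbox',
              executable='async_slam_toolbox_node',
              name='slam_toolbox',
              output='screen',
              parameters=[params_file],  # defines map_frame, odom_frame, base_frame, ...
          ),
      ])
  ```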
- The VESC Odom Node plays a pivotal role in supplying odometry frame data within the robotics system. This node gathers information from the VESC (Vedder Electronic Speed Controller), retrieving data related to the robot's motion such as wheel velocities and motor commands. The odometry frame, often referred to as the "odom_frame," is a critical component in localization and navigation tasks: it represents the robot's estimated position and orientation based on its motion over time. This information is crucial for accurately tracking the robot's trajectory and determining its current pose within the environment. By using the data provided by the VESC Odom Node, the system updates the odometry frame in real time, reflecting the robot's movements and changes in position, so that the odometry frame remains synchronized with the robot's actual motion and provides an accurate representation of its trajectory.
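  As a rough illustration (not the actual `vesc_to_odom` C++ implementation), the Python sketch below shows how a speed and steering-angle estimate can be dead-reckoned into an odometry pose with a simple kinematic bicycle model; the wheelbase and time step are made-up values.

  ```python
  # Rough illustration of dead-reckoning odometry from VESC-derived speed and
  # steering angle using a kinematic bicycle model (not the actual C++ node).
  import math

  def update_odom(x, y, yaw, speed, steering_angle, wheelbase_m=0.33, dt=0.02):
      """Integrate the pose for one time step; wheelbase_m and dt are made-up values."""
      x += speed * math.cos(yaw) * dt
      y += speed * math.sin(yaw) * dt
      yaw += (speed / wheelbase_m) * math.tan(steering_angle) * dt
      return x, y, yaw

  # Example: drive forward at 1 m/s with a slight left steer for one step.
  print(update_odom(0.0, 0.0, 0.0, speed=1.0, steering_angle=0.1))
  ```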
from https://answers.ros.org/question/387751/difference-between-amcl-and-odometry-source/
Our Robot TF Tree
- The URDF Publisher is a tool used to generate and publish Unified Robot Description Format (URDF) models within the ROS 2 ecosystem.
- The Seeed IMU Node is used to publish IMU data from the Seeed Studio XIAO nRF52840 Sense. By integrating the board's 6-axis IMU and implementing an Extended Kalman Filter (not done yet), the robot gains improved localization accuracy and reduced odometry drift. The IMU provides orientation and acceleration data, complementing other sensors like wheel encoders and GPS. The Extended Kalman Filter fuses IMU and odometry measurements, dynamically adjusting uncertainties to mitigate noise and inaccuracies, resulting in enhanced navigation performance and reliability.
- The Scan Correction Node becomes particularly useful when there are specific sections of Lidar data that we wish to exclude from SLAM. This node allows us to define undesired ranges within the Lidar data and filter them out, ensuring that only relevant and accurate information is used in the SLAM process. This prevents erroneous or irrelevant data from influencing the mapping and localization algorithms, improving the quality and reliability of the generated map (see the sketch below).
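  A conceptual Python sketch of this filtering step follows; the blocked angular window is a hypothetical example, not our actual `scan_correction.yaml` values.

  ```python
  # Conceptual sketch of scan correction: mask Lidar returns inside an undesired
  # angular window so SLAM ignores them. The blocked window here is hypothetical.
  import math

  def filter_scan(ranges, angle_min, angle_increment, blocked_start, blocked_end):
      """Replace returns whose angle lies in [blocked_start, blocked_end] with inf."""
      filtered = list(ranges)
      for i, _ in enumerate(filtered):
          angle = angle_min + i * angle_increment
          if blocked_start <= angle <= blocked_end:
              filtered[i] = math.inf  # treated as "no return" downstream
      return filtered

  # Example: drop a 30-degree wedge of a 360-point scan.
  cleaned = filter_scan([1.0] * 360, -math.pi, math.radians(1.0),
                        math.radians(150), math.radians(180))
  ```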
We utilized the OAK-D Lite depth camera to implement obstacle avoidance functionality within the ROS2 framework. Leveraging its depth sensing capabilities, we used the camera to generate a point cloud representation of the environment. The program logic is straightforward: the robot detects obstacles by identifying points in front of it whose height is less than 2 meters (customizable). If an object is detected within this threshold, the robot dynamically adjusts its trajectory to avoid a collision, typically by making a turn. This simple yet effective approach allows the robot to navigate safely through its environment, reacting to potential obstacles in real time to ensure smooth, obstacle-free movement.
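Below is a simplified Python sketch of that check. The axis convention (x forward, y left, z up) and the range/width limits are assumptions for illustration; the actual node reads its thresholds from the launch file.

```python
# Simplified sketch of the obstacle check: flag an obstacle if any point lies
# directly ahead of the robot below the height threshold. Axis convention
# (x forward, y left, z up) and the range/width limits are assumptions.
def obstacle_ahead(points, max_height_m=2.0, max_range_m=1.0, half_width_m=0.3):
    """points: iterable of (x, y, z) tuples in meters."""
    for x, y, z in points:
        in_front = 0.0 < x < max_range_m      # close enough ahead of the robot
        in_path = abs(y) < half_width_m       # roughly within the robot's width
        below_limit = z < max_height_m        # under the (customizable) height threshold
        if in_front and in_path and below_limit:
            return True
    return False

# Example: a point 0.5 m ahead at ground level triggers the check.
print(obstacle_ahead([(0.5, 0.0, 0.1), (3.0, 1.0, 0.2)]))
```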
We integrated the DepthAI ROS package into our ROS2 setup to enable object detection functionality. Within the package, we utilized the provided YOLO (You Only Look Once) neural network setup for object detection. This configuration allowed our robot to detect objects in its environment in real-time using deep learning techniques. By leveraging the YOLO neural network, our robot could accurately identify and classify various objects, enhancing its perception and autonomy for effective navigation in dynamic environments.
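As a hedged illustration of consuming those detections in ROS2, the sketch below subscribes to a detections topic; the topic name `/oak/nn/detections` and the `Detection2DArray` message type are assumptions about the DepthAI driver configuration, not values confirmed in this repository.

```python
# Hedged sketch of consuming YOLO detections in ROS2. The topic name
# '/oak/nn/detections' and the Detection2DArray type are assumptions about the
# DepthAI driver configuration, not values confirmed in this repository.
import rclpy
from rclpy.node import Node
from vision_msgs.msg import Detection2DArray

class DetectionLogger(Node):
    def __init__(self):
        super().__init__('detection_logger')
        self.create_subscription(
            Detection2DArray, '/oak/nn/detections', self.on_detections, 10)

    def on_detections(self, msg):
        # Report how many objects the network found in this frame.
        self.get_logger().info(f'{len(msg.detections)} detections')

def main():
    rclpy.init()
    rclpy.spin(DetectionLogger())
    rclpy.shutdown()

if __name__ == '__main__':
    main()
```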
- 3D Printing: Camera Stand, Jetson Nano Case, GPS Plate, Lidar Mount
- Laser Cut: Base plate to mount electronics and other components.
Parts List
- Traxxas Chassis with steering servo and sensored brushless DC motor
- Jetson Nano
- WiFi adapter
- 64 GB Micro SD Card
- Adapter/reader for Micro SD Card
- Logitech F710 controller
- OAK-D Lite Camera
- LD19 Lidar (LD06 Lidar)
- VESC
- Point One GNSS with antenna
- Anti-spark switch with power switch
- DC-DC Converter
- 4-cell LiPo battery
- Battery voltage checker/alarm
- DC Barrel Connector
- XT60, XT30, MR60 connectors
Additional Parts used for testing/debugging
- Car stand
- USB-C to USB-A cable
- Micro USB to USB cable
- 5V, 4A power supply for Jetson Nano
Base Plate
Camera Stand
The camera stand components were designed with adjustable angle and height. This design feature offers versatility and adaptability, ensuring optimal positioning of the camera to capture desired perspectives and accommodate various environments or setups.
GPS Plate
Circuit Diagram
Our team made use of a select range of electronic components, primarily the OAK-D Lite camera, the Jetson Nano, a GNSS/GPS board, and an additional Seeed Studio XIAO nRF52840 Sense (for IMU usage). Our circuit assembly process was guided by a circuit diagram provided by our class TAs.
Here are our autonomous laps as part of our class deliverables:
- DonkeyCar Reinforcement Laps: https://youtu.be/UEGGQz-GSq4
- Line Following: https://youtu.be/GaKq_m8Ola0
- Lane Following: https://youtu.be/1v2-Dgx5fyk
- GPS Laps: https://youtu.be/92Q-JpYGPZk?si=UYrh6Mo9-b4TGgYO
Team 7's weekly project status updates and final presentation:
If you are interested in reproducing our project, here are a few steps to get you started with our repo:
- Follow the instructions in the UCSD Robocar Framework Guidebook, and pull the `devel` image on your Jetson Nano (JTN):
  ```
  docker pull djnighti/ucsd_robocar:devel
  ```
- Update and upgrade packages:
  ```
  sudo apt update && sudo apt upgrade
  ```
  (Make sure you upgrade the packages, or else it won't work. This may be helpful if you run into an error: https://askubuntu.com/questions/1433368/how-to-solve-gpg-error-with-packages-microsoft-com-pubkey)
  Check if `slam_toolbox` is installed and launchable:
  ```
  sudo apt install ros-foxy-slam-toolbox
  source_ros2
  ros2 launch slam_toolbox online_async_launch.py
  ```
  Output should be similar to:
  ```
  [INFO] [launch]: All log files can be found below /root/.ros/log/2024-03-16-03-57-52-728234-ucsdrobocar-148-07-14151
  [INFO] [launch]: Default logging verbosity is set to INFO
  [INFO] [async_slam_toolbox_node-1]: process started with pid [14173]
  [async_slam_toolbox_node-1] 1710561474.218342 [7] async_slam: using network interface wlan0 (udp/192.168.16.252) selected arbitrarily from: wlan0, docker0
  [async_slam_toolbox_node-1] [INFO] [1710561474.244055467] [slam_toolbox]: Node using stack size 40000000
  [async_slam_toolbox_node-1] 1710561474.256172 [7] async_slam: using network interface wlan0 (udp/192.168.16.252) selected arbitrarily from: wlan0, docker0
  [async_slam_toolbox_node-1] [INFO] [1710561474.517037334] [slam_toolbox]: Using solver plugin solver_plugins::CeresSolver
  [async_slam_toolbox_node-1] [INFO] [1710561474.517655574] [slam_toolbox]: CeresSolver: Using SCHUR_JACOBI preconditioner.
  ```
- Since we upgraded all existing packages, we need to rebuild the VESC package under `/home/projects/sensor2_ws/src/vesc/src/vesc`:
  ```
  cd /home/projects/sensor2_ws/src/vesc/src/vesc
  git pull
  git switch foxy
  ```
  Make sure you are on the `foxy` branch.
  Then, build the first time under `sensor2_ws/src/vesc/src/vesc`:
  ```
  colcon build
  source install/setup.bash
  ```
  Then, a second time, but under `sensor2_ws/src/vesc`:
  ```
  cd /home/projects/sensor2_ws/src/vesc
  colcon build
  source install/setup.bash
  ```
  Now, try `ros2 pkg xml vesc` and check that the VESC package version has come to `1.2.0`.
- Install the Navigation 2 package and related packages:
  ```
  sudo apt install ros-foxy-navigation2 ros-foxy-nav2* ros-foxy-robot-state-publisher ros-foxy-joint-state-publisher
  ```
- Clone this repository:
  ```
  cd /home/projects/ros2_ws/src
  git clone --recurse-submodules https://github.com/WinstonHChou/winter-2024-final-project-team-7.git
  cd winter-2024-final-project-team-7/
  ```
  There is a `Replace_to_ucsd_robocar_nav2` folder, which includes several files you'll want to replace/place into `ucsd_robocar_nav2_pkg`:
  - `scan_correction.yaml`, `mapper_params_online_async.yaml`, `node_config.yaml`, `node_pkg_locations_ucsd.yaml` should be placed in `/home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/config/`
  - `sensor_visualization.rviz` should be placed in `/home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/rviz/`
  - `ucsdrobocar-148-07.urdf` should be placed in `/home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/urdf/` (you can edit the URDF if you want to: https://docs.ros.org/en/foxy/Tutorials/Intermediate/URDF/URDF-Main.html)
  - `urdf_publisher_launch.launch.py` should be placed in `/home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/launch/`
  - `package.xml` should be placed in `/home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/`
  Next, modify `setup.py` in `/home/projects/ros2_ws/src/ucsd_robocar_hub2/ucsd_robocar_nav2_pkg/`, and add `(os.path.join('share', package_name, 'urdf'), glob('urdf/*.urdf'))` to the `data_files` list, as sketched below.
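  An illustrative sketch of what the `data_files` list might look like after the addition (the other entries stand in for whatever the package already installs):

  ```python
  # Illustrative sketch of setup.py's data_files list after the addition; the
  # other entries stand in for whatever the package already installs.
  import os
  from glob import glob

  package_name = 'ucsd_robocar_nav2_pkg'

  data_files = [
      (os.path.join('share', package_name, 'launch'), glob('launch/*.launch.py')),
      (os.path.join('share', package_name, 'config'), glob('config/*.yaml')),
      # added so the URDF files are installed into the package's share directory:
      (os.path.join('share', package_name, 'urdf'), glob('urdf/*.urdf')),
  ]
  ```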
  Then, build and launch:
  ```
  build_ros2
  ros2 launch ucsd_robocar_nav2_pkg all_nodes.launch.py
  ```
  If functional, the pre-setting for SLAM is done. Note: the scan_correction and urdf_publisher nodes can now be launched by `all_nodes.launch.py`. Remember to toggle their settings in `node_config.yaml`.
  `scan_correction.yaml` defines the undesired Lidar ranges, which are filtered out by the `scan_correction` node.
- Follow this instruction if you only want to do SLAM: Easy SLAM Instruction Video on ROS 2 Foxy
- Since you might adjust VESC package settings for vesc_odom, here's an additional resource: f1tenth calibrating VESC Odom
- Change the odom direction by:
  - adjusting `vesc_to_odom.cpp` line 100 (adding a negative sign):
    ```
    double current_speed = -1 * (-state->state.speed - speed_to_erpm_offset_) / speed_to_erpm_gain_;
    ```
  - adjusting `vesc_to_odom.cpp` line 107 (adding a negative sign, if you invert steering_angle in joy_teleop.yaml):
    ```
    -1 * (last_servo_cmd_->data - steering_to_servo_offset_) / steering_to_servo_gain_;
    ```
- If you made changes in `vesc_to_odom.cpp`, you must repeat Step 3 to rebuild the VESC package.
- Setting up the Seeed IMU: follow the instructions.
  In `src/winter-2024-final-project-team-7/team_7_external/config/`, you may adjust settings in `Seeed_imu.yaml` (equivalent to `razor.yaml` in razorIMU_9dof) and `Seeed_imu_config.yaml`.
  Then:
  ```
  build_ros2
  ros2 launch team_7_external Seeed_imu.launch.py
  ```
- DepthAI ROS & team_7_obstacle_detection installation
  - Install DepthAI and related packages:
    ```
    sudo apt install ros-foxy-depthai* ros-foxy-sensor-msgs-py
    ```
  - If you're using an OAK-D Lite:
    - Open an additional terminal and run `ros2 launch depthai_ros_driver pointcloud.launch.py` to publish the `/oak/points` ROS 2 topic.
    - Open an additional terminal and run `ros2 launch team_7_obstacle_detection obstacle_detection.launch.py`. Now you are able to detect a simple obstacle using height < 2 meters (adjustable in the launch file).
- Foxglove Studio, using rosbridge_server:
  Download Foxglove Studio and follow the instructions at https://docs.foxglove.dev/docs/introduction/
That's it! Most of the settings are covered above. If you need any assistance using this repo, you may create a new issue on this GitHub repo, or contact [email protected] if needed.
Special thanks to Professor Jack Silberman and TA Arjun Naageshwaran for delivering the course!
Thanks to Raymond from Triton AI for giving suggestions on our project!
Thanks to Nikita from Triton AI for providing support on the razorIMU_9dof repo for IMU usage!
Programs Reference:
README.md format referenced from spring-2023-final-project-team-5
- Winston Chou - [email protected] | [email protected] | LinkedIn
- Amir Riahi - [email protected]
- Rayyan Khalid - [email protected]