
Obstacle Detection with RRT Path Planning


MAE 148 Final Project

Team 11 Fall 2024


Table of Contents

  1. Team Members
  2. Abstract
  3. What We Promised
  4. Accomplishments
  5. Challenges
  6. Final Project Videos
  7. Software
  8. Hardware
  9. Gantt Chart
  10. How To Run
  11. Acknowledgements
  12. Contacts

Team Members

  • Liem Le - MAE-Ctrls & Robotics (MC34) - Class of 2025
  • Jusung Park - MAE (MC81) - Class of 2025
  • Anna Hsu - ECE (EC27) - Class of 2026

Abstract

We propose developing an autonomous car navigation system using Rapidly-Exploring Random Trees (RRT) with LiDAR for real-time environment mapping and obstacle detection. Using LiDAR, the system will create a virtual map, enabling the RRT algorithm to identify viable paths around obstacles. A local planner using the Regulated Pure Pursuit Controller will follow the generated waypoints to guide the car to the target, while real-time controls over the car's state derivatives (u̇, v̇, θ̇) will manage the car's movement. This setup allows for real-time decision-making, enabling the car to detect obstacles on the fly and navigate dynamically toward its goal.
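As a rough sketch of the controller stage, the function below computes the classic pure pursuit steering angle toward a lookahead waypoint. The wheelbase value and the waypoint format are illustrative assumptions, not our actual tuning; the regulated variant additionally slows the car as path curvature increases.

```python
import math

def pure_pursuit_steering(x, y, theta, waypoint, wheelbase=0.3):
    """Compute an Ackermann steering angle toward a lookahead waypoint.

    (x, y, theta): current pose of the car in the map frame.
    waypoint: (wx, wy) lookahead point taken from the RRT path.
    wheelbase: front-to-rear axle distance (placeholder value).
    """
    wx, wy = waypoint
    # Transform the waypoint into the car's body frame.
    dx, dy = wx - x, wy - y
    local_x = math.cos(theta) * dx + math.sin(theta) * dy
    local_y = -math.sin(theta) * dx + math.cos(theta) * dy
    lookahead = math.hypot(local_x, local_y)
    if lookahead < 1e-6:
        return 0.0  # Already at the waypoint.
    # Pure pursuit: curvature of the circular arc through the lookahead point.
    curvature = 2.0 * local_y / (lookahead ** 2)
    return math.atan(wheelbase * curvature)
```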


What We Promised

Must Have

  • Integrate 3D LiDAR with FAST LIO (a LiDAR-inertial odometry package) to generate 3D point cloud data, then convert the point cloud into 2D laser scan data for SLAM
  • Integrate cost map with RRT algorithm to show RRT pathing in real time and repathing when obstacles are detected within a certain radius of the robot
  • Integrate Regulated Pure Pursuit Controller to follow waypoints generated by RRT algorithm in simulation

Nice to Have

  • Integrate 2D laser scan data onto costmap for obstacle mapping in real time
  • Have map generation, RRT pathing, and controller work with physical car

Accomplishments

  • ODOM data achieved
    • Successfully implemented Fast LIO for real-time LiDAR odometry and mapping, enabling the robot to accurately locate itself while generating a 3D map of the environment.
  • LaserScan 2D visualization achieved
    • Configured visualization tools such as Rviz2 for odometry data and 3D point cloud representation, including tuning LaserScan and PointCloud2 parameters to enhance data clarity.
  • Map generation achieved
    • Successfully set up map generation with an occupancy grid built via SLAM, creating a fixed grid of specified size (default 10 m x 10 m) and showing the goal and robot positions in real time.
  • RRT path planning and repathing achieved
    • Successfully set up the RRT algorithm to plan a path around obstacles and replan when a new obstacle is detected.
  • Regulated Pure Pursuit Controller implemented to follow waypoints
    • Successfully set up the pure pursuit controller algorithm and used it to follow the waypoints generated by the RRT path planning script.

Challenges

  • Installing and setting up OMPL on ARM64 architecture (worked well on x86 architecture in simulation)
  • Configuring SLAM to use 2D laser scan data
  • Configuring the VESC with our custom-written controller code

Final Project Videos

Click on image to open video in new tab:

Simulation

Software

Laserscan

  • LaserScan data was another key component of our project, enabling efficient processing of LiDAR data for environmental mapping and localization within the ROS2 framework. By using LaserScan data, we were able to detect and visualize the robot's surroundings in real time as a clear 2D representation of the spatial layout. This was visualized in Rviz2 with settings tuned for clarity.
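To illustrate how the LaserScan data can be consumed, here is a minimal rclpy subscriber that reports the closest return; the /scan topic name and node name are conventional defaults and assumptions on our part, not necessarily our exact configuration.

```python
import math

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan


class ScanMonitor(Node):
    """Log the bearing and distance of the closest LaserScan return."""

    def __init__(self):
        super().__init__('scan_monitor')
        # '/scan' is the conventional topic name; adjust to your setup.
        self.create_subscription(LaserScan, '/scan', self.on_scan, 10)

    def on_scan(self, msg: LaserScan):
        # Keep only returns inside the sensor's valid range window.
        valid = [(r, msg.angle_min + i * msg.angle_increment)
                 for i, r in enumerate(msg.ranges)
                 if msg.range_min < r < msg.range_max]
        if valid:
            dist, angle = min(valid)
            self.get_logger().info(
                f'closest obstacle: {dist:.2f} m at {math.degrees(angle):.1f} deg')


def main():
    rclpy.init()
    rclpy.spin(ScanMonitor())


if __name__ == '__main__':
    main()
```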

Fast_Lio and PointCloud

  • Fast LIO was integral to our project, enabling the integration of LiDAR odometry and mapping into the ROS2 framework. This setup allowed the robot to process LiDAR data in real time, generating accurate odometry for localization while simultaneously building detailed point cloud representations of the environment. These point clouds were visualized in Rviz2, which let us assess the environment in 3D and helped the robot map and detect obstacles.
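As a sketch of consuming the point cloud output, the node below counts low-lying points in each incoming cloud. The /cloud_registered topic name is an assumption (check your Fast LIO configuration), and the sensor_msgs_py helper requires a recent ROS2 distribution such as Humble.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2
from sensor_msgs_py import point_cloud2


class CloudMonitor(Node):
    """Count points below a height threshold in each incoming cloud."""

    def __init__(self):
        super().__init__('cloud_monitor')
        # Topic name is a placeholder; Fast LIO publishes its registered
        # cloud on a topic such as '/cloud_registered' (check your config).
        self.create_subscription(PointCloud2, '/cloud_registered',
                                 self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        # Iterate over (x, y, z) fields, skipping NaNs from the driver.
        points = point_cloud2.read_points(
            msg, field_names=('x', 'y', 'z'), skip_nans=True)
        low = sum(1 for p in points if p[2] < 0.2)
        self.get_logger().info(f'{low} points below 0.2 m (candidate obstacles)')


def main():
    rclpy.init()
    rclpy.spin(CloudMonitor())


if __name__ == '__main__':
    main()
```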

Odometry

  • The odometry system was crucial to our project, providing the foundation for accurate localization and navigation within the ROS2 framework. It allowed the robot to determine its precise position and orientation relative to its environment. This was also visualized in Rviz2, which provided clear, smooth tracking of the robot's movement.
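For illustration, here is a minimal subscriber that extracts the planar pose (x, y, yaw) from an odometry message; the /odom topic name is the conventional default and an assumption here.

```python
import math

import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


def yaw_from_quaternion(q):
    """Extract yaw (rotation about z) from a geometry_msgs Quaternion."""
    siny_cosp = 2.0 * (q.w * q.z + q.x * q.y)
    cosy_cosp = 1.0 - 2.0 * (q.y * q.y + q.z * q.z)
    return math.atan2(siny_cosp, cosy_cosp)


class OdomMonitor(Node):
    """Log the robot's planar pose from incoming odometry."""

    def __init__(self):
        super().__init__('odom_monitor')
        # '/odom' is the conventional topic; Fast LIO's odometry topic
        # may differ depending on your launch configuration.
        self.create_subscription(Odometry, '/odom', self.on_odom, 10)

    def on_odom(self, msg: Odometry):
        p = msg.pose.pose.position
        yaw = yaw_from_quaternion(msg.pose.pose.orientation)
        self.get_logger().info(
            f'x={p.x:.2f} y={p.y:.2f} yaw={math.degrees(yaw):.1f} deg')


def main():
    rclpy.init()
    rclpy.spin(OdomMonitor())


if __name__ == '__main__':
    main()
```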

SLAM Toolbox

  • The SLAM Toolbox incorporates the 2D laser scan data to detect obstacles. Using TF transforms and laser scan data, it builds an occupancy grid that marks each cell as occupied (100), free (0), or unknown (-1). In simulation, the SLAM Toolbox also provides odometry data, so the robot's real-time position can be tracked and drawn on the map as it is built.
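As a sketch of how code can query such a grid, the helper below maps world coordinates to a cell value using the grid's origin and resolution. It is a simplified stand-in, not our actual map_generation.py code.

```python
from nav_msgs.msg import OccupancyGrid

OCCUPIED, FREE, UNKNOWN = 100, 0, -1


def cell_value(grid: OccupancyGrid, wx: float, wy: float) -> int:
    """Look up the occupancy value at world coordinates (wx, wy).

    Returns 100 (occupied), 0 (free), or -1 (unknown), matching the
    convention the SLAM Toolbox uses for its occupancy grid.
    """
    info = grid.info
    # Convert world coordinates to grid indices via the map origin/resolution.
    col = int((wx - info.origin.position.x) / info.resolution)
    row = int((wy - info.origin.position.y) / info.resolution)
    if not (0 <= col < info.width and 0 <= row < info.height):
        return UNKNOWN  # Outside the mapped area.
    # The grid data is a flat, row-major array.
    return grid.data[row * info.width + col]
```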

RViz

  • RViz is a visualization tool mainly used for visualizing 3D point cloud data and 2D laser scan data, monitoring occupancy grid data, and displaying odometry. It allowed us to monitor and debug the robot's mapping and localization in real time.

Gazebo

  • Gazebo is simulation software that lets us model the robot's sensors, movements, and environment without physical hardware. In Gazebo, we simulated the LiDAR with our RRT pathing and map generation scripts, and tested our controller script following the waypoints generated by the RRT pathing.



Hardware

  • 3D Printing: Camera Case & Stand, Jetson Nano Case, Base Mounts
  • Laser Cut: Base with 3mm holes to mount electronics and other components.

Parts List

  • Traxxas Chassis with steering servo and sensored brushless DC motor
  • Jetson Nano
  • LIVOX MID 360
  • Livox three-wire aviation connector
  • Aviation connector power network port cable
  • 64 GB Micro SD Card
  • Adapter for micro SD card
  • Wifi Antenna
  • Logitech Controller (F710)
  • OAK-D Lite Camera
  • SparkFun OpenLog Artemis (IMU)
  • VESC
  • XeRUn 3660 G2 Sensored Motor
  • Anti-Spark Switch with Power Switch
  • DC-DC Converter
  • 3 cell LIPO Battery
  • Battery Voltage Checker
  • DC Barrel to XT30 Connector
  • XT60, XT30, MR60 connectors

For Testing:

  • Car Stand
  • 5V, 4A power supply for Jetson Nano
  • USB-C to USB-A cable
  • Micro USB to USB cable

Mechanical Designs

Base Plate with 3mm Holes

Adjustable Camera Stand


GPS Mount/Stand

Circuit Diagram


Gantt Chart


How To Run

ssh -X [email protected]

Pulling Docker images (files)

  • cd projects/
  • ll
  • cd robocar/
  • docker ps -a
  • docker images
  • cat docker.sh

To enter the container:

  • docker start team11
  • docker exec -it team11 bash

To activate ROS2:

  • source /opt/ros/foxy/setup.bash
  • ros2 <command lines>

All the code has to be in the ros2_ws directory.

Clone https://github.com/hku-mars/FAST_LIO/tree/ROS2

  • Clone into src in the container (...):
  • cd ~/path/to/your/container/workspace/src
  • git clone -b ROS2 --single-branch https://github.com/hku-mars/FAST_LIO.git

OR

vcs import < livox.repos && vcs import < racer.repos

  • cd ~/path/to/your/container/workspace/src
  • vcs import < livox.repos
  • vcs import < racer.repos

To get the Nav2 stack:

Update the package index:

sudo apt update

Install the Nav2 packages:

sudo apt install ros-<ros-distro>-navigation2 ros-<ros-distro>-nav2-bringup
source /opt/ros/<ros-distro>/setup.bash

Install livox_ros_driver2 and Livox-SDK/Livox-SDK2

Installing Car Repos

robocar/repos/racer.repos:

Pointcloud->Laser scan

In this code, change the Triton AI reference to hku-mars/FAST_LIO…

  • Clone https://github.com/hku-mars/FAST_LIO/tree/ROS2
    • Change the IP address in livox_ros_driver2/config/MID360_config.json
    • Change line 28 to 192.168.1.124

Launching Lidar

Installing Livox-Pointcloud2

Launch it with: ros2 run livox_to_pointcloud2 livox_to_pointcloud2_node

Pointcloud-Laserscan

Important: Create a launch file to run all the ROS2 nodes at the same time; a sketch of such a file follows the example below.

  • Take a look at the example in /src/my_launch_package/launch, then run: ros2 launch my_launch_package <launch file name>
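For reference, here is a minimal sketch of what such a launch file could look like. The file name bringup.launch.py and the autonomous_pkg executables are placeholders borrowed from the simulation section below; adjust package and executable names to your workspace.

```python
# my_launch_package/launch/bringup.launch.py  (hypothetical file name)
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        # Convert Livox custom messages to standard PointCloud2.
        Node(package='livox_to_pointcloud2',
             executable='livox_to_pointcloud2_node'),
        # Our own nodes; package and executable names are placeholders
        # matching the simulation package described below.
        Node(package='autonomous_pkg', executable='map_generation.py'),
        Node(package='autonomous_pkg', executable='rrt_ompl.py'),
        Node(package='autonomous_pkg', executable='rpp_controller.py'),
    ])
```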

Map generation, RRT pathing, and RPP controller in simulated environment

  1. Install ROS2 Humble and Gazebo
  2. Install the required packages: slam_toolbox, ompl, tf2, nav-msgs, rviz2, and any required dependencies
  3. Create a package called autonomous_pkg in your ROS2 workspace and place the following files into its src folder: map_generation.py, rrt_ompl.py, rpp_controller.py
  4. Run colcon build and source install/setup.bash in each terminal used; you will need at least five terminals
  5. Run Gazebo with the following command: ros2 launch turtlebot3_gazebo empty_world.launch.py
    • Place obstacles in the desired locations; the red axis in Gazebo is the x-axis, the green axis is the y-axis
    • Keep in mind the default goal is the top-right corner of the map, (8, 8) in a 10 m x 10 m map (the goal can be changed in map_generation.py)
  6. Run SLAM Toolbox with the following command: ros2 launch slam_toolbox online_async_launch.py use_sim_time:=True
The output should read:
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [async_slam_toolbox_node-1]: process started with pid [2202921]
[async_slam_toolbox_node-1] [INFO] [1733721265.346647781] [slam_toolbox]: Node using stack size 40000000
[async_slam_toolbox_node-1] [INFO] [1733721265.456959936] [slam_toolbox]: Using solver plugin solver_plugins::CeresSolver
[async_slam_toolbox_node-1] [INFO] [1733721265.459165442] [slam_toolbox]: CeresSolver: Using SCHUR_JACOBI preconditioner.
  7. Run the map generation script: ros2 run autonomous_pkg map_generation.py
    • An empty costmap should appear, showing one point for the robot (with a circle around it representing its detection radius) and another point for the goal
  8. Run the controller script: ros2 run autonomous_pkg rpp_controller.py
  9. Run the RRT path planning script: ros2 run autonomous_pkg rrt_ompl.py (a minimal OMPL sketch follows this list)
    • A red line path should appear on the global costmap, leading to the goal
    • Every time a new obstacle cell is detected within the robot's radius, the RRT path replans
    • The robot should automatically follow the path until it reaches the goal, where it stops
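For reference, here is a minimal sketch of a 2D RRT query with OMPL's Python bindings. The bounds, start/goal coordinates, and the toy validity check are simplified stand-ins for what rrt_ompl.py does against the live costmap.

```python
from ompl import base as ob
from ompl import geometric as og


def is_state_valid(state):
    # Real code would index the occupancy grid here; this toy check
    # just blocks a square obstacle around (5, 5).
    return not (4.0 < state[0] < 6.0 and 4.0 < state[1] < 6.0)


space = ob.RealVectorStateSpace(2)
bounds = ob.RealVectorBounds(2)
bounds.setLow(0.0)
bounds.setHigh(10.0)          # 10 m x 10 m map, as in the default grid
space.setBounds(bounds)

ss = og.SimpleSetup(space)
ss.setStateValidityChecker(ob.StateValidityCheckerFn(is_state_valid))

start, goal = ob.State(space), ob.State(space)
start[0], start[1] = 1.0, 1.0
goal[0], goal[1] = 8.0, 8.0   # default goal from map_generation.py
ss.setStartAndGoalStates(start, goal)

ss.setPlanner(og.RRT(ss.getSpaceInformation()))
if ss.solve(1.0):             # plan for up to one second
    path = ss.getSolutionPath()
    path.interpolate(50)      # densify into waypoints for the controller
    print(path.printAsMatrix())
```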

Acknowledgements

Special thanks to Professor Jack Silberman for delivering the course!
Many thanks to TA Winston Chou and TA Alexander for their suggestions on our project!

Programs Reference:

README.md format referenced from spring-2024-final-project-team-7


License: MIT

Contacts
