In this work, we implemented several functions based on the DJI M300 drone and H20T camera for early wildfire point perception applications:
- The wildfire point is first segmented by a CNN-based network to provide semantic information for the other submodules in the framework.
- After the indoor calibration, the precise camera trajectory with correct scale is recovered by combining ORB-SLAM2 with the drone platform navigation information; the depth of the wildfire point is then estimated by triangulation (see the sketch after this list).
- A model-based visible-infrared image registration is proposed to fuse the two types of information and further reduce false positive alarms (see the registration sketch after the feature table below).
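As a rough illustration of the triangulation-based depth estimation listed in the feature table below, the sketch here recovers a fire point from two camera poses with the midpoint method. The function name, the Eigen dependency (a standard ORB-SLAM2 dependency), and all numeric values are assumptions for illustration, not the repository's actual implementation.

```cpp
// triangulate_fire_point.cpp -- illustrative sketch only, not the repo's code.
// Given two camera centres (from the scale-corrected trajectory) and the unit
// bearing rays towards the segmented wildfire pixel in each view, estimate the
// 3D fire point with the midpoint method and report its depth in the first view.
#include <Eigen/Dense>
#include <iostream>

// Returns the midpoint of the shortest segment between the two viewing rays.
// c1, c2 : camera centres in the world frame
// d1, d2 : unit bearing rays (world frame) towards the fire pixel
Eigen::Vector3d TriangulateMidpoint(const Eigen::Vector3d& c1, const Eigen::Vector3d& d1,
                                    const Eigen::Vector3d& c2, const Eigen::Vector3d& d2) {
  const Eigen::Vector3d b = c2 - c1;
  const double a = d1.dot(d2);
  const double denom = 1.0 - a * a;          // close to 0 when the rays are parallel
  const double t1 = (b.dot(d1) - a * b.dot(d2)) / denom;
  const double t2 = (a * b.dot(d1) - b.dot(d2)) / denom;
  return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2));
}

int main() {
  // Hypothetical poses: the drone observes the same fire point from two spots.
  Eigen::Vector3d c1(0.0, 0.0, 30.0), c2(10.0, 0.0, 30.0);
  Eigen::Vector3d d1 = Eigen::Vector3d(1.0, 0.0, -3.0).normalized();
  Eigen::Vector3d d2 = Eigen::Vector3d(0.0, 0.0, -1.0);

  const Eigen::Vector3d fire = TriangulateMidpoint(c1, d1, c2, d2);
  std::cout << "fire point (world): " << fire.transpose() << "\n"
            << "depth from view 1:  " << (fire - c1).norm() << " m\n";
}
```

In the pipeline described above, the camera centres would come from the scale-corrected ORB-SLAM2 trajectory and the bearing rays from back-projecting the segmented fire pixel through the calibrated camera intrinsics.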
| Features |
| --- |
| Attention gate U-net wildfire segmentation |
| Triangulation-based wildfire point depth estimation |
| Visible-infrared camera system calibration |
| Model-based wildfire point registration |
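As a hedged sketch of how the fused visible-infrared check can suppress false alarms, the snippet below warps a thermal frame into the visible image plane with a homography and keeps only fire-mask pixels that are also hot. The homography values, file names, and threshold are illustrative assumptions, and the repository's model-based registration may use a different transform model; OpenCV is assumed to be available.

```cpp
// fuse_ir_visible.cpp -- illustrative sketch only, not the repo's implementation.
// Idea: register the thermal frame to the visible frame, then require a
// candidate fire mask (from the CNN segmentation on the visible image) to
// overlap a hot region in the registered thermal image before raising an alarm.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
  // Hypothetical inputs: a visible-image fire mask and the matching IR frame.
  cv::Mat fire_mask = cv::imread("fire_mask.png", cv::IMREAD_GRAYSCALE);
  cv::Mat ir        = cv::imread("ir_frame.png",  cv::IMREAD_GRAYSCALE);
  if (fire_mask.empty() || ir.empty()) return 1;

  // Hypothetical homography from the calibrated visible-infrared camera pair.
  cv::Mat H_ir_to_vis = (cv::Mat_<double>(3, 3) <<
      1.02, 0.00, 12.5,
      0.00, 1.02, -8.0,
      0.00, 0.00,  1.0);

  // Register the IR frame to the visible image plane.
  cv::Mat ir_registered;
  cv::warpPerspective(ir, ir_registered, H_ir_to_vis, fire_mask.size());

  // Hot-region mask from the registered thermal image (threshold is illustrative).
  cv::Mat hot_mask;
  cv::threshold(ir_registered, hot_mask, 200, 255, cv::THRESH_BINARY);

  // Keep only fire-mask pixels that are also hot in the IR image.
  cv::Mat confirmed;
  cv::bitwise_and(fire_mask, hot_mask, confirmed);

  const int n_fire = cv::countNonZero(fire_mask);
  const int n_conf = cv::countNonZero(confirmed);
  std::cout << "confirmed fire pixels: " << n_conf << " / " << n_fire << std::endl;
  // A low overlap ratio suggests a false positive from visible-only segmentation.
  return (n_fire > 0 && n_conf * 2 < n_fire) ? 2 : 0;
}
```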
- We use the forest fire detection system to control the M300 and capture data along the flight trajectory.
- DJI M300 RTK
- DJI H20T Camera
- Nvidia NX onboard computer
For each submodule:

```bash
cd <submodule>
mkdir -p build && cd build
cmake -DCMAKE_EXPORT_COMPILE_COMMANDS=1 -DCMAKE_BUILD_TYPE=Release ..
make -j
```
The executable file will be generated under the `<submodule>/bin` directory.
Copyright (C) 2022 Concordia NAVlab. All rights reserved.