AI #19
-
Hi, I will see what I can do.
-
Also, I forgot to ask: do you want/need to bypass the PID controllers?
-
That's a really good point about the video subtitles; I didn't think of that. As far as I can see, they don't include orientation (the 3D orientation of the drone and camera), and the time resolution is 1 s. On the other hand, I don't think the PID controllers need to be bypassed for now. Something in between would be ideal: full position and orientation data at a reasonable frequency. I'm not sure whether pulling from the drone's black box is needed. It sounds pretty cool, but if the data are available from the standard API, I think that would suffice for now. Of course, if we move to online direct prediction of flight paths, more detailed, high-frequency data may be needed.
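If we do end up using the subtitles, pulling the values out should be straightforward. A minimal, untested sketch, assuming a DJI-style .SRT file where telemetry appears as bracketed `key: value` pairs (the exact layout and field names vary by drone model, so everything below is an assumption):

```python
import re
from pathlib import Path

# Hypothetical parser for DJI-style .SRT telemetry. The bracketed
# [key: value] layout and the field names are assumptions; the format
# differs between drone models and firmware versions.
PAIR_RE = re.compile(r"\[(\w+)\s*:\s*(-?[\d.]+)\]")

def parse_srt_telemetry(path):
    """Yield one dict of numeric fields per subtitle block."""
    for block in Path(path).read_text().split("\n\n"):
        fields = {k: float(v) for k, v in PAIR_RE.findall(block)}
        if fields:
            yield fields

for sample in parse_srt_telemetry("DJI_0001.SRT"):  # placeholder file name
    print(sample.get("latitude"), sample.get("longitude"))
```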
-
That would be great, indeed. I was also thinking that the subtitles could be used for the sync.
-
Following up on the discussions in #14
My first idea in this direction would be to 1) run a depth estimation network such as https://github.com/isl-org/MiDaS on the video feed from the drone. Subsequently, 2) save flight data (synchronized video and telemetry streams) and use this as training data, either to train a new network unsupervised (exploiting the constraint that depth estimates should be consistent across different camera positions) or to e.g. fine-tune MiDaS on the saved data.
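To make step 1) concrete: a minimal sketch following the torch.hub usage from the MiDaS README, applied frame by frame to a video source (the file name is a placeholder; on the drone it would be the live stream instead):

```python
import cv2
import torch

# Load the small MiDaS variant and its matching input transform,
# as documented in the MiDaS README.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")
transform = transforms.small_transform

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
midas.to(device).eval()

cap = cv2.VideoCapture("flight.mp4")  # placeholder video source
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transform(rgb).to(device))
        # Resize the prediction back to the original frame size.
        pred = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=rgb.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()
    depth = pred.cpu().numpy()
    cv2.imshow("depth", depth / depth.max())  # crude visualization
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```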
The depth estimation ideas above would then be first steps towards actual autonomous control. This could e.g. be followed by direct prediction of flight paths from camera input, similar to https://www.science.org/doi/10.1126/scirobotics.abg5810
For 1), I was thinking of capturing the UDP stream on a laptop connected to the phone over WiFi, similar to when the stream is displayed in an external QGC. I already have the depth estimation running in a Python script with video input from a Tello drone, so it should be a fairly easy setup if the UDP streaming works.
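A rough sketch of the capture side (untested; the port is a guess based on QGC's default video port 5600, and if the stream is RTP rather than raw H.264 over UDP, OpenCV/FFmpeg may need an .sdp file as the source instead):

```python
import cv2

# Assumption: the phone forwards the drone video to this laptop as an
# H.264 stream over UDP; 5600 (QGC's default video port) is a guess.
cap = cv2.VideoCapture("udp://0.0.0.0:5600", cv2.CAP_FFMPEG)

while True:
    ok, frame = cap.read()
    if not ok:
        continue  # UDP frames can drop; just keep waiting
    cv2.imshow("drone feed", frame)  # the depth network would run here
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
```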
For 2), telemetry data would be required as well. Here, I was thinking of polling it in a dronekit script. I believe this should work; the only issue may be synchronization with the video stream (the camera position info and the video input need to be synchronized).
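A sketch of the polling side (the connection string is an assumption for whatever MAVLink endpoint RosettaDrone exposes; the key point is stamping every sample with wall-clock time so it can be matched against video frames later):

```python
import csv
import time
from dronekit import connect

# Assumption: RosettaDrone forwards MAVLink to this address/port;
# adjust the connection string to the actual setup.
vehicle = connect("udp:0.0.0.0:14550", wait_ready=True)

with open("telemetry.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["t", "lat", "lon", "alt", "roll", "pitch", "yaw"])
    for _ in range(6000):  # ~10 minutes at 10 Hz
        loc = vehicle.location.global_relative_frame
        att = vehicle.attitude
        writer.writerow([time.time(), loc.lat, loc.lon, loc.alt,
                         att.roll, att.pitch, att.yaw])
        # Polling rate; the effective update rate is bounded by the
        # telemetry stream itself.
        time.sleep(0.1)
vehicle.close()
```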
It could potentially be better to acquire the data directly in RosettaDrone, as @m4xw suggested in the issue above. Perhaps a downsampled stream would be more stable and would make time synchronization easy. Any input on this would be great.
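On the sync point: if both sides carry wall-clock timestamps, a nearest-neighbour match is probably enough for a first pass. A sketch (the 0.1 s tolerance is an arbitrary choice):

```python
import numpy as np

def match_telemetry(frame_times, telem_times, tol=0.1):
    """For each frame timestamp, return the index of the nearest telemetry
    sample, or -1 if none lies within tol seconds. Assumes telem_times is
    sorted and has at least two entries."""
    frame_times = np.asarray(frame_times)
    telem_times = np.asarray(telem_times)
    idx = np.searchsorted(telem_times, frame_times)
    idx = np.clip(idx, 1, len(telem_times) - 1)
    # Pick the closer of the two neighbouring telemetry samples.
    left, right = telem_times[idx - 1], telem_times[idx]
    idx -= frame_times - left < right - frame_times
    idx[np.abs(telem_times[idx] - frame_times) > tol] = -1
    return idx
```

If the two clocks drift (phone vs. laptop), a constant offset estimated from a known shared event, e.g. takeoff, could be subtracted first.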