DroneAid uses machine learning to detect calls for help placed on the ground by those in need. At the heart of DroneAid is a Symbol Language that is used to train a visual recognition model. That model analyzes video from a drone to detect and count specific symbols. A dashboard can be used to plot those locations on a map and initiate a response.
DroneAid consists of several components:
- The DroneAid Symbol Language that represents need and quantities
- A mechanism for rendering the symbols in virtual reality to train a model
- The trained model that can be applied to drone livestream video
- A dashboard that renders the location of needs captured by a drone
The current implementation can be extended beyond a single drone to additional drones, airplanes, and satellites. The Symbol Language can also be used to train additional visual recognition implementations.
The original version of DroneAid was created by Pedro Cruz in August 2018. A refactored version was released as a Call for Code® with The Linux Foundation open source project in October 2019. DroneAid is currently hosted at The Linux Foundation.
- The DroneAid origin story
- DroneAid Symbol Language
- See it in action
- Use the pre-trained visual recognition model on the Symbol Language
- Setting up and training the model
- Frequently asked questions
- Project roadmap
- Built with
- Contributing
- Authors
- License
Pedro Cruz explains his inspiration for DroneAid, based on his experience in Puerto Rico after Hurricane Maria. He flew his drone around his neighborhood and saw handwritten messages indicating what people need and realized he could standardize a solution to provide a response.
The DroneAid Symbol Language provides a way for those affected by natural disasters to express their needs and make them visible to drones, planes, and satellites when traditional communications are not available.
Victims can use a pre-packaged symbol kit that has been manufactured and distributed to them, or recreate the symbols manually with whatever materials they have available.
These symbols include those below, which represent a subset of the icons provided by The United Nations Office for the Coordination of Humanitarian Affairs (OCHA). These can be complemented with numbers to quantify need, such as the number of people who need water.
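To illustrate how a detected symbol and an accompanying number might be combined into a structured report, here is a hypothetical sketch. The symbol names and meanings below are illustrative placeholders, not the project's actual identifiers:

```javascript
// Hypothetical mapping from detected symbols to their meanings.
// Names are illustrative; the real set derives from the OCHA icons.
const SYMBOLS = {
  SOS: 'Immediate help needed',
  WATER: 'Water needed',
  FOOD: 'Food needed',
  FIRST_AID: 'First aid needed',
  SHELTER: 'Shelter needed',
  OK: 'No help needed',
};

// Combine a detected symbol with an optional detected quantity,
// e.g. a "WATER" symbol next to the number 5.
function describeNeed(symbol, quantity) {
  const meaning = SYMBOLS[symbol];
  if (!meaning) throw new Error(`Unknown symbol: ${symbol}`);
  return quantity != null ? `${meaning} (quantity: ${quantity})` : meaning;
}
```

For example, `describeNeed('WATER', 5)` would yield a report that five people need water.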
A demonstration implementation takes the video stream of a DJI Tello drone and analyzes the frames to find and count symbols. See tello-demo for instructions on how to get it running.
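The counting step above can be sketched as follows. This is a minimal, hypothetical example of tallying per-frame detections (the `class` field and frame shape are assumptions, not the demo's actual data format); it keeps the highest simultaneous count per symbol so that the same symbol seen across many frames is not double-counted:

```javascript
// Hypothetical sketch: given an array of frames, where each frame is an
// array of detections like { class: 'WATER' }, return the maximum number
// of each symbol seen in any single frame.
function tallyDetections(frames) {
  const counts = {};
  for (const detections of frames) {
    // Count occurrences of each symbol within this frame.
    const perFrame = {};
    for (const d of detections) {
      perFrame[d.class] = (perFrame[d.class] || 0) + 1;
    }
    // Keep the per-symbol maximum across frames.
    for (const [symbol, n] of Object.entries(perFrame)) {
      counts[symbol] = Math.max(counts[symbol] || 0, n);
    }
  }
  return counts;
}
```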
See the TensorFlow.js example.
See the TensorFlow.js example deployed to Code Engine.
In order to train the model, we must place the symbols into simulated environments so that the system knows how to detect them in a variety of conditions (e.g., distorted, faded, or in low-light conditions).
See SETUP.md
See FAQ.md
See ROADMAP.md
- TensorFlow.js - Used to run inference in the browser
- Cloud Annotations - Used for training the model
- Lens Studio - Used to create the augmented reality and generate the imageset
Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting DroneAid pull requests.
This project is licensed under the Apache 2.0 License - see the LICENSE file for details.