This repository contains an implementation of the deep RL method proposed in "Deep Reinforcement Learning for Frontal View Person Shooting using Drones". The following are supplied:
- An OpenAI gym-compliant environment that can be used directly with keras-rl. The environment uses the HPID dataset to simulate the effect of the camera control commands (up, down, left, right, stay).
- Code to train and evaluate an RL agent that controls the camera to perform frontal shooting.
- A pre-trained model.
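To illustrate the gym-style interface the environment exposes, here is a minimal, self-contained sketch. Everything in it is hypothetical: the class name, the toy state/reward, and the tabular Q-learning agent (a stand-in for the actual deep RL agent) are illustrative assumptions, not code from this repository — the real environment renders observations from the HPID dataset.

```python
import random

# Hypothetical sketch, NOT the repository's environment: a toy
# gym-style camera-control task with the same discrete actions.
ACTIONS = ("up", "down", "left", "right", "stay")

class ToyCameraEnv:
    """State = camera offset (dx, dy) from a frontal view.
    Reward is higher the closer the camera is to frontal."""

    def __init__(self, max_steps=30):
        self.max_steps = max_steps

    def reset(self):
        self.dx = random.randint(-3, 3)
        self.dy = random.randint(-3, 3)
        self.t = 0
        return (self.dx, self.dy)

    def step(self, action):
        name = ACTIONS[action]
        self.dx += {"left": -1, "right": 1}.get(name, 0)
        self.dy += {"up": -1, "down": 1}.get(name, 0)
        self.t += 1
        reward = -(abs(self.dx) + abs(self.dy))  # 0 at a frontal view
        done = (self.dx, self.dy) == (0, 0) or self.t >= self.max_steps
        return (self.dx, self.dy), reward, done, {}

def train_q(env, episodes=500, alpha=0.5, gamma=0.9, eps=0.1):
    """Plain tabular Q-learning, standing in for the DQN agent."""
    q = {}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda b: q.get((s, b), 0.0))
            s2, r, done, _ = env.step(a)
            best_next = max(q.get((s2, b), 0.0) for b in range(len(ACTIONS)))
            old = q.get((s, a), 0.0)
            q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
    return q

def greedy_episode(env, q):
    """Evaluate the learned greedy policy for one episode."""
    s, done, total = env.reset(), False, 0
    while not done:
        a = max(range(len(ACTIONS)), key=lambda b: q.get((s, b), 0.0))
        s, r, done, _ = env.step(a)
        total += r
    return total
```

Since the environment follows the standard gym `reset`/`step` contract, the same loop structure applies whether the agent is this toy Q-table or a keras-rl deep agent.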
To run the code:
- Install the required dependencies (Python 3.6 was used for training/testing the models):
pip3 install tensorflow-gpu keras keras-rl gym
Also install the Python bindings for OpenCV (or replace the calls to the OpenCV library if you do not want to use OpenCV).
- Download the HPID dataset to 'data/datasets'.
- Run the preprocess_dataset.py script to create the dataset pickle.
- (Optionally) train the model by running the train.py script.
- Evaluate the model by running the evaluate.py script.
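The preprocessing step above serializes the dataset into a pickle that the later scripts read back. As a hedged sketch of that pattern (the actual file name and internal structure produced by preprocess_dataset.py are assumptions, not taken from the repository):

```python
import os
import pickle
import tempfile

# Hypothetical layout; the real pickle produced by
# preprocess_dataset.py may be structured differently.
dataset = {"images": [], "annotations": []}

path = os.path.join(tempfile.gettempdir(), "dataset.pickle")
with open(path, "wb") as f:
    pickle.dump(dataset, f)

# A script such as train.py or evaluate.py might read it back like this.
with open(path, "rb") as f:
    loaded = pickle.load(f)
```

Pickling the preprocessed data once avoids re-decoding the raw HPID images on every training or evaluation run.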
Note that the evaluation function also supports interactive evaluation, which makes it easier to examine the agent's behavior.
If you use this code in your work please cite the following paper:
@inproceedings{frontal-rl,
  title     = "Deep Reinforcement Learning for Frontal View Person Shooting using Drones",
  author    = "Passalis, Nikolaos and Tefas, Anastasios",
  booktitle = "Proceedings of the IEEE Conference on Evolving and Adaptive Intelligent Systems (to appear)",
  year      = "2018"
}
Also, check my website for more projects!
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 731667 (MULTIDRONE). This publication reflects the authors’ views only. The European Commission is not responsible for any use that may be made of the information it contains.