PySC2 agents

This is a simple implementation of DeepMind's PySC2 RL agents. The agents are defined following the original paper: they take all feature maps and the structured information as input and predict both actions and their arguments with an A3C algorithm.
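
For orientation, below is a minimal, schematic sketch of this kind of actor-critic network. It is not the code in this repository, and the layer sizes, channel counts, and action-set size are illustrative assumptions only (TensorFlow 2.x / Keras API assumed).

# Schematic sketch (not the repo's actual model): a shared convolutional trunk
# over the screen feature maps, combined with structured (non-spatial) features,
# feeding an action policy, a spatial argument policy, and a value head.
import tensorflow as tf
from tensorflow.keras import layers

def build_actor_critic(screen_size=64, screen_channels=17,
                       flat_size=11, num_actions=524):
    screen = tf.keras.Input((screen_size, screen_size, screen_channels))
    flat = tf.keras.Input((flat_size,))

    # Shared conv trunk over the stacked screen feature maps.
    x = layers.Conv2D(16, 5, padding="same", activation="relu")(screen)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)

    # Spatial argument policy: one probability per screen location.
    spatial_policy = layers.Softmax()(layers.Flatten()(layers.Conv2D(1, 1)(x)))

    # Non-spatial branch: conv features plus structured information.
    h = layers.Concatenate()([layers.Flatten()(x), flat])
    h = layers.Dense(256, activation="relu")(h)
    action_policy = layers.Dense(num_actions, activation="softmax")(h)
    value = layers.Dense(1)(h)  # state-value estimate for the critic

    return tf.keras.Model([screen, flat], [action_policy, spatial_policy, value])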

Requirements

  • PySC2 is DeepMind's StarCraft II learning environment. It provides an interface for RL agents to interact with StarCraft II, receiving observations and sending actions. Follow the instructions in the PySC2 repo to install it.

  • TensorFlow may be the only missing Python package. If pip is set up on your system, it can be installed by running

pip install tensorflow-gpu
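
As an optional sanity check (not part of the repo), you can confirm that both dependencies import cleanly; the snippet below only prints the TensorFlow version.

# Optional sanity check: confirm PySC2 and TensorFlow are importable.
import pysc2
import tensorflow as tf
print("pysc2 imported, tensorflow", tf.__version__)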

Getting Started

Clone this repo:

git clone https://github.com/xhujoy/pysc2-agents
cd pysc2-agents

Testing

  • Download the pretrained model from here and extract it to ./snapshot/.

  • Test the pretrained model:

python -m main --map=MoveToBeacon --training=False
  • You will get the following results for the different maps:

                 MoveToBeacon   CollectMineralShards   DefeatRoaches
    Mean Score       ~25                ~62                 ~87
    Max Score         31                 97                 371

Training

Train a model by yourself:

python -m main --map=MoveToBeacon

Notes

  • Unlike the original A3C algorithm, we replace the policy (entropy) penalty term with epsilon-greedy exploration; a sketch follows this list.
  • When training a model yourself, it is best to run the training several times and keep the best result. If you obtain better results than ours, we would be grateful if you shared them with us.
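
The sketch below illustrates what epsilon-greedy exploration over the currently available PySC2 actions looks like. Here `policy` stands for the agent's softmax output over all action ids and `available_action_ids` for a NumPy array of valid action ids; the names are illustrative, not the repository's actual variables.

# Minimal epsilon-greedy sketch over the currently available actions.
import numpy as np

def select_action(policy, available_action_ids, epsilon=0.05):
    if np.random.rand() < epsilon:
        # Explore: pick a valid action uniformly at random.
        return int(np.random.choice(available_action_ids))
    # Exploit: pick the valid action with the highest policy probability.
    valid_probs = policy[available_action_ids]
    return int(available_action_ids[int(np.argmax(valid_probs))])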

Licensed under The MIT License.
