
Multimodal Model-Agnostic Meta-Learning for Reinforcement Learning

This project is an implementation of Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation, published at NeurIPS 2019. Visit the project page for more information. Code for the classification experiments can be found at MMAML-Classification.

Model-agnostic meta-learners aim to acquire meta-prior parameters from a distribution of tasks and adapt to novel tasks with few gradient updates. However, seeking a single initialization shared across the entire task distribution substantially limits the diversity of the task distributions they can learn from. We propose a multimodal MAML (MMAML) framework that modulates its meta-learned prior according to the identified task mode, enabling more efficient fast adaptation.
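As a rough intuition, task-aware modulation can be pictured as feature-wise conditioning of the policy network: a task encoder summarizes trajectories from a new task into an embedding, and that embedding produces scale and shift vectors that condition the meta-learned policy before the usual gradient-based adaptation steps. The sketch below is only an illustration of this idea under that assumption; the class name, dimensions, and structure are made up and are not the repository's actual implementation.

import torch
import torch.nn as nn

class ModulatedPolicy(nn.Module):
    """Toy policy whose hidden features are modulated by a task embedding (illustrative only)."""

    def __init__(self, obs_dim, act_dim, hidden_dim=64, embed_dim=32):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, act_dim)
        # Modulation network: maps the task embedding to per-unit scale and shift vectors.
        self.modulation = nn.Linear(embed_dim, 2 * hidden_dim)

    def forward(self, obs, task_embedding):
        scale, shift = self.modulation(task_embedding).chunk(2, dim=-1)
        h = torch.tanh(self.fc1(obs))
        h = h * scale + shift   # task-aware modulation of the meta-learned prior
        return self.fc2(h)      # action mean; gradient steps then adapt within the identified mode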

This implementation is based on and includes code from ProMP.

Installation / Dependencies

The code can be run in Anaconda or Virtualenv environments. For other installation methods refer to the ProMP repository.

Using Anaconda or Virtualenv

1. Installing MPI

Ensure that you have a working MPI implementation (see here for more instructions).

For Ubuntu you can install MPI through the package manager:

sudo apt-get install libopenmpi-dev
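To verify that MPI was installed correctly, you can check that the launcher is on your path (an optional check, not part of the original instructions):

mpirun --version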
2. Create either a venv or a conda environment and activate it
Virtualenv
pip install --upgrade virtualenv
virtualenv <venv-name>
source <venv-name>/bin/activate
Anaconda

If you have not done so yet, install Anaconda by following the instructions here. Then create an Anaconda environment, activate it, and install the requirements from requirements.txt.

conda create -n <env-name> python=3.6
source activate <env-name>
3. Install the required Python dependencies
pip install -r requirements.txt
4. Set up the Mujoco physics engine and mujoco-py

Most of the provided meta-RL environments require the Mujoco physics engine and its Python wrapper, mujoco-py. To set up Mujoco and mujoco-py, please follow the instructions here.
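After completing the setup, a quick sanity check (optional; not part of the repository) is to import the wrapper inside the environment you just created:

# Quick sanity check: importing mujoco_py will fail if the Mujoco binaries
# or the license key are not set up correctly.
import mujoco_py
print("mujoco-py imported from", mujoco_py.__file__)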

Running

Use the following commands to run the MMAML-RL algorithm.

python run_scripts/mumo_run_point_mass.py --config_file configs/point_env_momentum_dense.json
python run_scripts/mumo_run_mujoco.py --config_file configs/reacher.json
python run_scripts/mumo_run_mujoco.py --config configs/ant_rand_goal_mode.json

Use the following commands to run the ProMP algorithm.

python run_scripts/pro-mp_run_point_mass.py --config_file configs/point_env_momentum_dense.json
python run_scripts/pro-mp_run_mujoco.py --config_file configs/reacher.json
python run_scripts/pro-mp_run_mujoco.py --config configs/ant_rand_goal_mode.json

Results

Please check out our paper for comprehensive results.

Related work

Cite the paper

If you find this work useful, please cite:

@inproceedings{vuorio2019multimodal,
  title={Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation},
  author={Vuorio, Risto and Sun, Shao-Hua and Hu, Hexiang and Lim, Joseph J.},
  booktitle={Neural Information Processing Systems},
  year={2019},
}
