This is part of the official PyTorch implementation of the paper:
SkillDiffuser: Interpretable Hierarchical Planning via Skill Abstractions in Diffusion-Based Task Execution
Zhixuan Liang, Yao Mu, Hengbo Ma, Masayoshi Tomizuka, Mingyu Ding, Ping Luo
CVPR 2024
SkillDiffuser is a hierarchical planning model that combines interpretable skill abstractions at the high level with a skill-conditioned diffusion model at the low level for task execution in multi-task learning environments. High-level skill abstraction is achieved through a skill predictor and a vector quantization operation, which produce discrete sub-goals (a skill set) on which the diffusion model conditions to generate appropriate future states. The predicted future states are then converted to actions by an inverse dynamics model. This design yields a shared underlying planner across different tasks, with only the inverse dynamics model varying per task.
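For orientation, below is a minimal PyTorch sketch of the three components described above: a skill predictor, a vector-quantized skill codebook, and an inverse dynamics model. All module names, network sizes, and dimensions are illustrative assumptions and do not reflect the actual implementation in this repo; the skill-conditioned diffusion planner itself is omitted here.

```python
import torch
import torch.nn as nn


class SkillPredictor(nn.Module):
    """Maps observation + language context to a continuous skill embedding (illustrative shapes)."""
    def __init__(self, obs_dim, lang_dim, skill_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + lang_dim, 256), nn.ReLU(),
            nn.Linear(256, skill_dim),
        )

    def forward(self, obs, lang):
        return self.net(torch.cat([obs, lang], dim=-1))


class VectorQuantizer(nn.Module):
    """Nearest-neighbour lookup into a learnable codebook of discrete skills."""
    def __init__(self, num_skills, skill_dim):
        super().__init__()
        self.codebook = nn.Embedding(num_skills, skill_dim)

    def forward(self, z):
        d = torch.cdist(z, self.codebook.weight)   # distances to every code vector, (B, num_skills)
        idx = d.argmin(dim=-1)                     # interpretable discrete skill index
        z_q = self.codebook(idx)
        z_q = z + (z_q - z).detach()               # straight-through estimator for gradients
        return z_q, idx


class InverseDynamics(nn.Module):
    """Recovers an action from a pair of consecutive states."""
    def __init__(self, obs_dim, act_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * obs_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, s_t, s_next):
        return self.net(torch.cat([s_t, s_next], dim=-1))


if __name__ == "__main__":
    # hypothetical dimensions, chosen only for the demo
    obs_dim, lang_dim, act_dim, skill_dim, num_skills = 39, 128, 4, 16, 20
    predictor = SkillPredictor(obs_dim, lang_dim, skill_dim)
    quantizer = VectorQuantizer(num_skills, skill_dim)
    inv_dyn = InverseDynamics(obs_dim, act_dim)

    obs, lang = torch.randn(8, obs_dim), torch.randn(8, lang_dim)
    z_q, skill_idx = quantizer(predictor(obs, lang))   # discrete, interpretable skills
    # z_q would condition the low-level diffusion planner (not shown) to generate
    # future states; here we use random placeholder states instead.
    s_t, s_next = torch.randn(8, obs_dim), torch.randn(8, obs_dim)
    actions = inv_dyn(s_t, s_next)
    print(skill_idx.shape, actions.shape)
```

The straight-through estimator in the quantizer is a standard way to let gradients bypass the non-differentiable codebook lookup; refer to the code in the skilldiffuser folder for the actual architectures and training losses.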
@article{liang2023skilldiffuser,
  title={SkillDiffuser: Interpretable Hierarchical Planning via Skill Abstractions in Diffusion-Based Task Execution},
  author={Liang, Zhixuan and Mu, Yao and Ma, Hengbo and Tomizuka, Masayoshi and Ding, Mingyu and Luo, Ping},
  journal={arXiv preprint arXiv:2312.11598},
  year={2023}
}
To install and use SkillDiffuser, follow the instructions provided in the skilldiffuser folder.
The diffusion model implementation is based on Michael Janner's diffuser repo. The organization of this repo and the remote launcher are based on the LISA repo.
Please email us if you have any questions.
Zhixuan Liang ([email protected])