diff --git a/.nojekyll b/.nojekyll
new file mode 100644
index 0000000..8b13789
--- /dev/null
+++ b/.nojekyll
@@ -0,0 +1 @@
+
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..52908a3
--- /dev/null
+++ b/README.md
@@ -0,0 +1,48 @@
+# Academic Project Page Template
+This is an academic paper project page template.
+
+
+Example project pages built using this template are:
+- https://vision.huji.ac.il/spectral_detuning/
+- https://vision.huji.ac.il/podd/
+- https://dreamix-video-editing.github.io
+- https://vision.huji.ac.il/conffusion/
+- https://vision.huji.ac.il/3d_ads/
+- https://vision.huji.ac.il/ssrl_ad/
+- https://vision.huji.ac.il/deepsim/
+
+
+
+## Start using the template
+To start using the template, click on `Use this Template`.
+
+The template uses HTML to control the content and CSS to control the style.
+To edit the website's contents, edit the `index.html` file. It contains different HTML "building blocks"; use whichever ones you need and comment out the rest.
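+
+For example, an unused building block can be disabled with an HTML comment (the section names and classes below are illustrative, not the template's exact markup):
+
+```html
+<!-- Image carousel: keep this block -->
+<section class="hero is-small">
+  ...
+</section>
+
+<!-- PDF poster: not needed for this paper, so the whole block is commented out
+<section class="hero is-small">
+  ...
+</section>
+-->
+```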
+
+**IMPORTANT!** Make sure to replace the `favicon.ico` under `static/images/` with one of your own, otherwise your favicon is going to be a DreamBooth image of me.
+
+## Components
+- Teaser video
+- Image Carousel
+- YouTube embedding
+- Video Carousel
+- PDF Poster
+- BibTeX citation
+
+## Tips:
+- The `index.html` file contains comments instructing you what to replace; follow these comments.
+- The `meta` tags in the `index.html` file provide metadata about your paper
+(e.g. helping search engines index the website, showing a preview image when sharing the website, etc.)
+- A resolution of around 1920-2048 pixels is usually sufficient for images and videos; higher resolutions are rarely needed and only take longer to load.
+- All the images and videos you use should be compressed to allow for fast loading of the website (and thus better indexing by search engines). For images, you can use [TinyPNG](https://tinypng.com); for videos, you need to find a tradeoff between size and quality.
+- When using large video files (larger than 10MB), it's better to host the video on YouTube, as serving the video from the website can take time.
+- Using a tracker can help you analyze the traffic and see where users came from. [statcounter](https://statcounter.com) is a free, easy to use tracker that takes under 5 minutes to set up.
+- This project page can also be made into a GitHub Pages website.
+- Replace the favicon with one of your choosing (the default one is of the Hebrew University).
+- Suggestions, improvements, and comments are welcome; simply open an issue or contact me. You can find my contact information at [https://pages.cs.huji.ac.il/eliahu-horwitz/](https://pages.cs.huji.ac.il/eliahu-horwitz/)
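+
+For the `meta` tags mentioned in the tips above, the usual candidates to fill in are the standard description tag plus the Open Graph and Twitter card tags, which control the preview shown when the page is shared (the values below are placeholders):
+
+```html
+<meta name="description" content="One-sentence summary of the paper.">
+<meta property="og:title" content="Paper Title">
+<meta property="og:description" content="One-sentence summary of the paper.">
+<meta property="og:image" content="static/images/preview.png">
+<meta name="twitter:card" content="summary_large_image">
+```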
+
+## Acknowledgments
+Parts of this project page were adapted from the [Nerfies](https://nerfies.github.io/) page.
+
+## Website License
+
+This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
diff --git a/index.html b/index.html
new file mode 100644
index 0000000..39081d0
--- /dev/null
+++ b/index.html
@@ -0,0 +1,359 @@
+
+
+
+
+8th Conference on Robot Learning (CoRL 2024), Munich, Germany
+Bimanual manipulation presents unique challenges compared to unimanual tasks due to the complexity of coordinating two robotic arms. In this paper, we introduce InterACT: Inter-dependency aware Action Chunking with Hierarchical Attention Transformers, a novel imitation learning framework designed specifically for bimanual manipulation. InterACT leverages hierarchical attention mechanisms to effectively capture inter-dependencies between dual-arm joint states and visual inputs. The framework comprises a Hierarchical Attention Encoder, which processes multi-modal inputs through segment-wise and cross-segment attention mechanisms, and a Multi-arm Decoder that refines individual action predictions by providing the other arm's intermediate output as context via synchronization blocks. Our experiments, conducted on various simulated and real-world bimanual manipulation tasks, demonstrate that InterACT outperforms existing methods. Detailed ablation studies further validate the significance of key components, including the impact of CLS tokens, cross-segment encoders, and synchronization blocks on task performance.
+The Hierarchical Attention Encoder consists of multiple blocks of segment-wise encoders and a cross-segment encoder. The output is passed through the Multi-arm Decoder, which consists of Arm1- and Arm2-specific decoders that process the input segments independently. The synchronization block allows for information sharing between the two decoders.
+Success rate (%) for tasks adapted from ACT (top) and our original tasks (bottom). For the simulation tasks, the data used to train the model came from human demonstrations, and we averaged the results across 3 random seeds with 50 episodes each. The real-world tasks were also evaluated over 50 episodes.
+@article{lee2024interact,
+ title={InterACT: Inter-dependency Aware Action Chunking with Hierarchical Attention Transformers for Bimanual Manipulation},
+ author={Lee, Andrew and Chuang, Ian and Chen, Ling-Yuan and Soltani, Iman},
+ journal={arXiv preprint arXiv:2409.07914},
+ year={2024}
+}
+