diff --git a/2021Fall_AntNLP/README.md b/2021Fall_AntNLP/README.md
index bf5a6b4..066fc69 100644
--- a/2021Fall_AntNLP/README.md
+++ b/2021Fall_AntNLP/README.md
@@ -45,18 +45,17 @@ Week | Date | Speaker | Paper | Materials
4 |9.29 | 杨晰 |[ACL21]Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks
[ACL21]ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning |
5 |10.8 | 高怡 |[Survey] Some Papers about Few Shot Learning|
6 |10.15| 纪焘 |[arxiv20] Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning
[ACL21] COSY: COunterfactual SYntax for Cross-Lingual Understanding|
-7 |10.22 |刘宇芳 |[Survey] Shapley Value for Model Interpretability |
+7 |10.22 |刘宇芳 |[Survey] Shapley Value for Model Interpretability | [slides](https://github.com/AntNLP/seminar/tree/master/2021Fall_AntNLP/week7/1022.pdf)
8 |10.29 | 李鹏|[Survey] Certified Defenses: A Survey |
9 |11.5 | 雷钲仪 | |
10|11.12|黄子寅 | |
11|11.19|杜威 | |
12|11.26|周杰 | |
13|12.3 |王志承|[EMNLP21]Transformer Feed-Forward Layers Are Key-Value Memories
[EMNLP21]Knowledge Neurons in Pretrained Transformers|[slides](https://drive.google.com/file/d/1pGjWLM9xJbJAh7Qslatfz4EwGea1JMkw/view?usp=sharing)|
-14|12.10| | |
-15|12.17| | |
+14|12.10|高怡 |Mixup in Meta-Learning |[slides](https://github.com/AntNLP/seminar/tree/master/2021Fall_AntNLP/week14/1210.pptx)
+15|12.17|杨晰 | |
16|12.24|纪焘 | [Survey] Continual Lifelong Learning in NLP A Simple Survey | [slides](https://drive.google.com/file/d/1aM0bVLxKvrKKlMaI-X22ISsiMNzWndmr/view?usp=sharing)|
-17|12.31| | |
-18|1.7| | |
+17|12.31|刘宇芳 |Papers about Lottery Ticket Hypothesis | [slides](https://github.com/AntNLP/seminar/tree/master/2021Fall_AntNLP/week17/1231.pdf)
diff --git a/2021Fall_AntNLP/week14/1210.pptx b/2021Fall_AntNLP/week14/1210.pptx
new file mode 100644
index 0000000..422197c
Binary files /dev/null and b/2021Fall_AntNLP/week14/1210.pptx differ
diff --git a/2021Fall_AntNLP/week17/1231.pdf b/2021Fall_AntNLP/week17/1231.pdf
new file mode 100644
index 0000000..c5c9eb9
Binary files /dev/null and b/2021Fall_AntNLP/week17/1231.pdf differ
diff --git a/2021Fall_AntNLP/week7/1022.pdf b/2021Fall_AntNLP/week7/1022.pdf
new file mode 100644
index 0000000..adb11f8
Binary files /dev/null and b/2021Fall_AntNLP/week7/1022.pdf differ
diff --git a/2022Spring_AntNLP/README.md b/2022Spring_AntNLP/README.md
index 86a7074..f3789be 100644
--- a/2022Spring_AntNLP/README.md
+++ b/2022Spring_AntNLP/README.md
@@ -41,14 +41,14 @@ Welcome to AntNLP Seminar 2022 Spring. : )
| Week | Date | Speaker | Paper | Materials |
| ---- | ---- | ------- | ----- | --------- |
| 1 | 3.11 | 纪焘 | PRETRAINED LANGUAGE MODEL IN CONTINUAL LEARNING: A COMPARATIVE STUDY<br>[TACL2021]Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week1) |
-| 2 | 3.18 | 刘宇芳 | Dataset Distillat<br>[ICLR2021]DATASET CONDENSATION WITH GRADIENT MATCHING | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week2) |
-| 3 | 3.25 | 高怡 | | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week3) |
+| 2 | 3.18 | 刘宇芳 | Papers about Dataset Distillation | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week2) |
+| 3 | 3.25 | 高怡 | Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation<br>On Episodes, Prototypical Networks, and Few-Shot Learning<br>TASK2VEC: Task Embedding for Meta-Learning | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week3) |
| 4 | 4.1 | 杨晰 | [EMNLP19] Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks<br>[EMNLP20] Inducing Target-Specific Latent Structures for Aspect Sentiment Classification | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week4) |
| 5 | 4.8 | 杜威 |[EMNLP21]Zero-Shot Information Extraction as a Unified Text-to-Triple Translation | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week5) |
| 6 | 4.15 | 王志承 |[ICLR2018]Measuring the Intrinsic Dimension of Objective Landscapes<br>[ACL2021]Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week6) |
| 7 | 4.22 | 刘宇芳 | [ICML2020]Certified Data Removal from Machine Learning Models<br>[AAAI2022]Hard to Forget: Poisoning Attacks on Certified Machine Unlearning<br>[AISTATS2021]Approximate Data Deletion from Machine Learning Models | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week7) |
| 8 | 4.29 | 纪焘 | [ACL2022]Knowledge Neurons in Pretrained Transformers<br>[EMNLP21]MultiEURLEX – A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer<br>[ACL2022]Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week8) |
-| 9 | 5.6 | 高怡 | | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week9) |
+| 9 | 5.6 | 高怡 |PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models<br>Noisy Channel Language Model Prompting for Few-Shot Text Classification<br>PILED: An Identify-and-Localize Framework for Few-Shot Event Detection | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week9) |
| 10 | 5.13 | 杨晰 | [ACL18]Neural Open Information Extraction<br>[EMNLP20]Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction<br>[EMNLP20]OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction<br>[EMNLP21]Maximal Clique Based Non-Autoregressive Open Information Extraction | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week10) |
| 11 | 5.20 | 李鹏 | | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week11) |
| 12 | 5.27 | 杜威 |[EMNLP16] Creating a Large Benchmark for Open Information Extraction<br>[EMNLP20] Multi2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week12) |
diff --git a/2022Spring_AntNLP/week3/3-25.pptx b/2022Spring_AntNLP/week3/3-25.pptx
new file mode 100644
index 0000000..be3d986
Binary files /dev/null and b/2022Spring_AntNLP/week3/3-25.pptx differ
diff --git a/2022Spring_AntNLP/week9/520.pptx b/2022Spring_AntNLP/week9/520.pptx
new file mode 100644
index 0000000..cba3f4a
Binary files /dev/null and b/2022Spring_AntNLP/week9/520.pptx differ