Commit

Permalink
add:week
Yufang-Liu committed Nov 2, 2022
1 parent a7a8438 commit efc2cca
Showing 7 changed files with 7 additions and 8 deletions.
9 changes: 4 additions & 5 deletions 2021Fall_AntNLP/README.md
@@ -45,18 +45,17 @@ Week | Date | Speaker | Paper | Materials
4 |9.29 | 杨晰 |[ACL21]Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks <br />[ACL21]ERICA: Improving Entity and Relation Understanding for Pre-trained Language Models via Contrastive Learning |
5 |10.8 | 高怡 |[Survey] Some Papers about Few Shot Learning|
6 |10.15| 纪焘 |[arxiv20] Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning<br /> [ACL21] COSY: COunterfactual SYntax for Cross-Lingual Understanding|
-7 |10.22 |刘宇芳 |[Survey] Shapley Value for Model Interpretability |
+7 |10.22 |刘宇芳 |[Survey] Shapley Value for Model Interpretability | [Slides](https://github.com/AntNLP/seminar/tree/master/2021Fall_AntNLP/week7/1022.pdf)
8 |10.29 | 李鹏|[Survey] Certified Defenses: A Survey |
9 |11.5 | 雷钲仪 | |
10|11.12|黄子寅 | |
11|11.19|杜威 | |
12|11.26|周杰 | |
13|12.3 |王志承|[EMNLP21]Transformer Feed-Forward Layers Are Key-Value Memories <br />[EMNLP21]Knowledge Neurons in Pretrained Transformers|[slides](https://drive.google.com/file/d/1pGjWLM9xJbJAh7Qslatfz4EwGea1JMkw/view?usp=sharing)|
-14|12.10| | |
-15|12.17| | |
+14|12.10|高怡 |Mixup in Meta-Learning |[slides](https://github.com/AntNLP/seminar/tree/master/2021Fall_AntNLP/week14/1210.pptx)
+15|12.17|杨晰 | |
16|12.24|纪焘 | [Survey] Continual Lifelong Learning in NLP A Simple Survey | [slides](https://drive.google.com/file/d/1aM0bVLxKvrKKlMaI-X22ISsiMNzWndmr/view?usp=sharing)|
-17|12.31| | |
-18|1.7| | |
+17|12.31|刘宇芳 |Papers about Lottery Ticket Hypothesis | [slides](https://github.com/AntNLP/seminar/tree/master/2021Fall_AntNLP/week17/1231.pdf)



Binary file added 2021Fall_AntNLP/week14/1210.pptx
Binary file not shown.
Binary file added 2021Fall_AntNLP/week17/1231.pdf
Binary file not shown.
Binary file added 2021Fall_AntNLP/week7/1022.pdf
Binary file not shown.
6 changes: 3 additions & 3 deletions 2022Spring_AntNLP/README.md
@@ -41,14 +41,14 @@ Welcome to AntNLP Seminar 2022 Spring. : )
| Week | Date | Speaker | Paper | Materials |
| ---- | ---- | ------- | ----- | --------- |
| 1 | 3.11 | 纪焘 | PRETRAINED LANGUAGE MODEL IN CONTINUAL LEARNING: A COMPARATIVE STUDY <br>[TACL2021]Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week1) |
-| 2 | 3.18 | 刘宇芳 | Dataset Distillat <br> [ICLR2021]DATASET CONDENSATION WITH GRADIENT MATCHING | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week2) |
-| 3 | 3.25 | 高怡 | | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week3) |
+| 2 | 3.18 | 刘宇芳 | Papers about Dataset Distillation | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week2) |
+| 3 | 3.25 | 高怡 | Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation <br> On Episodes, Prototypical Networks, and Few-Shot Learning <br> TASK2VEC: Task Embedding for Meta-Learning | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week3) |
| 4 | 4.1 | 杨晰 | [EMNLP19] Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks <br> [EMNLP20] Inducing Target-Specific Latent Structures for Aspect Sentiment Classification | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week4) |
| 5 | 4.8 | 杜威 |[EMNLP21]Zero-Shot Information Extraction as a Unified Text-to-Triple Translation | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week5) |
| 6 | 4.15 | 王志承 |[ICLR2018]Measuring the Intrinsic Dimension of Objective Landscapes <br>[ACL2021]Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week6) |
| 7 | 4.22 | 刘宇芳 | [ICML2020]Certified Data Removal from Machine Learning Models<br> [AAAI2022]Hard to Forget: Poisoning Attacks on Certified Machine Unlearning <br> [AISTATS2021]Approximate Data Deletion from Machine Learning Models | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week7) |
| 8 | 4.29 | 纪焘 | [ACL2022]Knowledge Neurons in Pretrained Transformers <br> [EMNLP21]MultiEURLEX – A multi-lingual and multi-label legal document classification dataset for zero-shot cross-lingual transfer <br>[ACL2022]Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week8) |
-| 9 | 5.6 | 高怡 | | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week9) |
+| 9 | 5.6 | 高怡 |PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models<br> Noisy Channel Language Model Prompting for Few-Shot Text Classification <br> PILED: An Identify-and-Localize Framework for Few-Shot Event Detection | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week9) |
| 10 | 5.13 | 杨晰 | [ACL18]Neural Open Information Extraction <br> [EMNLP20]Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction <br>[EMNLP20]OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction <br>[EMNLP21]Maximal Clique Based Non-Autoregressive Open Information Extraction | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week10) |
| 11 | 5.20 | 李鹏 | | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week11) |
| 12 | 5.27 | 杜威 |[EMNLP16] Creating a Large Benchmark for Open Information Extraction <br> [EMNLP20] Multi2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT | [Slides](https://github.com/AntNLP/seminar/tree/master/2022Spring_AntNLP/week12) |
Binary file added 2022Spring_AntNLP/week3/3-25.pptx
Binary file not shown.
Binary file added 2022Spring_AntNLP/week9/520.pptx
Binary file not shown.
