Time: 3:00 pm, Friday
Venue: B914, Science Building; Online
Welcome to AntNLP Seminar 2022 Spring. : )
- Please choose recent papers (2020 or 2021) from top NLP/AI venues. An (incomplete) list:
- NLP: ACL, TACL, EMNLP, NAACL, EACL
- ML: ICML, NeurIPS, AISTATS, JMLR, ICLR
- AI: AAAI, IJCAI
- IR/DM: SIGIR, CIKM, WSDM, KDD, WWW
- While we are interested in a broad range of NLP/AI topics, the following are of particular importance:
- syntactic/semantic parsing
- entity/relation/event extraction
- distributed/distributional/compositional semantics
- MT/QA/Dialog
- (deep) learning algorithms
- Materials of broad interest are also welcome (e.g., tutorials from top conferences, high-quality surveys).
- Please fill your slots in the Agenda at least one week before your presentation.
  - Please format Paper fields as [venue+year]title (e.g., [ACL21]A Good Paper).
- Please upload your slides and add links to them in the Slides fields.
- Besides technical novelties, please give enough background knowledge in case people are unfamiliar with your topic.
- It would be great to keep your presentation within 60 minutes.
- Please read the abstract/introduction sections before the seminar.
Week | Date | Speaker | Paper | Materials |
---|---|---|---|---|
1 | 3.11 | 纪焘 | Pretrained Language Model in Continual Learning: A Comparative Study<br>[TACL2021]Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs | Slides |
2 | 3.18 | 刘宇芳 | Papers about Dataset Distillation | Slides |
3 | 3.25 | 高怡 | Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation<br>On Episodes, Prototypical Networks, and Few-Shot Learning<br>TASK2VEC: Task Embedding for Meta-Learning | Slides |
4 | 4.1 | 杨晰 | [EMNLP19]Aspect-based Sentiment Classification with Aspect-specific Graph Convolutional Networks<br>[EMNLP20]Inducing Target-Specific Latent Structures for Aspect Sentiment Classification | Slides |
5 | 4.8 | 杜威 | [EMNLP21]Zero-Shot Information Extraction as a Unified Text-to-Triple Translation | Slides |
6 | 4.15 | 王志承 | [ICLR2018]Measuring the Intrinsic Dimension of Objective Landscapes<br>[ACL2021]Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning | Slides |
7 | 4.22 | 刘宇芳 | [ICML2020]Certified Data Removal from Machine Learning Models<br>[AAAI2022]Hard to Forget: Poisoning Attacks on Certified Machine Unlearning<br>[AISTATS2021]Approximate Data Deletion from Machine Learning Models | Slides |
8 | 4.29 | 纪焘 | [ACL2022]Knowledge Neurons in Pretrained Transformers<br>[EMNLP21]MultiEURLEX – A Multi-lingual and Multi-label Legal Document Classification Dataset for Zero-shot Cross-lingual Transfer<br>[ACL2022]Lifelong Pretraining: Continually Adapting Language Models to Emerging Corpora | Slides |
9 | 5.6 | 高怡 | PERFECT: Prompt-free and Efficient Few-shot Learning with Language Models<br>Noisy Channel Language Model Prompting for Few-Shot Text Classification<br>PILED: An Identify-and-Localize Framework for Few-Shot Event Detection | Slides |
10 | 5.13 | 杨晰 | [ACL18]Neural Open Information Extraction<br>[EMNLP20]Systematic Comparison of Neural Architectures and Training Approaches for Open Information Extraction<br>[EMNLP20]OpenIE6: Iterative Grid Labeling and Coordination Analysis for Open Information Extraction<br>[EMNLP21]Maximal Clique Based Non-Autoregressive Open Information Extraction | Slides |
11 | 5.20 | 李鹏 | | Slides |
12 | 5.27 | 杜威 | [EMNLP16]Creating a Large Benchmark for Open Information Extraction<br>[EMNLP20]Multi2OIE: Multilingual Open Information Extraction Based on Multi-Head Attention with BERT | Slides |
13 | 6.3 | Break | | |
14 | 6.10 | 王志承 | [ACL2022]An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels<br>[ACL2022]Prototypical Verbalizer for Prompt-based Few-shot Tuning | Slides |
15 | 6.17 | Break | | |
16 | 6.24 | 汪杰, 李雨倩 | | Slides |
- How to fill the slots and upload your slides? Submit a pull request from your fork (see the sketch after this list):
  - creating-a-pull-request-from-a-fork/
- or you can contact:
- Peng Li, ruhao9805@gmail.com
- Tao Ji, taoji.cs@gmail.com
- Yang Wei, i@godweiyang.com
- If you have any questions, please feel free to contact us.
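If you are new to the fork-and-pull-request workflow mentioned above, here is a minimal sketch of the usual steps on GitHub. The repository path, branch name, and file names below are placeholders for illustration only; substitute your actual fork and files.

```bash
# Minimal sketch of the fork-and-PR workflow (all names are placeholders).

# 1. Clone your fork of the seminar repository.
git clone https://github.com/<your-username>/<seminar-repo>.git
cd <seminar-repo>

# 2. Create a branch for your agenda/slides update.
git checkout -b add-my-slot

# 3. Fill your slot in the Agenda table and add your slides, then commit.
git add README.md slides/my-talk.pdf
git commit -m "Add my seminar slot and slides"

# 4. Push the branch to your fork, then open a pull request on GitHub.
git push origin add-my-slot
```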