Add XLNet-GCN as one of the benchmarks in NER (#622)
Hi Sebastian,
This PR adds my work on improving NER by combining contextual information from XLNet with global information from a GCN, which boosts NER performance compared to using either source of information on its own.
Thanks in advance.
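For readers unfamiliar with the idea described above, here is a minimal PyTorch sketch of one way contextual and global features could be combined for token tagging: contextual token embeddings (e.g. produced by XLNet) are concatenated with features from a single graph-convolution layer over a token-level adjacency matrix, then passed to a linear tag classifier. The `ContextualGlobalNER` class, layer sizes, and identity adjacency are illustrative assumptions, not the implementation in the linked repository.

```python
# Minimal sketch (assumptions, not the repository's actual code): combine
# contextual token embeddings (e.g. from XLNet) with "global" features from
# one graph-convolution layer, then classify each token into a NER tag.
import torch
import torch.nn as nn


class SimpleGCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_norm @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (batch, seq, seq) row-normalized adjacency over tokens
        return torch.relu(adj @ self.linear(h))


class ContextualGlobalNER(nn.Module):
    """Concatenate contextual and GCN features, then predict a tag per token."""

    def __init__(self, context_dim: int = 768, gcn_dim: int = 128, num_tags: int = 9):
        super().__init__()
        self.gcn = SimpleGCNLayer(context_dim, gcn_dim)
        self.classifier = nn.Linear(context_dim + gcn_dim, num_tags)

    def forward(self, context_emb: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        global_feats = self.gcn(context_emb, adj)          # (batch, seq, gcn_dim)
        combined = torch.cat([context_emb, global_feats], dim=-1)
        return self.classifier(combined)                   # (batch, seq, num_tags)


if __name__ == "__main__":
    batch, seq, dim = 2, 10, 768
    context_emb = torch.randn(batch, seq, dim)       # stand-in for XLNet outputs
    adj = torch.eye(seq).expand(batch, seq, seq)     # stand-in adjacency (identity)
    logits = ContextualGlobalNER()(context_emb, adj)
    print(logits.shape)                              # torch.Size([2, 10, 9])
```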
honghanhh authored Jun 23, 2024
1 parent 7040ae7 commit 29dc695
Showing 1 changed file with 1 addition and 0 deletions.
1 change: 1 addition & 0 deletions english/named_entity_recognition.md
@@ -20,6 +20,7 @@ corpus tagged with four different entity types (PER, LOC, ORG, MISC). Models are
| ACE + document-context (Wang et al., 2021) | 94.6 | [Automated Concatenation of Embeddings for Structured Prediction](https://arxiv.org/pdf/2010.05006.pdf) | [Official](https://github.com/Alibaba-NLP/ACE)|
| LUKE (Yamada et al., 2020) | 94.3 | [LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention](https://www.aclweb.org/anthology/2020.emnlp-main.523/) | [Official](https://github.com/studio-ousia/luke) |
| CL-KL (Wang et al., 2021) | 93.85 | [Improving Named Entity Recognition by External Context Retrieving and Cooperative Learning](https://arxiv.org/abs/2105.03654) | [Official](https://github.com/Alibaba-NLP/CLNER)|
| XLNet-GCN (Tran et al., 2021) | 93.82 | [Named Entity Recognition Architecture Combining Contextual and Global Features](https://link.springer.com/chapter/10.1007/978-3-030-91669-5_21) | [Official](https://github.com/honghanhh/ner-combining-contextual-and-global-features)|
| InferNER (Moemmur et al., 2021) | 93.76 | [InferNER: an attentive model leveraging the sentence-level information for Named Entity Recognition in Microblogs](https://journals.flvc.org/FLAIRS/article/view/128538) | |
| ACE (Wang et al., 2021) | 93.6 | [Automated Concatenation of Embeddings for Structured Prediction](https://arxiv.org/pdf/2010.05006.pdf) | [Official](https://github.com/Alibaba-NLP/ACE)|
| CNN Large + fine-tune (Baevski et al., 2019) | 93.5 | [Cloze-driven Pretraining of Self-attention Networks](https://arxiv.org/pdf/1903.07785.pdf) | |
