EffectChainer, 11/14/2022

Few-Shot Class-Incremental Learning for Named Entity Recognition
Rui Wang, Tong Yu, Handong Zhao, Sungchul Kim, Subrata Mitra, Ruiyi Zhang, and Ricardo Henao. 2022. Few-Shot Class-Incremental Learning for Named Entity Recognition. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 571–582, Dublin, Ireland, May 2022. Association for Computational Linguistics.
Anthology ID: 2022.acl-long.43 · DOI: 10.18653/v1/2022.acl-long.43 · Bibkey: wang-etal-2022-shot

Abstract: Previous work on class-incremental learning for Named Entity Recognition (NER) relies on the assumption that abundant labeled data is available for training the new classes. In this work, we study a more challenging but practical problem: few-shot class-incremental learning for NER, where an NER model is trained with only a few labeled samples of the new classes, without forgetting knowledge of the old ones. To alleviate catastrophic forgetting in this setting, we reconstruct synthetic training data of the old classes using the trained NER model, augmenting the training of the new classes. We further develop a framework that distills from the existing model with both the synthetic data and real data from the current training set. Experimental results show that our approach achieves significant improvements over existing baselines.
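The abstract describes distilling from the previously trained model over both synthetic and real data while fitting the few labeled samples of the new classes. Below is a minimal sketch of such a combined objective, not the paper's actual implementation: the function names, the temperature, and the mixing weight `lam` are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def distill_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = softmax(teacher_logits / temperature)
    s = softmax(student_logits / temperature)
    return float(np.mean(np.sum(t * (np.log(t + 1e-12) - np.log(s + 1e-12)), axis=-1)))

def ce_loss(logits, labels):
    """Standard cross-entropy on the gold labels of the new classes."""
    p = softmax(logits)
    return float(np.mean(-np.log(p[np.arange(len(labels)), labels] + 1e-12)))

def total_loss(real_logits, real_labels, real_teacher,
               syn_logits, syn_teacher, lam=1.0, temperature=2.0):
    # Supervised loss on the few real labeled samples of the new classes,
    # plus distillation from the old model on both real and synthetic data
    # to retain knowledge of the old classes.
    ce = ce_loss(real_logits, real_labels)
    kd = (distill_loss(real_teacher, real_logits, temperature)
          + distill_loss(syn_teacher, syn_logits, temperature))
    return ce + lam * kd
```

In this toy form the distillation term vanishes when the student matches the teacher exactly, so `lam` trades off plasticity on the new classes against stability on the old ones.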