A central function of natural language is to convey information about the properties of events. Perhaps the most fundamental of these properties is factuality: whether an event happened or not.
In this line of work, we develop a factuality annotation scheme that incorporates a notion of confidence. This allows us to handle a wide variety of cases where the factuality of an event is unclear.
Data
| Train | Dev | Test | Download | Citation |
|---|---|---|---|---|
| 5668 | 652 | 600 | v1 (tar.gz) | White et al. 2016 |
| 22279 | 2660 | 2561 | v2 (tar.gz) | Rudinger et al. 2018 |
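Once a release is downloaded and unpacked, the annotations can be read with standard TSV tooling. The sketch below is a minimal loader; the column names (`sentence_id`, `predicate`, `factuality`, `confidence`) and the inline example rows are hypothetical stand-ins, so check the documentation shipped inside the archive for the actual schema before adapting it.

```python
import csv
import io

# Hypothetical example rows mimicking a TSV annotation file.
# The real column names and value scales may differ; see the
# README inside the downloaded archive for the actual format.
EXAMPLE_TSV = """sentence_id\tpredicate\tfactuality\tconfidence
en-ud-train-0001\thappened\t3.0\t4
en-ud-train-0002\tmight\t-1.5\t2
"""

def load_annotations(tsv_text):
    """Parse factuality annotations from TSV text into a list of dicts,
    converting the numeric fields from strings."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    rows = []
    for row in reader:
        row["factuality"] = float(row["factuality"])
        row["confidence"] = int(row["confidence"])
        rows.append(row)
    return rows

rows = load_annotations(EXAMPLE_TSV)
```

To load a real split, replace `EXAMPLE_TSV` with the contents of the unpacked file (e.g. `load_annotations(open(path).read())`), adjusting the field names to match the release.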
References
White, Aaron Steven, Dee Ann Reisinger, Keisuke Sakaguchi, Tim Vieira, Sheng Zhang, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2016. Universal Decompositional Semantics on Universal Dependencies. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, 1713–1723. Austin, Texas: Association for Computational Linguistics.
Rudinger, Rachel, Aaron Steven White, and Benjamin Van Durme. 2018. Neural Models of Factuality. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), 731–744. New Orleans, Louisiana: Association for Computational Linguistics.
White, Aaron Steven, Rachel Rudinger, Kyle Rawlins, and Benjamin Van Durme. 2018. Lexicosyntactic Inference in Neural Models. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 4717–4724. Brussels, Belgium: Association for Computational Linguistics.
Researchers
Rachel Rudinger
Aaron Steven White
Benjamin Van Durme
Kyle Rawlins