Factuality

Data

v1 (tar.gz), v2 (tar.gz)

About

A central function of natural language is to convey information about the properties of events. Perhaps the most fundamental of these properties is factuality: whether an event happened or not.

In this line of work, we develop a factuality annotation that incorporates a notion of confidence. This allows us to handle a wide variety of cases where the factuality of an event is unclear.
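To make the idea concrete, here is a minimal sketch of what confidence-aware factuality annotations might look like and how one might filter them. The record layout, field names, and rating scale below are illustrative assumptions, not the dataset's documented schema.

```python
# Hypothetical factuality annotations: each event predicate gets a signed
# factuality rating (positive = happened, negative = did not happen) plus
# an annotator confidence. Field names and scales are assumptions for
# illustration only.
annotations = [
    {"sentence": "Jo left.",        "predicate": "left",  "factuality":  3.0, "confidence": 4},
    {"sentence": "Jo may leave.",   "predicate": "leave", "factuality":  0.0, "confidence": 2},
    {"sentence": "Jo didn't leave.", "predicate": "leave", "factuality": -3.0, "confidence": 4},
]

# Keep only events whose factuality is clear: high-confidence ratings
# that are far from the neutral midpoint.
clear = [a for a in annotations
         if a["confidence"] >= 3 and abs(a["factuality"]) > 1]

for a in clear:
    status = "happened" if a["factuality"] > 0 else "did not happen"
    print(f"{a['predicate']!r} in {a['sentence']!r}: {status}")
```

The hedge/uncertainty cases (like "may leave") get a near-neutral rating with low confidence, which is exactly what a categorical happened/did-not-happen label cannot express.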

For a detailed description of the datasets, the item construction and collection methods, and models of these data, please see the following papers:

White, A.S., D. Reisinger, K. Sakaguchi, T. Vieira, S. Zhang, R. Rudinger, K. Rawlins, & B. Van Durme. 2016. Universal Decompositional Semantics on Universal Dependencies. Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1713–1723, Austin, Texas, November 1-5, 2016.

Rudinger, R., White, A.S., & B. Van Durme. 2018. Neural models of factuality. Proceedings of NAACL-HLT 2018, pages 731–744. New Orleans, Louisiana, June 1-6, 2018.

White, A. S., R. Rudinger, K. Rawlins, & B. Van Durme. 2018. Lexicosyntactic Inference in Neural Models. To appear in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31-November 4, 2018.

If you make use of these datasets in a presentation or publication, we ask that you please cite the relevant papers above.

Researchers

Benjamin Van Durme
Kyle Rawlins
Aaron Steven White
Rachel Rudinger