We introduce a novel data generation method for contradiction detection that leverages the generative power of large language models as well as linguistic rules. Our vision is to provide a condensed corpus of prototypical contradictions, allowing for in-depth linguistic analysis as well as efficient language model fine-tuning. To this end, we instruct the generative models to create contradictory statements with respect to descriptions of specific contradiction types. In addition, the model is instructed to come up with completely new contradiction typologies. As an auxiliary approach, we use linguistic rules to construct simple contradictions such as those arising from negation, antonymy, and numeric mismatch. We find that our methods yield promising results in terms of coherence and variety of the data. Further studies, as well as manual refinement, are necessary to make use of this data in a machine learning setup.
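The rule-based branch of the method (negation, antonymy, numeric mismatch) can be illustrated with a minimal sketch. The word lists, function names, and rules below are illustrative assumptions, not the authors' actual implementation or resources:

```python
import re

# Hypothetical antonym lexicon; the paper's actual resources are not specified here.
ANTONYMS = {"large": "small", "open": "closed", "rises": "falls"}

def negate(sentence: str) -> str:
    """Negation rule: insert 'not' after a simple copula ('is'/'are')."""
    return re.sub(r"\b(is|are)\b", r"\1 not", sentence, count=1)

def antonym_swap(sentence: str) -> str:
    """Antonymy rule: replace the first word found in the lexicon with its antonym."""
    for word, opposite in ANTONYMS.items():
        if re.search(rf"\b{word}\b", sentence):
            return re.sub(rf"\b{word}\b", opposite, sentence, count=1)
    return sentence

def numeric_mismatch(sentence: str) -> str:
    """Numeric-mismatch rule: perturb the first number so the statements disagree."""
    return re.sub(r"\b(\d+)\b", lambda m: str(int(m.group(1)) + 1), sentence, count=1)

premise = "The market is open and rose by 120 points."
print(negate(premise))            # negation-based contradiction
print(antonym_swap(premise))      # antonym-based contradiction
print(numeric_mismatch(premise))  # numeric-mismatch contradiction
```

Each rule maps a premise to a hypothesis that contradicts it, yielding (premise, hypothesis) pairs labeled as contradictions without any model in the loop.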

 

Citation:

M. Pielka, S. Schmidt, and R. Sifa, "Generating Prototypes for Contradiction Detection Using Large Language Models and Linguistic Rules," in Proc. 2023 IEEE International Conference on Big Data (BigData), Dec. 2023, doi: 10.1109/BigData59044.2023.10386499.

 

More Information:

DOI: https://doi.org/10.1109/BigData59044.2023.10386499