Neural Unification for Logic Reasoning over Language
Description
In this work we propose a transformer-based architecture, namely the Neural Unifier, together with an associated training procedure, for deriving conjectures from axioms expressed in natural language (English). The method achieves state-of-the-art generalisation results on the considered benchmark datasets, showing that by mimicking a well-known inference procedure, backward chaining, it is possible to answer deep queries even when the model is trained only on shallow ones. More information can be found in the full paper: https://aclanthology.org/2021.findings-emnlp.331.pdf
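The sketch below illustrates, in plain PyTorch, the backward-chaining idea described above: a fact-checking component answers shallow queries directly, while a unification component repeatedly rewrites the embedding of a deep query into that of a shallower one until the fact checker can decide. All module names, dimensions, thresholds, and the loop structure are illustrative assumptions, not the authors' exact implementation; see the paper for the actual architecture and training procedure.

```python
# Minimal, hypothetical sketch of the backward-chaining loop described above.
# Module names, sizes, and thresholds are assumptions, not the paper's implementation.
import torch
import torch.nn as nn


class FactCheckingUnit(nn.Module):
    """Scores whether a query embedding is directly supported (a shallow query)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        # Probability that the query is entailed by the axioms.
        return torch.sigmoid(self.scorer(query_emb))


class UnificationUnit(nn.Module):
    """Rewrites a deep query embedding into a shallower one (one backward-chaining step)."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.rewrite = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, query_emb: torch.Tensor) -> torch.Tensor:
        return self.rewrite(query_emb)


def answer_deep_query(query_emb: torch.Tensor,
                      fact_checker: FactCheckingUnit,
                      unifier: UnificationUnit,
                      max_steps: int = 5,
                      threshold: float = 0.9) -> torch.Tensor:
    """Iteratively reduce a deep query until the fact checker is confident,
    even though both units only ever see shallow queries at training time."""
    emb = query_emb
    for _ in range(max_steps):
        score = fact_checker(emb)
        if score.item() >= threshold or score.item() <= 1.0 - threshold:
            break  # confident True/False decision reached
        emb = unifier(emb)  # make the query one step shallower
    return fact_checker(emb)


if __name__ == "__main__":
    dim = 256
    fact_checker = FactCheckingUnit(dim)
    unifier = UnificationUnit(dim)
    # In the real system the embedding would come from a transformer encoder over
    # the natural-language query and axioms; here a random vector stands in for it.
    query_embedding = torch.randn(1, dim)
    print(answer_deep_query(query_embedding, fact_checker, unifier).item())
```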
Main Contributors
Gabriele Picco, Hoang Thanh Lam, Marco Luca Sbodio, Vanessa Lopez Garcia