Introduction to Learning in Logic Tensor Networks
To train a Logic Tensor Network, one has to define three components (a minimal sketch follows this list):
a First-Order Logic knowledge base containing some logical axioms;
some learnable predicates, functions, and/or logical constants appearing in the axioms;
some data.
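For concreteness, here is a minimal sketch of what these three components can look like in LTNtorch. The two-feature data, the labelling rule, and the predicate architecture are arbitrary illustrative choices, not requirements of the library:

```python
import torch
import ltn

# component 3: some data; here, random 2-D points split into two
# illustrative groups by an arbitrary rule
data = torch.rand(100, 2)
labels = data[:, 0] > 0.5

# component 2: a learnable predicate P, grounded as a small neural network
# that maps each point to a truth value in [0, 1]
class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = torch.nn.Sequential(
            torch.nn.Linear(2, 16), torch.nn.ELU(),
            torch.nn.Linear(16, 1), torch.nn.Sigmoid())

    def forward(self, x):
        return self.layers(x).squeeze(-1)  # one truth value per point

P = ltn.Predicate(MLP())

# component 1: the operators needed to state the knowledge base, here the
# two axioms  forall x_pos: P(x_pos)  and  forall x_neg: not P(x_neg)
Not = ltn.Connective(ltn.fuzzy_ops.NotStandard())
Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(p=2), quantifier="f")
```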
Given these three components, the LTN workflow is the following (the training-loop sketch after this list walks through each phase):
grounding phase: data is used to ground (instantiate) the logical axioms included in the knowledge base;
forward phase: the truth values of the logical axioms are computed based on the given grounding (instantiation);
aggregation phase: the truth values of the axioms are aggregated to compute the overall satisfaction level of the knowledge base;
loss function computation: the loss measures the gap between the overall satisfaction level and complete truth (1); this gap is what training minimizes;
backward phase: the parameters of the learnable predicates, functions, and/or constants are updated so as to maximize the overall satisfaction level.
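These phases map directly onto a standard PyTorch training loop. The sketch below continues the example above (it assumes the P, data, labels, Not, and Forall objects defined there); the learning rate and number of epochs are arbitrary choices:

```python
# grounding phase: the data instantiates the logical variables
x_pos = ltn.Variable("x_pos", data[labels])
x_neg = ltn.Variable("x_neg", data[~labels])

sat_agg = ltn.fuzzy_ops.SatAgg()  # aggregator for the axioms' truth values
optimizer = torch.optim.Adam(P.parameters(), lr=0.001)

for epoch in range(200):
    optimizer.zero_grad()
    # forward phase: truth value of each axiom under the current grounding;
    # aggregation phase: overall satisfaction level of the knowledge base
    sat = sat_agg(
        Forall(x_pos, P(x_pos)),
        Forall(x_neg, Not(P(x_neg)))
    )
    # loss function computation: gap between the satisfaction level and 1
    loss = 1.0 - sat
    # backward phase: update P's parameters to increase the satisfaction
    loss.backward()
    optimizer.step()
```

Since the satisfaction level lies in [0, 1], minimizing 1 - sat is the same as maximizing the satisfaction of the knowledge base, which is exactly the objective described above.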
Training ends with a solution that maximally satisfies all the logical axioms in the knowledge base. This tutorial shows how to use the satisfaction level of a First-Order Logic knowledge base as the objective for learning a Logic Tensor Network.
In this documentation, you will find how to create a First-Order Logic knowledge base containing learnable predicates (ltn.core.Predicate), functions (ltn.core.Function), and/or constants (ltn.core.Constant) using LTNtorch.
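For instance, the function and constant constructors can be used as follows; the dimensions and the wrapped module are arbitrary illustrations (a learnable predicate is sketched earlier on this page):

```python
import torch
import ltn

# a learnable function: grounds a logical function symbol as a model that
# maps individuals to individuals (here, 2-D points to 2-D points)
f = ltn.Function(model=torch.nn.Linear(2, 2))

# a learnable constant: grounds a logical constant as a 2-D embedding whose
# value is updated during training thanks to trainable=True
c = ltn.Constant(torch.rand(2), trainable=True)
```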