ltn.core

Members

ltn.core.LTNObject(value, var_labels)

Class representing a generic LTN object.

ltn.core.Constant(value[, trainable])

Class representing an LTN constant.

ltn.core.Variable(var_label, individuals[, ...])

Class representing an LTN variable.

ltn.core.Predicate([model, func])

Class representing an LTN predicate.

ltn.core.Function([model, func])

Class representing LTN functions.

ltn.core.diag(*vars)

Sets the given LTN variables for diagonal quantification.

ltn.core.undiag(*vars)

Resets the LTN broadcasting for the given LTN variables.

ltn.core.Connective(connective_op)

Class representing an LTN connective.

ltn.core.Quantifier(agg_op, quantifier)

Class representing an LTN quantifier.

The ltn.core module contains the main functionalities of LTNtorch. In particular, it contains the definitions of constants, variables, predicates, functions, connectives, and quantifiers.

class ltn.core.LTNObject(value, var_labels)

Bases: object

Class representing a generic LTN object.

In LTNtorch, LTN objects are constants, variables, and outputs of predicates, formulas, functions, connectives, and quantifiers.

Parameters
value : torch.Tensor

The grounding (value) of the LTN object.

var_labels : list of str

The labels of the free variables contained in the LTN object.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

Notes

  • in LTNtorch, the groundings of the LTN objects (symbols) are represented using PyTorch tensors, namely torch.Tensor instances;

  • LTNObject is used internally by LTNtorch. Users should not create LTNObject instances on their own, unless strictly necessary.

Attributes
value : torch.Tensor

See value parameter.

free_vars : list of str

See var_labels parameter.

shape()

Returns the shape of the grounding of the LTN object.

Returns
torch.Size

The shape of the grounding of the LTN object.
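
Although LTNObject instances are normally created internally by LTNtorch, a minimal sketch (with toy data, not part of the typical workflow) may help clarify that the class simply wraps a grounding tensor together with the labels of its free variables:

>>> import ltn
>>> import torch
>>> obj = ltn.core.LTNObject(torch.tensor([[1., 2., 3.],
...                                        [4., 5., 6.]]), ['x'])
>>> print(obj.value)
tensor([[1., 2., 3.],
        [4., 5., 6.]])
>>> print(obj.free_vars)
['x']
>>> print(obj.shape())
torch.Size([2, 3])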

class ltn.core.Constant(value, trainable=False)

Bases: ltn.core.LTNObject

Class representing an LTN constant.

An LTN constant denotes an individual grounded as a tensor in the Real field. The individual can be pre-defined (fixed data point) or learnable (embedding).

Parameters
value : torch.Tensor

The grounding of the LTN constant. It can be a tensor of any order.

trainable : bool, default=False

Flag indicating whether the LTN constant is trainable (embedding) or not.

Notes

  • LTN constants are LTN objects. ltn.core.Constant is a subclass of ltn.core.LTNObject;

  • the attribute free_vars for LTN constants is an empty list, since a constant does not have variables by definition;

  • if parameter trainable is set to True, the LTN constant becomes trainable, namely an embedding;

  • if parameter trainable is set to True, then the value attribute of the LTN constant will be used as an initialization for the embedding of the constant.

Examples

Non-trainable constant

>>> import ltn
>>> import torch
>>> c = ltn.Constant(torch.tensor([3.4, 5.4, 4.3]))
>>> print(c)
Constant(value=tensor([3.4000, 5.4000, 4.3000]), free_vars=[])
>>> print(c.value)
tensor([3.4000, 5.4000, 4.3000])
>>> print(c.free_vars)
[]
>>> print(c.shape())
torch.Size([3])

Trainable constant

>>> t_c = ltn.Constant(torch.tensor([[3.4, 2.3, 5.6],
...                                  [6.7, 5.6, 4.3]]), trainable=True)
>>> print(t_c)
Constant(value=tensor([[3.4000, 2.3000, 5.6000],
        [6.7000, 5.6000, 4.3000]], requires_grad=True), free_vars=[])
>>> print(t_c.value)
tensor([[3.4000, 2.3000, 5.6000],
        [6.7000, 5.6000, 4.3000]], requires_grad=True)
>>> print(t_c.free_vars)
[]
>>> print(t_c.shape())
torch.Size([2, 3])
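
As an additional, hedged illustration (the loss below is a toy expression, not an LTNtorch API), gradients flow into the value of a trainable constant; this is what allows it to be learned as an embedding:

>>> grad_c = ltn.Constant(torch.tensor([1.0, 2.0]), trainable=True)
>>> toy_loss = torch.sum(grad_c.value ** 2)  # toy loss built directly on the constant's value
>>> print(torch.autograd.grad(toy_loss, grad_c.value))
(tensor([2., 4.]),)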
class ltn.core.Variable(var_label, individuals, add_batch_dim=True)

Bases: ltn.core.LTNObject

Class representing an LTN variable.

An LTN variable denotes a sequence of individuals. It is grounded as a sequence of tensors (groundings of individuals) in the real field.

Parameters
var_label : str

Name of the variable.

individuals : torch.Tensor

Sequence of individuals (tensors) that becomes the grounding of the LTN variable.

add_batch_dim : bool, default=True

Flag indicating whether a batch dimension (first dimension) has to be added to the value of the variable or not. If True, a dimension will be added only if the value attribute of the LTN variable has one single dimension. In all the other cases, the first dimension will be considered as the batch dimension, so no dimension will be added.

Raises
TypeError

Raises when the types of the input parameters are not correct.

ValueError

Raises when the value of the var_label parameter is not correct.

Notes

  • LTN variables are LTN objects. ltn.core.Variable is a subclass of ltn.core.LTNObject;

  • the first dimension of an LTN variable is associated with the number of individuals in the variable, while the other dimensions are associated with the features of the individuals;

  • setting add_batch_dim to False is useful, for instance, when an LTN variable is used to denote a sequence of indexes (for example indexes for retrieving values in tensors);

  • variable labels starting with ‘diag_’ are reserved for diagonal quantification (ltn.core.diag()).

Examples

add_batch_dim=True has no effect on the variable since its value has more than one dimension, namely there is already a batch dimension.

>>> import ltn
>>> import torch
>>> x = ltn.Variable('x', torch.tensor([[3.4, 4.5],
...                                     [6.7, 9.6]]), add_batch_dim=True)
>>> print(x)
Variable(value=tensor([[3.4000, 4.5000],
        [6.7000, 9.6000]]), free_vars=['x'])
>>> print(x.value)
tensor([[3.4000, 4.5000],
        [6.7000, 9.6000]])
>>> print(x.free_vars)
['x']
>>> print(x.shape())
torch.Size([2, 2])

add_batch_dim=True adds a batch dimension to the value of the variable since it has only one dimension.

>>> y = ltn.Variable('y', torch.tensor([3.4, 4.5, 8.9]), add_batch_dim=True)
>>> print(y)
Variable(value=tensor([[3.4000],
        [4.5000],
        [8.9000]]), free_vars=['y'])
>>> print(y.value)
tensor([[3.4000],
        [4.5000],
        [8.9000]])
>>> print(y.free_vars)
['y']
>>> print(y.shape())
torch.Size([3, 1])

add_batch_dim=False tells LTNtorch not to add a batch dimension to the value of the variable. This is useful when a variable contains a sequence of indexes.

>>> z = ltn.Variable('z', torch.tensor([1, 2, 3]), add_batch_dim=False)
>>> print(z)
Variable(value=tensor([1, 2, 3]), free_vars=['z'])
>>> print(z.value)
tensor([1, 2, 3])
>>> print(z.free_vars)
['z']
>>> print(z.shape())
torch.Size([3])
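
As a small follow-up (plain PyTorch indexing, not an LTNtorch feature), a one-dimensional index variable like z can be used directly to gather values from another tensor:

>>> data = torch.tensor([10., 20., 30., 40.])
>>> print(data[z.value])  # z.value contains the indexes [1, 2, 3]
tensor([20., 30., 40.])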
class ltn.core.Predicate(model=None, func=None)

Bases: torch.nn.modules.module.Module

Class representing an LTN predicate.

An LTN predicate is grounded as a mathematical function (either pre-defined or learnable) that maps from some n-ary domain of individuals to a real number in [0,1] (fuzzy), which can be interpreted as a truth value.

In LTNtorch, the inputs of a predicate are automatically broadcasted before the computation of the predicate, if necessary. Moreover, the output is organized in a tensor where each dimension is related to one variable given in input. See LTN broadcasting for more information.

Parameters
model : torch.nn.Module, default=None

PyTorch model that becomes the grounding of the LTN predicate.

func : function, default=None

Function that becomes the grounding of the LTN predicate.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

ValueError

Raises when the values of the input parameters are incorrect.

Notes

  • the output of an LTN predicate is always an LTN object (ltn.core.LTNObject);

  • LTNtorch allows defining a predicate using either a trainable model or a Python function, not both;

  • defining a predicate using a Python function is suggested only for simple and non-learnable mathematical operations;

  • examples of LTN predicates could be similarity measures, classifiers, etc;

  • the output of an LTN predicate must be always in the range [0., 1.]. Outputs outside of this range are not allowed;

  • evaluating a predicate with one variable of \(n\) individuals yields \(n\) output values, where the \(i_{th}\) output value corresponds to the predicate calculated with the \(i_{th}\) individual;

  • evaluating a predicate with \(k\) variables \((x_1, \dots, x_k)\) with respectively \(n_1, \dots, n_k\) individuals each, yields a result with \(n_1 * \dots * n_k\) values. The result is organized in a tensor where the first \(k\) dimensions can be indexed to retrieve the outcome(s) that correspond to each variable;

  • the attribute free_vars of the LTNObject output by the predicate tells which dimension corresponds to which variable in the value of the LTNObject. See LTN broadcasting for more information;

  • to disable the LTN broadcasting, see ltn.core.diag().

Examples

Unary predicate defined using a torch.nn.Sequential.

>>> import ltn
>>> import torch
>>> predicate_model = torch.nn.Sequential(
...                         torch.nn.Linear(4, 2),
...                         torch.nn.ELU(),
...                         torch.nn.Linear(2, 1),
...                         torch.nn.Sigmoid()
...                   )
>>> p = ltn.Predicate(model=predicate_model)
>>> print(p)
Predicate(model=Sequential(
  (0): Linear(in_features=4, out_features=2, bias=True)
  (1): ELU(alpha=1.0)
  (2): Linear(in_features=2, out_features=1, bias=True)
  (3): Sigmoid()
))

Unary predicate defined using a function. Note that torch.sum is performed on dim=1. This is because in LTNtorch the first dimension (dim=0) is related to the batch dimension, while other dimensions are related to the features of the individuals. Notice that the output of the print is Predicate(model=LambdaModel()). This indicates that the LTN predicate has been defined using a function, through the func parameter of the constructor.

>>> p_f = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(torch.sum(x, dim=1)))
>>> print(p_f)
Predicate(model=LambdaModel())

Binary predicate defined using a torch.nn.Module. Note the call to torch.cat to merge the two inputs of the binary predicate.

>>> class PredicateModel(torch.nn.Module):
...     def __init__(self):
...         super(PredicateModel, self).__init__()
...         self.elu = torch.nn.ELU()
...         self.sigmoid = torch.nn.Sigmoid()
...         self.dense1 = torch.nn.Linear(4, 5)
...         self.dense2 = torch.nn.Linear(5, 1)
...
...     def forward(self, x, y):
...         x = torch.cat([x, y], dim=1)
...         x = self.elu(self.dense1(x))
...         out = self.sigmoid(self.dense2(x))
...         return out
...
>>> predicate_model = PredicateModel()
>>> b_p = ltn.Predicate(model=predicate_model)
>>> print(b_p)
Predicate(model=PredicateModel(
  (elu): ELU(alpha=1.0)
  (sigmoid): Sigmoid()
  (dense1): Linear(in_features=4, out_features=5, bias=True)
  (dense2): Linear(in_features=5, out_features=1, bias=True)
))

Binary predicate defined using a function. Note the call to torch.cat to merge the two inputs of the binary predicate.

>>> b_p_f = ltn.Predicate(func=lambda x, y: torch.nn.Sigmoid()(
...                                             torch.sum(torch.cat([x, y], dim=1), dim=1)
...                                         ))
>>> print(b_p_f)
Predicate(model=LambdaModel())

Evaluation of a unary predicate on a constant. Note that:

  • the predicate returns a ltn.core.LTNObject instance;

  • since a constant has been given, the LTNObject in output does not have free variables;

  • the shape of the LTNObject in output is empty since the predicate has been evaluated on a constant, namely on one single individual;

  • the attribute value of the LTNObject in output contains the result of the evaluation of the predicate;

  • the value is in the range [0., 1.] since it has to be interpreted as a truth value. This is assured thanks to the sigmoid function in the definition of the predicate.

>>> c = ltn.Constant(torch.tensor([0.5, 0.01, 0.34, 0.001]))
>>> out = p_f(c)
>>> print(type(out))
<class 'ltn.core.LTNObject'>
>>> print(out)
LTNObject(value=tensor(0.7008), free_vars=[])
>>> print(out.value)
tensor(0.7008)
>>> print(out.free_vars)
[]
>>> print(out.shape())
torch.Size([])

Evaluation of a unary predicate on a variable. Note that:

  • since a variable has been given, the LTNObject in output has one free variable;

  • the shape of the LTNObject in output is 2 since the predicate has been evaluated on a variable with two individuals.

>>> v = ltn.Variable('v', torch.tensor([[0.4, 0.3],
...                                     [0.32, 0.043]]))
>>> out = p_f(v)
>>> print(out)
LTNObject(value=tensor([0.6682, 0.5898]), free_vars=['v'])
>>> print(out.value)
tensor([0.6682, 0.5898])
>>> print(out.free_vars)
['v']
>>> print(out.shape())
torch.Size([2])

Evaluation of a binary predicate on a variable and a constant. Note that:

  • like in the previous example, the LTNObject in output has just one free variable, since only one variable has been given to the predicate;

  • the shape of the LTNObject in output is 2 since the predicate has been evaluated on a variable with two individuals. The constant does not add dimensions to the output.

>>> v = ltn.Variable('v', torch.tensor([[0.4, 0.3],
...                                     [0.32, 0.043]]))
>>> c = ltn.Constant(torch.tensor([0.4, 0.04, 0.23, 0.43]))
>>> out = b_p_f(v, c)
>>> print(out)
LTNObject(value=tensor([0.8581, 0.8120]), free_vars=['v'])
>>> print(out.value)
tensor([0.8581, 0.8120])
>>> print(out.free_vars)
['v']
>>> print(out.shape())
torch.Size([2])

Evaluation of a binary predicate on two variables. Note that:

  • since two variables have been given, the LTNObject in output has two free variables;

  • the shape of the LTNObject in output is (2, 3) since the predicate has been evaluated on a variable with two individuals and a variable with three individuals;

  • the first dimension is dedicated to variable x, which is also the first one appearing in free_vars, while the second dimension is dedicated to variable y, which is the second one appearing in free_vars;

  • it is possible to index the value attribute to retrieve specific results of the predicate. For example, at position (1, 2) there is the evaluation of the predicate on the second individual of x and the third individual of y.

>>> x = ltn.Variable('x', torch.tensor([[0.4, 0.3],
...                                     [0.32, 0.043]]))
>>> y = ltn.Variable('y', torch.tensor([[0.4, 0.04, 0.23],
...                                     [0.2, 0.04, 0.32],
...                                     [0.06, 0.08, 0.3]]))
>>> out = b_p_f(x, y)
>>> print(out)
LTNObject(value=tensor([[0.7974, 0.7790, 0.7577],
        [0.7375, 0.7157, 0.6906]]), free_vars=['x', 'y'])
>>> print(out.value)
tensor([[0.7974, 0.7790, 0.7577],
        [0.7375, 0.7157, 0.6906]])
>>> print(out.free_vars)
['x', 'y']
>>> print(out.shape())
torch.Size([2, 3])
>>> print(out.value[1, 2])
tensor(0.6906)
Attributes
model : torch.nn.Module or function

The grounding of the LTN predicate.

forward(*inputs, **kwargs)

It computes the output of the predicate given some LTN objects in input.

Before computing the predicate, it performs the LTN broadcasting of the inputs.

Parameters
inputs : tuple of ltn.core.LTNObject

Tuple of LTN objects for which the predicate has to be computed.

Returns
ltn.core.LTNObject

An LTNObject whose value attribute contains the truth values representing the result of the predicate, while free_vars attribute contains the labels of the free variables contained in the result.

Raises
TypeError

Raises when the types of the inputs are incorrect.

ValueError

Raises when the values of the output are not in the range [0., 1.].
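
As a hedged sketch of this range check (the predicate below is an intentionally wrong toy definition), a predicate whose grounding returns values outside [0., 1.] cannot be evaluated:

>>> import ltn
>>> import torch
>>> bad_p = ltn.Predicate(func=lambda x: torch.sum(x, dim=1))  # no sigmoid: output not in [0., 1.]
>>> c = ltn.Constant(torch.tensor([1.0, 2.0, 3.0, 4.0]))
>>> # evaluating bad_p(c) raises a ValueError, since its output (10.) is not a truth value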

class ltn.core.Function(model=None, func=None)

Bases: torch.nn.modules.module.Module

Class representing LTN functions.

An LTN function is grounded as a mathematical function (either pre-defined or learnable) that maps from some n-ary domain of individuals to a tensor (individual) in the Real field.

In LTNtorch, the inputs of a function are automatically broadcasted before the computation of the function, if necessary. Moreover, the output is organized in a tensor where the first \(k\) dimensions are related with the \(k\) variables given in input, while the last dimensions are related with the features of the individual in output. See LTN broadcasting for more information.

Parameters
model : torch.nn.Module, default=None

PyTorch model that becomes the grounding of the LTN function.

func : function, default=None

Function that becomes the grounding of the LTN function.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

ValueError

Raises when the values of the input parameters are incorrect.

Notes

  • the output of an LTN function is always an LTN object (ltn.core.LTNObject);

  • LTNtorch allows defining a function using either a trainable model or a Python function, not both;

  • defining an LTN function using a Python function is suggested only for simple and non-learnable mathematical operations;

  • examples of LTN functions could be distance functions, regressors, etc;

  • unlike LTN predicates, the output of an LTN function is not constrained to the range [0., 1.];

  • evaluating a function with one variable of \(n\) individuals yields \(n\) output values, where the \(i_{th}\) output value corresponds to the function calculated with the \(i_{th}\) individual;

  • evaluating a function with \(k\) variables \((x_1, \dots, x_k)\) with respectively \(n_1, \dots, n_k\) individuals each, yields a result with \(n_1 * \dots * n_k\) values. The result is organized in a tensor where the first \(k\) dimensions can be indexed to retrieve the outcome(s) that correspond to each variable;

  • the attribute free_vars of the LTNObject output by the function tells which dimension corresponds to which variable in the value of the LTNObject. See LTN broadcasting for more information;

  • to disable the LTN broadcasting, see ltn.core.diag().

Examples

Unary function defined using a torch.nn.Sequential.

>>> import ltn
>>> import torch
>>> function_model = torch.nn.Sequential(
...                         torch.nn.Linear(4, 3),
...                         torch.nn.ELU(),
...                         torch.nn.Linear(3, 2)
...                   )
>>> f = ltn.Function(model=function_model)
>>> print(f)
Function(model=Sequential(
  (0): Linear(in_features=4, out_features=3, bias=True)
  (1): ELU(alpha=1.0)
  (2): Linear(in_features=3, out_features=2, bias=True)
))

Unary function defined using a function. Note that torch.sum is performed on dim=1. This is because in LTNtorch the first dimension (dim=0) is related to the batch dimension, while other dimensions are related to the features of the individuals. Notice that the output of the print is Function(model=LambdaModel()). This indicates that the LTN function has been defined using a function, through the func parameter of the constructor.

>>> f_f = ltn.Function(func=lambda x: torch.repeat_interleave(
...                                              torch.sum(x, dim=1, keepdim=True), 2, dim=1)
...                                         )
>>> print(f_f)
Function(model=LambdaModel())

Binary function defined using a torch.nn.Module. Note the call to torch.cat to merge the two inputs of the binary function.

>>> class FunctionModel(torch.nn.Module):
...     def __init__(self):
...         super(FunctionModel, self).__init__()
...         self.elu = torch.nn.ELU()
...         self.dense1 = torch.nn.Linear(4, 5)
...         self.dense2 = torch.nn.Linear(5, 2)
...
...     def forward(self, x, y):
...         x = torch.cat([x, y], dim=1)
...         x = self.elu(self.dense1(x))
...         out = self.dense2(x)
...         return out
...
>>> function_model = FunctionModel()
>>> b_f = ltn.Function(model=function_model)
>>> print(b_f)
Function(model=FunctionModel(
  (elu): ELU(alpha=1.0)
  (dense1): Linear(in_features=4, out_features=5, bias=True)
  (dense2): Linear(in_features=5, out_features=2, bias=True)
))

Binary function defined using a function. Note the call to torch.cat to merge the two inputs of the binary function.

>>> b_f_f = ltn.Function(func=lambda x, y:
...                                 torch.repeat_interleave(
...                                     torch.sum(torch.cat([x, y], dim=1), dim=1, keepdim=True), 2,
...                                     dim=1))
>>> print(b_f_f)
Function(model=LambdaModel())

Evaluation of a unary function on a constant. Note that:

  • the function returns a ltn.core.LTNObject instance;

  • since a constant has been given, the LTNObject in output does not have free variables;

  • the shape of the LTNObject in output is (2) since the function has been evaluated on a constant, namely on one single individual, and returns individuals in \(\mathbb{R}^2\);

  • the attribute value of the LTNObject in output contains the result of the evaluation of the function.

>>> c = ltn.Constant(torch.tensor([0.5, 0.01, 0.34, 0.001]))
>>> out = f_f(c)
>>> print(type(out))
<class 'ltn.core.LTNObject'>
>>> print(out)
LTNObject(value=tensor([0.8510, 0.8510]), free_vars=[])
>>> print(out.value)
tensor([0.8510, 0.8510])
>>> print(out.free_vars)
[]
>>> print(out.shape())
torch.Size([2])

Evaluation of a unary function on a variable. Note that:

  • since a variable has been given, the LTNObject in output has one free variable;

  • the shape of the LTNObject in output is (2, 2) since the function has been evaluated on a variable with two individuals and returns individuals in \(\mathbb{R}^2\).

>>> v = ltn.Variable('v', torch.tensor([[0.4, 0.3],
...                                     [0.32, 0.043]]))
>>> out = f_f(v)
>>> print(out)
LTNObject(value=tensor([[0.7000, 0.7000],
        [0.3630, 0.3630]]), free_vars=['v'])
>>> print(out.value)
tensor([[0.7000, 0.7000],
        [0.3630, 0.3630]])
>>> print(out.free_vars)
['v']
>>> print(out.shape())
torch.Size([2, 2])

Evaluation of a binary function on a variable and a constant. Note that:

  • like in the previous example, the LTNObject in output has just one free variable, since only one variable has been given to the function;

  • the shape of the LTNObject in output is (2, 2) since the function has been evaluated on a variable with two individuals and returns individuals in \(\mathbb{R}^2\). The constant does not add dimensions to the output.

>>> v = ltn.Variable('v', torch.tensor([[0.4, 0.3],
...                                     [0.32, 0.043]]))
>>> c = ltn.Constant(torch.tensor([0.4, 0.04, 0.23, 0.43]))
>>> out = b_f_f(v, c)
>>> print(out)
LTNObject(value=tensor([[1.8000, 1.8000],
        [1.4630, 1.4630]]), free_vars=['v'])
>>> print(out.value)
tensor([[1.8000, 1.8000],
        [1.4630, 1.4630]])
>>> print(out.free_vars)
['v']
>>> print(out.shape())
torch.Size([2, 2])

Evaluation of a binary function on two variables. Note that:

  • since two variables have been given, the LTNObject in output has two free variables;

  • the shape of the LTNObject in output is (2, 3, 2) since the function has been evaluated on a variable with two individuals, a variable with three individuals, and returns individuals in \(\mathbb{R}^2\);

  • the first dimension is dedicated to variable x, which is also the first one appearing in free_vars, the second dimension is dedicated to variable y, which is the second one appearing in free_vars, while the last dimension is dedicated to the features of the individuals in output;

  • it is possible to index the value attribute to retrieve specific results of the function. For example, at position (1, 2) there is the evaluation of the function on the second individual of x and the third individual of y.

>>> x = ltn.Variable('x', torch.tensor([[0.4, 0.3],
...                                     [0.32, 0.043]]))
>>> y = ltn.Variable('y', torch.tensor([[0.4, 0.04, 0.23],
...                                     [0.2, 0.04, 0.32],
...                                     [0.06, 0.08, 0.3]]))
>>> out = b_f_f(x, y)
>>> print(out)
LTNObject(value=tensor([[[1.3700, 1.3700],
         [1.2600, 1.2600],
         [1.1400, 1.1400]],

        [[1.0330, 1.0330],
         [0.9230, 0.9230],
         [0.8030, 0.8030]]]), free_vars=['x', 'y'])
>>> print(out.value)
tensor([[[1.3700, 1.3700],
         [1.2600, 1.2600],
         [1.1400, 1.1400]],

        [[1.0330, 1.0330],
         [0.9230, 0.9230],
         [0.8030, 0.8030]]])
>>> print(out.free_vars)
['x', 'y']
>>> print(out.shape())
torch.Size([2, 3, 2])
>>> print(out.value[1, 2])
tensor([0.8030, 0.8030])
Attributes
model : torch.nn.Module or function

The grounding of the LTN function.

forward(*inputs, **kwargs)

It computes the output of the function given some LTN objects in input.

Before computing the function, it performs the LTN broadcasting of the inputs.

Parameters
inputs : tuple of ltn.core.LTNObject

Tuple of LTN objects for which the function has to be computed.

Returns
ltn.core.LTNObject

An LTNObject whose value attribute contains the result of the function, while free_vars attribute contains the labels of the free variables contained in the result.

Raises
TypeError

Raises when the types of the inputs are incorrect.

ltn.core.diag(*vars)

Sets the given LTN variables for diagonal quantification.

The diagonal quantification disables the LTN broadcasting for the given variables.

Parameters
vars : tuple of ltn.core.Variable

Tuple of LTN variables for which the diagonal quantification has to be set.

Returns
list of ltn.core.Variable

List of the same LTN variables given in input, prepared for the use of diagonal quantification.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

ValueError

Raises when the values of the input parameters are incorrect.

See also

ltn.core.undiag()

It removes the diagonal quantification setting from the given variables.

Notes

  • diagonal quantification has been designed to work with quantified statements; however, it can also be used to reduce the combinations of individuals on which a predicate is computed, making the computation more efficient;

  • diagonal quantification is particularly useful when we need to compute a predicate, or function, on specific tuples of variables’ individuals only;

  • diagonal quantification expects the given variables to have the same number of individuals.

Examples

Behavior of a predicate without diagonal quantification. Note that:

  • if diagonal quantification is not used, LTNtorch applies the LTN broadcasting to the variables before computing the predicate;

  • the shape of the LTNObject in output is (2, 2) since the predicate has been computed on two variables with two individuals each;

  • the free_vars attribute of the LTNObject in output contains two variables, namely the variables on which the predicate has been computed.

>>> import ltn
>>> import torch
>>> p = ltn.Predicate(func=lambda a, b: torch.nn.Sigmoid()(
...                                         torch.sum(torch.cat([a, b], dim=1), dim=1)
...                                     ))
>>> x = ltn.Variable('x', torch.tensor([[0.3, 0.56, 0.43], [0.3, 0.5, 0.04]]))
>>> y = ltn.Variable('y', torch.tensor([[0.4, 0.004], [0.3, 0.32]]))
>>> out = p(x, y)
>>> print(out.value)
tensor([[0.8447, 0.8710],
        [0.7763, 0.8115]])
>>> print(out.free_vars)
['x', 'y']
>>> print(out.shape())
torch.Size([2, 2])

Behavior of the same predicate with diagonal quantification. Note that:

  • diagonal quantification requires the two variables to have the same number of individuals;

  • diagonal quantification has disabled the LTN broadcasting, namely the predicate is not computed on all the possible combinations of individuals of the two variables (that are 2x2). Instead, it is computed only on the given tuples of individuals (that are 2), namely on the first individual of x and first individual of y, and on the second individual of x and second individual of y;

  • the shape of the LTNObject in output is (2) since diagonal quantification has been set and the variables have two individuals;

  • the free_vars attribute of the LTNObject in output has just one variable, even if two variables have been given to the predicate. This is due to diagonal quantification;

  • when diagonal quantification is set, you will see a variable label starting with diag_ in the free_vars attribute.

>>> x, y = ltn.diag(x, y)
>>> out = p(x, y)
>>> print(out.value)
tensor([0.8447, 0.8115])
>>> print(out.free_vars)
['diag_x_y']
>>> print(out.shape())
torch.Size([2])

See the examples under ltn.core.Quantifier to see how to use ltn.core.diag() with quantifiers.

ltn.core.undiag(*vars)

Resets the LTN broadcasting for the given LTN variables.

In other words, it removes the diagonal quantification setting from the given variables.

Parameters
vars : tuple of ltn.core.Variable

Tuple of LTN variables for which the diagonal quantification setting has to be removed.

Returns
list

List of the same LTN variables given in input, with the diagonal quantification setting removed.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

See also

ltn.core.diag()

It sets the diagonal quantification for the given variables.

Examples

Behavior of predicate with diagonal quantification. Note that:

  • diagonal quantification requires the two variables to have the same number of individuals;

  • diagonal quantification has disabled the LTN broadcasting, namely the predicate is not computed on all the possible combinations of individuals of the two variables (that are 2x2). Instead, it is computed only on the given tuples of individuals (that are 2), namely on the first individual of x and first individual of y, and on the second individual of x and second individual of y;

  • the shape of the LTNObject in output is (2) since diagonal quantification has been set and the variables have two individuals;

  • the free_vars attribute of the LTNObject in output has just one variable, even if two variables have been given to the predicate. This is due to diagonal quantification;

  • when diagonal quantification is set, you will see a variable label starting with diag_ in the free_vars attribute.

>>> import ltn
>>> import torch
>>> p = ltn.Predicate(func=lambda a, b: torch.nn.Sigmoid()(
...                                         torch.sum(torch.cat([a, b], dim=1), dim=1)
...                                     ))
>>> x = ltn.Variable('x', torch.tensor([[0.3, 0.56, 0.43], [0.3, 0.5, 0.04]]))
>>> y = ltn.Variable('y', torch.tensor([[0.4, 0.004], [0.3, 0.32]]))
>>> x, y = ltn.diag(x, y)
>>> out = p(x, y)
>>> print(out.value)
tensor([0.8447, 0.8115])
>>> print(out.free_vars)
['diag_x_y']
>>> print(out.shape())
torch.Size([2])

ltn.core.undiag() can be used to restore the LTN broadcasting for the two variables. The following shows the behavior of the same predicate without diagonal quantification. Note that:

  • since diagonal quantification has been disabled, LTNtorch applies the LTN broadcasting to the variables before computing the predicate;

  • the shape of the LTNObject in output is (2, 2) since the predicate has been computed on two variables with two individuals each;

  • the free_vars attribute of the LTNObject in output contains two variables, namely the variables on which the predicate has been computed.

>>> x, y = ltn.undiag(x, y)
>>> out = p(x, y)
>>> print(out.value)
tensor([[0.8447, 0.8710],
        [0.7763, 0.8115]])
>>> print(out.free_vars)
['x', 'y']
>>> print(out.shape())
torch.Size([2, 2])
class ltn.core.Connective(connective_op)

Bases: object

Class representing an LTN connective.

An LTN connective is grounded as a fuzzy connective operator.

In LTNtorch, the inputs of a connective are automatically broadcasted before the computation of the connective, if necessary. Moreover, the output is organized in a tensor where each dimension is related to one variable appearing in the inputs. See LTN broadcasting for more information.

Parameters
connective_op : ltn.fuzzy_ops.ConnectiveOperator

The unary/binary fuzzy connective operator that becomes the grounding of the LTN connective.

Raises
TypeError

Raises when the type of the input parameter is incorrect.

See also

ltn.fuzzy_ops

The ltn.fuzzy_ops module contains the definition of common fuzzy connective operators that can be used with LTN connectives.

Notes

  • the LTN connective supports various fuzzy connective operators. They can be found in ltn.fuzzy_ops;

  • the LTN connective allows using these fuzzy operators with LTN formulas. It takes care of combining sub-formulas which have different variables appearing in them (LTN broadcasting);

  • an LTN connective can be applied only to LTN objects containing truth values, namely values in \([0., 1.]\);

  • the output of an LTN connective is always an LTN object (ltn.core.LTNObject).

__call__(*operands, **kwargs)

It applies the selected fuzzy connective operator (connective_op attribute) to the operands (LTN objects) given in input.

Parameters
operands : tuple of ltn.core.LTNObject

Tuple of LTN objects representing the operands to which the fuzzy connective operator has to be applied.

Returns
ltn.core.LTNObject

The LTNObject that is the result of the application of the fuzzy connective operator to the given LTN objects.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

ValueError

Raises when the values of the input parameters are incorrect. Raises when the truth values of the operands given in input are not in the range [0., 1.].

Examples

Use of \(\land\) to create a formula which is the conjunction of two predicates. Note that:

  • a connective operator can be applied only to inputs which represent truth values. In this case, we have two predicates;

  • LTNtorch provides various semantics for the conjunction, here we use the Goguen conjunction (ltn.fuzzy_ops.AndProd);

  • LTNtorch applies the LTN broadcasting to the variables before computing the predicates;

  • LTNtorch applies the LTN broadcasting to the operands before applying the selected conjunction operator;

  • the result of a connective operator is a ltn.core.LTNObject instance containing truth values in [0., 1.];

  • the attribute value of the LTNObject in output contains the result of the connective operator;

  • the shape of the LTNObject in output is (2, 3, 4). The first dimension is associated with variable x, which has two individuals, the second dimension with variable y, which has three individuals, while the last dimension with variable z, which has four individuals;

  • it is possible to access specific results by indexing the attribute value. For example, at index (0, 1, 2) there is the evaluation of the formula on the first individual of x, second individual of y, and third individual of z;

  • the attribute free_vars of the LTNObject in output contains the labels of the three variables appearing in the formula.

>>> import ltn
>>> import torch
>>> p = ltn.Predicate(func=lambda a: torch.nn.Sigmoid()(
...                                     torch.sum(a, dim=1)
...                                  ))
>>> q = ltn.Predicate(func=lambda a, b: torch.nn.Sigmoid()(
...                                         torch.sum(torch.cat([a, b], dim=1),
...                                     dim=1)))
>>> x = ltn.Variable('x', torch.tensor([[0.3, 0.5],
...                                     [0.04, 0.43]]))
>>> y = ltn.Variable('y', torch.tensor([[0.5, 0.23],
...                                     [4.3, 9.3],
...                                     [4.3, 0.32]]))
>>> z = ltn.Variable('z', torch.tensor([[0.3, 0.4, 0.43],
...                                     [0.4, 4.3, 5.1],
...                                     [1.3, 4.3, 2.3],
...                                     [0.4, 0.2, 1.2]]))
>>> And = ltn.Connective(ltn.fuzzy_ops.AndProd())
>>> print(And)
Connective(connective_op=AndProd(stable=True))
>>> out = And(p(x), q(y, z))
>>> print(out)
LTNObject(value=tensor([[[0.5971, 0.6900, 0.6899, 0.6391],
         [0.6900, 0.6900, 0.6900, 0.6900],
         [0.6878, 0.6900, 0.6900, 0.6889]],

        [[0.5325, 0.6154, 0.6153, 0.5700],
         [0.6154, 0.6154, 0.6154, 0.6154],
         [0.6135, 0.6154, 0.6154, 0.6144]]]), free_vars=['x', 'y', 'z'])
>>> print(out.value)
tensor([[[0.5971, 0.6900, 0.6899, 0.6391],
         [0.6900, 0.6900, 0.6900, 0.6900],
         [0.6878, 0.6900, 0.6900, 0.6889]],

        [[0.5325, 0.6154, 0.6153, 0.5700],
         [0.6154, 0.6154, 0.6154, 0.6154],
         [0.6135, 0.6154, 0.6154, 0.6144]]])
>>> print(out.free_vars)
['x', 'y', 'z']
>>> print(out.shape())
torch.Size([2, 3, 4])
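
The same mechanism works for unary connectives. A brief sketch follows, assuming the standard fuzzy negation operator ltn.fuzzy_ops.NotStandard (grounded as \(1 - x\)) is used; the printed values are computed under that assumption.

>>> Not = ltn.Connective(ltn.fuzzy_ops.NotStandard())
>>> not_out = Not(p(x))  # negation of the unary predicate p defined above
>>> print(not_out.value)
tensor([0.3100, 0.3846])
>>> print(not_out.free_vars)
['x']
>>> print(not_out.shape())
torch.Size([2])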
Attributes
connective_op : ltn.fuzzy_ops.ConnectiveOperator

See connective_op parameter.

class ltn.core.Quantifier(agg_op, quantifier)

Bases: object

Class representing an LTN quantifier.

An LTN quantifier is grounded as a fuzzy aggregation operator. See quantification in LTN for more information about quantification.

Parameters
agg_op : ltn.fuzzy_ops.AggregationOperator

The fuzzy aggregation operator that becomes the grounding of the LTN quantifier.

quantifier : str

String indicating the quantification that has to be performed (‘e’ for ∃, or ‘f’ for ∀).

Raises
TypeError

Raises when the type of the agg_op parameter is incorrect.

ValueError

Raises when the value of the quantifier parameter is incorrect.

See also

ltn.fuzzy_ops

The ltn.fuzzy_ops module contains the definition of common fuzzy aggregation operators that can be used with LTN quantifiers.

Notes

  • the LTN quantifier supports various fuzzy aggregation operators, which can be found in ltn.fuzzy_ops;

  • the LTN quantifier allows using these fuzzy aggregators with LTN formulas. It takes care of selecting the formula (LTNObject) dimensions to aggregate, given the LTN variables passed as arguments;

  • boolean conditions (by setting parameters cond_fn and cond_vars) can be used for guarded quantification;

  • an LTN quantifier can be applied only to LTN objects containing truth values, namely values in \([0., 1.]\);

  • the output of an LTN quantifier is always an LTN object (ltn.core.LTNObject).

__call__(vars, formula, cond_vars=None, cond_fn=None, **kwargs)

It applies the selected aggregation operator (agg_op attribute) to the formula given in input based on the selected variables.

It also allows performing guarded quantification by setting the cond_vars and cond_fn parameters.

Parameters
vars : list of ltn.core.Variable

List of LTN variables on which the quantification has to be performed.

formula : ltn.core.LTNObject

Formula on which the quantification has to be performed.

cond_vars : list of ltn.core.Variable, default=None

List of LTN variables that appear in the guarded quantification condition.

cond_fn : function, default=None

Function representing the guarded quantification condition.

Raises
TypeError

Raises when the types of the input parameters are incorrect.

ValueError

Raises when the values of the input parameters are incorrect. Raises when the truth values of the formula given in input are not in the range [0., 1.].

Examples

Behavior of a binary predicate evaluated on two variables. Note that:

  • the shape of the LTNObject in output is (2, 3) since the predicate has been computed on a variable with two individuals and a variable with three individuals;

  • the attribute free_vars of the LTNObject in output contains the labels of the two variables given in input to the predicate.

>>> import ltn
>>> import torch
>>> p = ltn.Predicate(func=lambda a, b: torch.nn.Sigmoid()(
...                                         torch.sum(torch.cat([a, b], dim=1),
...                                     dim=1)))
>>> x = ltn.Variable('x', torch.tensor([[0.3, 1.3],
...                                     [0.3, 0.3]]))
>>> y = ltn.Variable('y', torch.tensor([[2.3, 0.3, 0.4],
...                                     [1.2, 3.4, 1.3],
...                                     [2.3, 1.4, 1.4]]))
>>> out = p(x, y)
>>> print(out)
LTNObject(value=tensor([[0.9900, 0.9994, 0.9988],
        [0.9734, 0.9985, 0.9967]]), free_vars=['x', 'y'])
>>> print(out.value)
tensor([[0.9900, 0.9994, 0.9988],
        [0.9734, 0.9985, 0.9967]])
>>> print(out.free_vars)
['x', 'y']
>>> print(out.shape())
torch.Size([2, 3])

Universal quantification on one single variable of the same predicate. Note that:

  • quantifier=’f’ means that we are defining the fuzzy semantics for the universal quantifier;

  • the result of a quantification operator is always a ltn.core.LTNObject instance;

  • LTNtorch supports various semantics for quantifiers, here we use ltn.fuzzy_ops.AggregPMeanError for \(\forall\);

  • the shape of the LTNObject in output is (3) since the quantification has been performed on variable x. Only the dimension associated with variable y remains, since the quantification has been computed by LTNtorch as an aggregation on the dimension related with variable x;

  • the attribute free_vars of the LTNObject in output contains only the label of variable y. This is because variable x has been quantified, namely it is not a free variable anymore;

  • in LTNtorch, the quantification is performed by computing the value of the predicate first and then by aggregating on the selected dimensions, specified by the quantified variables.

>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(), quantifier='f')
>>> print(Forall)
Quantifier(agg_op=AggregPMeanError(p=2, stable=True), quantifier='f')
>>> out = Forall(x, p(x, y))
>>> print(out)
LTNObject(value=tensor([0.9798, 0.9988, 0.9974]), free_vars=['y'])
>>> print(out.value)
tensor([0.9798, 0.9988, 0.9974])
>>> print(out.free_vars)
['y']
>>> print(out.shape())
torch.Size([3])

Universal quantification on both variables of the same predicate. Note that:

  • the shape of the LTNObject in output is empty since the quantification has been performed on both variables. No dimension remains since the quantification has been computed by LTNtorch as an aggregation on both dimensions of the value of the predicate;

  • the attribute free_vars of the LTNObject in output contains no labels of variables. This is because both variables have been quantified, namely they are not free variables anymore.

>>> out = Forall([x, y], p(x, y))
>>> print(out)
LTNObject(value=tensor(0.9882), free_vars=[])
>>> print(out.value)
tensor(0.9882)
>>> print(out.free_vars)
[]
>>> print(out.shape())
torch.Size([])

Universal quantification on one variable, and existential quantification on the other variable, of the same predicate. Note that:

  • the only way in LTNtorch to apply two different quantifiers to the same formula is to nest them;

  • quantifier=’e’ means that we are defining the fuzzy semantics for the existential quantifier;

  • LTNtorch supports various semantics for quantifiers, here we use ltn.fuzzy_ops.AggregPMean for \(\exists\).

>>> Exists = ltn.Quantifier(ltn.fuzzy_ops.AggregPMean(), quantifier='e')
>>> print(Exists)
Quantifier(agg_op=AggregPMean(p=2, stable=True), quantifier='e')
>>> out = Forall(x, Exists(y, p(x, y)))
>>> print(out)
LTNObject(value=tensor(0.9920), free_vars=[])
>>> print(out.value)
tensor(0.9920)
>>> print(out.free_vars)
[]
>>> print(out.shape())
torch.Size([])

Guarded quantification. We perform a universal quantification on both variables of the same predicate, considering only the individuals of variable x whose sum of features is lower than a certain threshold. Note that:

  • guarded quantification requires the parameters cond_vars and cond_fn to be set;

  • cond_vars contains the variables on which the guarded condition is based on. In this case, we have decided to create a condition on x;

  • cond_fn contains the function which is the guarded condition. In this case, it verifies if the sum of features of the individuals of x is lower than 1. (our threshold);

  • the first individual of x, which is [0.3, 1.3], does not satisfy the condition (its features sum to 1.6), namely it will not be considered when the aggregation has to be performed. In other words, all the results of the predicate computed using the first individual of x will not be considered in the aggregation;

  • notice the result changes compared to the previous example (\(\forall x \forall y P(x, y)\)). This is due to the fact that some truth values of the result of the predicate are not considered in the aggregation due to guarded quantification. These values are at positions (0, 0), (0, 1), and (0, 2), namely all the positions related with the first individual of x in the result of the predicate;

  • notice that the shape of the LTNObject in output and its attribute free_vars remain the same compared to the previous example. This is because the quantification is still on both variables, namely it is performed on both dimensions of the result of the predicate.

>>> out = Forall([x, y], p(x, y),
...             cond_vars=[x],
...             cond_fn=lambda x: torch.less(torch.sum(x.value, dim=1), 1.))
>>> print(out)
LTNObject(value=tensor(0.9844, dtype=torch.float64), free_vars=[])
>>> print(out.value)
tensor(0.9844, dtype=torch.float64)
>>> print(out.free_vars)
[]
>>> print(out.shape())
torch.Size([])

Universal quantification of both variables of the same predicate using diagonal quantification (ltn.core.diag()). Note that:

  • the variables have the same number of individuals since it is a constraint for applying diagonal quantification;

  • since diagonal quantification has been set, the predicate will not be computed on all the possible combinations of individuals of the two variables (that are 4), namely the LTN broadcasting is disabled;

  • the predicate is computed only on the given tuples of individuals in a one-to-one correspondence, namely on the first individual of x and y, and second individual of x and y;

  • the result changes compared to the case without diagonal quantification. This is due to the fact that we are aggregating a smaller number of truth values since the predicate has been computed only two times.

>>> x = ltn.Variable('x', torch.tensor([[0.3, 1.3],
...                                     [0.3, 0.3]]))
>>> y = ltn.Variable('y', torch.tensor([[2.3, 0.3],
...                                    [1.2, 3.4]]))
>>> out = Forall(ltn.diag(x, y), p(x, y)) # with diagonal quantification
>>> out_without_diag = Forall([x, y], p(x, y)) # without diagonal quantification
>>> print(out_without_diag)
LTNObject(value=tensor(0.9788), free_vars=[])
>>> print(out_without_diag.value)
tensor(0.9788)
>>> print(out)
LTNObject(value=tensor(0.9888), free_vars=[])
>>> print(out.value)
tensor(0.9888)
>>> print(out.free_vars)
[]
>>> print(out.shape())
torch.Size([])
Attributes
agg_op : ltn.fuzzy_ops.AggregationOperator

See agg_op parameter.

quantifier : str

See quantifier parameter.
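
To close the section, here is a hedged end-to-end sketch (toy data and a toy training loop, not taken verbatim from the LTNtorch examples) of how the symbols defined in this module are typically combined: a learnable predicate, a variable, and a universal quantifier, optimized so that the formula \(\forall x \, P(x)\) becomes as true as possible.

>>> import ltn
>>> import torch
>>> P = ltn.Predicate(model=torch.nn.Sequential(
...                       torch.nn.Linear(2, 1),
...                       torch.nn.Sigmoid()
...                   ))
>>> x = ltn.Variable('x', torch.rand(10, 2))  # 10 individuals with 2 features each
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(), quantifier='f')
>>> optimizer = torch.optim.SGD(P.parameters(), lr=0.05)
>>> for _ in range(20):
...     optimizer.zero_grad()
...     sat = Forall(x, P(x)).value  # truth value of the formula "forall x: P(x)"
...     loss = 1. - sat              # maximize satisfaction by minimizing 1 - sat
...     loss.backward()
...     optimizer.step()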