ltn.fuzzy_ops
Members
ConnectiveOperator | Abstract class for connective operators.
UnaryConnectiveOperator | Abstract class for unary connective operators.
BinaryConnectiveOperator | Abstract class for binary connective operators.
NotStandard | Standard fuzzy negation operator.
NotGodel | Godel fuzzy negation operator.
AndMin | Godel fuzzy conjunction operator (min operator).
AndProd | Goguen fuzzy conjunction operator (product operator).
AndLuk | Lukasiewicz fuzzy conjunction operator.
OrMax | Godel fuzzy disjunction operator (max operator).
OrProbSum | Goguen fuzzy disjunction operator (probabilistic sum).
OrLuk | Lukasiewicz fuzzy disjunction operator.
ImpliesKleeneDienes | Kleene Dienes fuzzy implication operator.
ImpliesGodel | Godel fuzzy implication operator.
ImpliesReichenbach | Reichenbach fuzzy implication operator.
ImpliesGoguen | Goguen fuzzy implication operator.
Equiv | Equivalence (\(\leftrightarrow\)) fuzzy operator.
AggregationOperator | Abstract class for aggregation operators.
AggregMin | Min fuzzy aggregation operator.
AggregMean | Mean fuzzy aggregation operator.
AggregPMean | pMean fuzzy aggregation operator.
AggregPMeanError | pMeanError fuzzy aggregation operator.
SatAgg | SatAgg aggregation operator.
The ltn.fuzzy_ops module contains the PyTorch implementation of some common fuzzy logic operators and aggregators. Refer to the LTN paper for a detailed description of these operators (see the Appendix).
All the operators included in this module support the traditional NumPy/PyTorch broadcasting.
The operators have been designed to be used with ltn.core.Connective or ltn.core.Quantifier.
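Each operator is also a plain callable on torch.Tensor truth values in [0., 1.], so it can be applied directly to tensors outside of ltn.core.Connective or ltn.core.Quantifier; standard PyTorch broadcasting then determines the output shape. The following minimal sketch is only illustrative and is not taken from the LTNtorch examples:

import torch
import ltn

# Godel conjunction: element-wise minimum of the two truth-value tensors.
and_min = ltn.fuzzy_ops.AndMin()
x = torch.tensor([0.3, 0.8])        # shape (2,)
y = torch.tensor([[0.5], [0.9]])    # shape (2, 1)
out = and_min(x, y)                 # broadcast to shape (2, 2), values min(x_j, y_i)
print(out.shape)                    # torch.Size([2, 2])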
- class ltn.fuzzy_ops.ConnectiveOperator
Bases: object
Abstract class for connective operators.
Every connective operator implemented in LTNtorch must inherit from this class and implement the __call__() method.
- Raises
NotImplementedError Raised when __call__() is not implemented in the sub-class.
- class ltn.fuzzy_ops.UnaryConnectiveOperator
Bases: ltn.fuzzy_ops.ConnectiveOperator
Abstract class for unary connective operators.
Every unary connective operator implemented in LTNtorch must inherit from this class and implement the __call__() method.
- Raises
NotImplementedError Raised when __call__() is not implemented in the sub-class.
- class ltn.fuzzy_ops.BinaryConnectiveOperator
Bases: ltn.fuzzy_ops.ConnectiveOperator
Abstract class for binary connective operators.
Every binary connective operator implemented in LTNtorch must inherit from this class and implement the __call__() method.
- Raises
NotImplementedError Raised when __call__() is not implemented in the sub-class.
- class ltn.fuzzy_ops.NotStandard
Bases: ltn.fuzzy_ops.UnaryConnectiveOperator
Standard fuzzy negation operator.
\(\lnot_{standard}(x) = 1 - x\)
Examples
>>> import ltn
>>> import torch
>>> Not = ltn.Connective(ltn.fuzzy_ops.NotStandard())
>>> print(Not)
Connective(connective_op=NotStandard())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(Not(p(x)).value)
tensor([0.3635, 0.2891])
- __call__(x)
It applies the standard fuzzy negation operator to the given operand.
- Parameters
- x
torch.Tensor Operand on which the operator has to be applied.
- Returns
torch.Tensor The standard fuzzy negation of the given operand.
- class ltn.fuzzy_ops.NotGodel
Bases: ltn.fuzzy_ops.UnaryConnectiveOperator
Godel fuzzy negation operator.
\(\lnot_{Godel}(x) = \left\{\begin{array}{ c l }1 & \quad \textrm{if } x = 0 \\ 0 & \quad \textrm{otherwise} \end{array} \right.\)
Examples
>>> import ltn
>>> import torch
>>> Not = ltn.Connective(ltn.fuzzy_ops.NotGodel())
>>> print(Not)
Connective(connective_op=NotGodel())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(Not(p(x)).value)
tensor([0., 0.])
- __call__(x)
It applies the Godel fuzzy negation operator to the given operand.
- Parameters
- x
torch.Tensor Operand on which the operator has to be applied.
- Returns
torch.Tensor The Godel fuzzy negation of the given operand.
- class ltn.fuzzy_ops.AndMin
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Godel fuzzy conjunction operator (min operator).
\(\land_{Godel}(x, y) = \operatorname{min}(x, y)\)
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the conjunction is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> And = ltn.Connective(ltn.fuzzy_ops.AndMin())
>>> print(And)
Connective(connective_op=AndMin())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(And(p(x), p(y)).value)
tensor([[0.6365, 0.5498, 0.5250],
        [0.6682, 0.5498, 0.5250]])
- __call__(x, y)
It applies the Godel fuzzy conjunction operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The Godel fuzzy conjunction of the two operands.
- class ltn.fuzzy_ops.AndProd(stable=True)
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Goguen fuzzy conjunction operator (product operator).
\(\land_{Goguen}(x, y) = xy\)
- Parameters
- stable
bool, default=True Flag indicating whether to use the stable version of the operator or not.
Notes
The Goguen fuzzy conjunction could have vanishing gradients if not used in its stable version.
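The idea behind the stable version is to nudge the operands slightly away from the endpoint where the gradient of the product vanishes before multiplying them. The sketch below only illustrates this idea; the exact constant and helper used internally by LTNtorch may differ.

import torch

EPS = 1e-4  # hypothetical smoothing constant

def stable_prod_conjunction(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Push both operands slightly away from 0 so that the gradient of x * y
    # does not vanish when one of the truth values is exactly 0.
    x = (1 - EPS) * x + EPS
    y = (1 - EPS) * y + EPS
    return torch.mul(x, y)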
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the conjunction is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> And = ltn.Connective(ltn.fuzzy_ops.AndProd())
>>> print(And)
Connective(connective_op=AndProd(stable=True))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(And(p(x), p(y)).value)
tensor([[0.4253, 0.3500, 0.3342],
        [0.4751, 0.3910, 0.3733]])
- __call__(x, y, stable=None)
It applies the Goguen fuzzy conjunction operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- stable
bool, default=None Flag indicating whether to use the stable version of the operator or not.
- Returns
torch.Tensor The Goguen fuzzy conjunction of the two operands.
- Attributes
- stable
bool See stable parameter.
- class ltn.fuzzy_ops.AndLuk
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Lukasiewicz fuzzy conjunction operator.
\(\land_{Lukasiewicz}(x, y) = \operatorname{max}(x + y - 1, 0)\)
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the conjunction is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> And = ltn.Connective(ltn.fuzzy_ops.AndLuk())
>>> print(And)
Connective(connective_op=AndLuk())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(And(p(x), p(y)).value)
tensor([[0.3046, 0.1863, 0.1614],
        [0.3791, 0.2608, 0.2359]])
- __call__(x, y)
It applies the Lukasiewicz fuzzy conjunction operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The Lukasiewicz fuzzy conjunction of the two operands.
- class ltn.fuzzy_ops.OrMax
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Godel fuzzy disjunction operator (max operator).
\(\lor_{Godel}(x, y) = \operatorname{max}(x, y)\)
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the disjunction is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Or = ltn.Connective(ltn.fuzzy_ops.OrMax())
>>> print(Or)
Connective(connective_op=OrMax())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Or(p(x), p(y)).value)
tensor([[0.6682, 0.6365, 0.6365],
        [0.7109, 0.7109, 0.7109]])
- __call__(x, y)
It applies the Godel fuzzy disjunction operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The Godel fuzzy disjunction of the two operands.
- class ltn.fuzzy_ops.OrProbSum(stable=True)
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Goguen fuzzy disjunction operator (probabilistic sum).
\(\lor_{Goguen}(x, y) = x + y - xy\)
- Parameters
- stable
bool, default=True Flag indicating whether to use the stable version of the operator or not.
Notes
The Goguen fuzzy disjunction could have vanishing gradients if not used in its stable version.
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the disjunction is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Or = ltn.Connective(ltn.fuzzy_ops.OrProbSum())
>>> print(Or)
Connective(connective_op=OrProbSum(stable=True))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Or(p(x), p(y)).value)
tensor([[0.8793, 0.8363, 0.8273],
        [0.9040, 0.8698, 0.8626]])
- __call__(x, y, stable=None)
It applies the Goguen fuzzy disjunction operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- stable
bool, default=None Flag indicating whether to use the stable version of the operator or not.
- Returns
torch.Tensor The Goguen fuzzy disjunction of the two operands.
- Attributes
- stable
bool See stable parameter.
- class ltn.fuzzy_ops.OrLuk
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Lukasiewicz fuzzy disjunction operator.
\(\lor_{Lukasiewicz}(x, y) = \operatorname{min}(x + y, 1)\)
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the disjunction is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Or = ltn.Connective(ltn.fuzzy_ops.OrLuk())
>>> print(Or)
Connective(connective_op=OrLuk())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Or(p(x), p(y)).value)
tensor([[1., 1., 1.],
        [1., 1., 1.]])
- __call__(x, y)
It applies the Lukasiewicz fuzzy disjunction operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The Lukasiewicz fuzzy disjunction of the two operands.
- class ltn.fuzzy_ops.ImpliesKleeneDienes
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Kleene Dienes fuzzy implication operator.
\(\rightarrow_{KleeneDienes}(x, y) = \operatorname{max}(1 - x, y)\)
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the implication is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Implies = ltn.Connective(ltn.fuzzy_ops.ImpliesKleeneDienes())
>>> print(Implies)
Connective(connective_op=ImpliesKleeneDienes())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Implies(p(x), p(y)).value)
tensor([[0.6682, 0.5498, 0.5250],
        [0.6682, 0.5498, 0.5250]])
- __call__(x, y)
It applies the Kleene Dienes fuzzy implication operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The Kleene Dienes fuzzy implication of the two operands.
- class ltn.fuzzy_ops.ImpliesGodel
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Godel fuzzy implication operator.
\(\rightarrow_{Godel}(x, y) = \left\{\begin{array}{ c l }1 & \quad \textrm{if } x \le y \\ y & \quad \textrm{otherwise} \end{array} \right.\)
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the implication is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Implies = ltn.Connective(ltn.fuzzy_ops.ImpliesGodel())
>>> print(Implies)
Connective(connective_op=ImpliesGodel())
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Implies(p(x), p(y)).value)
tensor([[1.0000, 0.5498, 0.5250],
        [0.6682, 0.5498, 0.5250]])
- __call__(x, y)
It applies the Godel fuzzy implication operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The Godel fuzzy implication of the two operands.
- class ltn.fuzzy_ops.ImpliesReichenbach(stable=True)
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Reichenbach fuzzy implication operator.
\(\rightarrow_{Reichenbach}(x, y) = 1 - x + xy\)
- Parameters
- stable
bool, default=True Flag indicating whether to use the stable version of the operator or not.
Notes
The Reichenbach fuzzy implication could have vanishing gradients if not used in its stable version.
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the implication is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Implies = ltn.Connective(ltn.fuzzy_ops.ImpliesReichenbach())
>>> print(Implies)
Connective(connective_op=ImpliesReichenbach(stable=True))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Implies(p(x), p(y)).value)
tensor([[0.7888, 0.7134, 0.6976],
        [0.7640, 0.6799, 0.6622]])
- __call__(x, y, stable=None)
It applies the Reichenbach fuzzy implication operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- stable
bool, default=None Flag indicating whether to use the stable version of the operator or not.
- Returns
torch.Tensor The Reichenbach fuzzy implication of the two operands.
- Attributes
- stable
bool See stable parameter.
- class ltn.fuzzy_ops.ImpliesGoguen(stable=True)
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Goguen fuzzy implication operator.
\(\rightarrow_{Goguen}(x, y) = \left\{\begin{array}{ c l }1 & \quad \textrm{if } x \le y \\ \frac{y}{x} & \quad \textrm{otherwise} \end{array} \right.\)
- Parameters
- stable
bool, default=True Flag indicating whether to use the stable version of the operator or not.
Notes
The Goguen fuzzy implication could have vanishing gradients if not used in its stable version.
Examples
Note that:
variable x has two individuals;
variable y has three individuals;
the shape of the result of the implication is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Implies = ltn.Connective(ltn.fuzzy_ops.ImpliesGoguen())
>>> print(Implies)
Connective(connective_op=ImpliesGoguen(stable=True))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Implies(p(x), p(y)).value)
tensor([[1.0000, 0.8639, 0.8248],
        [0.9398, 0.7733, 0.7384]])
- __call__(x, y, stable=None)
It applies the Goguen fuzzy implication operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- stable
bool, default=None Flag indicating whether to use the stable version of the operator or not.
- Returns
torch.Tensor The Goguen fuzzy implication of the two operands.
- Attributes
- stable
bool See stable parameter.
- class ltn.fuzzy_ops.Equiv(and_op, implies_op)
Bases: ltn.fuzzy_ops.BinaryConnectiveOperator
Equivalence (\(\leftrightarrow\)) fuzzy operator.
\(x \leftrightarrow y \equiv x \rightarrow y \land y \rightarrow x\)
- Parameters
- and_op
ltn.fuzzy_ops.BinaryConnectiveOperator Fuzzy conjunction operator to use for the equivalence operator.
- implies_op
ltn.fuzzy_ops.BinaryConnectiveOperator Fuzzy implication operator to use for the implication operator.
Notes
the equivalence operator (\(\leftrightarrow\)) is implemented in LTNtorch as an operator which computes: \(x \rightarrow y \land y \rightarrow x\);
the and_op parameter defines the operator for \(\land\);
the implies_op parameter defines the operator for \(\rightarrow\).
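As a sanity check, the equivalence can also be assembled by hand from this definition. The sketch below is only illustrative (it is not one of the original examples) and uses the unstable versions of the chosen operators so that the result follows directly from the formulas given above.

import torch
import ltn

# x <-> y is computed as the conjunction of the two implications (x -> y) and (y -> x).
and_op = ltn.fuzzy_ops.AndProd(stable=False)
implies_op = ltn.fuzzy_ops.ImpliesReichenbach(stable=False)
x, y = torch.tensor(0.8), torch.tensor(0.4)
# Reichenbach implication 1 - x + x*y gives 0.52 and 0.92; their product is 0.4784.
out = and_op(implies_op(x, y), implies_op(y, x))

Constructing ltn.fuzzy_ops.Equiv(and_op=and_op, implies_op=implies_op) and calling it on the same operands is expected to give the same value.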
Examples
Note that:
we have selected ltn.fuzzy_ops.AndProd() as an operator for the conjunction of the equivalence, and ltn.fuzzy_ops.ImpliesReichenbach as an operator for the implication;
variable x has two individuals;
variable y has three individuals;
the shape of the result of the equivalence is (2, 3) due to the LTN broadcasting. The first dimension is dedicated to variable x, while the second dimension to variable y;
at index (0, 0) there is the evaluation of the formula on the first individual of x and the first individual of y, at index (0, 1) there is the evaluation of the formula on the first individual of x and the second individual of y, and so forth.
>>> import ltn
>>> import torch
>>> Equiv = ltn.Connective(ltn.fuzzy_ops.Equiv(
...     and_op=ltn.fuzzy_ops.AndProd(),
...     implies_op=ltn.fuzzy_ops.ImpliesReichenbach()
... ))
>>> print(Equiv)
Connective(connective_op=Equiv(and_op=AndProd(stable=True), implies_op=ImpliesReichenbach(stable=True)))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9]]))
>>> y = ltn.Variable('y', torch.tensor([[0.7], [0.2], [0.1]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109])
>>> print(p(y).value)
tensor([0.6682, 0.5498, 0.5250])
>>> print(Equiv(p(x), p(y)).value)
tensor([[0.5972, 0.5708, 0.5645],
        [0.6165, 0.5718, 0.5617]])
- __call__(x, y)
It applies the fuzzy equivalence operator to the given operands.
- Parameters
- x
torch.Tensor First operand on which the operator has to be applied.
- y
torch.Tensor Second operand on which the operator has to be applied.
- Returns
torch.Tensor The fuzzy equivalence of the two operands.
- Attributes
- and_op
ltn.fuzzy_ops.BinaryConnectiveOperator See and_op parameter.
- implies_op
ltn.fuzzy_ops.BinaryConnectiveOperator See implies_op parameter.
- class ltn.fuzzy_ops.AggregationOperator
Bases: object
Abstract class for aggregation operators.
Every aggregation operator implemented in LTNtorch must inherit from this class and implement the __call__() method.
- Raises
NotImplementedError Raised when __call__() is not implemented in the sub-class.
- class ltn.fuzzy_ops.AggregMin
Bases: ltn.fuzzy_ops.AggregationOperator
Min fuzzy aggregation operator.
\(A_{T_{M}}(x_1, \dots, x_n) = \operatorname{min}(x_1, \dots, x_n)\)
Examples
>>> import ltn
>>> import torch
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregMin(), quantifier='f')
>>> print(Forall)
Quantifier(agg_op=AggregMin(), quantifier='f')
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9], [0.7]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109, 0.6682])
>>> print(Forall(x, p(x)).value)
tensor(0.6365)
- __call__(xs, dim=None, keepdim=False, mask=None)
It applies the min fuzzy aggregation operator to the given formula’s grounding on the selected dimensions.
- Parameters
- xs
torch.Tensor Grounding of formula on which the aggregation has to be performed.
- dim
tuple of int, default=None Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim
bool, default=False Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask
torch.Tensor, default=None Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
- Returns
torch.Tensor Min fuzzy aggregation of the formula.
- Raises
ValueError Raised when the grounding of the formula (‘xs’) and the mask do not have the same shape, or when the ‘mask’ is not boolean.
- class ltn.fuzzy_ops.AggregMean
Bases: ltn.fuzzy_ops.AggregationOperator
Mean fuzzy aggregation operator.
\(A_{M}(x_1, \dots, x_n) = \frac{1}{n} \sum_{i = 1}^n x_i\)
Examples
>>> import ltn
>>> import torch
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregMean(), quantifier='f')
>>> print(Forall)
Quantifier(agg_op=AggregMean(), quantifier='f')
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9], [0.7]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109, 0.6682])
>>> print(Forall(x, p(x)).value)
tensor(0.6719)
- __call__(xs, dim=None, keepdim=False, mask=None)
It applies the mean fuzzy aggregation operator to the given formula’s grounding on the selected dimensions.
- Parameters
- xs
torch.Tensor Grounding of formula on which the aggregation has to be performed.
- dim
tuple of int, default=None Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim
bool, default=False Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask
torch.Tensor, default=None Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
- Returns
torch.Tensor Mean fuzzy aggregation of the formula.
- Raises
ValueError Raised when the grounding of the formula (‘xs’) and the mask do not have the same shape, or when the ‘mask’ is not boolean.
- class ltn.fuzzy_ops.AggregPMean(p=2, stable=True)
Bases: ltn.fuzzy_ops.AggregationOperator
pMean fuzzy aggregation operator.
\(A_{pM}(x_1, \dots, x_n) = (\frac{1}{n} \sum_{i = 1}^n x_i^p)^{\frac{1}{p}}\)
- Parameters
- p
int, default=2 Value of hyper-parameter p of the pMean fuzzy aggregation operator.
- stable
bool, default=True Flag indicating whether to use the stable version of the operator or not.
Notes
The pMean aggregation operator has been selected as an approximation of \(\exists\) with \(p \geq 1\). If \(p \to \infty\), then the pMean operator tends to the maximum of the input values (classical behavior of \(\exists\)).
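This limit behaviour can be checked by computing the formula above directly; the small sketch below (illustrative only, not part of the original examples) shows the pMean of a fixed set of truth values moving towards their maximum as p increases.

import torch

xs = torch.tensor([0.3, 0.5, 0.9])

def p_mean(xs: torch.Tensor, p: int) -> torch.Tensor:
    # A_pM(x_1, ..., x_n) = (mean(x_i ** p)) ** (1 / p)
    return torch.mean(xs ** p) ** (1 / p)

for p in (1, 2, 10, 100):
    # The printed values increase towards max(xs) = 0.9 as p grows.
    print(p, p_mean(xs, p).item())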
Examples
>>> import ltn
>>> import torch
>>> Exists = ltn.Quantifier(ltn.fuzzy_ops.AggregPMean(), quantifier='e')
>>> print(Exists)
Quantifier(agg_op=AggregPMean(p=2, stable=True), quantifier='e')
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9], [0.7]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109, 0.6682])
>>> print(Exists(x, p(x)).value)
tensor(0.6726)
- __call__(xs, dim=None, keepdim=False, mask=None, p=None, stable=None)
It applies the pMean aggregation operator to the given formula’s grounding on the selected dimensions.
- Parameters
- xs
torch.Tensor Grounding of formula on which the aggregation has to be performed.
- dim
tuple of int, default=None Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim
bool, default=False Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask
torch.Tensor, default=None Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
- p
int, default=None Value of hyper-parameter p of the pMean fuzzy aggregation operator.
- stable
bool, default=None Flag indicating whether to use the stable version of the operator or not.
- Returns
torch.Tensor pMean fuzzy aggregation of the formula.
- Raises
ValueError Raised when the grounding of the formula (‘xs’) and the mask do not have the same shape, or when the ‘mask’ is not boolean.
- class ltn.fuzzy_ops.AggregPMeanError(p=2, stable=True)
Bases: ltn.fuzzy_ops.AggregationOperator
pMeanError fuzzy aggregation operator.
\(A_{pME}(x_1, \dots, x_n) = 1 - (\frac{1}{n} \sum_{i = 1}^n (1 - x_i)^p)^{\frac{1}{p}}\)
- Parameters
- p
int, default=2 Value of hyper-parameter p of the pMeanError fuzzy aggregation operator.
- stable
bool, default=True Flag indicating whether to use the stable version of the operator or not.
Notes
The pMeanError aggregation operator has been selected as an approximation of \(\forall\) with \(p \geq 1\). If \(p \to \infty\), then the pMeanError operator tends to the minimum of the input values (classical behavior of \(\forall\)).
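Analogously, the sketch below (illustrative only, not part of the original examples) computes the pMeanError formula directly and shows the result moving towards the minimum of the truth values as p increases.

import torch

xs = torch.tensor([0.3, 0.5, 0.9])

def p_mean_error(xs: torch.Tensor, p: int) -> torch.Tensor:
    # A_pME(x_1, ..., x_n) = 1 - (mean((1 - x_i) ** p)) ** (1 / p)
    return 1 - torch.mean((1 - xs) ** p) ** (1 / p)

for p in (1, 2, 10, 100):
    # The printed values decrease towards min(xs) = 0.3 as p grows.
    print(p, p_mean_error(xs, p).item())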
Examples
>>> import ltn
>>> import torch
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(), quantifier='f')
>>> print(Forall)
Quantifier(agg_op=AggregPMeanError(p=2, stable=True), quantifier='f')
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> x = ltn.Variable('x', torch.tensor([[0.56], [0.9], [0.7]]))
>>> print(p(x).value)
tensor([0.6365, 0.7109, 0.6682])
>>> print(Forall(x, p(x)).value)
tensor(0.6704)
- __call__(xs, dim=None, keepdim=False, mask=None, p=None, stable=None)
It applies the pMeanError aggregation operator to the given formula’s grounding on the selected dimensions.
- Parameters
- xs
torch.Tensor Grounding of formula on which the aggregation has to be performed.
- dim
tuple of int, default=None Tuple containing the indexes of dimensions on which the aggregation has to be performed.
- keepdim
bool, default=False Flag indicating whether the output has to keep the same dimensions as the input after the aggregation.
- mask
torch.Tensor, default=None Boolean mask for excluding values of ‘xs’ from the aggregation. It is internally used for guarded quantification. The mask must have the same shape as ‘xs’. False means exclusion, True means inclusion.
- p
int, default=None Value of hyper-parameter p of the pMeanError fuzzy aggregation operator.
- stable
bool, default=None Flag indicating whether to use the stable version of the operator or not.
- Returns
torch.Tensor pMeanError fuzzy aggregation of the formula.
- Raises
ValueError Raised when the grounding of the formula (‘xs’) and the mask do not have the same shape, or when the ‘mask’ is not boolean.
- class ltn.fuzzy_ops.SatAgg(agg_op=AggregPMeanError(p=2, stable=True))
Bases: object
SatAgg aggregation operator.
\(\operatorname{SatAgg}_{\phi \in \mathcal{K}} \mathcal{G}_{\theta} (\phi)\)
It aggregates the truth values of the closed formulas given in input, namely the formulas \(\phi_1, \dots, \phi_n\) contained in the knowledge base \(\mathcal{K}\). In the notation, \(\mathcal{G}_{\theta}\) is the grounding function, parametrized by \(\theta\).
- Parameters
- agg_op
ltn.fuzzy_ops.AggregationOperator, default=AggregPMeanError(p=2) Fuzzy aggregation operator used by the SatAgg operator to perform the aggregation.
- Raises
TypeError Raised when the type of the input parameter is not correct.
Notes
SatAgg is particularly useful for computing the overall satisfaction level of a knowledge base when learning a Logic Tensor Network;
the result of the SatAgg aggregation is a scalar. It is the satisfaction level of the knowledge base composed of the closed formulas given in input.
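Since the output is a differentiable scalar, a common pattern is to minimize 1 - SatAgg(...) as the training loss of an LTN. The sketch below is illustrative only: the predicate, data, and hyper-parameters are made up, and the optimizer calls are left as a comment because this toy predicate has no trainable parameters.

import torch
import ltn

p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(torch.sum(x, dim=1)))
x = ltn.Variable('x', torch.rand((10, 2)))
Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(), quantifier='f')
sat_agg = ltn.fuzzy_ops.SatAgg()

for epoch in range(10):
    # Satisfaction level of the (single-formula) knowledge base; a scalar tensor.
    sat = sat_agg(Forall(x, p(x)))
    loss = 1. - sat
    # With a parametric predicate: optimizer.zero_grad(); loss.backward(); optimizer.step()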
Examples
SatAgg can be used to aggregate the truth values of formulas contained in a knowledge base. Note that:
SatAgg takes as input a tuple of ltn.core.LTNObject and/or torch.Tensor;
when some torch.Tensor are given to SatAgg, they have to be scalars in [0., 1.] since SatAgg is designed to work with closed formulas;
in this example, our knowledge base is composed of closed formulas f1, f2, and f3;
SatAgg applies the pMeanError aggregation operator to the truth values of these formulas. The result is a new truth value which can be interpreted as a satisfaction level of the entire knowledge base;
the result of SatAgg is a torch.Tensor since it has been designed for learning in PyTorch. The idea is to put the result of the operator directly inside the loss function of the LTN. See this tutorial for a detailed example.
>>> import ltn
>>> import torch
>>> x = ltn.Variable('x', torch.tensor([[0.1, 0.03],
...                                     [2.3, 4.3]]))
>>> y = ltn.Variable('y', torch.tensor([[3.4, 2.3],
...                                     [5.4, 0.43]]))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> q = ltn.Predicate(func=lambda x, y: torch.nn.Sigmoid()(
...     torch.sum(torch.cat([x, y], dim=1),
...               dim=1)))
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(), quantifier='f')
>>> And = ltn.Connective(ltn.fuzzy_ops.AndProd())
>>> f1 = Forall(x, p(x))
>>> f2 = Forall([x, y], q(x, y))
>>> f3 = And(Forall([x, y], q(x, y)), Forall(x, p(x)))
>>> sat_agg = ltn.fuzzy_ops.SatAgg(ltn.fuzzy_ops.AggregPMeanError())
>>> print(sat_agg)
SatAgg(agg_op=AggregPMeanError(p=2, stable=True))
>>> out = sat_agg(f1, f2, f3)
>>> print(type(out))
<class 'torch.Tensor'>
>>> print(out)
tensor(0.7294)
In the previous example, some closed formulas (ltn.core.LTNObject) have been given to the SatAgg operator. In this example, we show that SatAgg can also take as input torch.Tensor containing the result of some closed formulas, namely scalars in [0., 1.]. Note that:
f2 is just a torch.Tensor;
since f2 contains a scalar in [0., 1.], its value can be interpreted as a truth value of a closed formula. For this reason, it is possible to give f2 to the SatAgg operator to get the aggregation of f1 (ltn.core.LTNObject) and f2 (torch.Tensor).
>>> x = ltn.Variable('x', torch.tensor([[0.1, 0.03],
...                                     [2.3, 4.3]]))
>>> p = ltn.Predicate(func=lambda x: torch.nn.Sigmoid()(
...     torch.sum(x, dim=1)
... ))
>>> Forall = ltn.Quantifier(ltn.fuzzy_ops.AggregPMeanError(), quantifier='f')
>>> f1 = Forall(x, p(x))
>>> f2 = torch.tensor(0.7)
>>> sat_agg = ltn.fuzzy_ops.SatAgg(ltn.fuzzy_ops.AggregPMeanError())
>>> print(sat_agg)
SatAgg(agg_op=AggregPMeanError(p=2, stable=True))
>>> out = sat_agg(f1, f2)
>>> print(type(out))
<class 'torch.Tensor'>
>>> print(out)
tensor(0.6842)
- __call__(*closed_formulas)
It applies the SatAgg aggregation operator to the given closed formula’s groundings.
- Parameters
- closed_formulas
tuple of ltn.core.LTNObject and/or torch.Tensor Tuple of closed formulas (LTNObject and/or tensors) for which the aggregation has to be computed.
- Returns
torch.Tensor The result of the SatAgg aggregation.
- Raises
TypeError Raised when the type of the input parameter is not correct.
ValueError Raised when the truth values of the formulas/tensors given in input are not in the range [0., 1.], or when they are not scalars, namely some formulas are not closed formulas.
- Attributes
- agg_op
ltn.fuzzy_ops.AggregationOperator, default=AggregPMeanError(p=2) See agg_op parameter.