deephyper.keras.layers.AttentionGenLinear
- class deephyper.keras.layers.AttentionGenLinear(*args: Any, **kwargs: Any)
Bases: Layer
Generalized Linear Attention.
See https://arxiv.org/abs/1802.00910 for details.
The attention coefficient between node \(i\) and \(j\) is calculated as:
\[\textbf{W}_G \text{tanh} (\textbf{Wh}_i + \textbf{Wh}_j)\]
where \(\textbf{W}_G\) is a trainable matrix.
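As an illustration of the formula only (not DeepHyper's actual implementation; the dimensions and weight names below are assumptions), the coefficient for a node pair can be sketched in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

n_nodes, in_dim, units = 4, 3, 5
H = rng.normal(size=(n_nodes, in_dim))   # node features h
W = rng.normal(size=(in_dim, units))     # shared projection W
W_G = rng.normal(size=(units,))          # trainable attention matrix W_G

Wh = H @ W  # projected node features, shape (n_nodes, units)

def attn_coef(i, j):
    # W_G tanh(Wh_i + Wh_j): one scalar coefficient per node pair
    return float(np.tanh(Wh[i] + Wh[j]) @ W_G)

# the sum Wh_i + Wh_j commutes, so the coefficient is symmetric in i and j
assert np.isclose(attn_coef(0, 1), attn_coef(1, 0))
```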
Methods
- build
- call: Apply the layer on input tensors.
- call(inputs, **kwargs)
Apply the layer on input tensors.
- Parameters:
inputs (list) –
- X (tensor): node feature tensor
- N (int): number of nodes
- targets (tensor): target node index tensor
- sources (tensor): source node index tensor
- degree (tensor): node degree sqrt tensor (for GCN attention)
- Returns:
attention coefficient tensor
- Return type:
attn_coef (tensor)
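A minimal sketch of the edge-wise computation the documented inputs suggest, in NumPy rather than the layer's actual TensorFlow tensors; the edge lists and shapes here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4                                # number of nodes
X = rng.normal(size=(N, 3))          # node feature tensor
targets = np.array([0, 1, 2, 0])     # target node index per edge
sources = np.array([1, 2, 3, 2])     # source node index per edge

W = rng.normal(size=(3, 5))          # shared projection
W_G = rng.normal(size=(5,))          # attention matrix

Wh = X @ W                           # projected features, shape (N, 5)
# gather projected features per edge, then apply W_G tanh(Wh_i + Wh_j)
attn_coef = np.tanh(Wh[targets] + Wh[sources]) @ W_G

assert attn_coef.shape == (4,)       # one coefficient per edge
```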