deephyper.keras.layers.AttentionGenLinear#

class deephyper.keras.layers.AttentionGenLinear(*args: Any, **kwargs: Any)[source]#

Bases: Layer

Generalized Linear Attention.

See https://arxiv.org/abs/1802.00910 for details.

The attention coefficient between nodes \(i\) and \(j\) is calculated as:

\[\mathbf{W}_G \tanh (\mathbf{W}\mathbf{h}_i + \mathbf{W}\mathbf{h}_j)\]

where \(\textbf{W}_G\) is a trainable matrix.

Parameters:
  • state_dim (int) – number of output channels.

  • attn_heads (int) – number of attention heads.
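
The snippet below is a minimal sketch of this score for a single edge \((i, j)\); the variable names and shapes (state_dim, W, W_G, h_i, h_j) are illustrative assumptions, not the layer's actual internals.

import tensorflow as tf

# Toy computation of W_G tanh(W h_i + W h_j) for one edge (i, j).
# state_dim and the shapes of W and W_G are assumed for illustration only.
state_dim = 8
W = tf.Variable(tf.random.normal([state_dim, state_dim]))  # shared projection W
W_G = tf.Variable(tf.random.normal([1, state_dim]))        # trainable matrix W_G
h_i = tf.random.normal([state_dim, 1])                     # feature vector of node i
h_j = tf.random.normal([state_dim, 1])                     # feature vector of node j
# Scalar attention coefficient for the edge (i, j).
score = tf.matmul(W_G, tf.tanh(tf.matmul(W, h_i) + tf.matmul(W, h_j)))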

Methods

  • build

  • call – Apply the layer on input tensors.

__call__(*args: Any, **kwargs: Any) Any#

Call self as a function.

call(inputs, **kwargs)[source]#

Apply the layer on input tensors.

Parameters:

inputs (list) –

  • X (tensor): node feature tensor

  • N (int): number of nodes

  • targets (tensor): target node index tensor

  • sources (tensor): source node index tensor

  • degree (tensor): node degree sqrt tensor (for GCN attention)

Returns:

attention coefficient tensor

Return type:

attn_coef (tensor)
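
A hedged usage sketch follows; only the constructor arguments and the input ordering come from the documentation above, while the toy graph, tensor shapes, and dtypes are assumptions for illustration.

import tensorflow as tf
from deephyper.keras.layers import AttentionGenLinear

# Hypothetical toy graph: 4 nodes with 3 features each and 5 directed edges.
X = tf.random.normal([4, 3])                      # node feature tensor
N = 4                                             # number of nodes
targets = tf.constant([0, 1, 2, 3, 0], tf.int32)  # target node index tensor
sources = tf.constant([1, 2, 3, 0, 2], tf.int32)  # source node index tensor
degree = tf.ones([4, 1])                          # node degree sqrt tensor (shape assumed)

layer = AttentionGenLinear(state_dim=8, attn_heads=2)
attn_coef = layer([X, N, targets, sources, degree])  # attention coefficient tensor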