RNN#

class pytorch_forecasting.models.nn.rnn.RNN(mode: str, input_size: int, hidden_size: int, num_layers: int = 1, bias: bool = True, batch_first: bool = False, dropout: float = 0.0, bidirectional: bool = False, proj_size: int = 0, device=None, dtype=None)[source]#

Bases: ABC, RNNBase

Base class for flexible RNNs.

The forward function can handle sequences of length 0.
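A minimal usage sketch follows. It assumes the concrete LSTM subclass exported alongside this base class in pytorch_forecasting.models.nn; all shapes and values are illustrative only.

    import torch
    from pytorch_forecasting.models.nn import LSTM  # assumed concrete subclass of this base class

    # illustrative shapes: batch of 4 padded sequences, two of them empty
    lstm = LSTM(input_size=8, hidden_size=16, num_layers=2, batch_first=True)
    x = torch.randn(4, 10, 8)              # (batch, time, features), zero-padded
    lengths = torch.tensor([10, 0, 7, 0])  # zero-length sequences are allowed

    output, (h_n, c_n) = lstm(x, lengths=lengths, enforce_sorted=False)
    # entries belonging to length-0 sequences carry the initial (zero-like)
    # hidden state instead of arbitrary values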

Methods

forward(x[, hx, lengths, enforce_sorted])
    Forward function of the RNN that allows zero-length sequences.

handle_no_encoding(hidden_state, ...)
    Mask the hidden_state where there is no encoding.

init_hidden_state(x)
    Initialise a hidden_state.

repeat_interleave(hidden_state, n_samples)
    Duplicate the hidden_state n_samples times.

forward(x: PackedSequence | Tensor, hx: Tuple[Tensor, Tensor] | Tensor | None = None, lengths: LongTensor | None = None, enforce_sorted: bool = True) Tuple[PackedSequence | Tensor, Tuple[Tensor, Tensor] | Tensor][source]#

Forward function of the RNN that allows zero-length sequences.

Behaves like a standard RNN forward pass; the output is only adjusted when lengths is provided.

Parameters:
  • x (Union[rnn.PackedSequence, torch.Tensor]) – input to the RNN: either a packed sequence or a tensor of padded sequences

  • hx (HiddenState, optional) – hidden state. Defaults to None.

  • lengths (torch.LongTensor, optional) – lengths of the sequences. If not None, they are used to determine the correct returned hidden state and may contain zeros. Defaults to None.

  • enforce_sorted (bool, optional) – if lengths are passed, determines whether the RNN expects them to be sorted by decreasing length. Defaults to True.

Returns:

output and hidden state.

The output is a packed sequence if the input was a packed sequence.

Return type:

Tuple[Union[rnn.PackedSequence, torch.Tensor], HiddenState]
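The sketch below illustrates the two accepted input forms, assuming the concrete GRU subclass from the same module; names and values are illustrative only.

    import torch
    from torch.nn.utils import rnn as rnn_utils
    from pytorch_forecasting.models.nn import GRU  # assumed concrete subclass

    gru = GRU(input_size=8, hidden_size=16, batch_first=True)
    x = torch.randn(3, 5, 8)  # (batch, time, features), zero-padded

    # padded tensor plus lengths; a length of 0 is permitted
    output, hidden = gru(x, lengths=torch.tensor([5, 0, 3]), enforce_sorted=False)

    # packed sequence without lengths: behaves like a standard GRU and
    # the returned output is a packed sequence as well
    packed = rnn_utils.pack_padded_sequence(x, lengths=[5, 4, 3], batch_first=True, enforce_sorted=True)
    packed_output, hidden = gru(packed)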

abstract handle_no_encoding(hidden_state: Tuple[Tensor, Tensor] | Tensor, no_encoding: BoolTensor, initial_hidden_state: Tuple[Tensor, Tensor] | Tensor) Tuple[Tensor, Tensor] | Tensor[source]#

Mask the hidden_state where there is no encoding.

Parameters:
  • hidden_state (HiddenState) – hidden state where some entries need replacement

  • no_encoding (torch.BoolTensor) – positions that need replacement

  • initial_hidden_state (HiddenState) – hidden state to use for replacement

Returns:

hidden state with the initial hidden state propagated where appropriate

Return type:

HiddenState
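A possible implementation for an LSTM-style (h, c) hidden state is sketched below. It is an illustration rather than the library's code and assumes no_encoding broadcasts against hidden tensors of shape (num_layers * num_directions, batch, hidden_size).

    import torch
    from typing import Tuple
    from torch import Tensor

    def handle_no_encoding(
        hidden_state: Tuple[Tensor, Tensor],
        no_encoding: Tensor,
        initial_hidden_state: Tuple[Tensor, Tensor],
    ) -> Tuple[Tensor, Tensor]:
        # replace entries flagged by no_encoding with the initial hidden state;
        # no_encoding is assumed to broadcast, e.g. shape (1, batch, 1)
        h, c = hidden_state
        h0, c0 = initial_hidden_state
        return torch.where(no_encoding, h0, h), torch.where(no_encoding, c0, c)

The method is abstract so that subclasses with a single tensor as hidden state (e.g. a GRU) can apply the same masking to that tensor alone.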

abstract init_hidden_state(x: Tensor) Tuple[Tensor, Tensor] | Tensor[source]#

Initialise a hidden_state.

Parameters:

x (torch.Tensor) – network input

Returns:

default (zero-like) hidden state

Return type:

HiddenState
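A possible LSTM-style implementation is sketched below; it assumes batch_first=True input of shape (batch, time, features), a unidirectional network, and proj_size == 0.

    import torch

    def init_hidden_state(self, x: torch.Tensor):
        # zero-like (h_0, c_0) matching the network configuration
        batch_size = x.size(0)
        hidden = torch.zeros(
            self.num_layers, batch_size, self.hidden_size,
            device=x.device, dtype=x.dtype,
        )
        return hidden, hidden.clone()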

abstract repeat_interleave(hidden_state: Tuple[Tensor, Tensor] | Tensor, n_samples: int) Tuple[Tensor, Tensor] | Tensor[source]#

Duplicate the hidden_state n_samples times.

Parameters:
  • hidden_state (HiddenState) – hidden state to repeat

  • n_samples (int) – number of repetitions

Returns:

repeated hidden state

Return type:

HiddenState
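A possible implementation for an LSTM-style hidden state is sketched below; it assumes the batch dimension of the hidden tensors is dimension 1, i.e. shape (num_layers * num_directions, batch, hidden_size).

    from typing import Tuple
    from torch import Tensor

    def repeat_interleave(
        hidden_state: Tuple[Tensor, Tensor], n_samples: int
    ) -> Tuple[Tensor, Tensor]:
        # repeat each batch entry n_samples times along the batch dimension (dim 1)
        h, c = hidden_state
        return (
            h.repeat_interleave(n_samples, dim=1),
            c.repeat_interleave(n_samples, dim=1),
        )

This is typically used to expand an encoder hidden state when drawing several samples per sequence during decoding.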