DecoderMLP

class pytorch_forecasting.models.mlp.DecoderMLP(activation_class: str = 'ReLU', hidden_size: int = 300, n_hidden_layers: int = 3, dropout: float = 0.1, norm: bool = True, static_categoricals: List[str] = [], static_reals: List[str] = [], time_varying_categoricals_encoder: List[str] = [], time_varying_categoricals_decoder: List[str] = [], categorical_groups: Dict[str, List[str]] = {}, time_varying_reals_encoder: List[str] = [], time_varying_reals_decoder: List[str] = [], embedding_sizes: Dict[str, Tuple[int, int]] = {}, embedding_paddings: List[str] = [], embedding_labels: Dict[str, ndarray] = {}, x_reals: List[str] = [], x_categoricals: List[str] = [], output_size: int | List[int] = 1, target: str | List[str] | None = None, loss: MultiHorizonMetric | None = None, logging_metrics: ModuleList | None = None, **kwargs)

Bases: BaseModelWithCovariates

MLP on the decoder.

MLP that predicts the output based only on information available in the decoder (i.e. known future covariates).

Parameters:
  • activation_class (str, optional) – PyTorch activation class. Defaults to “ReLU”.

  • hidden_size (int, optional) – hidden size of the MLP, the most important hyperparameter along with n_hidden_layers. Defaults to 300.

  • n_hidden_layers (int, optional) – Number of hidden layers, an important hyperparameter. Defaults to 3.

  • dropout (float, optional) – Dropout rate. Defaults to 0.1.

  • norm (bool, optional) – whether to use normalization in the MLP. Defaults to True.

  • static_categoricals – names of static categorical variables

  • static_reals – names of static continuous variables

  • time_varying_categoricals_encoder – names of categorical variables for the encoder

  • time_varying_categoricals_decoder – names of categorical variables for the decoder

  • time_varying_reals_encoder – names of continuous variables for the encoder

  • time_varying_reals_decoder – names of continuous variables for the decoder

  • categorical_groups – dictionary whose values are lists of categorical variables that together form a new categorical variable, given by the dictionary key

  • x_reals – order of continuous variables in tensor passed to forward function

  • x_categoricals – order of categorical variables in tensor passed to forward function

  • embedding_sizes – dictionary mapping categorical variable names to tuples of (number of categorical classes, embedding size)

  • embedding_paddings – list of categorical variables whose embedding at index zero (the padding class) is set to a zero vector

  • embedding_labels – dictionary mapping categorical variable names to arrays of categorical labels

  • output_size (Union[int, List[int]], optional) – number of outputs (e.g. the number of quantiles for QuantileLoss with a single target), or a list of output sizes for multiple targets. Defaults to 1.

  • target (Union[str, List[str]], optional) – Target variable or list of target variables. Defaults to None.

  • loss (MultiHorizonMetric, optional) – loss function taking prediction and targets. Defaults to QuantileLoss.

  • logging_metrics (nn.ModuleList, optional) – Metrics to log during training. Defaults to nn.ModuleList([SMAPE(), MAE(), RMSE(), MAPE(), MASE()]).
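The covariate-related arguments above (static_categoricals, embedding_sizes, x_reals, etc.) are normally not set by hand; they are filled in automatically when the model is created with from_dataset() (see below), so only the network hyperparameters need to be specified. A minimal sketch, assuming a pre-built TimeSeriesDataSet named training with known future covariates:

    # Sketch only: `training` is an assumed, pre-built TimeSeriesDataSet.
    from pytorch_forecasting.models.mlp import DecoderMLP
    from pytorch_forecasting.metrics import QuantileLoss

    model = DecoderMLP.from_dataset(
        training,
        activation_class="ReLU",
        hidden_size=300,
        n_hidden_layers=3,
        dropout=0.1,
        loss=QuantileLoss(),  # quantile forecasts; output_size then corresponds to the number of quantiles
    )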

Methods

  • forward(x[, n_samples]) – Forward pass of the network.

  • from_dataset(dataset, **kwargs) – Create model from dataset and set parameters related to covariates.

forward(x: Dict[str, Tensor], n_samples: int | None = None) → Dict[str, Tensor]

Forward pass of the network.
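The input dictionary x is a batch as produced by a TimeSeriesDataSet dataloader (containing decoder covariate tensors, lengths, target scale, etc.). A hedged sketch of a single forward pass, reusing the model and training objects assumed in the example above:

    # Sketch only: `model` and `training` are the assumed objects from above.
    dataloader = training.to_dataloader(train=False, batch_size=64)
    x, y = next(iter(dataloader))   # x is the input dict, y the (target, weight) tuple
    out = model(x)                  # network output
    print(out["prediction"].shape)  # typically (batch_size, decoder_length, output_size)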

classmethod from_dataset(dataset: TimeSeriesDataSet, **kwargs)

Create model from dataset and set parameters related to covariates.

Parameters:
  • dataset – timeseries dataset

  • allowed_encoder_known_variable_names – List of known variables that are allowed in encoder, defaults to all

  • **kwargs – additional arguments such as hyperparameters for model (see __init__())

Returns:

LightningModule
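
Since the returned model is a LightningModule, it can be trained with a standard Lightning Trainer. A sketch under the assumption that training and a matching validation TimeSeriesDataSet exist (older installations may need import pytorch_lightning as pl instead):

    # Sketch only: `model`, `training` and `validation` are assumed objects.
    import lightning.pytorch as pl

    trainer = pl.Trainer(max_epochs=10, gradient_clip_val=0.1)
    trainer.fit(
        model,
        train_dataloaders=training.to_dataloader(train=True, batch_size=64),
        val_dataloaders=validation.to_dataloader(train=False, batch_size=64),
    )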