AutoRegressiveBaseModelWithCovariates

class pytorch_forecasting.models.base_model.AutoRegressiveBaseModelWithCovariates(dataset_parameters: Dict[str, Any] | None = None, log_interval: int | float = -1, log_val_interval: float | int | None = None, learning_rate: float | List[float] = 0.001, log_gradient_flow: bool = False, loss: Metric = SMAPE(), logging_metrics: ModuleList = ModuleList(), reduce_on_plateau_patience: int = 1000, reduce_on_plateau_reduction: float = 2.0, reduce_on_plateau_min_lr: float = 1e-05, weight_decay: float = 0.0, optimizer_params: Dict[str, Any] | None = None, monotone_constaints: Dict[str, int] = {}, output_transformer: Callable | None = None, optimizer='Ranger')

Bases: BaseModelWithCovariates, AutoRegressiveBaseModel

Model with additional methods for autoregressive models with covariates.

Assumes the following hyperparameters:

Parameters:
  • target (str) – name of target variable

  • target_lags (Dict[str, Dict[str, int]]) – dictionary mapping each target name to a dictionary of corresponding lagged variables and their lags. Lags can be useful to indicate seasonality to the models: if you know the seasonality (or seasonalities) of your data, add at least the target variable with the corresponding lags to improve performance. Defaults to no lags, i.e. an empty dictionary. See the sketch after this list for how lags are typically specified on the dataset.

  • static_categoricals (List[str]) – names of static categorical variables

  • static_reals (List[str]) – names of static continuous variables

  • time_varying_categoricals_encoder (List[str]) – names of categorical variables for encoder

  • time_varying_categoricals_decoder (List[str]) – names of categorical variables for decoder

  • time_varying_reals_encoder (List[str]) – names of continuous variables for encoder

  • time_varying_reals_decoder (List[str]) – names of continuous variables for decoder

  • x_reals (List[str]) – order of continuous variables in tensor passed to forward function

  • x_categoricals (List[str]) – order of categorical variables in tensor passed to forward function

  • embedding_sizes (Dict[str, Tuple[int, int]]) – dictionary mapping categorical variables to tuple of integers where the first integer denotes the number of categorical classes and the second the embedding size

  • embedding_labels (Dict[str, List[str]]) – dictionary mapping (string) indices to list of categorical labels

  • embedding_paddings (List[str]) – names of categorical variables for which label 0 is always mapped to an embedding vector filled with zeros

  • categorical_groups (Dict[str, List[str]]) – dictionary of categorical variables that are grouped together and can also take multiple values simultaneously (e.g. holiday during Oktoberfest). They should be implemented as bag-of-embeddings.
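
These hyperparameters are usually not set by hand: they are inferred by the from_dataset() classmethod from a TimeSeriesDataSet. A minimal sketch, with made-up column names ("series", "month", "volume") chosen purely for illustration:

    import numpy as np
    import pandas as pd

    from pytorch_forecasting import TimeSeriesDataSet

    # toy panel: two series, one known categorical covariate, one target
    data = pd.DataFrame(
        {
            "time_idx": np.tile(np.arange(100), 2),
            "series": np.repeat(["a", "b"], 100),
            "month": np.tile(((np.arange(100) // 30) % 12).astype(str), 2),
            "volume": np.random.default_rng(0).random(200),
        }
    )

    dataset = TimeSeriesDataSet(
        data,
        time_idx="time_idx",
        target="volume",                           # -> hparam target
        group_ids=["series"],
        max_encoder_length=24,
        max_prediction_length=6,
        static_categoricals=["series"],            # -> hparam static_categoricals
        time_varying_known_categoricals=["month"],
        time_varying_unknown_reals=["volume"],
        lags={"volume": [12, 24]},                 # -> hparam target_lags
    )

Calling SomeModel.from_dataset(dataset, ...) on a concrete subclass (SomeModel stands in for any model inheriting from this class) then fills in the remaining hyperparameters, such as embedding_sizes, x_reals and x_categoricals.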

BaseModel for time series forecasting from which to inherit.

Parameters:
  • log_interval (Union[int, float], optional) – Batches after which predictions are logged. If < 1.0, will log multiple entries per batch. Defaults to -1.

  • log_val_interval (Union[int, float], optional) – batches after which predictions for validation are logged. Defaults to None/log_interval.

  • learning_rate (float, optional) – Learning rate. Defaults to 1e-3.

  • log_gradient_flow (bool) – Whether to log gradient flow; this takes time and should only be done to diagnose training failures. Defaults to False.

  • loss (Metric, optional) – metric to optimize, can also be list of metrics. Defaults to SMAPE().

  • logging_metrics (nn.ModuleList[MultiHorizonMetric]) – list of metrics that are logged during training. Defaults to [].

  • reduce_on_plateau_patience (int) – patience after which the learning rate is reduced by a factor of reduce_on_plateau_reduction. Defaults to 1000.

  • reduce_on_plateau_reduction (float) – factor by which the learning rate is reduced when a plateau is encountered. Defaults to 2.0.

  • reduce_on_plateau_min_lr (float) – minimum learning rate for the reduce-on-plateau learning rate scheduler. Defaults to 1e-5.

  • weight_decay (float) – weight decay. Defaults to 0.0.

  • optimizer_params (Dict[str, Any]) – additional parameters for the optimizer. Defaults to {}.

  • monotone_constaints (Dict[str, int]) – dictionary of monotonicity constraints for continuous decoder variables mapping position (e.g. "0" for first position) to constraint (-1 for negative and +1 for positive, larger numbers add more weight to the constraint vs. the loss but are usually not necessary). This constraint significantly slows down training. Defaults to {}.

  • output_transformer (Callable) – transformer that takes network output and transforms it to prediction space. Defaults to None which is equivalent to lambda out: out["prediction"].

  • optimizer (str) – Optimizer, “ranger”, “sgd”, “adam”, “adamw” or class name of an optimizer in torch.optim or pytorch_optimizer. Alternatively, a class or function can be passed which takes parameters as its first argument and a lr argument (optionally also weight_decay). Defaults to “ranger”. A usage sketch covering several of these parameters follows this list.
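
A usage sketch for these parameters; DeepAR is used below because it is a concrete subclass of AutoRegressiveBaseModelWithCovariates, and the chosen values are illustrative rather than recommendations. dataset is the TimeSeriesDataSet built in the sketch further above:

    from pytorch_forecasting import DeepAR
    from pytorch_forecasting.metrics import NormalDistributionLoss

    model = DeepAR.from_dataset(
        dataset,
        learning_rate=1e-2,
        loss=NormalDistributionLoss(),  # DeepAR expects a distribution loss
        optimizer="adam",               # instead of the default "ranger"
        reduce_on_plateau_patience=4,   # reduce LR far earlier than the default 1000
        weight_decay=1e-3,
    )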

property lagged_target_positions: Dict[int, LongTensor]

Positions of lagged target variable(s) in covariates.

Returns:

dictionary mapping integer lags to tensor of variable positions.

Return type:

Dict[int, torch.LongTensor]
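
A minimal sketch, not the library’s actual decoding code, of how an autoregressive subclass might consume this property during decoding; the helper name insert_lagged_targets and the tensor shapes are assumptions made for illustration:

    from typing import Dict, List

    import torch

    def insert_lagged_targets(
        x_reals: torch.Tensor,            # (batch, decoder_length, n_reals)
        predictions: List[torch.Tensor],  # decoded targets so far, each (batch,)
        lagged_target_positions: Dict[int, torch.LongTensor],
        decoder_step: int,
    ) -> torch.Tensor:
        # hypothetical helper: for every known lag, copy the prediction made
        # lag steps ago into the covariate slots reserved for that lag; a
        # lagged value only exists once at least lag steps have been decoded
        for lag, positions in lagged_target_positions.items():
            if decoder_step >= lag:
                x_reals[:, decoder_step, positions] = (
                    predictions[decoder_step - lag].unsqueeze(-1)
                )
        return x_reals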