Baseline#

class pytorch_forecasting.models.baseline.Baseline(dataset_parameters: Dict[str, Any] = None, log_interval: int | float = -1, log_val_interval: int | float = None, learning_rate: float | List[float] = 0.001, log_gradient_flow: bool = False, loss: Metric = SMAPE(), logging_metrics: ModuleList = ModuleList(), reduce_on_plateau_patience: int = 1000, reduce_on_plateau_reduction: float = 2.0, reduce_on_plateau_min_lr: float = 1e-05, weight_decay: float = 0.0, optimizer_params: Dict[str, Any] = None, monotone_constaints: Dict[str, int] = None, output_transformer: Callable = None, optimizer=None)[source]#

Bases: BaseModel

Baseline model that uses the last known target value to make a prediction.

Example:

from pytorch_forecasting import Baseline, MAE

# generating predictions
predictions = Baseline().predict(dataloader)

# calculate baseline performance in terms of mean absolute error (MAE)
metric = MAE()
model = Baseline()
for x, y in dataloader:
    metric.update(model(x), y)

metric.compute()

BaseModel for timeseries forecasting from which to inherit.

Parameters:
  • log_interval (Union[int, float], optional) – Batches after which predictions are logged. If < 1.0, will log multiple entries per batch. Defaults to -1.

  • log_val_interval (Union[int, float], optional) – batches after which predictions for validation are logged. Defaults to None/log_interval.

  • learning_rate (float, optional) – Learning rate. Defaults to 1e-3.

  • log_gradient_flow (bool) – Whether to log gradient flow; this takes time and should only be done to diagnose training failures. Defaults to False.

  • loss (Metric, optional) – metric to optimize; can also be a list of metrics. Defaults to SMAPE().

  • logging_metrics (nn.ModuleList[MultiHorizonMetric]) – list of metrics that are logged during training. Defaults to [].

  • reduce_on_plateau_patience (int) – patience after which the learning rate is reduced by reduce_on_plateau_reduction. Defaults to 1000.

  • reduce_on_plateau_reduction (float) – reduction in learning rate when encountering plateau. Defaults to 2.0.

  • reduce_on_plateau_min_lr (float) – minimum learning rate for the reduce-on-plateau learning rate scheduler. Defaults to 1e-5.

  • weight_decay (float) – weight decay. Defaults to 0.0.

  • optimizer_params (Dict[str, Any]) – additional parameters for the optimizer. Defaults to {}.

  • monotone_constaints (Dict[str, int]) – dictionary of monotonicity constraints for continuous decoder variables mapping position (e.g. "0" for first position) to constraint (-1 for negative and +1 for positive, larger numbers add more weight to the constraint vs. the loss but are usually not necessary). This constraint significantly slows down training. Defaults to {}.

  • output_transformer (Callable) – transformer that takes network output and transforms it to prediction space. Defaults to None which is equivalent to lambda out: out["prediction"].

  • optimizer (str) – Optimizer, “ranger”, “sgd”, “adam”, “adamw” or class name of optimizer in torch.optim or pytorch_optimizer. Alternatively, a class or function can be passed which takes parameters as first argument and a lr argument (optionally also weight_decay). Defaults to “ranger”, if pytorch_optimizer is installed, otherwise “adam”.
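
A hedged usage sketch, assuming a dataloader created from a TimeSeriesDataSet via to_dataloader() as in the example above: the constructor arguments can be overridden at instantiation, for example to score the naive last-value forecast with MAE instead of the default SMAPE.

from pytorch_forecasting import Baseline
from pytorch_forecasting.metrics import MAE

# use MAE as the loss/evaluation metric instead of the default SMAPE
model = Baseline(loss=MAE())

# run the naive last-value forecast over the dataloader
predictions = model.predict(dataloader)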

Methods

forward(x)
    Network forward pass.

forward_one_target(encoder_lengths, ...)

to_prediction(out[, use_metric])
    Convert output to prediction using the loss metric.

to_quantiles(out[, use_metric])
    Convert output to quantiles using the loss metric.

forward(x: Dict[str, Tensor]) Dict[str, Tensor][source]#

Network forward pass.

Parameters:

x (Dict[str, torch.Tensor]) – network input

Returns:

network outputs

Return type:

Dict[str, torch.Tensor]
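
A minimal sketch, assuming x is a single batch drawn from a dataloader built with TimeSeriesDataSet.to_dataloader():

model = Baseline()
x, y = next(iter(dataloader))

# the baseline repeats the last observed encoder target over the prediction horizon
out = model(x)
prediction = out["prediction"]  # for a single target: batch_size x decoder timesteps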

to_prediction(out: Dict[str, Any], use_metric: bool = True, **kwargs)[source]#

Convert output to prediction using the loss metric.

Parameters:
  • out (Dict[str, Any]) – output of network where “prediction” has been transformed with transform_output()

  • use_metric (bool) – whether to use the metric for the conversion; if False, simply take the average over out["prediction"]

  • **kwargs – arguments passed to the metric's to_prediction method

Returns:

predictions of shape batch_size x timesteps

Return type:

torch.Tensor
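
A short sketch of the conversion, reusing model and x from the forward() example above:

out = model(x)

# collapse the network output to a point forecast via the loss metric
point_forecast = model.to_prediction(out)  # shape: batch_size x timesteps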

to_quantiles(out: Dict[str, Any], use_metric: bool = True, **kwargs)[source]#

Convert output to quantiles using the loss metric.

Parameters:
  • out (Dict[str, Any]) – output of network where “prediction” has been transformed with transform_output()

  • use_metric (bool) – whether to use the metric for the conversion; if False, simply take the quantiles over out["prediction"]

  • **kwargs – arguments passed to the metric's to_quantiles method

Returns:

quantiles of shape batch_size x timesteps x n_quantiles

Return type:

torch.Tensor
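
The quantile counterpart, a sketch under the same assumptions:

out = model(x)

# derive quantile forecasts from the network output via the loss metric
quantile_forecast = model.to_quantiles(out)  # shape: batch_size x timesteps x n_quantiles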