Metrics

Multiple metrics have been implemented to make it easy to adapt models to different objectives.

In particular, these metrics can be applied to the multi-horizon forecasting problem, i.e. they accept tensors not only of shape n_samples but also n_samples x prediction_horizon, or even n_samples x prediction_horizon x n_outputs, where n_outputs could be the number of forecasted quantiles.
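To illustrate the shapes involved, the same point-wise metric applies unchanged whether predictions are per-sample, per-horizon, or per-quantile. A plain-PyTorch sketch of MAE over each shape (illustrative only, not the library's implementation):

```python
import torch

def mae(y_pred: torch.Tensor, y_actual: torch.Tensor) -> torch.Tensor:
    """Mean absolute error, averaged over every element of the input."""
    return (y_pred - y_actual).abs().mean()

n_samples, horizon, n_outputs = 32, 6, 7  # e.g. 7 forecasted quantiles

for shape in [(n_samples,), (n_samples, horizon), (n_samples, horizon, n_outputs)]:
    y_pred, y_actual = torch.rand(shape), torch.rand(shape)
    loss = mae(y_pred, y_actual)  # always reduces to a scalar
```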

Metrics can be easily combined by addition, e.g.

from pytorch_forecasting.metrics import SMAPE, MAE

composite_metric = SMAPE() + 1e-4 * MAE()

Such composite metrics are useful for training because they can limit outliers with respect to a secondary metric. In this example, SMAPE is mostly optimized, while large outliers in MAE are avoided.
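Numerically, the composite above is just a weighted sum of the two losses. A plain-torch sketch (illustrative implementations of the two metrics, not the library's):

```python
import torch

def mae(y_pred, y_actual):
    return (y_pred - y_actual).abs().mean()

def smape(y_pred, y_actual):
    # small epsilon guards against division by zero
    return (2 * (y_pred - y_actual).abs()
            / (y_pred.abs() + y_actual.abs() + 1e-8)).mean()

y_pred, y_actual = torch.rand(32, 6), torch.rand(32, 6)

# SMAPE dominates the objective; the 1e-4 weight keeps MAE's
# contribution tiny unless absolute errors become very large
total = smape(y_pred, y_actual) + 1e-4 * mae(y_pred, y_actual)
```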

Further, one can modify a loss metric to reduce a mean prediction bias, i.e. ensure that aggregated predictions match aggregated actuals. For example:

from pytorch_forecasting.metrics import MAE, AggregationMetric

composite_metric = MAE() + AggregationMetric(metric=MAE())

Here we add an additional loss to the MAE: the MAE calculated on the mean predictions and mean actuals. Other metrics such as SMAPE can also be used to ensure that aggregated results are unbiased in that metric. One important point to keep in mind is that this metric is calculated across samples, i.e. it will vary with the batch size. In particular, errors tend to average out as the batch size increases.
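What the aggregation term computes can be sketched as follows: the base metric is evaluated on per-timestep means taken across the batch. A plain-torch illustration (not the library's implementation) that also shows the batch-averaging effect:

```python
import torch

def mae(y_pred, y_actual):
    return (y_pred - y_actual).abs().mean()

def aggregation_mae(y_pred, y_actual):
    # average across the sample dimension first, then compare the
    # aggregated curves -> penalizes a systematic prediction bias
    return mae(y_pred.mean(dim=0), y_actual.mean(dim=0))

torch.manual_seed(0)
y_actual = torch.rand(256, 6)                  # n_samples x prediction_horizon
y_pred = y_actual + 0.1 * torch.randn(256, 6)  # unbiased, zero-mean noise

# per-element errors stay large, but the aggregated error shrinks
# because zero-mean noise averages out across the batch
elementwise = mae(y_pred, y_actual)
aggregated = aggregation_mae(y_pred, y_actual)
```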

Details

See the API documentation for further details on available metrics:

pytorch_forecasting.metrics.AggregationMetric(...)

Calculate metric on mean prediction and actuals.

pytorch_forecasting.metrics.BetaDistributionLoss([...])

Beta distribution loss for unit interval data.

pytorch_forecasting.metrics.CompositeMetric([...])

Metric that combines multiple metrics.

pytorch_forecasting.metrics.CrossEntropy([...])

Cross entropy loss for classification.

pytorch_forecasting.metrics.DistributionLoss([...])

DistributionLoss base class.

pytorch_forecasting.metrics.LogNormalDistributionLoss([...])

Log-normal loss.

pytorch_forecasting.metrics.MAE([reduction])

Mean absolute error.

pytorch_forecasting.metrics.MAPE([reduction])

Mean absolute percentage error.

pytorch_forecasting.metrics.MASE([reduction])

Mean absolute scaled error.

pytorch_forecasting.metrics.Metric([name, ...])

Base metric class that has basic functions that can handle predicting quantiles and operate in log space.

pytorch_forecasting.metrics.MultiHorizonMetric([...])

Abstract class for defining a metric for multi-horizon forecasts.

pytorch_forecasting.metrics.MultiLoss(metrics)

Metric that can be used with multiple metrics.

pytorch_forecasting.metrics.NegativeBinomialDistributionLoss([...])

Negative binomial loss, e.g. for count data.

pytorch_forecasting.metrics.NormalDistributionLoss([...])

Normal distribution loss.

pytorch_forecasting.metrics.PoissonLoss([...])

Poisson loss for count data.

pytorch_forecasting.metrics.QuantileLoss([...])

Quantile loss, i.e. a quantile of q=0.5 will give half of the mean absolute error.

pytorch_forecasting.metrics.RMSE([reduction])

Root mean square error.

pytorch_forecasting.metrics.SMAPE([reduction])

Symmetric mean absolute percentage error.

pytorch_forecasting.utils.create_mask(size, ...)

Create boolean masks of shape len(lengths) x size.

pytorch_forecasting.utils.unpack_sequence(...)

Unpack RNN sequence.

pytorch_forecasting.utils.unsqueeze_like(...)

Unsqueeze last dimensions of tensor to match another tensor's number of dimensions.