Metrics¶
Multiple metrics have been implemented to ease adaptation. In particular, these metrics can be applied to the multi-horizon forecasting problem, i.e. they can take tensors that are not only of shape n_samples but also n_samples x prediction_horizon or even n_samples x prediction_horizon x n_outputs, where n_outputs could be the number of forecasted quantiles.
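To make the shape handling concrete, here is a minimal sketch (plain Python, not the library implementation) of how a metric such as MAE reduces a 2-D input of shape n_samples x prediction_horizon to a single scalar; the mae helper is hypothetical:

```python
# Sketch (not the library implementation): how a metric such as MAE
# reduces a multi-horizon prediction tensor to a single scalar.
# Shapes mirror the text: y_pred and y_true are n_samples x prediction_horizon.

def mae(y_pred, y_true):
    """Mean absolute error over all samples and horizons."""
    errors = [
        abs(p - t)
        for pred_row, true_row in zip(y_pred, y_true)
        for p, t in zip(pred_row, true_row)
    ]
    return sum(errors) / len(errors)

# n_samples = 2, prediction_horizon = 3
y_true = [[1.0, 2.0, 3.0], [2.0, 2.0, 2.0]]
y_pred = [[1.5, 2.0, 2.0], [2.0, 3.0, 2.0]]

print(mae(y_pred, y_true))  # (0.5 + 0 + 1 + 0 + 1 + 0) / 6 ≈ 0.4167
```

An extra trailing dimension of size n_outputs (e.g. quantiles) would simply add one more level of iteration before the same reduction.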
Metrics can be easily combined by addition, e.g.
from pytorch_forecasting.metrics import SMAPE, MAE
composite_metric = SMAPE() + 1e4 * MAE()
Such composite metrics are useful in training because they allow optimizing one metric while keeping another in check. In the example, SMAPE is mostly optimized, while large outliers in MAE are avoided.
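The role of the weight can be sketched with hand-rolled helpers (assumed, not the library classes; SMAPE is computed here as the average of 2|y - ŷ| / (|y| + |ŷ|)): SMAPE is bounded regardless of the data scale, while MAE is on the scale of the data, so the weight balances how much each term contributes to the combined loss:

```python
# Sketch of composite weighting: SMAPE is bounded (at most 2), while MAE
# grows with the data scale, so a scalar weight rebalances their
# contributions when the two are summed into one training loss.

def mae(y_pred, y_true):
    return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

def smape(y_pred, y_true):
    return sum(
        2 * abs(p - t) / (abs(p) + abs(t)) for p, t in zip(y_pred, y_true)
    ) / len(y_true)

y_true = [100.0, 200.0, 300.0]
y_pred = [110.0, 190.0, 330.0]

loss_smape = smape(y_pred, y_true)       # small, scale-free
loss_mae = mae(y_pred, y_true)           # on the scale of the data
composite = loss_smape + 1e4 * loss_mae  # mirrors SMAPE() + 1e4 * MAE()
```

The appropriate weight depends on the scale of the target data; 1e4 in the documentation example is illustrative, not universal.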
Further, one can modify a loss metric to reduce mean prediction bias, i.e. to ensure that predictions and actuals match on average. For example:
from pytorch_forecasting.metrics import MAE, AggregationMetric
composite_metric = MAE() + AggregationMetric(metric=MAE())
Here, we add an additional loss to MAE: the MAE calculated between the mean of the predictions and the mean of the actuals. Other metrics such as SMAPE can be used instead to ensure that aggregated results are unbiased in that metric. One important point to keep in mind is that this metric is calculated across samples, i.e. it will vary with the batch size. In particular, errors tend to average out as the batch size increases.
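The effect of the aggregation term can be sketched in plain Python (a hypothetical hand-rolled analogue, not the library's AggregationMetric): when positive and negative errors cancel, the per-sample MAE stays large while the error on the batch means is near zero, so the extra term specifically penalizes mean bias:

```python
# Sketch of an aggregation penalty: compare per-sample MAE with the
# error on the batch means. Opposite-signed errors cancel in the mean,
# so the aggregate term can be zero even when per-sample MAE is not.

def mae(y_pred, y_true):
    return sum(abs(p - t) for p, t in zip(y_pred, y_true)) / len(y_true)

y_true = [10.0, 10.0, 10.0, 10.0]
y_pred = [12.0, 8.0, 13.0, 7.0]  # unbiased: errors cancel on average

per_sample = mae(y_pred, y_true)        # 2.5: individual errors remain
mean_pred = sum(y_pred) / len(y_pred)   # 10.0
mean_true = sum(y_true) / len(y_true)   # 10.0
aggregate = abs(mean_pred - mean_true)  # 0.0: no mean bias to penalize

total = per_sample + aggregate  # analogue of MAE() + AggregationMetric(MAE())
```

Because the means are taken over the batch, a larger batch gives more opportunity for errors to cancel, which is why the aggregation term depends on batch size.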
Details¶
See the API documentation for further details on available metrics:
AggregationMetric: Calculate metric on mean prediction and actuals.

BetaDistributionLoss: Beta distribution loss for unit interval data.

CompositeMetric: Metric that combines multiple metrics.

CrossEntropy: Cross entropy loss for classification.

DistributionLoss: DistributionLoss base class.

LogNormalDistributionLoss: Lognormal loss.

MAE: Mean absolute error.

MAPE: Mean absolute percentage error.

MASE: Mean absolute scaled error.

Metric: Base metric class with basic functions that can handle predicting quantiles and operate in log space.

MultiHorizonMetric: Abstract class for defining a metric for a multi-horizon forecast.

MultiLoss: Metric that can be used with multiple metrics.

NegativeBinomialDistributionLoss: Negative binomial loss.

NormalDistributionLoss: Normal distribution loss.

PoissonLoss: Poisson loss for count data.

QuantileLoss: Quantile loss.

RMSE: Root mean square error.

SMAPE: Symmetric mean absolute percentage error.

create_mask: Create boolean masks of shape len(lengths) x size.

unpack_sequence: Unpack RNN sequence.

unsqueeze_like: Unsqueeze last dimensions of tensor to match another tensor's number of dimensions.