BetaDistributionLoss#
- class pytorch_forecasting.metrics.distributions.BetaDistributionLoss(name: str = None, quantiles: List[float] = [0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98], reduction='mean')[source]#
Bases:
DistributionLoss
Beta distribution loss for unit interval data.
- Requirements for original target normalizer:
logit transformation
Initialize metric
- Parameters:
name (str) – metric name. Defaults to class name.
quantiles (List[float], optional) – quantiles for probability range. Defaults to [0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98].
reduction (str, optional) – Reduction, “none”, “mean” or “sqrt-mean”. Defaults to “mean”.
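A minimal usage sketch, pairing the loss with a logit-transformed target normalizer as required above. The choice of TorchNormalizer and the commented-out dataset/model wiring are illustrative assumptions, not part of this class.

```python
# Sketch: BetaDistributionLoss together with a logit-transformed target normalizer,
# as required for unit-interval targets. Dataset/model wiring is only indicated.
from pytorch_forecasting.data import TorchNormalizer
from pytorch_forecasting.metrics import BetaDistributionLoss

loss = BetaDistributionLoss()  # defaults: quantiles [0.02, ..., 0.98], reduction="mean"

# target values must lie in (0, 1) and be normalized with a logit transformation
normalizer = TorchNormalizer(transformation="logit")

# dataset = TimeSeriesDataSet(..., target_normalizer=normalizer)
# model = DeepAR.from_dataset(dataset, loss=loss)
```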
Methods
loss(y_pred, y_actual): Calculate the negative likelihood.
map_x_to_distribution(x): Map a tensor of parameters to a probability distribution.
rescale_parameters(parameters, target_scale, ...): Rescale normalized parameters into the scale required for the output.
- distribution_class#
alias of Beta
- loss(y_pred: Tensor, y_actual: Tensor) → Tensor [source]#
Calculate the negative likelihood.
- Parameters:
y_pred – network output
y_actual – actual values
- Returns:
metric value on which backpropagation can be applied
- Return type:
torch.Tensor
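A hedged sketch of calling loss directly with already-rescaled distribution parameters. The assumption that the last dimension holds the two Beta parameters in (mean, shape) order is for illustration only; check the class source for the exact convention.

```python
# Sketch: element-wise negative log likelihood for unit-interval targets.
# Assumption: the last dimension of y_pred indexes the two Beta parameters
# in the order expected by map_x_to_distribution (taken here to be mean, shape).
import torch
from pytorch_forecasting.metrics import BetaDistributionLoss

loss_fn = BetaDistributionLoss()

mean = torch.full((4, 10), 0.3)    # predicted mean, inside (0, 1)
shape = torch.full((4, 10), 5.0)   # predicted shape/concentration, positive
y_pred = torch.stack([mean, shape], dim=-1)          # (batch=4, timesteps=10, 2)
y_actual = torch.rand(4, 10).clamp(1e-4, 1 - 1e-4)   # observations inside (0, 1)

nll = loss_fn.loss(y_pred, y_actual)  # unreduced, one value per prediction
print(nll.shape)                      # torch.Size([4, 10])
```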
- map_x_to_distribution(x: Tensor) → Beta [source]#
Map a tensor of parameters to a probability distribution.
- Parameters:
x (torch.Tensor) – parameters for probability distribution. Last dimension will index the parameters
- Returns:
- torch probability distribution as defined in the class attribute distribution_class
- Return type:
distributions.Distribution
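The sketch below assumes the same two-parameter (mean, shape) convention as above; consult the source for the exact parameter ordering.

```python
# Sketch: turning a parameter tensor into a torch.distributions.Beta object.
# Parameter ordering (mean first, shape second) is an assumption for illustration.
import torch
from pytorch_forecasting.metrics import BetaDistributionLoss

loss_fn = BetaDistributionLoss()
x = torch.tensor([[0.7, 10.0]])          # one series: mean 0.7, shape 10
dist = loss_fn.map_x_to_distribution(x)  # instance of distribution_class (Beta)

print(dist.mean)                     # close to 0.7 under the assumed parameterization
samples = dist.sample((100,))
print(samples.min(), samples.max())  # all draws lie in the unit interval
```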
- rescale_parameters(parameters: Tensor, target_scale: Tensor, encoder: BaseEstimator) → Tensor [source]#
Rescale normalized parameters into the scale required for the output.
- Parameters:
parameters (torch.Tensor) – normalized parameters (indexed by last dimension)
target_scale (torch.Tensor) – scale of parameters (n_batch_samples x (center, scale))
encoder (BaseEstimator) – original encoder that normalized the target in the first place
- Returns:
parameters in real/not normalized space
- Return type:
torch.Tensor
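In practice the forecasting model calls this method internally with the dataset's target normalizer. The sketch below is a rough illustration under assumed shapes and an assumed logit-transformed TorchNormalizer; it is not a prescribed calling pattern.

```python
# Sketch: rescaling raw network output into Beta distribution parameters.
# The TorchNormalizer and all tensor shapes are illustrative assumptions; the
# model normally performs this call itself with the dataset's normalizer.
import torch
from pytorch_forecasting.data import TorchNormalizer
from pytorch_forecasting.metrics import BetaDistributionLoss

loss_fn = BetaDistributionLoss()
encoder = TorchNormalizer(transformation="logit")  # must match the requirement above

raw = torch.randn(4, 10, 2)  # unconstrained network output, 2 parameters per step
target_scale = torch.stack(  # per-sample (center, scale) from the normalizer
    [torch.zeros(4), torch.ones(4)], dim=-1
)

params = loss_fn.rescale_parameters(raw, target_scale=target_scale, encoder=encoder)
dist = loss_fn.map_x_to_distribution(params)  # Beta distribution in original space
```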