TorchNormalizer#

class pytorch_forecasting.data.encoders.TorchNormalizer(method: str = 'standard', center: bool = True, transformation: str | Tuple[Callable, Callable] | None = None, method_kwargs: Dict[str, Any] = {})[source]#

Bases: InitialParameterRepresenterMixIn, BaseEstimator, TransformerMixin, TransformMixIn

Basic target transformer that can also be fit on torch tensors.

Parameters:
  • method (str, optional) – method to rescale series. Either “identity”, “standard” (standard scaling) or “robust” (scale using quantiles 0.25-0.75). Defaults to “standard”.

  • method_kwargs (Dict[str, Any], optional) – Dictionary of method-specific arguments. For the “robust” method: “upper”, “lower” and “center” quantiles, defaulting to 0.75, 0.25 and 0.5.

  • center (bool, optional) – Whether to center the output to zero. Defaults to True.

  • transformation (Union[str, Dict[str, Callable]], optional) –

    Transform values before applying normalizer. Available options are

    • None (default): No transformation of values

    • log: Estimate in log-space leading to a multiplicative model

    • logp1: Estimate in log-space but add 1 to values before transforming for stability (e.g. if many small values <<1 are present). Note that the inverse transform is still only torch.exp() and not torch.expm1().

    • logit: Apply logit transformation on values that are between 0 and 1

    • count: Apply softplus to output (inverse transformation) and x + 1 to input (transformation)

    • softplus: Apply softplus to output (inverse transformation) and inverse softplus to input (transformation)

    • relu: Apply max(0, x) to output

    • Dict[str, Callable] of PyTorch functions that transform and inversely transform values. forward and reverse entries are required. An inverse transformation is optional and should be defined if reverse is not the inverse of the forward transformation. inverse_torch can be defined to provide a torch distribution transform for inverse transformations. See the usage sketch after this list.
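
A minimal usage sketch (the values and variable names are illustrative assumptions, not taken from the library's documentation) of the method, transformation and method_kwargs parameters described above:

    import numpy as np
    import torch

    from pytorch_forecasting.data.encoders import TorchNormalizer

    # illustrative 1-D target series
    y = torch.tensor([1.0, 2.0, 4.0, 8.0, 16.0])

    # default: standard scaling with centering
    normalizer = TorchNormalizer(method="standard", center=True)
    y_scaled = normalizer.fit_transform(y)

    # robust scaling in log-space, with custom quantiles passed via method_kwargs;
    # numpy arrays and pandas Series are accepted as well as torch tensors
    robust = TorchNormalizer(
        method="robust",
        transformation="log",
        method_kwargs={"lower": 0.1, "upper": 0.9, "center": 0.5},
    )
    y_robust = robust.fit_transform(np.array([1.0, 2.0, 4.0, 8.0, 16.0]))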


Methods

  • extra_repr()

  • fit(y) – Fit transformer, i.e. determine center and scale of data.

  • fit_transform(X[, y]) – Fit to data, then transform it.

  • get_parameters(*args, **kwargs) – Returns parameters that were used for encoding.

  • get_params([deep]) – Get parameters for this estimator.

  • get_transform(transformation) – Return transformation functions.

  • inverse_preprocess(y) – Inverse preprocess re-scaled data (e.g. take exp).

  • inverse_transform(y) – Inverse scale.

  • preprocess(y) – Preprocess input data (e.g. take log).

  • set_output(*[, transform]) – Set output container.

  • set_params(**params) – Set the parameters of this estimator.

  • transform(y[, return_norm, target_scale]) – Rescale data.

Attributes

  • TRANSFORMATIONS

fit(y: Series | ndarray | Tensor)[source]#

Fit transformer, i.e. determine center and scale of data

Parameters:

y (Union[pd.Series, np.ndarray, torch.Tensor]) – input data

Returns:

self

Return type:

TorchNormalizer
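
A short sketch (illustrative values, not from the official documentation): fit accepts a pandas Series, numpy array or torch tensor and returns the fitted normalizer itself, so calls can be chained:

    import torch

    from pytorch_forecasting.data.encoders import TorchNormalizer

    normalizer = TorchNormalizer(method="standard")
    fitted = normalizer.fit(torch.tensor([10.0, 12.0, 9.0, 15.0, 11.0]))
    assert fitted is normalizer  # fit returns self (see "Return type" above)

    # chaining fit and transform in one expression
    y_scaled = TorchNormalizer().fit(torch.tensor([1.0, 2.0, 3.0])).transform(torch.tensor([1.5, 2.5]))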

get_parameters(*args, **kwargs) Tensor[source]#

Returns parameters that were used for encoding.

Returns:

First element is the center of the data and the second is the scale.

Return type:

torch.Tensor
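
A brief sketch (fitted values are illustrative): the returned tensor holds the fitted center first and the scale second, as described above:

    import torch

    from pytorch_forecasting.data.encoders import TorchNormalizer

    normalizer = TorchNormalizer(method="standard").fit(torch.tensor([1.0, 3.0, 5.0, 7.0]))
    params = normalizer.get_parameters()
    center, scale = params[..., 0], params[..., 1]  # ordering per the docstring above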

inverse_transform(y: Tensor) Tensor[source]#

Inverse scale.

Parameters:

y (torch.Tensor) – scaled data

Returns:

de-scaled data

Return type:

torch.Tensor
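
A round-trip sketch (illustrative values, not from the official documentation): inverse_transform undoes the scaling that transform applied with the fitted parameters:

    import torch

    from pytorch_forecasting.data.encoders import TorchNormalizer

    normalizer = TorchNormalizer(method="standard")
    y = torch.tensor([3.0, 5.0, 7.0, 11.0])
    y_scaled = normalizer.fit_transform(y)
    y_restored = normalizer.inverse_transform(y_scaled)

    # values should match the original data; reshape guards against an extra
    # batch dimension that the inverse transformation may add
    assert torch.allclose(y_restored.reshape(-1), y.reshape(-1))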

transform(y: Series | ndarray | Tensor, return_norm: bool = False, target_scale: Tensor = None) Tuple[ndarray | Tensor, ndarray] | ndarray | Tensor[source]#

Rescale data.

Parameters:
  • y (Union[pd.Series, np.ndarray, torch.Tensor]) – input data

  • return_norm (bool, optional) – If True, also return the scaling parameters used for rescaling. Defaults to False.

  • target_scale (torch.Tensor) – target scale to use instead of fitted center and scale

Returns:

Rescaled data, with type depending on the input type. If return_norm=True, the scaling parameters are returned as a second element.

Return type:

Union[Tuple[Union[np.ndarray, torch.Tensor], np.ndarray], Union[np.ndarray, torch.Tensor]]
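
A sketch of the two optional arguments (values, names and the assumed [[center, scale]] layout of target_scale are illustrative, not from the official documentation): return_norm=True additionally returns the scaling parameters, while target_scale substitutes an explicit center and scale for the fitted ones:

    import torch

    from pytorch_forecasting.data.encoders import TorchNormalizer

    normalizer = TorchNormalizer(method="standard").fit(torch.tensor([2.0, 4.0, 6.0, 8.0]))
    y_new = torch.tensor([3.0, 5.0])

    # also return the scaling parameters that were used
    y_scaled, norm = normalizer.transform(y_new, return_norm=True)

    # rescale with an explicit center and scale instead of the fitted ones
    # (assumed layout: one [center, scale] pair)
    manual_scale = torch.tensor([[5.0, 2.0]])
    y_manual = normalizer.transform(y_new, target_scale=manual_scale)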