Data

Loading data for timeseries forecasting is not trivial - in particular if covariates are included and values are missing. PyTorch Forecasting provides the TimeSeriesDataSet, which comes with a to_dataloader() method to convert it to a dataloader and a from_dataset() method to create, e.g., a validation or test dataset from a training dataset using the same label encoders and data normalization.

Further, timeseries almost always have to be normalized for a neural network to learn efficiently. PyTorch Forecasting provides multiple such target normalizers (some of which can also be used for normalizing covariates).
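For example, a typical workflow creates a training dataset from a dataframe, derives a validation dataset from it with from_dataset() and converts both to dataloaders. The following is a minimal sketch on a synthetic dataframe; all column names ("series", "value") and lengths are illustrative:

import numpy as np
import pandas as pd

from pytorch_forecasting import TimeSeriesDataSet

# toy data: two series of 100 steps each (column names are illustrative)
data = pd.DataFrame(
    {
        "time_idx": np.tile(np.arange(100), 2),
        "series": np.repeat(["a", "b"], 100),
        "value": np.random.randn(200).cumsum(),
    }
)

# reserve the last 10 steps of each series for validation
training_cutoff = data["time_idx"].max() - 10

training = TimeSeriesDataSet(
    data[lambda x: x.time_idx <= training_cutoff],
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=30,
    max_prediction_length=10,
    time_varying_unknown_reals=["value"],
)

# the validation set re-uses the label encoders and normalizers fit on the training set
validation = TimeSeriesDataSet.from_dataset(training, data, predict=True, stop_randomization=True)

train_dataloader = training.to_dataloader(train=True, batch_size=64, num_workers=0)
val_dataloader = validation.to_dataloader(train=False, batch_size=64, num_workers=0)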

Time series data set

The time series dataset is the central data-holding object in PyTorch Forecasting. It primarily takes a pandas DataFrame along with some metadata. See the tutorial on passing data to models to learn more about how it is coupled to models.

class pytorch_forecasting.data.timeseries.TimeSeriesDataSet(data: pandas.core.frame.DataFrame, time_idx: str, target: Union[str, List[str]], group_ids: List[str], weight: Optional[str] = None, max_encoder_length: int = 30, min_encoder_length: Optional[int] = None, min_prediction_idx: Optional[int] = None, min_prediction_length: Optional[int] = None, max_prediction_length: int = 1, static_categoricals: List[str] = [], static_reals: List[str] = [], time_varying_known_categoricals: List[str] = [], time_varying_known_reals: List[str] = [], time_varying_unknown_categoricals: List[str] = [], time_varying_unknown_reals: List[str] = [], variable_groups: Dict[str, List[int]] = {}, constant_fill_strategy: Dict[str, Union[str, float, int, bool]] = {}, allow_missing_timesteps: bool = False, lags: Dict[str, List[int]] = {}, add_relative_time_idx: bool = False, add_target_scales: bool = False, add_encoder_length: Union[bool, str] = 'auto', target_normalizer: Union[pytorch_forecasting.data.encoders.TorchNormalizer, pytorch_forecasting.data.encoders.NaNLabelEncoder, pytorch_forecasting.data.encoders.EncoderNormalizer, str, List[Union[pytorch_forecasting.data.encoders.TorchNormalizer, pytorch_forecasting.data.encoders.NaNLabelEncoder, pytorch_forecasting.data.encoders.EncoderNormalizer]], Tuple[Union[pytorch_forecasting.data.encoders.TorchNormalizer, pytorch_forecasting.data.encoders.NaNLabelEncoder, pytorch_forecasting.data.encoders.EncoderNormalizer]]] = 'auto', categorical_encoders: Dict[str, pytorch_forecasting.data.encoders.NaNLabelEncoder] = {}, scalers: Dict[str, Union[sklearn.preprocessing._data.StandardScaler, sklearn.preprocessing._data.RobustScaler, pytorch_forecasting.data.encoders.TorchNormalizer, pytorch_forecasting.data.encoders.EncoderNormalizer]] = {}, randomize_length: Union[None, Tuple[float, float], bool] = False, predict_mode: bool = False)[source]

PyTorch Dataset for fitting timeseries models.

The dataset automates common tasks such as

  • scaling and encoding of variables

  • normalizing the target variable

  • efficiently converting timeseries in pandas dataframes to torch tensors

  • holding information about static and time-varying variables known and unknown in the future

  • holding information about related categories (such as holidays)

  • downsampling for data augmentation

  • generating inference, validation and test datasets

  • etc.

Timeseries dataset holding data for models.

The tutorial on passing data to models is helpful to understand the output of the dataset and how it is coupled to models.

Each sample is a subsequence of a full time series. The subsequence consists of encoder and decoder/prediction timepoints for a given time series. This class constructs an index which defines which subsequences exist and can be sampled from (index attribute). The samples in the index are defined by the various parameters to the class (encoder and prediction lengths, minimum prediction length, randomize length and predict keywords). How samples are sampled into batches for training is determined by the DataLoader. The class provides the to_dataloader() method to convert the dataset into a dataloader.
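To understand the output, it can help to inspect a single batch from the dataloader. This sketch assumes the train_dataloader from the example above; the exact set of keys in x may differ between versions:

x, y = next(iter(train_dataloader))

# x is a dictionary of tensors, typically with keys such as "encoder_cont",
# "encoder_lengths", "decoder_cont", "decoder_lengths", "groups" and "target_scale"
print(list(x.keys()))

# y is a tuple of (target, weight); weight is None unless a weight column was set
target, weight = y
print(target.shape)  # (batch_size, max_prediction_length) for a single target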

Large datasets:

Currently the class is limited to in-memory operations (which can be sped up by an existing installation of numba). If you have extremely large data, however, you can pass prefitted encoders and scalers to it along with a subset of sequences to construct a valid dataset (plus, likely the EncoderNormalizer should be used to normalize targets). When fitting a network, you would then need to create a custom DataLoader that rotates through the datasets. There are currently no built-in methods to do this.
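A minimal sketch of one way to do this is shown below; it is not a built-in API. The shard paths are hypothetical, and the EncoderNormalizer is chosen so that target normalization does not rely on global statistics:

import pandas as pd

from pytorch_forecasting import TimeSeriesDataSet
from pytorch_forecasting.data import EncoderNormalizer

# hypothetical shards of a dataset too large for memory
shard_paths = ["shard_0.parquet", "shard_1.parquet", "shard_2.parquet"]

# fit encoders and scalers once, on the first shard ...
template = TimeSeriesDataSet(
    pd.read_parquet(shard_paths[0]),
    time_idx="time_idx",
    target="value",
    group_ids=["series"],
    max_encoder_length=30,
    max_prediction_length=10,
    time_varying_unknown_reals=["value"],
    target_normalizer=EncoderNormalizer(),  # normalizes on each encoder sequence
)

# ... and re-use them for the remaining shards via from_dataset()
def epoch_batches(batch_size=64):
    """Rotate through the shard dataloaders within one epoch."""
    for path in shard_paths:
        dataset = TimeSeriesDataSet.from_dataset(template, pd.read_parquet(path))
        yield from dataset.to_dataloader(train=True, batch_size=batch_size)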

Parameters
  • data (pd.DataFrame) – dataframe with sequence data - each row can be identified with time_idx and the group_ids

  • time_idx (str) – integer column denoting the time index. This column is used to determine the sequence of samples. If there are no missing observations, the time index should increase by +1 for each subsequent sample. The first time_idx for each series does not necessarily have to be 0; any value is allowed.

  • target (Union[str, List[str]]) – column denoting the target or list of columns denoting the target - categorical or continuous.

  • group_ids (List[str]) – list of column names identifying a time series. This means that the group_ids identify a sample together with the time_idx. If you have only one timeseries, set this to the name of a column that is constant.

  • weight (str) – column name for weights. Defaults to None.

  • max_encoder_length (int) – maximum length to encode. This is the maximum history length used by the time series dataset.

  • min_encoder_length (int) – minimum allowed length to encode. Defaults to max_encoder_length.

  • min_prediction_idx (int) – minimum time_idx from where to start predictions. This parameter can be useful to create a validation or test set.

  • max_prediction_length (int) – maximum prediction/decoder length (do not choose this too short, as a longer prediction length can help convergence)

  • min_prediction_length (int) – minimum prediction/decoder length. Defaults to max_prediction_length

  • static_categoricals (List[str]) – list of categorical variables that do not change over time, entries can also be lists which are then encoded together (e.g. useful for product categories)

  • static_reals (List[str]) – list of continuous variables that do not change over time

  • time_varying_known_categoricals (List[str]) – list of categorical variables that change over time and are known in the future, entries can also be lists which are then encoded together (e.g. useful for special days or promotion categories)

  • time_varying_known_reals (List[str]) – list of continuous variables that change over time and are known in the future (e.g. price of a product, but not demand of a product)

  • time_varying_unknown_categoricals (List[str]) – list of categorical variables that change over time and are not known in the future, entries can also be lists which are then encoded together (e.g. useful for weather categories). You might want to include your target here.

  • time_varying_unknown_reals (List[str]) – list of continuous variables that change over time and are not known in the future. You might want to include your target here.

  • variable_groups (Dict[str, List[str]]) – dictionary mapping a name to a list of columns in the data. The name should be present in a categorical or real class argument to be able to encode or scale the columns by group. This will effectively combine the categorical variables and is particularly useful if a categorical variable can have multiple values at the same time. An example are holidays, which can be overlapping.

  • constant_fill_strategy (Dict[str, Union[str, float, int, bool]]) – dictionary of column names with constants to fill in missing values if there are gaps in the sequence (by default, a forward fill strategy is used). The values will only be used if allow_missing_timesteps=True. A common use case is to denote that demand was 0 if the sample is not in the dataset.

  • allow_missing_timesteps (bool) – whether to allow missing timesteps that are automatically filled up. Missing values refer to gaps in the time_idx, e.g. if a specific timeseries has only samples for 1, 2, 4, 5, the sample for 3 will be generated on-the-fly. Allowing missing timesteps does not deal with NA values. You should fill NA values before passing the dataframe to the TimeSeriesDataSet.

  • lags (Dict[str, List[int]]) – dictionary of variable names mapped to a list of time steps by which the variable should be lagged. Lags can be useful to indicate seasonality to the models. If you know the seasonality (or seasonalities) of your data, add at least the target variables with the corresponding lags to improve performance (see the configuration sketch after this parameter list). Lags must not be larger than the shortest time series, as all time series will be cut by the largest lag value to prevent NA values. A lagged variable has to appear in the time-varying variables. If you only want the lagged but not the current value, lag it manually in your input data using data[lagged_variable_name] = data.sort_values(time_idx).groupby(group_ids, observed=True).shift(lag). Defaults to no lags.

  • add_relative_time_idx (bool) – whether to add a relative time index as a feature (i.e. for each sampled sequence, the index will range from -encoder_length to prediction_length)

  • add_target_scales (bool) – whether to add scales for the target to static real features (i.e. add the center and scale of the unnormalized timeseries as features)

  • add_encoder_length (Union[bool, str]) – whether to add the encoder length to the list of static real variables. Defaults to "auto", i.e. True if min_encoder_length != max_encoder_length.

  • target_normalizer (Union[TorchNormalizer, NaNLabelEncoder, EncoderNormalizer, str, list, tuple]) – transformer that takes group_ids, target and time_idx to normalize targets. You can choose from TorchNormalizer, GroupNormalizer, NaNLabelEncoder, EncoderNormalizer (on which overfitting tests will fail) or None for using no normalizer. For multiple targets, use a MultiNormalizer. By default, an appropriate normalizer is chosen automatically.

  • categorical_encoders (Dict[str, NaNLabelEncoder]) – dictionary of scikit-learn label transformers. If you have unobserved categories in the future / a cold-start problem, you can use the NaNLabelEncoder with add_nan=True. Defaults effectively to scikit-learn's LabelEncoder(). Prefitted encoders will not be fit again.

  • scalers (Dict[str, Union[StandardScaler, RobustScaler, TorchNormalizer, EncoderNormalizer]]) – dictionary of scikit-learn scalers. Defaults to scikit-learn's StandardScaler(). Other options are EncoderNormalizer, GroupNormalizer, RobustScaler() or None for using no normalizer / a normalizer with center=0 and scale=1 (method="identity"). Prefitted scalers will not be fit again (with the exception of the EncoderNormalizer, which is fit on every encoder sequence).

  • randomize_length (Union[None, Tuple[float, float], bool]) – None or False to not randomize lengths. Otherwise, a tuple of beta distribution concentrations from which probabilities are sampled; these probabilities are used to sample new sequence lengths with a binomial distribution. If True, defaults to (0.2, 0.05), i.e. ~1/4 of samples around the minimum encoder length. Defaults to False.

  • predict_mode (bool) – whether to iterate over each timeseries only once (using only the last provided samples). Effectively, this will choose, for each time series identified by group_ids, the last max_prediction_length samples as prediction samples and everything previous, up to max_encoder_length samples, as encoder samples.
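Putting several of these parameters together, a fuller configuration might look as follows. This is a sketch only; all column names ("store", "product", "demand", "price", "holiday_1", "holiday_2") are illustrative, not part of the API:

from pytorch_forecasting import TimeSeriesDataSet

dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="demand",
    group_ids=["store", "product"],
    max_encoder_length=60,
    max_prediction_length=14,
    static_categoricals=["store", "product"],
    time_varying_known_reals=["time_idx", "price"],
    # combine overlapping holiday indicator columns into one categorical variable
    time_varying_known_categoricals=["special_days"],
    variable_groups={"special_days": ["holiday_1", "holiday_2"]},
    time_varying_unknown_reals=["demand"],
    # fill gaps in time_idx on the fly, treating missing demand as 0
    allow_missing_timesteps=True,
    constant_fill_strategy={"demand": 0},
    # indicate weekly seasonality via a lagged target
    lags={"demand": [7]},
    add_relative_time_idx=True,
    add_target_scales=True,
)

# predict_mode is usually set indirectly: predict=True keeps only the last
# max_prediction_length points of each series as prediction samples
prediction_set = TimeSeriesDataSet.from_dataset(
    dataset, data, predict=True, stop_randomization=True
)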

Details

See the API documentation for further details on available data encoders and the TimeSeriesDataSet:

pytorch_forecasting.data.encoders.EncoderNormalizer([...])

Special Normalizer that is fit on each encoding sequence.

pytorch_forecasting.data.encoders.GroupNormalizer([...])

Normalizer that scales by groups.

pytorch_forecasting.data.encoders.MultiNormalizer(...)

Normalizer for multiple targets.

pytorch_forecasting.data.encoders.NaNLabelEncoder([...])

Label encoder that can optionally always encode nan and unknown classes (in transform) as class 0

pytorch_forecasting.data.encoders.TorchNormalizer([...])

Basic target transformer that can be fit also on torch tensors.

pytorch_forecasting.data.timeseries.TimeSeriesDataSet(...)

PyTorch Dataset for fitting timeseries models.

pytorch_forecasting.data.timeseries.TimeSynchronizedBatchSampler(...)

Samples mini-batches randomly but in a time-synchronised manner.
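As a closing usage sketch (column names again illustrative), a GroupNormalizer can be passed as target_normalizer to scale the target per group, and a NaNLabelEncoder with add_nan=True guards against categories unseen during training:

from pytorch_forecasting import TimeSeriesDataSet
from pytorch_forecasting.data import GroupNormalizer, NaNLabelEncoder

dataset = TimeSeriesDataSet(
    data,
    time_idx="time_idx",
    target="demand",
    group_ids=["store"],
    max_encoder_length=60,
    max_prediction_length=14,
    time_varying_unknown_reals=["demand"],
    # scale the target per store; the softplus transformation keeps predictions positive
    target_normalizer=GroupNormalizer(groups=["store"], transformation="softplus"),
    # encode unknown/unseen store ids as a dedicated NaN class
    categorical_encoders={"store": NaNLabelEncoder(add_nan=True)},
)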