{ "cells": [ { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "# How to use custom data and implement custom models and metrics\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. _new-model-tutorial:\n", "\n", "Building a new model in PyTorch Forecasting is relatively easy. Many things are taken care of automatically\n", "\n", "* Training, validation and inference is automatically handled for most models - defining the architecture and hyperparameters is sufficient\n", "* Dataloading, normalization, re-scaling etc. is provided by the TimeSeriesDataSet\n", "* Logging training progress with multiple metrics including plotting examples is automatically taken care of\n", "* Masking of entries if different time series have different lengths is automatic\n", "\n", "However, there a couple of things to keep in mind if you want to make full use of the package. This tutorial first demonstrates how to implement a simple model and then turns to more complicated implementation scenarios.\n", "\n", "We will answer questions such as\n", "\n", "* How to transfer an existing PyTorch implementation into PyTorch Forecasting\n", "* How to handle data loading and enable different length time series\n", "* How to define and use a custom metric\n", "* How to handle recurrent networks\n", "* How to deal with covariates\n", "* How to test new models" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Building a simple, first model\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "For demonstration purposes we will choose a simple fully connected model. It takes a timeseries of size `input_size` as input and outputs a new timeseries of size `output_size`. You can think of this `input_size` encoding steps and `output_size` decoding/prediction steps.\n" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "import warnings\n", "\n", "warnings.filterwarnings(\"ignore\")\n", "\n", "os.chdir(\"../../..\")" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "torch.Size([20, 2])" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch\n", "from torch import nn\n", "\n", "\n", "class FullyConnectedModule(nn.Module):\n", " def __init__(self, input_size: int, output_size: int, hidden_size: int, n_hidden_layers: int):\n", " super().__init__()\n", "\n", " # input layer\n", " module_list = [nn.Linear(input_size, hidden_size), nn.ReLU()]\n", " # hidden layers\n", " for _ in range(n_hidden_layers):\n", " module_list.extend([nn.Linear(hidden_size, hidden_size), nn.ReLU()])\n", " # output layer\n", " module_list.append(nn.Linear(hidden_size, output_size))\n", "\n", " self.sequential = nn.Sequential(*module_list)\n", "\n", " def forward(self, x: torch.Tensor) -> torch.Tensor:\n", " # x of shape: batch_size x n_timesteps_in\n", " # output of shape batch_size x n_timesteps_out\n", " return self.sequential(x)\n", "\n", "\n", "# test that network works as intended\n", "network = FullyConnectedModule(input_size=5, output_size=2, hidden_size=10, n_hidden_layers=2)\n", "x = torch.rand(20, 5)\n", "network(x).shape" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "The above model is not yet a PyTorch Forecasting model but it is easy to get there. 
As this is a simple model, we will use the :py:class:`~pytorch_forecasting.models.base_model.BaseModel`. This base class is a modified ``LightningModule`` with pre-defined hooks for training and validating time series models. The :py:class:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates` will be discussed later in this tutorial.\n", "\n", "Either way, the main requirement is for the model to have a ``forward`` method.\n", "\n", ".. automethod:: pytorch_forecasting.models.base_model.BaseModel.forward\n", " :noindex:" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from typing import Dict\n", "\n", "from pytorch_forecasting.models import BaseModel\n", "\n", "\n", "class FullyConnectedModel(BaseModel):\n", " def __init__(self, input_size: int, output_size: int, hidden_size: int, n_hidden_layers: int, **kwargs):\n", " # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", " self.network = FullyConnectedModule(\n", " input_size=self.hparams.input_size,\n", " output_size=self.hparams.output_size,\n", " hidden_size=self.hparams.hidden_size,\n", " n_hidden_layers=self.hparams.n_hidden_layers,\n", " )\n", "\n", " def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " network_input = x[\"encoder_cont\"].squeeze(-1)\n", " prediction = self.network(network_input)\n", "\n", " # rescale predictions into target space\n", " prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", " # We need to return a dictionary that at least contains the prediction\n", " # The parameter can be directly forwarded from the input.\n", " # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", " return self.to_network_output(prediction=prediction)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "This is a very basic implementation that could be readily used for training. But before we add additional features, let's first have a look at how we pass data to this model before we go about initializing it.\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Passing data to a model\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ ".. _passing-data:\n", "\n", "Instead of having to write our own dataloader (which can be rather complicated), we can leverage PyTorch Forecasting's :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` to feed data to our model.\n", "In fact, PyTorch Forecasting expects us to use a :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet`.\n", "\n", "The data has to be in a specific format to be used by the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet`. It should be in a pandas `DataFrame` and have a categorical column to identify each series and an integer column to specify the time of the record.\n", "\n", "Below, we create such a dataset with 30 observations - 10 timesteps for each of 3 time series." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
valuegrouptime_idx
0-0.12559700
10.32566801
2-0.26596202
30.13230503
40.16711704
50.48124105
6-0.11318806
7-0.08960907
80.02915608
9-0.18195009
100.15033410
110.42862411
12-0.13910612
13-0.08533413
14-0.24366814
150.05591315
160.30859116
170.14118317
180.23075918
190.17352819
200.22631520
21-0.34839021
220.06781622
23-0.07479423
240.05939624
250.30074525
26-0.34403226
27-0.08393427
28-0.34348128
29-0.38520229
\n", "
" ], "text/plain": [ " value group time_idx\n", "0 -0.125597 0 0\n", "1 0.325668 0 1\n", "2 -0.265962 0 2\n", "3 0.132305 0 3\n", "4 0.167117 0 4\n", "5 0.481241 0 5\n", "6 -0.113188 0 6\n", "7 -0.089609 0 7\n", "8 0.029156 0 8\n", "9 -0.181950 0 9\n", "10 0.150334 1 0\n", "11 0.428624 1 1\n", "12 -0.139106 1 2\n", "13 -0.085334 1 3\n", "14 -0.243668 1 4\n", "15 0.055913 1 5\n", "16 0.308591 1 6\n", "17 0.141183 1 7\n", "18 0.230759 1 8\n", "19 0.173528 1 9\n", "20 0.226315 2 0\n", "21 -0.348390 2 1\n", "22 0.067816 2 2\n", "23 -0.074794 2 3\n", "24 0.059396 2 4\n", "25 0.300745 2 5\n", "26 -0.344032 2 6\n", "27 -0.083934 2 7\n", "28 -0.343481 2 8\n", "29 -0.385202 2 9" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "test_data = pd.DataFrame(\n", " dict(\n", " value=np.random.rand(30) - 0.5,\n", " group=np.repeat(np.arange(3), 10),\n", " time_idx=np.tile(np.arange(10), 3),\n", " )\n", ")\n", "test_data" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Converting it to a :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` is easy:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "from pytorch_forecasting import TimeSeriesDataSet\n", "\n", "# create the dataset from the pandas dataframe\n", "dataset = TimeSeriesDataSet(\n", " test_data,\n", " group_ids=[\"group\"],\n", " target=\"value\",\n", " time_idx=\"time_idx\",\n", " min_encoder_length=5,\n", " max_encoder_length=5,\n", " min_prediction_length=2,\n", " max_prediction_length=2,\n", " time_varying_unknown_reals=[\"value\"],\n", ")" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "We can take a look at all the defaults and settings that were set by PyTorch Forecasting. These are all available as arguments to :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` - see its documentation for more all the details." 
] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'time_idx': 'time_idx',\n", " 'target': 'value',\n", " 'group_ids': ['group'],\n", " 'weight': None,\n", " 'max_encoder_length': 5,\n", " 'min_encoder_length': 5,\n", " 'min_prediction_idx': 0,\n", " 'min_prediction_length': 2,\n", " 'max_prediction_length': 2,\n", " 'static_categoricals': [],\n", " 'static_reals': [],\n", " 'time_varying_known_categoricals': [],\n", " 'time_varying_known_reals': [],\n", " 'time_varying_unknown_categoricals': [],\n", " 'time_varying_unknown_reals': ['value'],\n", " 'variable_groups': {},\n", " 'constant_fill_strategy': {},\n", " 'allow_missing_timesteps': False,\n", " 'lags': {},\n", " 'add_relative_time_idx': False,\n", " 'add_target_scales': False,\n", " 'add_encoder_length': False,\n", " 'target_normalizer': GroupNormalizer(\n", " \tmethod='standard',\n", " \tgroups=[],\n", " \tcenter=True,\n", " \tscale_by_group=False,\n", " \ttransformation=None,\n", " \tmethod_kwargs={}\n", " ),\n", " 'categorical_encoders': {'__group_id__group': NaNLabelEncoder(add_nan=False, warn=True),\n", " 'group': NaNLabelEncoder(add_nan=False, warn=True)},\n", " 'scalers': {},\n", " 'randomize_length': None,\n", " 'predict_mode': False}" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dataset.get_parameters()" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now, we take a look at the output of the dataloader. It's `x` will be fed to the model's forward method, that is why it is so important to understand it.\n" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "x = {'encoder_cat': tensor([], size=(4, 5, 0), dtype=torch.int64), 'encoder_cont': tensor([[[ 1.7401],\n", " [-0.6492],\n", " [-0.4229],\n", " [-1.0892],\n", " [ 0.1716]],\n", "\n", " [[-0.4229],\n", " [-1.0892],\n", " [ 0.1716],\n", " [ 1.2349],\n", " [ 0.5304]],\n", "\n", " [[-0.6492],\n", " [-0.4229],\n", " [-1.0892],\n", " [ 0.1716],\n", " [ 1.2349]],\n", "\n", " [[-1.5299],\n", " [ 0.2216],\n", " [-0.3785],\n", " [ 0.1862],\n", " [ 1.2019]]]), 'encoder_target': tensor([[ 0.4286, -0.1391, -0.0853, -0.2437, 0.0559],\n", " [-0.0853, -0.2437, 0.0559, 0.3086, 0.1412],\n", " [-0.1391, -0.0853, -0.2437, 0.0559, 0.3086],\n", " [-0.3484, 0.0678, -0.0748, 0.0594, 0.3007]]), 'encoder_lengths': tensor([5, 5, 5, 5]), 'decoder_cat': tensor([], size=(4, 2, 0), dtype=torch.int64), 'decoder_cont': tensor([[[ 1.2349],\n", " [ 0.5304]],\n", "\n", " [[ 0.9074],\n", " [ 0.6665]],\n", "\n", " [[ 0.5304],\n", " [ 0.9074]],\n", "\n", " [[-1.5116],\n", " [-0.4170]]]), 'decoder_target': tensor([[ 0.3086, 0.1412],\n", " [ 0.2308, 0.1735],\n", " [ 0.1412, 0.2308],\n", " [-0.3440, -0.0839]]), 'decoder_lengths': tensor([2, 2, 2, 2]), 'decoder_time_idx': tensor([[6, 7],\n", " [8, 9],\n", " [7, 8],\n", " [6, 7]]), 'groups': tensor([[1],\n", " [1],\n", " [1],\n", " [2]]), 'target_scale': tensor([[0.0151, 0.2376],\n", " [0.0151, 0.2376],\n", " [0.0151, 0.2376],\n", " [0.0151, 0.2376]])}\n", "\n", "y = (tensor([[ 0.3086, 0.1412],\n", " [ 0.2308, 0.1735],\n", " [ 0.1412, 0.2308],\n", " [-0.3440, -0.0839]]), None)\n", "\n", "sizes of x =\n", "\tencoder_cat = torch.Size([4, 5, 0])\n", "\tencoder_cont = torch.Size([4, 5, 1])\n", "\tencoder_target = torch.Size([4, 5])\n", "\tencoder_lengths = torch.Size([4])\n", "\tdecoder_cat = torch.Size([4, 2, 0])\n", "\tdecoder_cont = torch.Size([4, 2, 
1])\n", "\tdecoder_target = torch.Size([4, 2])\n", "\tdecoder_lengths = torch.Size([4])\n", "\tdecoder_time_idx = torch.Size([4, 2])\n", "\tgroups = torch.Size([4, 1])\n", "\ttarget_scale = torch.Size([4, 2])\n" ] } ], "source": [ "# convert the dataset to a dataloader\n", "dataloader = dataset.to_dataloader(batch_size=4)\n", "\n", "# and load the first batch\n", "x, y = next(iter(dataloader))\n", "print(\"x =\", x)\n", "print(\"\\ny =\", y)\n", "print(\"\\nsizes of x =\")\n", "for key, value in x.items():\n", " print(f\"\\t{key} = {value.size()}\")" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "To understand it better, we look at documentation of the :py:meth:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet.to_dataloader` method:\n", "\n", ".. automethod:: pytorch_forecasting.data.timeseries.TimeSeriesDataSet.to_dataloader\n", " :noindex:" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "This explains why we had to first extract the correct input in our simple `FullyConnectedModel` above before passing it to our `FullyConnectedModule`.\n", "As a reminder:\n" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [], "source": [ "def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " network_input = x[\"encoder_cont\"].squeeze(-1)\n", " prediction = self.network(network_input)\n", "\n", " # rescale predictions into target space\n", " prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", " # We need to return a dictionary that at least contains the prediction\n", " # The parameter can be directly forwarded from the input.\n", " # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", " return self.to_network_output(prediction=prediction)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "For such a simple architecture, we can ignore most of the inputs in `x`. You do not have to worry about moving tensors to specifc GPUs, [PyTorch Lightning](https://pytorch-lightning.readthedocs.io) will take care of this for you.\n", "\n", "Now, let's check if our model works. We initialize model always with their `from_dataset()` method with takes hyperparameters from the dataset, hyperparameters for the model and hyperparameters for the optimizer. Read more about it in the next section.\n" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Output(prediction=tensor([[-0.0175, -0.0045],\n", " [-0.0203, 0.0039],\n", " [-0.0128, 0.0033],\n", " [-0.0162, -0.0026]], grad_fn=))" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model = FullyConnectedModel.from_dataset(dataset, input_size=5, output_size=2, hidden_size=10, n_hidden_layers=2)\n", "x, y = next(iter(dataloader))\n", "model(x)" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "If you want to know to which group and time index (at the first prediction) the samples in the batch link to, you can find out by using :py:meth:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet.x_to_index`:" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
time_idxgroup
052
151
272
350
\n", "
" ], "text/plain": [ " time_idx group\n", "0 5 2\n", "1 5 1\n", "2 7 2\n", "3 5 0" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "dataset.x_to_index(x)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Coupling datasets and models\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "You might have noticed that the encoder and decoder/prediction lengths (5 and 2) are already specified in the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` and we specified them a second time when initializing the model. This might be acceptable for such a simple model but will make it hard for users to understand how to map form the dataset to the model parameters in more complicated settings.\n", "This is why we should implement another method in the model: ``from_dataset()``. Typically, a user would always initialize a model from a dataset. The method is also an opportunity to validate that the dataset defined by the user is compatible with your model architecture.\n", "\n", "While the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` and all PyTorch Forecasting metrics support different length time series, not every network architecture does." ] }, { "cell_type": "code", "execution_count": 11, "metadata": {}, "outputs": [], "source": [ "class FullyConnectedModel(BaseModel):\n", " def __init__(self, input_size: int, output_size: int, hidden_size: int, n_hidden_layers: int, **kwargs):\n", " # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", " self.network = FullyConnectedModule(\n", " input_size=self.hparams.input_size,\n", " output_size=self.hparams.output_size,\n", " hidden_size=self.hparams.hidden_size,\n", " n_hidden_layers=self.hparams.n_hidden_layers,\n", " )\n", "\n", " def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " network_input = x[\"encoder_cont\"].squeeze(-1)\n", " prediction = self.network(network_input).unsqueeze(-1)\n", "\n", " # rescale predictions into target space\n", " prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", " # We need to return a dictionary that at least contains the prediction.\n", " # The parameter can be directly forwarded from the input.\n", " # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", " return self.to_network_output(prediction=prediction)\n", "\n", " @classmethod\n", " def from_dataset(cls, dataset: TimeSeriesDataSet, **kwargs):\n", " new_kwargs = {\n", " \"output_size\": dataset.max_prediction_length,\n", " \"input_size\": dataset.max_encoder_length,\n", " }\n", " new_kwargs.update(kwargs) # use to pass real hyperparameters and override defaults set by dataset\n", " # example for dataset validation\n", " assert dataset.max_prediction_length == dataset.min_prediction_length, \"Decoder only supports a fixed length\"\n", " assert dataset.min_encoder_length == dataset.max_encoder_length, \"Encoder only supports a fixed length\"\n", " assert (\n", " len(dataset.time_varying_known_categoricals) == 0\n", " and len(dataset.time_varying_known_reals) == 0\n", " and len(dataset.time_varying_unknown_categoricals) == 0\n", " and 
len(dataset.static_categoricals) == 0\n", " and len(dataset.static_reals) == 0\n", " and len(dataset.time_varying_unknown_reals) == 1\n", " and dataset.time_varying_unknown_reals[0] == dataset.target\n", " ), \"Only covariate should be the target in 'time_varying_unknown_reals'\"\n", "\n", " return super().from_dataset(dataset, **new_kwargs)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's initialize from our dataset:\n" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " | Name | Type | Params\n", "---------------------------------------------------------------\n", "0 | loss | SMAPE | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | network | FullyConnectedModule | 302 \n", "3 | network.sequential | Sequential | 302 \n", "4 | network.sequential.0 | Linear | 60 \n", "5 | network.sequential.1 | ReLU | 0 \n", "6 | network.sequential.2 | Linear | 110 \n", "7 | network.sequential.3 | ReLU | 0 \n", "8 | network.sequential.4 | Linear | 110 \n", "9 | network.sequential.5 | ReLU | 0 \n", "10 | network.sequential.6 | Linear | 22 \n", "---------------------------------------------------------------\n", "302 Trainable params\n", "0 Non-trainable params\n", "302 Total params\n", "0.001 Total estimated model params size (MB)\n" ] }, { "data": { "text/plain": [ "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": SMAPE()\n", "\"monotone_constaints\": {}\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_size\": 2\n", "\"output_transformer\": GroupNormalizer(\n", "\tmethod='standard',\n", "\tgroups=[],\n", "\tcenter=True,\n", "\tscale_by_group=False,\n", "\ttransformation=None,\n", "\tmethod_kwargs={}\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"weight_decay\": 0.0" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from lightning.pytorch.utilities.model_summary import ModelSummary\n", "\n", "model = FullyConnectedModel.from_dataset(dataset, hidden_size=10, n_hidden_layers=2)\n", "print(ModelSummary(model, max_depth=-1))\n", "model.hparams" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Defining additional hyperparameters\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "So far, we have kept a wildcard ``**kwargs`` argument in the model initialization signature. We then pass these ``**kwargs`` to the :py:class:`~pytorch_forecasting.models.base_model.BaseModel` using a ``super().__init__(**kwargs)`` call. 
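For example, the ``learning_rate`` or ``weight_decay`` arguments of the :py:class:`~pytorch_forecasting.models.base_model.BaseModel` can be passed through this way - a minimal sketch:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# sketch: arguments accepted by BaseModel.__init__ are forwarded via **kwargs\n", "model = FullyConnectedModel.from_dataset(\n", "    dataset, hidden_size=10, n_hidden_layers=2, learning_rate=0.01, weight_decay=1e-3\n", ")" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "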
We can see which additional hyperparameters are available as they are all saved in the ``hparams`` attribute of the model:" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": SMAPE()\n", "\"monotone_constaints\": {}\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_size\": 2\n", "\"output_transformer\": GroupNormalizer(\n", "\tmethod='standard',\n", "\tgroups=[],\n", "\tcenter=True,\n", "\tscale_by_group=False,\n", "\ttransformation=None,\n", "\tmethod_kwargs={}\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"weight_decay\": 0.0" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.hparams" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "While not required, to give the user transparency over these additional hyperparameters, it is worth passing them explicitly instead of implicitly in ``**kwargs``.\n", "\n", "They are described in detail in the :py:class:`~pytorch_forecasting.models.base_model.BaseModel`. \n", "\n", ".. automethod:: pytorch_forecasting.models.base_model.BaseModel.__init__\n", " :noindex:\n", " \n", "You can simply copy this docstring into your model implementation:" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " BaseModel for timeseries forecasting from which to inherit from\n", "\n", " Args:\n", " log_interval (Union[int, float], optional): Batches after which predictions are logged. If < 1.0, will log\n", " multiple entries per batch. Defaults to -1.\n", " log_val_interval (Union[int, float], optional): batches after which predictions for validation are\n", " logged. Defaults to None/log_interval.\n", " learning_rate (float, optional): Learning rate. Defaults to 1e-3.\n", " log_gradient_flow (bool): If to log gradient flow, this takes time and should be only done to diagnose\n", " training failures. Defaults to False.\n", " loss (Metric, optional): metric to optimize, can also be list of metrics. Defaults to SMAPE().\n", " logging_metrics (nn.ModuleList[MultiHorizonMetric]): list of metrics that are logged during training.\n", " Defaults to [].\n", " reduce_on_plateau_patience (int): patience after which learning rate is reduced by a factor of 10. Defaults\n", " to 1000\n", " reduce_on_plateau_reduction (float): reduction in learning rate when encountering plateau. Defaults to 2.0.\n", " reduce_on_plateau_min_lr (float): minimum learning rate for reduce on plateua learning rate scheduler.\n", " Defaults to 1e-5\n", " weight_decay (float): weight decay. Defaults to 0.0.\n", " optimizer_params (Dict[str, Any]): additional parameters for the optimizer. Defaults to {}.\n", " monotone_constaints (Dict[str, int]): dictionary of monotonicity constraints for continuous decoder\n", " variables mapping\n", " position (e.g. ``\"0\"`` for first position) to constraint (``-1`` for negative and ``+1`` for positive,\n", " larger numbers add more weight to the constraint vs. the loss but are usually not necessary).\n", " This constraint significantly slows down training. 
Defaults to {}.\n", " output_transformer (Callable): transformer that takes network output and transforms it to prediction space.\n", " Defaults to None which is equivalent to ``lambda out: out[\"prediction\"]``.\n", " optimizer (str): Optimizer, \"ranger\", \"sgd\", \"adam\", \"adamw\" or class name of optimizer in ``torch.optim``\n", " or ``pytorch_optimizer``.\n", " Alternatively, a class or function can be passed which takes parameters as first argument and\n", " a `lr` argument (optionally also `weight_decay`). Defaults to\n", " `\"ranger\" `_.\n", " \n" ] } ], "source": [ "print(BaseModel.__init__.__doc__)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Classification\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Classification is a common task and can be easily implemented. In fact, we only have to change the target in our :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` and adjust the number of prediction outputs to reflect the number of classes we want to predict. The changes for the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet` are marked below." ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
targetvaluegrouptime_idx
0B0.96715300
1A0.16529701
2B0.10974402
3A0.85084203
4C0.26409004
5A0.32398605
6B0.08549906
7A0.77299007
8C0.48427308
9C0.06574209
10C0.38706910
11A0.56454011
12B0.97942512
13C0.44959613
14C0.84480314
15C0.62255115
16C0.23227016
17C0.13269817
18A0.50196818
19C0.99766219
20C0.05438120
21C0.00659721
22B0.43417922
23A0.20202823
24A0.84301824
25B0.06882225
26C0.46217526
27B0.06395527
28C0.86186028
29B0.43856629
\n", "
" ], "text/plain": [ " target value group time_idx\n", "0 B 0.967153 0 0\n", "1 A 0.165297 0 1\n", "2 B 0.109744 0 2\n", "3 A 0.850842 0 3\n", "4 C 0.264090 0 4\n", "5 A 0.323986 0 5\n", "6 B 0.085499 0 6\n", "7 A 0.772990 0 7\n", "8 C 0.484273 0 8\n", "9 C 0.065742 0 9\n", "10 C 0.387069 1 0\n", "11 A 0.564540 1 1\n", "12 B 0.979425 1 2\n", "13 C 0.449596 1 3\n", "14 C 0.844803 1 4\n", "15 C 0.622551 1 5\n", "16 C 0.232270 1 6\n", "17 C 0.132698 1 7\n", "18 A 0.501968 1 8\n", "19 C 0.997662 1 9\n", "20 C 0.054381 2 0\n", "21 C 0.006597 2 1\n", "22 B 0.434179 2 2\n", "23 A 0.202028 2 3\n", "24 A 0.843018 2 4\n", "25 B 0.068822 2 5\n", "26 C 0.462175 2 6\n", "27 B 0.063955 2 7\n", "28 C 0.861860 2 8\n", "29 B 0.438566 2 9" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "classification_test_data = pd.DataFrame(\n", " dict(\n", " target=np.random.choice([\"A\", \"B\", \"C\"], size=30), # CHANGING values to predict to a categorical\n", " value=np.random.rand(30), # INPUT values - see next section on covariates how to use categorical inputs\n", " group=np.repeat(np.arange(3), 10),\n", " time_idx=np.tile(np.arange(10), 3),\n", " )\n", ")\n", "classification_test_data" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([[1, 0],\n", " [2, 0],\n", " [0, 2],\n", " [2, 2]])" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from pytorch_forecasting.data.encoders import NaNLabelEncoder\n", "\n", "# create the dataset from the pandas dataframe\n", "classification_dataset = TimeSeriesDataSet(\n", " classification_test_data,\n", " group_ids=[\"group\"],\n", " target=\"target\", # SWITCHING to categorical target\n", " time_idx=\"time_idx\",\n", " min_encoder_length=5,\n", " max_encoder_length=5,\n", " min_prediction_length=2,\n", " max_prediction_length=2,\n", " time_varying_unknown_reals=[\"value\"],\n", " target_normalizer=NaNLabelEncoder(), # Use the NaNLabelEncoder to encode categorical target\n", ")\n", "\n", "x, y = next(iter(classification_dataset.to_dataloader(batch_size=4)))\n", "y[0] # target values are encoded categories" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext", "tags": [] }, "source": [ "The keyword argument ``target_normalizer`` is here redundant because the would have detected that a categorical target is used and therefore a :py:class:`~pytorch_forecasting.data.encoders.NaNLabelEncoder` is required." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Now, we need to modify our implementation of the ``FullyConnectedModel``. In particular, we have to one hyperparameters to the model: ``n_classes`` which determines how\n", "many classes there are to predict. Our model will produce a number for each class at each timestep each of which can be converted into probabilities by applying a softmax (over the last dimension). This means we need a total of ``n_decoder_timesteps x n_classes`` predictions. Further, we need to specify the default loss function which we choose to be :py:class:`~pytorch_forecasting.metrics.CrossEntropy`." 
] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " | Name | Type | Params\n", "---------------------------------------------------------------\n", "0 | loss | SMAPE | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | network | FullyConnectedModule | 346 \n", "3 | network.sequential | Sequential | 346 \n", "4 | network.sequential.0 | Linear | 60 \n", "5 | network.sequential.1 | ReLU | 0 \n", "6 | network.sequential.2 | Linear | 110 \n", "7 | network.sequential.3 | ReLU | 0 \n", "8 | network.sequential.4 | Linear | 110 \n", "9 | network.sequential.5 | ReLU | 0 \n", "10 | network.sequential.6 | Linear | 66 \n", "---------------------------------------------------------------\n", "346 Trainable params\n", "0 Non-trainable params\n", "346 Total params\n", "0.001 Total estimated model params size (MB)\n" ] }, { "data": { "text/plain": [ "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": CrossEntropy()\n", "\"monotone_constaints\": {}\n", "\"n_classes\": 3\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_size\": 2\n", "\"output_transformer\": NaNLabelEncoder(add_nan=False, warn=True)\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"weight_decay\": 0.0" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from pytorch_forecasting.metrics import CrossEntropy\n", "\n", "\n", "class FullyConnectedClassificationModel(BaseModel):\n", " def __init__(\n", " self,\n", " input_size: int,\n", " output_size: int,\n", " hidden_size: int,\n", " n_hidden_layers: int,\n", " n_classes: int,\n", " loss=CrossEntropy(),\n", " **kwargs,\n", " ):\n", " # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", " self.network = FullyConnectedModule(\n", " input_size=self.hparams.input_size,\n", " output_size=self.hparams.output_size * self.hparams.n_classes,\n", " hidden_size=self.hparams.hidden_size,\n", " n_hidden_layers=self.hparams.n_hidden_layers,\n", " )\n", "\n", " def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " batch_size = x[\"encoder_cont\"].size(0)\n", " network_input = x[\"encoder_cont\"].squeeze(-1)\n", " prediction = self.network(network_input)\n", " # RESHAPE output to batch_size x n_decoder_timesteps x n_classes\n", " prediction = prediction.unsqueeze(-1).view(batch_size, -1, self.hparams.n_classes)\n", "\n", " # rescale predictions into target space\n", " prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", " # We need to return a named tuple that at least contains the prediction.\n", " # The parameter can be directly forwarded from the input.\n", " # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", " return self.to_network_output(prediction=prediction)\n", "\n", " @classmethod\n", " def from_dataset(cls, dataset: TimeSeriesDataSet, **kwargs):\n", " assert isinstance(dataset.target_normalizer, 
NaNLabelEncoder), \"target normalizer has to encode categories\"\n", " new_kwargs = {\n", " \"n_classes\": len(\n", " dataset.target_normalizer.classes_\n", " ), # ADD number of classes as encoded by the target normalizer\n", " \"output_size\": dataset.max_prediction_length,\n", " \"input_size\": dataset.max_encoder_length,\n", " }\n", " new_kwargs.update(kwargs) # use to pass real hyperparameters and override defaults set by dataset\n", " # example for dataset validation\n", " assert dataset.max_prediction_length == dataset.min_prediction_length, \"Decoder only supports a fixed length\"\n", " assert dataset.min_encoder_length == dataset.max_encoder_length, \"Encoder only supports a fixed length\"\n", " assert (\n", " len(dataset.time_varying_known_categoricals) == 0\n", " and len(dataset.time_varying_known_reals) == 0\n", " and len(dataset.time_varying_unknown_categoricals) == 0\n", " and len(dataset.static_categoricals) == 0\n", " and len(dataset.static_reals) == 0\n", " and len(dataset.time_varying_unknown_reals) == 1\n", " ), \"Only covariate should be in 'time_varying_unknown_reals'\"\n", "\n", " return super().from_dataset(dataset, **new_kwargs)\n", "\n", "\n", "model = FullyConnectedClassificationModel.from_dataset(classification_dataset, hidden_size=10, n_hidden_layers=2)\n", "print(ModelSummary(model, max_depth=-1))\n", "model.hparams" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "torch.Size([4, 2, 3])" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# passing x through model\n", "model(x)[\"prediction\"].shape" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Predicting multiple targets at the same time\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Training a model to predict multiple targets simultaneously is not difficult to implement. We can even employ mixed targets, i.e. a mix of categorical and continuous targets. The first step is to define a dataframe with multiple targets:\n" ] }, { "cell_type": "code", "execution_count": 19, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
target1target2grouptime_idx
00.9148550.87880100
10.8999520.94589201
20.3437210.94770302
30.1591210.59413603
40.9389190.61361504
50.6337400.66438905
60.3015080.48686906
70.5842050.76153207
80.6889110.91599508
90.3853330.45333809
100.5633180.70889310
110.1743960.96057311
120.9468800.06824112
130.3575710.34975913
140.9636210.90860314
150.4571520.71111015
160.7735430.69974716
170.4515170.74375917
180.9609910.76368618
190.9743210.66606619
200.4364440.57148620
210.7702660.41054921
220.0308380.41675322
230.5984300.70003823
240.5169090.48951424
250.1979440.04252025
260.9924300.19822326
270.5802340.05141327
280.6156180.25844428
290.2459290.29308129
\n", "
" ], "text/plain": [ " target1 target2 group time_idx\n", "0 0.914855 0.878801 0 0\n", "1 0.899952 0.945892 0 1\n", "2 0.343721 0.947703 0 2\n", "3 0.159121 0.594136 0 3\n", "4 0.938919 0.613615 0 4\n", "5 0.633740 0.664389 0 5\n", "6 0.301508 0.486869 0 6\n", "7 0.584205 0.761532 0 7\n", "8 0.688911 0.915995 0 8\n", "9 0.385333 0.453338 0 9\n", "10 0.563318 0.708893 1 0\n", "11 0.174396 0.960573 1 1\n", "12 0.946880 0.068241 1 2\n", "13 0.357571 0.349759 1 3\n", "14 0.963621 0.908603 1 4\n", "15 0.457152 0.711110 1 5\n", "16 0.773543 0.699747 1 6\n", "17 0.451517 0.743759 1 7\n", "18 0.960991 0.763686 1 8\n", "19 0.974321 0.666066 1 9\n", "20 0.436444 0.571486 2 0\n", "21 0.770266 0.410549 2 1\n", "22 0.030838 0.416753 2 2\n", "23 0.598430 0.700038 2 3\n", "24 0.516909 0.489514 2 4\n", "25 0.197944 0.042520 2 5\n", "26 0.992430 0.198223 2 6\n", "27 0.580234 0.051413 2 7\n", "28 0.615618 0.258444 2 8\n", "29 0.245929 0.293081 2 9" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "multi_target_test_data = pd.DataFrame(\n", " dict(\n", " target1=np.random.rand(30),\n", " target2=np.random.rand(30),\n", " group=np.repeat(np.arange(3), 10),\n", " time_idx=np.tile(np.arange(10), 3),\n", " )\n", ")\n", "multi_target_test_data" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "We can then simply pass a list to ``target`` keyword of the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet`. The class will choose reasonable defaults for normalizing the targets but we can also specify the normalizer explicitly by assigning an instance of :py:class:`~pytorch_forecasting.data.encoders.MultiNormalizer` to the ``target_normalizer`` keyword - for fun, lets use different ways of normalization." ] }, { "cell_type": "code", "execution_count": 20, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[tensor([[0.9610, 0.9743],\n", " [0.6889, 0.3853],\n", " [0.6337, 0.3015],\n", " [0.5802, 0.6156]]),\n", " tensor([[0.7637, 0.6661],\n", " [0.9160, 0.4533],\n", " [0.6644, 0.4869],\n", " [0.0514, 0.2584]])]" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from pytorch_forecasting.data.encoders import EncoderNormalizer, MultiNormalizer, TorchNormalizer\n", "\n", "# create the dataset from the pandas dataframe\n", "multi_target_dataset = TimeSeriesDataSet(\n", " multi_target_test_data,\n", " group_ids=[\"group\"],\n", " target=[\"target1\", \"target2\"], # USING two targets\n", " time_idx=\"time_idx\",\n", " min_encoder_length=5,\n", " max_encoder_length=5,\n", " min_prediction_length=2,\n", " max_prediction_length=2,\n", " time_varying_unknown_reals=[\"target1\", \"target2\"],\n", " target_normalizer=MultiNormalizer(\n", " [EncoderNormalizer(), TorchNormalizer()]\n", " ), # Use the NaNLabelEncoder to encode categorical target\n", ")\n", "\n", "x, y = next(iter(multi_target_dataset.to_dataloader(batch_size=4)))\n", "y[0] # target values are a list of targets" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Using multiple targets leads to a slightly different ``x`` and ``y`` of the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet`'s dataloader.\n", "``y`` is still a tuple of target and weight but the target is now a list of tensors. 
So is the ``target_scale``, the ``encoder_target`` and the ``decoder_target`` in ``x``.\n", "\n", "For this reason not every model is automatically suited to deal with multiple targets. However, it is (very often) fairly simple to extend a model to output a list of tensors (one for each target) as opposed to just one tensor (for one target). We will now modify our ``FullyConnectedModel`` to work with one or more targets.\n", "\n", "As we use multiple targets, we need to define a loss function that can handle them. The :py:class:`~pytorch_forecasting.metrics.MultiLoss` is exactly built for that purpose. It also allows weighing the losses differently. Solely for demonstration purposes, we decide to optimize the mean absolute error for the first and the symmetric mean absolute percentage error for the second target. We weight the error on the first target twice as high as the error on the second target." ] }, { "cell_type": "code", "execution_count": 21, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " | Name | Type | Params\n", "---------------------------------------------------------------\n", "0 | loss | MultiLoss | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | network | FullyConnectedModule | 374 \n", "3 | network.sequential | Sequential | 374 \n", "4 | network.sequential.0 | Linear | 110 \n", "5 | network.sequential.1 | ReLU | 0 \n", "6 | network.sequential.2 | Linear | 110 \n", "7 | network.sequential.3 | ReLU | 0 \n", "8 | network.sequential.4 | Linear | 110 \n", "9 | network.sequential.5 | ReLU | 0 \n", "10 | network.sequential.6 | Linear | 44 \n", "---------------------------------------------------------------\n", "374 Trainable params\n", "0 Non-trainable params\n", "374 Total params\n", "0.001 Total estimated model params size (MB)\n" ] }, { "data": { "text/plain": [ "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": MultiLoss(2 * MAE(), SMAPE())\n", "\"monotone_constaints\": {}\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_size\": 2\n", "\"output_transformer\": MultiNormalizer(\n", "\tnormalizers=[EncoderNormalizer(\n", "\tmethod='standard',\n", "\tcenter=True,\n", "\tmax_length=None,\n", "\ttransformation=None,\n", "\tmethod_kwargs={}\n", "), TorchNormalizer(method='standard', center=True, transformation=None, method_kwargs={})]\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"target_sizes\": [1, 1]\n", "\"weight_decay\": 0.0" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from typing import List, Union\n", "\n", "from pytorch_forecasting.metrics import MAE, SMAPE, MultiLoss\n", "from pytorch_forecasting.utils import to_list\n", "\n", "\n", "class FullyConnectedMultiTargetModel(BaseModel):\n", " def __init__(\n", " self,\n", " input_size: int,\n", " output_size: int,\n", " hidden_size: int,\n", " n_hidden_layers: int,\n", " target_sizes: Union[int, List[int]] = [],\n", " **kwargs,\n", " ):\n", " # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", " self.network = FullyConnectedModule(\n", 
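" # note: the input spans all targets over the encoder length; the output emits every timestep for every target\n",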
" input_size=self.hparams.input_size * len(to_list(self.hparams.target_sizes)),\n", " output_size=self.hparams.output_size * sum(to_list(self.hparams.target_sizes)),\n", " hidden_size=self.hparams.hidden_size,\n", " n_hidden_layers=self.hparams.n_hidden_layers,\n", " )\n", "\n", " def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " batch_size = x[\"encoder_cont\"].size(0)\n", " network_input = x[\"encoder_cont\"].view(batch_size, -1)\n", " prediction = self.network(network_input)\n", " # RESHAPE output to batch_size x n_decoder_timesteps x sum_of_target_sizes\n", " prediction = prediction.unsqueeze(-1).view(batch_size, self.hparams.output_size, sum(self.hparams.target_sizes))\n", " # RESHAPE into list of batch_size x n_decoder_timesteps x target_sizes[i] where i=1..len(target_sizes)\n", " stops = np.cumsum(self.hparams.target_sizes)\n", " starts = stops - self.hparams.target_sizes\n", " prediction = [prediction[..., start:stop] for start, stop in zip(starts, stops)]\n", " if isinstance(self.hparams.target_sizes, int): # only one target\n", " prediction = prediction[0]\n", "\n", " # rescale predictions into target space\n", " prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", " # We need to return a named tuple that at least contains the prediction.\n", " # The parameter can be directly forwarded from the input.\n", " # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", " return self.to_network_output(prediction=prediction)\n", "\n", " @classmethod\n", " def from_dataset(cls, dataset: TimeSeriesDataSet, **kwargs):\n", " # By default only handle targets of size one here, categorical targets would be of larger size\n", " new_kwargs = {\n", " \"target_sizes\": [1] * len(to_list(dataset.target)),\n", " \"output_size\": dataset.max_prediction_length,\n", " \"input_size\": dataset.max_encoder_length,\n", " }\n", " new_kwargs.update(kwargs) # use to pass real hyperparameters and override defaults set by dataset\n", " # example for dataset validation\n", " assert dataset.max_prediction_length == dataset.min_prediction_length, \"Decoder only supports a fixed length\"\n", " assert dataset.min_encoder_length == dataset.max_encoder_length, \"Encoder only supports a fixed length\"\n", " assert (\n", " len(dataset.time_varying_known_categoricals) == 0\n", " and len(dataset.time_varying_known_reals) == 0\n", " and len(dataset.time_varying_unknown_categoricals) == 0\n", " and len(dataset.static_categoricals) == 0\n", " and len(dataset.static_reals) == 0\n", " and len(dataset.time_varying_unknown_reals)\n", " == len(dataset.target_names) # Expect as as many unknown reals as targets\n", " ), \"Only covariate should be in 'time_varying_unknown_reals'\"\n", "\n", " return super().from_dataset(dataset, **new_kwargs)\n", "\n", "\n", "model = FullyConnectedMultiTargetModel.from_dataset(\n", " multi_target_dataset,\n", " hidden_size=10,\n", " n_hidden_layers=2,\n", " loss=MultiLoss(metrics=[MAE(), SMAPE()], weights=[2.0, 1.0]),\n", ")\n", "print(ModelSummary(model, max_depth=-1))\n", "model.hparams" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Now, let's pass some data through our model and calculate the loss.\n" ] }, { "cell_type": "code", "execution_count": 22, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Output(prediction=[tensor([[[0.6287],\n", " [0.6112]],\n", "\n", " [[0.5641],\n", " 
[0.5441]],\n", "\n", " [[0.6994],\n", " [0.6710]],\n", "\n", " [[0.5038],\n", " [0.4876]]], grad_fn=), tensor([[[0.6652],\n", " [0.4931]],\n", "\n", " [[0.6647],\n", " [0.4883]],\n", "\n", " [[0.6632],\n", " [0.4920]],\n", "\n", " [[0.6718],\n", " [0.4899]]], grad_fn=)])" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "out = model(x)\n", "out" ] }, { "cell_type": "code", "execution_count": 23, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor(0.8016, grad_fn=)" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.loss(out[\"prediction\"], y)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Using covariates\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Now that we have established the basics, we can move on to more advanced use cases, e.g. how we can make use of covariates - static and continuous alike. We can leverage the :py:class:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates` for this. The difference to the :py:class:`~pytorch_forecasting.models.base_model.BaseModel` is a :py:meth:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates.from_dataset` method that pre-defines hyperparameters for architectures with covariates.\n", "\n", ".. autoclass:: pytorch_forecasting.models.base_model.BaseModelWithCovariates\n", " :noindex:\n", " :members: from_dataset\n", " \n", "\n", "Here is an excerpt from the BaseModelWithCovariates docstring to copy:" ] }, { "cell_type": "code", "execution_count": 24, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", " Model with additional methods using covariates.\n", "\n", " Assumes the following hyperparameters:\n", "\n", " Args:\n", " static_categoricals (List[str]): names of static categorical variables\n", " static_reals (List[str]): names of static continuous variables\n", " time_varying_categoricals_encoder (List[str]): names of categorical variables for encoder\n", " time_varying_categoricals_decoder (List[str]): names of categorical variables for decoder\n", " time_varying_reals_encoder (List[str]): names of continuous variables for encoder\n", " time_varying_reals_decoder (List[str]): names of continuous variables for decoder\n", " x_reals (List[str]): order of continuous variables in tensor passed to forward function\n", " x_categoricals (List[str]): order of categorical variables in tensor passed to forward function\n", " embedding_sizes (Dict[str, Tuple[int, int]]): dictionary mapping categorical variables to tuple of integers\n", " where the first integer denotes the number of categorical classes and the second the embedding size\n", " embedding_labels (Dict[str, List[str]]): dictionary mapping (string) indices to list of categorical labels\n", " embedding_paddings (List[str]): names of categorical variables for which label 0 is always mapped to an\n", " embedding vector filled with zeros\n", " categorical_groups (Dict[str, List[str]]): dictionary of categorical variables that are grouped together and\n", " can also take multiple values simultaneously (e.g. holiday during octoberfest). 
They should be implemented\n", " as bag of embeddings\n", " \n" ] } ], "source": [ "from pytorch_forecasting.models.base_model import BaseModelWithCovariates\n", "\n", "print(BaseModelWithCovariates.__doc__)" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "We will now implement the model. A helpful module is the :py:class:`~pytorch_forecasting.models.nn.embeddings.MultiEmbedding` which can be used to embed categorical features. It is compliant with the :py:class:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet`, i.e. it supports bags of embeddings that are useful for embeddings where multiple categories can occur at the same time such as holidays. Again, we will create a fully-connected network. It is easy to recycle our ``FullyConnectedModule`` by simply setting ``input_size`` to the number of encoder time steps times the number of features instead of simply the number of encoder time steps." ] }, { "cell_type": "code", "execution_count": 25, "metadata": {}, "outputs": [], "source": [ "from typing import Dict, List, Tuple\n", "\n", "from pytorch_forecasting.models.nn import MultiEmbedding\n", "\n", "\n", "class FullyConnectedModelWithCovariates(BaseModelWithCovariates):\n", " def __init__(\n", " self,\n", " input_size: int,\n", " output_size: int,\n", " hidden_size: int,\n", " n_hidden_layers: int,\n", " x_reals: List[str],\n", " x_categoricals: List[str],\n", " embedding_sizes: Dict[str, Tuple[int, int]],\n", " embedding_labels: Dict[str, List[str]],\n", " static_categoricals: List[str],\n", " static_reals: List[str],\n", " time_varying_categoricals_encoder: List[str],\n", " time_varying_categoricals_decoder: List[str],\n", " time_varying_reals_encoder: List[str],\n", " time_varying_reals_decoder: List[str],\n", " embedding_paddings: List[str],\n", " categorical_groups: Dict[str, List[str]],\n", " **kwargs,\n", " ):\n", " # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", "\n", " # create embedder - can be fed with x[\"encoder_cat\"] or x[\"decoder_cat\"] and will return\n", " # dictionary of category names mapped to embeddings\n", " self.input_embeddings = MultiEmbedding(\n", " embedding_sizes=self.hparams.embedding_sizes,\n", " categorical_groups=self.hparams.categorical_groups,\n", " embedding_paddings=self.hparams.embedding_paddings,\n", " x_categoricals=self.hparams.x_categoricals,\n", " max_embedding_size=self.hparams.hidden_size,\n", " )\n", "\n", " # calculate the size of all concatenated embeddings + continuous variables\n", " n_features = sum(\n", " embedding_size for classes_size, embedding_size in self.hparams.embedding_sizes.values()\n", " ) + len(self.reals)\n", "\n", " # create network that will be fed with continuous variables and embeddings\n", " self.network = FullyConnectedModule(\n", " input_size=self.hparams.input_size * n_features,\n", " output_size=self.hparams.output_size,\n", " hidden_size=self.hparams.hidden_size,\n", " n_hidden_layers=self.hparams.n_hidden_layers,\n", " )\n", "\n", " def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " batch_size = x[\"encoder_lengths\"].size(0)\n", " embeddings = self.input_embeddings(x[\"encoder_cat\"]) # returns dictionary with embedding tensors\n", " network_input = 
torch.cat(\n", "            [x[\"encoder_cont\"]]\n", "            + [\n", "                emb\n", "                for name, emb in embeddings.items()\n", "                if name in self.encoder_variables or name in self.static_variables\n", "            ],\n", "            dim=-1,\n", "        )\n", "        prediction = self.network(network_input.view(batch_size, -1))\n", "\n", "        # rescale predictions into target space\n", "        prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", "        # We need to return a dictionary that at least contains the prediction.\n", "        # The parameter can be directly forwarded from the input.\n", "        # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", "        return self.to_network_output(prediction=prediction)\n", "\n", "    @classmethod\n", "    def from_dataset(cls, dataset: TimeSeriesDataSet, **kwargs):\n", "        new_kwargs = {\n", "            \"output_size\": dataset.max_prediction_length,\n", "            \"input_size\": dataset.max_encoder_length,\n", "        }\n", "        new_kwargs.update(kwargs)  # use to pass real hyperparameters and override defaults set by dataset\n", "        # example for dataset validation\n", "        assert dataset.max_prediction_length == dataset.min_prediction_length, \"Decoder only supports a fixed length\"\n", "        assert dataset.min_encoder_length == dataset.max_encoder_length, \"Encoder only supports a fixed length\"\n", "\n", "        return super().from_dataset(dataset, **new_kwargs)" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "We have used here additional hooks available through the :py:class:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates` such as ``self.static_variables`` or ``self.encoder_variables`` that can be readily determined from the hyperparameters. See the documentation of the :py:class:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates` class for all available additions to the :py:class:`~pytorch_forecasting.models.base_model.BaseModel`.\n", "\n", "When the model receives its input ``x``, you can use the hyperparameters and the variable lists provided by the :py:class:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates` to identify the different variables. This is important as ``x[\"encoder_cat\"].size(2) == x[\"decoder_cat\"].size(2)`` and ``x[\"encoder_cont\"].size(2) == x[\"decoder_cont\"].size(2)``. This means all variables are passed to the encoder and decoder even if some are not allowed to be used by the decoder as they are not known in the future. The order of variables in ``x[\"encoder_cont\"]`` / ``x[\"decoder_cont\"]`` and ``x[\"encoder_cat\"]`` / ``x[\"decoder_cat\"]`` is determined by the hyperparameters ``x_reals`` and ``x_categoricals``. Consequently, you can identify, for example, the position of all continuous decoder variables with ``[self.hparams.x_reals.index(name) for name in self.hparams.time_varying_reals_decoder]``." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Note that the model does not make use of the known covariates in the decoder - this is obviously suboptimal but not in scope of this tutorial. Anyway, let us create a new dataset with categorical variables and see how the model can be instantiated from it.\n" ] }, { "cell_type": "code", "execution_count": 26, "metadata": {}, "outputs": [ { "data": { "text/html": [ "
\n", "\n", "\n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", " \n", "
valuegrouptime_idxcategorical_covariatereal_covariate
00.94460400a0.405124
10.64074901b0.573697
20.01913302b0.253981
30.74983703a0.200379
40.71482404a0.297402
50.34958305b0.822654
60.28039206a0.857269
70.33307107b0.744103
80.02468108b0.084565
90.33907609a0.108766
100.61636410b0.965863
110.65018011b0.339208
120.10908712b0.840201
130.50265213a0.938904
140.99395914a0.730369
150.67132215b0.611059
160.85847916b0.885494
170.17871617a0.894173
180.86069118b0.987288
190.74990519a0.494003
200.78331720a0.176965
210.75645321a0.505112
220.41897422b0.151147
230.16182023a0.160465
240.22411624b0.504209
250.79923525b0.273152
260.50100726b0.151468
270.96315427a0.778906
280.19895528b0.016670
290.17224729b0.818567
\n", "
" ], "text/plain": [ " value group time_idx categorical_covariate real_covariate\n", "0 0.944604 0 0 a 0.405124\n", "1 0.640749 0 1 b 0.573697\n", "2 0.019133 0 2 b 0.253981\n", "3 0.749837 0 3 a 0.200379\n", "4 0.714824 0 4 a 0.297402\n", "5 0.349583 0 5 b 0.822654\n", "6 0.280392 0 6 a 0.857269\n", "7 0.333071 0 7 b 0.744103\n", "8 0.024681 0 8 b 0.084565\n", "9 0.339076 0 9 a 0.108766\n", "10 0.616364 1 0 b 0.965863\n", "11 0.650180 1 1 b 0.339208\n", "12 0.109087 1 2 b 0.840201\n", "13 0.502652 1 3 a 0.938904\n", "14 0.993959 1 4 a 0.730369\n", "15 0.671322 1 5 b 0.611059\n", "16 0.858479 1 6 b 0.885494\n", "17 0.178716 1 7 a 0.894173\n", "18 0.860691 1 8 b 0.987288\n", "19 0.749905 1 9 a 0.494003\n", "20 0.783317 2 0 a 0.176965\n", "21 0.756453 2 1 a 0.505112\n", "22 0.418974 2 2 b 0.151147\n", "23 0.161820 2 3 a 0.160465\n", "24 0.224116 2 4 b 0.504209\n", "25 0.799235 2 5 b 0.273152\n", "26 0.501007 2 6 b 0.151468\n", "27 0.963154 2 7 a 0.778906\n", "28 0.198955 2 8 b 0.016670\n", "29 0.172247 2 9 b 0.818567" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import numpy as np\n", "import pandas as pd\n", "\n", "from pytorch_forecasting import TimeSeriesDataSet\n", "\n", "test_data_with_covariates = pd.DataFrame(\n", " dict(\n", " # as before\n", " value=np.random.rand(30),\n", " group=np.repeat(np.arange(3), 10),\n", " time_idx=np.tile(np.arange(10), 3),\n", " # now adding covariates\n", " categorical_covariate=np.random.choice([\"a\", \"b\"], size=30),\n", " real_covariate=np.random.rand(30),\n", " )\n", ").astype(\n", " dict(group=str)\n", ") # categorical covariates have to be of string type\n", "test_data_with_covariates" ] }, { "cell_type": "code", "execution_count": 27, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " | Name | Type | Params\n", "--------------------------------------------------------------------------------------------\n", "0 | loss | SMAPE | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | input_embeddings | MultiEmbedding | 11 \n", "3 | input_embeddings.embeddings | ModuleDict | 11 \n", "4 | input_embeddings.embeddings.group | Embedding | 9 \n", "5 | input_embeddings.embeddings.categorical_covariate | Embedding | 2 \n", "6 | network | FullyConnectedModule | 552 \n", "7 | network.sequential | Sequential | 552 \n", "8 | network.sequential.0 | Linear | 310 \n", "9 | network.sequential.1 | ReLU | 0 \n", "10 | network.sequential.2 | Linear | 110 \n", "11 | network.sequential.3 | ReLU | 0 \n", "12 | network.sequential.4 | Linear | 110 \n", "13 | network.sequential.5 | ReLU | 0 \n", "14 | network.sequential.6 | Linear | 22 \n", "--------------------------------------------------------------------------------------------\n", "563 Trainable params\n", "0 Non-trainable params\n", "563 Total params\n", "0.002 Total estimated model params size (MB)\n" ] }, { "data": { "text/plain": [ "\"categorical_groups\": {}\n", "\"embedding_labels\": {'group': {'0': 0, '1': 1, '2': 2}, 'categorical_covariate': {'a': 0, 'b': 1}}\n", "\"embedding_paddings\": []\n", "\"embedding_sizes\": {'group': (3, 3), 'categorical_covariate': (2, 1)}\n", "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": SMAPE()\n", "\"monotone_constaints\": {}\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", 
"\"output_size\": 2\n", "\"output_transformer\": GroupNormalizer(\n", "\tmethod='standard',\n", "\tgroups=[],\n", "\tcenter=True,\n", "\tscale_by_group=False,\n", "\ttransformation='relu',\n", "\tmethod_kwargs={}\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"static_categoricals\": ['group']\n", "\"static_reals\": []\n", "\"time_varying_categoricals_decoder\": ['categorical_covariate']\n", "\"time_varying_categoricals_encoder\": ['categorical_covariate']\n", "\"time_varying_reals_decoder\": ['real_covariate']\n", "\"time_varying_reals_encoder\": ['real_covariate', 'value']\n", "\"weight_decay\": 0.0\n", "\"x_categoricals\": ['group', 'categorical_covariate']\n", "\"x_reals\": ['real_covariate', 'value']" ] }, "execution_count": 27, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# create the dataset from the pandas dataframe\n", "dataset_with_covariates = TimeSeriesDataSet(\n", " test_data_with_covariates,\n", " group_ids=[\"group\"],\n", " target=\"value\",\n", " time_idx=\"time_idx\",\n", " min_encoder_length=5,\n", " max_encoder_length=5,\n", " min_prediction_length=2,\n", " max_prediction_length=2,\n", " time_varying_unknown_reals=[\"value\"],\n", " time_varying_known_reals=[\"real_covariate\"],\n", " time_varying_known_categoricals=[\"categorical_covariate\"],\n", " static_categoricals=[\"group\"],\n", ")\n", "\n", "model = FullyConnectedModelWithCovariates.from_dataset(dataset_with_covariates, hidden_size=10, n_hidden_layers=2)\n", "print(ModelSummary(model, max_depth=-1)) # print model summary\n", "model.hparams" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To test that the model could be trained, pass a sample batch.\n" ] }, { "cell_type": "code", "execution_count": 28, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "Output(prediction=tensor([[0.6245, 0.5642],\n", " [0.6215, 0.5603],\n", " [0.6228, 0.5637],\n", " [0.6277, 0.5627]], grad_fn=))" ] }, "execution_count": 28, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x, y = next(iter(dataset_with_covariates.to_dataloader(batch_size=4))) # generate batch\n", "model(x) # pass batch through model" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Implementing an autoregressive / recurrent model\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Often time series models are autoregressive, i.e. one does not make `n` predictions for all future steps in one function call but predicts ``n`` times one step ahead. PyTorch Forecasting comes with a\n", ":py:class:`~pytorch_forecasting.models.base_model.AutoRegressiveBaseModel` and a :py:class:`~pytorch_forecasting.models.base_model.AutoRegressiveBaseModelWithCovariates` for such models.\n", "\n", ".. autoclass:: pytorch_forecasting.models.base_model.AutoRegressiveBaseModel\n", " :noindex:\n", "\n", "In this section, we will implement a simple LSTM model that could be easily extended to work with covariates. Note that because we do not handle covariates, lagged targets cannot be incorporated in this network. We use an implementation of the :py:class:`~pytorch_forecasting.models.nn.rnn.LSTM` that can handle zero-length sequences but otherwise 100% mirrors the PyTorch-native implementation." 
] }, { "cell_type": "code", "execution_count": 29, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " | Name | Type | Params\n", "-----------------------------------------------\n", "0 | loss | SMAPE | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | lstm | LSTM | 1.4 K \n", "3 | output_layer | Linear | 11 \n", "-----------------------------------------------\n", "1.4 K Trainable params\n", "0 Non-trainable params\n", "1.4 K Total params\n", "0.006 Total estimated model params size (MB)\n" ] }, { "data": { "text/plain": [ "\"dropout\": 0.1\n", "\"hidden_size\": 10\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": SMAPE()\n", "\"monotone_constaints\": {}\n", "\"n_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_transformer\": GroupNormalizer(\n", "\tmethod='standard',\n", "\tgroups=[],\n", "\tcenter=True,\n", "\tscale_by_group=False,\n", "\ttransformation=None,\n", "\tmethod_kwargs={}\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"target\": value\n", "\"target_lags\": {}\n", "\"weight_decay\": 0.0" ] }, "execution_count": 29, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from torch.nn.utils import rnn\n", "\n", "from pytorch_forecasting.models.base_model import AutoRegressiveBaseModel\n", "from pytorch_forecasting.models.nn import LSTM\n", "\n", "\n", "class LSTMModel(AutoRegressiveBaseModel):\n", " def __init__(\n", " self,\n", " target: str,\n", " target_lags: Dict[str, Dict[str, int]],\n", " n_layers: int,\n", " hidden_size: int,\n", " dropout: float = 0.1,\n", " **kwargs,\n", " ):\n", " # arguments target and target_lags are required for autoregressive models\n", " # even though target_lags cannot be used without covariates\n", " # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", "\n", " # use version of LSTM that can handle zero-length sequences\n", " self.lstm = LSTM(\n", " hidden_size=self.hparams.hidden_size,\n", " input_size=1,\n", " num_layers=self.hparams.n_layers,\n", " dropout=self.hparams.dropout,\n", " batch_first=True,\n", " )\n", " self.output_layer = nn.Linear(self.hparams.hidden_size, 1)\n", "\n", " def encode(self, x: Dict[str, torch.Tensor]):\n", " # we need at least one encoding step as because the target needs to be lagged by one time step\n", " # because we use the custom LSTM, we do not have to require encoder lengths of > 1\n", " # but can handle lengths of >= 1\n", " assert x[\"encoder_lengths\"].min() >= 1\n", " input_vector = x[\"encoder_cont\"].clone()\n", " # lag target by one\n", " input_vector[..., self.target_positions] = torch.roll(\n", " input_vector[..., self.target_positions], shifts=1, dims=1\n", " )\n", " input_vector = input_vector[:, 1:] # first time step cannot be used because of lagging\n", "\n", " # determine effective encoder_length length\n", " effective_encoder_lengths = x[\"encoder_lengths\"] - 1\n", " # run through LSTM network\n", " _, hidden_state = self.lstm(\n", " input_vector, lengths=effective_encoder_lengths, enforce_sorted=False # passing the lengths directly\n", " ) # second ouput is not needed (hidden state)\n", " return 
hidden_state\n", "\n", " def decode(self, x: Dict[str, torch.Tensor], hidden_state):\n", " # again lag target by one\n", " input_vector = x[\"decoder_cont\"].clone()\n", " input_vector[..., self.target_positions] = torch.roll(\n", " input_vector[..., self.target_positions], shifts=1, dims=1\n", " )\n", " # but this time fill in missing target from encoder_cont at the first time step instead of throwing it away\n", " last_encoder_target = x[\"encoder_cont\"][\n", " torch.arange(x[\"encoder_cont\"].size(0), device=x[\"encoder_cont\"].device),\n", " x[\"encoder_lengths\"] - 1,\n", " self.target_positions.unsqueeze(-1),\n", " ].T\n", " input_vector[:, 0, self.target_positions] = last_encoder_target\n", "\n", " if self.training: # training mode\n", " lstm_output, _ = self.lstm(input_vector, hidden_state, lengths=x[\"decoder_lengths\"], enforce_sorted=False)\n", "\n", " # transform into right shape\n", " prediction = self.output_layer(lstm_output)\n", " prediction = self.transform_output(prediction, target_scale=x[\"target_scale\"])\n", "\n", " # predictions are not yet rescaled\n", " return prediction\n", "\n", " else: # prediction mode\n", " target_pos = self.target_positions\n", "\n", " def decode_one(idx, lagged_targets, hidden_state):\n", " x = input_vector[:, [idx]]\n", " # overwrite at target positions\n", " x[:, 0, target_pos] = lagged_targets[-1] # take most recent target (i.e. lag=1)\n", " lstm_output, hidden_state = self.lstm(x, hidden_state)\n", " # transform into right shape\n", " prediction = self.output_layer(lstm_output)[:, 0] # take first timestep\n", " return prediction, hidden_state\n", "\n", " # make predictions which are fed into next step\n", " output = self.decode_autoregressive(\n", " decode_one,\n", " first_target=input_vector[:, 0, target_pos],\n", " first_hidden_state=hidden_state,\n", " target_scale=x[\"target_scale\"],\n", " n_decoder_steps=input_vector.size(1),\n", " )\n", "\n", " # predictions are already rescaled\n", " return output\n", "\n", " def forward(self, x: Dict[str, torch.Tensor]) -> Dict[str, torch.Tensor]:\n", " hidden_state = self.encode(x) # encode to hidden state\n", " output = self.decode(x, hidden_state) # decode leveraging hidden state\n", "\n", " return self.to_network_output(prediction=output)\n", "\n", "\n", "model = LSTMModel.from_dataset(dataset, n_layers=2, hidden_size=10)\n", "print(ModelSummary(model, max_depth=-1))\n", "model.hparams" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "We used the :py:meth:`~pytorch_forecasting.models.base_model.BaseModel.transform_output` method to apply the inverse transformation. It is also used under the hood for re-scaling/de-normalizing predictions and leverages the ``output_transformer`` to do so. The ``output_transformer`` is the ``target_normalizer`` as used in the dataset. 
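" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For intuition, here is a hedged, torch-only sketch of what such a re-scaling does. It assumes the common `(center, scale)` layout of `x[\"target_scale\"]` and plain standard normalization; all numbers are illustrative and this is not the library implementation.\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "\n", "# illustrative only: one (center, scale) pair per sample, as in x[\"target_scale\"]\n", "target_scale = torch.tensor([[10.0, 2.0], [0.0, 1.0]])\n", "normalized_prediction = torch.zeros(2, 3)  # batch_size x n_decoder_steps\n", "\n", "center = target_scale[:, 0].unsqueeze(-1)\n", "scale = target_scale[:, 1].unsqueeze(-1)\n", "# standard normalization is inverted as: prediction * scale + center\n", "normalized_prediction * scale + center" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "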
When initializing the model from the dataset, it is automatically copied to the model.\n", "\n", "We can now check that training and inference mode deliver the same result in terms of prediction shape:" ] }, { "cell_type": "code", "execution_count": 30, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "prediction shape in training: torch.Size([4, 2, 1])\n", "prediction shape in inference: torch.Size([4, 2, 1])\n" ] } ], "source": [ "x, y = next(iter(dataloader))\n", "\n", "print(\n", "    \"prediction shape in training:\", model(x)[\"prediction\"].size()\n", ")  # batch_size x decoder time steps x 1 (1 for one target dimension)\n", "model.eval()  # set model into eval mode to use autoregressive prediction\n", "print(\"prediction shape in inference:\", model(x)[\"prediction\"].size())  # should be the same as in training" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Using and defining a custom/non-trivial metric\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "To use a different metric, simply pass it to the model when initializing it (preferably via the `from_dataset()` method). For example, to use mean absolute error with our `FullyConnectedModel` from the beginning of this tutorial, type\n" ] }, { "cell_type": "code", "execution_count": 31, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": MAE()\n", "\"monotone_constaints\": {}\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_size\": 2\n", "\"output_transformer\": GroupNormalizer(\n", "\tmethod='standard',\n", "\tgroups=[],\n", "\tcenter=True,\n", "\tscale_by_group=False,\n", "\ttransformation=None,\n", "\tmethod_kwargs={}\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"weight_decay\": 0.0" ] }, "execution_count": 31, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from pytorch_forecasting.metrics import MAE\n", "\n", "model = FullyConnectedModel.from_dataset(dataset, hidden_size=10, n_hidden_layers=2, loss=MAE())\n", "model.hparams" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Note that some metrics might require a certain form of model prediction, e.g. quantile prediction assumes an output of shape `batch_size x n_decoder_timesteps x n_quantiles` instead of `batch_size x n_decoder_timesteps`. For the `FullyConnectedModel`, this means that we need to use a modified `FullyConnectedModule` network.
Here `n_outputs` corresponds to the number of quantiles.\n" ] }, { "cell_type": "code", "execution_count": 32, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "torch.Size([20, 2, 7])" ] }, "execution_count": 32, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import torch\n", "from torch import nn\n", "\n", "\n", "class FullyConnectedMultiOutputModule(nn.Module):\n", "    def __init__(self, input_size: int, output_size: int, hidden_size: int, n_hidden_layers: int, n_outputs: int):\n", "        super().__init__()\n", "\n", "        # input layer\n", "        module_list = [nn.Linear(input_size, hidden_size), nn.ReLU()]\n", "        # hidden layers\n", "        for _ in range(n_hidden_layers):\n", "            module_list.extend([nn.Linear(hidden_size, hidden_size), nn.ReLU()])\n", "        # output layer\n", "        self.n_outputs = n_outputs\n", "        module_list.append(\n", "            nn.Linear(hidden_size, output_size * n_outputs)\n", "        )  # <<<<<<<< modified: replaced output_size with output_size * n_outputs\n", "\n", "        self.sequential = nn.Sequential(*module_list)\n", "\n", "    def forward(self, x: torch.Tensor) -> torch.Tensor:\n", "        # x of shape: batch_size x n_timesteps_in\n", "        # output of shape batch_size x n_timesteps_out\n", "        return self.sequential(x).reshape(x.size(0), -1, self.n_outputs)  # <<<<<<<< modified: added reshape\n", "\n", "\n", "# test that network works as intended\n", "network = FullyConnectedMultiOutputModule(input_size=5, output_size=2, hidden_size=10, n_hidden_layers=2, n_outputs=7)\n", "network(torch.rand(20, 5)).shape  # <<<<<<<<<< instead of shape (20, 2), returning additional dimension for quantiles" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Using the above-defined ``FullyConnectedMultiOutputModule``, we could create a new model and use :py:class:`~pytorch_forecasting.metrics.QuantileLoss`. Note that you would have to align ``n_outputs`` with the number of quantiles in the :py:class:`~pytorch_forecasting.metrics.QuantileLoss` class either manually or by making use of the `from_dataset()` method. If you want to switch back to a loss on a single output such as for :py:class:`~pytorch_forecasting.metrics.MAE`, simply set ``n_outputs=1`` as all PyTorch Forecasting metrics can handle the additional third dimension as long as it is of size 1." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Implement a new metric\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "To implement a new metric, you simply need to inherit from the :py:class:`~pytorch_forecasting.metrics.MultiHorizonMetric` and define the loss function. The :py:class:`~pytorch_forecasting.metrics.MultiHorizonMetric` handles everything from weighting to masking values for you. E.g. the mean absolute error is implemented as" ] }, { "cell_type": "code", "execution_count": 33, "metadata": {}, "outputs": [], "source": [ "from pytorch_forecasting.metrics import MultiHorizonMetric\n", "\n", "\n", "class MAE(MultiHorizonMetric):\n", "    def loss(self, y_pred, target):\n", "        loss = (self.to_prediction(y_pred) - target).abs()\n", "        return loss" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "You might notice the :py:meth:`~pytorch_forecasting.metrics.Metric.to_prediction` method. Generally speaking, it converts ``y_pred`` to a point-prediction. By default, this means that it removes the third dimension from ``y_pred`` if there is one.
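" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "As a quick, hedged sanity check of that behaviour, we can call the `loss()` method of the `MAE` metric defined above directly; the shapes below are purely illustrative:\n" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "mae = MAE()\n", "y_pred = torch.rand(4, 2, 1)  # point prediction with a trailing dimension of size 1\n", "target = torch.rand(4, 2)\n", "# to_prediction() drops the trailing dimension, leaving element-wise absolute errors\n", "mae.loss(y_pred, target).shape" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "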
For most metrics, this is exactly what you need.\n", "\n", "For custom :py:class:`~pytorch_forecasting.metrics.DistributionLoss` metrics, different methods need to be implemented.\n", "\n", ".. autoclass:: pytorch_forecasting.metrics.DistributionLoss\n", "    :members: map_x_to_distribution, rescale_parameters\n", "    :noindex:" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Model output cannot be readily converted to prediction\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Sometimes a network's ``forward()`` output does not trivially map to a prediction. For example, this is the case if you predict the parameters of a distribution as is the case for all classes deriving from :py:class:`~pytorch_forecasting.metrics.DistributionLoss`. In particular, this means that you need to handle training and prediction differently. Converting the parameters to predictions is typically implemented by the metric's ``to_prediction()`` method.\n", "\n", "We will now study the case of the :py:class:`~pytorch_forecasting.metrics.NormalDistributionLoss`. It requires us to predict the ``mean`` and the ``scale`` of the normal distribution. We can do so by leveraging our ``FullyConnectedMultiOutputModule`` class that we used for predicting multiple quantiles." ] }, { "cell_type": "code", "execution_count": 34, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " | Name | Type | Params\n", "--------------------------------------------------------------------------\n", "0 | loss | NormalDistributionLoss | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | network | FullyConnectedMultiOutputModule | 324 \n", "3 | network.sequential | Sequential | 324 \n", "4 | network.sequential.0 | Linear | 60 \n", "5 | network.sequential.1 | ReLU | 0 \n", "6 | network.sequential.2 | Linear | 110 \n", "7 | network.sequential.3 | ReLU | 0 \n", "8 | network.sequential.4 | Linear | 110 \n", "9 | network.sequential.5 | ReLU | 0 \n", "10 | network.sequential.6 | Linear | 44 \n", "--------------------------------------------------------------------------\n", "324 Trainable params\n", "0 Non-trainable params\n", "324 Total params\n", "0.001 Total estimated model params size (MB)\n" ] }, { "data": { "text/plain": [ "\"hidden_size\": 10\n", "\"input_size\": 5\n", "\"learning_rate\": 0.001\n", "\"log_gradient_flow\": False\n", "\"log_interval\": -1\n", "\"log_val_interval\": -1\n", "\"logging_metrics\": ModuleList()\n", "\"loss\": SMAPE()\n", "\"monotone_constaints\": {}\n", "\"n_hidden_layers\": 2\n", "\"optimizer\": ranger\n", "\"optimizer_params\": None\n", "\"output_size\": 2\n", "\"output_transformer\": GroupNormalizer(\n", "\tmethod='standard',\n", "\tgroups=[],\n", "\tcenter=True,\n", "\tscale_by_group=False,\n", "\ttransformation=None,\n", "\tmethod_kwargs={}\n", ")\n", "\"reduce_on_plateau_min_lr\": 1e-05\n", "\"reduce_on_plateau_patience\": 1000\n", "\"reduce_on_plateau_reduction\": 2.0\n", "\"weight_decay\": 0.0" ] }, "execution_count": 34, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from copy import copy\n", "\n", "from pytorch_forecasting.metrics import NormalDistributionLoss\n", "\n", "\n", "class FullyConnectedForDistributionLossModel(BaseModel):  # we inherit the `from_dataset` method\n", "    def __init__(self, input_size: int, output_size: int, hidden_size: int, n_hidden_layers: int, **kwargs):\n", "        # saves arguments in signature to `.hparams` attribute, mandatory call - do not skip
this\n", " self.save_hyperparameters()\n", " # pass additional arguments to BaseModel.__init__, mandatory call - do not skip this\n", " super().__init__(**kwargs)\n", " self.network = FullyConnectedMultiOutputModule(\n", " input_size=self.hparams.input_size,\n", " output_size=self.hparams.output_size,\n", " hidden_size=self.hparams.hidden_size,\n", " n_hidden_layers=self.hparams.n_hidden_layers,\n", " n_outputs=2, # <<<<<<<< we predict two outputs for mean and scale of the normal distribution\n", " )\n", " self.loss = NormalDistributionLoss()\n", "\n", " @classmethod\n", " def from_dataset(cls, dataset: TimeSeriesDataSet, **kwargs):\n", " new_kwargs = {\n", " \"output_size\": dataset.max_prediction_length,\n", " \"input_size\": dataset.max_encoder_length,\n", " }\n", " new_kwargs.update(kwargs) # use to pass real hyperparameters and override defaults set by dataset\n", " # example for dataset validation\n", " assert dataset.max_prediction_length == dataset.min_prediction_length, \"Decoder only supports a fixed length\"\n", " assert dataset.min_encoder_length == dataset.max_encoder_length, \"Encoder only supports a fixed length\"\n", " assert (\n", " len(dataset.time_varying_known_categoricals) == 0\n", " and len(dataset.time_varying_known_reals) == 0\n", " and len(dataset.time_varying_unknown_categoricals) == 0\n", " and len(dataset.static_categoricals) == 0\n", " and len(dataset.static_reals) == 0\n", " and len(dataset.time_varying_unknown_reals) == 1\n", " and dataset.time_varying_unknown_reals[0] == dataset.target\n", " ), \"Only covariate should be the target in 'time_varying_unknown_reals'\"\n", "\n", " return super().from_dataset(dataset, **new_kwargs)\n", "\n", " def forward(self, x: Dict[str, torch.Tensor], n_samples: int = None) -> Dict[str, torch.Tensor]:\n", " # x is a batch generated based on the TimeSeriesDataset\n", " network_input = x[\"encoder_cont\"].squeeze(-1)\n", " prediction = self.network(network_input) # shape batch_size x n_decoder_steps x 2\n", " # we need to scale the parameters to real space\n", " prediction = self.transform_output(\n", " prediction=prediction,\n", " target_scale=x[\"target_scale\"],\n", " )\n", " if n_samples is not None:\n", " # sample from distribution\n", " prediction = self.loss.sample(prediction, n_samples)\n", " # The conversion to a named tuple can be directly achieved with the `to_network_output` function.\n", " return self.to_network_output(prediction=prediction)\n", "\n", "\n", "model = FullyConnectedForDistributionLossModel.from_dataset(dataset, hidden_size=10, n_hidden_layers=2)\n", "print(ModelSummary(model, max_depth=-1))\n", "model.hparams" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "You notice that not much changes. 
All the magic is implemented in the metric itself, which knows how to re-scale the network output to distribution \"parameters\" and how to transform those \"parameters\" into \"predictions\", using the model's ``transform_output()`` method and the metric's ``to_prediction()`` method under the hood, respectively.\n", "\n", "We can now test that the network works as expected:" ] }, { "cell_type": "code", "execution_count": 35, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "tensor([2, 2, 2, 2])" ] }, "execution_count": 35, "metadata": {}, "output_type": "execute_result" } ], "source": [ "x[\"decoder_lengths\"]" ] }, { "cell_type": "code", "execution_count": 36, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "parameter prediction shape:  torch.Size([4, 2, 4])\n", "sample prediction shape:  torch.Size([4, 2, 200])\n" ] } ], "source": [ "x, y = next(iter(dataloader))\n", "\n", "print(\"parameter prediction shape: \", model(x)[\"prediction\"].size())\n", "model.eval()  # set model into eval mode for sampling\n", "print(\"sample prediction shape: \", model(x, n_samples=200)[\"prediction\"].size())" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "To run inference, you can still use the :py:meth:`~pytorch_forecasting.models.base_model.BaseModel.predict()` method as additional arguments are passed to the metric's ``to_quantiles()`` method with the ``mode_kwargs`` parameter, e.g. we can execute the following line to generate 100 traces and subsequently calculate quantiles." ] }, { "cell_type": "code", "execution_count": 37, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "GPU available: True (mps), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", "HPU available: False, using: 0 HPUs\n" ] }, { "data": { "text/plain": [ "torch.Size([12, 2, 7])" ] }, "execution_count": 37, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.predict(dataloader, mode=\"quantiles\", mode_kwargs=dict(n_samples=100)).shape" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext", "tags": [] }, "source": [ "The returned quantiles are here determined by the quantiles defined in the loss function and can be modified by passing a list of quantiles at initialization.\n", "\n", "Note that the sampling in the network's ``forward()`` method is not strictly necessary here. However, e.g. for stochastic, autoregressive networks such as :py:class:`~pytorch_forecasting.models.deepar.DeepAR`, predicting should be done by passing ``n_samples=100`` directly to the predict method. Samples should be either aggregated with ``mode_kwargs=dict(use_metric=False)`` (added automatically) or extracted directly with ``mode=(\"raw\", \"prediction\")`` (equivalent to ``mode=\"samples\"`` in DeepAR)."
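, "\n", "Conceptually, turning sampled traces into quantiles is just a quantile reduction over the sample dimension. The torch-only sketch below illustrates the idea on random numbers; it is not the library implementation and all shapes are illustrative:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import torch\n", "\n", "samples = torch.randn(4, 2, 200)  # batch_size x n_decoder_steps x n_samples\n", "quantiles = torch.tensor([0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98])\n", "\n", "# reduce over the sample dimension, then move the quantile dimension to the back\n", "torch.quantile(samples, quantiles, dim=-1).permute(1, 2, 0).shape"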
] }, { "cell_type": "code", "execution_count": 38, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98]" ] }, "execution_count": 38, "metadata": {}, "output_type": "execute_result" } ], "source": [ "model.loss.quantiles" ] }, { "cell_type": "code", "execution_count": 39, "metadata": {}, "outputs": [ { "data": { "text/plain": [ "[0.2, 0.8]" ] }, "execution_count": 39, "metadata": {}, "output_type": "execute_result" } ], "source": [ "NormalDistributionLoss(quantiles=[0.2, 0.8]).quantiles" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Adding custom plotting and interpretation\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "PyTorch Forecasting supports plotting of predictions and interpretations. The figures can also be logged as part of monitoring training progress using tensorboard. Sometimes, the output of the network cannot be directly plotted together with the actually observed time series. In these cases (such as our ``FullyConnectedForDistributionLossModel`` from the previous section), we need to fix the plotting function. Further, sometimes we want to visualize certain properties of the network every other batch or after every epoch. It is easy to make this happen with PyTorch Forecasting and the `LightningModule `_ on which the :py:class:`~pytorch_forecasting.models.base_model.BaseModel` is based." ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "The :py:meth:`~pytorch_forecasting.models.base_model.BaseModel.log_interval` property provides a log_interval that switches automatically between the hyperparameters ``log_interval`` or ``log_val_interval`` depending if the model is in training or validation mode. If it is larger than 0, logging is enabled and if ``batch_idx % log_interval == 0`` for a batch, logging for that batch is triggered. You can even set it to a number smaller than 1 leading to multiple logging events during a single batch." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Log often whenever an example prediction vs actuals plot is created\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "One of the easiest ways to log a figure regularly, is overriding the :py:meth:`~pytorch_forecasting.models.base_model.BaseModel.plot_prediction` method, e.g. to add something to the generated plot.\n", "\n", "In the following example, we will add an additional line indicating attention to the figure logged:" ] }, { "cell_type": "code", "execution_count": 40, "metadata": {}, "outputs": [], "source": [ "import matplotlib.pyplot as plt\n", "\n", "\n", "def plot_prediction(\n", " self,\n", " x: Dict[str, torch.Tensor],\n", " out: Dict[str, torch.Tensor],\n", " idx: int,\n", " plot_attention: bool = True,\n", " add_loss_to_title: bool = False,\n", " show_future_observed: bool = True,\n", " ax=None,\n", ") -> plt.Figure:\n", " \"\"\"\n", " Plot actuals vs prediction and attention\n", "\n", " Args:\n", " x (Dict[str, torch.Tensor]): network input\n", " out (Dict[str, torch.Tensor]): network output\n", " idx (int): sample index\n", " plot_attention: if to plot attention on secondary axis\n", " add_loss_to_title: if to add loss to title. Default to False.\n", " show_future_observed: if to show actuals for future. 
Defaults to True.\n", "        ax: matplotlib axes to plot on\n", "\n", "    Returns:\n", "        plt.Figure: matplotlib figure\n", "    \"\"\"\n", "    # plot prediction as normal\n", "    fig = super().plot_prediction(\n", "        x, out, idx=idx, add_loss_to_title=add_loss_to_title, show_future_observed=show_future_observed, ax=ax\n", "    )\n", "\n", "    # add attention on secondary axis\n", "    if plot_attention:\n", "        interpretation = self.interpret_output(out)\n", "        ax = fig.axes[0]\n", "        ax2 = ax.twinx()\n", "        ax2.set_ylabel(\"Attention\")\n", "        encoder_length = x[\"encoder_lengths\"][idx]\n", "        ax2.plot(\n", "            torch.arange(-encoder_length, 0),\n", "            interpretation[\"attention\"][idx, :encoder_length].detach().cpu(),\n", "            alpha=0.2,\n", "            color=\"k\",\n", "        )\n", "    fig.tight_layout()\n", "    return fig" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "If you want to add a completely new figure, override the :py:meth:`~pytorch_forecasting.models.base_model.BaseModel.log_prediction` method." ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Log at the end of an epoch\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Logging at the end of an epoch is another common use case. You might want to calculate additional results in each step and then summarize them at the end of an epoch. Here, you can override the :py:meth:`~pytorch_forecasting.models.base_model.BaseModel.create_log` method to calculate the additional results to summarize, and use the ``on_epoch_end()`` hook provided by PyTorch Lightning for the summarization itself.\n", "\n", "In the example below, we first calculate some interpretation result (but only if logging is enabled) and add it to the ``log`` object for later summarization. In the ``on_epoch_end()`` hook we take the list of saved results and\n", "use the ``log_interpretation()`` method (defined elsewhere in the model) to log a figure to tensorboard." ] }, { "cell_type": "code", "execution_count": 41, "metadata": {}, "outputs": [], "source": [ "from pytorch_forecasting.utils import detach\n", "\n", "\n", "def create_log(self, x, y, out, batch_idx, **kwargs):\n", "    # log standard\n", "    log = super().create_log(x, y, out, batch_idx, **kwargs)\n", "    # calculate interpretations etc. for later logging\n", "    if self.log_interval > 0:\n", "        interpretation = self.interpret_output(\n", "            detach(out),\n", "            reduction=\"sum\",\n", "            attention_prediction_horizon=0,  # attention only for first prediction horizon\n", "        )\n", "        log[\"interpretation\"] = interpretation\n", "    return log\n", "\n", "\n", "def on_epoch_end(self, outputs):\n", "    \"\"\"\n", "    Run at epoch end for training or validation\n", "    \"\"\"\n", "    if self.log_interval > 0:\n", "        self.log_interpretation(outputs)" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "### Log at the end of training\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "A common use case is to log the final embeddings at the end of training. You can easily achieve this by leveraging the PyTorch Lightning ``on_fit_end()`` model hook. Override that method to log the embeddings.\n", "\n", "The following example assumes that ``input_embeddings`` is a dictionary-like object of embeddings that are being trained, such as the :py:class:`~pytorch_forecasting.models.nn.embeddings.MultiEmbedding` class.
Further, a hyperparameter ``embedding_labels`` exists (as automatically required and created by the :py:class:`~pytorch_forecasting.models.base_model.BaseModelWithCovariates`)." ] }, { "cell_type": "code", "execution_count": 42, "metadata": {}, "outputs": [], "source": [ "def on_fit_end(self):\n", "    \"\"\"\n", "    run at the end of training\n", "    \"\"\"\n", "    if self.log_interval > 0:\n", "        for name, emb in self.input_embeddings.items():\n", "            labels = self.hparams.embedding_labels[name]\n", "            self.logger.experiment.add_embedding(\n", "                emb.weight.data.cpu(), metadata=labels, tag=name, global_step=self.global_step\n", "            )" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "## Minimal testing of models\n" ] }, { "attachments": {}, "cell_type": "markdown", "metadata": {}, "source": [ "Testing models is essential to detect problems early and to iterate quickly. Some issues can only be identified after lengthy training but many problems show up after one or two batches. PyTorch Lightning, on which PyTorch Forecasting is built, makes it easy to set up such tests.\n" ] }, { "cell_type": "raw", "metadata": { "raw_mimetype": "text/restructuredtext" }, "source": [ "Every model should be trainable with some minimal dataset. Here is how:\n", "\n", "#. Define a dataset that works with the model. If it takes a long time to create, you can save it to disk with the :py:meth:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet.save` method and load it with the :py:meth:`~pytorch_forecasting.data.timeseries.TimeSeriesDataSet.load` method when you want to run tests. In any case, create a reasonably small dataset.\n", "\n", "#. Initialize your model with ``log_interval=1`` to test logging of plots - in particular the `plot_prediction()` method.\n", "\n", "#. Define a `Pytorch Lightning Trainer `_ and initialize it with ``fast_dev_run=True``. This ensures that not full epochs but just a couple of batches are passed through the training and validation steps.\n", "\n", "#. Train your model and check that it executes.\n", "\n", "As an example, we use the ``FullyConnectedForDistributionLossModel`` defined earlier in this tutorial:" ] }, { "cell_type": "code", "execution_count": 43, "metadata": {}, "outputs": [ { "name": "stderr", "output_type": "stream", "text": [ "GPU available: True (mps), used: True\n", "TPU available: False, using: 0 TPU cores\n", "IPU available: False, using: 0 IPUs\n", "HPU available: False, using: 0 HPUs\n", "Running in `fast_dev_run` mode: will run the requested loop using 1 batch(es).
Logging and checkpointing is suppressed.\n", "\n", " | Name | Type | Params\n", "--------------------------------------------------------------------\n", "0 | loss | NormalDistributionLoss | 0 \n", "1 | logging_metrics | ModuleList | 0 \n", "2 | network | FullyConnectedMultiOutputModule | 324 \n", "--------------------------------------------------------------------\n", "324 Trainable params\n", "0 Non-trainable params\n", "324 Total params\n", "0.001 Total estimated model params size (MB)\n" ] }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "d6e5e14c57e443629b86be774042e631", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Training: 0it [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "data": { "application/vnd.jupyter.widget-view+json": { "model_id": "a67cad7761fc4a039cfa004973b94b34", "version_major": 2, "version_minor": 0 }, "text/plain": [ "Validation: 0it [00:00, ?it/s]" ] }, "metadata": {}, "output_type": "display_data" }, { "name": "stderr", "output_type": "stream", "text": [ "`Trainer.fit` stopped: `max_steps=1` reached.\n" ] }, { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAmgAAAHjCAYAAACXcOPPAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjcuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/bCgiHAAAACXBIWXMAAA9hAAAPYQGoP6dpAAB2r0lEQVR4nO3dd3hUVfoH8O+dmcyk9w4JoYcOgjRBqpDg6uK6CEgX61Jkda27iB3X1dXfoi6uuwLromJ3VwlFuoBUQUInJKR3UieZen5/TGbIkAQSksmd8v08zzyaO3fufTOkvDnvOe+RhBACREREROQ0FHIHQERERET2mKARERERORkmaEREREROhgkaERERkZNRyR0AERERtYzJZILBYJA7DGoFtVoNhaLpcTImaERERC5CCIH8/HyUlZXJHQq1kkKhQOfOnaFWqxt9XmKbDSIiIteQl5eHsrIyREZGwtfXF5IkyR0S3QCz2Yzc3Fx4eXkhPj6+0X9HjqARERG5AJPJZEvOwsLC5A6HWikiIgK5ubkwGo3w8vJq8DwXCRAREbkA65wzX19fmSOhtmAtbZpMpkafZ4JGRETkQljWdA/X+3dkgkZERETkZJigERERETkZJmhEREQkm507d0KSJLdpHdJWnw8TNCIiIiInwwSNiIiIPIper5c7hOtigkZEROSihBDQ6o2yPFrS516n02Hp0qWIjIyEt7c3Ro0ahUOHDtmds3fvXvTv3x/e3t4YPnw4UlNTbc9dunQJd9xxB0JCQuDn54c+ffpg48aNtudTU1ORnJwMf39/REVFYc6cOSguLrY9P3bsWCxevBjLli1DeHg4Jk+ejHvvvRfTp0+3i8FgMCA8PBz//ve/AVgayq5cuRKdO3eGj48PBgwYgC+++MLuNRs3bkSPHj3g4+ODcePGISMjo9nvy7WwUS0REZGLqjGY0Pu5zbLc+9SLk+Grbl4a8eSTT+LLL7/EunXr0KlTJ7z++uuYPHkyLly4YDvniSeewP/93/8hOjoazz77LO644w6cO3cOXl5eWLRoEfR6PXbv3g0/Pz+cOnUK/v7+AICysjKMHz8e999/P9566y3U1NTgqaeewj333IPt27fbrr9u3To88sgj2Lt3LwDgwoULmDZtGqqqqmzX2rx5M7RaLe666y4AwMqVK/Gf//wHq1evRvfu3bF7927Mnj0bERERGDNmDLKysvCb3/wGixYtwoMPPojDhw/j8ccfb5P3l1s9ERERuYDa2lqkp6ejc+fO8Pb2BgBo9UanT9Cqq6sREhKCtWvX4t577wVgGalKSEjAsmXLcPPNN2PcuHH49NNPbSNapaWl6NixI9auXYt77rkH/fv3x913340VK1Y0uP7LL7+MPXv2YPPmK+9DdnY24uLicPbsWfTo0QNjx45FRUUFjh49ajvHaDQiJiYGf/3rXzFnzhwAwL333guz2YxPP/0UOp0OoaGh+OGHHzBixAjb6+6//35otVp8/PHHePbZZ/Htt9/i5MmTtueffvpp/PnPf8bly5cRHBzc5PvS2L9nfRxBIyIiclE+XkqcenGybPdujrS0NBgMBtxyyy22Y15eXhg6dChOnz6Nm2++GQDskqDQ0FD07NkTp0+fBgAsXboUjzzyCLZs2YKJEyfi7rvvRv/+/QEAx48fx44dO2yjYFffu0ePHgCAwYMH2z2nUqlwzz33YP369ZgzZw6qq6vx7bff4tNPPwVgGWHTarW47bbb7F6n1+sxaNAgAMDp06cxbNgwu+frfx6twQSNiIjIRUmS1Owyoyu7//77MXnyZHz//ffYsmULVq5ciTfffBNLlixBVVUV7rjjDvz5z39u8LqYmBjb//v5+TV4ftasWRgzZgwKCwuxdetW+Pj4ICkpCQBQVVUFAPj+++/RoUMHu9dpNJq2/PQaxUUCRERE5DBdu3aFWq22zf0CLCXOQ4cOoXfv3rZjP/30k+3/L1++jHPnzqFXr162Y3FxcXj44Yfx1Vdf4fHHH8cHH3wAALjppptw8uRJJCQkoFu3bnaPxpKy+kaOHIm4uDhs2LAB69evx7Rp02wbl/fu3RsajQaZmZkNrhsXFwcA6NWrFw4ePGh3zfqfR2swQSMiIiKH8fPzwyOPPIInnngCmzZtwqlTp/DAAw9Aq9Vi4cKFtvNefPFFbNu2DampqZg/fz7Cw8MxdepUAMCyZcuwefNmpKen4+jRo9ixY4cteVu0aBFKS0sxc+ZMHDp0CGlpadi8eTMWLFjQ5Ebk9d17771YvXo1tm7dilmzZtmOBwQE4A9/+AN+//vfY926dUhLS8PRo0exatUqrFu3DgDw8MMP4/z583jiiSdw9uxZfPzxx1i7dm2bvG9M0IiIiMihXnvtNdx9992YM2cObrrpJly4cAGbN29GSEiI3TmPPvoo
Bg8ejPz8fPzvf/+DWq0GAJhMJixatAi9evVCUlISevTogffeew8AEBsbi71798JkMmHSpEno168fli1bhuDgYCgU109zZs2ahVOnTqFDhw528+QA4KWXXsLy5cuxcuVK272///57dO7cGQAQHx+PL7/8Et988w0GDBiA1atX49VXX22T94yrOImIiFzA9Vb9kWu53r8nR9CIiIiInAwTNCIiIiInwwSNiIiIyMkwQSMiIiJyMkzQiIiIiJwMEzQiIiIiJ8MEjYiIiMjJMEEjIiIicjJM0IiIiMhtJCQk4O2337Z9LEkSvvnmm3aP4/nnn8fAgQNv+PVM0IiIiMht5eXlITk5uVnntjapaksquQMgIiIiqk+v19v24Wyt6OjoNrlOe+MIGhERETnU2LFjsXjxYixevBhBQUEIDw/H8uXLYd0OPCEhAS+99BLmzp2LwMBAPPjggwCAH3/8EaNHj4aPjw/i4uKwdOlSVFdX265bWFiIO+64Az4+PujcuTPWr1/f4N5Xlzizs7Mxc+ZMhIaGws/PD0OGDMGBAwewdu1avPDCCzh+/DgkSYIkSVi7di0AoKysDPfffz8iIiIQGBiI8ePH4/jx43b3ee211xAVFYWAgAAsXLgQtbW1rXrPOIJGRETkqoQADFp57u3lC0hSs09ft24dFi5ciIMHD+Lw4cN48MEHER8fjwceeAAA8MYbb+C5557DihUrAABpaWlISkrCyy+/jA8//BBFRUW2JG/NmjUAgPnz5yM3Nxc7duyAl5cXli5disLCwiZjqKqqwpgxY9ChQwf897//RXR0NI4ePQqz2Yzp06cjNTUVmzZtwg8//AAACAoKAgBMmzYNPj4+SElJQVBQEN5//31MmDAB586dQ2hoKD777DM8//zzePfddzFq1Ch89NFH+Nvf/oYuXbrc0FsLMEEjIiJyXQYt8GqsPPd+NhdQ+zX79Li4OLz11luQJAk9e/bEiRMn8NZbb9kStPHjx+Pxxx+3nX///fdj1qxZWLZsGQCge/fu+Nvf/oYxY8bg73//OzIzM5GSkoKDBw/i5ptvBgD861//Qq9evZqM4eOPP0ZRUREOHTqE0NBQAEC3bt1sz/v7+0OlUtmVRX/88UccPHgQhYWF0Gg0ACzJ5DfffIMvvvgCDz74IN5++20sXLgQCxcuBAC8/PLL+OGHH1o1isYSJ3mctWvXQpIkHD58WO5Qrmnbtm2477770KNHD/j6+qJLly64//77kZeX16zXf/XVV5g+fTq6dOkCX19f9OzZE48//jjKysoanLthwwbMnj0b3bt3hyRJGDt2bLPu8corr0CSJPTt27fR5/ft24dRo0bB19cX0dHRWLp0Kaqqqho99+jRo7jzzjsRGhoKX19f9O3bF3/729/szjEYDHjhhRfQpUsXaDQadOnSBS+//DKMRuMNX7OlcTbnczebzVi9ejUGDhwIf39/REVFITk5Gfv27bM7b/78+bZSSmOPnJwc27mvvvoqhg8fjoiICHh7e6N79+5YtmwZioqKGo0vLS0N9957LyIjI+Hj44Pu3bvjj3/8Y7Pun5iYaHfe888/f8049+7da/u8165dizvvvBNxcXHw8/ND37598fLLL7e63EOub/jw4ZDqjbiNGDEC58+fh8lkAgAMGTLE7vzjx49j7dq18Pf3tz0mT54Ms9mM9PR0nD59GiqVCoMHD7a9JjExEcHBwU3GcOzYMQwaNMiWnDXH8ePHUVVVhbCwMLtY0tPTkZaWBgA4ffo0hg0bZve6ESNGNPsejeEIGpGTeuqpp1BaWopp06ahe/fuuHjxIt555x189913OHbs2HUnvj744IOIjY3F7NmzER8fjxMnTuCdd97Bxo0bcfToUfj4+NjO/fvf/44jR47g5ptvRklJSbPiy87Oxquvvgo/v8b/gj527BgmTJiAXr164a9//Suys7Pxxhtv4Pz580hJSbE7d8uWLbjjjjswaNAgLF++HP7+/khLS0N2drbdebNnz8bnn3+O++67D0OGDMFPP/2E5cuXIzMzE//4xz9u6JotibO5n/sTTzyBv/71r5g9ezZ+97vfoaysDO+//z7GjBmDvXv3YujQoQCAhx56CBMnTrR7rRACDz/8MBISEtChQwfb8SNHjmDgwIGYMWMGAgICcPr0aXzwwQf4/vvvcezYMbtYjh07hrFjx6JDhw54/PHHERYWhszMTGRlZTWIVaPR4J///KfdMWtZx+o3v/mN3SiD1bPPPouqqirb6IVWq8WCBQswfPhwPPzww4iMjMT+/fuxYsUKbNu2Ddu3b7f7BU1twMvXMpIl173b0NXfT1VVVXjooYewdOnSBufGx8fj3LlzLb5H/Z97zVVVVYWYmBjs3LmzwXPXSgZbTRB5mDVr1ggA4tChQ3KHck27du0SJpOpwTEA4o9//ON1X79jx44Gx9atWycAiA8++MDueGZmpu1effr0EWPGjLnu9adPny7Gjx8vxowZI/r06dPg+eTkZBETEyPKy8ttxz744AMBQGzevNl2rLy8XERFRYm77rqrwedb38GDBwUAsXz5crvjjz/+uJAkSRw/frzF12xJnM393A0Gg/Dx8RG//e1v7Y5fvHhRABBLly69Zjx79uwRAMQrr7xyzfOEEOKLL74QAMQnn3xiO2YymUTfvn3FsGHDhFarvebr582bJ/z8/K57n8ZkZmYKSZLEAw88YDum0+nE3r17G5z7wgsvCABi69atN3QvsqipqRGnTp0SNTU1cofSYmPGjBG9e/e2O/b000+LXr16CSGE6NSpk3jrrbfsnr/33nvFhAkTmrzmmTNnBABx8ODBBsfqXwuA+Prrr4UQQqxdu1YEBgaKkpKSRq/5yiuviL59+9od27Jli1AqlSI9Pb3JWEaMGCF+97vf2R0bPny4GDBgQJOvud6/J0ucRE34+eefkZycjMDAQPj7+2PChAn46aef7M6xlty6d+8Ob29vhIWFYdSoUdi6davtnPz8fCxYsAAdO3aERqNBTEwMfv3rXyMjI+Oa97/11luhUCgaHAsNDcXp06evG39jZcq77roLABq8Pi4ursG9rmX37t344osv7JpB1ldRUYGtW7di9uzZCAwMtB2fO3cu/P398dlnn9mOffzxxygoKMArr7wChUKB6upqmM3mBtfcs2cPAGDGjBl2x2fMmAEhBDZs2NDia7YkzuZ+7gaDATU1NYiKirI7HhkZCYVCcd2/4D/++GNIkoR77733mucBlpVvAOzK1lu2bEFqaipWrFgBHx8faLVaWwmpKSaTCRUVFde9X32ffPIJhBCYNWuW7ZharcbIkSMbnNvU1x15lszMTDz22GM4e/YsPvnkE6xatQqPPvpok+c/9dRT2LdvHxYvXoxjx47h/Pnz+Pbbb7F48WIAQM+ePZGUlISHHnoIBw4cwJEjR3D//fdf83ts5syZiI6OxtSpU7F3715cvHgRX375Jfbv3w/A8j2Vnp6OY8eOobi4GDqdDhMnTsSIESMwdepUbNmyBRkZGdi3bx/++Mc/2qbKPProo/jwww+xZs0
anDt3DitWrMDJkydb9X4xQSNqxMmTJzF69GgcP34cTz75JJYvX4709HSMHTsWBw4csJ33/PPP44UXXsC4cePwzjvv4I9//CPi4+Nx9OhR2zl33303vv76ayxYsADvvfceli5disrKSmRmZrY4rqqqKlRVVSE8PPyGPq/8/HwAuOHXA5Zf5kuWLMH999+Pfv36NXrOiRMnYDQaG8wpUavVGDhwIH7++WfbsR9++AGBgYHIyclBz5494e/vj8DAQDzyyCN285Z0Oh2AhiUKX19LmeXIkSMtvmZL4mzu5+7j44Nhw4Zh7dq1WL9+PTIzM/HLL79g/vz5CAkJsbUPaIzBYMBnn32GkSNH2pKv+oQQKC4uRn5+Pvbs2YOlS5dCqVTaJePW1WcajQZDhgyBn58ffH19MWPGDJSWlja4plarRWBgIIKCghAaGopFixZdd/4dAKxfvx5xcXG49dZbr3tuW3zdkeubO3cuampqMHToUCxatAiPPvroNb8f+vfvj127duHcuXMYPXo0Bg0ahOeeew6xsVcWRaxZswaxsbEYM2YMfvOb3+DBBx9EZGRkk9dUq9XYsmULIiMjMWXKFPTr1w+vvfYalEolAMvP66SkJIwbNw4RERH45JNPIEkSNm7ciFtvvRULFixAjx49MGPGDFy6dMn2h9j06dOxfPlyPPnkkxg8eDAuXbqERx55pHVvWJNjb0RuqjklzqlTpwq1Wi3S0tJsx3Jzc0VAQIC49dZbbccGDBggbr/99iavc/nyZQFA/OUvf2mT2F966SUBQGzbtu2GXr9w4UKhVCrFuXPnmjzneiXOd955RwQFBYnCwkIhhGi0zPf5558LAGL37t0NXj9t2jQRHR1t+7h///7C19dX+Pr6iiVLlogvv/xSLFmyRAAQM2bMsJ335ZdfCgDio48+srve6tWrBQC7skRzr9mSOJv7uQshxPnz58VNN90kANgeXbp0EWfOnGn4htbzv//9TwAQ7733XqPP5+Xl2V2zY8eOYsOGDXbn3HnnnQKACAsLE7NmzRJffPGFWL58uVCpVGLkyJHCbDbbzn366afFU089JTZs2CA++eQTMW/ePAFA3HLLLcJgMDQZZ2pqqgAgnnzyyWt+PlYTJ04UgYGB4vLly806nxrn6iXORx99VO4wnMr1/j2ZoJHHuV6CZjQaha+vr7jnnnsaPPfQQw8JhUJhm680ZswYkZCQ0GTCU1tbK9Rqtbj99ttFaWlpq+LetWuXUKlUjcbVHOvXr2/WL9VrJWjFxcUiNDRUvPHGG7ZjjSUp//73vwUAceDAgQbXmDNnjggKCrJ93KVLFwFAPPzww3bnPfTQQwKA7b2tqakRnTp1ElFRUeLLL78UGRkZYsOGDSIsLEyoVCrRtWvXFl+zJXE293MXQoj8/HwxZ84csWjRIvHVV1+J9957T8THx4vExERRVFTU4HyrmTNnCi8vL1FcXNzo8zqdTmzdulX873//Ey+++KIYOHCg+Ne//mV3zvjx4wUAkZSUZHd85cqVzZoH9sorrzSY13a1Z555RgCwm/d3ves1lXRS8zFBcy+cg0bUQkVFRdBqtejZs2eD53r16gWz2WxbDffiiy+irKwMPXr0QL9+/fDEE0/gl19+sZ2v0Wjw5z//GSkpKYiKisKtt96K119/3Vbyaa4zZ87grrvuQt++fRusuGuOPXv2YOHChZg8eTJeeeWVFr/e6k9/+hNCQ0OxZMmSa55nLUNay5L11dbW2pUprf8/c+ZMu/Osc7Csc0O8vb3x/fffIywsDHfffTcSEhIwd+5cPPfccwgNDYW/v3+Lr9mSOJv7uRuNRkycOBFBQUF45513cNddd+GRRx7BDz/8gLS0NPzlL39p9HVVVVX49ttvMXnyZISFhTV6jlqtxsSJE/GrX/0Ky5cvx7vvvouFCxfiu+++a/bnfnWrj6v9/ve/h0KhsJVKryaEwMcff4y+ffuif//+17zWhg0b8Kc//QkLFy5sfbmHyMMwQSNqhVtvvRVpaWn48MMPbcnTTTfdZJdELVu2DOfOncPKlSvh7e2N5cuXo1evXg3mNzUlKysLkyZNQlBQEDZu3IiAgIAWxXj8+HHceeed6Nu3L7744guoVDfWXef8+fP4xz/+gaVLlyI3NxcZGRnIyMhAbW0tDAYDMjIybHOcYmJiAKDRnm15eXl2c0is/9/YpHoAuHz5su1Ynz59kJqaitTUVOzZswe5ubl44IEHUFxcjB49erT4ms2NsyWf++7du5Gamoo777zT7nrdu3dHr169bD3DrvbNN99Aq9XaTbq/npEjRyImJsZue5uWvJ+N8fHxQVhYWKPz1QBg7969uHTp0nXj3Lp1K+bOnYvbb78dq1evvu7nQu5t586dTS6socYxQSO6SkREBHx9fXH27NkGz505cwYKhQJxcXG2Y6GhoViwYAE++eQTZGVloX///nj++eftXte1a1c8/vjjthV2er0eb7755nVjKSkpwaRJk6DT6bB582ZbQtFc1q1SIiMjsXHjRrtRppbKycmB2WzG0qVL0blzZ9vjwIEDOHfuHDp37owXX3wRANC3b1+oVKoGzYD1ej2OHTuGgQMH2o5Zm0zWb8oKALm5lt5OERERdsclSUKfPn0watQohIaGYseOHTCbzXb9xJp7zebG2ZLPvaCgAAAaXTlpMBiabKq7fv16+Pv7N0jsrqe2thbl5eUt/tybUllZieLi4ibPW79+/XVXmR44cAB33XUXhgwZgs8+++yG/ygg8mjtW3Elkl9zFwloNBq7vjf5+fkiMDDQbpFAY3OFpk2bJsLDw4UQQlRXVzeYX2AymURUVFSDPllXq6qqEkOHDhUBAQHi8OHD1zz30qVL4vTp03bH8vLyRJcuXURsbOw1+/dcrak5aEVFReLrr79u8OjTp4+Ij48XX3/9tfjll19s5yclJYmYmBhRUVFhO/bPf/5TABApKSm2Y0ePHhUAxL333mt3v5kzZwqVSiVycnKajFWr1YqbbrqpwX1acs3mxNmSz/3w4cMCgJg3b57dvY8cOSIUCkWDeXFCCFFYWChUKpWYM2dOo59nVVWVqK6ubnDc2getfm+4vLw8odFoxKhRo+x6wFnnjVl7RtXU1Nh9zlZPPPGEACC++uqrBs/p9XoRFhYmRo8e3WicQghx6tQpERYWJvr06dPqeZdkz5XnoFFD1/v35J815LE+/PBDbNq0qcHxRx99FC+//DK2bt2KUaNG4Xe/+x1UKhXef/996HQ6vP7667Zze/fujbFjx2Lw4MEIDQ3F4cOH8cUXX9j69Jw7dw4TJkzAPffcg969e0OlUuHrr79GQUFBg35eV5s1axYOHjyI++67D6dPn7brIeXv74+pU6faPp47dy527doFIYTtWFJSEi5evIgnn3wSP/74I3788Ufbc1FRUbjttttsH+/evRu7d+8GYJmDV11djZdffhmApYx76623Ijw83O6eVtayxdXPvfLKKxg5ciTGjBmDBx98ENnZ2XjzzTcxadIkJCUl2c4bNGgQ7rvvPnz44YcwGo0YM2YMdu
[remainder of base64-encoded PNG omitted: example prediction plot logged during training]", "text/plain": [ "
" ] }, "metadata": {}, "output_type": "display_data" } ], "source": [ "from lightning.pytorch import Trainer\n", "\n", "model = FullyConnectedForDistributionLossModel.from_dataset(dataset, hidden_size=10, n_hidden_layers=2, log_interval=1)\n", "trainer = Trainer(fast_dev_run=True)\n", "trainer.fit(model, train_dataloaders=dataloader, val_dataloaders=dataloader)" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.10.9" }, "vscode": { "interpreter": { "hash": "9aebce72564876525c4f775620217d3701f12ed8dccc94588028ba1e29a0a158" } } }, "nbformat": 4, "nbformat_minor": 4 }