[1]:
# Copyright 2020 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
NVTabular demo on Rossmann data
Overview
NVTabular is a feature engineering and preprocessing library for tabular data designed to quickly and easily manipulate terabyte-scale datasets used to train deep-learning-based recommender systems. It provides a high-level abstraction to simplify code and accelerates computation on the GPU using the RAPIDS cuDF library.
Learning objectives
This notebook demonstrates the steps for carrying out data preprocessing, transformation and loading with NVTabular on the Kaggle Rossmann dataset. Rossmann operates over 3,000 drug stores in 7 European countries. Historical sales data for 1,115 Rossmann stores are provided. The task is to forecast the “Sales” column for the test set.
The following example will illustrate how to use NVTabular to preprocess and load tabular data for training neural networks in TensorFlow, PyTorch, and fast.ai. We’ll use a dataset built by FastAI for solving the Kaggle Rossmann Store Sales competition. Some pandas preprocessing is required to build the appropriate feature set, so make sure to run rossmann-store-sales-preproc.ipynb before going through this notebook.
[2]:
import os
import numpy as np
import nvtabular as nvt
import glob
Preparing our dataset
Let’s start by defining some of the a priori information about our data, including its schema (what columns to use and what sorts of variables they represent), as well as the location of the files corresponding to some particular sampling from this schema. Note that throughout, I’ll use UPPERCASE variables to represent this sort of a priori information that you might usually encode using commandline arguments or config files.
[3]:
DATA_DIR = os.environ.get("OUTPUT_DATA_DIR", "./data")
CATEGORICAL_COLUMNS = [
    'Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday', 'CompetitionMonthsOpen',
    'Promo2Weeks', 'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear', 'Promo2SinceYear',
    'State', 'Week', 'Events', 'Promo_fw', 'Promo_bw', 'StateHoliday_fw', 'StateHoliday_bw',
    'SchoolHoliday_fw', 'SchoolHoliday_bw'
]
CONTINUOUS_COLUMNS = [
    'CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
    'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
    'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
    'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday'
]
LABEL_COLUMNS = ['Sales']
COLUMNS = CATEGORICAL_COLUMNS + CONTINUOUS_COLUMNS + LABEL_COLUMNS
What files are available to train on in our data directory?
[4]:
! ls $DATA_DIR
test.csv train.csv valid.csv
train.csv and valid.csv seem like good candidates; let’s use those.
[5]:
TRAIN_PATH = os.path.join(DATA_DIR, 'train.csv')
VALID_PATH = os.path.join(DATA_DIR, 'valid.csv')
Workflows and Preprocessing
A Workflow is used to represent the chains of feature engineering and preprocessing operations performed on a dataset, and is instantiated with a description of the dataset’s schema so that it can keep track of how columns transform with each operation.
[6]:
# note that here, we want to perform a normalization transformation on the label
# column. Since NVT doesn't support transforming label columns right now, we'll
# pretend it's a regular continuous column during our feature engineering phase
proc = nvt.Workflow(
    cat_names=CATEGORICAL_COLUMNS,
    cont_names=CONTINUOUS_COLUMNS,
    label_name=LABEL_COLUMNS
)
Ops
We add operations to a Workflow by leveraging the add_(cat|cont)_feature and add_(cat|cont)_preprocess methods for categorical and continuous variables, respectively. When we’re done adding ops, we call the finalize method to let the Workflow build a representation of its outputs.
[7]:
proc.add_cont_feature(nvt.ops.FillMissing())
proc.add_cont_preprocess(nvt.ops.LogOp(columns=LABEL_COLUMNS))
proc.add_cont_preprocess(nvt.ops.Normalize())
proc.add_cat_preprocess(nvt.ops.Categorify())
proc.finalize()
Datasets
In general, the Ops in our Workflow will require measurements of statistical properties of our data in order to be leveraged. For example, the Normalize op requires measurements of the dataset mean and standard deviation, and the Categorify op requires an accounting of all the categories a particular feature can manifest. However, we frequently need to measure these properties across datasets which are too large to fit into GPU memory (or CPU memory, for that matter) at once.
NVTabular solves this by providing the Dataset class, which breaks a set of parquet or csv files into a collection of cudf.DataFrame chunks that can fit in device memory. Under the hood, the data decomposition corresponds to the construction of a dask_cudf.DataFrame object. By representing our dataset as a lazily-evaluated Dask collection, we can handle the calculation of complex global statistics (and later, can also iterate over the partitions while feeding data into a neural network).
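As a quick aside (a sketch, not part of the original workflow), you can peek at the lazy Dask collection backing a Dataset before applying anything to it. The snippet below only assumes the to_ddf method, which exposes the underlying dask_cudf.DataFrame; nothing is pulled into GPU memory until a partition is actually computed.
peek_ddf = nvt.Dataset(TRAIN_PATH).to_ddf()  # lazily wraps the csv file
print(peek_ddf.npartitions)   # number of chunks the data will be processed in
print(peek_ddf.head())        # materializes only the first partition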
[8]:
train_dataset = nvt.Dataset(TRAIN_PATH)
valid_dataset = nvt.Dataset(VALID_PATH)
Now that we have our datasets, we’ll apply our Workflow to them and save the results out to parquet files for fast reading at train time. We’ll also measure and record statistics on our training set using the record_stats=True kwarg so that our Workflow can use them at apply time.
[9]:
PREPROCESS_DIR = os.path.join(DATA_DIR, 'ross_pre')
PREPROCESS_DIR_TRAIN = os.path.join(PREPROCESS_DIR, 'train')
PREPROCESS_DIR_VALID = os.path.join(PREPROCESS_DIR, 'valid')
! rm -rf $PREPROCESS_DIR # remove previous trials
! mkdir -p $PREPROCESS_DIR_TRAIN
! mkdir -p $PREPROCESS_DIR_VALID
[10]:
proc.apply(train_dataset, record_stats=True, output_path=PREPROCESS_DIR_TRAIN, shuffle=nvt.io.Shuffle.PER_WORKER, out_files_per_proc=2)
proc.apply(valid_dataset, record_stats=False, output_path=PREPROCESS_DIR_VALID, shuffle=None)
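As an optional sanity check (a sketch, not part of the original notebook), we can read one of the freshly written parquet files back with pandas and confirm that the categorical columns are now integer codes and the continuous columns are roughly standardized:
import pandas as pd
check = pd.read_parquet(glob.glob(os.path.join(PREPROCESS_DIR_TRAIN, '*.parquet'))[0])
print(check[CATEGORICAL_COLUMNS[:3]].head())       # integer-encoded categories
print(check[CONTINUOUS_COLUMNS[:3]].describe())    # roughly zero mean, unit variance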
Finalize columns
The FastAI workflow will use nvtabular.loader.torch.TorchAsyncItr, which will map a dataset to its corresponding PyTorch tensors. In order to make sure it runs correctly, we’ll call the create_final_cols method to let the Workflow know to build the output dataset schema, and then we’ll be sure to remove instances of the label column that got added to that schema when we performed processing on it.
[11]:
proc.create_final_cols()
# using log op and normalize on sales column causes it to get added to
# continuous columns_ctx, so we'll remove it here
while True:
    try:
        proc.columns_ctx['final']['cols']['continuous'].remove(LABEL_COLUMNS[0])
    except ValueError:
        break
Training a Network
Now that our data is preprocessed and saved out, we can leverage datasets to read through the preprocessed parquet files in an online fashion to train neural networks.
We’ll start by setting some universal hyperparameters for our model and optimizer. These settings will be shared across all of the frameworks that we explore below.
12% RMSPE is achievable using the Novograd optimizer, but we know of no Novograd implementation for TensorFlow that supports sparse gradients, so we are not including that solution below. If you’re interested in contributing to NVTabular, feel free to take this challenge on and submit a pull request if successful.
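For reference, the competition metric is root mean squared percentage error:
$$\mathrm{RMSPE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - \hat{y}_i}{y_i}\right)^2}$$
Because we train on log-transformed Sales (via the LogOp above), the metric implementations for each framework below first map predictions and targets back to sales space with exp(y) - 1 before computing this quantity.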
[12]:
EMBEDDING_DROPOUT_RATE = 0.04
DROPOUT_RATES = [0.001, 0.01]
HIDDEN_DIMS = [1000, 500]
BATCH_SIZE = 65536
LEARNING_RATE = 0.001
EPOCHS = 25
# TODO: Calculate on the fly rather than recalling from previous analysis.
MAX_SALES_IN_TRAINING_SET = 38722.0
MAX_LOG_SALES_PREDICTION = 1.2 * np.log(MAX_SALES_IN_TRAINING_SET + 1.0)
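# (Aside, not in the original notebook) One way to address the TODO above would be
# to compute the maximum on the fly from the raw training csv, e.g. with cuDF:
#   import cudf
#   MAX_SALES_IN_TRAINING_SET = float(cudf.read_csv(TRAIN_PATH)['Sales'].max())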
# It's possible to use defaults defined within NVTabular.
EMBEDDING_TABLE_SHAPES = {
    column: shape for column, shape in
    nvt.ops.get_embedding_sizes(proc).items()
}
# Here, however, we will use fast.ai's rule for embedding sizes.
for col in EMBEDDING_TABLE_SHAPES:
    EMBEDDING_TABLE_SHAPES[col] = (EMBEDDING_TABLE_SHAPES[col][0], min(600, round(1.6 * EMBEDDING_TABLE_SHAPES[col][0] ** 0.56)))
TRAIN_PATHS = sorted(glob.glob(os.path.join(PREPROCESS_DIR_TRAIN, '*.parquet')))
VALID_PATHS = sorted(glob.glob(os.path.join(PREPROCESS_DIR_VALID, '*.parquet')))
The following shows the cardinality of each categorical variable along with its associated embedding size. Each entry is of the form (cardinality, embedding_size).
[13]:
EMBEDDING_TABLE_SHAPES
[13]:
{'Assortment': (4, 3),
'CompetitionMonthsOpen': (26, 10),
'CompetitionOpenSinceYear': (24, 9),
'Day': (32, 11),
'DayOfWeek': (8, 5),
'Events': (22, 9),
'Month': (13, 7),
'Promo2SinceYear': (9, 5),
'Promo2Weeks': (27, 10),
'PromoInterval': (4, 3),
'Promo_bw': (7, 5),
'Promo_fw': (7, 5),
'SchoolHoliday_bw': (9, 5),
'SchoolHoliday_fw': (9, 5),
'State': (13, 7),
'StateHoliday': (3, 3),
'StateHoliday_bw': (4, 3),
'StateHoliday_fw': (4, 3),
'Store': (1116, 81),
'StoreType': (5, 4),
'Week': (53, 15),
'Year': (4, 3)}
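As a quick check of the rule above: 'Store' has a cardinality of 1116 (the 1,115 stores plus the extra index Categorify reserves for null/out-of-vocabulary values), and min(600, round(1.6 * 1116 ** 0.56)) works out to 81, matching the (1116, 81) entry shown.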
Choose a Framework
We’re now ready to move on to framework-specific code.
The code for each framework can be run independently of the others, so feel free to skip to your framework of choice.
TensorFlow
TensorFlow: Preparing Datasets
KerasSequenceLoader wraps a lightweight iterator around a dataset object to handle chunking, shuffling, and application of any workflows (which can be applied online as a preprocessing step). For column names, you can use either a list of string names or a list of TensorFlow feature_columns that will be used to feed the network.
[14]:
import tensorflow as tf
# we can control how much memory to give tensorflow with this environment variable
# IMPORTANT: make sure you do this before you initialize TF's runtime, otherwise
# it's too late and TF will have claimed all free GPU memory
# os.environ['TF_MEMORY_ALLOCATION'] = "8192"  # alternative: explicit MB
os.environ['TF_MEMORY_ALLOCATION'] = "0.5"  # fraction of free memory
from nvtabular.loader.tensorflow import KerasSequenceLoader, KerasSequenceValidater
# cheap wrapper to keep things some semblance of neat
def make_categorical_embedding_column(name, dictionary_size, embedding_dim):
    return tf.feature_column.embedding_column(
        tf.feature_column.categorical_column_with_identity(name, dictionary_size),
        embedding_dim
    )
# instantiate our columns
categorical_columns = [
    make_categorical_embedding_column(name, *EMBEDDING_TABLE_SHAPES[name]) for
    name in CATEGORICAL_COLUMNS
]
continuous_columns = [
    tf.feature_column.numeric_column(name, (1,)) for name in CONTINUOUS_COLUMNS
]
# feed them to our datasets
train_dataset_tf = KerasSequenceLoader(
    TRAIN_PATHS,  # you could also use a glob pattern
    feature_columns=categorical_columns + continuous_columns,
    batch_size=BATCH_SIZE,
    label_names=LABEL_COLUMNS,
    shuffle=True,
    buffer_size=0.06  # amount of data, as a fraction of GPU memory, to load at once
)
valid_dataset_tf = KerasSequenceLoader(
    VALID_PATHS,  # you could also use a glob pattern
    feature_columns=categorical_columns + continuous_columns,
    batch_size=BATCH_SIZE * 4,
    label_names=LABEL_COLUMNS,
    shuffle=False,
    buffer_size=0.06  # amount of data, as a fraction of GPU memory, to load at once
)
TensorFlow: Defining a Model
Using Keras, we can define the layers of our model and their parameters explicitly. Here, for the sake of consistency, we’ll mimic fast.ai’s TabularModel.
[15]:
# DenseFeatures layer needs a dictionary of {feature_name: input}
categorical_inputs = {}
for column_name in CATEGORICAL_COLUMNS:
    categorical_inputs[column_name] = tf.keras.Input(name=column_name, shape=(1,), dtype=tf.int64)
categorical_embedding_layer = tf.keras.layers.DenseFeatures(categorical_columns)
categorical_x = categorical_embedding_layer(categorical_inputs)
categorical_x = tf.keras.layers.Dropout(EMBEDDING_DROPOUT_RATE)(categorical_x)
# Just concatenating continuous, so can use a list
continuous_inputs = []
for column_name in CONTINUOUS_COLUMNS:
    continuous_inputs.append(tf.keras.Input(name=column_name, shape=(1,), dtype=tf.float32))
continuous_embedding_layer = tf.keras.layers.Concatenate(axis=1)
continuous_x = continuous_embedding_layer(continuous_inputs)
continuous_x = tf.keras.layers.BatchNormalization(epsilon=1e-5, momentum=0.1)(continuous_x)
# concatenate and build MLP
x = tf.keras.layers.Concatenate(axis=1)([categorical_x, continuous_x])
for dim, dropout_rate in zip(HIDDEN_DIMS, DROPOUT_RATES):
    x = tf.keras.layers.Dense(dim, activation='relu')(x)
    x = tf.keras.layers.BatchNormalization(epsilon=1e-5, momentum=0.1)(x)
    x = tf.keras.layers.Dropout(dropout_rate)(x)
x = tf.keras.layers.Dense(1, activation='linear')(x)
# TODO: Initialize model weights to fix saturation issues.
# For now, we'll just scale the output of our model directly before
# hitting the sigmoid.
x = 0.1 * x
x = MAX_LOG_SALES_PREDICTION * tf.keras.activations.sigmoid(x)
# combine all our inputs into a single list
# (note that you can still use .fit, .predict, etc. on a dict
# that maps input tensor names to input values)
inputs = list(categorical_inputs.values()) + continuous_inputs
tf_model = tf.keras.Model(inputs=inputs, outputs=x)
TensorFlow: Training
[16]:
def rmspe_tf(y_true, y_pred):
    # map back into "true" space by undoing transform
    y_true = tf.exp(y_true) - 1
    y_pred = tf.exp(y_pred) - 1
    percent_error = (y_true - y_pred) / y_true
    return tf.sqrt(tf.reduce_mean(percent_error**2))
[17]:
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
tf_model.compile(optimizer, 'mse', metrics=[rmspe_tf])
[18]:
%%time
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
tf_model.compile(optimizer, 'mse', metrics=[rmspe_tf])
validation_callback = KerasSequenceValidater(valid_dataset_tf)
history = tf_model.fit(
    train_dataset_tf,
    callbacks=[validation_callback],
    epochs=EPOCHS,
)
Epoch 1/25
13/13 [==============================] - 4s 278ms/step - loss: 6.0125 - rmspe_tf: 0.8914 - val_loss: 6.1718 - val_rmspe_tf: 0.9068
Epoch 2/25
13/13 [==============================] - 6s 470ms/step - loss: 5.2308 - rmspe_tf: 0.8905 - val_loss: 4.6752 - val_rmspe_tf: 0.8793
Epoch 3/25
13/13 [==============================] - 18s 1s/step - loss: 4.5764 - rmspe_tf: 0.8769 - val_loss: 4.0147 - val_rmspe_tf: 0.8604
Epoch 4/25
13/13 [==============================] - 16s 1s/step - loss: 3.7594 - rmspe_tf: 0.8504 - val_loss: 3.1879 - val_rmspe_tf: 0.8263
Epoch 5/25
13/13 [==============================] - 6s 494ms/step - loss: 2.7753 - rmspe_tf: 0.8028 - val_loss: 2.1554 - val_rmspe_tf: 0.7611
Epoch 6/25
13/13 [==============================] - 5s 411ms/step - loss: 1.7590 - rmspe_tf: 0.7221 - val_loss: 1.1834 - val_rmspe_tf: 0.6504
Epoch 7/25
13/13 [==============================] - 4s 275ms/step - loss: 0.9113 - rmspe_tf: 0.5963 - val_loss: 0.4885 - val_rmspe_tf: 0.4849
Epoch 8/25
13/13 [==============================] - 5s 394ms/step - loss: 0.3716 - rmspe_tf: 0.4380 - val_loss: 0.1876 - val_rmspe_tf: 0.3339
Epoch 9/25
13/13 [==============================] - 5s 397ms/step - loss: 0.1259 - rmspe_tf: 0.2894 - val_loss: 0.0627 - val_rmspe_tf: 0.2305
Epoch 10/25
13/13 [==============================] - 4s 294ms/step - loss: 0.0510 - rmspe_tf: 0.2263 - val_loss: 0.0487 - val_rmspe_tf: 0.2156
Epoch 11/25
13/13 [==============================] - 3s 206ms/step - loss: 0.0371 - rmspe_tf: 0.2373 - val_loss: 0.0447 - val_rmspe_tf: 0.2192
Epoch 12/25
13/13 [==============================] - 3s 199ms/step - loss: 0.0350 - rmspe_tf: 0.2269 - val_loss: 0.0513 - val_rmspe_tf: 0.2567
Epoch 13/25
13/13 [==============================] - 2s 189ms/step - loss: 0.0330 - rmspe_tf: 0.2072 - val_loss: 0.0465 - val_rmspe_tf: 0.2418
Epoch 14/25
13/13 [==============================] - 2s 177ms/step - loss: 0.0313 - rmspe_tf: 0.2094 - val_loss: 0.0410 - val_rmspe_tf: 0.2078
Epoch 15/25
13/13 [==============================] - 2s 173ms/step - loss: 0.0298 - rmspe_tf: 0.2043 - val_loss: 0.0430 - val_rmspe_tf: 0.2290
Epoch 16/25
13/13 [==============================] - 3s 197ms/step - loss: 0.0289 - rmspe_tf: 0.2112 - val_loss: 0.0417 - val_rmspe_tf: 0.2251
Epoch 17/25
13/13 [==============================] - 2s 160ms/step - loss: 0.0281 - rmspe_tf: 0.1864 - val_loss: 0.0481 - val_rmspe_tf: 0.2554
Epoch 18/25
13/13 [==============================] - 2s 160ms/step - loss: 0.0273 - rmspe_tf: 0.1959 - val_loss: 0.0393 - val_rmspe_tf: 0.2190
Epoch 19/25
13/13 [==============================] - 2s 167ms/step - loss: 0.0262 - rmspe_tf: 0.1923 - val_loss: 0.0464 - val_rmspe_tf: 0.2512
Epoch 20/25
13/13 [==============================] - 2s 181ms/step - loss: 0.0260 - rmspe_tf: 0.1978 - val_loss: 0.0472 - val_rmspe_tf: 0.2549
Epoch 21/25
13/13 [==============================] - 2s 162ms/step - loss: 0.0255 - rmspe_tf: 0.1891 - val_loss: 0.0449 - val_rmspe_tf: 0.2469
Epoch 22/25
13/13 [==============================] - 2s 170ms/step - loss: 0.0253 - rmspe_tf: 0.1858 - val_loss: 0.0371 - val_rmspe_tf: 0.2104
Epoch 23/25
13/13 [==============================] - 2s 160ms/step - loss: 0.0252 - rmspe_tf: 0.1948 - val_loss: 0.0534 - val_rmspe_tf: 0.2788
Epoch 24/25
13/13 [==============================] - 2s 171ms/step - loss: 0.0267 - rmspe_tf: 0.1804 - val_loss: 0.0382 - val_rmspe_tf: 0.1884
Epoch 25/25
13/13 [==============================] - 2s 155ms/step - loss: 0.0252 - rmspe_tf: 0.1940 - val_loss: 0.0344 - val_rmspe_tf: 0.1996
CPU times: user 2min 27s, sys: 27.2 s, total: 2min 54s
Wall time: 2min 3s
PyTorch
PyTorch: Preparing Datasets
[19]:
import torch
from nvtabular.loader.torch import TorchAsyncItr, DLDataLoader
from nvtabular.framework_utils.torch.models import Model
from nvtabular.framework_utils.torch.utils import process_epoch
# TorchAsyncItr returns a single batch of x_cat, x_cont, y.
collate_fn = lambda x: x
train_dataset = TorchAsyncItr(nvt.Dataset(TRAIN_PATHS), batch_size=BATCH_SIZE, cats=CATEGORICAL_COLUMNS, conts=CONTINUOUS_COLUMNS, labels=LABEL_COLUMNS)
train_loader = DLDataLoader(train_dataset, batch_size=None, collate_fn=collate_fn, pin_memory=False, num_workers=0)
valid_dataset = TorchAsyncItr(nvt.Dataset(VALID_PATHS), batch_size=BATCH_SIZE, cats=CATEGORICAL_COLUMNS, conts=CONTINUOUS_COLUMNS, labels=LABEL_COLUMNS)
valid_loader = DLDataLoader(valid_dataset, batch_size=None, collate_fn=collate_fn, pin_memory=False, num_workers=0)
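As an optional check (not in the original notebook), each element yielded by these loaders is already a full batch in the form described by the comment above, so we can inspect shapes directly. This sketch assumes the loader hands back the (x_cat, x_cont, y) tuple unchanged, which is what the identity collate_fn is for:
x_cat, x_cont, y = next(iter(train_loader))
print(x_cat.shape, x_cont.shape, y.shape)  # categorical codes, continuous features, labels for one batch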
PyTorch: Defining a Model
[20]:
model = Model(
    embedding_table_shapes=EMBEDDING_TABLE_SHAPES,
    num_continuous=len(CONTINUOUS_COLUMNS),
    emb_dropout=EMBEDDING_DROPOUT_RATE,
    layer_hidden_dims=HIDDEN_DIMS,
    layer_dropout_rates=DROPOUT_RATES,
    max_output=MAX_LOG_SALES_PREDICTION
).to('cuda')
PyTorch: Training
[21]:
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
[22]:
def rmspe_func(y_pred, y):
    "Return y_pred and y to non-log space and compute RMSPE"
    y_pred, y = torch.exp(y_pred) - 1, torch.exp(y) - 1
    pct_var = (y_pred - y) / y
    return (pct_var**2).mean().pow(0.5)
[23]:
%%time
for epoch in range(EPOCHS):
    train_loss, y_pred, y = process_epoch(train_loader, model, train=True, optimizer=optimizer)
    train_rmspe = rmspe_func(y_pred, y)
    valid_loss, y_pred, y = process_epoch(valid_loader, model, train=False)
    valid_rmspe = rmspe_func(y_pred, y)
    print(f'Epoch {epoch:02d}. Train loss: {train_loss:.4f}. Train RMSPE: {train_rmspe:.4f}. Valid loss: {valid_loss:.4f}. Valid RMSPE: {valid_rmspe:.4f}.')
Epoch 00. Train loss: 7.6541. Train RMSPE: 2.3277. Valid loss: 3.8562. Valid RMSPE: 0.8409.
Epoch 01. Train loss: 3.8174. Train RMSPE: 0.8135. Valid loss: 2.6833. Valid RMSPE: 0.7823.
Epoch 02. Train loss: 2.4370. Train RMSPE: 0.7532. Valid loss: 1.6169. Valid RMSPE: 0.6843.
Epoch 03. Train loss: 1.1605. Train RMSPE: 0.6109. Valid loss: 0.6108. Valid RMSPE: 0.4976.
Epoch 04. Train loss: 0.3830. Train RMSPE: 0.4844. Valid loss: 0.2226. Valid RMSPE: 0.4973.
Epoch 05. Train loss: 0.2199. Train RMSPE: 0.6033. Valid loss: 0.1974. Valid RMSPE: 0.5660.
Epoch 06. Train loss: 0.1954. Train RMSPE: 0.5941. Valid loss: 0.1653. Valid RMSPE: 0.4996.
Epoch 07. Train loss: 0.1642. Train RMSPE: 0.5099. Valid loss: 0.1389. Valid RMSPE: 0.4282.
Epoch 08. Train loss: 0.1489. Train RMSPE: 0.4298. Valid loss: 0.1250. Valid RMSPE: 0.3837.
Epoch 09. Train loss: 0.1369. Train RMSPE: 0.4224. Valid loss: 0.1160. Valid RMSPE: 0.3943.
Epoch 10. Train loss: 0.1269. Train RMSPE: 0.4047. Valid loss: 0.1068. Valid RMSPE: 0.3653.
Epoch 11. Train loss: 0.1180. Train RMSPE: 0.4122. Valid loss: 0.1038. Valid RMSPE: 0.3748.
Epoch 12. Train loss: 0.1123. Train RMSPE: 0.3866. Valid loss: 0.0939. Valid RMSPE: 0.3489.
Epoch 13. Train loss: 0.1067. Train RMSPE: 0.3759. Valid loss: 0.0976. Valid RMSPE: 0.3089.
Epoch 14. Train loss: 0.1104. Train RMSPE: 0.4015. Valid loss: 0.0850. Valid RMSPE: 0.3047.
Epoch 15. Train loss: 0.1000. Train RMSPE: 0.3564. Valid loss: 0.0877. Valid RMSPE: 0.3471.
Epoch 16. Train loss: 0.0950. Train RMSPE: 0.3490. Valid loss: 0.0781. Valid RMSPE: 0.3194.
Epoch 17. Train loss: 0.0938. Train RMSPE: 0.3709. Valid loss: 0.0781. Valid RMSPE: 0.2871.
Epoch 18. Train loss: 0.0985. Train RMSPE: 0.3634. Valid loss: 0.1720. Valid RMSPE: 0.3634.
Epoch 19. Train loss: 0.1039. Train RMSPE: 0.3729. Valid loss: 0.0805. Valid RMSPE: 0.3397.
Epoch 20. Train loss: 0.0817. Train RMSPE: 0.3229. Valid loss: 0.0713. Valid RMSPE: 0.2713.
Epoch 21. Train loss: 0.0779. Train RMSPE: 0.3260. Valid loss: 0.0633. Valid RMSPE: 0.2670.
Epoch 22. Train loss: 0.0759. Train RMSPE: 0.3280. Valid loss: 0.0610. Valid RMSPE: 0.2712.
Epoch 23. Train loss: 0.0724. Train RMSPE: 0.3059. Valid loss: 0.0588. Valid RMSPE: 0.2574.
Epoch 24. Train loss: 0.0710. Train RMSPE: 0.3046. Valid loss: 0.0577. Valid RMSPE: 0.2619.
CPU times: user 40.8 s, sys: 14.6 s, total: 55.4 s
Wall time: 44.9 s
fast.ai
fast.ai: Preparing Datasets
TorchAsyncItr maps a symbolic dataset object to cat_features, cont_features, labels PyTorch tensors by iterating through the dataset and concatenating the results.
[24]:
import torch
from nvtabular.loader.torch import TorchAsyncItr, DLDataLoader
from fastai.basic_data import DataBunch
from fastai.tabular import TabularModel
from fastai.basic_train import Learner
from fastai.layers import MSELossFlat
def make_batched_dataloader(paths, columns, batch_size):
    dataset = nvt.Dataset(paths)
    ds_batch_sets = TorchAsyncItr(dataset,
                                  batch_size=batch_size,
                                  cats=CATEGORICAL_COLUMNS,
                                  conts=CONTINUOUS_COLUMNS,
                                  labels=LABEL_COLUMNS)
    return DLDataLoader(
        ds_batch_sets,
        batch_size=None,
        pin_memory=False,
        num_workers=0
    )
# Our examples are of the form (cat, cont, label) whereas fast.ai
# expects ((cat, cont), label).
def collate_fn(x):
    return (x[0], x[1]), x[2]
train_dataset_pt = make_batched_dataloader(TRAIN_PATHS, COLUMNS, BATCH_SIZE)
valid_dataset_pt = make_batched_dataloader(VALID_PATHS, COLUMNS, BATCH_SIZE*4)
databunch = DataBunch(
    train_dataset_pt,
    valid_dataset_pt,
    collate_fn=collate_fn,
    device="cuda"
)
fast.ai: Defining a Model
Next we’ll need to define the inputs that will feed our model and build an architecture on top of them. For now, we’ll just stick to a simple MLP model.
Using FastAI’s TabularModel, we can build an MLP under the hood by defining its high-level characteristics.
[25]:
pt_model = TabularModel(
    emb_szs=list(EMBEDDING_TABLE_SHAPES.values()),
    n_cont=len(CONTINUOUS_COLUMNS),
    out_sz=1,
    layers=HIDDEN_DIMS,
    ps=DROPOUT_RATES,
    use_bn=True,
    emb_drop=EMBEDDING_DROPOUT_RATE,
    y_range=torch.tensor([0.0, MAX_LOG_SALES_PREDICTION]),
)
fast.ai: Training
[26]:
%%time
from fastai.basic_data import DatasetType
from fastai.torch_core import flatten_check
def exp_rmspe(pred, targ):
    "Exp RMSE between `pred` and `targ`."
    pred, targ = flatten_check(pred, targ)
    pred, targ = torch.exp(pred) - 1, torch.exp(targ) - 1
    pct_var = (targ - pred) / targ
    return torch.sqrt((pct_var**2).mean())
opt_func = torch.optim.Adam
loss_func = MSELossFlat()
learner = Learner(databunch, pt_model, opt_func=opt_func, loss_func=loss_func, metrics=[exp_rmspe])
learner.fit(EPOCHS, LEARNING_RATE)
epoch | train_loss | valid_loss | exp_rmspe | time |
---|---|---|---|---|
0 | 13.525331 | 4.873478 | 0.878330 | 00:01 |
1 | 8.483813 | 3.047947 | 0.812673 | 00:01 |
2 | 6.146219 | 2.120484 | 0.755971 | 00:01 |
3 | 4.508721 | 1.058006 | 0.626745 | 00:01 |
4 | 3.234402 | 0.232775 | 0.366768 | 00:01 |
5 | 2.322749 | 0.060692 | 0.261448 | 00:01 |
6 | 1.707432 | 0.058402 | 0.285743 | 00:01 |
7 | 1.278393 | 0.051923 | 0.247533 | 00:01 |
8 | 0.969832 | 0.056696 | 0.230926 | 00:01 |
9 | 0.744710 | 0.056189 | 0.224781 | 00:01 |
10 | 0.578064 | 0.048058 | 0.216876 | 00:01 |
11 | 0.453373 | 0.042783 | 0.213269 | 00:01 |
12 | 0.359234 | 0.041317 | 0.208709 | 00:01 |
13 | 0.287807 | 0.041065 | 0.204405 | 00:01 |
14 | 0.233357 | 0.040674 | 0.201885 | 00:02 |
15 | 0.191616 | 0.037278 | 0.197115 | 00:01 |
16 | 0.159574 | 0.037522 | 0.196429 | 00:01 |
17 | 0.134852 | 0.038437 | 0.194369 | 00:01 |
18 | 0.115685 | 0.040871 | 0.195767 | 00:01 |
19 | 0.100767 | 0.037118 | 0.194891 | 00:01 |
20 | 0.089153 | 0.034571 | 0.187381 | 00:01 |
21 | 0.079943 | 0.034003 | 0.186958 | 00:01 |
22 | 0.072624 | 0.033941 | 0.183455 | 00:01 |
23 | 0.066638 | 0.030964 | 0.179361 | 00:01 |
24 | 0.061784 | 0.029164 | 0.180001 | 00:01 |
CPU times: user 41.6 s, sys: 14.2 s, total: 55.8 s
Wall time: 46.4 s