Criteo Example
Here we’ll show how to use NVTabular, first as a preprocessing library to prepare the Criteo Display Advertising Challenge dataset, and then as a dataloader to train a FastAI model on the prepared data. The large memory footprint of the Criteo dataset presents a great opportunity to highlight the advantages of NVTabular’s online approach to loading and transforming data.
Data Prep
Before we get started, make sure you’ve run the `optimize_criteo notebook <./optimize_criteo.ipynb>`__, which converts the tsv data published by Criteo into the parquet format that our accelerated readers prefer. Fair warning: that notebook takes roughly 4 hours to run. While we’re hoping to release accelerated csv readers in the near future, we also believe that inefficiencies in existing data representations like csv are in no small part a consequence of inefficiencies in the existing hardware/software stack. Accelerating these pipelines on new hardware like GPUs may require new choices about the representations we use to store that data, and parquet represents a strong alternative.
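For a rough sense of what that conversion involves, here’s a minimal sketch of turning one raw Criteo day file into parquet with cuDF. The file paths are hypothetical, and the real optimize_criteo notebook handles dtypes and chunking far more carefully, since a single raw day file can exceed GPU memory:
import cudf

# Criteo's raw files are tab-separated with no header: 1 label, 13 integer and 26 categorical columns
columns = ['label'] + ['I' + str(x) for x in range(1, 14)] + ['C' + str(x) for x in range(1, 27)]

gdf = cudf.read_csv('/data/day_0', sep='\t', names=columns)  # hypothetical input path
gdf.to_parquet('/data/day_0.parquet')                        # hypothetical output path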
Quick Aside: Clearing Cache
The following line is not strictly necessary, but is included for those who want to validate NVIDIA’s benchmarks. We clear the existing page cache so that we start as “fresh” as possible. If you’re having trouble running it, try executing the container with the --privileged flag.
[1]:
!sync; echo 3 > /proc/sys/vm/drop_caches
/bin/sh: 1: cannot create /proc/sys/vm/drop_caches: Read-only file system
[1]:
import os
from time import time
import re
import glob
import warnings
# tools for data preproc/loading
import torch
import rmm
import nvtabular as nvt
from nvtabular.ops import Normalize, FillMissing, Categorify, Moments, Median, Encoder, LogOp, ZeroFill
from nvtabular.torch_dataloader import FileItrDataset, DLCollator, DLDataLoader
# tools for training
from fastai.basic_train import Learner
from fastai.basic_data import DataBunch
from fastai.tabular import TabularModel
from fastai.metrics import accuracy
Initializing the Memory Pool
For applications like the one that follows, where RAPIDS will be the only heavy user of GPU memory and resources, a good best practice is to use the RAPIDS Memory Manager library rmm to allocate a dedicated pool of GPU memory, which allows for fast, asynchronous memory management. Here, we’ll dedicate 80% of free GPU memory to this pool to make sure we get the most utilization possible.
[2]:
rmm.reinitialize(pool_allocator=True, initial_pool_size=0.8 * rmm.get_info().free)
[2]:
0
Dataset and Dataset Schema
Once our data is ready, we’ll define some parameters describing where it lives and what it “looks like” at a high level.
[3]:
# define some information about where to get our data
INPUT_DATA_DIR = os.environ.get('INPUT_DATA_DIR', '/data')
OUTPUT_DATA_DIR = os.environ.get('OUTPUT_DATA_DIR', '/processed') # where we'll save our processed data
NUM_TRAIN_DAYS = 23 # number of days' worth of data to use for training, the rest will be used for validation
# define our dataset schema
CONTINUOUS_COLUMNS = ['I' + str(x) for x in range(1,14)]
CATEGORICAL_COLUMNS = ['C' + str(x) for x in range(1,27)]
LABEL_COLUMNS = ['label']
COLUMNS = CONTINUOUS_COLUMNS + CATEGORICAL_COLUMNS + LABEL_COLUMNS
[4]:
! ls $INPUT_DATA_DIR
_metadata day_12.parquet day_17.parquet day_21.parquet day_5.parquet
day_0.parquet day_13.parquet day_18.parquet day_22.parquet day_6.parquet
day_1.parquet day_14.parquet day_19.parquet day_23.parquet day_7.parquet
day_10.parquet day_15.parquet day_2.parquet day_3.parquet day_8.parquet
day_11.parquet day_16.parquet day_20.parquet day_4.parquet day_9.parquet
[5]:
fname = 'day_{}.parquet'
num_days = len([i for i in os.listdir(INPUT_DATA_DIR) if re.match(fname.format('[0-9]{1,2}'), i) is not None])
train_paths = [os.path.join(INPUT_DATA_DIR, fname.format(day)) for day in range(NUM_TRAIN_DAYS)]
valid_paths = [os.path.join(INPUT_DATA_DIR, fname.format(day)) for day in range(NUM_TRAIN_DAYS, num_days)]
Preprocessing
At this point, our data still isn’t in a form that’s ideal for consumption by neural networks. The most pressing issues are missing values and the fact that our categorical variables are still represented by arbitrary, discrete identifiers that need to be mapped to contiguous indices before they can be fed to a learned embedding. Less pressing, but still important for learning dynamics, are the distributions of our continuous variables, which span multiple orders of magnitude and are uncentered (i.e. E[x] != 0).
We can fix these issues in a concise and GPU-accelerated manner with an NVTabular Workflow. We’ll instantiate one with our current dataset schema, then symbolically add operations on that schema. By setting all of these Ops to use replace=True, the schema itself will remain unmodified, while the variables represented by each field in the schema will be transformed.
Frequency Thresholding
One interesting thing worth pointing out is that we’re using frequency thresholding in our Categorify op. This handy bit of functionality maps every category that occurs fewer than a threshold number of times in the dataset (here, 15 occurrences) to the same index, keeping the model from overfitting to sparse signals.
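For intuition, here’s a small pandas sketch of the same idea: categories seen fewer than freq_threshold times all collapse into a single “rare” bucket. (Categorify does this on the GPU; the particular index assignment below is illustrative, not the exact encoding NVTabular produces.)
import pandas as pd

s = pd.Series(['a', 'a', 'a', 'b', 'b', 'c'])  # toy categorical column
freq_threshold = 3

counts = s.value_counts()
frequent = counts[counts >= freq_threshold].index  # categories that clear the threshold

# frequent categories get their own index (offset by 1); everything else maps to 0
mapping = {cat: i + 1 for i, cat in enumerate(frequent)}
encoded = s.map(mapping).fillna(0).astype('int64')
print(encoded.tolist())  # [1, 1, 1, 0, 0, 0] -- 'b' and 'c' share the rare bucket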
[6]:
proc = nvt.Workflow(
cat_names=CATEGORICAL_COLUMNS,
cont_names=CONTINUOUS_COLUMNS,
label_name=LABEL_COLUMNS,
to_cpu=False)
# log -> normalize continuous features. Note that doing this in the opposite
# order wouldn't make sense! Note also that we're zero filling continuous
# values before the log: this is a good time to remember that LogOp
# performs log(1+x), not log(x)
proc.add_cont_feature([ZeroFill(), LogOp()])
proc.add_cont_preprocess(Normalize())
# categorification with frequency thresholding
proc.add_cat_preprocess(Categorify(use_frequency=True, freq_threshold=15))
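For intuition, here’s roughly what that continuous-feature pipeline does to a single column, written out as a plain numpy sketch (not the NVTabular implementation). It also shows why the ordering matters: normalizing first would center the data around zero, producing values below -1 for which log(1 + x) is undefined.
import numpy as np

x = np.array([np.nan, 0.0, 3.0, 1200.0, 45000.0])  # a toy continuous feature

x = np.nan_to_num(x, nan=0.0)     # ZeroFill: missing values become 0
x = np.log1p(x)                   # LogOp: log(1 + x), which is safe at x == 0
x = (x - x.mean()) / x.std()      # Normalize: zero mean, unit variance
print(x)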
Now we’ll instantiate dataset iterators to loop through our dataset, which is too large to fit in GPU memory.
[7]:
train_ds_iterator = nvt.dataset(train_paths, engine='parquet', gpu_memory_frac=0.2)
valid_ds_iterator = nvt.dataset(valid_paths, engine='parquet', gpu_memory_frac=0.2)
Now we’ll run them through our workflow: we collect statistics on the training set, then transform both sets and save the results to parquet files.
[8]:
output_train_dir = os.path.join(OUTPUT_DATA_DIR, 'train/')
output_valid_dir = os.path.join(OUTPUT_DATA_DIR, 'valid/')
! mkdir -p $output_train_dir
! mkdir -p $output_valid_dir
For reference, let’s time it to see how long it takes…
[9]:
%%time
proc.apply(train_ds_iterator, apply_offline=True, record_stats=True, shuffle=True, output_path=output_train_dir, num_out_files=35)
CPU times: user 22min 5s, sys: 14min 6s, total: 36min 12s
Wall time: 28min 58s
[10]:
%%time
proc.apply(valid_ds_iterator, apply_offline=True, record_stats=False, shuffle=True, output_path=output_valid_dir, num_out_files=35)
CPU times: user 33.5 s, sys: 24.5 s, total: 58 s
Wall time: 43.5 s
And just like that, we have training and validation sets ready to feed to a model!
Deep Learning
Data Loading
We’ll start by using the parquet files we just created to feed an NVTabular FileItrDataset, which will loop through the files in chunks. First, we’ll reinitialize the memory pool from earlier to free up memory that we can share with PyTorch.
[11]:
rmm.reinitialize(pool_allocator=False)
[11]:
0
[12]:
train_paths = glob.glob(os.path.join(output_train_dir, "*.parquet"))
valid_paths = glob.glob(os.path.join(output_valid_dir, "*.parquet"))
[15]:
train_data_itrs = [FileItrDataset(path, names=COLUMNS, engine='parquet', batch_size=1600000, sep="\t") for path in train_paths]
valid_data_itrs = [FileItrDataset(path, names=COLUMNS, engine='parquet', batch_size=1600000, sep="\t") for path in valid_paths]
Now we chain these iterators together using PyTorch’s ChainDataset.
[16]:
train_dataset = torch.utils.data.ChainDataset(train_data_itrs)
valid_dataset = torch.utils.data.ChainDataset(valid_data_itrs)
NVTabular’s DLCollator maps the output of these iterators into the tensors expected by FastAI. We’ll use it as the collate_fn when we instantiate a DLDataLoader.
[17]:
dlc = DLCollator(preproc=proc, apply_ops=False)
train_dataloader = DLDataLoader(train_dataset, collate_fn=dlc.gdf_col, pin_memory=False, num_workers=0)
valid_dataloader = DLDataLoader(valid_dataset, collate_fn=dlc.gdf_col, pin_memory=False, num_workers=0)
databunch = DataBunch(train_dataloader, valid_dataloader, collate_fn=dlc.gdf_col, device="cuda")
Now we have data ready to be fed to our model online!
Training
One extra handy piece of NVTabular functionality is the ability to use the statistics collected by the Categorify op to define embedding dictionary sizes (i.e. the number of rows of each embedding table). It even includes a heuristic for computing a good embedding size (i.e. the number of columns of each embedding table) based on the number of categories.
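For a sense of what such a heuristic looks like, here’s a sketch modeled on fastai’s emb_sz_rule, which grows the embedding width sub-linearly with the category count and caps it; the exact values NVTabular computes may differ by version:
# hypothetical cardinality-based embedding-size rule, in the spirit of fastai's emb_sz_rule
def embedding_size(n_categories, max_size=600):
    return min(max_size, round(1.6 * n_categories ** 0.56))

for n in (10, 1_000, 100_000, 10_000_000):
    print(n, embedding_size(n))  # rarely-varying columns get narrow embeddings, huge ones are capped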
[18]:
embeddings = [x[1] for x in proc.df_ops['Categorify'].get_emb_sz(proc.stats["categories"], proc.columns_ctx['categorical']['base'])]
[21]:
model = TabularModel(emb_szs=embeddings, n_cont=len(CONTINUOUS_COLUMNS), out_sz=2, layers=[512, 256])
learn = Learner(databunch, model, metrics=[accuracy])
learn.loss_func = torch.nn.CrossEntropyLoss()
[24]:
learning_rate = 1.32e-2
epochs = 1
start = time()
learn.fit_one_cycle(epochs, learning_rate)
t_final = time() - start
print(t_final)
epoch | train_loss | valid_loss | accuracy | time |
---|---|---|---|---|
0 | 0.122840 | 0.121734 | 0.967229 | 2:18:33 |