# Copyright 2021 NVIDIA Corporation. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

# Each user is responsible for checking the content of datasets and the
# applicable licenses and determining if suitable for the intended use.

Getting Started MovieLens: Training with HugeCTR#

This notebook is created using the latest stable merlin-hugectr container.

Overview#

In this notebook, we provide an overview of what the HugeCTR framework is, along with its features and benefits. We will use HugeCTR to train a basic neural network architecture.

Learning Objectives:

  • Use the NVTabular workflow to provide input files to HugeCTR

  • Define a HugeCTR neural network architecture

  • Train a deep learning model with HugeCTR

Why use HugeCTR?#

HugeCTR is a GPU-accelerated recommender framework designed to distribute training across multiple GPUs and nodes and estimate Click-Through Rates (CTRs).

HugeCTR offers multiple advantages to train deep learning recommender systems:

  1. Speed: HugeCTR is a highly efficient framework written in C++; we have experienced speed-ups of up to 10x. On an NVIDIA DGX A100 system, HugeCTR proved to be the fastest commercially available solution for training the Deep Learning Recommendation Model (DLRM) architecture developed by Facebook.

  2. Scale: HugeCTR supports model parallel scaling. It distributes the large embedding tables over multiple GPUs or multiple nodes.

  3. Easy to use: HugeCTR provides an easy-to-use Python API similar to Keras, and examples for popular deep learning recommender architectures (Wide&Deep, DLRM, DCN, DeepFM) are available.

Other Features of HugeCTR#

HugeCTR is designed to scale deep learning models for recommender systems. It also provides a number of other important features:

  • Model oversubscription for training embedding tables that do not fit in GPU or CPU memory on a single node (only the embeddings required for each batch are prefetched from a parameter server)

  • Asynchronous and multithreaded data pipelines

  • A highly optimized data loader

  • Support for data formats such as Parquet and binary

  • Integration with Triton Inference Server for deployment to production

Getting Started#

In this example, we will train a neural network with HugeCTR. We will use the preprocessed datasets generated with NVTabular in the 02-ETL-with-NVTabular notebook.

# External dependencies
import os
import nvtabular as nvt
/usr/local/lib/python3.8/dist-packages/tqdm/auto.py:22: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
  from .autonotebook import tqdm as notebook_tqdm

We define our base directory, containing the data.

# path to preprocessed data
INPUT_DATA_DIR = os.environ.get(
    "INPUT_DATA_DIR", os.path.expanduser("/workspace/nvt-examples/movielens/data/")
)

Let’s load our saved workflow from the 02-ETL-with-NVTabular notebook.

workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))
workflow.output_dtypes
{'userId': dtype('int64'),
 'movieId': dtype('int64'),
 'genres': dtype('int64'),
 'rating': dtype('int8')}

Note: We do not have any numerical output columns.
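
Since the training script below needs the cardinalities of the categorical columns, it can be helpful to inspect the embedding sizes NVTabular derived for this workflow. A minimal sketch, assuming the workflow was loaded as above; these cardinalities reappear as the slot_size_array in the HugeCTR data reader later on.

from nvtabular.ops import get_embedding_sizes

# Maps each categorical column (userId, movieId, genres) to a tuple of
# (cardinality, suggested embedding dimension).
embeddings = get_embedding_sizes(workflow)
embeddings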

Let’s clear the existing output directory, should it exist from previous runs, and create the output folders.

MODEL_DIR = os.path.join(INPUT_DATA_DIR, "model/movielens_hugectr/")
!rm -rf {MODEL_DIR}
!mkdir -p {MODEL_DIR}"1"
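
Before moving on, it can be worth verifying that the preprocessed train and valid splits are in place; the HugeCTR data reader we define below consumes the _file_list.txt files that NVTabular wrote next to the Parquet files. A quick, optional check, assuming the default output layout of the previous notebook:

# Optional sanity check: the HugeCTR Parquet reader expects these file lists.
for split in ["train", "valid"]:
    file_list = os.path.join(INPUT_DATA_DIR, split, "_file_list.txt")
    assert os.path.exists(file_list), f"Missing {file_list} - re-run the ETL notebook"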

Scaling Accelerated Training with HugeCTR#

HugeCTR is a deep learning framework dedicated to recommendation systems and written in CUDA C++. Because HugeCTR optimizes training in CUDA C++, we define the training pipeline and model architecture in a script and execute it from the command line. We will use the Python API, which is similar to Keras models.

For more information on HugeCTR please consult the HugeCTR repository.

Let’s define our model#

Let’s define our model. We will write the model to ./train_hugeCTR.py and execute it afterwards.

%%writefile train_hugeCTR.py

# External dependencies
import os
import nvtabular as nvt
from nvtabular.ops import get_embedding_sizes
import hugectr
from mpi4py import MPI  

# path to preprocessed data
INPUT_DATA_DIR = os.environ.get(
    "INPUT_DATA_DIR", os.path.expanduser("/workspace/nvt-examples/movielens/data/")
)

MODEL_DIR = os.path.join(INPUT_DATA_DIR, "model/movielens_hugectr/")

workflow = nvt.Workflow.load(os.path.join(INPUT_DATA_DIR, "workflow"))

embeddings = get_embedding_sizes(workflow)

# Solver: train on a single GPU (vvgpu=[[0]]) with a batch size of 2048 for
# training and evaluation, 64-bit categorical keys, full (FP32) precision,
# and repeat_dataset=True so training runs for a fixed number of iterations.
solver = hugectr.CreateSolver(
    vvgpu=[[0]],
    batchsize=2048,
    batchsize_eval=2048,
    max_eval_batches=160,
    i64_input_key=True,
    use_mixed_precision=False,
    repeat_dataset=True,
)
optimizer = hugectr.CreateOptimizer(optimizer_type=hugectr.Optimizer_t.Adam)
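# Parquet reader: the _file_list.txt files were written by NVTabular during ETL;
# slot_size_array holds the embedding table sizes (cardinalities) of the
# categorical columns userId, movieId and genres, in that order.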
reader = hugectr.DataReaderParams(
    data_reader_type=hugectr.DataReaderType_t.Parquet,
    source=[INPUT_DATA_DIR + "train/_file_list.txt"],
    eval_source=INPUT_DATA_DIR + "valid/_file_list.txt",
    check_type=hugectr.Check_t.Non,
    slot_size_array=[162542, 56586, 21],
)


model = hugectr.Model(solver, reader, optimizer)

# Input: one binary label ("rating"), no dense (continuous) features, and a
# single sparse input group with 3 slots (userId, movieId, genres); genres is
# a multi-hot list column, hence nnz_per_slot=10 with is_fixed_length=False.
model.add(
    hugectr.Input(
        label_dim=1,
        label_name="label",
        dense_dim=0,
        dense_name="dense",
        data_reader_sparse_param_array=[
            hugectr.DataReaderSparseParam("data1", nnz_per_slot=10, is_fixed_length=False, slot_num=3)
        ],
    )
)
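# Embedding: hashes the categorical keys into 16-dimensional vectors and
# sum-combines multi-hot values within a slot; workspace_size_per_gpu_in_mb
# caps the embedding memory reserved on the GPU.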
model.add(
    hugectr.SparseEmbedding(
        embedding_type=hugectr.Embedding_t.LocalizedSlotSparseEmbeddingHash,
        workspace_size_per_gpu_in_mb=200,
        embedding_vec_size=16,
        combiner="sum",
        sparse_embedding_name="sparse_embedding1",
        bottom_name="data1",
        optimizer=optimizer,
    )
)
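# Reshape: concatenates the 3 slot embeddings of size 16 into one 48-wide
# vector (3 x 16 = 48) that feeds the fully connected (MLP) layers below.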
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.Reshape,
        bottom_names=["sparse_embedding1"],
        top_names=["reshape1"],
        leading_dim=48,
    )
)
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.InnerProduct,
        bottom_names=["reshape1"],
        top_names=["fc1"],
        num_output=128,
    )
)
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.ReLU,
        bottom_names=["fc1"],
        top_names=["relu1"],
    )
)
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.InnerProduct,
        bottom_names=["relu1"],
        top_names=["fc2"],
        num_output=128,
    )
)
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.ReLU,
        bottom_names=["fc2"],
        top_names=["relu2"],
    )
)
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.InnerProduct,
        bottom_names=["relu2"],
        top_names=["fc3"],
        num_output=1,
    )
)
model.add(
    hugectr.DenseLayer(
        layer_type=hugectr.Layer_t.BinaryCrossEntropyLoss,
        bottom_names=["fc3", "label"],
        top_names=["loss"],
    )
)

model.compile()
model.summary()
# Train for 2000 iterations, log every 100, evaluate every 200, and write a
# snapshot (the *.model files used below) at iteration 1900.
model.fit(max_iter=2000, display=100, eval_interval=200, snapshot=1900)
# Export the network graph as JSON so the trained model can be served later.
model.graph_to_json(graph_config_file=MODEL_DIR + "1/movielens.json")
Overwriting train_hugeCTR.py

Now run the script we wrote above from a terminal using the following command:

python train_hugeCTR.py
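
The script reads the data location from the INPUT_DATA_DIR environment variable (falling back to the default path above), so you can point it at a different directory without editing the file, for example:

INPUT_DATA_DIR=/path/to/movielens/data python train_hugeCTR.py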

After training terminates, we can see that multiple .model files and folders are generated.

ls *.model
0_opt_sparse_1900.model  _dense_1900.model  _opt_dense_1900.model

0_sparse_1900.model:
emb_vector  key  slot_id

Let’s move these files into the movielens_hugectr folder. When we start the Triton Inference Server, we will be able to point it to that directory and ask it to load our model using the files we move there below.

!mv *.model {MODEL_DIR}
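
As an optional check, you can confirm that the snapshot files and the exported model graph ended up under MODEL_DIR; a small sketch (the exact file names depend on the snapshot iteration, 1900 in the script above):

import glob

# Snapshot files moved above, plus the "1" subfolder created earlier
print(sorted(glob.glob(os.path.join(MODEL_DIR, "*"))))
# The network graph exported by graph_to_json
print(sorted(glob.glob(os.path.join(MODEL_DIR, "1", "*.json"))))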