Merlin Systems Example Notebooks#

These Jupyter notebooks demonstrate how to use Merlin Systems to deploy models to Triton Inference Server.

Running the Example Notebooks#

Docker containers are available from the NVIDIA GPU Cloud. We use the latest stable version of the merlin-tensorflow container to run the example notebooks. To run the example notebooks using Docker containers, perform the following steps:

  1. If you haven’t already created a Docker volume to share models and datasets between containers, create the volume by running the following command:

    docker volume create merlin-examples

    For the ranking models example, the saved DLRM model created with Merlin Models, the NVTabular workflow, and the processed synthetic Parquet files should be stored in the merlin-examples volume so that they can be mounted into the container for performing inference with Merlin Systems.
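    Before starting the container, you can sanity-check that the artifacts from the ranking models example are present under the volume's mount point. A minimal sketch, assuming a hypothetical layout with `dlrm`, `workflow`, `train`, and `valid` entries (adjust the names to whatever the Merlin Models notebook actually saved):

    ```python
    from pathlib import Path

    # Hypothetical artifact names; change these to match what the
    # Merlin Models notebook wrote into the merlin-examples volume.
    EXPECTED = ["dlrm", "workflow", "train", "valid"]

    def missing_artifacts(base_dir, expected=EXPECTED):
        """Return the expected entries that are absent under base_dir."""
        base = Path(base_dir)
        return [name for name in expected if not (base / name).exists()]
    ```

    Inside the container the volume is mounted at /workspace/data, so `missing_artifacts("/workspace/data")` returning an empty list means all expected artifacts are in place.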

  2. Pull and start the container by running the following command:

    docker run --gpus all --rm -it \
      -p 8888:8888 -p 8797:8787 -p 8796:8786 --ipc=host \
      -v merlin-examples:/workspace/data \
      nvcr.io/nvidia/merlin/merlin-tensorflow:nightly /bin/bash

    In production, specify a release tag instead of the nightly tag. You can find the release tags and more information on the merlin-tensorflow container page.

    The container opens a shell when the run command completes.

  3. Start the JupyterLab server by running the following command:

    jupyter-lab --allow-root --ip='0.0.0.0'

    View the messages in your terminal to identify the URL for JupyterLab. The output includes lines similar to the following:

    Or copy and paste one of these URLs:
        http://<hostname>:8888/lab?token=<token>
  4. Open a browser and go to the URL reported by JupyterLab.

  5. After you log in to JupyterLab, navigate to the /systems/examples directory to try out the example notebooks.
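Once a notebook has exported an ensemble and started Triton Inference Server, you can poll the server's readiness endpoint before sending inference requests. This sketch uses only the standard library and assumes Triton's default HTTP endpoint on localhost port 8000; `/v2/health/ready` is part of the KServe HTTP protocol that Triton implements:

```python
import urllib.error
import urllib.request

def triton_ready(base_url="http://localhost:8000", timeout=2.0):
    """Return True if Triton answers its KServe readiness probe."""
    try:
        # Triton responds 200 on /v2/health/ready once it can serve requests.
        with urllib.request.urlopen(f"{base_url}/v2/health/ready",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False
```

Calling `triton_ready()` in a loop with a short sleep is a simple way to wait for the server before the inference cells run.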