Transformers4Rec
Transformers4Rec is a flexible and efficient library for sequential and session-based recommendation that works with PyTorch.
The library works as a bridge between NLP and recommender systems by integrating with one of the most popular NLP frameworks, Hugging Face Transformers, making state-of-the-art Transformer architectures available to RecSys researchers and industry practitioners.
The following figure shows the use of the library in a recommender system. Input data is typically a sequence of interactions such as items that are browsed in a web session or items put in a cart. The library helps you process and model the interactions so that you can output better recommendations for the next item.
Transformers4Rec supports multiple input features and provides configurable building blocks that can be easily combined for custom architectures.
You can build a fully GPU-accelerated pipeline for sequential and session-based recommendation with Transformers4Rec and its smooth integration with other components of NVIDIA Merlin: NVTabular for preprocessing and Triton Inference Server.
Highlights
Winning and SOTA solution: We have leveraged and evolved the Transformers4Rec library to win two recent session-based recommendation competitions: the WSDM WebTour Workshop Challenge 2021, organized by Booking.com, and the SIGIR eCommerce Workshop Data Challenge 2021, organized by Coveo. We have also performed an extensive empirical evaluation of Transformers4Rec for session-based recommendation, which achieved higher accuracy than baseline algorithms, as published in our ACM RecSys’21 paper.
Flexibility: The building blocks are modularized and are compatible with standard PyTorch modules. You can create custom architectures such as multiple towers, multiple heads/tasks, and losses.
Production-ready: Trained models can be exported and served with Triton Inference Server in a single pipeline that includes online feature preprocessing and model inference.
Leverages cutting-edge NLP research: Through the integration with HuggingFace Transformers, more than 64 Transformer architectures (and counting) are available to evaluate for your sequential and session-based recommendation task.
Support for multiple input features: HF Transformers supports only sequences of token IDs as input because it was originally designed for NLP. Because RecSys datasets contain rich feature sets, Transformers4Rec enables the use of HF Transformers with any type of sequential tabular data. The library uses a schema format to configure the input features and automatically creates the necessary layers (e.g., embedding tables, projection layers, and output layers based on the target) without requiring code changes to include new features. Interaction-level and sequence-level input features can be normalized and combined in configurable ways.
Seamless preprocessing and feature engineering: The integration with NVTabular has common preprocessing ops for session-based recommendation and exports a dataset schema compatible with Transformers4Rec, so that input features can be configured automatically.
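To illustrate the kind of preprocessing involved here, the following framework-free Python sketch groups raw interaction events into per-session, chronologically ordered item-ID sequences, which is the shape of input that session-based models consume. This is not NVTabular code, and the field names (`session_id`, `item_id`, `timestamp`) are hypothetical:

```python
from collections import defaultdict

def group_into_sessions(interactions):
    """Group interaction events into per-session item-ID sequences.

    Each interaction is a dict with hypothetical fields:
    session_id, item_id, and timestamp.
    """
    sessions = defaultdict(list)
    for event in interactions:
        sessions[event["session_id"]].append((event["timestamp"], event["item_id"]))
    # Sort each session chronologically and keep only the item IDs.
    return {
        sid: [item for _, item in sorted(events)]
        for sid, events in sessions.items()
    }

interactions = [
    {"session_id": 1, "item_id": 10, "timestamp": 3},
    {"session_id": 1, "item_id": 11, "timestamp": 1},
    {"session_id": 2, "item_id": 20, "timestamp": 5},
]
print(group_into_sessions(interactions))  # {1: [11, 10], 2: [20]}
```

In an actual pipeline, NVTabular performs this grouping and sorting at scale on the GPU and exports the resulting dataset schema for Transformers4Rec.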
Quick tour
To train a model on a dataset, the first step is to provide the schema and use it to construct an input module. For session-based recommendation problems, you typically want to use TabularSequenceFeatures, which merges context features with sequential features. Next, we need to define the prediction task(s); the tasks we provide out of the box can be found here. Then all that’s left is to construct a transformer body and convert it into a model.
The following sample code works with PyTorch:
from transformers4rec import torch as tr
schema: tr.Schema = tr.data.tabular_sequence_testing_data.schema
# Or read schema from disk: tr.Schema().from_json(SCHEMA_PATH)
max_sequence_length, d_model = 20, 64
# Define input module to process tabular input-features
input_module = tr.TabularSequenceFeatures.from_schema(
    schema,
    max_sequence_length=max_sequence_length,
    continuous_projection=d_model,
    aggregation="concat",
    masking="causal",
)
# Define one or multiple prediction-tasks
prediction_tasks = tr.NextItemPredictionTask()
# Define a transformer-config, like the XLNet architecture
transformer_config = tr.XLNetConfig.build(
    d_model=d_model, n_head=4, n_layer=2, total_seq_length=max_sequence_length
)
model: tr.Model = transformer_config.to_torch_model(input_module, prediction_tasks)
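To make the masking="causal" option above concrete, here is a framework-free sketch of how causal language modeling derives next-item prediction targets from a session: each position is trained to predict the item that follows it, so the label sequence is the input sequence shifted by one. This is a simplified illustration, not the library’s internal implementation:

```python
def next_item_targets(session):
    """For a session [i1, i2, ..., iN], causal next-item prediction
    uses inputs [i1, ..., i(N-1)] and shifted labels [i2, ..., iN]."""
    inputs = session[:-1]
    labels = session[1:]
    return inputs, labels

inputs, labels = next_item_targets([10, 11, 12, 13])
print(inputs)  # [10, 11, 12]
print(labels)  # [11, 12, 13]
```

At each position the model may only attend to earlier items, which is what makes this objective suitable for predicting the next interaction at inference time.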
Use cases
Sequential and Session-based recommendation
Traditional recommendation algorithms usually ignore the temporal dynamics and the sequence of interactions when trying to model user behaviour. Generally, the next user interaction is related to the sequence of the user’s previous choices. In some cases, it might even be a repeated purchase or song play. User interests can also be subject to interest drift, as preferences change over time. These challenges are addressed by the sequential recommendation task.

A special case of sequential recommendation is the session-based recommendation task, where you have access only to the short sequence of interactions within the current session. This is very common in online services like e-commerce, news, and media portals, where the user might choose to browse anonymously (and, due to GDPR compliance, no cookies are collected) or might be a new user. This task is also relevant for scenarios where users’ interests change significantly over time depending on their context or intent, so leveraging the current session’s interactions is more promising than relying on older interactions to provide relevant recommendations.
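As a concrete illustration of how a raw click log is segmented into the short sessions this task operates on, the following sketch splits one user’s sorted event timestamps into sessions whenever the gap between consecutive events exceeds an inactivity threshold. The 30-minute threshold is a common heuristic, and the data layout is an assumption for illustration:

```python
def split_sessions(timestamps, gap_seconds=30 * 60):
    """Split a sorted list of event timestamps (in seconds) into sessions,
    starting a new session when the inactivity gap exceeds gap_seconds."""
    sessions = []
    current = []
    for ts in timestamps:
        if current and ts - current[-1] > gap_seconds:
            sessions.append(current)
            current = []
        current.append(ts)
    if current:
        sessions.append(current)
    return sessions

# Events at 0s and 60s, then a two-hour gap, then two more events.
print(split_sessions([0, 60, 7260, 7300]))  # [[0, 60], [7260, 7300]]
```

Each resulting session then becomes one training sequence for a session-based model.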
To deal with sequential and session-based recommendation, many sequence learning algorithms previously applied in machine learning and NLP research have been explored for RecSys, based on k-nearest neighbors, frequent pattern mining, hidden Markov models, recurrent neural networks, and, more recently, neural architectures using the self-attention mechanism and the Transformer architecture.
Unlike Transformers4Rec, existing frameworks for these tasks are generally focused on research, accept only sequences of item IDs as input, and do not provide a modularized and scalable implementation for production usage.
Installation
Installing with pip
You can install Transformers4Rec with the functionality to use the GPU-accelerated NVTabular dataloader.
Installation with the dataloader is highly recommended for better performance.
These components can be installed as optional extras of the pip package. Note that installing NVTabular with pip
currently supports only the CPU version of NVTabular.
PyTorch
pip install transformers4rec[pytorch,nvtabular]
Installing with conda
conda install -c nvidia -c conda-forge transformers4rec
Installing with Docker
The Transformers4Rec library is pre-installed in the NVIDIA Merlin Docker containers.
Refer to the Merlin Containers documentation page for information about the Merlin container names, URLs to the container images on the NVIDIA GPU Cloud catalog, and key Merlin components.
Notebook Examples and Tutorials
The End-to-end pipeline with NVIDIA Merlin page shows how to use Transformers4Rec and other Merlin libraries like NVTabular to build a complete recommender system.
We have several example notebooks to help you build a recommender system or integrate Transformers4Rec into your system:
A getting started example that includes training a session-based model with an XLNET transformer architecture.
An end-to-end example that trains a model and takes the next step to serve inference with Triton Inference Server.
Another end-to-end example that trains and evaluates a session-based model on RNN and also serves inference with Triton Inference Server.
A notebook and scripts that reproduce the experiments presented in a paper for RecSys 2021.
Feedback and Support
If you’d like to contribute to the library directly, see the CONTRIBUTING.md. We’re particularly interested in contributions or feature requests for our feature engineering and preprocessing operations. To further advance our Merlin Roadmap, we encourage you to share all the details regarding your recommender system pipeline in this survey.
If you’re interested in learning more about how Transformers4Rec works, see the Transformers4Rec documentation. We also have API documentation that outlines the specifics of the available modules and classes within the Transformers4Rec library.