Merlin Models | Documentation
NVIDIA Merlin Models provides standard models for recommender systems. The models are high-quality implementations that range from classic machine learning models to more advanced deep learning models.
Highlights
TODO
TODO
TODO
Quick tour
TODO
Use cases
TODO—Does this apply?
Installation
TODO—Perhaps something more detailed than the README.md at the root of the repository?
Installing with pip
FIXME
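A typical pip installation would look like the following; the package name `merlin-models` is an assumption and should be confirmed against PyPI:

```shell
# Assumed package name; confirm on PyPI before relying on this.
pip install merlin-models
```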
Installing with conda
FIXME
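A conda installation would likely follow this pattern; the `nvidia` channel and the package name are assumptions to verify in the project README:

```shell
# Assumed channel and package name; confirm in the project README.
conda install -c nvidia merlin-models
```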
Installing with Docker
Merlin Models is installed in the following NVIDIA Merlin Docker containers that are available in the NVIDIA container repository:
| Container Name | Container Location | Functionality |
| --- | --- | --- |
| merlin-tensorflow-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-tensorflow-training | Transformers4Rec, NVTabular, TensorFlow, and the HugeCTR TensorFlow embedding plugin |
| merlin-pytorch-training | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-pytorch-training | Transformers4Rec, NVTabular, and PyTorch |
| merlin-inference | https://ngc.nvidia.com/catalog/containers/nvidia:merlin:merlin-inference | Transformers4Rec, NVTabular, PyTorch, and Triton Inference Server |
Before you can use these Docker containers, you must install the NVIDIA Container Toolkit to provide GPU support for Docker. The NGC links in the preceding table provide more information about how to launch and run the containers.
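With the toolkit installed, launching one of the containers generally follows the pattern below. The `nvcr.io` image path matches the table above, but the `latest` tag is illustrative; look up the current tag on NGC:

```shell
# Illustrative tag; check NGC for the current release tag of the image.
docker run --gpus all --rm -it \
  nvcr.io/nvidia/merlin/merlin-tensorflow-training:latest
```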
Feedback and Support
If you’d like to contribute to the project, see the CONTRIBUTING.md file. We’re interested in contributions or feature requests for our feature engineering and preprocessing operations. To further advance our Merlin Roadmap, we encourage you to share all the details about your Recommender System pipeline in this survey.