Distributed Embeddings¶
distributed-embeddings is a library for building large embedding-based models (e.g. recommenders) in TensorFlow 2. It provides a scalable model-parallel wrapper that automatically distributes embedding tables across multiple GPUs, as well as efficient embedding operations that cover and extend TensorFlow’s embedding functionality.
Refer to the NVIDIA Developer blog about Terabyte-scale Recommender Training for more details.
Features¶
Distributed Model Parallel Wrapper¶
distributed_embeddings.dist_model_parallel is a tool that enables model-parallel training by changing only three lines of your script. It can also be used alongside data parallelism to form hybrid-parallel training. Users can easily experiment with embeddings beyond a single GPU’s memory capacity without writing complex cross-worker communication code, as in the sketch below.
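As a rough illustration, the sketch below shows how the wrapper might be applied to a list of Keras embedding layers. The toy model, table sizes, and layer names are placeholders, and the import path follows the library’s examples but should be checked against the User Guide.

import tensorflow as tf
import distributed_embeddings.python.layers.dist_model_parallel as dmp

class ToyRecModel(tf.keras.Model):
    """Toy model with one embedding table per categorical feature."""
    def __init__(self, table_sizes, embedding_dim=64):
        super().__init__()
        layers = [tf.keras.layers.Embedding(vocab, embedding_dim)
                  for vocab in table_sizes]
        # Wrap the list of tables; they are sharded across GPUs and the
        # cross-worker communication is handled inside the wrapper.
        self.embeddings = dmp.DistributedEmbedding(layers)
        self.top = tf.keras.layers.Dense(1, activation='sigmoid')

    def call(self, inputs):
        # One categorical input per table (shape (batch,)) goes in,
        # one embedding vector per table comes out.
        outputs = self.embeddings(inputs)
        return self.top(tf.concat(outputs, axis=1))

In a Horovod data-parallel script, the remaining changes are typically swapping hvd.DistributedGradientTape and hvd.broadcast_variables for their dmp counterparts, so that dense weights stay data parallel while embedding tables are sharded.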
Embedding Layers¶
distributed_embeddings.Embedding combines the functionality of tf.keras.layers.Embedding and tf.nn.embedding_lookup_sparse under a unified Keras layer API. The backend is designed to achieve high GPU efficiency.
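As a rough sketch of the unified API, the example below uses the layer both as a dense (one-hot) lookup and as a pooled (multi-hot) lookup; the combiner argument and ragged-input handling reflect the documented behavior but should be treated as assumptions.

import tensorflow as tf
import distributed_embeddings as de

# Dense lookup: same shape behavior as tf.keras.layers.Embedding.
dense = de.Embedding(input_dim=1000, output_dim=16)
ids = tf.constant([[3], [42]])                 # (batch, 1)
vectors = dense(ids)                           # (batch, 1, 16)

# Pooled lookup: reduces a variable number of ids per sample, similar to
# tf.nn.embedding_lookup_sparse with a 'sum' combiner.
pooled = de.Embedding(input_dim=1000, output_dim=16, combiner='sum')
ragged_ids = tf.ragged.constant([[1, 7, 9], [5]], dtype=tf.int64)
pooled_vectors = pooled(ragged_ids)            # (batch, 16)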
Input Mapping Layers¶
distributed_embeddings.IntegerLookup extends tf.keras.layers.IntegerLookup with on-the-fly vocabulary building. This allows users to start training directly from raw input keys, which are mapped to embedding offsets without any offline preprocessing. A highly optimized GPU backend is provided in addition to CPU support.
See the User Guide for more details.
Installation¶
Requirements¶
Python 3, CUDA 11 or newer, TensorFlow 2
Containers¶
You can build inside the 22.03 or later NGC TF2 image:
Note: building v0.3+ requires Horovod v0.27 and TensorFlow 2.10, or alternatively the NGC 23.03 container.
docker pull nvcr.io/nvidia/tensorflow:23.06-tf2-py3
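For example, one assumed way to start an interactive build environment from that image, with the cloned repository mounted, is:

docker run --gpus all -it --rm -v $PWD:/workspace/distributed-embeddings \
    nvcr.io/nvidia/tensorflow:23.06-tf2-py3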
Build from source¶
After cloning this repository, run:
git submodule update --init --recursive
make pip_pkg && pip install artifacts/*.whl
Test installation with:
python -c "import distributed_embeddings"
Feedback and Support¶
If you’d like to contribute to the library directly, see CONTRIBUTING.md. We’re particularly interested in contributions or feature requests for our feature engineering and preprocessing operations. To further advance our Merlin Roadmap, we encourage you to share the details of your recommender system pipeline in this survey.
If you’re interested in learning more about how distributed-embeddings works, see the documentation.