HPS Initialize


Abbreviated as hps.Init(**kwargs), where hps is the alias for the hierarchical_parameter_server module.

This function initializes HPS for all the deployed models. It can be used explicitly or implicitly. When used explicitly, call the function exactly once, before any other HPS APIs. When used implicitly, specify ps_config_file and global_batch_size in the constructor of hps.SparseLookupLayer and hps.LookupLayer; when the layer executes for the first time, it triggers the internal HPS initialization in a thread-safe, call-once manner. Implicit initialization is especially useful for deploying SavedModels that leverage the HPS layers for online inference.

HPS leverages all GPUs that are visible to the current process. Set CUDA_VISIBLE_DEVICES or call tf.config.set_visible_devices to specify which GPUs to use in this process before you launch the TensorFlow runtime and call this function. Additionally, ensure that the deployed_device_list parameter in the HPS configuration JSON file matches the visible devices.
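For example, restricting the process to GPU 0 with the environment variable might look like the following sketch. The hps.Init call is shown only as a comment, and the config file name is illustrative:

```python
import os

# Restrict this process to GPU 0. This must happen before the TensorFlow
# runtime starts, and it must agree with "deployed_device_list": [0] in
# the HPS configuration JSON file.
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

# import tensorflow as tf                      # launched after the variable is set
# import hierarchical_parameter_server as hps
# hps.Init(global_batch_size=16384, ps_config_file="hps.json")  # illustrative kwargs
```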

In TensorFlow 2.x, HPS can be used with tf.distribute.Strategy or Horovod. When it is used with tf.distribute.Strategy, you must call the function under strategy.scope() as shown in the following code block.

import hierarchical_parameter_server as hps

with strategy.scope():
    hps.Init(**kwargs)
To use the function with Horovod, call it once in each Horovod process after hvd.init(), as the following code block shows.

import hierarchical_parameter_server as hps
import horovod.tensorflow as hvd

hvd.init()
hps.Init(**kwargs)
In TensorFlow 1.15, HPS can only work with Horovod. The returned status must be evaluated with sess.run, and this must be the first step before evaluating any other HPS APIs.

import hierarchical_parameter_server as hps
import horovod.tensorflow as hvd
import tensorflow as tf

hvd.init()
hps_init = hps.Init(**kwargs)
with tf.Session() as sess:
    sess.run(hps_init)

Parameters:
kwargs (dict) –

Keyword arguments for this function. The dictionary must contain global_batch_size and ps_config_file.

  • global_batch_size: int, the global batch size for HPS that is deployed on multiple GPUs.

  • ps_config_file: str, the JSON configuration file for HPS initialization.
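A minimal kwargs dictionary satisfying these requirements could look like the following sketch; the file name is illustrative:

```python
# Hypothetical file name; point it at your actual HPS configuration JSON.
kwargs = {
    "global_batch_size": 16384,
    "ps_config_file": "hps_config.json",
}

# hps.Init(**kwargs)  # explicit initialization, called once before any other HPS APIs
```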

An example ps_config_file follows; global_batch_size can be configured as 16384 to match the max_batch_size in the file:

ps_config_file = {
    "supportlonglong": True,
    "models": [
        {
            "model": "demo_model",
            "sparse_files": ["demo_model_sparse.model"],
            "num_of_worker_buffer_in_pool": 3,
            "embedding_vecsize_per_table": [16],
            "maxnum_catfeature_query_per_table_per_sample": [10],
            "default_value_for_each_table": [1.0],
            "deployed_device_list": [0],
            "max_batch_size": 16384,
            "cache_refresh_percentage_per_iteration": 0.2,
            "hit_rate_threshold": 1.0,
            "gpucacheper": 1.0,
            "gpucache": True
        },
        {
            "model": "demo_model2",
            "sparse_files": ["demo_model2_sparse_0.model", "demo_model2_sparse_1.model"],
            "num_of_worker_buffer_in_pool": 3,
            "embedding_table_names": ["sparse_embedding0", "sparse_embedding1"],
            "embedding_vecsize_per_table": [64, 32],
            "maxnum_catfeature_query_per_table_per_sample": [3, 5],
            "default_value_for_each_table": [1.0, 1.0],
            "deployed_device_list": [0],
            "max_batch_size": 16384,
            "cache_refresh_percentage_per_iteration": 0.2,
            "hit_rate_threshold": 1.0,
            "gpucacheper": 1.0,
            "gpucache": True
        }
    ]
}
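Because ps_config_file is passed to hps.Init as a file path, a dictionary like the one above must first be serialized to disk. A minimal sketch, assuming an illustrative file name and a single-model configuration that mirrors the fields shown above:

```python
import json

# Illustrative single-model configuration; mirror the fields shown above.
ps_config = {
    "supportlonglong": True,
    "models": [
        {
            "model": "demo_model",
            "sparse_files": ["demo_model_sparse.model"],
            "embedding_vecsize_per_table": [16],
            "maxnum_catfeature_query_per_table_per_sample": [10],
            "default_value_for_each_table": [1.0],
            "deployed_device_list": [0],
            "max_batch_size": 16384,
            "gpucacheper": 1.0,
            "gpucache": True,
        }
    ],
}

# Write the configuration to disk; Python's True serializes to JSON true.
with open("hps_demo.json", "w") as f:
    json.dump(ps_config, f, indent=4)

# hps.Init(global_batch_size=16384, ps_config_file="hps_demo.json")
```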


Returns:
status – On success, the function returns a string with the value OK.

Return type:
str