merlin.models.tf.ItemRetrievalScorer
-
class merlin.models.tf.ItemRetrievalScorer(*args, **kwargs)[source]
Bases: merlin.models.tf.core.base.Block
Block for ItemRetrieval, which expects query/user and item embeddings as input and uses dot product to score the positive item (inputs["item"]) and also sampled negative items (during training).
- Parameters
samplers (List[ItemSampler], optional) – List of item samplers that provide negative samples when training=True
sampling_downscore_false_negatives (bool, optional) – Identify false negatives (sampled item ids equal to the positive item) and downscore them to the sampling_downscore_false_negatives_value, by default True
sampling_downscore_false_negatives_value (int, optional) – Value to be used to downscore false negatives when sampling_downscore_false_negatives=True, by default np.finfo(np.float32).min / 100.0
item_id_feature_name (str) – Name of the column containing the item ids. Defaults to item_id
query_name (str) – Identify query tower for query/user embeddings, by default 'query'
item_name (str) – Identify item tower for item embeddings, by default 'item'
cache_query (bool) – Add query embeddings to the context block, by default False
sampled_softmax_mode (bool) – Use sampled softmax for scoring, by default False
store_negative_ids (bool) – Returns negative items ids as part of the output, by default False
-
__init__(samplers: Sequence[merlin.models.tf.blocks.sampling.base.ItemSampler] = (), sampling_downscore_false_negatives=True, sampling_downscore_false_negatives_value: float = -655.04, item_id_feature_name: str = 'item_id', item_domain: str = 'item_id', query_name: str = 'query', item_name: str = 'item', cache_query: bool = False, sampled_softmax_mode: bool = False, store_negative_ids: bool = False, **kwargs)[source]
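Example (a minimal construction sketch, not part of the original docs): wiring the scorer with an in-batch negative sampler. The variable names are illustrative, and in practice the block is normally composed inside a retrieval model rather than used standalone.

    # Hypothetical construction sketch; names and values are illustrative.
    import merlin.models.tf as mm

    # InBatchSampler draws negative items from the other examples in the batch.
    scorer = mm.ItemRetrievalScorer(
        samplers=[mm.InBatchSampler()],           # negatives are used when training=True
        sampling_downscore_false_negatives=True,  # mask sampled ids that match the positive item
        item_id_feature_name="item_id",
        store_negative_ids=False,
    )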
Methods
__init__([samplers, …])
add_loss(losses, **kwargs) – Add loss tensor(s), potentially dependent on layer inputs.
add_metric(value[, name]) – Adds metric tensor to the layer.
add_update(updates) – Add update op(s), potentially dependent on layer inputs.
add_variable(*args, **kwargs) – Deprecated, do NOT use! Alias for add_weight.
add_weight([name, shape, dtype, …]) – Adds a new variable to the layer.
as_tabular([name])
build(input_shapes)
build_from_config(config)
call(inputs[, training, testing]) – Based on the user/query embedding (inputs[self.query_name]), uses dot product to score the positive item.
call_outputs(outputs[, features, training, …]) – Based on the user/query embedding (inputs[self.query_name]), uses dot product to score the positive item and sampled negative items.
check_schema([schema])
compute_mask(inputs[, mask]) – Computes an output mask tensor.
compute_output_shape(input_shape) – Computes the output shape of the layer.
compute_output_signature(input_signature) – Compute the output tensor signature of the layer based on the inputs.
connect(*block[, block_name, context]) – Connect the block to other blocks sequentially.
connect_branch(*branches[, add_rest, post, …]) – Connect the block to one or multiple branches.
connect_debug_block([append]) – Connect the block to a debug block.
connect_with_residual(block[, activation]) – Connect the block to other blocks sequentially with a residual connection.
connect_with_shortcut(block[, …]) – Connect the block to other blocks sequentially with a shortcut connection.
copy()
count_params() – Count the total number of scalars composing the weights.
finalize_state() – Finalizes the layers state after updating layer weights.
from_config(config)
from_layer(layer)
get_build_config()
get_input_at(node_index) – Retrieves the input tensor(s) of a layer at a given node.
get_input_mask_at(node_index) – Retrieves the input mask tensor(s) of a layer at a given node.
get_input_shape_at(node_index) – Retrieves the input shape(s) of a layer at a given node.
get_item_ids_from_inputs(inputs)
get_output_at(node_index) – Retrieves the output tensor(s) of a layer at a given node.
get_output_mask_at(node_index) – Retrieves the output mask tensor(s) of a layer at a given node.
get_output_shape_at(node_index) – Retrieves the output shape(s) of a layer at a given node.
get_padding_mask_from_item_id(inputs[, …])
get_weights() – Returns the current weights of the layer, as NumPy arrays.
parse(*block)
parse_block(input)
prepare([block, post, aggregation]) – Transform the inputs of this block.
register_features(feature_shapes)
repeat([num]) – Repeat the block num times.
repeat_in_parallel([num, prefix, names, …]) – Repeat the block num times in parallel.
select_by_name(name)
select_by_tag(tags)
set_schema([schema])
set_weights(weights) – Sets the weights of the layer, from NumPy arrays.
with_name_scope(method) – Decorator to automatically enter the module name scope.
Attributes
REQUIRES_SCHEMA
activity_regularizer – Optional regularizer function for the output of this layer.
compute_dtype – The dtype of the layer’s computations.
context
dtype – The dtype of the layer weights.
dtype_policy – The dtype policy associated with this layer.
dynamic – Whether the layer is dynamic (eager-only); set in the constructor.
has_schema
inbound_nodes – Return Functional API nodes upstream of this layer.
input – Retrieves the input tensor(s) of a layer.
input_mask – Retrieves the input mask tensor(s) of a layer.
input_shape – Retrieves the input shape(s) of a layer.
input_spec – InputSpec instance(s) describing the input format for this layer.
losses – List of losses added using the add_loss() API.
metrics – List of metrics added using the add_metric() API.
name – Name of the layer (string), set in the constructor.
name_scope – Returns a tf.name_scope instance for this class.
non_trainable_variables
non_trainable_weights – List of all non-trainable weights tracked by this layer.
outbound_nodes – Return Functional API nodes downstream of this layer.
output – Retrieves the output tensor(s) of a layer.
output_mask – Retrieves the output mask tensor(s) of a layer.
output_shape – Retrieves the output shape(s) of a layer.
registry
schema
stateful
submodules – Sequence of all sub-modules.
supports_masking – Whether this layer supports computing a mask using compute_mask.
trainable
trainable_variables
trainable_weights – List of all trainable weights tracked by this layer.
updates
variable_dtype – Alias of Layer.dtype, the dtype of the weights.
variables – Returns the list of all layer variables/weights.
weights – Returns the list of all layer variables/weights.
-
call(inputs: Union[tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor]], training: bool = True, testing: bool = False, **kwargs) → Union[tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor]][source]
Based on the user/query embedding (inputs[self.query_name]), uses dot product to score the positive item (inputs["item"]). In sampled-softmax mode, logits are computed by multiplying the query vectors and the item embeddings matrix (self.context.get_embedding(self.item_domain)).
- Parameters
inputs (Union[tf.Tensor, TabularData]) – Dict with the query and item embeddings (e.g. {"query": <emb>, "item": <emb>}), where embeddings are 2D tensors (batch size, embedding size)
training (bool, optional) – Flag that indicates whether in training mode, by default True
- Returns
2D tensor with the scores for the positive items. If training=True, the original inputs are returned instead.
- Return type
tf.Tensor
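To make the shapes concrete, the sketch below reproduces the scoring described above in plain TensorFlow. It is an illustration of the computation only, not a call into the Merlin API, and all variable names are made up.

    # Illustration only: the dot-product scoring described above, in plain TensorFlow.
    import tensorflow as tf

    batch_size, dim, num_items = 4, 8, 100
    query_emb = tf.random.normal((batch_size, dim))    # inputs["query"]
    item_emb = tf.random.normal((batch_size, dim))     # inputs["item"] (positive items)
    item_matrix = tf.random.normal((num_items, dim))   # full item-embedding matrix (sampled-softmax mode)

    # Positive-item score: row-wise dot product -> shape (batch_size, 1)
    positive_scores = tf.reduce_sum(query_emb * item_emb, axis=-1, keepdims=True)

    # Sampled-softmax-style logits: query times item-embedding matrix -> shape (batch_size, num_items)
    logits = tf.matmul(query_emb, item_matrix, transpose_b=True)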
-
call_outputs(outputs: merlin.models.tf.core.base.PredictionOutput, features: Dict[str, tensorflow.python.framework.ops.Tensor] = None, training=True, testing=False, **kwargs) → PredictionOutput[source]
Based on the user/query embedding (inputs[self.query_name]), uses dot product to score the positive item and also the sampled negative items (during training).
- Parameters
inputs (TabularData) – Dict with the query and item embeddings (e.g. {"query": <emb>, "item": <emb>}), where embeddings are 2D tensors (batch size, embedding size)
training (bool, optional) – Flag that indicates whether in training mode, by default True
- Returns
all_scores: 2D tensor with the scores for the positive items and, if training=True, for the sampled negative items too. The returned tensor is 2D (batch size, 1 + #negatives).
- Return type
[tf.Tensor, tf.Tensor]
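The sketch below shows, in plain TensorFlow with made-up names, how positive and sampled-negative scores combine into the (batch size, 1 + #negatives) tensor described above; it is an illustration of the shape layout, not the library's implementation.

    # Illustration only: positive plus sampled-negative scoring shapes.
    import tensorflow as tf

    batch_size, dim, num_negatives = 4, 8, 16
    query_emb = tf.random.normal((batch_size, dim))
    positive_item_emb = tf.random.normal((batch_size, dim))
    negative_item_emb = tf.random.normal((num_negatives, dim))  # e.g. embeddings drawn by an ItemSampler

    positive_scores = tf.reduce_sum(query_emb * positive_item_emb, axis=-1, keepdims=True)  # (batch, 1)
    negative_scores = tf.matmul(query_emb, negative_item_emb, transpose_b=True)             # (batch, #negatives)

    all_scores = tf.concat([positive_scores, negative_scores], axis=-1)  # (batch, 1 + #negatives)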