merlin.models.tf.DualEncoderBlock
class merlin.models.tf.DualEncoderBlock(*args, **kwargs)
Bases: merlin.models.tf.core.combinators.ParallelBlock
__init__(query_block: merlin.models.tf.core.base.Block, item_block: merlin.models.tf.core.base.Block, pre: Optional[Union[merlin.models.tf.core.base.Block, str, Sequence[str]]] = None, post: Optional[Union[merlin.models.tf.core.base.Block, str, Sequence[str]]] = None, aggregation: Optional[Union[str, merlin.models.tf.core.tabular.TabularAggregation]] = None, schema: Optional[merlin.schema.schema.Schema] = None, name: Optional[str] = None, strict: bool = False, l2_normalization: bool = False, **kwargs)
Prepare the Query and Item towers of a Retrieval block.
Parameters
query_block (Block) – The Block instance that combines the user features.
item_block (Block) – The Block instance that combines the item features.
pre (Optional[BlockType], optional) – Block instance to apply before the call method of the Two-Tower block.
post (Optional[BlockType], optional) – Block instance to apply to both outputs of the Two-Tower block.
aggregation (Optional[TabularAggregationType], optional) – The aggregation operation to apply after processing the call method, to output a single Tensor.
schema (Optional[Schema], optional) – The Schema object with the input features.
name (Optional[str], optional) – Name of the layer.
strict (bool, optional) – If enabled, check that the input of the ParallelBlock instance is a dictionary.
l2_normalization (bool) – Apply L2 normalization to the user and item representations before computing dot interactions. Defaults to False.
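A minimal construction sketch (not taken from the library docs); it assumes a merlin.schema.Schema named schema with USER- and ITEM-tagged features, and that "cosine" is a registered aggregation name in your version of Merlin Models:

import merlin.models.tf as mm
from merlin.schema import Tags

# Build the two towers from the tagged input features
# (schema is assumed to come from e.g. an NVTabular workflow).
query_tower = mm.InputBlock(schema.select_by_tag(Tags.USER)).connect(mm.MLPBlock([128, 64]))
item_tower = mm.InputBlock(schema.select_by_tag(Tags.ITEM)).connect(mm.MLPBlock([128, 64]))

dual_encoder = mm.DualEncoderBlock(
    query_tower,
    item_tower,
    aggregation="cosine",    # assumed aggregation name; collapses the two outputs into a single score
    l2_normalization=True,   # L2-normalize both representations before the dot interaction
)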
Methods
__init__(query_block, item_block[, pre, …]) – Prepare the Query and Item towers of a Retrieval block.
add_branch(name, block)
add_loss(losses, **kwargs) – Add loss tensor(s), potentially dependent on layer inputs.
add_metric(value[, name]) – Adds metric tensor to the layer.
add_update(updates) – Add update op(s), potentially dependent on layer inputs.
add_variable(*args, **kwargs) – Deprecated, do NOT use! Alias for add_weight.
add_weight([name, shape, dtype, …]) – Adds a new variable to the layer.
apply_to_all(inputs[, columns_to_filter])
apply_to_branch(branch_name, *block)
as_tabular([name])
build(input_shape)
build_from_config(config)
calculate_batch_size_from_input_shapes(…)
call(inputs, **kwargs) – The call method for ParallelBlock.
call_outputs(outputs[, training])
check_schema([schema])
compute_call_output_shape(input_shape)
compute_mask(inputs[, mask]) – Computes an output mask tensor.
compute_output_shape(input_shapes)
compute_output_signature(input_signature) – Compute the output tensor signature of the layer based on the inputs.
connect(*block[, block_name, context]) – Connect the block to other blocks sequentially.
connect_branch(*branches[, add_rest, post, …]) – Connect the block to one or multiple branches.
connect_debug_block([append]) – Connect the block to a debug block.
connect_with_residual(block[, activation]) – Connect the block to other blocks sequentially with a residual connection.
connect_with_shortcut(block[, …]) – Connect the block to other blocks sequentially with a shortcut connection.
copy()
count_params() – Count the total number of scalars composing the weights.
finalize_state() – Finalizes the layers state after updating layer weights.
from_config(config[, custom_objects])
from_features(features[, pre, post, …]) – Initializes a TabularLayer instance where the contents of features will be filtered out.
from_layer(layer)
from_schema(schema[, tags, allow_none]) – Instantiate a TabularLayer instance from a DatasetSchema.
get_build_config()
get_config()
get_input_at(node_index) – Retrieves the input tensor(s) of a layer at a given node.
get_input_mask_at(node_index) – Retrieves the input mask tensor(s) of a layer at a given node.
get_input_shape_at(node_index) – Retrieves the input shape(s) of a layer at a given node.
get_item_ids_from_inputs(inputs)
get_output_at(node_index) – Retrieves the output tensor(s) of a layer at a given node.
get_output_mask_at(node_index) – Retrieves the output mask tensor(s) of a layer at a given node.
get_output_shape_at(node_index) – Retrieves the output shape(s) of a layer at a given node.
get_padding_mask_from_item_id(inputs[, …])
get_weights() – Returns the current weights of the layer, as NumPy arrays.
parse(*block)
parse_block(input)
parse_config(config[, custom_objects])
post_call(inputs[, transformations, …]) – Method that’s typically called after the forward method for post-processing.
pre_call(inputs[, transformations]) – Method that’s typically called before the forward method for pre-processing.
prepare([block, post, aggregation]) – Transform the inputs of this block.
register_features(feature_shapes)
repeat([num]) – Repeat the block num times.
repeat_in_parallel([num, prefix, names, …]) – Repeat the block num times in parallel.
repr_add()
repr_extra()
repr_ignore()
select_by_name(name) – Select a parallel block by name.
select_by_names(names) – Select a list of parallel blocks by names.
select_by_tag(tags) – Select layers of parallel blocks by tags.
set_aggregation(value)
set_post(value)
set_pre(value)
set_schema([schema])
set_weights(weights) – Sets the weights of the layer, from NumPy arrays.
super()
with_name_scope(method) – Decorator to automatically enter the module name scope.
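Most of these methods are inherited from ParallelBlock, Block, or the Keras Layer base class. As a minimal sketch of the block-composition helpers, assuming the dual_encoder built in the example above and that the towers are registered under the branch names "query" and "item" (an assumption; confirm against your merlin-models version):

query_branch = dual_encoder.select_by_name("query")   # pick a single parallel branch by name
chained = dual_encoder.connect(mm.MLPBlock([32]))      # chain another block after the paired towers
encoder_copy = dual_encoder.copy()                     # independent copy of the whole block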
Attributes
REQUIRES_SCHEMA
activity_regularizer – Optional regularizer function for the output of this layer.
aggregation – TabularAggregation, optional.
compute_dtype – The dtype of the layer’s computations.
context
dtype – The dtype of the layer weights.
dtype_policy – The dtype policy associated with this layer.
dynamic – Whether the layer is dynamic (eager-only); set in the constructor.
first
has_schema
inbound_nodes – Return Functional API nodes upstream of this layer.
input – Retrieves the input tensor(s) of a layer.
input_mask – Retrieves the input mask tensor(s) of a layer.
input_shape – Retrieves the input shape(s) of a layer.
input_spec – InputSpec instance(s) describing the input format for this layer.
is_input
is_tabular
layers
losses – List of losses added using the add_loss() API.
metrics – List of metrics added using the add_metric() API.
name – Name of the layer (string), set in the constructor.
name_scope – Returns a tf.name_scope instance for this class.
non_trainable_variables
non_trainable_weights – List of all non-trainable weights tracked by this layer.
outbound_nodes – Return Functional API nodes downstream of this layer.
output – Retrieves the output tensor(s) of a layer.
output_mask – Retrieves the output mask tensor(s) of a layer.
output_shape – Retrieves the output shape(s) of a layer.
parallel_dict
parallel_values
post – SequentialTabularTransformations, optional.
pre – SequentialTabularTransformations, optional.
registry
schema
stateful
submodules – Sequence of all sub-modules.
supports_masking – Whether this layer supports computing a mask using compute_mask.
trainable
trainable_variables
trainable_weights – List of all trainable weights tracked by this layer.
updates
variable_dtype – Alias of Layer.dtype, the dtype of the weights.
variables – Returns the list of all layer variables/weights.
weights – Returns the list of all layer variables/weights.