merlin.models.tf.EmbeddingEncoder#
- class merlin.models.tf.EmbeddingEncoder(*args, **kwargs)[source]#
Bases: merlin.models.tf.core.encoder.Encoder
Creates an Encoder from an EmbeddingTable. Typically used with RetrievalModelV2.
- Parameters
schema (Union[ColumnSchema, Schema]) – The ColumnSchema of the column for which the embedding table needs to be created. If a Schema is passed, only the first column is considered
dim (int) – Dimension of the embeddings
embeddings_initializer (Union[str, tf.keras.layers.Layer], optional) – Initializer for the embeddings matrix (see keras.initializers). By default “uniform”
embeddings_regularizer (Union[str, tf.keras.layers.Layer], optional) – Regularizer function applied to the embeddings matrix (see keras.regularizers), by default None
activity_regularizer (Union[str, tf.keras.layers.Layer], optional) – Sets a layer that applies an update to the cost function based on input activity, by default None
embeddings_constraint (Union[str, tf.keras.layers.Layer], optional) – Constraint function applied to the embeddings matrix (see keras.constraints), by default None
mask_zero (bool, optional) – Whether or not the input value 0 is a special “padding” value that should be masked out. This is useful when using recurrent layers which may take variable length input. If this is True, then all subsequent layers in the model need to support masking or an exception will be raised. If mask_zero is set to True, as a consequence, index 0 cannot be used in the vocabulary (input_dim should equal size of vocabulary + 1), by default False
input_length (int, optional) – This argument is required if you are going to connect Flatten then Dense layers upstream (without it, the shape of the dense outputs cannot be computed), by default None
sequence_combiner (Optional[CombinerType], optional) – A string specifying how to combine embedding results for each entry (“mean”, “sqrtn” and “sum” are supported) or a layer, by default None (no combiner used)
trainable (bool, optional) – Whether the layer’s variables should be trainable, by default True
name (str, optional) – String name of the layer, by default None
dtype (optional) – The dtype of the layer’s computations and weights. Can also be a tf.keras.mixed_precision.Policy, which allows the computation and weight dtype to differ. The default of None means to use tf.keras.mixed_precision.global_policy(), which is a float32 policy unless set to a different value.
dynamic (bool, optional) – Set this to True if your layer should only be run eagerly, and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If False, we assume that the layer can safely be used to generate a static computation graph., by default False
embeddings_l2_batch_regularization (Optional[Union[float, Dict[str, float]]], optional) – Factor for L2 regularization of the embedding vectors (computed from the current batch only), by default 0.0
post (Optional[tf.keras.layers.Layer], optional) – An optional layer applied to the embedding outputs (post-processing), by default None
**kwargs – Additional parameters forwarded to the Encoder base class
- __init__(schema: Union[merlin.schema.schema.ColumnSchema, merlin.schema.schema.Schema], dim: int, embeddings_initializer: Optional[Union[str, keras.engine.base_layer.Layer]] = 'uniform', embeddings_regularizer: Optional[Union[str, keras.engine.base_layer.Layer]] = None, activity_regularizer: Optional[Union[str, keras.engine.base_layer.Layer]] = None, embeddings_constraint: Optional[Union[str, keras.engine.base_layer.Layer]] = None, mask_zero: bool = False, input_length: Optional[int] = None, sequence_combiner: Optional[Union[str, keras.engine.base_layer.Layer]] = None, trainable: bool = True, name: Optional[str] = None, dtype=None, dynamic: bool = False, embeddings_l2_batch_regularization: Optional[Union[float, Dict[str, float]]] = 0.0, post: Optional[keras.engine.base_layer.Layer] = None, **kwargs)[source]#
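A minimal construction sketch, assuming a merlin.schema.Schema named schema is already available (for example, from a merlin.io.Dataset) and contains a column tagged Tags.ITEM_ID; the column selection and embedding dimension below are illustrative only:

    import merlin.models.tf as mm
    from merlin.schema import Tags

    # `schema` is assumed to exist, e.g. schema = Dataset("train.parquet").schema.
    # select_by_tag returns a sub-schema; EmbeddingEncoder uses only its first column.
    item_schema = schema.select_by_tag(Tags.ITEM_ID)

    # 64-dimensional embedding encoder for the item-id column.
    item_encoder = mm.EmbeddingEncoder(
        item_schema,
        dim=64,
        embeddings_initializer="uniform",
    )

In a retrieval setting, an encoder built this way is typically passed as one of the towers of RetrievalModelV2, as noted in the class description.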
Methods
__init__(schema, dim[, ...])
add_loss(losses, **kwargs) - Add loss tensor(s), potentially dependent on layer inputs.
add_metric(value[, name]) - Adds metric tensor to the layer.
add_update(updates) - Add update op(s), potentially dependent on layer inputs.
add_variable(*args, **kwargs) - Deprecated, do NOT use! Alias for add_weight.
add_weight([name, shape, dtype, ...]) - Adds a new variable to the layer.
batch_predict(dataset, batch_size[, ...]) - Batched prediction using Dask.
build(input_shape) - Creates the variables of the layer.
build_from_config(config)
call(inputs, *[, targets, training, testing]) - Calls the model on new inputs and returns the outputs as tensors.
compile([optimizer, loss, metrics, ...]) - Configures the model for training.
compile_from_config(config)
compute_loss([x, y, y_pred, sample_weight]) - Compute the total loss, validate it, and return it.
compute_mask(inputs[, mask]) - Computes an output mask tensor.
compute_metrics(x, y, y_pred, sample_weight) - Update metric states and collect all metrics to be returned.
compute_output_shape(input_shape) - Computes the output shape of the layer.
compute_output_signature(input_signature) - Compute the output tensor signature of the layer based on the inputs.
count_params() - Count the total number of scalars composing the weights.
encode(dataset, index, batch_size, **kwargs) - Encodes the given dataset and index.
evaluate([x, y, batch_size, verbose, ...]) - Returns the loss value & metrics values for the model in test mode.
evaluate_generator(generator[, steps, ...]) - Evaluates the model on a data generator.
export(filepath) - Create a SavedModel artifact for inference (e.g. via TF-Serving).
finalize_state() - Finalizes the layer's state after updating layer weights.
fit(*args, **kwargs) - Fits the model.
fit_generator(generator[, steps_per_epoch, ...]) - Fits the model on data yielded batch-by-batch by a Python generator.
from_config(config[, custom_objects]) - Creates a new instance of the class by deserializing.
get_build_config()
get_compile_config()
get_config() - Returns the configuration of the model as a dictionary.
get_input_at(node_index) - Retrieves the input tensor(s) of a layer at a given node.
get_input_mask_at(node_index) - Retrieves the input mask tensor(s) of a layer at a given node.
get_input_shape_at(node_index) - Retrieves the input shape(s) of a layer at a given node.
get_layer([name, index]) - Retrieves a layer based on either its name (unique) or index.
get_metrics_result() - Returns the model's metrics values as a dict.
get_output_at(node_index) - Retrieves the output tensor(s) of a layer at a given node.
get_output_mask_at(node_index) - Retrieves the output mask tensor(s) of a layer at a given node.
get_output_shape_at(node_index) - Retrieves the output shape(s) of a layer at a given node.
get_weight_paths() - Retrieve all the variables and their paths for the model.
get_weights() - Retrieves the weights of the model.
load_weights(filepath[, skip_mismatch, ...]) - Loads all layer weights from a saved file.
make_predict_function([force]) - Creates a function that executes one step of inference.
make_test_function([force]) - Creates a function that executes one step of evaluation.
make_train_function([force]) - Creates a function that executes one step of training.
predict(x[, batch_size, verbose, steps, ...]) - Generates output predictions for the input samples.
predict_generator(generator[, steps, ...]) - Generates predictions for the input samples from a data generator.
predict_on_batch(x) - Returns predictions for a single batch of samples.
predict_step(data) - The logic for one inference step.
reset_metrics() - Resets the state of all the metrics in the model.
reset_states()
save(export_path[, include_optimizer, ...]) - Saves the model to export_path as a TensorFlow SavedModel.
save_spec([dynamic_batch]) - Returns the tf.TensorSpec of call args as a tuple (args, kwargs).
save_weights(filepath[, overwrite, ...]) - Saves all layer weights.
set_weights(weights) - Sets the weights of the layer, from NumPy arrays.
summary([line_length, positions, print_fn, ...]) - Prints a string summary of the network.
test_on_batch(x[, y, sample_weight, ...]) - Test the model on a single batch of samples.
test_step(data) - The logic for one evaluation step.
to_dataset([gpu])
to_json(**kwargs) - Returns a JSON string containing the network configuration.
to_yaml(**kwargs) - Returns a YAML string containing the network configuration.
train_on_batch(x[, y, sample_weight, ...]) - Runs a single gradient update on a single batch of data.
train_step(data) - Performs a training step.
with_name_scope(method) - Decorator to automatically enter the module name scope.
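As a sketch of how the batch-encoding helper listed above might be used, assuming the item_encoder from the earlier example and an item-level merlin.io.Dataset; the file path and the use of a tag as the index argument are assumptions for illustration:

    from merlin.io import Dataset
    from merlin.schema import Tags

    # One row per item; the path is illustrative.
    item_features = Dataset("item_features.parquet")

    # encode() iterates over the dataset in batches and returns a merlin.io.Dataset
    # of embedding vectors keyed by the column selected via `index`.
    item_embeddings = item_encoder.encode(
        item_features,
        index=Tags.ITEM_ID,
        batch_size=1024,
    )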
Attributes
activity_regularizer - Optional regularizer function for the output of this layer.
compute_dtype - The dtype of the layer's computations.
distribute_reduction_method - The method employed to reduce per-replica values during training.
distribute_strategy - The tf.distribute.Strategy this model was created under.
dtype - The dtype of the layer weights.
dtype_policy - The dtype policy associated with this layer.
dynamic - Whether the layer is dynamic (eager-only); set in the constructor.
first - Returns the first block of the model.
has_schema - Returns True as this class does contain a schema.
inbound_nodes - Return Functional API nodes upstream of this layer.
input - Retrieves the input tensor(s) of a layer.
input_mask - Retrieves the input mask tensor(s) of a layer.
input_shape - Retrieves the input shape(s) of a layer.
input_spec - InputSpec instance(s) describing the input format for this layer.
jit_compile - Specify whether to compile the model with XLA.
last - Returns the last block of the model.
layers
losses - List of losses added using the add_loss() API.
metrics - Return metrics added using compile() or add_metric().
metrics_names - Returns the model's display labels for all outputs.
name - Name of the layer (string), set in the constructor.
name_scope - Returns a tf.name_scope instance for this class.
non_trainable_variables
non_trainable_weights
outbound_nodes - Return Functional API nodes downstream of this layer.
output - Retrieves the output tensor(s) of a layer.
output_mask - Retrieves the output mask tensor(s) of a layer.
output_shape - Retrieves the output shape(s) of a layer.
run_eagerly - Settable attribute indicating whether the model should run eagerly.
schema - Returns the schema of the model.
state_updates - Deprecated, do NOT use!
stateful
submodules - Sequence of all sub-modules.
supports_masking - Whether this layer supports computing a mask using compute_mask.
to_call - Provides the list of blocks to be called during the execution of the model.
trainable
trainable_variables
trainable_weights
updates
variable_dtype - Alias of Layer.dtype, the dtype of the weights.
variables - Returns the list of all layer variables/weights.
weights - Returns the list of all layer variables/weights.
- to_dataset(gpu=None) → merlin.io.dataset.Dataset[source]#
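A short sketch of materializing the embedding table via to_dataset, assuming the item_encoder from the examples above; reading the result back through to_ddf()/compute() follows the general merlin.io.Dataset API, and the column layout of the exported table is an assumption:

    # Export the embedding weights as a merlin.io.Dataset; gpu=False requests
    # CPU-backed output (assumption based on the gpu flag above).
    embeddings_ds = item_encoder.to_dataset(gpu=False)

    # Materialize as a dataframe for inspection or downstream indexing.
    embeddings_df = embeddings_ds.to_ddf().compute()
    print(embeddings_df.head())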