merlin.models.tf.Encoder#

class merlin.models.tf.Encoder(*args, **kwargs)[source]#

Bases: keras.engine.training.Model

Block that can be used for prediction and evaluation, but not for training.

Parameters
  • inputs (Union[Schema, tf.keras.layers.Layer]) – The input block or schema. When a schema is provided, a default input block will be created.

  • *blocks (tf.keras.layers.Layer) – The blocks to use for encoding.

  • pre (Optional[tf.keras.layers.Layer]) – A block to apply before the main blocks.

  • post (Optional[tf.keras.layers.Layer]) – A block to apply after the main blocks.

__init__(inputs: Union[merlin.schema.schema.Schema, keras.engine.base_layer.Layer], *blocks: keras.engine.base_layer.Layer, pre: Optional[keras.engine.base_layer.Layer] = None, post: Optional[keras.engine.base_layer.Layer] = None, **kwargs)[source]#
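
Example (a minimal sketch, not part of the original reference): building an encoder from a schema. It assumes schema comes from a merlin.io.Dataset (for instance train.schema) and uses mm.MLPBlock as the encoding block.

    import merlin.models.tf as mm
    from merlin.schema import Tags

    # schema is assumed to be the schema of a merlin.io.Dataset,
    # e.g. train.schema; only the user features are encoded here.
    user_schema = schema.select_by_tag(Tags.USER)

    # Passing a Schema as `inputs` creates a default input block, which is
    # then followed by the supplied blocks (a small MLP tower here).
    user_encoder = mm.Encoder(
        user_schema,
        mm.MLPBlock([128, 64]),
    )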

Methods

__init__(inputs, *blocks[, pre, post])

add_loss(losses, **kwargs)

Add loss tensor(s), potentially dependent on layer inputs.

add_metric(value[, name])

Adds metric tensor to the layer.

add_update(updates)

Add update op(s), potentially dependent on layer inputs.

add_variable(*args, **kwargs)

Deprecated, do NOT use! Alias for add_weight.

add_weight([name, shape, dtype, ...])

Adds a new variable to the layer.

batch_predict(dataset, batch_size[, ...])

Batched prediction using Dask.

build(input_shape)

build_from_config(config)

call(inputs[, training, testing, targets])

compile([optimizer, loss, metrics, ...])

Configures the model for training.

compile_from_config(config)

compute_loss([x, y, y_pred, sample_weight])

Compute the total loss, validate it, and return it.

compute_mask(inputs[, mask])

Computes an output mask tensor.

compute_metrics(x, y, y_pred, sample_weight)

Update metric states and collect all metrics to be returned.

compute_output_shape(input_shape)

compute_output_signature(input_signature)

Compute the output tensor signature of the layer based on the inputs.

count_params()

Count the total number of scalars composing the weights.

encode(dataset, index, batch_size, **kwargs)

evaluate([x, y, batch_size, verbose, ...])

Returns the loss value & metrics values for the model in test mode.

evaluate_generator(generator[, steps, ...])

Evaluates the model on a data generator.

export(filepath)

Create a SavedModel artifact for inference (e.g. via TF-Serving).

finalize_state()

Finalizes the layers state after updating layer weights.

fit(*args, **kwargs)

Fit model.

fit_generator(generator[, steps_per_epoch, ...])

Fits the model on data yielded batch-by-batch by a Python generator.

from_config(config[, custom_objects])

get_build_config()

get_compile_config()

get_config()

get_input_at(node_index)

Retrieves the input tensor(s) of a layer at a given node.

get_input_mask_at(node_index)

Retrieves the input mask tensor(s) of a layer at a given node.

get_input_shape_at(node_index)

Retrieves the input shape(s) of a layer at a given node.

get_layer([name, index])

Retrieves a layer based on either its name (unique) or index.

get_metrics_result()

Returns the model's metrics values as a dict.

get_output_at(node_index)

Retrieves the output tensor(s) of a layer at a given node.

get_output_mask_at(node_index)

Retrieves the output mask tensor(s) of a layer at a given node.

get_output_shape_at(node_index)

Retrieves the output shape(s) of a layer at a given node.

get_weight_paths()

Retrieve all the variables and their paths for the model.

get_weights()

Retrieves the weights of the model.

load_weights(filepath[, skip_mismatch, ...])

Loads all layer weights from a saved file.

make_predict_function([force])

Creates a function that executes one step of inference.

make_test_function([force])

Creates a function that executes one step of evaluation.

make_train_function([force])

Creates a function that executes one step of training.

predict(x[, batch_size, verbose, steps, ...])

Generates output predictions for the input samples.

predict_generator(generator[, steps, ...])

Generates predictions for the input samples from a data generator.

predict_on_batch(x)

Returns predictions for a single batch of samples.

predict_step(data)

The logic for one inference step.

reset_metrics()

Resets the state of all the metrics in the model.

reset_states()

save(export_path[, include_optimizer, ...])

Saves the model to export_path as a TensorFlow SavedModel.

save_spec([dynamic_batch])

Returns the tf.TensorSpec of call args as a tuple (args, kwargs).

save_weights(filepath[, overwrite, ...])

Saves all layer weights.

set_weights(weights)

Sets the weights of the layer, from NumPy arrays.

summary([line_length, positions, print_fn, ...])

Prints a string summary of the network.

test_on_batch(x[, y, sample_weight, ...])

Test the model on a single batch of samples.

test_step(data)

The logic for one evaluation step.

to_json(**kwargs)

Returns a JSON string containing the network configuration.

to_yaml(**kwargs)

Returns a yaml string containing the network configuration.

train_on_batch(x[, y, sample_weight, ...])

Runs a single gradient update on a single batch of data.

train_step(data)

Train step.

with_name_scope(method)

Decorator to automatically enter the module name scope.

Attributes

activity_regularizer

Optional regularizer function for the output of this layer.

compute_dtype

The dtype of the layer's computations.

distribute_reduction_method

The method employed to reduce per-replica values during training.

distribute_strategy

The tf.distribute.Strategy this model was created under.

dtype

The dtype of the layer weights.

dtype_policy

The dtype policy associated with this layer.

dynamic

Whether the layer is dynamic (eager-only); set in the constructor.

first

has_schema

inbound_nodes

Return Functional API nodes upstream of this layer.

input

Retrieves the input tensor(s) of a layer.

input_mask

Retrieves the input mask tensor(s) of a layer.

input_shape

Retrieves the input shape(s) of a layer.

input_spec

InputSpec instance(s) describing the input format for this layer.

jit_compile

Specify whether to compile the model with XLA.

last

layers

losses

List of losses added using the add_loss() API.

metrics

Return metrics added using compile() or add_metric().

metrics_names

Returns the model's display labels for all outputs.

name

Name of the layer (string), set in the constructor.

name_scope

Returns a tf.name_scope instance for this class.

non_trainable_variables

non_trainable_weights

outbound_nodes

Return Functional API nodes downstream of this layer.

output

Retrieves the output tensor(s) of a layer.

output_mask

Retrieves the output mask tensor(s) of a layer.

output_shape

Retrieves the output shape(s) of a layer.

run_eagerly

Settable attribute indicating whether the model should run eagerly.

schema

state_updates

Deprecated, do NOT use!

stateful

submodules

Sequence of all sub-modules.

supports_masking

Whether this layer supports computing a mask using compute_mask.

to_call

trainable

trainable_variables

trainable_weights

updates

variable_dtype

Alias of Layer.dtype, the dtype of the weights.

variables

Returns the list of all layer variables/weights.

weights

Returns the list of all layer variables/weights.

encode(dataset: merlin.io.dataset.Dataset, index: Union[str, merlin.schema.schema.ColumnSchema, merlin.schema.schema.Schema, merlin.schema.tags.Tags], batch_size: int, **kwargs) merlin.io.dataset.Dataset[source]#
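
Example (a hedged sketch, not part of the original reference): computing one embedding per row, indexed by the item-id column. It assumes item_encoder is an Encoder over item features and item_dataset is a merlin.io.Dataset; the variable names are illustrative.

    from merlin.schema import Tags

    # Encode every row of item_dataset and index the resulting
    # embeddings by the item-id column.
    item_embeddings = item_encoder.encode(
        item_dataset,
        index=Tags.ITEM_ID,
        batch_size=1024,
    )

    # The result is a merlin.io.Dataset backed by Dask.
    item_embeddings.to_ddf().head()
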
batch_predict(dataset: merlin.io.dataset.Dataset, batch_size: int, output_schema: Optional[merlin.schema.schema.Schema] = None, index: Optional[Union[str, merlin.schema.schema.ColumnSchema, merlin.schema.schema.Schema, merlin.schema.tags.Tags]] = None, **kwargs) merlin.io.dataset.Dataset[source]#

Batched prediction using Dask.

Parameters
  • dataset (merlin.io.Dataset) – Dataset to predict on.

  • batch_size (int) – Batch size to use for prediction.

Return type

merlin.io.Dataset
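
Example (a hedged sketch, not part of the original reference), using the same assumed user_encoder and an assumed merlin.io.Dataset of evaluation data:

    # Run batched inference with Dask; the result is again a
    # merlin.io.Dataset, so it can be materialized lazily.
    predictions = user_encoder.batch_predict(
        eval_dataset,
        batch_size=2048,
    )
    predictions_df = predictions.to_ddf().compute()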

call(inputs, training=False, testing=False, targets=None, **kwargs)[source]#
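
Example (a hedged sketch, not part of the original reference): calling the encoder directly on an in-memory batch. The mm.sample_batch helper and its keyword arguments are assumptions; check the loader utilities of your merlin.models version.

    import merlin.models.tf as mm

    # Pull one batch of raw features from the dataset (no targets are
    # needed for a pure encoder) and run a forward pass.
    batch = mm.sample_batch(item_dataset, batch_size=32, include_targets=False)
    embeddings = item_encoder(batch)
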
build(input_shape)[source]#
compute_output_shape(input_shape)[source]#
train_step(data)[source]#

Train step.

fit(*args, **kwargs)[source]#

Fit model.

save(export_path: Union[str, os.PathLike], include_optimizer=True, save_traces=True) None[source]#

Saves the model to export_path as a TensorFlow SavedModel, along with the Merlin model metadata; see the example below the parameter list.

Parameters
  • export_path (Union[str, os.PathLike]) – Path to which the model will be saved.

  • include_optimizer (bool, optional) – If False, the optimizer state is not saved. Defaults to True.

  • save_traces (bool, optional) – When enabled, stores the function traces for each layer. This can be disabled so that only the config of each layer is stored. Defaults to True.
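
Example (a hedged sketch, not part of the original reference): saving the encoder and reloading it for inference. Reloading via tf.keras.models.load_model is an assumption here; consult the Merlin Models serialization docs for the supported reload path.

    import tensorflow as tf

    # Save the encoder as a TensorFlow SavedModel together with its
    # Merlin schema metadata.
    user_encoder.save("/tmp/user_encoder")

    # Assumption: the artifact can be reloaded with plain Keras loading,
    # since Merlin layers register themselves for Keras serialization.
    reloaded = tf.keras.models.load_model("/tmp/user_encoder")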

property to_call#
property has_schema: bool#
property schema: merlin.schema.schema.Schema#
property first#
property last#
classmethod from_config(config, custom_objects=None)[source]#
get_config()[source]#