merlin.models.tf.InputBlock

merlin.models.tf.InputBlock(schema: merlin.schema.schema.Schema, branches: typing.Optional[typing.Dict[str, merlin.models.tf.core.base.Block]] = None, pre: typing.Optional[typing.Union[merlin.models.tf.core.base.Block, str, typing.Sequence[str]]] = None, post: typing.Optional[typing.Union[merlin.models.tf.core.base.Block, str, typing.Sequence[str]]] = None, aggregation: typing.Optional[typing.Union[str, merlin.models.tf.core.tabular.TabularAggregation]] = None, seq: bool = False, max_seq_length: typing.Optional[int] = None, add_continuous_branch: bool = True, continuous_tags: typing.Optional[typing.Union[merlin.schema.tags.TagSet, typing.List[str], typing.List[merlin.schema.tags.Tags], typing.List[typing.Union[str, merlin.schema.tags.Tags]], typing.Tuple[merlin.schema.tags.Tags]]] = (<Tags.CONTINUOUS: 'continuous'>,), continuous_projection: typing.Optional[merlin.models.tf.core.base.Block] = None, add_embedding_branch: bool = True, embedding_options: merlin.models.tf.inputs.embedding.EmbeddingOptions = EmbeddingOptions(embedding_dims=None, embedding_dim_default=64, infer_embedding_sizes=False, infer_embedding_sizes_multiplier=2.0, infer_embeddings_ensure_dim_multiple_of_8=False, embeddings_initializers=None, embeddings_l2_reg=0.0, combiner='mean'), categorical_tags: typing.Optional[typing.Union[merlin.schema.tags.TagSet, typing.List[str], typing.List[merlin.schema.tags.Tags], typing.List[typing.Union[str, merlin.schema.tags.Tags]], typing.Tuple[merlin.schema.tags.Tags]]] = (<Tags.CATEGORICAL: 'categorical'>,), sequential_tags: typing.Optional[typing.Union[merlin.schema.tags.TagSet, typing.List[str], typing.List[merlin.schema.tags.Tags], typing.List[typing.Union[str, merlin.schema.tags.Tags]], typing.Tuple[merlin.schema.tags.Tags]]] = (<Tags.SEQUENCE: 'sequence'>,), split_sparse: bool = False, seq_aggregator: merlin.models.tf.core.base.Block = SequenceAggregator(), **kwargs) → merlin.models.tf.core.base.Block

The entry block of the model to process input features from a schema.

This function creates continuous and embedding layers, and connects them via ParallelBlock.

If the aggregation argument is not set, it returns a dictionary of tensors, each corresponding to an input feature. Otherwise, it merges the tensors into one using the aggregation method.

Example usage:

mlp = ml.InputBlock(schema).connect(ml.MLPBlock([64, 32]))
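
As a slightly fuller sketch (assuming a `schema` has already been loaded, for example from the output of an NVTabular workflow), the block can either return a dictionary of per-feature tensors or, with an aggregation such as `"concat"`, a single merged tensor:

```python
import merlin.models.tf as ml

# No aggregation: calling the block yields a dict of tensors, one per input feature.
inputs = ml.InputBlock(schema)

# With aggregation: all features are merged into a single tensor,
# ready to be fed into an MLP tower.
tower = ml.InputBlock(schema, aggregation="concat").connect(ml.MLPBlock([64, 32]))
```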
Parameters

schema: Schema

Schema of the input data. This Schema object can be generated automatically by [NVTabular](https://nvidia-merlin.github.io/NVTabular/main/Introduction.html), or it can be constructed manually; a manual-construction sketch appears after this parameter list.

branches: Dict[str, Block], optional

Dictionary of branches to use inside the InputBlock.

post: Optional[BlockType]

Transformations to apply to the inputs after the module is called (that is, after the forward pass). Defaults to None.

aggregation: Optional[TabularAggregationType]

Aggregation to apply after the forward method, in order to output a single tensor. Defaults to None.

seq: bool

Whether to process inputs for a sequential model (returns a 3-D tensor) or not (returns a 2-D tensor). Use seq=True to treat the sparse (list) features as sequences (e.g. for sequential recommendation) and seq=False to treat sparse features as multi-hot categorical representations. Defaults to False. A sequential-input sketch appears after this parameter list.

add_continuous_branch: bool

If set, adds a branch to process continuous features. Defaults to True.

continuous_tags: Optional[Union[TagsType, Tuple[Tags]]]

Tags used to filter the continuous features. Defaults to (Tags.CONTINUOUS,).

continuous_projection: Optional[Block]

If set, concatenates all numerical features and projects them using the specified Block. Defaults to None. See the configuration sketch after this parameter list.

add_embedding_branch: bool

If set, adds a branch to process categorical features. Defaults to True.

embedding_options: EmbeddingOptions, optional

An EmbeddingOptions instance, which exposes a number of options for the embedding tables. Defaults to EmbeddingOptions(). See the configuration sketch after this parameter list.

categorical_tags: Optional[Union[TagsType, Tuple[Tags]]]

Tags used to filter the categorical features. Defaults to (Tags.CATEGORICAL,).

sequential_tags: Optional[Union[TagsType, Tuple[Tags]]]

Tags used to filter the sparse (sequence) features. Defaults to (Tags.SEQUENCE,).

split_sparse: Optional[bool]

When True, separates the processing of context (2-D) and sparse (3-D) features. Defaults to False.

seq_aggregator: Block

For a non-sequential model (seq=False), aggregates the sparse features tensor along the sequence axis. Defaults to SequenceAggregator('mean').
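
As referenced in the schema parameter above, a Schema can also be built by hand. The sketch below is illustrative only: the column names, dtypes, and domain sizes are assumptions, not part of this API.

```python
import numpy as np
from merlin.schema import ColumnSchema, Schema, Tags

# Hypothetical columns; the embedding branch reads the categorical cardinality
# from the "domain" property of the column schema.
schema = Schema(
    [
        ColumnSchema("user_age", dtype=np.float32, tags=[Tags.CONTINUOUS]),
        ColumnSchema(
            "item_id",
            dtype=np.int64,
            tags=[Tags.CATEGORICAL, Tags.ITEM_ID],
            properties={"domain": {"min": 0, "max": 10000, "name": "item_id"}},
        ),
    ]
)
```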
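For continuous_projection and embedding_options, a hedged configuration sketch follows; the block sizes and embedding dimensions are arbitrary values chosen for illustration:

```python
import merlin.models.tf as ml

inputs = ml.InputBlock(
    schema,
    # Concatenate all continuous features and project them with a small MLP.
    continuous_projection=ml.MLPBlock([64]),
    embedding_options=ml.EmbeddingOptions(
        # Dimension used when an embedding size is not otherwise specified.
        embedding_dim_default=128,
        # Infer embedding sizes from each categorical feature's cardinality.
        infer_embedding_sizes=True,
    ),
    aggregation="concat",
)
```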
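For seq, a sketch of sequential usage, assuming the schema contains list (Tags.SEQUENCE) features; max_seq_length here is an arbitrary illustrative value:

```python
import merlin.models.tf as ml

# Returns 3-D tensors so that downstream blocks (e.g. a transformer or RNN)
# can operate over the sequence axis.
seq_inputs = ml.InputBlock(schema, seq=True, max_seq_length=20)
```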