transformers4rec.torch.model package

Submodules

transformers4rec.torch.model.head module

transformers4rec.torch.model.model module

transformers4rec.torch.model.prediction_task module

class transformers4rec.torch.model.prediction_task.BinaryClassificationPrepareBlock[source]

Bases: transformers4rec.torch.block.base.BuildableBlock

build(input_size) → transformers4rec.torch.block.base.SequentialBlock[source]
class transformers4rec.torch.model.prediction_task.BinaryClassificationTask(target_name: Optional[str] = None, task_name: Optional[str] = None, task_block: Optional[Union[transformers4rec.torch.block.base.BlockBase, transformers4rec.torch.block.base.BuildableBlock]] = None, loss=BCELoss(), metrics=(BinaryPrecision(), BinaryRecall(), BinaryAccuracy()), summary_type='first')[source]

Bases: transformers4rec.torch.model.base.PredictionTask

Returns a PredictionTask for binary classification.

Example usage:

from transformers4rec import torch as tr
import torchmetrics as tm

# Define the input module to process the tabular input features.
input_module = tr.TabularSequenceFeatures.from_schema(
    schema,
    max_sequence_length=max_sequence_length,
    continuous_projection=d_model,
    aggregation="concat",
    masking=None,
)

# Define XLNetConfig class and set default parameters for HF XLNet config.
transformer_config = tr.XLNetConfig.build(
    d_model=d_model, n_head=4, n_layer=2, total_seq_length=max_sequence_length
)

# Define the model block including: inputs, masking, projection and transformer block.
body = tr.SequentialBlock(
    input_module,
    tr.MLPBlock([64]),
    tr.TransformerBlock(
        transformer_config,
        masking=input_module.masking
    )
)

# Define a head with BinaryClassificationTask.
head = tr.Head(
    body,
    tr.BinaryClassificationTask(
        "click",
        summary_type="mean",
        metrics=[
            tm.Precision(task='binary'),
            tm.Recall(task='binary'),
            tm.Accuracy(task='binary'),
            tm.F1Score(task='binary')
        ]
    ),
    inputs=input_module,
)

# Get the end-to-end Model class.
model = tr.Model(head)
Parameters
  • target_name (Optional[str] = None) – Specifies the variable name that represents the positive and negative values.

  • task_name (Optional[str] = None) – Specifies the name of the prediction task. If this parameter is not specified, a name is automatically constructed based on target_name and the Python class name of the model.

  • task_block (Optional[BlockType] = None) – Specifies a module to transform the input tensor before computing predictions.

  • loss (torch.nn.Module) – Specifies the loss function for the task. The default class is torch.nn.BCELoss.

  • metrics (Tuple[torch.nn.Module, ...]) – Specifies the metrics to calculate during training and evaluation. The default metrics are Precision, Recall, and Accuracy.

  • summary_type (str) –

    Summarizes a sequence into a single tensor. Accepted values are:

    • last – Take the hidden state of the last token (as in XLNet)

    • first – Take the hidden state of the first token (as in BERT)

    • mean – Take the mean of all token hidden states

    • cls_index – Supply a tensor of classification token positions (as in GPT/GPT-2)

    • attn – Not implemented now; use multi-head attention
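The summary options above can be sketched in plain Python. This is a minimal illustration only, representing each token hidden state as a list of floats; the actual task operates on torch tensors.

```python
# Minimal sketch of the summary_type semantics, assuming a sequence of
# per-token hidden states represented as lists of floats. The real
# implementation works on torch tensors; this only shows how each
# option collapses a sequence into a single vector.

def summarize(hidden_states, summary_type="first", cls_index=None):
    """Collapse a sequence of token hidden states into one vector."""
    if summary_type == "last":
        return hidden_states[-1]
    if summary_type == "first":
        return hidden_states[0]
    if summary_type == "mean":
        dim = len(hidden_states[0])
        n = len(hidden_states)
        return [sum(h[d] for h in hidden_states) / n for d in range(dim)]
    if summary_type == "cls_index":
        return hidden_states[cls_index]
    raise ValueError(f"unsupported summary_type: {summary_type}")
```

For example, with a 3-token sequence, "first" returns the first token's state while "mean" averages all three.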

DEFAULT_LOSS = BCELoss()
DEFAULT_METRICS = (BinaryPrecision(), BinaryRecall(), BinaryAccuracy())
training: bool
class transformers4rec.torch.model.prediction_task.RegressionPrepareBlock[source]

Bases: transformers4rec.torch.block.base.BuildableBlock

build(input_size) → transformers4rec.torch.block.base.SequentialBlock[source]
class transformers4rec.torch.model.prediction_task.RegressionTask(target_name: Optional[str] = None, task_name: Optional[str] = None, task_block: Optional[Union[transformers4rec.torch.block.base.BlockBase, transformers4rec.torch.block.base.BuildableBlock]] = None, loss=MSELoss(), metrics=(MeanSquaredError(),), summary_type='first')[source]

Bases: transformers4rec.torch.model.base.PredictionTask

DEFAULT_LOSS = MSELoss()
DEFAULT_METRICS = (MeanSquaredError(),)
training: bool
class transformers4rec.torch.model.prediction_task.NextItemPredictionTask(loss: torch.nn.modules.module.Module = NLLLoss(), metrics: Iterable[torchmetrics.metric.Metric] = (NDCGAt(), AvgPrecisionAt(), RecallAt()), task_block: Optional[Union[transformers4rec.torch.block.base.BlockBase, transformers4rec.torch.block.base.BuildableBlock]] = None, task_name: str = 'next-item', weight_tying: bool = False, softmax_temperature: float = 1, padding_idx: int = 0, target_dim: Optional[int] = None)[source]

Bases: transformers4rec.torch.model.base.PredictionTask

This block performs the next-item prediction task for session-based and sequential models. It requires a body containing a masking schema to use for training and target generation. For the supported masking schemes, please refer to: https://nvidia-merlin.github.io/Transformers4Rec/main/model_definition.html#sequence-masking

Parameters
  • loss (torch.nn.Module) – Loss function to use. Defaults to NLLLoss.

  • metrics (Iterable[torchmetrics.Metric]) – List of ranking metrics to use for evaluation.

  • task_block – Module to transform the input tensor before computing predictions.

  • task_name (str, optional) – Name of the prediction task. If not provided, a name is automatically constructed from the target name and the class name.

  • weight_tying (bool) – Whether the item id embedding table weights are shared with the prediction network layer.

  • softmax_temperature (float) – Softmax temperature, used to reduce model overconfidence: predictions are computed as softmax(logits / T). A value of 1.0 reduces to the regular softmax.

  • padding_idx (int) – Pad token id.

  • target_dim (int) – Vocabulary size of the item ids.
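The effect of softmax_temperature can be sketched in plain Python. This is an illustration only; the task applies the same logits / T scaling to its prediction tensor before the softmax.

```python
import math

# Sketch of temperature-scaled softmax: probabilities are
# softmax(logits / T). T = 1.0 is the regular softmax; T > 1.0
# flattens the distribution, reducing the model's confidence in
# its top-ranked item.

def softmax_with_temperature(logits, temperature=1.0):
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

Raising the temperature above 1.0 shrinks the gap between the top item's probability and the rest, which can help when a model is systematically overconfident.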

DEFAULT_METRICS = (NDCGAt(), AvgPrecisionAt(), RecallAt())
build(body, input_size, device=None, inputs=None, task_block=None, pre=None)[source]

Build method; this is called by the Head.

forward(inputs: torch.Tensor, targets=None, training=False, testing=False, **kwargs)[source]
remove_pad_3d(inp_tensor, non_pad_mask)[source]
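The intent of remove_pad_3d can be sketched in plain Python. This is a hypothetical illustration only, assuming each sequence is a list of per-position vectors and non_pad_mask marks the positions to keep; the real method operates on 3-D torch tensors.

```python
# Hypothetical sketch of dropping padded positions, assuming a boolean
# mask where True marks a non-padded position. The kept vectors are
# flattened into a single list, mirroring how non-padded positions are
# gathered for loss computation.

def remove_pad(sequences, non_pad_mask):
    kept = []
    for seq, mask in zip(sequences, non_pad_mask):
        for vec, keep in zip(seq, mask):
            if keep:
                kept.append(vec)
    return kept
```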
calculate_metrics(predictions, targets) → Dict[str, torch.Tensor][source]
compute_metrics()[source]
training: bool
class transformers4rec.torch.model.prediction_task.NextItemPredictionPrepareBlock(target_dim: int, weight_tying: bool = False, item_embedding_table: Optional[torch.nn.modules.module.Module] = None, softmax_temperature: float = 0)[source]

Bases: transformers4rec.torch.block.base.BuildableBlock

build(input_size) → transformers4rec.torch.block.base.Block[source]

Module contents