transformers4rec.torch.block package

Submodules

transformers4rec.torch.block.base module

class transformers4rec.torch.block.base.BlockBase(*args, **kwargs)[source]

Bases: transformers4rec.torch.utils.torch_utils.OutputSizeMixin, torch.nn.modules.module.Module

to_model(prediction_task_or_head, inputs=None, **kwargs)[source]
as_tabular(name=None)[source]
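
Example — a minimal sketch of to_model(), assuming `body` is an already-built block (e.g. inputs + MLP + transformer) and using NextItemPredictionTask purely for illustration:

    import transformers4rec.torch as tr

    # Assumed setup: `body` is a built block whose output representation
    # feeds the prediction task.
    model = body.to_model(tr.NextItemPredictionTask(weight_tying=True))
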
class transformers4rec.torch.block.base.Block(module: torch.nn.modules.module.Module, output_size: Union[List[int], torch.Size])[source]

Bases: transformers4rec.torch.block.base.BlockBase

forward(inputs, **kwargs)[source]
forward_output_size(input_size)[source]
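
Example — wrapping a plain torch module in a Block. A sketch that assumes the output size is given as [batch, features] with -1 standing in for the batch dimension:

    import torch
    from transformers4rec.torch.block.base import Block

    # torch.nn modules do not expose an output size, so Block takes it
    # explicitly alongside the wrapped module.
    projection = Block(torch.nn.Linear(64, 32), output_size=[-1, 32])
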
class transformers4rec.torch.block.base.SequentialBlock(*args, output_size=None)[source]

Bases: transformers4rec.torch.block.base.BlockBase, torch.nn.modules.container.Sequential

property inputs
add_module(name: str, module: Optional[torch.nn.modules.module.Module]) → None[source]
add_module_and_maybe_build(name: str, module, parent, idx) → torch.nn.modules.module.Module[source]
forward(input, training=False, testing=False, **kwargs)[source]
build(input_size, schema=None, **kwargs)[source]
as_tabular(name=None)[source]
forward_output_size(input_size)[source]
static get_children_by_class_name(parent, *class_name)[source]
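
Example — chaining an input module with a BuildableBlock. A sketch assuming `inputs` was created with tr.TabularSequenceFeatures.from_schema:

    import transformers4rec.torch as tr

    # Assumed setup: `inputs` is a TabularSequenceFeatures input module.
    body = tr.SequentialBlock(
        inputs,             # produces the sequence representation
        tr.MLPBlock([64]),  # BuildableBlock, built from the previous output size
    )
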
transformers4rec.torch.block.base.build_blocks(*modules)[source]
class transformers4rec.torch.block.base.BuildableBlock[source]

Bases: abc.ABC

abstract build(input_size) → transformers4rec.torch.block.base.BlockBase[source]
to_module(shape_or_module)[source]
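
Example — a hypothetical BuildableBlock subclass (names are illustrative, not part of the library) that defers layer creation until the input size is known:

    import torch
    from transformers4rec.torch.block.base import Block, BuildableBlock

    class ProjectionBlock(BuildableBlock):
        """Hypothetical block: projects the last dimension to `dim`."""

        def __init__(self, dim: int):
            super().__init__()
            self.dim = dim

        def build(self, input_size) -> Block:
            # input_size[-1] is the incoming feature dimension
            return Block(
                torch.nn.Linear(input_size[-1], self.dim),
                output_size=[*input_size[:-1], self.dim],
            )
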
transformers4rec.torch.block.base.right_shift_block(self, other)[source]
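
right_shift_block backs the `>>` composition operator; assuming that wiring, `a >> b` is equivalent to SequentialBlock(a, b). A sketch reusing the assumed `inputs` module from the SequentialBlock example above:

    import transformers4rec.torch as tr

    # Equivalent to tr.SequentialBlock(inputs, tr.MLPBlock([64])),
    # assuming `inputs` is an input module as above.
    body = inputs >> tr.MLPBlock([64])
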

transformers4rec.torch.block.mlp module

class transformers4rec.torch.block.mlp.MLPBlock(dimensions, activation=<class 'torch.nn.modules.activation.ReLU'>, use_bias: bool = True, dropout=None, normalization=None, filter_features=None)[source]

Bases: transformers4rec.torch.block.base.BuildableBlock

build(input_shape) → transformers4rec.torch.block.base.SequentialBlock[source]
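
Example — a sketch of building an MLPBlock by hand; inside a SequentialBlock this build step happens automatically. The [-1, 100] input shape ([batch, features]) is an assumption of this sketch:

    import transformers4rec.torch as tr

    mlp = tr.MLPBlock([128, 64], dropout=0.3)  # no weights created yet
    dense = mlp.build([-1, 100])               # returns a SequentialBlock of DenseBlocks
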
class transformers4rec.torch.block.mlp.DenseBlock(input_shape, in_features: int, out_features: int, activation=<class 'torch.nn.modules.activation.ReLU'>, use_bias: bool = True, dropout: Optional[float] = None, normalization=None)[source]

Bases: transformers4rec.torch.block.base.SequentialBlock

forward_output_size(input_size)[source]

transformers4rec.torch.block.transformer module

class transformers4rec.torch.block.transformer.TransformerPrepare(transformer, masking)[source]

Bases: torch.nn.modules.module.Module

forward(inputs_embeds) → Dict[str, Any][source]
training: bool
class transformers4rec.torch.block.transformer.GPT2Prepare(transformer, masking)[source]

Bases: transformers4rec.torch.block.transformer.TransformerPrepare

forward(inputs_embeds) → Dict[str, Any][source]
training: bool
class transformers4rec.torch.block.transformer.TransformerBlock(transformer: Union[transformers.modeling_utils.PreTrainedModel, transformers.configuration_utils.PretrainedConfig], masking: Optional[transformers4rec.torch.masking.MaskSequence] = None, prepare_module: Optional[Type[transformers4rec.torch.block.transformer.TransformerPrepare]] = None, output_fn=<function TransformerBlock.<lambda>>)[source]

Bases: transformers4rec.torch.block.base.BlockBase

Class to support HF Transformers for session-based and sequential-based recommendation models.

Parameters
  • transformer (TransformerBody) – The T4RecConfig or a pre-trained HF object related to a specific transformer architecture.

  • masking – Needed when masking is applied to the inputs.

TRANSFORMER_TO_PREPARE: Dict[Type[transformers.modeling_utils.PreTrainedModel], Type[transformers4rec.torch.block.transformer.TransformerPrepare]] = {<class 'transformers.models.gpt2.modeling_gpt2.GPT2Model'>: <class 'transformers4rec.torch.block.transformer.GPT2Prepare'>}
transformer: PreTrainedModel
prepare_module: Optional[TransformerPrepare]
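
Example — constructing a TransformerBlock from a T4RecConfig; a sketch assuming `inputs` is a TabularSequenceFeatures module configured with masking:

    import transformers4rec.torch as tr

    config = tr.XLNetConfig.build(d_model=64, n_head=4, n_layer=2, total_seq_length=20)
    transformer_block = tr.TransformerBlock(config, masking=inputs.masking)
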
classmethod from_registry(transformer: str, d_model: int, n_head: int, n_layer: int, total_seq_length: int, masking: Optional[transformers4rec.torch.masking.MaskSequence] = None)[source]

Load the HF transformer architecture based on its name.

Parameters
  • transformer (str) – Name of the Transformer to use. Possible values are: [“reformer”, “gpt2”, “longformer”, “electra”, “albert”, “xlnet”]

  • d_model (int) – Size of the hidden states for Transformers

  • n_head (int) – Number of attention heads for Transformers

  • n_layer (int) – Number of layers for RNNs and Transformers

  • total_seq_length (int) – The maximum sequence length
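
Example — the same block built by name; a sketch reusing the assumed `inputs` module from above:

    import transformers4rec.torch as tr

    block = tr.TransformerBlock.from_registry(
        "xlnet",
        d_model=64,
        n_head=4,
        n_layer=2,
        total_seq_length=20,
        masking=inputs.masking,
    )
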

training: bool
forward(inputs_embeds, **kwargs)[source]

Run the forward pass of the HF transformer model on the input embeddings.

forward_output_size(input_size)[source]

Module contents