transformers4rec.torch.block package
Submodules
transformers4rec.torch.block.base module
- class transformers4rec.torch.block.base.BlockBase(*args, **kwargs)[source]
  Bases: transformers4rec.torch.utils.torch_utils.OutputSizeMixin, torch.nn.modules.module.Module
- class transformers4rec.torch.block.base.Block(module: torch.nn.modules.module.Module, output_size: Union[List[int], torch.Size])[source]
- class transformers4rec.torch.block.base.SequentialBlock(*args, output_size=None)[source]
  Bases: transformers4rec.torch.block.base.BlockBase, torch.nn.modules.container.Sequential
  - property inputs
  - add_module(name: str, module: Optional[torch.nn.modules.module.Module]) → None[source]
  - add_module_and_maybe_build(name: str, module, parent, idx) → torch.nn.modules.module.Module[source]
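For illustration, a minimal sketch (the torch modules, sizes, and shapes below are assumptions chosen for the example, not taken from the library's documentation): plain torch modules can be wrapped in Block with an explicit output size and chained with SequentialBlock:

    import torch
    from transformers4rec.torch.block.base import Block, SequentialBlock

    # Wrap plain torch modules in Block, declaring each output size explicitly
    # (None stands in for the batch dimension).
    encoder = SequentialBlock(
        Block(torch.nn.Linear(64, 32), [None, 32]),
        Block(torch.nn.ReLU(), [None, 32]),
    )

    out = encoder(torch.randn(8, 64))  # expected shape: (8, 32)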
transformers4rec.torch.block.mlp module
- class transformers4rec.torch.block.mlp.MLPBlock(dimensions, activation=<class 'torch.nn.modules.activation.ReLU'>, use_bias: bool = True, dropout=None, normalization=None, filter_features=None)[source]
  Bases: transformers4rec.torch.block.base.BuildableBlock
  - build(input_shape) → transformers4rec.torch.block.base.SequentialBlock[source]
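For illustration, a minimal sketch (the layer dimensions and input shape are assumptions): MLPBlock is a BuildableBlock, so it only stores its configuration until build is called with an input shape, which returns a ready-to-use SequentialBlock:

    import torch
    from transformers4rec.torch import MLPBlock

    mlp = MLPBlock([128, 64])          # configuration only, no weights yet
    block = mlp.build([16, 64])        # SequentialBlock: Linear(64->128) + ReLU, Linear(128->64) + ReLU
    out = block(torch.randn(16, 64))   # expected shape: (16, 64)

In a full model the MLPBlock is usually placed directly inside a SequentialBlock, which builds it automatically from the output size of the preceding module.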
transformers4rec.torch.block.transformer module
- class transformers4rec.torch.block.transformer.GPT2Prepare(transformer, masking)[source]
  Bases: transformers4rec.torch.block.transformer.TransformerPrepare
- class transformers4rec.torch.block.transformer.TransformerBlock(transformer: Union[transformers.modeling_utils.PreTrainedModel, transformers.configuration_utils.PretrainedConfig], masking: Optional[transformers4rec.torch.masking.MaskSequence] = None, prepare_module: Optional[Type[transformers4rec.torch.block.transformer.TransformerPrepare]] = None, output_fn=<function TransformerBlock.<lambda>>)[source]
  Bases: transformers4rec.torch.block.base.BlockBase
  Class to support HF Transformers for session-based and sequential recommendation models.
  - Parameters
    transformer (TransformerBody) – The T4RecConfig or a pre-trained HF object related to the specific transformer architecture.
    masking – Needed when masking is applied on the inputs.
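For illustration, a minimal sketch (the hyperparameter values are assumptions): a TransformerBlock can be created from a T4Rec transformer config; masking is only passed when the input module applies masking:

    import transformers4rec.torch as tr

    # Build an XLNet architecture config with the T4Rec helper, then wrap it.
    config = tr.XLNetConfig.build(d_model=64, n_head=4, n_layer=2, total_seq_length=20)
    transformer_block = tr.TransformerBlock(config, masking=None)

In a complete model this block is typically chained after the input module inside a SequentialBlock, passing masking=inputs.masking.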
  - TRANSFORMER_TO_PREPARE: Dict[Type[transformers.modeling_utils.PreTrainedModel], Type[transformers4rec.torch.block.transformer.TransformerPrepare]] = {<class 'transformers.models.gpt2.modeling_gpt2.GPT2Model'>: <class 'transformers4rec.torch.block.transformer.GPT2Prepare'>}
  - transformer: PreTrainedModel
  - prepare_module: Optional[TransformerPrepare]
  - classmethod from_registry(transformer: str, d_model: int, n_head: int, n_layer: int, total_seq_length: int, masking: Optional[transformers4rec.torch.masking.MaskSequence] = None)[source]
    Load the HF transformer architecture based on its name.
    - Parameters
      transformer (str) – Name of the Transformer to use. Possible values are: ["reformer", "gpt2", "longformer", "electra", "albert", "xlnet"]
      d_model (int) – Size of the hidden states for Transformers
      n_head (int) – Number of attention heads for Transformers
      n_layer (int) – Number of layers for RNNs and Transformers
      total_seq_length (int) – The maximum sequence length
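For illustration, a minimal sketch (the hyperparameter values are assumptions): from_registry resolves the architecture by its registered name, so the call below is roughly equivalent to building an XLNetConfig manually and passing it to TransformerBlock:

    import transformers4rec.torch as tr

    block = tr.TransformerBlock.from_registry(
        transformer="xlnet",
        d_model=64,
        n_head=4,
        n_layer=2,
        total_seq_length=20,
    )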