transformers4rec.torch.tabular package

Submodules

transformers4rec.torch.tabular.aggregation module

class transformers4rec.torch.tabular.aggregation.ConcatFeatures(*args, **kwargs)[source]

Bases: transformers4rec.torch.tabular.base.TabularAggregation

Aggregation by concatenating all values in TabularData; all non-sequential values are converted to sequences first.

The output of this concatenation will have 3 dimensions.

forward(inputs: Dict[str, torch.Tensor]) → torch.Tensor[source]
forward_output_size(input_size)[source]
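The concatenation semantics can be sketched with plain torch (a minimal sketch, not the library class; tensor names and sizes are illustrative):

```python
import torch

batch, seq_len = 2, 5
inputs = {
    "item_emb": torch.randn(batch, seq_len, 8),  # sequential feature
    "user_emb": torch.randn(batch, 4),           # non-sequential feature
}
# Non-sequential (batch, dim) features are broadcast to the sequence length,
# then all features are concatenated on the last axis -> a 3-D output.
expanded = {
    k: v if v.dim() == 3 else v.unsqueeze(1).expand(batch, seq_len, -1)
    for k, v in inputs.items()
}
out = torch.cat([expanded[k] for k in sorted(expanded)], dim=-1)
print(out.shape)  # torch.Size([2, 5, 12])
```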
class transformers4rec.torch.tabular.aggregation.StackFeatures(axis: int = -1)[source]

Bases: transformers4rec.torch.tabular.base.TabularAggregation

Aggregation by stacking all values of the input dictionary along the given dimension.

Parameters

axis (int, default=-1) – Axis to use for the stacking operation.

forward(inputs: Dict[str, torch.Tensor]) → torch.Tensor[source]
forward_output_size(input_size)[source]
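The stacking semantics can be sketched with plain torch (a minimal sketch, not the library class; all features must share the same shape so they can be stacked):

```python
import torch

inputs = {
    "a": torch.randn(2, 8),
    "b": torch.randn(2, 8),
    "c": torch.randn(2, 8),
}
# Stack along a new axis (default -1), so three (2, 8) tensors
# become one (2, 8, 3) tensor.
out = torch.stack([inputs[k] for k in sorted(inputs)], dim=-1)
print(out.shape)  # torch.Size([2, 8, 3])
```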
class transformers4rec.torch.tabular.aggregation.ElementwiseFeatureAggregation(*args, **kwargs)[source]

Bases: transformers4rec.torch.tabular.base.TabularAggregation

Base class for aggregation methods that aggregate features element-wise. It implements two check methods to ensure inputs have the correct shape.

class transformers4rec.torch.tabular.aggregation.ElementwiseSum[source]

Bases: transformers4rec.torch.tabular.aggregation.ElementwiseFeatureAggregation

Aggregation by first stacking all values in TabularData along the first dimension and then summing over it.

forward(inputs: Dict[str, torch.Tensor]) → torch.Tensor[source]
forward_output_size(input_size)[source]
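The element-wise sum can be sketched with plain torch (a minimal sketch, not the library class; values are chosen so the result is easy to check):

```python
import torch

inputs = {
    "a": torch.ones(2, 8),
    "b": 2 * torch.ones(2, 8),
}
# Stack along a new first dimension -> (num_features, 2, 8),
# then sum over that dimension -> (2, 8).
stacked = torch.stack(list(inputs.values()), dim=0)
out = stacked.sum(dim=0)  # every element is 1 + 2 = 3
```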
class transformers4rec.torch.tabular.aggregation.ElementwiseSumItemMulti(schema: Optional[merlin_standard_lib.schema.schema.Schema] = None)[source]

Bases: transformers4rec.torch.tabular.aggregation.ElementwiseFeatureAggregation

Aggregation by applying the ElementwiseSum aggregation to all features except the item-id, and then multiplying the result with the item-id embedding.

Parameters

schema (Schema, optional) –

forward(inputs: Dict[str, torch.Tensor]) → torch.Tensor[source]
forward_output_size(input_size)[source]
REQUIRES_SCHEMA = True
training: bool
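The combination of sum and item-id multiplication can be sketched with plain torch (a minimal sketch, not the library class; the "item_id" key and the feature names are illustrative — in the library the item-id feature is located via the schema):

```python
import torch

inputs = {
    "item_id": torch.full((2, 8), 2.0),  # item-id embedding
    "cat_a": torch.ones(2, 8),
    "cat_b": torch.ones(2, 8),
}
# Element-wise sum of everything except the item-id...
others = torch.stack(
    [v for k, v in inputs.items() if k != "item_id"], dim=0
).sum(dim=0)
# ...then element-wise multiplication with the item-id embedding.
out = others * inputs["item_id"]  # (1 + 1) * 2 = 4 everywhere
```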

transformers4rec.torch.tabular.tabular module

transformers4rec.torch.tabular.transformations module

class transformers4rec.torch.tabular.transformations.StochasticSwapNoise(schema=None, pad_token=0, replacement_prob=0.1)[source]

Bases: transformers4rec.torch.tabular.base.TabularTransformation

Applies stochastic replacement of sequence features. It can be applied as a pre-transform, e.g. TransformerBlock(pre="stochastic-swap-noise").

forward(inputs: Union[torch.Tensor, Dict[str, torch.Tensor]], input_mask: Optional[torch.Tensor] = None, **kwargs) → Union[torch.Tensor, Dict[str, torch.Tensor]][source]
forward_output_size(input_size)[source]
augment(input_tensor: torch.Tensor, mask: Optional[torch.Tensor] = None) → torch.Tensor[source]
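Stochastic swap noise on a (batch, seq_len) id tensor can be sketched with plain torch (a minimal sketch of the idea, not the library implementation): each non-padded position is, with probability replacement_prob, replaced by a value drawn from a random position elsewhere in the input.

```python
import torch

torch.manual_seed(0)
pad_token, replacement_prob = 0, 0.1
ids = torch.randint(1, 100, (4, 10))   # item-id sequences
mask = ids != pad_token                # do not swap padded positions
# Bernoulli draw selects which positions to replace.
swap = torch.bernoulli(torch.full(ids.shape, replacement_prob)).bool() & mask
# Replacement values are sampled uniformly from the flattened input.
repl_idx = torch.randint(0, ids.numel(), ids.shape)
ids_noisy = torch.where(swap, ids.flatten()[repl_idx], ids)
```

Non-swapped positions are left untouched, so the output keeps the input's shape and most of its content.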
class transformers4rec.torch.tabular.transformations.TabularLayerNorm(features_dim: Optional[Dict[str, int]] = None)[source]

Bases: transformers4rec.torch.tabular.base.TabularTransformation

Applies layer normalization to each input feature individually, before the aggregation.

classmethod from_feature_config(feature_config: Dict[str, transformers4rec.torch.features.embedding.FeatureConfig])[source]
forward(inputs: Dict[str, torch.Tensor], **kwargs) → Dict[str, torch.Tensor][source]
forward_output_size(input_size)[source]
build(input_size, **kwargs)[source]
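Per-feature layer normalization can be sketched with plain torch (a minimal sketch, not the library class; feature names and dimensions are illustrative): one LayerNorm per feature, keyed by feature name, applied to the last dimension of each input.

```python
import torch

features_dim = {"item_emb": 8, "user_emb": 4}
# One LayerNorm module per feature, normalizing over the last dimension.
norms = torch.nn.ModuleDict(
    {k: torch.nn.LayerNorm(d) for k, d in features_dim.items()}
)
inputs = {
    "item_emb": torch.randn(2, 5, 8),
    "user_emb": torch.randn(2, 4),
}
# Each feature is normalized individually; shapes are preserved.
out = {k: norms[k](v) for k, v in inputs.items()}
```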
class transformers4rec.torch.tabular.transformations.TabularDropout(dropout_rate=0.0)[source]

Bases: transformers4rec.torch.tabular.base.TabularTransformation

Applies a dropout transformation to the input features.

forward(inputs: Union[torch.Tensor, Dict[str, torch.Tensor]], **kwargs) → Union[torch.Tensor, Dict[str, torch.Tensor]][source]
forward_output_size(input_size)[source]
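Applying dropout across a feature dictionary can be sketched with plain torch (a minimal sketch, not the library class): the same dropout module is applied to every tensor in the dict, and in eval mode dropout is the identity.

```python
import torch

drop = torch.nn.Dropout(p=0.3)
drop.eval()  # in eval mode dropout passes inputs through unchanged
inputs = {"a": torch.ones(2, 4)}
out = {k: drop(v) for k, v in inputs.items()}
# here out["a"] equals inputs["a"] because the module is in eval mode
```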

Module contents