merlin.systems.dag.ops.workflow.TransformWorkflow#

class merlin.systems.dag.ops.workflow.TransformWorkflow(workflow=None, sparse_max: Optional[dict] = None, max_batch_size: Optional[int] = None, label_columns: Optional[List[str]] = None, cats: Optional[List[str]] = None, conts: Optional[List[str]] = None)[source]#

Bases: InferenceOperator

This operator takes a workflow and turns it into an ensemble operator, so that feature engineering can be executed as part of an ensemble running on Triton Inference Server.
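To illustrate the operator's role, here is a minimal, library-free sketch of the contract: a wrapper that holds a fitted "workflow" (here just a callable over a dict of columns) and applies it at inference time. The class, the `toy_workflow` function, and the column names are hypothetical stand-ins for illustration only, not the Merlin implementation.

```python
import math


class ToyTransformWorkflow:
    """Toy stand-in for the TransformWorkflow contract (NOT the Merlin code)."""

    def __init__(self, workflow, max_batch_size=None):
        # workflow: any callable mapping a dict of columns to a dict of columns
        self.workflow = workflow
        self.max_batch_size = max_batch_size

    def transform(self, col_selector, transformable):
        # col_selector is unused, mirroring the real operator's signature
        return self.workflow(transformable)


# A toy "workflow" that log-scales a continuous column (hypothetical names).
def toy_workflow(columns):
    return {"price_log": [math.log1p(v) for v in columns["price"]]}


op = ToyTransformWorkflow(toy_workflow, max_batch_size=64)
out = op.transform(None, {"price": [0.0, 1.0]})
print(out)  # {'price_log': [0.0, 0.6931471805599453]}
```

In the real ensemble, the wrapped object is a fitted `nvtabular.Workflow` and the inputs/outputs are Merlin `Transformable` column collections rather than plain dicts.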

__init__(workflow=None, sparse_max: Optional[dict] = None, max_batch_size: Optional[int] = None, label_columns: Optional[List[str]] = None, cats: Optional[List[str]] = None, conts: Optional[List[str]] = None)[source]#

Creates a Transform Workflow operator for a target workflow.

Parameters:
  • workflow (nvtabular.Workflow) – The fitted NVTabular workflow used to transform data in the ensemble.

  • sparse_max (dict, optional) – Dictionary mapping list-column names to their maximum sparse length (the maximum number of values per row), by default None

  • max_batch_size (int, optional) – Maximum batch size, by default None

  • label_columns (List[str], optional) – List of strings identifying the label columns, by default None

  • cats (List[str], optional) – List of strings identifying categorical columns, by default None

  • conts (List[str], optional) – List of strings identifying continuous columns, by default None
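To show how a sparse_max entry is typically interpreted (capping a list column at a fixed length), here is a hedged stdlib sketch; the column name, the zero padding value, and the `pad_or_truncate` helper are assumptions for illustration, not Merlin behavior verbatim.

```python
# Sketch of the sparse_max idea: each key is a list-column name and each
# value is the maximum number of entries kept for that column per row.
sparse_max = {"item_id_history": 4}


def pad_or_truncate(values, max_len, pad_value=0):
    # Truncate to max_len, then right-pad with pad_value up to max_len.
    # (The real serving-side sparse representation is handled by Merlin.)
    clipped = values[:max_len]
    return clipped + [pad_value] * (max_len - len(clipped))


row = [101, 102]
print(pad_or_truncate(row, sparse_max["item_id_history"]))  # [101, 102, 0, 0]
```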

Methods

__init__([workflow, sparse_max, ...])

Creates a Transform Workflow operator for a target workflow.

column_mapping(col_selector)

Compute which output columns depend on which input columns

compute_column_schema(col_name, input_schema)

compute_input_schema(root_schema, ...)

Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use

compute_output_schema(input_schema, col_selector)

Returns the output schema of the operator.

compute_selector(input_schema, selector[, ...])

Provides a hook method for sub-classes to override to implement custom column selection logic.

create_node(selector)

Create a graph node for this operator from the given column selector.

export(path, input_schema, output_schema[, ...])

Export the class object as a config and all related files to the user defined path.

from_model_registry(registry, **kwargs)

Loads the InferenceOperator from the provided ModelRegistry.

from_path(path, **kwargs)

Loads the InferenceOperator from the path where it was exported after training.

load_artifacts(artifact_path)

Load artifacts from disk required for operator function.

output_column_names(col_selector)

Given a set of column names, returns the names of the transformed columns this operator will produce

save_artifacts(artifact_path)

Save artifacts required to reload operator state from disk

set_nvt_model_name(nvt_model_name)

transform(col_selector, transformable)

Run nvtabular workflow transformations.

validate_schemas(parents_schema, ...[, ...])

Provides a hook method that sub-classes can override to implement schema validation logic.

Attributes

dependencies

Defines an optional list of column dependencies for this operator.

dynamic_dtypes

export_name

Provides a clear common english identifier for this operator.

is_subgraph

label

nvt_model_name

output_dtype

output_properties

output_tags

scalar_shape

supported_formats

supports

Returns what kind of data representation this operator supports

property nvt_model_name#
set_nvt_model_name(nvt_model_name)[source]#
compute_output_schema(input_schema: Schema, col_selector: ColumnSelector, prev_output_schema: Optional[Schema] = None) Schema[source]#

Returns the output schema of the operator.

transform(col_selector: ColumnSelector, transformable: Transformable) Transformable[source]#

Run nvtabular workflow transformations.

Parameters:
  • col_selector (ColumnSelector) – Unused ColumnSelector input

  • transformable (Transformable) – Input features to model

Returns:

The input features transformed by the NVTabular workflow

Return type:

Transformable