nvtabular.ops.LambdaOp#
- class nvtabular.ops.LambdaOp(f, dependency=None, label=None, dtype=None, tags=None, properties=None)[source]#
Bases: Operator
LambdaOp allows you to apply row-level functions to an NVTabular workflow.
Example usage 1:
# Define a ColumnSelector that LambdaOp will apply to,
# then define a custom function, e.g. extract the first 5 characters from a string
lambda_feature = ColumnSelector(["col1"])
new_lambda_feature = lambda_feature >> LambdaOp(lambda col: col.str.slice(0, 5))
workflow = nvtabular.Workflow(new_lambda_feature + 'label')
Example usage 2:
# Define a custom function, e.g. calculate the probability of different events.
# Rename each new feature column.
lambda_features = ColumnSelector(['event1', 'event2', 'event3'])  # columns f is applied to

def cond_prob(col, gdf):
    col = col.astype(np.float32)
    col = col / gdf['total_events']
    return col

new_lambda_features = lambda_features >> LambdaOp(cond_prob, dependency=["total_events"]) >> Rename(postfix="_cond")
workflow = nvtabular.Workflow(new_lambda_features + 'label')
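Outside of an NVTabular workflow, the row-level logic of the two examples above can be run directly against pandas (a minimal sketch; the sample data and values here are illustrative, not from the NVTabular docs):

```python
import numpy as np
import pandas as pd

# Illustrative data; column names mirror the examples above.
df = pd.DataFrame({
    "col1": ["abcdefgh", "nvtabular"],
    "event1": [1, 2],
    "total_events": [4, 8],
})

# Example 1's lambda: keep the first 5 characters of each string.
sliced = df["col1"].str.slice(0, 5)

# Example 2's function: per-row conditional probability of an event,
# dividing by the dependency column 'total_events'.
def cond_prob(col, gdf):
    col = col.astype(np.float32)
    col = col / gdf["total_events"]
    return col

event1_cond = cond_prob(df["event1"], df)
print(sliced.tolist())       # ['abcde', 'nvtab']
print(event1_cond.tolist())  # [0.25, 0.25]
```

Inside a workflow, LambdaOp applies the same function to each selected column as the Dataset is transformed.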
- Parameters:
f (callable) – Defines a function that takes a Series and an optional DataFrame as input, and returns a new Series as the output.
dependency (list, default None) – A list of additional column names that f depends on; these columns are made available to f through its DataFrame argument (e.g. dependency=["total_events"] in Example usage 2).
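The callable f is invoked per selected column, either with the column Series alone or, when dependency columns are declared, with the Series plus the DataFrame that carries those dependencies. A minimal sketch of both call forms, using plain pandas stand-ins (all names here are illustrative):

```python
import pandas as pd

# One-argument form: f(col) receives only the selected column.
def upper(col):
    return col.str.upper()

# Two-argument form: f(col, df) also receives the dataframe, which is how
# declared dependency columns ("weight" here, an illustrative name) are reached.
def weighted(col, df):
    return col * df["weight"]

df = pd.DataFrame({"name": ["ab", "cd"], "score": [2, 3], "weight": [10, 100]})
print(upper(df["name"]).tolist())          # ['AB', 'CD']
print(weighted(df["score"], df).tolist())  # [20, 300]
```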
Methods
__init__(f[, dependency, label, dtype, ...])
column_mapping(col_selector)
compute_column_schema(col_name, input_schema)
compute_input_schema(root_schema, ...) – Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.
compute_output_schema(input_schema, col_selector) – Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce.
compute_selector(input_schema, selector[, ...]) – Provides a hook method for sub-classes to override to implement custom column selection logic.
create_node(selector)
export(path, input_schema, output_schema, ...) – Export the class object as a config and all related files to the user-defined path.
inference_initialize(col_selector, model_config) – Configures this operator for use in inference.
load_artifacts([artifact_path]) – Load artifacts from disk required for operator function.
output_column_names(col_selector) – Given a set of column names, returns the names of the transformed columns this operator will produce.
save_artifacts([artifact_path]) – Save artifacts required to reload operator state from disk.
transform(col_selector, df) – Transform the dataframe by applying this operator to the set of input columns.
validate_schemas(parents_schema, ...[, ...]) – Provides a hook method that sub-classes can override to implement schema validation logic.
Attributes
export_name
Provides a clear common English identifier for this operator.
is_subgraph
supported_formats
supports
Returns what kind of data representation this operator supports
- transform(col_selector: ColumnSelector, df: DataFrame) → DataFrame [source]#
Transform the dataframe by applying this operator to the set of input columns
- property dependencies#
- property label#
- property dynamic_dtypes#
- property output_dtype#
- property output_tags#
- property output_properties#