nvtabular.ops.Normalize
-
class nvtabular.ops.Normalize(out_dtype=None)[source]
Bases: nvtabular.ops.stat_operator.StatOperator
Standardizing features to zero mean and unit standard deviation is a common technique for comparing measurements that have different units. This operator can be added to a workflow to standardize continuous features; it performs normalization using the mean/std method, transforming each value x to (x - mean) / std with per-column statistics computed during fit.
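Conceptually, the transform applies the per-column expression above. The snippet below is a minimal pandas sketch of the math only, not the library's internal code (the exact ddof convention used for the standard deviation is an assumption):

import pandas as pd

s = pd.Series([10.0, 20.0, 30.0])
# Standardize: zero mean, unit standard deviation.
normalized = (s - s.mean()) / s.std()
print(normalized.mean(), normalized.std())  # ~0.0 and ~1.0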
Example usage:

# Use Normalize to define a NVTabular workflow
cont_features = CONTINUOUS_COLUMNS >> ops.Normalize()
processor = nvtabular.Workflow(cont_features)
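For a self-contained, end-to-end sketch, the snippet below fits the workflow on a small dataframe and materializes the normalized output; the dataframe and column names are illustrative assumptions, not part of the API:

import pandas as pd
import nvtabular
from nvtabular import ops

# Illustrative input: two continuous columns with different scales.
df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "weight": [1.5, 2.5, 3.5]})
CONTINUOUS_COLUMNS = ["price", "weight"]

cont_features = CONTINUOUS_COLUMNS >> ops.Normalize()
processor = nvtabular.Workflow(cont_features)

dataset = nvtabular.Dataset(df)
processor.fit(dataset)  # computes per-column mean/std
out = processor.transform(dataset).to_ddf().compute()
print(out)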
- Parameters
out_dtype (str) – Specifies the data type for the output columns. If not set, the default is numpy.float64.
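For instance, to emit float32 columns instead of the float64 default (a minimal sketch, with CONTINUOUS_COLUMNS as in the example above):

cont_features = CONTINUOUS_COLUMNS >> ops.Normalize(out_dtype="float32")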
Methods
__init__([out_dtype])
clear()
column_mapping(col_selector) – Compute which output columns depend on which input columns
compute_column_schema(col_name, input_schema)
compute_input_schema(root_schema, …) – Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use
compute_output_schema(input_schema, col_selector) – Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce
compute_selector(input_schema, selector[, …]) – Provides a hook method for sub-classes to override to implement custom column selection logic.
create_node(selector)
fit(col_selector, ddf) – Calculate statistics for this operator and return a dask future to these statistics, which will be computed by the workflow.
fit_finalize(dask_stats) – Finalize the statistics calculation; the workflow calls this function with the computed statistics from the fit object
inference_initialize(col_selector, model_config) – Configures this operator for use in inference.
output_column_names(col_selector) – Given a set of column names, returns the names of the transformed columns this operator will produce
set_storage_path(new_path[, copy]) – Certain stat operators need external storage; for instance, Categorify writes out parquet files containing the categorical mapping.
transform(col_selector, df) – Transform the dataframe by applying this operator to the set of input columns
validate_schemas(parents_schema, …[, …]) – Provides a hook method that sub-classes can override to implement schema validation logic.
Attributes
dependencies – Defines an optional list of column dependencies for this operator.
dynamic_dtypes
label
output_properties
-
fit(col_selector: merlin.dag.selector.ColumnSelector, ddf: dask.dataframe.core.DataFrame)[source]
Calculate statistics for this operator and return a dask future to these statistics, which will be computed by the workflow.
-
fit_finalize(dask_stats)[source]
Finalize the statistics calculation; the workflow calls this function with the computed statistics from the fit object.
-
transform(col_selector: merlin.dag.selector.ColumnSelector, df: pandas.core.frame.DataFrame) → pandas.core.frame.DataFrame[source]
Transform the dataframe by applying this operator to the set of input columns
- Parameters
col_selector (ColumnSelector) – The columns to apply this operator to
df (DataFrame) – A pandas or cudf dataframe that this operator will work on
- Returns
Returns a transformed dataframe for this operator
- Return type
DataFrame
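Because Normalize is a StatOperator, transform depends on statistics gathered during fit, and the workflow normally drives both steps. A direct call after a workflow-driven fit might look roughly like this sketch (the operator instance, columns, and dataframe follow the example above; ColumnSelector is assumed importable from merlin.dag):

from merlin.dag import ColumnSelector

norm_op = ops.Normalize()
workflow = nvtabular.Workflow(CONTINUOUS_COLUMNS >> norm_op)
workflow.fit(nvtabular.Dataset(df))  # populates the operator's mean/std stats
# Apply the fitted operator directly to a dataframe partition:
out_df = norm_op.transform(ColumnSelector(CONTINUOUS_COLUMNS), df)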
-
property supports
-
property output_dtype