nvtabular.ops.StatOperator
class nvtabular.ops.StatOperator
Bases: nvtabular.ops.operator.Operator
Base class for statistical operator classes. This adds 'fit' and 'fit_finalize' methods on top of the Operator class.
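As a quick illustration, a custom statistical operator might compute per-column means during fit and apply them in transform. The sketch below is hypothetical (the ColumnMean class is not part of NVTabular), assuming only the fit/fit_finalize/transform contract described on this page:

    import dask.dataframe as dd

    from merlin.dag import ColumnSelector
    from nvtabular.ops import StatOperator


    class ColumnMean(StatOperator):
        """Illustrative operator that centers each selected column on its mean."""

        def __init__(self):
            super().__init__()
            self.means = {}

        def fit(self, col_selector: ColumnSelector, ddf: dd.DataFrame):
            # Return a lazy dask collection; the workflow computes it and
            # passes the concrete result to fit_finalize().
            return ddf[col_selector.names].mean()

        def fit_finalize(self, dask_stats):
            # dask_stats is the computed per-column mean Series.
            self.means = {col: float(dask_stats[col]) for col in dask_stats.index}

        def clear(self):
            # Zero and reinitialize all relevant statistical properties.
            self.means = {}

        def transform(self, col_selector: ColumnSelector, df):
            for col in col_selector.names:
                df[col] = df[col] - self.means[col]
            return df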
__init__()
Initialize self. See help(type(self)) for accurate signature.
Methods
__init__()
    Initialize self.
clear()
    Zero and reinitialize all relevant statistical properties.
column_mapping(col_selector)
    Compute which output columns depend on which input columns.
compute_column_schema(col_name, input_schema)
compute_input_schema(root_schema, …)
    Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.
compute_output_schema(input_schema, col_selector)
    Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce.
compute_selector(input_schema, selector[, …])
    Provides a hook method for sub-classes to override to implement custom column selection logic.
create_node(selector)
fit(col_selector, ddf)
    Calculate statistics for this operator, and return a dask future to these statistics, which will be computed by the workflow.
fit_finalize(dask_stats)
    Finalize statistics calculation; the workflow calls this method with the computed statistics from the fit() call.
inference_initialize(col_selector, model_config)
    Configures this operator for use in inference.
load_artifacts([artifact_path])
    Load artifacts from disk required for operator function.
output_column_names(col_selector)
    Given a set of column names, returns the names of the transformed columns this operator will produce.
save_artifacts([artifact_path])
    Save artifacts required to reload operator state from disk.
set_storage_path(new_path[, copy])
    Certain stat operators need external storage; for instance, Categorify writes out parquet files containing the categorical mapping.
transform(col_selector, df)
    Transform the dataframe by applying this operator to the set of input columns.
validate_schemas(parents_schema, …[, …])
    Provides a hook method that sub-classes can override to implement schema validation logic.
Attributes
dependencies
    Defines an optional list of column dependencies for this operator.
dynamic_dtypes
is_subgraph
label
output_dtype
output_properties
output_tags
supported_formats
supports
    Returns what kind of data representation this operator supports.
fit(col_selector: merlin.dag.selector.ColumnSelector, ddf: dask.dataframe.core.DataFrame) → Any
Calculate statistics for this operator, and return a dask future to these statistics, which will be computed by the workflow.
fit_finalize(dask_stats)
Finalize statistics calculation; the workflow calls this method with the computed statistics from the fit() call.
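A minimal sketch of how the workflow drives this handshake, reusing the hypothetical ColumnMean operator from the example above (the column names "x" and "y" are arbitrary):

    import dask.dataframe as dd
    import pandas as pd

    import nvtabular as nvt

    ddf = dd.from_pandas(
        pd.DataFrame({"x": [1.0, 2.0, 3.0], "y": [4.0, 5.0, 6.0]}),
        npartitions=1,
    )

    # Workflow.fit() calls ColumnMean.fit() to build the lazy statistics,
    # computes them with dask, then hands the result to fit_finalize().
    workflow = nvt.Workflow(["x", "y"] >> ColumnMean())
    workflow.fit(nvt.Dataset(ddf))

    # transform() then applies the finalized statistics to each partition.
    out = workflow.transform(nvt.Dataset(ddf)).compute()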
set_storage_path(new_path, copy=False)
Certain stat operators need external storage; for instance, Categorify writes out parquet files containing the categorical mapping. When we save the operator, we also want to save these files as part of the bundle. Implementing this method lets stat operators bundle their dependent files into the new path that we're writing out (note that this could happen after the operator is created).
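As a sketch, an operator that persisted its statistics to a parquet file during fit_finalize might implement this hook as follows. The class, its stats_path attribute, and the reading of copy as "copy instead of move" are all hypothetical:

    import shutil
    from pathlib import Path

    from nvtabular.ops import StatOperator


    class ParquetBackedStatOp(StatOperator):
        # fit()/fit_finalize() omitted; assume fit_finalize() wrote the
        # computed statistics to the file at self.stats_path.
        def __init__(self):
            super().__init__()
            self.stats_path = None

        def set_storage_path(self, new_path, copy=False):
            # Bundle the dependent parquet file into the new storage location
            # so it travels with the saved workflow.
            target = str(Path(new_path) / Path(self.stats_path).name)
            if copy:
                shutil.copy2(self.stats_path, target)
            else:
                shutil.move(self.stats_path, target)
            self.stats_path = target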