nvtabular.ops.ReduceDtypeSize

class nvtabular.ops.ReduceDtypeSize(float_dtype=<class 'numpy.float32'>)[source]

Bases: StatOperator

ReduceDtypeSize changes the dtypes of numeric columns. For integer columns it chooses a smaller dtype such that the column's minimum and maximum values still fit. For float columns it casts to float_dtype (numpy.float32 by default).
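
For example, a minimal usage sketch (the input path and column names below are placeholders, and the resulting dtypes depend on the data seen during fit):

    import nvtabular as nvt

    # Placeholder input file and column names; any nvt.Dataset source works.
    dataset = nvt.Dataset("data.parquet")

    # Integer columns are downcast based on the min/max observed during fit;
    # float columns are cast to float_dtype (numpy.float32 by default).
    features = ["user_id", "price"] >> nvt.ops.ReduceDtypeSize()

    workflow = nvt.Workflow(features)
    workflow.fit(dataset)
    reduced = workflow.transform(dataset).to_ddf().compute()
    print(reduced.dtypes)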

__init__(float_dtype=<class 'numpy.float32'>)[source]

Methods

__init__([float_dtype])

clear()
    Zero and reinitialize all relevant statistical properties.

column_mapping(col_selector)
    Compute which output columns depend on which input columns.

compute_column_schema(col_name, input_schema)

compute_input_schema(root_schema, ...)
    Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.

compute_output_schema(input_schema, selector)
    Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce.

compute_selector(input_schema, selector[, ...])
    Provides a hook method for sub-classes to override to implement custom column selection logic.

create_node(selector)

export(path, input_schema, output_schema, ...)
    Export the class object as a config and all related files to the user-defined path.

fit(col_selector, ddf)
    Calculate statistics for this operator, and return a dask future to these statistics, which will be computed by the workflow.

fit_finalize(dask_stats)
    Finalize the statistics calculation; the workflow calls this function with the computed statistics from the 'fit' object.

inference_initialize(col_selector, model_config)
    Configures this operator for use in inference.

load_artifacts([artifact_path])
    Load artifacts from disk required for operator function.

output_column_names(col_selector)
    Given a set of column names, returns the names of the transformed columns this operator will produce.

save_artifacts([artifact_path])
    Save artifacts required to reload operator state from disk.

set_storage_path(new_path[, copy])
    Certain stat operators need external storage; for instance, Categorify writes out parquet files containing the categorical mapping.

transform(col_selector, df)
    Transform the dataframe by applying this operator to the set of input columns.

validate_schemas(parents_schema, ...[, ...])
    Provides a hook method that sub-classes can override to implement schema validation logic.

Attributes

dependencies
    Defines an optional list of column dependencies for this operator.

dynamic_dtypes

export_name
    Provides a clear, common English identifier for this operator.

is_subgraph

label

output_dtype

output_properties

output_tags

supported_formats

supports
    Returns what kind of data representation this operator supports.

fit(col_selector: ColumnSelector, ddf: DataFrame)[source]

Calculate statistics for this operator, and return a dask future to these statistics, which will be computed by the workflow.

fit_finalize(dask_stats)[source]

Finalize the statistics calculation; the workflow calls this function with the computed statistics from the 'fit' object.

clear()[source]

Zero and reinitialize all relevant statistical properties.
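
These three methods are normally driven internally by Workflow.fit rather than called from user code. The following is a rough sketch of the documented contract only, assuming ColumnSelector is importable from merlin.dag (import paths and exact return types can vary between releases):

    import dask
    import dask.dataframe as dd
    import pandas as pd
    from merlin.dag import ColumnSelector  # location may differ across versions
    from nvtabular.ops import ReduceDtypeSize

    ddf = dd.from_pandas(pd.DataFrame({"x": [1, 2, 300]}), npartitions=1)
    op = ReduceDtypeSize()

    lazy_stats = op.fit(ColumnSelector(["x"]), ddf)  # lazy dask statistics
    (stats,) = dask.compute(lazy_stats)              # the workflow computes these
    op.fit_finalize(stats)                           # hand the computed stats back
    op.clear()                                       # reset for a fresh fit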

transform(col_selector: ColumnSelector, df: DataFrame) → DataFrame[source]

Transform the dataframe by applying this operator to the set of input columns

Parameters:
  • col_selector (ColumnSelector) – The columns to apply this operator to

  • df (DataFrame) – A pandas or cudf dataframe that this operator will work on

Returns:

Returns a transformed dataframe for this operator

Return type:

DataFrame
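
As an illustration (a hedged sketch with made-up column names; the exact dtypes chosen depend on the observed value ranges), running a small in-memory dataframe through a fitted Workflow shows the reduction applied by transform:

    import pandas as pd
    import nvtabular as nvt

    df = pd.DataFrame({
        "small_int": pd.Series([0, 5, 127], dtype="int64"),    # fits a smaller integer type
        "big_int": pd.Series([0, 1, 2**40], dtype="int64"),    # too large to downcast below int64
        "score": pd.Series([0.1, 0.2, 0.3], dtype="float64"),  # cast to float_dtype
    })

    workflow = nvt.Workflow(list(df.columns) >> nvt.ops.ReduceDtypeSize())
    out = workflow.fit_transform(nvt.Dataset(df)).to_ddf().compute()
    print(out.dtypes)  # expect smaller integer dtypes and float32 for "score"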

compute_output_schema(input_schema, selector, prev_output_schema=None)[source]

Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce

Parameters:
  • input_schema (Schema) – The schemas of the columns to apply this operator to

  • selector (ColumnSelector) – The column selector to apply to the input schema

Returns:

The schemas of the columns produced by this operator

Return type:

Schema