nvtabular.ops.Groupby
- class nvtabular.ops.Groupby(groupby_cols=None, sort_cols=None, aggs='list', name_sep='_', ascending=True)
Bases: Operator
Groupby Transformation
Locally transform each partition of a Dataset with one or more groupby aggregations.
WARNING: This transformation does NOT move data between partitions. Please make sure that the target Dataset object is already shuffled by groupby_cols, otherwise the output may be incorrect. See: Dataset.shuffle_by_keys.
Example usage:

```python
groupby_cols = ['user_id', 'session_id']

dataset = dataset.shuffle_by_keys(keys=groupby_cols)

groupby_features = [
    'user_id', 'session_id', 'month', 'prod_id',
] >> ops.Groupby(
    groupby_cols=groupby_cols,
    sort_cols=['month'],
    aggs={
        'prod_id': 'list',
        'month': ['first', 'last'],
    },
)

workflow = nvtabular.Workflow(groupby_features)
workflow.fit(dataset)
dataset_transformed = workflow.transform(dataset)
```
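With the default name_sep='_', the transformed dataset from this example should contain one row per (user_id, session_id) group, with the aggregation results in columns named prod_id_list, month_first, and month_last.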
- Parameters:
groupby_cols (str or list of str) – The column names to be used as groupby keys. WARNING: Ensure the dataset was partitioned by those groupby keys (see above for an example).
sort_cols (str or list of str) – Columns to be used to sort each partition before the groupby aggregation is performed. If this argument is not specified, the results will not be sorted.
aggs (dict, list or str) – Groupby aggregations to perform. Supported list-based aggregations include "list", "first" and "last". Most conventional aggregations supported by Pandas/cuDF are also allowed (e.g. "sum", "count", "max", "mean", etc.). See the sketch after this parameter list for the three accepted forms.
name_sep (str) – String separator to use for new column names.
ascending (bool) – Whether sort_cols sorting is performed in ascending order. Default True.
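A minimal sketch of the three accepted aggs forms. The column names user_id, month, and prod_id are hypothetical placeholders, and the behavior of the string and list forms (applying the aggregation(s) to every non-groupby column in the selection) is an assumption based on the parameter description above:

```python
from nvtabular import ops

# aggs as a single string: one aggregation, assumed to apply to
# every non-groupby column in the selection.
by_user_lists = ops.Groupby(groupby_cols='user_id', aggs='list')

# aggs as a list: several aggregations for every non-groupby column.
by_user_bounds = ops.Groupby(groupby_cols='user_id', aggs=['first', 'last'])

# aggs as a dict: per-column aggregations, as in the example above.
by_user_mixed = ops.Groupby(
    groupby_cols='user_id',
    sort_cols='month',
    aggs={'prod_id': 'list', 'month': ['first', 'last']},
)
```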
Methods
- __init__([groupby_cols, sort_cols, aggs, ...])
- column_mapping(col_selector)
- compute_column_schema(col_name, input_schema)
- compute_input_schema(root_schema, ...) – Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.
- compute_output_schema(input_schema, col_selector)
- compute_selector(input_schema, selector[, ...]) – Provides a hook method for sub-classes to override to implement custom column selection logic.
- create_node(selector)
- export(path, input_schema, output_schema, ...) – Export the class object as a config and all related files to the user-defined path.
- load_artifacts([artifact_path]) – Load artifacts from disk required for operator function.
- output_column_names(col_selector) – Given a set of column names, returns the names of the transformed columns this operator will produce.
- save_artifacts([artifact_path]) – Save artifacts required to reload operator state from disk.
- transform(col_selector, df) – Transform the dataframe by applying this operator to the set of input columns.
- validate_schemas(parents_schema, ...[, ...]) – Provides a hook method that sub-classes can override to implement schema validation logic.
Attributes
- dynamic_dtypes
- export_name – Provides a clear, common English identifier for this operator.
- is_subgraph
- label
- output_dtype
- output_properties
- output_tags
- supported_formats
- supports – Returns what kind of data representation this operator supports.
- transform(col_selector: ColumnSelector, df: DataFrame) → DataFrame
Transform the dataframe by applying this operator to the set of input columns
- Parameters:
col_selector (ColumnSelector) – The columns to apply this operator to
df (DataFrame) – A pandas or cudf dataframe that this operator will work on
- Returns:
Returns a transformed dataframe or DictArray for this operator
- Return type:
Transformable
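Because Groupby holds no fitted state, transform can be sketched on a single in-memory partition without building a Workflow. A minimal sketch, assuming ColumnSelector is importable from merlin.dag (older releases expose it as nvtabular.ColumnSelector) and using hypothetical toy data:

```python
import pandas as pd
from merlin.dag import ColumnSelector  # assumption: nvtabular.ColumnSelector on older releases
from nvtabular.ops import Groupby

# Hypothetical toy partition, already "shuffled" by user_id.
df = pd.DataFrame({
    'user_id': [1, 1, 2],
    'prod_id': [10, 11, 12],
})

op = Groupby(groupby_cols='user_id', aggs={'prod_id': 'list'})
out = op.transform(ColumnSelector(['user_id', 'prod_id']), df)
# Expected: one row per user_id, with a prod_id_list column
```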
- compute_output_schema(input_schema: Schema, col_selector: ColumnSelector, prev_output_schema: Optional[Schema] = None) → Schema
- property dependencies