nvtabular.ops.JoinExternal
- class nvtabular.ops.JoinExternal(df_ext, on, how='left', on_ext=None, columns_ext=None, drop_duplicates_ext=None, kind_ext=None, cache='host', **kwargs)
Bases: Operator
Join each dataset partition to an external table. For performance reasons, only “left” and “inner” join transformations are supported.
Example usage:
# Load dataset which should be joined to the main dataset
df_ext = cudf.read_parquet('external.parquet')

# Use JoinExternal to define a NVTabular workflow
joined = ColumnSelector(columns_left) >> nvt.ops.JoinExternal(
    df_ext,
    on=['key1', 'key2'],
    on_ext=['key1_ext', 'key2_ext'],
    how='left',
    columns_ext=['key1_ext', 'key2_ext', 'cat1', 'cat2', 'num1'],
    kind_ext='cudf',
    cache='device'
) >> ...
processor = nvtabular.Workflow(joined)
- Parameters:
  - df_ext (DataFrame, pyarrow.Table, Dataset, dd.DataFrame, or file path(s)) – The external table to join to each partition of the dataset. Note that the join must be a partition-wise transformation; therefore, if df_ext is a multi-partition Dask collection, it will need to be broadcast to every partition (see the sketch following this list).
  - on (str or list(str)) – Column name(s) to join on.
  - how ({"left", "inner"}; default "left") – Type of join operation to perform.
  - on_ext (str or list(str); Optional) – Column name(s) on the external table to join on. By default, on_ext is assumed to be the same as on.
  - columns_ext (list(str); Optional) – Subset of columns to select from the external table before the join.
  - drop_duplicates_ext (bool; Default False) – Drop duplicates from the external table before the join.
  - kind_ext (ExtData; Optional) – Format of df_ext. If nothing is specified, the format will be inferred.
  - cache ({"device", "host", "disk"}) – Where to cache df_ext between transformations. Only used if the data is originally stored on disk. The "host" option is also supported when df_ext is a cudf.DataFrame.
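Because the join is partition-wise, each partition is merged against the full external table exactly as in an ordinary dataframe merge. The following is a minimal sketch of the equivalent per-partition operation in plain pandas; the table contents and key names are hypothetical, not taken from the library:

import pandas as pd

# Hypothetical dataset partition and external table
part = pd.DataFrame({"key1": [1, 2, 3], "num0": [0.1, 0.2, 0.3]})
df_ext = pd.DataFrame({"key1_ext": [1, 2, 2], "cat1": ["a", "b", "b"]})

# Roughly what columns_ext does: subset the external table before joining
df_ext = df_ext[["key1_ext", "cat1"]]

# Roughly what drop_duplicates_ext=True does: de-duplicate before joining
df_ext = df_ext.drop_duplicates()

# The per-partition "left" join that JoinExternal performs
joined = part.merge(df_ext, left_on="key1", right_on="key1_ext", how="left")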
- __init__(df_ext, on, how='left', on_ext=None, columns_ext=None, drop_duplicates_ext=None, kind_ext=None, cache='host', **kwargs)
Methods

- __init__(df_ext, on[, how, on_ext, ...])
- column_mapping(col_selector)
- compute_column_schema(col_name, input_schema)
- compute_input_schema(root_schema, ...) – Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.
- compute_output_schema(input_schema, col_selector)
- compute_selector(input_schema, selector, ...)
- create_node(selector)
- export(path, input_schema, output_schema, ...) – Export the class object as a config and all related files to the user-defined path.
- load_artifacts([artifact_path]) – Load artifacts from disk required for operator function.
- output_column_names(col_selector) – Given a set of column names, returns the names of the transformed columns this operator will produce.
- save_artifacts([artifact_path]) – Save artifacts required to reload operator state from disk.
- transform(col_selector, df) – Transform the dataframe by applying this operator to the set of input columns.
- validate_schemas(parents_schema, ...[, ...]) – Provides a hook method that sub-classes can override to implement schema validation logic.
Attributes

- dependencies – Defines an optional list of column dependencies for this operator.
- dynamic_dtypes
- export_name – Provides a clear common English identifier for this operator.
- is_subgraph
- label
- output_dtype
- output_properties
- output_tags
- supported_formats
- supports – Returns what kind of data representation this operator supports.
- transform(col_selector: ColumnSelector, df: DataFrame) → DataFrame
Transform the dataframe by applying this operator to the set of input columns.
- Parameters:
  - col_selector (ColumnSelector) – The columns to apply this operator to.
  - df (DataFrame) – A pandas or cudf dataframe that this operator will work on.
- Returns:
Returns a transformed dataframe or dictarray for this operator.
- Return type:
Transformable
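In practice, transform is invoked for you by a workflow rather than called directly. The sketch below shows one way to apply the operator end to end; the data and column names are hypothetical, and since JoinExternal is stateless, no statistics need to be collected first (depending on the library version, a prior workflow.fit(...) call may still be required):

import nvtabular as nvt
import pandas as pd

# Hypothetical main dataset and external lookup table
df = pd.DataFrame({"key1": [1, 2, 3], "num0": [0.1, 0.2, 0.3]})
df_ext = pd.DataFrame({"key1": [1, 2], "cat1": ["a", "b"]})

# Select columns and join the external table onto them
graph = ["key1", "num0"] >> nvt.ops.JoinExternal(df_ext, on="key1", how="left")
workflow = nvt.Workflow(graph)

# Apply the transform; to_ddf().compute() materializes the result
result = workflow.transform(nvt.Dataset(df)).to_ddf().compute()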
- compute_selector(input_schema: Schema, selector: ColumnSelector, parents_selector: ColumnSelector, dependencies_selector: ColumnSelector) → ColumnSelector