nvtabular.ops.JoinExternal
class nvtabular.ops.JoinExternal(df_ext, on, how='left', on_ext=None, columns_ext=None, drop_duplicates_ext=None, kind_ext=None, cache='host', **kwargs)[source]
Bases: nvtabular.ops.operator.Operator

Join each dataset partition to an external table. For performance reasons, only “left” and “inner” join transformations are supported.

Example usage:

    # Load the external table that should be joined to the main dataset
    df_ext = cudf.read_parquet('external.parquet')

    # Use JoinExternal to define an NVTabular workflow
    joined = ColumnSelector(columns_left) >> nvt.ops.JoinExternal(
        df_ext,
        on=['key1', 'key2'],
        on_ext=['key1_ext', 'key2_ext'],
        how='left',
        columns_ext=['key1_ext', 'key2_ext', 'cat1', 'cat2', 'num1'],
        kind_ext='cudf',
        cache='device'
    ) >> ...
    processor = nvtabular.Workflow(joined)

Parameters
- df_ext (DataFrame, pyarrow.Table, Dataset, dd.DataFrame, or file path(s)) – The external table to join to each partition of the dataset. Note that the join must be a partition-wise transformation. Therefore, if df_ext is a multi-partition Dask collection, it will need to be broadcast to every partition.
- how ({"left", "inner"}; default "left") – Type of join operation to perform. 
- on_ext (str or list(str); Optional) – Column name(s) on the external table to join on. By default, we assume on_ext is the same as on.
- columns_ext (list(str); Optional) – Subset of columns to select from external table before join. 
- drop_duplicates_ext (bool; Default False) – Drop duplicates from external table before join. 
- kind_ext (ExtData; Optional) – Format of df_ext. If nothing is specified, the format will be inferred.
- cache ({"device", "host", "disk"}) – Where to cache df_ext between transformations. Only used if the data is originally stored on disk. The “host” option is also supported when df_ext is a cudf.DataFrame. (See the sketch after this parameter list for an example combining these options.)
 
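As a sketch of how these options combine (the file name, join keys, and column names below are hypothetical and not taken from the NVTabular examples), df_ext can also be passed as a Parquet file path, with drop_duplicates_ext and cache controlling how the external table is prepared and held between transformations:

    import nvtabular as nvt
    from merlin.dag import ColumnSelector

    # Hypothetical external item table stored on disk; JoinExternal reads it
    # and keeps it in CPU memory ("host") between dataset partitions.
    joined = ColumnSelector(["user_id", "item_id"]) >> nvt.ops.JoinExternal(
        "item_features.parquet",                       # df_ext given as a file path
        on="item_id",                                  # join key on the main dataset
        columns_ext=["item_id", "category", "price"],  # subset of external columns
        drop_duplicates_ext=True,                      # de-duplicate before joining
        how="left",
        cache="host",
    )
    workflow = nvt.Workflow(joined)
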
__init__(df_ext, on, how='left', on_ext=None, columns_ext=None, drop_duplicates_ext=None, kind_ext=None, cache='host', **kwargs)[source]
Methods

- __init__(df_ext, on[, how, on_ext, …])
- column_mapping(col_selector)
- compute_column_schema(col_name, input_schema)
- compute_input_schema(root_schema, …) – Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.
- compute_output_schema(input_schema, col_selector)
- compute_selector(input_schema, selector, …)
- create_node(selector)
- inference_initialize(col_selector, model_config) – Configures this operator for use in inference.
- output_column_names(col_selector) – Given a set of column names, returns the names of the transformed columns this operator will produce.
- transform(col_selector, df) – Transform the dataframe by applying this operator to the set of input columns.
- validate_schemas(parents_schema, …[, …]) – Provides a hook method that sub-classes can override to implement schema validation logic.

Attributes

- dependencies – Defines an optional list of column dependencies for this operator.
- dynamic_dtypes
- label
- output_dtype
- output_properties
- output_tags
- supported_formats
- supports – Returns what kind of data representation this operator supports.

transform(col_selector: merlin.dag.selector.ColumnSelector, df: pandas.core.frame.DataFrame) → pandas.core.frame.DataFrame[source]
Transform the dataframe by applying this operator to the set of input columns.

Parameters
- col_selector (ColumnSelector) – The columns to apply this operator to
- df (Dataframe) – A pandas or cudf dataframe that this operator will work on 
 
- Returns
- Returns a transformed dataframe for this operator 
- Return type
- DataFrame
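
As a minimal sketch (the column names and toy data are hypothetical), transform is normally not called directly; it is invoked on each partition when a Workflow containing this operator processes a Dataset:

    import pandas as pd
    import nvtabular as nvt
    from merlin.dag import ColumnSelector

    # Hypothetical main and external tables
    df = pd.DataFrame({"user_id": [1, 2, 3], "item_id": [10, 20, 10]})
    df_ext = pd.DataFrame({"item_id": [10, 20], "category": ["a", "b"]})

    joined = ColumnSelector(["user_id", "item_id"]) >> nvt.ops.JoinExternal(
        df_ext, on="item_id", how="left"
    )
    workflow = nvt.Workflow(joined)

    # fit_transform calls JoinExternal.transform on each partition of the dataset
    out = workflow.fit_transform(nvt.Dataset(df)).to_ddf().compute()
    print(out)  # user_id, item_id, plus "category" joined from df_ext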