nvtabular.ops.ColumnSimilarity
- class nvtabular.ops.ColumnSimilarity(left_features, right_features=None, metric='tfidf', on_device=True)
Bases: Operator
Calculates the similarity between two columns using tf-idf, cosine, or inner product as the distance metric. For each row, this computes the distance between the two columns by using each column's value as a row index into a sparse feature matrix, and then computing the distance between the corresponding rows of the feature matrices.
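To make the per-row computation concrete, here is a minimal host-side sketch (using scipy rather than the operator itself) of what a single row's similarity amounts to under the cosine metric: the two ID values select rows of the feature matrix, and the distance is computed between those rows. The `row_cosine` helper is hypothetical, for illustration only, and is not part of the NVTabular API.

```python
import numpy as np
from scipy.sparse import csr_matrix

def row_cosine(features: csr_matrix, left_id: int, right_id: int) -> float:
    """Cosine similarity between two rows of a sparse feature matrix
    (hypothetical helper, for illustration only)."""
    a = features.getrow(left_id)
    b = features.getrow(right_id)
    denom = np.sqrt(a.multiply(a).sum() * b.multiply(b).sum())
    return float(a.multiply(b).sum() / denom) if denom else 0.0

# Toy feature matrix: rows are document ids, columns are category ids
features = csr_matrix(np.array([[1.0, 0.0, 1.0],
                                [1.0, 1.0, 0.0],
                                [0.0, 0.0, 1.0]]))
print(row_cosine(features, 0, 1))  # documents 0 and 1 share one category -> 0.5
```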
Example usage:
```python
import cudf
import cupy
import cupyx.scipy.sparse
import nvtabular as nvt
from nvtabular.ops import ColumnSimilarity

# Read in the 'document_categories' file from the Kaggle Outbrain dataset and
# convert it to a sparse matrix
df = cudf.read_csv("document_categories.csv.zip")
categories = cupyx.scipy.sparse.coo_matrix(
    (cupy.ones(len(df)), (df.document_id.values, df.category_id.values))
)

# Compute a new column 'document_id_document_id_promo_sim' between the
# document_id and document_id_promo columns using tfidf distance on the
# categories matrix we just loaded up
sim_features = [["document_id", "document_id_promo"]] >> ColumnSimilarity(
    categories, metric='tfidf', on_device=False
)
workflow = nvt.Workflow(sim_features)
```
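Continuing the example, the workflow can then be applied to a dataset that contains the two ID columns. This is a sketch rather than verbatim documentation: `df_pairs` is a hypothetical cudf or pandas dataframe holding `document_id` and `document_id_promo` columns, and `fit_transform` is used for convenience even though this operator computes no statistics.

```python
# df_pairs is a hypothetical dataframe with the two ID columns
dataset = nvt.Dataset(df_pairs)
out = workflow.fit_transform(dataset).to_ddf().compute()
print(out["document_id_document_id_promo_sim"].head())
```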
- Parameters:
left_features (csr_matrix) – Sparse feature matrix for the left column
right_features (csr_matrix, optional) – Sparse feature matrix for the right column in each pair. If not given, the left feature matrix is reused (for example, when calculating document-document distances)
metric (str, optional) – Distance metric to use when comparing feature rows: "tfidf", "cosine", or "inner" (inner product)
on_device (bool) – Whether to compute on the GPU or CPU. Computing on the GPU will be faster, but requires that the left_features/right_features sparse matrices fit into GPU memory.
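When on_device=False, the computation runs on the host; under the assumption that scipy sparse matrices are accepted in that mode, the feature matrix from the example above could be built with pandas and scipy instead of cudf and cupyx:

```python
import numpy as np
import pandas as pd
import scipy.sparse

df = pd.read_csv("document_categories.csv.zip")
# Rows indexed by document id, columns by category id; CSR gives fast row lookups
categories = scipy.sparse.coo_matrix(
    (np.ones(len(df)), (df.document_id.values, df.category_id.values))
).tocsr()
```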
Methods

__init__(left_features[, right_features, ...])
column_mapping(col_selector)
compute_column_schema(col_name, input_schema)
compute_input_schema(root_schema, ...) – Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use.
compute_output_schema(input_schema, col_selector) – Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce.
compute_selector(input_schema, selector, ...)
create_node(selector)
export(path, input_schema, output_schema, ...) – Export the class object as a config and all related files to the user-defined path.
load_artifacts([artifact_path]) – Load artifacts from disk required for operator function.
output_column_names(col_selector) – Given a set of column names, returns the names of the transformed columns this operator will produce.
save_artifacts([artifact_path]) – Save artifacts required to reload operator state from disk.
transform(col_selector, df) – Transform the dataframe by applying this operator to the set of input columns.
validate_schemas(parents_schema, ...[, ...]) – Provides a hook method that sub-classes can override to implement schema validation logic.
Attributes

dependencies – Defines an optional list of column dependencies for this operator.
dynamic_dtypes
export_name – Provides a clear, common English identifier for this operator.
is_subgraph
label
output_properties
supported_formats
supports – Returns what kind of data representation this operator supports.
- transform(col_selector: ColumnSelector, df: DataFrame) → DataFrame
Transform the dataframe by applying this operator to the set of input columns.
- Parameters:
col_selector (ColumnSelector) – The columns to apply this operator to
df (DataFrame) – A pandas or cudf dataframe that this operator will work on
- Returns:
Returns a transformed dataframe or dictarray for this operator
- Return type:
Transformable
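For debugging outside a Workflow, transform can also be invoked directly. A minimal sketch, assuming ColumnSelector accepts a nested list to denote a column pair (mirroring the `>>` example above) and reusing the toy `features` matrix and hypothetical `df_pairs` dataframe from earlier:

```python
from merlin.dag import ColumnSelector
from nvtabular.ops import ColumnSimilarity

op = ColumnSimilarity(features, metric="tfidf", on_device=False)
selector = ColumnSelector([["document_id", "document_id_promo"]])
result = op.transform(selector, df_pairs)  # adds 'document_id_document_id_promo_sim'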
- compute_selector(input_schema: Schema, selector: ColumnSelector, parents_selector: ColumnSelector, dependencies_selector: ColumnSelector) → ColumnSelector
- property output_tags
- property output_dtype