class nvtabular.ops.ColumnSimilarity(left_features, right_features=None, metric='tfidf', on_device=True)[source]

Bases: nvtabular.ops.operator.Operator

Calculates the similarity between two columns using tf-idf, cosine or inner product as the distance metric. For each row, this calculates the distance between the two columns by looking up features for those columns in a sparse matrix, and then computing the distance between the rows of the feature matrices.
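The per-row computation can be illustrated with a small CPU sketch using scipy (the feature matrix, ids, and `row_cosine` helper below are hypothetical toy examples of the underlying math, not NVTabular's implementation):

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy feature matrix: one row per item (e.g. document), one column per
# feature (e.g. category). Hypothetical data for illustration only.
features = csr_matrix(np.array([
    [1.0, 0.0, 2.0],
    [0.0, 1.0, 1.0],
    [1.0, 1.0, 0.0],
]))

def row_cosine(features, left_ids, right_ids):
    """Cosine similarity between features[left_ids[i]] and features[right_ids[i]]."""
    left, right = features[left_ids], features[right_ids]
    dots = np.asarray(left.multiply(right).sum(axis=1)).ravel()
    norms = (np.sqrt(np.asarray(left.power(2).sum(axis=1)).ravel())
             * np.sqrt(np.asarray(right.power(2).sum(axis=1)).ravel()))
    # Guard against empty feature rows, which have zero norm
    return np.where(norms > 0, dots / norms, 0.0)

# Compare item 0 with item 2, and item 1 with item 2 -- one similarity per row
sims = row_cosine(features, left_ids=[0, 1], right_ids=[2, 2])
```

The operator does the same feature lookup per dataframe row, using the values in the left and right columns as row indices into the feature matrices.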

Example usage:

import cudf
import cupy
import cupyx
import nvtabular as nvt
from nvtabular.ops import ColumnSimilarity

# Read in the 'document_categories' file from the kaggle outbrains dataset and convert
# to a sparse matrix
df = cudf.read_csv("document_categories.csv.zip")
categories = cupyx.scipy.sparse.coo_matrix((cupy.ones(len(df)),
                                           (df.document_id.values, df.category_id.values)))
# compute a new column 'document_id_document_id_promo_sim' between the document_id and
# document_id_promo columns on tfidf distance on the categories matrix we just loaded up
sim_features = [["document_id", "document_id_promo"]] >> ColumnSimilarity(categories,
                                                        metric='tfidf', on_device=False)
workflow = nvt.Workflow(sim_features)
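With the 'tfidf' metric, the feature matrix is re-weighted by inverse document frequency before cosine similarity is taken, so that rare features contribute more. A sketch of one common idf formulation using scipy (an assumption for illustration, not necessarily NVTabular's exact weighting):

```python
import numpy as np
from scipy.sparse import csr_matrix, diags

# Toy item-by-feature matrix (rows: items, columns: features)
features = csr_matrix(np.array([
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 1.0],
    [1.0, 0.0, 0.0],
]))

# Document frequency: number of rows in which each feature appears
n_rows = features.shape[0]
doc_freq = features.getnnz(axis=0)

# idf weighting with log(n / df) + 1 smoothing -- assumed for illustration
idf = np.log(n_rows / doc_freq) + 1.0

# Scale each column by its idf weight; cosine similarity is then computed
# on this weighted matrix instead of the raw counts
weighted = features @ diags(idf)
```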
Parameters

  • left_features (csr_matrix) – Sparse feature matrix for the left column in each pair

  • right_features (csr_matrix, optional) – Sparse feature matrix for the right column in each pair. If not given, the left feature matrix is used for both sides (for example, when calculating document-document distances)

  • metric ('tfidf', 'cosine' or 'inner') – Distance metric to use when comparing feature rows

  • on_device (bool) – Whether to compute on the GPU or CPU. Computing on the GPU will be faster, but requires that the left_features/right_features sparse matrices fit into GPU memory
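When running on CPU (on_device=False), an equivalent feature matrix can be built with pandas and scipy instead of cudf/cupy. A minimal sketch, with made-up toy data standing in for the document_categories file:

```python
import numpy as np
import pandas as pd
from scipy.sparse import coo_matrix

# Long-format item/feature pairs, analogous to the document_categories file
# in the example above (toy values, not the real Outbrain data)
df = pd.DataFrame({
    "document_id": [0, 0, 1, 2],
    "category_id": [0, 2, 1, 0],
})

# Entry (i, j) is nonzero when item i has feature j -- the layout that
# left_features/right_features expect. CSR is efficient for the row
# lookups the operator performs.
categories = coo_matrix(
    (np.ones(len(df)), (df.document_id.values, df.category_id.values))
).tocsr()
```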

__init__(left_features, right_features=None, metric='tfidf', on_device=True)[source]


Methods

__init__(left_features[, right_features, …])


compute_column_schema(col_name, input_schema)

compute_input_schema(root_schema, …)

Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use

compute_output_schema(input_schema, col_selector)

Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce

compute_selector(input_schema, selector, …)


inference_initialize(col_selector, model_config)

Configures this operator for use in inference.


output_column_names(col_selector)

Given a set of column names, returns the names of the transformed columns this operator will produce

transform(col_selector, df)

Transform the dataframe by applying this operator to the set of input columns

validate_schemas(parents_schema, …[, …])

Provides a hook method that sub-classes can override to implement schema validation logic.



Attributes

dependencies

Defines an optional list of column dependencies for this operator.

supports

Returns what kind of data representation this operator supports.

transform(col_selector: merlin.dag.selector.ColumnSelector, df: pandas.core.frame.DataFrame) → pandas.core.frame.DataFrame[source]

Transform the dataframe by applying this operator to the set of input columns

Parameters

  • columns (list of str or list of list of str) – The columns to apply this operator to

  • df (Dataframe) – A pandas or cudf dataframe that this operator will work on


Returns

A transformed dataframe for this operator

Return type

DataFrame

compute_selector(input_schema: merlin.schema.schema.Schema, selector: merlin.dag.selector.ColumnSelector, parents_selector: merlin.dag.selector.ColumnSelector, dependencies_selector: merlin.dag.selector.ColumnSelector) → merlin.dag.selector.ColumnSelector[source]
property output_tags
property output_dtype