nvtabular.ops.DifferenceLag

class nvtabular.ops.DifferenceLag(partition_cols, shift=1)[source]

Bases: nvtabular.ops.operator.Operator

Calculates the difference between two consecutive rows of the dataset. For instance, this operator can calculate the time since a user last had another interaction.

This operator requires the dataset to be partitioned by one set of columns (e.g. userid) and sorted by another (e.g. userid, timestamp). The partitioning and sorting must be done before the dataset is passed to the workflow. This can be done easily using dask-cudf:

# get a nvt dataset and convert to a dask dataframe
ddf = nvtabular.Dataset(PATHS).to_ddf()

# partition the dask dataframe by userid, then sort by userid/timestamp
ddf = ddf.shuffle("userid").sort_values(["userid", "timestamp"])

# create a new nvtabular dataset on the partitioned/sorted values
dataset = nvtabular.Dataset(ddf)
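Before building the workflow, it can be worth verifying that the data actually meets this precondition. The helper below is a hypothetical sketch (it is not part of NVTabular) that checks, with plain pandas, whether rows are ordered by the partition and sort columns:

```python
import pandas as pd

# Hypothetical helper (not part of NVTabular): check that a dataframe
# satisfies DifferenceLag's precondition -- rows ordered by the
# partition columns plus the sort columns, so each partition is
# contiguous and sorted.
def check_partition_sorted(df, partition_cols, sort_cols):
    expected = df.sort_values(partition_cols + sort_cols).reset_index(drop=True)
    return df.reset_index(drop=True).equals(expected)

df = pd.DataFrame({
    "userid":    [1, 1, 2, 2],
    "timestamp": [10, 20, 5, 30],
})
assert check_partition_sorted(df, ["userid"], ["timestamp"])
```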

Once passed an appropriate dataset, this operator can be used to create a workflow to compute the lagged difference within a partition:

# compute the delta in timestamp for each user's session
diff_features = ["timestamp"] >> ops.DifferenceLag(partition_cols=["userid"], shift=[1, -1])
processor = nvtabular.Workflow(diff_features)
Parameters
  • partition_cols (str or list of str) – Column or columns used to partition the data.

  • shift (int, default 1) – The number of rows to look backwards when computing the difference lag. Negative values indicate the number of rows to look forwards, making this compute the lead instead of lag.

__init__(partition_cols, shift=1)[source]

Methods

__init__(partition_cols[, shift])

column_mapping(col_selector)

compute_column_schema(col_name, input_schema)

compute_input_schema(root_schema, …)

Given the schemas coming from upstream sources and a column selector for the input columns, returns a set of schemas for the input columns this operator will use. root_schema is the base schema of the dataset before any operators run.

compute_output_schema(input_schema, col_selector)

Given a set of schemas and a column selector for the input columns, returns a set of schemas for the transformed columns this operator will produce. input_schema (Schema) holds the schemas of the columns to apply this operator to; col_selector (ColumnSelector) selects the input columns from that schema.

compute_selector(input_schema, selector, …)

create_node(selector)

inference_initialize(col_selector, model_config)

Configures this operator for use in inference.

output_column_names(col_selector)

Given a set of column names, returns the names of the transformed columns this operator will produce. columns (list of str, or list of list of str) names the columns to apply this operator to.

transform(col_selector, df)

Transform the dataframe by applying this operator to the set of input columns

Attributes

dependencies

dynamic_dtypes

label

output_dtype

output_properties

output_tags

supports

Returns what kind of data representation this operator supports

transform(col_selector: merlin.dag.selector.ColumnSelector, df: pandas.core.frame.DataFrame) → pandas.core.frame.DataFrame[source]

Transform the dataframe by applying this operator to the set of input columns

Parameters
  • col_selector (ColumnSelector) – The columns to apply this operator to

  • df (DataFrame) – A pandas or cudf dataframe that this operator will work on

Returns

Returns a transformed dataframe for this operator

Return type

DataFrame

property dependencies

column_mapping(col_selector)[source]

property output_tags

property output_dtype