Compute change in token usage over time.

compute_change_at(data = NULL, token = NULL, timebin = NULL, bin = TRUE,
  timefloor = NULL, top_pct = 0.25, only_signif = FALSE,
  signif_cutoff = 0.1, return_models = TRUE, return_data = FALSE,
  return_both = FALSE)

compute_change(..., token, timebin, timefloor)

Arguments

data

data.frame.

token

bare for NSE; character for SE. Name of column in data corresponding to the token (see the sketch following this list).

timebin

bare for NSE; character for SE. Name of column in data specifying the temporal period to use when computing change.

bin

logical. Whether or not to call lubridate::floor_date() to truncate timebin.

timefloor

character. Value passed directly to the unit parameter of lubridate::floor_date() if bin = FALSE.

top_pct

numeric. Number between 0 and 1. Useful primarily to limit the number of models that need to be computed and to reduce 'noise' regarding what is deemed significant.

only_signif

logical. Whether to return only rows with a significant p-value.

signif_cutoff

numeric. Number between 0 and 1. Value to use as the 'maximum' threshold for significance. Only used if only_signif = TRUE.

return_models

logical. Whether to return just the models. This is probably the preferred option when calling compute_change_at() directly.

return_data

logical. Whether to return the 'bytime' data, which is used as the data parameter in stats::glm() for creating models. Needed when using visualize_change_at().

return_both

logical. Whether to return both the models and the data. Set to TRUE when calling visualize_change_at().

...

dots. Parameters to pass directly to visualize_time().
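A minimal usage sketch, assuming a data frame unigrams with a token column word and a timestamp column timestamp (both hypothetical names, not part of the package); the SE variant takes character column names, while compute_change() takes bare names:

# SE call: column names given as strings.
models <- compute_change_at(
  data = unigrams,
  token = "word",
  timebin = "timestamp",
  bin = TRUE,
  top_pct = 0.25,
  return_models = TRUE
)

# NSE call: bare column names.
models <- compute_change(data = unigrams, token = word, timebin = timestamp)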

Value

data.frame.

Details

None.

See also

https://www.tidytextmining.com/twitter.html#changes-in-token-use
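
Examples

The linked chapter fits a binomial GLM of each token's per-bin count against time and inspects the slope's p-value. The sketch below illustrates that general approach (not the package's exact internals), reusing the hypothetical unigrams data frame from above.

library(dplyr)
library(tidyr)
library(purrr)
library(lubridate)

# Count each token per month and attach per-bin and per-token totals.
words_by_time <- unigrams %>%
  mutate(timebin = floor_date(timestamp, unit = "month")) %>%
  count(timebin, word) %>%
  group_by(timebin) %>%
  mutate(time_total = sum(n)) %>%
  group_by(word) %>%
  mutate(word_total = sum(n)) %>%
  ungroup() %>%
  filter(word_total >= 30)  # drop rare tokens to limit models and noise

# One logistic GLM per token: usage rate modeled as a function of time.
models <- words_by_time %>%
  nest(data = -word) %>%
  mutate(model = map(
    data,
    ~ glm(cbind(n, time_total - n) ~ timebin, data = .x, family = "binomial")
  ))

# Keep slopes whose adjusted p-value falls below a cutoff (cf. signif_cutoff).
slopes <- models %>%
  mutate(tidied = map(model, broom::tidy)) %>%
  select(word, tidied) %>%
  unnest(tidied) %>%
  filter(term == "timebin") %>%
  mutate(adjusted.p.value = p.adjust(p.value)) %>%
  filter(adjusted.p.value < 0.1)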