Compute change in token usage over time.
compute_change_at(data = NULL, token = NULL, timebin = NULL, bin = TRUE,
  timefloor = NULL, top_pct = 0.25, only_signif = FALSE, signif_cutoff = 0.1,
  return_models = TRUE, return_data = FALSE, return_both = FALSE)

compute_change(..., token, timebin, timefloor)
| Argument | Description |
|---|---|
| `data` | data.frame. |
| `token` | Bare name for NSE; character for SE. Name of column in `data` containing the tokens to evaluate. |
| `timebin` | Bare name for NSE; character for SE. Name of column in `data` to use for the time axis. |
| `bin` | logical. Whether or not to call `lubridate::floor_date()` to bin `timebin`. |
| `timefloor` | character. Value passed directly to `lubridate::floor_date()`. |
| `top_pct` | numeric. Number between 0 and 1. Useful primarily to limit the number of models that need to be computed and to reduce 'noise' regarding what is deemed significant. |
| `only_signif` | logical. Whether or not to return only rows with a significant p-value. |
| `signif_cutoff` | numeric. Number between 0 and 1. Value to use as the 'maximum' threshold for significance. Only used if `only_signif = TRUE`. |
| `return_models` | logical. Whether to return just the models. This is probably the preferred option when calling the function directly. |
| `return_data` | logical. Whether to return the 'bytime' data, which is used as the `data` for the models. |
| `return_both` | logical. Set to `TRUE` to return both the models and the data. |
| `...` | dots. Parameters to pass directly to `compute_change_at()`. |
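As an illustration, a hypothetical call might look like the following. The `tweets` data frame and its `word` and `timestamp` columns are assumptions for the example, not part of the package.

```r
# Hypothetical input: `tweets` with columns `word` (token) and `timestamp`.
# The package providing compute_change()/compute_change_at() is assumed loaded.

# NSE interface: bare column names.
models <- compute_change(
  data      = tweets,    # forwarded via `...` to compute_change_at()
  token     = word,
  timebin   = timestamp,
  timefloor = "month"
)

# SE interface: character column names, with explicit return options.
res <- compute_change_at(
  data          = tweets,
  token         = "word",
  timebin       = "timestamp",
  bin           = TRUE,     # floor timestamps before counting
  timefloor     = "month",
  top_pct       = 0.25,     # model only the most frequent 25% of tokens
  only_signif   = TRUE,     # keep only significant slopes...
  signif_cutoff = 0.1,      # ...below this p-value threshold
  return_both   = TRUE      # return both the models and the 'bytime' data
)
```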
Value: data.frame.
Details: None.
Source: https://www.tidytextmining.com/twitter.html#changes-in-token-use
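For context, the linked chapter estimates change in token usage by fitting one binomial GLM per token, regressing the token's count in each time bin against the bin's total token count. A minimal sketch of that approach on synthetic data follows; it mirrors the chapter's recipe, not necessarily the package's actual internals.

```r
library(dplyr)
library(tidyr)
library(purrr)
library(broom)
library(lubridate)

# Synthetic stand-in data: one row per observed (timestamp, word) pair.
set.seed(1)
tokens <- tibble(
  timestamp = as.POSIXct("2023-01-01", tz = "UTC") +
    runif(2000, 0, 60 * 60 * 24 * 365),
  word = sample(c("model", "data", "plot", "tidy"), 2000, replace = TRUE)
)

# Count each word per month, along with each month's total token count.
words_by_time <- tokens %>%
  mutate(time_floor = floor_date(timestamp, unit = "month")) %>%
  count(time_floor, word) %>%
  group_by(time_floor) %>%
  mutate(time_total = sum(n)) %>%
  ungroup()

# One binomial GLM per word: does the word's share of tokens change over time?
slopes <- words_by_time %>%
  nest(data = -word) %>%
  mutate(
    model  = map(data, ~ glm(cbind(n, time_total - n) ~ time_floor,
                             data = .x, family = "binomial")),
    tidied = map(model, tidy)
  ) %>%
  select(word, tidied) %>%
  unnest(tidied) %>%
  filter(term == "time_floor") %>%
  mutate(adjusted_p = p.adjust(p.value, method = "holm")) %>%
  arrange(adjusted_p)
```

In this framing, `top_pct` plausibly corresponds to pre-filtering which tokens get modeled, and `signif_cutoff` to the final p-value filter.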