Conducting data analysis tasks rarely occurs in isolation. Especially in intelligence analysis scenarios where different experts contribute knowledge to a shared understanding, members must communicate how insights develop to establish common ground among collaborators. The use of provenance to communicate analytic sensemaking carries promise by describing the interactions and summarizing the steps taken to reach insights. Yet, no universal guidelines exist for communicating provenance in different settings. Our work focuses on the presentation of provenance information and the resulting conclusions reached and strategies used by new analysts. In an open-ended, 30-minute, textual exploration scenario, we qualitatively compare how adding different types of provenance information (specifically data coverage and interaction history) affects analysts' confidence in conclusions developed, propensity to repeat work, filtering of data, identification of relevant information, and typical investigation strategies. We see that data coverage (i.e., what was interacted with) provides provenance information without limiting individual investigation freedom. On the other hand, while interaction history (i.e., when something was interacted with) does not significantly encourage more mimicry, it does take more time to comfortably understand, as reflected in less confident conclusions and less relevant information-gathering behaviors. Our results contribute empirical data towards understanding how provenance summarizations can influence analysis behaviors.

The visual analytics community has proposed several user modeling algorithms to capture and analyze users' interaction behavior in order to assist users in data exploration and insight generation. For example, some can detect exploration biases, while others can predict data points that the user will interact with before that interaction occurs. Researchers believe this collection of algorithms can help create more intelligent visual analytics tools. However, the community lacks a rigorous evaluation and comparison of these existing techniques. As a result, there is limited guidance on which method to use and when. Our paper seeks to fill this gap by comparing and ranking eight user modeling algorithms based on their performance on a diverse set of four user study datasets. We analyze exploration bias detection, data interaction prediction, and algorithmic complexity, among other measures. Based on our findings, we highlight open challenges and new directions for analyzing user interactions and visualization provenance.

In this study, the existence of anchoring bias (people's tendency to rely on, evaluate, and decide based on the first piece of information they receive) is examined in two multi-attribute decision-making (MADM) methods: the simple multi-attribute rating technique (SMART) and Swing. Data were collected from university students for a transportation mode selection task. Data analysis revealed that the two methods, which have different starting points, display different degrees of anchoring bias. Statistical analyses of the weights obtained from the two methods show that, compared to Swing (with a high anchor), SMART (with a low anchor) produces lower weights for the least important attributes, while for the most important attributes, the opposite is true.
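The contrast between the two elicitation schemes above can be sketched in a few lines. This is a minimal sketch of standard SMART and Swing weighting, not the study's exact procedure: the attribute names and ratings are hypothetical, chosen only to show how the starting anchor (a low value for the least important attribute in SMART, 100 for the most important swing in Swing) feeds into the normalized weights.

```python
def normalize(ratings):
    """Convert raw importance ratings into weights summing to 1."""
    total = sum(ratings.values())
    return {attr: r / total for attr, r in ratings.items()}

# SMART-style elicitation (illustrative numbers): the least important
# attribute is anchored at a low value (commonly 10) and the remaining
# attributes are rated relative to it, working upward.
smart_ratings = {"comfort": 10, "cost": 40, "travel_time": 80}

# Swing-style elicitation (illustrative numbers): the most important
# attribute's swing is anchored at 100 and the remaining attributes
# are rated relative to it, working downward.
swing_ratings = {"comfort": 30, "cost": 60, "travel_time": 100}

smart_weights = normalize(smart_ratings)  # e.g., comfort -> 10/130
swing_weights = normalize(swing_ratings)  # e.g., comfort -> 30/190
```

With these illustrative numbers, the low SMART anchor leaves the least important attribute with a smaller normalized weight than under Swing, and the most important attribute with a larger one, which is the same directional pattern the abstract reports, though the actual magnitudes depend on the elicited ratings.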