Learning Analytics systems often use Bayesian Knowledge Tracing (BKT) to predict what skills students have mastered.
In this article, we present three prototype Explainables for conveying useful information about BKT to our end users: teachers and students. Because these end users may not have a background in computer science, explaining the algorithm requires an additional layer of abstraction.
Bayesian Knowledge Tracing (BKT) was introduced by Corbett & Anderson (1995) as a means of modelling students' knowledge as a latent variable in technology-enhanced learning (TEL) environments. The TEL environment maintains an estimate of the probability that the student has learned each skill, a model that is statistically equivalent to a 2-node dynamic Bayesian network. Decades of additional research on learner modelling followed, producing a variety of improvements to the BKT approach with considerable predictive success, including estimating individual parameters instead of skill-based parameters (Yudelson et al., 2013) and new ways to estimate the initial parameters (Baker et al., 2008). BKT is used across the United States in TELs such as the Open Learning Initiative and the Open Analytics Research Service (Bassen et al., 2018).
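To make this concrete, the following is a minimal sketch of a single-skill BKT update in Python. The function and parameter names (bkt_update, p_transit, p_slip, p_guess) are illustrative choices of ours rather than the interface of any particular TEL system; the equations are the standard BKT update from Corbett & Anderson (1995).

def bkt_update(p_mastery, correct, p_transit, p_slip, p_guess):
    """One step of Bayesian Knowledge Tracing for a single skill.

    p_mastery -- current P(L_t): probability the student has learned the skill
    correct   -- True if the observed response was correct
    p_transit -- P(T): probability of learning the skill at this opportunity
    p_slip    -- P(S): probability of answering incorrectly despite mastery
    p_guess   -- P(G): probability of answering correctly without mastery
    """
    if correct:
        # Posterior P(L_t | correct answer)
        numerator = p_mastery * (1.0 - p_slip)
        posterior = numerator / (numerator + (1.0 - p_mastery) * p_guess)
    else:
        # Posterior P(L_t | incorrect answer)
        numerator = p_mastery * p_slip
        posterior = numerator / (numerator + (1.0 - p_mastery) * (1.0 - p_guess))
    # Account for the chance of learning between opportunities: P(L_{t+1})
    return posterior + (1.0 - posterior) * p_transit


# Example: trace a student's estimated mastery over a short answer sequence,
# starting from an initial mastery estimate P(L_0) = 0.2 (hypothetical values).
p_mastery = 0.2
for answer_correct in [False, True, True, True]:
    p_mastery = bkt_update(p_mastery, answer_correct,
                           p_transit=0.15, p_slip=0.1, p_guess=0.25)
    print(round(p_mastery, 3))

Running this trace shows the mastery estimate dropping slightly after the incorrect response and then climbing toward 1.0 as correct responses accumulate, which is the behavior our explainables aim to make visible to teachers and students.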
Explainables, often called Explorables (see explorabl.es), provide post-hoc explanations of complex machine learning algorithms (Lipton, 2017); examples include R2D3, StitchFix's algorithm pages, Mike Bostock's Visualizing Algorithms, and distill.pub, among others. While the explainables introduced in this article do not generally replicate the typical design patterns of these examples, they share a similar goal of providing an opportunity for post-hoc interpretability of BKT through visual narrative. In Lipton's (2017) terms, the prototypes shared in this article fall under the Explanation by Example approach, whereas many other explainables appear to start with Explanation by Example and then incorporate some elements of Visualization.
Our research team applied a user-centered design process to develop the initial stages of these prototypes. The designs were first developed through individual brainstorming. Next, individual ideas were explained, refined, and edited in a group brainstorming session. The explainables were narrowed down to three ideas and constructed as hand-drawn paper prototypes. After several iterations of prototyping with other members of the research team, the paper prototypes were pilot-tested with users. Participants at this stage all had undergraduate-level backgrounds in computer science, but none had experience with BKT or AI explainables. Each prototype was tested with two different users in each iteration. Users were instructed to describe any difficulty in use or understanding that they encountered while interacting with the prototype. They were also asked a series of knowledge-check questions at the end of each session to assess the efficacy of the prototype. The prototypes were then adjusted to address user feedback. After the third iteration, the paper prototypes were adapted into more streamlined digital forms.
Future work involves continuing the user-centered design process with users who do not have a computer science background, as well as evaluating these explainables for their impact on system trust, satisfaction, and decision-making.
The three BKT Explainables in development include:
The main goal of these explainables was to render the artificial intelligence algorithm approachable to non-computer scientists in the form of a playful experience. As these prototypes continue development, we plan to incorporate more interactivity and learning by doing, as well as more thorough evaluation of their impact on user knowledge and behavior.
In this article we introduced prototypes of three different explainables that introduce teachers and students using algorithmically enhanced learning environments to the Bayesian Knowledge Tracing algorithm that underlies their classroom technology. These explainables all generally fall under Lipton's (2017) description of Explanation by Example post-hoc interpretation. Going forward, we plan to measure whether these prototypes teach the BKT concepts we intend, how users engage with them, and how they affect various behaviors. As designers of explainables, we should not aim simply to impact user knowledge, but also to assist user decision-making.
We are grateful to the Williams College Science Center for providing funds for undergraduates to perform research year-round.
First authors are listed alphabetically in the byline. Iris and Young drafted the text of the article, but the prototypes included in this article were imagined, implemented, and tested by Young, Grace, Kelvin, and Tongyu with considerable input from Iris.
For attribution in academic contexts, please cite this work as
Cho, et al., "What is Bayesian Knowledge Tracing?", Proceedings of the Workshop on Visualization for AI explainability (VISxAI), 2018.
BibTeX citation
@article{cho2018whatisbayesian,
  author = {Cho, Young and Mazzarella, Grace and Tejeda, Kelvin and Zhou, Tongyu and Howley, Iris},
  title = {What is Bayesian Knowledge Tracing?},
  journal = {Proceedings of the Workshop on Visualization for AI explainability (VISxAI)},
  year = {2018},
  editor = {El-Assady, Mennatallah and Chau, Duen Horng (Polo) and Perer, Adam and Strobelt, Hendrik and Viégas, Fernanda},
  note = {http://www.cs.williams.edu/~iris/res/bkt/}
}