What is Bayesian Knowledge Tracing?

Learning Analytics systems often use Bayesian Knowledge Tracing (BKT) to predict what skills students have mastered.

Published July 12, 2018

In this article, we present three prototype Explainables for conveying useful information about BKT to our end users: teachers and students. Because these end users may not have a background in computer science, explaining BKT to them may require an additional layer of abstraction.



Why BKT?

Bayesian Knowledge Tracing (BKT) was introduced in 1995 by Corbett & Anderson as a means of modelling students' knowledge as a latent variable in technologically enhanced learning (TEL) environments. The TEL maintains an estimate of the probability that the student has learned each skill, using a model that is statistically equivalent to a 2-node dynamic Bayesian network. Decades of additional research on learner modelling followed, producing a variety of improvements to the BKT approach with considerable predictive success, including estimating individual parameters instead of skill-based parameters (Yudelson et al., 2013) and contextual estimation of the slip and guess parameters (Baker et al., 2008). BKT is used across the United States in TELs such as the Open Learning Initiative and the Open Analytics Research Service (Bassen et al., 2018).
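
To make the model concrete, the sketch below shows the standard single-skill BKT update in Python. It is a minimal illustration rather than the code of any particular TEL, and the parameter values (prior knowledge, learn, slip, and guess probabilities) are placeholders chosen only for demonstration:

  # Minimal sketch of the standard BKT update for a single skill
  # (Corbett & Anderson, 1995). Parameter values are illustrative only.
  def bkt_update(p_known, correct, p_learn=0.1, p_slip=0.1, p_guess=0.25):
      """Return the updated P(learned) after observing one answer."""
      if correct:
          # Bayes' rule: how likely is it the skill was already known,
          # given a correct answer (rather than a lucky guess)?
          evidence = p_known * (1 - p_slip) + (1 - p_known) * p_guess
          posterior = p_known * (1 - p_slip) / evidence
      else:
          # How likely is it the skill was known, given a wrong answer (a slip)?
          evidence = p_known * p_slip + (1 - p_known) * (1 - p_guess)
          posterior = p_known * p_slip / evidence
      # The student may also learn the skill on this practice opportunity.
      return posterior + (1 - posterior) * p_learn

  p = 0.3  # P(L0): prior probability the skill is already known
  for answer in [True, True, False, True, True]:
      p = bkt_update(p, answer)
      print(f"P(learned) = {p:.2f}")

Many systems report a skill as mastered once this estimate crosses a threshold (0.95 is a common choice), although the threshold itself is a design decision rather than part of the model.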

Explainables

Explainables, often called Explorables (see explorabl.es), provide post-hoc explanations of complex machine learning algorithms (Lipton, 2016); examples include R2D3, StitchFix's Algorithms Tour, Mike Bostock's Visualizing Algorithms, and the articles at distill.pub, among others. While the explainables introduced in this article do not generally replicate the typical design patterns of these examples, they share a similar goal: providing an opportunity for post-hoc interpretability of BKT through visual narrative. In Lipton's (2016) taxonomy, the prototypes shared in this article fall under the Explanation by Example approach, whereas many existing explainables appear to start with Explanation by Example and incorporate elements of Visualization.

Four different approaches for post-hoc interpretability of AI algorithms: Visualization, Explanation by Example, Local Explanation, and Generated Text Explanations.

Methodology

Our research team applied a user-centered design process to develop the initial stages of these prototypes. The designs were first developed through individual brainstorming. Next, individual ideas were explained, refined, and edited in a group brainstorming process. The explainables were narrowed down to three ideas and constructed as hand-drawn paper prototypes. After several iterations of prototyping with other members of the research team, the paper prototypes were pilot-tested with users. Participants at this stage all had undergraduate-level backgrounds in computer science, but none had experience with BKT or AI explainables. Each prototype was tested with two different users in each iteration. Users were asked to describe any difficulty in use or understanding they encountered while interacting with the prototype, and answered a series of knowledge-check questions at the end of each session to assess the efficacy of the prototype. The prototypes were then adjusted to address user feedback. After the third iteration, the paper prototypes were adapted into more streamlined digital forms.

Future work involves evaluating these prototypes, through the same user-centered design process, with users who do not have a computer science background, as well as evaluating their impact on system trust, satisfaction, and decision-making.

Kelvin and Tongyu's Alchemy prototype

The three BKT Explainables in development are:

  1. A Visual Narrative, using cooking as the example domain skill to be learned.
  2. An Interactive Alchemy metaphor to visualize the actual changes in the model's parameters and its predicted mastery levels.
  3. A BKT Road Trip Poster leveraging driving as a metaphor for the BKT parameters.

The main goal of these explainables was to render the artificial intelligence algorithm approachable to non-computer scientists in the form of a playful experience. As these prototypes continue development, we plan to incorporate more interactivity and learning by doing, as well as a more thorough evaluation of their impact on user knowledge and behavior.

Conclusions

In this article, we introduced prototypes of three different explainables that introduce teachers and students who use technologically enhanced learning environments to the Bayesian Knowledge Tracing algorithm underlying their classroom technology. These explainables all generally fall under Lipton's (2016) description of Explanation by Example post-hoc interpretation. Going forward, we plan to measure whether these prototypes teach the BKT concepts we intend, how users engage with them, and how they affect user behavior. As designers of explainables, we should aim not simply to impact user knowledge, but also to assist user decision-making.

The BKT Road Trip Poster walks through the algorithm panel by panel:

  1. At the beginning of the trip, the driver is packed and ready to go, taking with them all the expertise and knowledge they’ve gathered over the years. This Prior Knowledge is factored into BKT. The more the student knows at the start, the closer they are to their destination.
  2. In other words, there is a higher probability that the student has already learned the skill.
  3. A fork in the road! The driver is faced with their first choice: left or right? This part of the route is familiar and they choose correctly. Left.
  4. In BKT, correct answers increase the chance that the student has learned the skill.
  5. The driver has been driving for a while now, choosing the correct turn at every fork. Each Correct Choice takes them closer to their destination.
  6. Another fork! While they’re still familiar with this part of the route and know which way they’re supposed to go, the driver is tired. It’s been a long day and they’ve been driving for a while. They choose RIGHT instead of the correct LEFT turn that they meant to take. This error is called a Slip and it takes them off course from their destination.
  7. When BKT predicts that the student has made a Slip error, it assumes that the student does know the right answer, but the error still introduces uncertainty into the overall knowledge prediction. The trip will take a bit longer now.
  8. Now, they’re at an intersection. They have three choices: straight, left, or right. The driver doesn’t remember this part of the journey. They Guess, continuing straight. The journey will take even longer now. They hope the trip starts becoming familiar again so that they can get back on track.
  9. In BKT, when a student answers incorrectly, this may be either a Slip or a Guess. If the student has been consistently correct, a Slip is more likely. If the student has not shown evidence of understanding, their wrong answer was probably a Guess.
  10. Left, left, right, left… They’re on their way again! The destination is approaching and it looks like the driver does know how to get there after all! In BKT, many correct answers increase the odds that the student knows the skill, until eventually, a certain cutoff is reached, and the algorithm determines that the skill has been learned!
  11. They’ve arrived! The trip took a little longer than expected, with the accidental Slip and the Guess the driver made, but in the end, they’ve shown that they know how to get to their destination. Their Prior Knowledge of the route set them up for a good start, and every Correct Choice they made helped them overcome their small mistakes.
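
The same trip can be replayed numerically with the update rule sketched earlier. The parameter values and the 0.95 mastery cutoff below are again illustrative rather than taken from any deployed system; the point is only to show the estimate dipping after the Slip and the Guess and recovering as correct turns accumulate:

  # Replaying the road trip with BKT (illustrative parameters only).
  P_LEARN, P_SLIP, P_GUESS, MASTERY = 0.05, 0.1, 0.3, 0.95

  def update(p, correct):
      # Same Bayes-rule update as in the earlier sketch.
      if correct:
          posterior = p * (1 - P_SLIP) / (p * (1 - P_SLIP) + (1 - p) * P_GUESS)
      else:
          posterior = p * P_SLIP / (p * P_SLIP + (1 - p) * (1 - P_GUESS))
      return posterior + (1 - posterior) * P_LEARN

  # Three correct turns, a Slip, a Guess, then four more correct turns.
  trip = [True, True, True, False, False, True, True, True, True]
  p = 0.25  # Prior Knowledge: P(L0)
  for step, correct in enumerate(trip, start=1):
      p = update(p, correct)
      status = "destination reached" if p >= MASTERY else "still driving"
      print(f"turn {step}: {'correct' if correct else 'wrong'} -> P(learned) = {p:.2f} ({status})")

With these particular values the estimate only crosses the 0.95 cutoff on the final turn, mirroring the driver arriving at the end of the story; different parameter values would shift where that happens.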

Acknowledgments

We are grateful to the Williams College Science Center for providing funds for undergraduates to perform research year-round.

Author contribution

First authors are listed alphabetically in the byline. Iris and Young drafted the text of the article, but the prototypes included in this article were imagined, implemented, and tested by Young, Grace, Kelvin, and Tongyu with considerable input from Iris.

References

  1. Knowledge tracing: Modeling the acquisition of procedural knowledge
    Corbett, A.T. & Anderson, J.R., 1995.
  2. Individualized bayesian knowledge tracing models
    Yudelson, M.V., Koedinger, K.R. and Gordon, G.J., 2013.
  3. More accurate student modeling through contextual estimation of slip and guess probabilities in bayesian knowledge tracing
    Baker, R.S.J.d., Corbett, A.T. and Aleven, V., 2008.
  4. OARS: exploring instructor analytics for online learning
    Bassen, J., Howley, I., Fast, E., Mitchell, J. and Thille, C., 2018.
  5. The mythos of model interpretability
    Lipton, Z.C., 2016.

Citation

For attribution in academic contexts, please cite this work as

Cho, et al., "What is Bayesian Knowledge Tracing?", Proceedings of the Workshop on Visualization for AI explainability (VISxAI), 2018.

BibTeX citation

@article{cho2018whatisbayesian,
  author = {Cho, Young and Mazzarella, Grace and Tejeda, Kelvin and Zhou, Tongyu and Howley, Iris},
  title = {What is Bayesian Knowledge Tracing?},
  journal = {Proceedings of the Workshop on Visualization for AI explainability (VISxAI)},
  year = {2018},
  editor = {El-Assady, Mennatallah and Chau, Duen Horng (Polo) and Perer, Adam and Strobelt, Hendrik and Viégas, Fernanda},
  note = {http://www.cs.williams.edu/~iris/res/bkt/}
}