
Predicting What Students Know

An Introduction to Bayesian Knowledge Tracing

Bayesian Knowledge Tracing, or BKT, is an artificial intelligence algorithm that infers a student's current knowledge state from their answers in order to predict whether they have learned a skill.

There are four parameters involved in BKT (each with a value between 0 and 1, inclusive):
  • P(known): the probability that the student already knew a skill.
  • P(will learn): the probability that the student will learn a skill on the next practice opportunity.
  • P(slip): the probability that the student will answer incorrectly despite knowing a skill.
  • P(guess): the probability that the student will answer correctly despite not knowing a skill.
Every time the student answers a question, our BKT algorithm calculates P(learned), the probability that the student has learned the skill they are working on, using the values of these parameters. The formula for P(learned) depends on whether their response was correct.

First, we compute the conditional probability that the student learned the skill previously (at time n-1), based on whether they answered the current question (at time n) correctly or incorrectly:

P(learned at time n-1 | correct) = [P(known) × (1 − P(slip))] / [P(known) × (1 − P(slip)) + (1 − P(known)) × P(guess)]

P(learned at time n-1 | wrong) = [P(known) × P(slip)] / [P(known) × P(slip) + (1 − P(known)) × (1 − P(guess))]

Then, we use the result of our first calculation to compute the conditional probability that the student has learned the skill now (at time n):

P(learned at time n | answer) = P(learned at time n-1 | answer) + [1 − P(learned at time n-1 | answer)] × P(will learn)


For the next question, we use P(learned) as the new value of P(known). Once P(known) ≥ 0.95, we say that the student has achieved mastery.
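
To make the update rule concrete, here is a minimal Python sketch of a single BKT step. The function and variable names (bkt_update, p_known, and so on) are our own labels for the quantities defined above, not part of any particular library:

def bkt_update(p_known, p_will_learn, p_slip, p_guess, correct):
    """Return P(learned) after observing one answer (correct=True/False)."""
    if correct:
        # P(learned at time n-1 | correct)
        evidence = p_known * (1 - p_slip)
        p_prev = evidence / (evidence + (1 - p_known) * p_guess)
    else:
        # P(learned at time n-1 | wrong)
        evidence = p_known * p_slip
        p_prev = evidence / (evidence + (1 - p_known) * (1 - p_guess))
    # P(learned at time n): either the skill was already learned,
    # or it was learned on this practice opportunity.
    return p_prev + (1 - p_prev) * p_will_learn

The value this returns becomes the P(known) fed into the next call, and practice stops once it reaches 0.95.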

Now that you’ve had a chance to learn about the four parameters, here’s a tool that can help you visualize the relationships between them and explore how each one influences the probability calculations underlying BKT. We'll be modeling the system with a hot air balloon, using its height as a measure of mastery.


Let's begin!


  1. Find two different parameter combinations that will result in mastery if the student answers correctly. Hint: make P(learned if correct) ≥ 0.95 and press "answer correct" to verify your results.
    • Recall that P(learned) becomes the new value for P(known).
    • Explore what adjustments you have to make depending on P(known). Try a higher P(known) and a lower P(known) and compare your results.
  2. What happens to P(learned if correct) and P(learned if wrong) if P(guess) and/or P(slip) exceeds 0.5?
    • Whenever P(guess) + P(slip) > 1 (for example, when both exceed 0.5), P(learned) is higher if the student answers incorrectly than if they answer correctly; see the worked example after this list. This is why P(guess) is typically bounded at 0.3 and P(slip) at 0.1. [3]
    • Does it make sense why the balloon flips now?
  3. What happens to P(learned) if the student answers incorrectly? Hint: compare P(learned if wrong) with P(known) (i.e., your previous P(learned)).
    • P(learned) is generally expected to increase, because BKT treats every answer, wrong or right, as a learning opportunity that brings the student one step closer to mastery.
    • However, can you think of a situation where P(learned) might decrease with a wrong answer? Feel free to try modeling different scenarios with the sliders using your knowledge of the BKT parameters.
  4. Keep exploring! Can you find any other flaws or interesting characteristics of BKT?
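
As a worked example for question 2, here is what the bkt_update sketch from above produces when P(slip) + P(guess) > 1 (the specific numbers are only illustrative):

# P(known) = 0.5, P(will learn) = 0.2, P(slip) = P(guess) = 0.6
print(bkt_update(0.5, 0.2, 0.6, 0.6, correct=True))   # P(learned if correct) ≈ 0.52
print(bkt_update(0.5, 0.2, 0.6, 0.6, correct=False))  # P(learned if wrong) ≈ 0.68

A wrong answer now raises the estimate more than a correct one, which is what the flipped balloon is signaling.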

P(known): controls the balloon's height.

P(will learn): changes the storminess and number of clouds to represent learning difficulty.

P(slip): certain values will flip the balloon.

P(guess): certain values will flip the balloon.

Remember, P(learned) depends on whether the student answers correctly, and this probability becomes the new value for P(known). Simulate student responses by choosing an answer button below.

P(learned if correct):

P(learned if wrong):

Hint: hover over the parameters to see how they impact the simulator.
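
If you would rather script the same exploration, a small loop like the one below (again using the hypothetical bkt_update sketch from above) plays the role of pressing the answer buttons:

# Illustrative starting values; adjust them the way you would move the sliders.
p_known, p_will_learn, p_slip, p_guess = 0.3, 0.2, 0.1, 0.2

answers = [True, True, False, True, True]  # simulated student responses
for n, correct in enumerate(answers, start=1):
    p_known = bkt_update(p_known, p_will_learn, p_slip, p_guess, correct)
    print(f"after answer {n}: P(learned) = {p_known:.3f}")
    if p_known >= 0.95:
        print("mastery reached")
        break

With these particular numbers, the wrong third answer actually pulls the estimate back down (compare with question 3 above), and mastery is reached on the fifth answer.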