In another topic someone wanted to know the formulas used when calculating new intervals after early reviews. I think many users (my former self included) aren’t aware of how reviewing a card early or late affects future intervals and, as a consequence, the number of cards one can study in the long run.
“Days Late” is the difference between the actual review date and the scheduled due date. So “25” means the card was studied 25 days after it was due, or 125 days after the last review. “-100” means 100 days before the card would become due, which in this case is the same date as the last review.
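To make the axis concrete, here’s a small sketch of that definition (the dates are invented for illustration), for a card with a 100-day interval:

```python
from datetime import date

def days_late(actual_review: date, due: date) -> int:
    """Difference between the actual review date and the scheduled due date.
    Positive = reviewed late, negative = reviewed early."""
    return (actual_review - due).days

# Hypothetical card: last reviewed 2024-01-01 with a 100-day interval,
# so it comes due on 2024-04-10.
last_review = date(2024, 1, 1)
due = date(2024, 4, 10)

print(days_late(date(2024, 5, 5), due))  # studied 25 days after it was due -> 25
print(days_late(last_review, due))       # studied on the last review date -> -100
```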
Things I find noteworthy:
The Easy curve is discontinuous at 0, meaning reviewing a card only one day early will lead to a disproportionately shorter easy interval. In fact, rating early-reviewed cards easy, compared to good, hardly makes a difference (if we ignore the ease boost).
Reviewing a card early can result in a stagnating interval, if rated good, or even a decreasing interval, if rated hard. That is, compared to the last scheduled interval, not the actual elapsed interval.
While a card remains overdue, the hard and good intervals grow at about the same rate. That means, while the difference between rating a due or slightly overdue card hard and good is striking, it dwindles for long overdue cards.
The good interval for overdue cards increases at a rather low rate, namely card_ease / 2, i.e. only half the rate at which the interval increases up to the due date. That means that if you regularly study your cards late, their intervals won’t grow as fast and you will end up doing more work in the long run.
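The observations above can be reproduced with a rough sketch of the late-review formulas as I understand them (hard ignores the delay, good adds half of it, easy all of it; the hard factor 1.2 and easy bonus 1.3 are assumed at their defaults, and fuzz, the interval modifier, and the maximum interval are omitted). This is a simplification, not Anki’s actual code:

```python
def next_intervals(ivl: int, days_late: int, ease: float = 2.5):
    """Sketch of the next hard/good/easy intervals for a card answered
    `days_late` days on or after its due date, with current interval `ivl`."""
    assert days_late >= 0
    hard = max(ivl + 1, int(ivl * 1.2))                        # ignores the delay
    good = max(hard + 1, int((ivl + days_late // 2) * ease))   # adds half the delay
    easy = max(good + 1, int((ivl + days_late) * ease * 1.3))  # adds the full delay
    return hard, good, easy

print(next_intervals(100, 0))   # due today -> (120, 250, 325)
print(next_intervals(100, 50))  # 50 days overdue -> (120, 312, 487)
```

Note how the good interval grows by ease / 2 per late day (250 → 312 over 50 days), while hard does not grow at all, matching the curves discussed above.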
As you’re probably aware, my priority when implementing this in v3 was matching the existing v2 behaviour, so that the new code could be checked by the existing tests. v3 has been “baking” for a few months since then, and we’re now in a better position to be thinking about potentially changing this.
The main concern with early reviews is idempotency. It’s not uncommon for users to use a filtered deck to review cards early - sometimes once a day, and sometimes they may even review the same cards multiple times in a single day. Old Anki versions did not handle this well, and each time the card was reviewed, the interval would grow larger. The current algorithm was mainly trying to address that.
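To illustrate the idempotency problem with a hypothetical example (the functions and numbers are invented, this is not Anki’s code): if each early review simply multiplies the current interval, repeated cramming compounds, whereas deriving the new interval from the last scheduled interval gives the same result no matter how often the card is reviewed early:

```python
def compounding(ivl: int, reviews: int, ease: float = 2.5) -> int:
    # Naive approach: every early review multiplies the current interval,
    # so the interval grows with each pass through the filtered deck.
    for _ in range(reviews):
        ivl = int(ivl * ease)
    return ivl

def idempotent(scheduled_ivl: int, reviews: int, ease: float = 2.5) -> int:
    # Idempotent approach: always derive the new interval from the last
    # scheduled interval, so repeating the review changes nothing.
    return int(scheduled_ivl * ease)

print(compounding(10, 3))  # 10 -> 25 -> 62 -> 155: grows on every pass
print(idempotent(10, 3))   # 25, regardless of how often it is crammed
```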
Suggestions on how it could be done better are welcome.
I’m not sure I can make a well-founded suggestion at this point, but maybe as a basis for future discussions, I’d like to know more about the underlying assumptions.
If the goal is idempotency, that seems to indicate that the spacing effect is the only consideration here. However, while Anki users are usually fans of spaced repetition, I don’t think many would claim that massed repetition has no effect at all.
Let’s say there’s a card I would be able to recall after 100 days, but forget after 101 days. If I study that card every day for the next 10 days, surely the interval over which I would be able to recall it would increase?
On the other hand, if we assume the number of repetitions really is completely negligible for determining the new interval, there is no point in penalising late reviews. Instead, only the number of elapsed days should matter.
As a final note, since Anki itself doesn’t enforce an exact due date, but a range of days (fuzz), it doesn’t seem right to use a drastically different algorithm just because a card was studied plus or minus a few days. Visually speaking, the curves should be as smooth as possible.
Of course, we wouldn’t want to use some complex and opaque mathematical function, but at least continuity should be guaranteed.
IIRC, Anki originally fully included any extra delay in answering in the next interval calculation. I think in 2.0 it was reduced to full/half/quarter, and more recently hard has stopped adding any delay at all. It’s mainly trying to prevent people from just completely resetting their cards, as seems to be a common desire when returning from a break: Due times after a break - Frequently Asked Questions
By how much, and how reliably could we predict it? I won’t argue it’s completely negligible, but would trying to account for short-term cramming yield enough efficiency gains to be worth it, without causing unwanted behaviour when cards are repeatedly reviewed? In the past, users could send cards with short delays years into the future just by repeatedly cramming them, and we definitely want to avoid that.
Yep, in full agreement that the transition should be smooth.
The lack of efficiency gains is a convincing argument. I would expect other parameters like ease and lapses of cards that got crammed a lot to be off-kilter anyway, so optimising the new interval by a few days would not be a priority.
But just to leave the door open for different handling in the future, I’d argue that it should be enough to ensure idempotency for same-day reviews. As long as the interval isn’t growing by more than the number of elapsed days, I don’t really see how it could get out of hand.
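As a sketch of that safeguard (a hypothetical helper, not Anki code): capping growth at the number of elapsed days makes same-day reviews idempotent by construction, since with zero elapsed days the interval cannot grow at all:

```python
def clamped_interval(proposed: int, last_ivl: int, elapsed_days: int) -> int:
    # Cap growth at the number of days actually elapsed since the last
    # review, so repeated cramming cannot inflate the interval: with
    # elapsed_days == 0 the interval can never grow.
    return min(proposed, last_ivl + elapsed_days)

print(clamped_interval(250, 100, 10))  # early review: capped at 110
print(clamped_interval(250, 100, 0))   # same-day review: stays at 100
```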