Number of cards studied affecting card retrievability

I’m not sure exactly how this would be implemented, but I wonder whether the number of cards you’ve studied since the last time you saw a card should affect its retrievability. Subjectively, I know that when I go through periods where I’m studying much more, my retrievability on each individual card goes down.

I think that makes sense just thinking it through, too. Say you set your whole collection to 1 new card per day and 10 reviews per day: I can almost guarantee you’ll get cards right at a much higher rate than the algorithm predicts, compared to what you’d see if you set those two numbers much higher.

I’ll bet the FSRS algorithm would benefit from taking this into account.


Similar ideas have been proposed in the past. My understanding is that it’s unlikely to be implemented, owing to a combination of feasibility concerns with Anki’s native architecture and the difficulty of quantifying such a thing.

Not to mention that not all collections are equal, which means the volume of cards may matter less than content, difficulty, etc. That’s not to say such modeling is impossible, but it falls into the same realm as judging difficulty by response time and other similar proposals that are unlikely to see the light of day, for a few reasons.

You are speaking of interference, and it is a real thing. I have had this on my mind for quite a while now.

I suppose it is very difficult to model. I am not sure how one could model the influence of both a collection’s size and its rate of growth on the retention of learned cards (i.e., interference).

As I’ve said before, I’ve tried incorporating the following two things into FSRS:

  1. The number of cards reviewed today before the current card
  2. The time of the day (in the 24-hour format)

Then I would calculate how tired the user was based on this information. I’ve tried multiple different implementations; none improved accuracy. Maybe I made a mistake, or maybe there are better ways to use this data, but if someone finds a better way, it won’t be me.
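
For what it’s worth, here is a minimal sketch of how those two features might be computed from review timestamps. This is not the implementation that was actually tested; the function name, the 4 AM day rollover, and the use of UTC are all assumptions for illustration:

```python
from datetime import datetime, timedelta, timezone

def fatigue_features(review_times_ms: list[int], current_ms: int,
                     day_start_hour: int = 4) -> tuple[int, float]:
    """Return (number of reviews earlier "today", hour of the day).

    Timestamps are epoch milliseconds, matching Anki's revlog.
    Anki's day conventionally rolls over at 4 AM; UTC is used here
    for simplicity, where a real version would use local time.
    """
    now = datetime.fromtimestamp(current_ms / 1000, tz=timezone.utc)
    day_start = now.replace(hour=day_start_hour, minute=0,
                            second=0, microsecond=0)
    if now < day_start:
        day_start -= timedelta(days=1)
    start_ms = int(day_start.timestamp() * 1000)
    reviews_today = sum(1 for t in review_times_ms
                        if start_ms <= t < current_ms)
    hour_of_day = now.hour + now.minute / 60
    return reviews_today, hour_of_day
```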

There could be more complicated versions that would be hard to quantify, but the most obvious thing to try first would be easy. Anki keeps track of the time of every review, so all you need to quantify is the number of reviews since the last review of this card.

Now, that’s easy to quantify, but maybe not easy to implement; I don’t know enough about Anki’s architecture. Every card would need that feature incremented after every review, so maybe you’d have to delay updating it until the next sync, or until the day rolls over.
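
One way around that: since Anki already logs every review with a millisecond timestamp in its revlog table, the count could be computed on demand instead of being stored and incremented per card. A hedged sketch, assuming direct read access to the collection database (the revlog table and its id/cid columns are Anki’s real schema; the function itself is hypothetical):

```python
import sqlite3

def reviews_since_last_seen(db_path: str, card_id: int) -> int | None:
    """Count reviews of *other* cards logged after this card's most
    recent review. Returns None if the card has never been reviewed.

    In Anki's revlog table, `id` is the review timestamp in epoch
    milliseconds and `cid` is the card id.
    """
    con = sqlite3.connect(db_path)
    try:
        (last_seen_ms,) = con.execute(
            "SELECT MAX(id) FROM revlog WHERE cid = ?", (card_id,)
        ).fetchone()
        if last_seen_ms is None:
            return None
        (count,) = con.execute(
            "SELECT COUNT(*) FROM revlog WHERE id > ? AND cid != ?",
            (last_seen_ms, card_id),
        ).fetchone()
        return count
    finally:
        con.close()
```

Computing it lazily like this would sidestep the sync/update problem, at the cost of one query per card shown.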

Then, that number would simply be part of the Retrievability equation.
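
To make that concrete, a hedged sketch of one way the count could enter the formula. The baseline below is the FSRS-4.5 forgetting curve, R(t, S) = (1 + 19/81 · t/S)^(−0.5); the interference parameter k, and the choice to have it shrink stability, are pure assumptions for illustration:

```python
FACTOR = 19 / 81  # FSRS-4.5 forgetting-curve constants
DECAY = -0.5

def retrievability(elapsed_days: float, stability: float,
                   interleaved_reviews: int = 0, k: float = 0.0) -> float:
    """Retrievability with an optional interference penalty.

    `k` would be a fitted parameter; k = 0 recovers plain FSRS.
    Stability is effectively shrunk as more other cards are
    reviewed in between, lowering predicted recall.
    """
    effective_stability = stability / (1 + k * interleaved_reviews)
    return (1 + FACTOR * elapsed_days / effective_stability) ** DECAY
```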


Even then, how would you weight a given number of reviews? Are all of, say, 100 reviews considered equal? Surely that can’t be the case.

I’ll play the skeptic here and say that it’s a nice idea, but I don’t see it being borne out. As Expertium noted above, several implementations have been tested, and none improved accuracy.

What they tested isn’t remotely close to what I’m talking about. It’s scratching at the same itch, but it’s not the same metric.

I think that would be the case, yeah, at least at first. Why start with the most complicated version? Just weight the reviews equally and see what happens. You can try weighting them afterward to see if that improves things, but even that seems like too much to me.
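
In code terms, equal weighting is just a count, and a per-review weight can be bolted on later without changing the interface. A hypothetical sketch (the weighting idea itself is speculation):

```python
from typing import Callable, Iterable, Optional

def interference_load(reviews: Iterable[object],
                      weight_fn: Optional[Callable[[object], float]] = None
                      ) -> float:
    """Total interference from reviews interleaved since a card was last seen.

    With weight_fn=None every review counts as 1.0 (the simple version
    suggested above). A weight_fn could later discount reviews by, say,
    how easy they were or how unrelated they are to the current card.
    """
    if weight_fn is None:
        return float(sum(1 for _ in reviews))
    return sum(weight_fn(r) for r in reviews)
```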

Yeah, give it a shot, I suppose.
