Difficulty not updated correctly

If I understand the FSRS algorithm correctly (per the wiki at github.com/open-spaced-repetition/fsrs4anki/wiki/The-Algorithm), a card for which I always press “Good” should keep the default difficulty as its difficulty rating (D_0(3) = w_4 in the parameters).

However, this is not the case for many cards in my collection. For example, here are my current parameters:
0.1382, 1.9344, 6.5874, 14.9103, 5.1296, 1.1898, 0.7920, 0.1162, 1.6373, 0.0543, 0.9845, 2.1698, 0.0186, 0.3497, 1.5463, 0.1390, 2.8237
In particular, w_4 = 5.1296 which means the default difficulty is 51.296%.
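To make the claim concrete, here is a small sketch of the difficulty update as I read it from the wiki (v4 formulas; function names are mine, and clamping of D to [1, 10] is omitted for brevity):

```python
# First eight of my current parameters; only w_4..w_7 matter for difficulty.
w = [0.1382, 1.9344, 6.5874, 14.9103, 5.1296, 1.1898, 0.7920, 0.1162]

def d0(grade):
    # initial difficulty: D_0(G) = w_4 - (G - 3) * w_5, so D_0(3) = w_4
    return w[4] - (grade - 3) * w[5]

def next_d(d, grade):
    # linear damping, then mean reversion toward D_0(3)
    d_new = d - w[6] * (grade - 3)
    return w[7] * d0(3) + (1 - w[7]) * d_new

d = d0(3)              # first rating "Good": D = w_4 = 5.1296
for _ in range(10):
    d = next_d(d, 3)   # ten more "Good" ratings
print(d)               # stays at w_4: "Good" should leave D at the default
```

If this reading is right, a pure-“Good” history should never move D away from w_4, which is why the 72% shown surprised me.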

And here is one of my cards:


After optimising the parameters today, FSRS assigns a difficulty of 72% to this card! Why is that?


(@L.M.Sherlock: they appear to be using the 24.04 beta)

Yes, sorry! I forgot to mention that I am using 24.04rc2 on macOS (the apple build).

And here is a similar mention on this forum: forums.ankiweb.net/t/anki-24-04-beta-rc/41792/104.

I have checked some other cards in my collection. Lots of cards for which I have never pressed “Again” or “Hard” have unusually high (>70%) difficulty ratings.

Could you share your collection with me? I need to reproduce this bug.


Yes, of course! How may I share it with you?

Google Drive is OK.

There you go: https://drive.google.com/file/d/1RFLyvMuniqlGKRJXUIbk-wE5NIKslLUd/view?usp=drive_link

Thank you very much in advance for taking the time to investigate this issue. Here is some additional information:

  • Before switching to FSRS last month, I used the default SM-2 with several learning and relearning steps longer than 1d.

  • The current parameters are optimised by ignoring reviews before 01/03/2024.

  • I have optimised and rescheduled three times since switching to FSRS: 27/02, 09/03, and today 22/03.

I guess that’s the problem.

This seems to be the case. Thank you very much for pointing me to that!

So if I understand correctly, if we use the “Ignore reviews before” feature to compute new parameters and save, FSRS also updates all the cards with the new parameters using only the reviews that are not ignored, which is the intended behaviour from the pull request you linked to. Am I right?

Actually, what I wanted was for the new parameters to be computed with some old reviews ignored, but for the S, R, D of the cards to be updated using these new parameters and all the reviews.

Strangely enough, I found a way to achieve this in the current implementation: first choose a date, then click Optimize to get new parameters, then clear the date, and finally click Save. I am not sure whether this is also intentional behaviour.

In addition, even after these steps, the difficulty is still not updated correctly. For the same parameters
0.1382, 1.9344, 6.5874, 14.9103, 5.1296, 1.1898, 0.7920, 0.1162, 1.6373, 0.0543, 0.9845, 2.1698, 0.0186, 0.3497, 1.5463, 0.1390, 2.8237
as above, the same card now shows


The difficulty is 46% instead of 51%.

first choose a date, then click Optimize to get new parameters, then clear the date, and finally click Save. I am not sure if this is also an intentional behaviour.

Using the same parameters, can you press Evaluate with the date filled and cleared respectively and share the results?

The difficulty is 46% instead of 51%.

d = 46% is expected for these parameters and review history.

The conversion formula is

d = (D - 1)/9

where D is the difficulty on a scale from 1 to 10 and d is the value displayed as a percentage.
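As a quick sanity check of that conversion against the parameters above (the helper name is mine):

```python
def difficulty_percent(D):
    # d = (D - 1) / 9, displayed as a percentage in Anki's card info
    return (D - 1) / 9 * 100

# w_4 = 5.1296 maps to roughly 45.9%, i.e. the 46% shown on the card,
# not the 51.296% one might expect from reading w_4 directly.
print(round(difficulty_percent(5.1296)))
```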

Also, what was the reason for choosing to ignore some of the reviews during optimization? If the reason for ignoring the reviews is not good enough, it should not be a surprise that the algorithm works less effectively when some reviews are ignored.

With the date filled:
Log loss: 0.4359, RMSE(bins): 3.72%

With the date cleared:
Log loss: 0.3660, RMSE(bins): 7.58%

That explains it. Thank you so much. Maybe we can add this information into the wiki?

If I use all reviews, I will get the following parameters:
0.1865, 1.5578, 4.4790, 22.3363, 5.0529, 1.4993, 1.1636, 0.0000, 1.4973, 0.1206, 0.8117, 2.1186, 0.1380, 0.4419, 1.7362, 0.0543, 4.5750
As a result, half of the cards get a difficulty of >95% (see image below); their intervals grow ridiculously slowly even for a desired retention of 0.90. In particular, w_7 is optimised to 0 (or sometimes very close to 0, say 0.0001), which means the difficulty of cards with high difficulty never goes down unless I press “Easy”.

I have a feeling that this is because, before switching to FSRS, I had multiple (re)learning steps longer than 1d. So I tried ignoring reviews from before the switch to FSRS, and the difficulty distribution makes much more sense:

If I use all reviews, I will get the following parameters:
0.1865, 1.5578, 4.4790, 22.3363, 5.0529, 1.4993, 1.1636, 0.0000, 1.4973, 0.1206, 0.8117, 2.1186, 0.1380, 0.4419, 1.7362, 0.0543, 4.5750

What do you get after pressing Evaluate with these parameters with the date cleared?

Also, since when are you studying this deck? If it was significantly before 01/03/2024 (which I assume is 1st March, not 3rd January), then I don’t think that you should ignore those reviews.

I am not sure what’s the best way to deal with the issue. It is not good to ignore most of your review logs. But, at the same time, FSRS is unable to produce correct memory states when asked to use all the revlogs.

I have one suggestion though. You can try to split the deck based on the subjective difficulty (not FSRS difficulty) of the cards and use different presets (and thus, FSRS parameters) for them. Perhaps, FSRS would be able to calculate memory states more accurately in that case.

The calculation of difficulty is the least accurate component of FSRS, and this case is clearly a manifestation of that.

Thank you for your reply!

For the parameters optimised with all the reviews:
Log loss: 0.3416, RMSE(bins): 3.40%
So they are indeed much better in terms of RMSE(bins), but only marginally better in terms of log loss, compared to the parameters obtained by ignoring reviews.

I have been studying this deck since 09/2023; I have nearly 60000 reviews before 1st March, and 11000 since 1st March. Some cards are reviewed several times during the same day, so the number of revlogs that can be used to optimise the parameters is around 48000.

Since the subjective difficulty of the deck is quite homogeneous, it would be difficult to split it into two. But thanks for your suggestion, anyway.

I learned that the modelling of difficulty is the weak point of FSRS from this discussion: https://github.com/open-spaced-repetition/fsrs4anki/issues/352
Do you know whether there is still ongoing research on this topic? I would very much enjoy playing with the formulae to see if I can come across something.

I don’t think that there is any active research going on in that area. Basically, we are fed up with trying different ideas and finding that none of them actually works. You are welcome to play with the formulae, though.

My stats seem similar. Multiple cards that I’ve never lapsed with a difficulty of 96% or similar. I’ve been using the full deck history with a desired retention of 90% and have about half my cards with a difficulty of 95%+. I’m not sure I understand what’s going on here.

Do you have any suggestions for how to split the deck? Is there a good proxy for subjective difficulty I could look for (like ease) or do I just need to go through every card?

Well, I have tried several dozen ways to improve difficulty, from obvious ones, such as making the values that correspond to each button optimizable (instead of Again = 1, Hard = 2, etc.), to not-so-obvious ones, such as a completely new definition of difficulty. Most of the time, I got no improvement; the new definition improved RMSE by 1.5% in relative terms. And I don’t mean “from 5% to 3.5%”, I mean “from 5% to 4.925%”. I would recommend that you don’t bother with it.

Hey,

I’m using version 24.04.1 and also have a lot of cards from before 2024, but is this really an explanation for the behaviour here?

I got this card right (Good) 7 times in a row, but the difficulty is still at 100% and the intervals are only increasing by 1 day. This seems really wrong.
Is this a behaviour I could also fix by rescheduling the cards on change?

My current FSRS parameters in this deck are:

0.2313, 0.3450, 0.6285, 11.2930, 5.3363, 1.5041, 0.9484, 0.0023, 1.3183, 0.0561, 0.7058, 2.0767, 0.1342, 0.2792, 1.4445, 0.2842, 3.1725

Log loss: 0.4535, RMSE(bins): 4.45%. Smaller numbers indicate a better fit to your review history.

the desired retention is 90%

thanks in advance mates

This parameter determines how much difficulty “reverts back” to the default value (which is another parameter) when you press “Good”. FSRS determined that this value must be very low for you, in other words, FSRS doesn’t think that difficulty should revert to the default value in your case. There isn’t really anything that you can or should do about it.
