Clarify what optimal retention means

PR is welcome.

This has been implemented in the latest release of FSRS Helper. You can check for add-on updates to install it.

Btw, I also opened this issue: Integration of the Helper add-on stats


Is FSRS using the actual time one spends on a card to calculate intervals, or only the minimum retention?
The term workload gets thrown around a lot, but it’s rarely explained how it is computed in practice.

The time spent reviewing cards is used for calculating the minimum recommended retention, yes. Specifically, these values (in seconds):

  1. How much time you spend on a card during the first review, when the card is new
  2. For subsequent reviews, how much time you spend on a card when you click Again/Hard/Good/Easy (four values, one per button)
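As a sketch of how such per-button times might be aggregated from a review log (the field names and numbers here are purely illustrative, not the actual FSRS Helper schema):

```python
from statistics import median

# Hypothetical review log as (button, seconds) pairs; 1=Again, 2=Hard, 3=Good, 4=Easy.
review_log = [(1, 15.2), (3, 6.1), (3, 5.8), (2, 9.0), (4, 3.2), (3, 7.0)]

# Group answer times by button, then summarize each group
times_by_button = {}
for button, seconds in review_log:
    times_by_button.setdefault(button, []).append(seconds)

per_button_time = {b: median(ts) for b, ts in times_by_button.items()}
print(per_button_time)
```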

Oh no, that is terrible news! And it really isn’t communicated well enough.
Can I reset my cards’ timing data…?

I have two modi operandi for reviewing: as fast as possible when I have a dedicated study session, and “keep reviews open and do them during idle times at work”.
It seems that is not a good idea anymore…

Edit: But the entire scheduling is still independent, right?

At least the manual is very clear there, but I don’t know if it was updated in that regard…
“Anki monitors how long it takes you to answer each card, so that it can show you how long was spent studying each day. The time taken does not influence scheduling”.

Why do you think that affects anything? It’s only bad if you leave a card open and only start looking at it after some time. Deck Options - Anki Manual

Either way, I am not sure why something like that should spoil this feature unless it is poorly designed to begin with.


One important point is missing in the manual.
(emphasis mine)
In Anki:

Stop timer on answer
Whether to stop the timer when the answer is revealed. This doesn’t affect statistics.

In the manual: Deck Options - Anki Manual
Stop timer on answer: whether the timer should keep running when you show the answer.


FSRS only uses interval lengths and grades, nothing else. Review timings are used only for the simulations in “Compute minimum recommended retention”.

That’s exactly what I’m doing. I leave Anki in whatever state it is in and go back to work. I don’t care whether it’s showing a card right now or not…

@Expertium Well, that’s good, although it does mess up the minimum recommended retention for me and makes this feature literally unusable.

As a recommendation: do not use time as the basis of workload; use the number of reviews instead.


How should a function that tries to reduce workload (less time, more knowledge) do that without using the user’s time spent?
In your case, Maximum answer seconds may help.


How about fewer reviews with more knowledge? I don’t see your point, besides being fixated on an arbitrary definition.

The amount of time spent on successful reviews is generally much lower than the amount of time spent on a lapse, when you press “Again”. And it also varies between Hard/Good/Easy. For most people, the pattern is like this: time(Again)>time(Hard)>time(Good)>time(Easy).
Time is important. Not taking it into account would make this feature less accurate, at least for most people.
I do have an idea for how to improve it, though: use the median time instead of the mean. The median is not sensitive to outliers, unlike the mean. So if you have times like this:
10 s, 11 s, 12 s, 13 s, 200 s.
The mean would be 49.2 s, but the median would be 12 s. I’ll suggest this to LMSherlock.
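To make the contrast concrete, here is that same example using Python’s standard library (nothing FSRS-specific, just the numbers from above):

```python
from statistics import mean, median

# Four normal reviews plus one outlier (a card left open for a while)
times = [10, 11, 12, 13, 200]

print(mean(times))    # 49.2
print(median(times))  # 12
```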


Although time is important, even the quickest review takes (or should take) effort (and can be undone and redone). So maybe something could be added to it.

Also, I hope median calculation is not too slow.

I am not sure if your assumption of time(Again) > time(Hard) holds. It is way easier to identify a forgotten card than to recall a hard-to-recall card.

In any case, I think this “feature” would benefit from pre-defined times instead of using user data. There is way too much noise in any of the timings. The most basic case would be Chinese characters: if a character has 20 brush strokes, a review will take longer than for a character with 2 strokes, even if the learner has perfect retention for both.

Maybe development needs to slow down a bit instead of pushing more and more half-baked features :confused:

This may be the most half-baked feature, and it has always been so. It is marked “(experimental)”.

There is a beta branch for this kind of thing, isn’t there?

No need to worry, it only takes a tiny fraction of the total amount of time required to run the simulations.

No, that is a bad idea, because then the calculations won’t be personalized for each user based on their real data.

If you don’t find this feature to be useful, just don’t use it.


You can base it on the user’s number of reviews, taken from their statistics. This will benefit the user far more, because the noise-to-signal ratio in the timings is pure hell.

As I said:

  1. The average/median amount of time spent on reviews is not the same for different buttons
  2. If you have to do fewer reviews but spend more time studying as a result, that’s undesirable

Then use fixed times per button that reflect your hopefully well-researched assumptions about their relative durations, e.g. Hard: 5 s, Again: 5 s, Good: 2 s, Easy: 1 s.
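A minimal sketch of that suggestion: estimate workload from answer-button counts weighted by fixed per-button times, ignoring measured timings entirely. The weights are the illustrative values from the post, not researched constants:

```python
# Fixed per-button times in seconds (illustrative values, not researched constants)
FIXED_TIME = {"Again": 5, "Hard": 5, "Good": 2, "Easy": 1}

def estimated_workload(review_counts):
    """Estimate total study time (in seconds) from button counts alone."""
    return sum(FIXED_TIME[button] * n for button, n in review_counts.items())

# e.g. a day with 20 Again, 30 Hard, 100 Good, and 10 Easy presses
print(estimated_workload({"Again": 20, "Hard": 30, "Good": 100, "Easy": 10}))  # 460
```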

This is a hundred times better than using messy data.
People who get a minimum desired retention way below their actual minimum will soon despise the algorithm. For reference: my minimum retention is suggested at below 0.75, which seems way too low… (which is actually what brought me to this thread :D)