No reason for it to be. The data is the data. How many you got right divided by how many you studied is pretty straightforward. We've had Mature retention as a stat long before FSRS.
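For what it's worth, here's a rough sketch of that calculation in Python. The counts are made up for illustration, and the "answered Hard/Good/Easy counts as correct" reading is just how Anki's retention stat is usually understood:

```python
# Hypothetical review counts, for illustration only.
mature_correct = 180  # mature reviews answered Hard/Good/Easy in the period
mature_total = 200    # all mature reviews in the period

retention = mature_correct / mature_total
print(f"Mature retention: {retention:.1%}")  # Mature retention: 90.0%
```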
Are you suggesting that we look at retention to find out how well FSRS is performing? I said misleading because that is what I thought. Retention depends on the SM2 settings you use, or on the desired retention you set for FSRS. It's not a good way of comparing the schedulers.
I haven’t suggested any sort of a head-to-head algorithm comparison. But retention has been and remains a user’s best way to assess how well the tools are performing for them.
Working out whether you’re better or worse off than when you switched is easier to do on a grander scale with retention – as opposed to a smaller anecdotal scale. The figures wouldn’t say anything about how one algorithm stacks up against another across the board, but they can say a lot about how things are going for this user right now. And that’s the only thing I’m concerning myself with here. There’s not much comfort in hearing an algorithm is “better” when it feels like it’s not working for you.
I was talking about situations where the algorithm suddenly makes me review at a much larger interval than the previous one. You omitted that part in the screenshot you referred to.
And I confirm that in these situations, yes, I forgot most of the cards; the spacing is far too large.
I installed Anki on my PC and set it to English. I didn’t quite understand which statistics you need.
If that's what you're talking about, then I don't see anything unusual in the interval tripling over two passing reviews.
The jump in the interval from 10 days to 1.2 months is strange, though. It could be explained by the parameters having been optimized between those reviews.
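For reference, the rough arithmetic on that jump, taking a month as about 30 days (only an approximation of how the interval is displayed):

```python
previous_interval = 10   # days
new_interval = 1.2 * 30  # "1.2 months", assuming a month is roughly 30 days

print(f"Growth factor: {new_interval / previous_interval:.1f}x")  # Growth factor: 3.6x
```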
From your Anki Desktop install, the Card Info dialog will show the full details we'd need to see.