In the latest simulation, a new sort order, PSG_desc, performs better than retrievability_desc in seconds_per_remembered_card, though it is slightly worse than difficulty_asc. Unsurprisingly, maintaining a constant desired retention is still the forte of retrievability_desc, but that's not a very valuable metric.
Now, one issue that was raised with difficulty_asc is that the sorting doesn't change with time, since difficulty only changes when cards are graded; even for a very old backlog, the order stays the same as before. On the other hand, the sorting with retrievability_desc is more dynamic, but comes with a slightly worse seconds_per_remembered_card. PSG_desc combines the strong points of both:
- The PSG value changes with time.
- It performs really well in seconds_per_remembered_card, almost as good as difficulty_asc.
Some aspersions have been cast on seconds_per_remembered_card, or on the calculation of total_time, so I'd also point out that when it comes to total_remembered, PSG_desc performs better than all other sort orders. (Note: total_remembered is the amount you remember when the simulation ends, i.e. when you've finished the twenty thousand is:new cards.)
What is PSG?
PSG stands for “Potential Stability Gain” and is calculated something like this:
PSG = (S_recall ÷ S) × R
Or in other words,
PSG = change in S after recall × probability of recall
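For concreteness, here is a minimal sketch of how a PSG_desc ordering could be computed. It assumes you already have each card's R, S and S_recall from FSRS; the Card container and function names are hypothetical, not the simulator's actual code.

```python
from dataclasses import dataclass

@dataclass
class Card:
    # Hypothetical fields; in practice these values come from FSRS.
    retrievability: float          # R: probability of recall right now
    stability: float               # S: current memory stability (days)
    stability_after_recall: float  # S_recall: stability if the next review succeeds

def psg(card: Card) -> float:
    """Potential Stability Gain: relative growth in S, weighted by recall probability."""
    return (card.stability_after_recall / card.stability) * card.retrievability

def sort_psg_desc(cards: list[Card]) -> list[Card]:
    """Order the review queue so that cards with the largest PSG come first."""
    return sorted(cards, key=psg, reverse=True)
```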
Purely from intuition, I expect PSG_desc to perform better in edge cases, since the formula for PSG includes both difficulty and retrievability: say, when retrievability_desc produces an order that is largely the reverse of what difficulty_asc would produce.
Now, it will probably be slow, but I wanted to ask whether we should implement this instead of what was suggested in Improving sort orders.
Tangent
We tried something like PSG = (S_recall × R − S_forget × (1 − R)) ÷ S, but it didn't work out too well.
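For reference, that variant looks something like this as code (a sketch only; S_forget is assumed to be the post-lapse stability FSRS would assign if the review fails):

```python
def psg_expected(s_recall: float, s_forget: float, s: float, r: float) -> float:
    # Expected post-review stability over both outcomes (recall vs. lapse),
    # expressed relative to the current stability S.
    return (s_recall * r - s_forget * (1.0 - r)) / s
```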
Hi, what is the image with the table of sort orders? Is it from some add-on? How do I interpret this data? Are those statistics showing how the order of reviews impacts learning? Very interesting.
It's not an add-on. It's a simulator developed by Jarrett Ye, the creator of FSRS. See the discussion I linked in the OP; we'll have too much clutter if we discuss it here again.
I want to reiterate my aspersions about total_remembered here (which filter down to seconds_per_remembered_card). I don't think we should be using these as the objective function.
I think you’re calculating percent change in S, not absolute change in S. That may be intentional, but I think absolute change in S would be more useful for PSG purposes.
PSG = (S_recall − S) × R
With percent change in S, it’s basically the exact same sort as difficulty_asc because that’s exactly the function of D in the algorithm (the sim results show it’s not the exact same, but pretty damn close).
The larger the value of D, the smaller the SInc value. This means that the increase in memory stability for difficult material is smaller than for easy material.
With absolute change in S, you’d be maximizing the actual time between reviews. That makes more sense to me.
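To make the distinction concrete, here is a small sketch of the two variants side by side (plain-float helpers, purely illustrative):

```python
def psg_relative(s_recall: float, s: float, r: float) -> float:
    # Relative ("percent") change in S, weighted by recall probability (the OP's formula).
    return (s_recall / s) * r

def psg_absolute(s_recall: float, s: float, r: float) -> float:
    # Absolute change in S (in days), weighted by recall probability (suggested here).
    return (s_recall - s) * r

# Example: two cards with the same relative gain but very different absolute gains
# are tied under psg_relative but ranked very differently under psg_absolute.
print(psg_relative(s_recall=2.0, s=1.0, r=0.9), psg_absolute(s_recall=2.0, s=1.0, r=0.9))    # 1.8 0.9
print(psg_relative(s_recall=60.0, s=30.0, r=0.9), psg_absolute(s_recall=60.0, s=30.0, r=0.9))  # 1.8 27.0
```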