Ordering Request: Reverse Relative Overdueness

Wait, the sim wasn’t doing it already? Bad.

The sim just has a new cards limit and a review limit. By default, they are independent and constant.

No, the original sim was set up to just add 20 new cards every day, regardless of the backlog. I think that was on purpose though, because we wanted to see how they all performed with a big backlog.

Done (for the few we’ve been talking about the most)

You should post True Retention graphs as well

I’ll need to re-do the sim with more normal settings. I have this one set up based on my study habits, which means there are days with zero cards studied, including gaps of 500 days straight with no studying; when I do study, I sample from a Poisson distribution to decide how many cards to study (rough sketch below). The graph is a little crazy, but it’s more accurate to my situation (and probably more accurate to the average user than a consistent 80 cards every single day).

Hard to learn anything from this particular graph though.
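
Roughly, that sampling looks like this. It’s a simplified sketch, not the sim’s actual code; the study-day probability and mean cards per day are made-up numbers for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_study_schedule(n_days: int, p_study_day: float = 0.3, mean_cards: float = 80.0):
    """Cards studied per day for an irregular schedule.

    Most days are zero; on days when any studying happens, the card count
    is drawn from a Poisson distribution. Sketch only, not the sim's code.
    """
    studied = rng.random(n_days) < p_study_day        # which days get any studying at all
    cards = rng.poisson(lam=mean_cards, size=n_days)  # card counts for the study days
    return np.where(studied, cards, 0)

print(sample_study_schedule(365)[:30])
```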

I’m not sure if negative True Retention is very accurate for your situation…or for anyone’s situation…

I changed the moving_average code to handle zero and NaN values, and I think I did it wrong. I just tested it, and the retention_per_day is never negative, so that graph should never be negative either.
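
For what it’s worth, the usual trick is to mask the empty days out of both the numerator and the denominator instead of feeding zeros or NaNs into the window. A rough sketch of what I mean (this is not the sim’s actual moving_average function, and the window size is arbitrary):

```python
import numpy as np

def moving_average(retention_per_day: np.ndarray, window: int = 30) -> np.ndarray:
    """NaN-aware moving average.

    Days with no reviews (NaN) are dropped from both the numerator and the
    denominator, so the output stays inside the range of the real values
    and can never go negative if the inputs are non-negative.
    """
    values = np.nan_to_num(retention_per_day, nan=0.0)    # NaN -> 0 for the numerator
    valid = (~np.isnan(retention_per_day)).astype(float)  # 1 where there is actual data
    kernel = np.ones(window)
    num = np.convolve(values, kernel, mode="same")
    den = np.convolve(valid, kernel, mode="same")
    with np.errstate(invalid="ignore", divide="ignore"):
        return num / den                                  # NaN where the whole window is empty
```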

bro is generating noise like crazy

Edit: I think I’ll make a post with all the graphs here. It’d be cool. What do you all think? I don’t mind if someone else does it too.

Dae already opened a GitHub issue. Is there anything else we need to achieve?

No, it’s so that future generations don’t have to read through all this. Plus, I wanna make a “Dynamic”/“Automatic” order. So, for that too.

I just want to point out something in this graph because of the way we’re prioritizing the sums of retrievability scores.

You can see that in some of the better-performing sorts, there’s a big gap in the middle. That’s because they’re allowing the most forgotten cards to go to the back of the queue. That’s a good thing, but it causes those R scores to drop further and further while the cards are waiting to be studied. That’s what you want when you’re working through a backlog: you don’t want to waste efficiency on the most forgotten cards.

This also causes the sum of retrievabilities to look really bad for those sorts. You have a bunch of cards being pushed way down, and other sorts prioritize those cards and make their average R score look higher, even though they’re terribly inefficient.
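
To put rough numbers on how fast those parked cards sink, here’s the back-of-the-envelope version using the FSRS-4.5 power forgetting curve (if I have the constants right):

```python
# FSRS-4.5 forgetting curve (as I understand it): R(t, S) = (1 + 19/81 * t/S) ** -0.5
def retrievability(elapsed_days: float, stability: float) -> float:
    return (1 + 19 / 81 * elapsed_days / stability) ** -0.5

# A card with S = 10 days that keeps getting pushed to the back of the backlog:
for t in (10, 30, 100, 300):
    print(t, round(retrievability(t, 10), 2))
# roughly 0.90, 0.77, 0.55, 0.35
```

So a sort that correctly parks the most forgotten cards drags its own sum-of-R metric down the longer the backlog takes to clear.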

This is why I’ve been skeptical of total_remembered and seconds_per_remembered_card being useful at all.

but it is possible to set up a filtered deck
“deck:AAA::AC” is:due prop:due>-4
in the Search box
see OP

It doesn’t mean the sort will become reverse relative overdueness. Filtered decks have their own sort order.

The more I think about it, the more sense it makes that Ascending Difficulty is doing so well. I think “Difficulty” is just a bad name for that metric, and the name is what was causing me confusion.

Retrievability (or really the inverse of retrievability) is what we mean by difficulty. Think about it: “the probability that you get this card correct right now” is basically exactly what we mean by subjective difficulty. The less likely it is that you get it right, the more difficult we consider it.

Difficulty™ isn’t really telling us that. It’s telling us how fast we need to change Stability. It’s basically an error correction term. If the Stability is underestimating our actual retention, then that’s when D is going to be lower. It forces Stability to change faster.

That’s why the Ascending Difficulty sort does so well. It’s prioritizing the overall increase of Stability across all the cards. It’s asking, “Which card right now, if you get it right, will increase its Stability the most?” Then, when you get it right, its interval gets pushed further into the future, relatively speaking, than any other card’s would have. So it’s prioritizing getting cards pushed into the future faster and lessening the average review load.
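
For reference, the FSRS stability update after a successful review looks roughly like this (FSRS-4.5-style; w8, w9, w10 are fitted weights, and I’m leaving out the Hard/Easy multipliers). The (11 - D) factor is the only place difficulty enters, so a lower D means a bigger multiplicative jump in Stability:

```python
import math

def next_stability(d: float, s: float, r: float, w8: float, w9: float, w10: float) -> float:
    """Post-review stability after a successful review (rough FSRS-4.5 form)."""
    s_inc = math.exp(w8) * (11 - d) * s ** -w9 * (math.exp(w10 * (1 - r)) - 1) + 1
    return s * s_inc
```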

But the thing is, if you leave a bunch of cards without a review for a long time, their Retrievability decreases. Their difficulty, however, does not. This is why I think the R value is more robust.

@Expertium what do you think?

Yes, but the sort order is based on FSRS difficulty and not subjective difficulty. They are different things.

Edit: Oh, sorry. I didn’t read any further. You get the distinction.

I think how well this hypothesis works depends on retrievability. For example, if the low-difficulty cards have very low retrievability, you might be better off doing the high-retrievability cards first, since you’ll have forgotten most of those low-retrievability cards and would only be increasing stability for a small number of them.

As I write this, I realise we can use a metric like PLC but with a twist. How about PSG: Potential Stability Gain? This is in line with my previous idea of combining difficulty and retrievability. I’m not sure it’ll work better than the current methods, but it’s worth trying.

@rich70521 Can you come up with a formula? I don’t know how next stability is calculated.

(By the way, for perspective, Jarrett was working on a new “system” where FSRS would be used to increase stability as quickly as possible instead of trying to maintain a certain retention level. It didn’t work out too well.)

Difficulty is literally PSG. That’s the exact formula (I’m pretty sure).

Sorry, that’s not giving you the PSG number, but sorting by Difficulty is the exact same as sorting by PSG.
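
If anyone wants the actual PSG number rather than just sorting by D, it’s the same expression as the stability update sketched earlier, taken as a ratio (again FSRS-4.5-style, with the Hard/Easy multipliers left out):

```python
import math

def potential_stability_gain(d: float, s: float, r: float,
                             w8: float, w9: float, w10: float) -> float:
    """Ratio of next stability to current stability for a successful review.

    Difficulty enters only through the (11 - d) factor, which is why ranking
    by ascending D lines up with ranking by descending PSG when S and R are
    comparable across cards.
    """
    return math.exp(w8) * (11 - d) * s ** -w9 * (math.exp(w10 * (1 - r)) - 1) + 1
```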