I have two decks in Anki. One I find relatively easy: I answer faster and generally learn new cards faster. The other is the opposite.
The optimal retention for these two decks differs significantly; the easier deck recommends 91%, the harder deck recommends 76%.
Do we have a good understanding about what properties of decks lead to a deck having a lower or higher optimal retention?
Is there any intuition which can help explain why these two decks have such different optimal retentions?
It feels like these optimal retentions are close to my historical performance on these decks with the SM2 algorithm. Is it possible that if, e.g., I'd used FSRS on the harder deck from the start with a 90% desired retention, I'd now be getting a recommended optimal retention of around 90% for the same deck?
If you do cards at higher retention you retain more of the info. On the flip side, you get to learn fewer new cards. Maybe for many hard decks, learning new cards is cheaper than retaining all of them in the long term. Maybe that's it, but I'm not sure.
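To make the trade-off concrete: under a power-law forgetting curve of the form used by FSRS-4 (R(t, S) = (1 + t/(9S))⁻¹, which is an assumption here; later FSRS versions use a slightly different curve), the interval that hits a desired retention r is t = 9S(1/r − 1). A quick sketch shows how sharply intervals shrink, and therefore how review load per card grows, as retention goes up:

```python
# Toy illustration of the retention/workload trade-off.
# Assumes the FSRS-4-style forgetting curve R(t, S) = (1 + t/(9*S))**-1,
# which gives the interval t = 9*S*(1/r - 1) for desired retention r.
# This is a back-of-the-envelope sketch, not the actual scheduler.

def interval(stability: float, desired_retention: float) -> float:
    """Days until predicted recall drops to `desired_retention`."""
    return 9 * stability * (1 / desired_retention - 1)

# Same card (stability = 10 days), three desired retention settings:
for r in (0.76, 0.85, 0.91):
    print(f"DR={r:.2f}: interval = {interval(10, r):.1f} days")
```

Going from 76% to 91% desired retention roughly triples the review frequency for the same card, and that time has to come out of the budget for new cards.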
Can you bite the bullet and test it? I sometimes feel increasing DR makes my CMRR value increase too. Ideally, slice off a piece of your deck, run it at a DR of 0.90, and then see what happens to the value CMRR gives you.
By the way, I don’t completely trust CMRR because there are unexplained behaviours I notice in it (I reported one on GitHub).
I’ve got a feeling this is something which changes very slowly, so experimenting on my own decks could take a long time to produce meaningful results. It could maybe be tested with a few simulations, though I haven’t played much with the simulation code I’ve seen floating around yet.
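A simulation along those lines doesn't need the full FSRS simulator to show why an interior optimum can exist. Here is a deliberately crude single-card Monte Carlo sketch: every parameter (the forgetting curve, the pass/fail time costs, the stability growth and shrink factors) is an assumption, not an FSRS value. The key ingredient is that lapses cost extra time and knock stability back down, which is what penalizes very low retention:

```python
# Toy single-card simulation (NOT the FSRS simulator).
# Assumptions: FSRS-4-style interval 9*S*(1/r - 1); a pass multiplies
# stability by `grow`; a lapse shrinks it by `shrink` and costs more time.
import random

def review_time(dr, days=3650, seed=0, pass_time=6.0, fail_time=60.0,
                grow=2.0, shrink=0.3, s0=1.0):
    """Total seconds spent to keep one card alive for `days` days
    at desired retention `dr`, under the toy model above."""
    rng = random.Random(seed)
    s, t, work = s0, 0.0, 0.0
    while t < days:
        t += 9 * s * (1 / dr - 1)        # schedule the next review
        if rng.random() < dr:            # recalled: stability grows
            s *= grow
            work += pass_time
        else:                            # lapsed: stability drops, costs more
            s = max(s0, s * shrink)
            work += fail_time
    return work

# Compare maintenance cost across desired retention settings:
for dr in (0.70, 0.80, 0.90, 0.95):
    print(f"DR={dr:.2f}: {review_time(dr):6.0f} s over 10 years")
```

With only one card and one random seed the output is noisy, so any real experiment would average over many seeds and model new-card introduction under a fixed time budget; the point is just that the cost curve is not monotone in DR once lapses are priced in.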
I understand the optimization process and what minimal optimal retention means, but I’m not smart enough to have an intuitive explanation/understanding of how varying inputs may influence the optimization.
Having a function which works doesn’t necessarily imply understanding that function, especially in areas like statistics, machine learning and optimisation.