Also, Jarrett, another suggestion: learn_span = deck_size / learn_limit_perday. This way, the number of days to simulate is automatically adjusted if you change the deck size from 20k to something else.
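As a minimal sketch of that calculation (variable names follow the post, not any particular simulator's API):

```python
# Derive the simulation length from the deck size, so that every new card
# gets introduced before the simulation ends.
deck_size = 20_000           # total new cards in the deck
learn_limit_perday = 10      # new cards introduced per day (illustrative value)
learn_span = deck_size // learn_limit_perday
print(learn_span)            # 2000 days for a 20k deck at 10 new cards/day
```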
Doing some quick math, the derivative of R comes out to be proportional to R^3/S. For the reason mentioned by you, I think that sorting in the descending order of R^3/S can be quite effective (when there is a backlog).
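A sketch of where that proportionality can come from, assuming the FSRS-4.5 power forgetting curve with decay −0.5 (the exact derivation in the post may differ):

$$
R(t) = \left(1 + \frac{19}{81}\cdot\frac{t}{S}\right)^{-0.5}
\quad\Rightarrow\quad
\frac{dR}{dt} = -\frac{1}{2}\cdot\frac{19}{81\,S}\left(1 + \frac{19}{81}\cdot\frac{t}{S}\right)^{-1.5}
= -\frac{19}{162}\cdot\frac{R^{3}}{S}
$$

since \((1 + \tfrac{19}{81}\tfrac{t}{S})^{-1.5} = R^{3}\).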
Even though it confirms the opinion of others here that Anki can be daunting for new users?
A product that ignores the views of potential users will limit its potential client base. Any company worth its salt would note what potential customers say prevents them from using its product.
Jarrett already implemented the first suggestion. R^3/S is interesting; I’ll see if I can do it myself later (I’m not at home right now). @vaibhav, how did you get R^3/S?
I am too, and the only way I can see that even being possible is if the desired retention level is not set at the actual optimum.
PRL Descending gets you as close to the desired retention (DR) level as possible. If you’ve chosen a suboptimal DR, and let’s say, for example, that a lower DR is better, then a less efficient sort that causes your average review to happen at that more optimal, lower retrievability is going to look better.
@L.M.Sherlock I know this would be really easy to code, but it might take a long time to run… Can you loop the simulations from DR 0.70 to 0.90 (or whatever range you’re sure is going to capture the optimum) in increments of 0.01 and show the tables produced for each? I know you guys already settled on 0.90 being the best, but you didn’t do this with all the different sort options. PRL Descending just makes too much sense for it not to have won lol.
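Something like this, as a rough sketch (run_simulation and its arguments are placeholders, not the simulator's real API):

```python
import numpy as np

def run_simulation(desired_retention: float, sort_order: str) -> dict:
    """Placeholder standing in for the real simulator call; returns dummy numbers."""
    return {"total_remembered": 0.0, "total_time_hours": 0.0}

sort_orders = ["PRL Descending", "PRL Ascending", "Due date", "Random"]

# Sweep DR from 0.70 to 0.90 in steps of 0.01 and print one row per sort order.
for dr in np.linspace(0.70, 0.90, 21):
    for order in sort_orders:
        result = run_simulation(desired_retention=round(float(dr), 2), sort_order=order)
        print(f"DR={dr:.2f}  {order}: {result}")
```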
The sim is using DR to determine when cards become due (I assume). This will definitely affect the results.
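For what it’s worth, under the FSRS-4.5 forgetting curve (decay −0.5, factor 19/81) the mapping from DR to the scheduled interval is, as a sketch:

$$
\left(1 + \frac{19}{81}\cdot\frac{t}{S}\right)^{-0.5} = DR
\quad\Rightarrow\quad
t = \frac{81}{19}\, S \left( DR^{-2} - 1 \right)
$$

which is ≈ S at DR = 0.90 and grows quickly as DR is lowered, so the choice of DR directly shifts when cards become due.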
Yes, but I was looking at the data in the last column (seconds per learned card or something). PRL is performing worse than the other candidates. But I get your logic now.
That metric is really hard to parse and may not even be useful at all. “Total Remembered” isn’t actually a number of cards; it’s the total summed-up retrievability (which can be thought of as average retrievability for the sake of understanding). Dividing that by time doesn’t really make sense.
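To illustrate, assuming that definition:

$$
\text{Total Remembered} = \sum_{i} R_i
$$

so 1,000 cards each at R = 0.90 and 900 cards each at R = 1.00 both score 900, even though they represent different numbers of learned cards, and dividing either by study time gives a “retrievability per second” that is hard to interpret.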