(partially backed up by @David. We don’t agree on the details, but we agree that something like this is necessary)
Right now, a lot of users don’t realize that desired retention (DR) determines interval lengths and workload. I could compile a list of 100+ posts where someone has DR at 90% and asks, “Why are my intervals so long? What do I do?”.
My initial idea was like this:
But David thinks that’s probably too much information, and that some people may still end up confused. So here’s a more condensed way to present the same information, plus a hint about interval lengths.
Question 1: Will the interval hints always be there? Can’t we disable them?
Answer 1: No. Disabling them defeats the whole purpose, which is making it obvious what desired retention does.
Question 2: It’s obvious to me!
Answer 2: It’s not obvious to hundreds of other users.
Question 3: Is the slider necessary?
Answer 3: Yes, otherwise interval length hints will look out of place. Where would they even be then?
Question 4: Do you see any way to implement this overhaul without giant interval hints that cannot be turned off? Like, any at all–
Answer 4: Absolutely not. I understand that this will upset power users, but that’s a necessary evil.
The user needs to think about whether he understands FSRS well enough to mess with it? Then it doesn’t solve the problem.
The user needs to think about what desired retention does before using your solution? Then it doesn’t solve the problem.
The user needs to think? Then it doesn’t solve the problem.
The point is NOT to add more customization. The point is to throw “DESIRED RETENTION AFFECTS INTERVALS AND WORKLOAD” in the user’s face so hard that he won’t be able to miss it.
Question 5: So we would be able to use both a slider and an input field for DR?
Answer 5: Yes. If you use the input field to change the value, the slider should instantly change as well.
Question 6: When will the workload value be recalculated?
Answer 6: OK, now we are getting to the important part! I have no way of measuring how long the simulations take, at least not with the kind of millisecond precision that I need. That would require extracting that particular part of the Rust code and running it manually. So, depending on how long it takes to run simulations for 29 values of desired retention (as for the simulated duration, 90 days seems reasonable, IMO), we have two options.
- If it takes <40 milliseconds (on a low-end or mobile device): just re-run the simulations every time the user changes the maximum interval, learning steps, relearning steps, new card limit, review limit, Easy Days, or anything else that affects the simulations. The amount of lag won’t affect the user experience much. This would be ideal. We could even reduce the duration of the simulation to 30 days, if that’s what it takes to get under 40 ms. As to why 40 ms: that’s a good baseline for “an amount of lag that isn’t annoying”.
- If it takes >=40 milliseconds: then we have no choice but to recalculate the values only when FSRS parameters change; in other words, after each optimization. That would be very unfortunate, since the user won’t immediately see how changing settings affects the workload, and likely won’t realize that the values only change when parameters are recalculated.
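To make the 40 ms check concrete, here is a minimal Python sketch of the decision logic: time one full batch of 29 simulations and pick the recalculation strategy accordingly. Everything here is illustrative — `simulate_workload` is a hypothetical stand-in for the real FSRS simulator (which lives in the Rust backend), and the DR range 0.70–0.98 is my assumption about what the slider would cover.

```python
import time

def simulate_workload(desired_retention: float, days: int = 90) -> float:
    """Hypothetical stand-in for the real FSRS simulator.

    Returns a fake reviews-per-day figure; the real implementation
    lives in the Rust backend and is far more expensive.
    """
    # Dummy busy-work so the timing harness has something to measure.
    return sum(desired_retention ** i for i in range(days)) / days

def time_batch(retentions, days: int = 90) -> float:
    """Run one simulation per DR value and return elapsed milliseconds."""
    start = time.perf_counter()
    for dr in retentions:
        simulate_workload(dr, days)
    return (time.perf_counter() - start) * 1000

# 29 DR values: 0.70, 0.71, ..., 0.98 (assumed slider range)
retentions = [0.70 + 0.01 * i for i in range(29)]
elapsed_ms = time_batch(retentions)

BUDGET_MS = 40  # "amount of lag that isn't annoying"
if elapsed_ms < BUDGET_MS:
    strategy = "re-run on every settings change"
else:
    strategy = "re-run only after each optimization"
print(f"{elapsed_ms:.1f} ms -> {strategy}")
```

The same harness could shrink `days` to 30 to see whether a shorter simulation gets the batch under the budget.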
Time per simulation is the main factor that will decide whether this overhaul is a good idea or not.
I tried a very crude estimate: run the simulations in Python, measure the time, and divide it by 50, because Rust is 50 times faster than Python*. But I ended up with a completely unrealistic estimate: for 29 values of DR and 90 simulated days, it would take 8 seconds. I’m willing to bet it’s much faster than that in reality; hopefully Jarrett can estimate it properly.
*source: the universe has revealed it to me
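For the record, the back-of-the-envelope arithmetic behind that 8-second figure looks like this. The measured Python time is my assumption, reverse-engineered from the stated result (8 s × 50 = 400 s); only the 29 values, 90 days, and 50× speedup come from the text above.

```python
# Crude estimate: time the Python simulations, then divide by an
# assumed 50x Rust-over-Python speedup.
num_dr_values = 29       # one simulation per DR value on the slider
simulated_days = 90      # duration of each simulation
python_seconds = 400.0   # ASSUMED measured Python time for the whole batch
rust_speedup = 50        # "the universe has revealed it to me"

rust_estimate_s = python_seconds / rust_speedup
per_simulation_ms = rust_estimate_s / num_dr_values * 1000

print(f"Estimated Rust time: {rust_estimate_s:.0f} s total")  # 8 s
print(f"Per simulation: {per_simulation_ms:.0f} ms")          # ~276 ms
```

Even this crude figure is two orders of magnitude over the 40 ms budget, which is why a proper measurement matters so much.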