The difficulty update (from old D to new D) consists of two parts:
1. The update that depends on the grade. Since there are 4 grades, there are 4 possible values.
2. The “mean reversion”, as LMSherlock calls it, which is usually small.
If we assume that part 2 is negligible - and it often is - it turns out that D doesn’t change continuously; it can only change by one of 4 discrete amounts.
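Roughly, the update has this shape (a minimal Python sketch of the general idea, not the actual FSRS source; w6, w7 and the Easy initial-difficulty value are made-up placeholders, and D is on the internal 1–10 scale):

```python
# Minimal sketch of the two-part difficulty update (placeholder parameters,
# not the real optimized ones). D is on the internal 1..10 scale.

def update_difficulty(d, grade, w6=0.9, w7=0.05, d0_easy=3.0):
    # Part 1: grade-dependent step. With 4 grades (1 = Again .. 4 = Easy),
    # (grade - 3) takes only 4 values, so this step is discrete.
    d_new = d - w6 * (grade - 3)
    # Part 2: small "mean reversion" toward the initial difficulty of an Easy card.
    d_new = w7 * d0_easy + (1 - w7) * d_new
    # Clamp to the allowed range.
    return min(max(d_new, 1.0), 10.0)

# From any given D there are only 4 possible next values, one per grade:
for grade in (1, 2, 3, 4):
    print(grade, round(update_difficulty(5.0, grade), 3))
```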
However, this is somewhat different in the latest version of FSRS, where we added another term to D to make it approach the maximum asymptotically, so that D is never exactly equal to its maximum allowed value. This makes the distribution smoother. I suggest using the latest RC, optimizing parameters and checking the distribution again.
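Conceptually, the new term scales the grade step by how far D is from its maximum, so repeated Again presses settle just below the ceiling rather than hitting it, while Easy can still reach the floor exactly because of clamping. A rough sketch (again with placeholder parameters, not the real formulas or optimized values):

```python
# Rough sketch of the newer, damped update (placeholder parameters).
D_MIN, D_MAX = 1.0, 10.0

def update_difficulty_damped(d, grade, w6=1.5, w7=0.005, d0_easy=3.0):
    delta = -w6 * (grade - 3)                       # grade-dependent step
    d = d + delta * (D_MAX - d) / (D_MAX - D_MIN)   # step shrinks as d nears D_MAX
    d = w7 * d0_easy + (1 - w7) * d                 # small mean reversion
    return min(max(d, D_MIN), D_MAX)                # clamp to the allowed range

d = 5.0
for _ in range(50):                 # pressing Again over and over
    d = update_difficulty_damped(d, grade=1)
print(round(d, 4))                  # settles just below D_MAX, never exactly 10

d = 5.0
for _ in range(50):                 # pressing Easy over and over
    d = update_difficulty_damped(d, grade=4)
print(d)                            # reaches the minimum (1.0) exactly, via the clamp
```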
Last but not least, D cannot reach 0% if you never use Easy.
I already did. This is the distribution after optimizing with the new RC (Optimizing All Presets) and then pressing Reschedule All Cards on my Main Deck (with the help of the FSRS Helper add-on).
However, this is somewhat different in the latest version of FSRS, where we added another term to D to make it approach maximum asymptotically so that D is never exactly equal to its maximum allowed value.
Shouldn’t this apply both ways? A card should never be able to reach maximum ease (or minimum difficulty) at all, even by pressing Easy an infinite number of times. “Maximum” ease or difficulty shouldn’t be a thing.
Nope. It only applies one way. A card can still reach exactly 0% difficulty - as long as you press Easy - but not exactly 100% difficulty.
I don’t think that should be the case, though. Is this a mathematical restriction or a theoretical one? Practically speaking, things can always get easier, just as things can always get harder. They don’t hit a magical boundary of ease, is what I am saying.
I would think that a normal distribution only happens if all the cards have the same complexity, all the facts are equally unknown, and they all have equal interference from real-life exposure to those facts or related facts. An extreme example: a deck of randomly generated made-up words. Some might click a bit better than others, but the chances of that happening are fully random.
I’m learning a language, and I get much more exposure to words that are used frequently; there are also a lot of words that are similar to words in other languages that I speak. There is also a group of words that can’t be related to anything, or words that are similar but have a different meaning. My difficulty distribution doesn’t look remotely like a smooth normal distribution.
That is not my point. My point is that, for example, cards are assigned a difficulty of, let’s say, either 68% or 75%, with none in between. So my question is: why is there a “void” at some D values?
Based on what @Expertium says, it is a restriction that comes from FSRS itself and how it assigns values.
I don’t know if this is good or bad, but I feel like difficulty is always continuous. There can always be difficulties between two values, even between 2 consecutive difficulty values. For example, all the D values between 58% and 59% → 58.8%, 58.76%, 58.665%, and so on… let alone the D values between values as far apart as 68% and 75%.
I agree that continuous D is more intuitively pleasing. If D were also based on R at the time of the review, that could result in a continuous (or at least somewhat smoother) distribution, but I’ve tried something like 20 different ways of using R to update D, and nothing helped.
Yes. The scheduling is still static; it is not “real-time”. If the next interval was 2h at 10:00 AM and I don’t press Good, then come back later in the day at 5 PM and review it, the button will still say 2h.
If there is a way to make FSRS “real-time”, that and adding an asymptote at minimum difficulty could perhaps lead to smoother difficulty curves.
But I don’t believe that’s all there is to it.
I think this also has something to do with FSRS not being able to factor in short-term memory and how it behaves “real-time”.