Do you mean learning steps or interday learning steps?
Learning and re-learning steps.
The problem is mainly that we don’t know what the “some” is. Do we divide cards by stability? By difficulty? Per card (not this one)? By type of content?
I had the same idea as yours, but implementing this probably won’t bring huge advantages. Instead, I think CMRR and the simulator would have to be reworked just to support it. Plus, it would make Anki overly confusing.
I have a different idea now which seeks to help users divide their card collection. It’s a familiar idea: deck-specific settings. In particular, a deck-specific parameters field and search bar. This should work just like deck-specific limits do. The CMRR feature would have to be reworked a little, but apart from that, I feel this would be a good improvement. You could simply select the “Deck” option instead of the “Preset” option, click Optimize, and it’s done. The search would look something like deck:"parent::child" -is:suspended.
If this is done, I finally wouldn’t need 30 different presets in my collection. Instead, I would just assign deck-specific parameters to each of those decks.
Is there an estimate of when this FSRS build is coming out? I don’t know why I am so eager for this (partly because I’ve wished for learning steps to be taken into account when deciding intervals for quite a while now, even though you said it doesn’t play a big role). So I am excited like a dog.
Somewhere between “tomorrow” and “in 10 years”. Alright, well, if I really had to guess…before 2025.
I want to share this:
As I am studying for my state medical examination, I am trying to power through all the medical Anki cards I have amassed over 2 years.
In my medical collection of 67,951 cards, which were not all made at the same time but through a continuous and painful process, 9,743 cards failed. This translates into a true retention of 85.6%, while my desired retention is set to 95%. I went through them at an average of about 3,000 cards per day, so I don’t know whether this anecdote is of significance, since it could be influenced by many factors (time of day, difficulty, me studying in a foreign language, backlog size). I am also using different presets for different subdecks, so I don’t know if that plays a role.
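For what it’s worth, the reported figure checks out arithmetically. A quick sketch (plain Python, numbers taken from the post above; "true retention" here simply means the pass rate over the cards reviewed):

```python
# Sanity-checking the reported true retention from the figures above.
reviewed = 67_951  # total medical cards reviewed
failed = 9_743     # cards that lapsed (answered "Again")

true_retention = 1 - failed / reviewed
print(f"True retention: {true_retention:.1%}")  # ~85.7%, consistent with the reported 85.6% (rounding)

desired_retention = 0.95
print(f"Gap vs. desired: {desired_retention - true_retention:.1%}")
```

The gap of roughly 9 percentage points between desired (95%) and observed (~85.6%) retention is the discrepancy being discussed below.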
But I can’t help thinking the discrepancy is too large to be just “noise”. I think I made a similar post before, but I believe I know at least one reason behind this: interference. As new knowledge is acquired, it either strengthens old knowledge, leaves it unaffected, or, more often, overlaps with and interferes with it.
I don’t know exactly how FSRS deals with interference from the continuous growth of knowledge (from what I can tell, suboptimally), but it is a real thing. And that is just retrograde interference; there is also the anterograde kind, whereby old knowledge interferes with acquiring new knowledge.
If FSRS could somehow manage to model interference based on the card difficulty of new cards and then adjust the difficulty of other cards accordingly, I believe that would be a game-changer.
Basically, a card’s difficulty would not be influenced only by its own reviews, but by reviews across the entire collection.
Yeah, that seems like too large a discrepancy to be explained by noise. @L.M.Sherlock, sorry for bothering you so much lately, but maybe you could give some advice?
What about your true retention for young cards vs. mature cards? If old knowledge interferes with your new knowledge, I would guess your mature retention will be higher than your young retention.
Have a look. This is from my general reviews over the past month (not my actual review run over my entire collection, which, as I said, clocked in at 85% true retention).
Your observation holds: mature cards consistently score higher. But I think interference goes both ways nevertheless.
Could you share your FSRS parameters? I guess the first four parameters are small, which may cause low young retention.
I could, but like I said, I have different presets for different subdecks. Is there some command I can enter into the Anki console to get you that?
I guess there isn’t. What about your largest preset’s parameters?
Here are some parameters from some of my newer decks (keep in mind, I optimize at least once a week, and I just optimized yesterday, so these are fresh after the run).
Here are the parameters of some of my older decks (>1 year old).
What share of those 9700 lapsed cards were overdue when you studied them?
I don’t know exactly. There was definitely a backlog, but I have been addressing it over the past 2 months. So if a chunk of them were overdue, they would have been overdue by 4 or 5 days, a week at most. So they were just “due” cards, not “overdue” ones; nothing out of the ordinary. As I said, I went ahead and reviewed every card in my collection (due and not due).
Thanks. The parameters refute my hypothesis. It seems FSRS has a systematic error in your case.
Should I be worried? Or is this something that could genuinely be improved in FSRS (dealing with interference)?
Do you need a larger sample size?
If it helps to know, mature cards score higher than young cards even over the span of a whole year, so the observation is pretty consistent.
I don’t expect something like that to happen in the near future. You shouldn’t either (the near future extends up to 2030).
(Actually, why is support happening in this thread? This is for FSRS suggestions.)
It would be nice to get a quick summary for each FSRS update. Maybe in a blog post? I should be running the latest version on desktop and Android, so it would be nice to get a feel for the improvements, since the changes aren’t immediately noticeable.
I think Sherlock documents it on his GitHub page.