I just upgraded to FSRS and am wondering how to schedule my decks. Currently I have one profile (AnkiHub AnKing Step 1) and then my default profile for all in-house exams. Using SM-2 I had a general pattern (which worked really well exam-score-wise, albeit likely inefficiently): I would generally see a card 3x on the first day, then use the Hard/Good/Easy buttons accordingly. My study habit is to read over the lecture notes and then go straight to Anki. Yes, it may be a bit brute force to memorize and learn via Anki, but it works for me, and I pick up concepts very easily.
For the default profile (in-house), we generally have an exam every week or two, and short-term memory is unfortunately the name of the game. This is why I developed my "3 reviews, 2/1, day off, and 1" pattern of seeing cards a lot. I'm just wondering how I can modify (if I should) the new-card learning steps and lapses to see a new card maybe 2x in a day, then once the next day, and then let FSRS do its thing.
For AnKing Step 1, I generally would see new cards 1x or 2x a day (1x if it was okay or more esoteric relative to what we learned in class, 2x if it was hard or related). I would then review these cards the next day (if I got to them, which was a problem with thousands of review cards).
I am wondering how to set the schedule so I see a new card either the next day or less than 3 days after learning it. My assumption is twofold: 1. increasing retention will likely solve this best without affecting FSRS's utility too much; 2. I likely have an inflated historical retention since I pressed Good when it was likely Hard, and decreased this to 0.85? Not sure what I should do otherwise. I am also not sure how new learning steps are different from, or affect, lapses, and I don't want hitting Hard on new cards (in order to see them 2x the first day) to mess with the long-term algorithm.
Sorry, that was a lot, but I appreciate any insight in advance. And yes, I know I have likely been studying inefficiently and misusing Anki a bit.
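(A quick aside on the desired-retention point above: in FSRS, desired retention decides how far predicted recall is allowed to fall before a card comes back, so raising it shortens intervals and lowering it lengthens them. Below is a minimal sketch using the FSRS-4.5/FSRS-5 power forgetting curve; DECAY and FACTOR are the fixed constants from those versions, the stability value is made up, and newer FSRS versions make the decay trainable, so treat this as an approximation.)

```python
# A minimal sketch of the FSRS-4.5/FSRS-5 power forgetting curve, just to show
# how desired retention maps to the next interval. DECAY and FACTOR are the
# fixed constants from those versions (FSRS-6 makes the decay trainable), and
# the stability value below is made up for illustration.

DECAY = -0.5
FACTOR = 19 / 81  # chosen so retrievability is exactly 90% when t == stability

def retrievability(t_days: float, stability: float) -> float:
    """Predicted probability of recall t_days after the last review."""
    return (1 + FACTOR * t_days / stability) ** DECAY

def next_interval(stability: float, desired_retention: float) -> float:
    """Days until retrievability is predicted to drop to desired_retention."""
    return (stability / FACTOR) * (desired_retention ** (1 / DECAY) - 1)

S = 10.0  # hypothetical stability of 10 days
for r in (0.85, 0.90, 0.95):
    print(f"desired retention {r:.0%} -> interval of about {next_interval(S, r):.1f} days")
# 85% -> ~16.4 days, 90% -> 10.0 days, 95% -> ~4.6 days:
# higher desired retention means you see the card again sooner.
```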
Anki comes with a scheduler; in fact, I'd call it a scheduling program as much as a flashcard program. You shouldn't want this much manual control unless there is a good reason to.
For most people, research shows that having a lot of sub-day steps does not add much to long-term memory. This is in line with what spaced repetition is about: spacing out your reviews. What if you experiment with reducing that number to 2, i.e. the defaults, 1m 10m?
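If it helps to see what those two default steps do before FSRS takes over, here is a rough sketch; it's a simplification rather than Anki's actual code, and Hard/Easy are left out for brevity.

```python
# A simplified sketch (not Anki's actual implementation) of how the default
# sub-day learning steps gate a new card before FSRS takes over.
# Hard and Easy are omitted to keep the example short.

LEARNING_STEPS = ["1m", "10m"]  # the default preset: two sub-day steps

def next_action(current_step: int, rating: str) -> str:
    """What happens to a card that is still in learning after it is rated."""
    if rating == "again":
        # Again sends the card back to the first learning step.
        return f"show again in {LEARNING_STEPS[0]}"
    # rating == "good"
    if current_step + 1 < len(LEARNING_STEPS):
        # Good advances the card to the next sub-day step.
        return f"show again in {LEARNING_STEPS[current_step + 1]}"
    # Good on the last step graduates the card; FSRS schedules it from here on.
    return "graduate: FSRS picks the next interval"

print(next_action(0, "good"))   # -> show again in 10m
print(next_action(1, "good"))   # -> graduate: FSRS picks the next interval
print(next_action(1, "again"))  # -> show again in 1m
```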
Ah, so you do four repetitions per day. I won't say you need that many. You'll get good grades, though, to quote you, "albeit likely inefficiently".
I don’t think it’s STM in the proper sense. Is it?
Now, I don't know what you're doing here: "3 reviews, 2/1, day off, and 1". I would advise you to just move back to the defaults, as explained above.
A better alternative: put them in different decks with different presets. The separation ensures better parameters for both groups. Do this for your other decks too. And don't forget to optimise!
Most people don't need to study most of their cards exactly one day later. Are you using 1d learning steps? They are not recommended with FSRS, so you might as well change that.
Learning steps, of course. I'm not advising it, though.
Decreased what?
They are learning steps and they don’t affect cards in relearning.
Yeah, I see people being confused about this all the time. Short-term memories last for 30 seconds. Not minutes, not hours, not days and definitely not 2 weeks.
Additionally, it appears that sleep actually helps with memory consolidation, as opposed to a common belief that you forget things after sleeping. This is somewhat supported by academic research, as well as by my and LMSherlock’s findings when analyzing review data from thousands of Anki users.
That is true; I'm more talking about the fact that seeing a card one day and then again in 8-10 days, when I have a test in about 10 days, doesn't seem like enough to really have the card memorized.
Agreed that 2 reviews would be good. Currently I have no 1d learning steps, just 5m and 15m. And regarding Hard, that's good to know; I read the thread, and I guess I was confusing hitting Hard on review cards with hitting it on new cards.
I checked, and that seems to be correct, although I'm really not sure how these words are even properly defined. My textbook also says,
"Experiments, which were carried out to test the stage model of memory, have produced mixed results. While some experiments unequivocally show that the STM and LTM are indeed two separate memory stores, other evidences have questioned their distinctiveness. For example, earlier it was shown that in the STM information is encoded acoustically, while in LTM it is encoded semantically, but later experimental evidences show that information can also be encoded semantically in STM and acoustically in LTM."
This is a common thing in many textbooks: you're given a theory and then told "it might not be accurate". It always leaves me wondering, so what is accurate, anyway?
Makes sense, thank you!
Log loss: 0.0920, RMSE(bins): 3.07% ("smaller numbers indicate a better fit to your review history"), with True Retention around 98%. I'm not entirely sure I'm retaining that much, to be honest (maybe an "I inflated my Good when it was Hard" type of thing?). But generally my numbers look much higher than the parameter example in the forum; is that fine?
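For anyone wondering what those two metrics mean: log loss is the average negative log-likelihood of FSRS's predicted recall probabilities, and RMSE(bins) compares the average predicted probability with the observed recall rate after grouping reviews into bins. Below is a small illustration with made-up numbers; Anki's actual binning scheme is more involved, so this is only a sketch of the idea.

```python
# A toy illustration of the two metrics in Anki's Evaluate dialog, computed on
# made-up (predicted recall probability, actual outcome) pairs. Anki's real
# RMSE(bins) uses a more involved binning scheme; this only shows the idea.

import math

reviews = [
    (0.97, 1), (0.92, 1), (0.88, 0), (0.95, 1),
    (0.80, 1), (0.99, 1), (0.85, 0), (0.93, 1),
]  # hypothetical data: (predicted probability of recall, 1 = recalled, 0 = forgot)

def log_loss(data):
    """Average negative log-likelihood: lower means better-calibrated predictions."""
    return -sum(math.log(p) if y == 1 else math.log(1 - p) for p, y in data) / len(data)

def rmse_bins(data, n_bins=4):
    """Bin reviews by predicted probability, then take the size-weighted RMSE
    between each bin's mean prediction and its observed recall rate."""
    bins = [[] for _ in range(n_bins)]
    for p, y in data:
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    total = 0.0
    for b in bins:
        if not b:
            continue
        mean_p = sum(p for p, _ in b) / len(b)
        mean_y = sum(y for _, y in b) / len(b)
        total += len(b) * (mean_p - mean_y) ** 2
    return math.sqrt(total / len(data))

print(f"log loss:   {log_loss(reviews):.4f}")
print(f"RMSE(bins): {rmse_bins(reviews):.4f}")
```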
All is well; I'd just like to know why you changed the "Ignore reviews before" setting. From the RMSE itself, it's looking good. But that option makes the optimiser ignore every card you've reviewed before the set date.
Try one thing: set that date to somewhere around 1970 and optimise again to see if your parameters change.
Ah, that's just when I started medical school and had somewhat of a pattern for how I was going to use Anki, versus just hitting Easy on every card.
Running into a new problem -
This looks like a different issue. Please make another post describing the steps you took just before this happened; it helps if people can reproduce it themselves. Also provide screenshots of the card info / cards where you're facing the problem, and copy-paste your debug info from Anki.
You can read my unfinished article (the first one) here: https://expertium.github.io/. But it’s a pretty long read. Here’s an excerpt:
Notice that while GRU-P (short-term) outperforms GRU-P and while FSRS-5 outperforms FSRS-4.5, the difference in all 3 metrics is very small. This suggests that same-day reviews have a very small impact on long-term memory. Since the architecture of FSRS and GRU-P is very different, the fact that the improvement is small for both of them suggests that architecture is not to blame here.
Basically, most algorithms in the benchmark only take into account one review per day, the first one. LMSherlock made two algorithms that do take same-day reviews into account: FSRS-5 and GRU-P (short-term). Their "no same-day reviews" counterparts are FSRS-4.5 and GRU-P. FSRS-5 outperforms FSRS-4.5, and GRU-P (short-term) outperforms GRU-P, but the difference isn't huge. In fact, it's pretty small. So both for FSRS and for a neural net, taking same-day reviews into account only mildly improves their ability to predict the probability of recall. It doesn't prove (in a rigorous sense) that same-day reviews are useless, but it's strong evidence that they are only mildly beneficial for forming long-term memories, and that you won't lose much by only doing one review per day. And since the code and formulas used in FSRS and GRU-P are very different, the fact that the improvement is small for both of them suggests that it doesn't matter what you do with same-day reviews, they just suck anyway.
You might be thinking “But what if the dataset just has very few same-day reviews? Then it would appear that, on average, their impact is small.” That’s a valid concern, but in the Anki 20k dataset, 24.6% of reviews are same-day reviews. So clearly, lack of data isn’t an issue.
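For anyone curious how a number like that 24.6% is obtained, counting same-day reviews from a review log is straightforward; the snippet below is a hedged sketch with a hypothetical log layout, not the actual dataset format.

```python
# A hedged sketch of how one might count same-day reviews in a review log:
# a review is "same-day" if the same card was already reviewed earlier that
# calendar day. The log below is hypothetical, not the benchmark dataset.

from datetime import date

revlog = [  # (card_id, review date)
    (1, date(2024, 3, 1)), (1, date(2024, 3, 1)), (1, date(2024, 3, 2)),
    (2, date(2024, 3, 1)), (2, date(2024, 3, 4)), (2, date(2024, 3, 4)),
]

seen = set()
same_day = 0
for card_id, day in revlog:
    if (card_id, day) in seen:
        same_day += 1  # this card was already reviewed earlier the same day
    else:
        seen.add((card_id, day))

print(f"{same_day / len(revlog):.1%} of reviews are same-day reviews")  # 33.3% here
```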
Thanks! If I recall correctly, Andy Matuschak's research showed exactly the same thing (same-day reviews not significantly increasing long-term memory).
But I was more referring to "Additionally, it appears that sleep actually helps with memory consolidation, as opposed to a common belief that you forget things after sleeping." I'm aware of the general academic research that shows this, but not the analysis using Anki data, which seems super interesting.
Thanks again
Or is that implied?