I was wondering how to find the optimal* *target retention rate* (**TRR**).

\* In my case, *optimal* means: the average summed time per remembered card shall be minimal (or: the review effort over each card’s lifetime shall be minimal). This also allows me to lower the threshold at which it is efficient to add material to my deck (see: gwern_net, spaced-repetition#how-much-to-add).

### Known approaches

I know of three approaches to find the optimal *target retention rate* for fsrs4anki:

1. Suggested retention from *fsrs-optimizer*
2. Compute optimal retention within Anki >= 23.10 (deck options)
3. Running *fsrs4anki_simulator.ipynb* and comparing different scenarios (… and intuition)

## Selected approach

Regarding *1.* and *2.*, I don’t know yet what exactly is optimized for and how it really works. In addition, it is marked as experimental. So, in the meantime, I use variant *3.*, which is also the most intuitive for me personally.

This became especially interesting when @Expertium posted a plot on GitHub (target_retention_rate → workload). I wanted to have this kind of information for my decks. At first I used a “manual approach” with *fsrs4anki_simulator.ipynb*. However, I wanted to compare 8 decks with 40 simulated years each, across retention rates between 0.6 and 0.95.

(I guess that means 288 simulation runs) …
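The run count can be sanity-checked with a few lines; I’m assuming here that the range 0.6 to 0.95 is swept in steps of 0.01 (36 values), which is what makes 8 decks come out at 288 runs:

```python
# Rough count of simulation runs: 8 decks, retention rates 0.60 to 0.95
# in steps of 0.01 (an assumption; the exact step size may differ).
decks = 8
retention_rates = [round(0.60 + 0.01 * i, 2) for i in range(36)]  # 0.60 … 0.95
runs = decks * len(retention_rates)
print(runs)  # → 288
```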

**Running different scenarios with *fsrs4anki_simulator*:**

## Note regarding simulator modifications

99% of the simulator code remains unchanged, and credit goes to @L.M.Sherlock.

Changes for my use case:

- Setting up several sets of parameters so that the simulator can run a batch of scenarios with one execution.
- Iterating through a range of target retention rates for each parameter set (e.g. 0.6 to 0.95, step size 1 percentage point)
- Adding plot type (target_retention_rate → average time_per_remembered_card)
- Adding plot type (learn_day → total learn time)
- Adding plot type (learn_day → average time_per_remembered_card)
- Showing different retention rates on existing plot types instead of comparing FSRS to SM2.
- Running the simulator on multiple CPU cores with Python’s `multiprocessing` package
- Removing some parts of the simulator (SM2, file saves other than plots) … (made my life way easier*)

(*I wouldn’t call myself a programmer, and I’m not really used to working with GitHub or Python.)
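As a rough sketch of how such a batch can be parallelized with `multiprocessing`: the `simulate` function below is a hypothetical placeholder standing in for one simulator run, not the notebook’s real API; only the pool/fan-out pattern is the point.

```python
from itertools import product
from multiprocessing import Pool

def simulate(deck_params, target_retention):
    """Placeholder for one simulator run; the real notebook code would
    compute workload / time per remembered card here."""
    # Dummy result so the sketch is runnable.
    return {"deck": deck_params["name"],
            "trr": target_retention,
            "time_per_card": 0.0}

def run_batch(param_sets, retentions, processes=4):
    # One job per (deck parameter set, target retention) combination …
    jobs = list(product(param_sets, retentions))
    # … farmed out to a pool of worker processes.
    with Pool(processes=processes) as pool:
        return pool.starmap(simulate, jobs)

if __name__ == "__main__":
    decks = [{"name": f"deck_{i}"} for i in range(8)]
    rates = [round(0.60 + 0.01 * i, 2) for i in range(36)]
    results = run_batch(decks, rates)
    print(len(results))  # one result per scenario
```

Each `(deck, retention)` pair is independent, so this is embarrassingly parallel and the pool size can simply match the number of CPU cores.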

**My Conclusion regarding target retention rate (TRR)**

#### The optimal target retention rate seems to depend on the remaining learn days (number of simulated days)

e. g. 2 study-years …

(lowest time per card (or optimal TRR) somewhere at 0.75)

… compared to 40 years (same deck, same fsrs-weights, same number of cards):

(TRR more like 0.87)

→ A higher TRR seems to pay off in the long run, while a lower TRR reduces the workload in the beginning.

##### … So I think it makes sense to consider the expected lifetime of cards

- E.g. if I learn material at age 40 and the cards should last until I’m 80 years old → run the simulation with 14600 days (40 y).*
- E.g. if I learn material at age 60 and the cards should last until I’m 80 years old → run the simulation with 7300 days (20 y).*
- If I want to learn material for a test (card lifetime 2 years) → plot the results for 730 days.

*: One has to have long-term goals.
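The day counts above are simply (target age − current age) × 365. A trivial helper to compute the simulation horizon (my own illustration, `simulation_days` is not part of the simulator):

```python
def simulation_days(current_age, target_age, days_per_year=365):
    """Number of days to simulate so cards 'last' until target_age."""
    return (target_age - current_age) * days_per_year

print(simulation_days(40, 80))  # → 14600
print(simulation_days(60, 80))  # → 7300
```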

#### Optimal TRR is different for different material

(… comparing different decks, but keeping the same number of simulated learning days)

*Language material DE→IT:*

(see above)

*Same notes, but cards IT→DE:*

(Side note: I split the language decks. E.g. I find IT→DE cards easier than DE→IT, so the time per card is lower, and the optimal TRR also seems a bit lower.)

… and:

*40 years of completely different material* than the decks above (meaning different fsrs weights, different average answer times):

(here, the simulated deck has cards with shorter answer times in the past … maybe this also has an impact)

##### … I will consider the type of learning material

- (I think I knew this already )
- And, also trivial: when entering new material, I will consider the FSRS parameters and target retention rates of similar decks from my existing collection.

#### Regarding all eight simulated decks …

…, each optimal TRR clearly seems lower than 0.9 but higher than 0.7. I feel confident using a TRR between 0.8 and 0.88 for long-term material, and I would also consider 0.8 for short-term material. Going lower than 0.8 still feels demoralizing, even if a simulation might suggest lower TRRs.

**Final thoughts**

Maybe thinking too much about the algorithm and about finding the optimal parameters distracts from the actual learning. In the end it’s still a choice of intuition, and maybe the resulting review effort doesn’t change that much by choosing a different TRR. However, playing around with the new FSRS scheduler made using Anki even more interesting and fun. So thank you to everyone who created Anki and FSRS!

Open questions for me:

- Are there other recommended ways to find optimal target retention?
- What other optimization targets could be chosen, and why? (Overall time? Number of remembered cards? Limiting daily review time?)
- What impact will changing the target retention rate and rescheduling the cards of existing decks have? Will this increase the overall effort compared to leaving it unchanged?
- I ran one scenario where I changed the TRR after two simulated years within the same run, though without simulating a rescheduling of cards. → It made things worse … I won’t explore this further via simulation; however, I think I will still change TRRs (+ rescheduling) from time to time.

- And: in the end, you can only simulate a specific deck after the fact, when the main effort of learning the new material may already have been done. How to best choose parameters for fresh material?