New progress in implementing the custom algorithm

This is an interesting project. I’m doing some rough comparisons between the intervals given by Anki’s SM-2 and the FSRS algorithm, and it seems like FSRS gives more reviews?

Assuming default Anki values, where the starting ease is 250% and the interval modifier is 100%, and assuming we always press the Good button, the interval is calculated as new interval = old interval * 100% * 250% = old interval * 2.5. So essentially the formula is 2.5^x if you press Good every time.

That means that with Anki’s SM-2 algorithm, you will see these intervals in days (assuming there is no fuzz):

0, 1, 3, 8, 20, 50, 125, 313, 783, 1958, 4894, ...
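
For concreteness, here is that compounding in a few lines (a sketch assuming each interval is rounded half up to whole days before the next multiplication; Anki’s exact rounding can differ by a day near the tail):

import math

ease = 2.5               # default starting ease, 250%
interval_modifier = 1.0  # default interval modifier, 100%
interval = 1             # first Good interval in days
intervals = [0, interval]
for _ in range(9):
    # new interval = old interval * interval modifier * ease, rounded half up
    interval = math.floor(interval * ease * interval_modifier + 0.5)
    intervals.append(interval)
print(intervals)  # [0, 1, 3, 8, 20, 50, 125, 313, 783, 1958, 4895]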

And according to the FSRS algorithm in your notebook at https://github.com/open-spaced-repetition/fsrs4anki/blob/main/fsrs4anki_optimizer.ipynb, you get these intervals:

# You can see the memory states and intervals generated by FSRS as if you press Good in each review at the due date scheduled by FSRS.
0, 3, 6, 11, 20, 36, 63, 106, 175, 282, 445, 689, 1048, 1568, 2310, 3355, ...

And if I understand your code properly, these are the intervals given if you press Good every time with requestRetention = 0.9. If you set requestRetention to 0.85, for example, then you would essentially multiply each of these stability values (intervals) by ln(0.85) / ln(0.90):

(tensor(2.5952), tensor(4.5839), tensor(0.))
(tensor(5.5947), tensor(4.5692), tensor(0.))
(tensor(11.0884), tensor(4.5624), tensor(0.))
(tensor(20.3944), tensor(4.5631), tensor(0.))
(tensor(36.1469), tensor(4.5649), tensor(0.))
(tensor(62.6541), tensor(4.5653), tensor(0.))
(tensor(106.1411), tensor(4.5648), tensor(0.))
(tensor(174.9307), tensor(4.5649), tensor(0.))
(tensor(282.0350), tensor(4.5649), tensor(0.))
(tensor(445.2276), tensor(4.5649), tensor(0.))
(tensor(689.3261), tensor(4.5650), tensor(0.))
(tensor(1048.3844), tensor(4.5650), tensor(0.))
(tensor(1568.3268), tensor(4.5650), tensor(0.))
(tensor(2310.3726), tensor(4.5651), tensor(0.))
(tensor(3355.0225), tensor(4.5651), tensor(0.))

So you would see:

# FSRS algorithm, with 85% requested retention
0, 4, 9, 17, 31, 56, 96, 163, 269, 434, ...

Similarly, for 80% requested retention:

# FSRS algorithm, with 80% requested retention
0, 5, 12, 23, 43, 76, 132, 223, 367, 592, ...
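
For reference, the conversion I’m assuming is t = S * ln(requestRetention) / ln(0.9), which comes from solving the forgetting curve 0.9^(t/S) = requestRetention for t. A quick sketch using the stabilities printed above (values can be off by a day here and there, since the stabilities are shown rounded):

import math

def next_interval(stability, request_retention):
    # Solve 0.9 ** (t / S) = request_retention for t.
    return round(stability * math.log(request_retention) / math.log(0.9))

stabilities = [2.5952, 5.5947, 11.0884, 20.3944, 36.1469,
               62.6541, 106.1411, 174.9307, 282.0350, 445.2276]
for r in (0.85, 0.80):
    print(r, [next_interval(s, r) for s in stabilities])
# 0.85 -> approximately the 85% list above, 0.80 -> the 80% list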

According to the True Retention add-on https://ankiweb.net/shared/info/613684242, my monthly true retention is 86.9% (young + mature combined) and my supermature rate is 85.2% (cards with an interval >= 100 days). And according to True Retention by Card Maturity Simplified https://ankiweb.net/shared/info/1779060522, my monthly mature retention is 81.6% and my monthly young retention is 89.2%.

[screenshot: True Retention statistics]

Summary:

# fsrs (90% retention rate target)
0, 3, 6, 11, 20, 36, 63, 106, 175, 282, 445, 689, 1048, 1568, 2310, 3355, ...

# fsrs (85% retention rate target)
0, 4, 9, 17, 31, 56, 96, 163, 269, 434, ...

# fsrs (80% retention rate target)
0, 5, 12, 23, 43, 76, 132, 223, 367, 592, ...

# Anki SM-2 algorithm
0, 1, 3, 8, 20, 50, 125, 313, 783, 1958, ...

It seems that you would get more reviews using FSRS than Anki, even if you set requestRetention to 80%, assuming you always press Good on a card. I have 80-85% retention using Anki’s simple SM-2 algorithm. There doesn’t seem to be much benefit in using FSRS if it increases the number of reviews. Is my math/understanding wrong?


If you change the requested retention, the interval will change, and the memory states will change, too.

The retention of mature cards is less than that of young cards. This means the interval given by SM-2 is too long for mature cards (or too short for young cards).


Oh okay, so my statement is incorrect then. Is it possible to simulate and see the memory states and intervals for different requested retention values? In your fsrs4anki_optimizer.ipynb code, I don’t think there’s currently a way to change requestRetention.

The retention of mature cards is less than that of young cards. It means that the interval given by SM-2 is too long for mature cards (or too short for young cards).

Agreed. I don’t feel confident remembering some cards, especially when the interval jumps from 783 days to 1958 days with Anki SM-2, assuming I press Good every time. It is a simple formula. However, it does feel like FSRS schedules cards more often than Anki SM-2, especially at 90% requested retention. I’d be interested in simulating the memory states/intervals for 85% and 80%, for example, to see the comparison and potentially lower the number of reviews per day.


I’m developing it on Colab. The last cell is replaced by the following code.

import numpy as np
import torch

# `model` and `lineToTensor` are defined in the earlier cells of the notebook.
requestRetention = 0.9

class Collection:
    def __init__(self):
        self.model = model

    def states(self, t_history, r_history):
        # Replay a comma-separated interval/rating history and return the
        # final memory state (stability, difficulty, ...).
        with torch.no_grad():
            line_tensor = lineToTensor(list(zip([t_history], [r_history]))[0])
            output_t = [(self.model.zero, self.model.zero, self.model.zero)]
            for input_t in line_tensor:
                output_t.append(self.model(input_t, *output_t[-1]))
            return output_t[-1]

my_collection = Collection()
t_history = "0"  # past intervals in days
r_history = "3"  # past ratings; 3 = Good
for i in range(15):
    states = my_collection.states(t_history, r_history)
    print(states)
    # Interval that hits the requested retention on the forgetting curve
    # R(t) = 0.9 ** (t / S): t = S * ln(requestRetention) / ln(0.9).
    next_t = round(float(np.log(requestRetention) / np.log(0.9) * states[0]))
    t_history += f",{next_t}"
    r_history += ",3"
print(t_history)
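
If you want to compare several requested retentions in one run, you could wrap the loop in a small helper like this (a sketch, not part of the notebook):

def simulate(request_retention, n_reviews=15):
    # Same simulation as above, parameterized by the requested retention.
    t_history, r_history = "0", "3"
    for _ in range(n_reviews):
        states = my_collection.states(t_history, r_history)
        next_t = round(float(np.log(request_retention) / np.log(0.9) * states[0]))
        t_history += f",{next_t}"
        r_history += ",3"
    return t_history

for r in (0.9, 0.85, 0.8):
    print(r, simulate(r))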

For retention = 0.9:

(tensor(3.3262), tensor(3.7424), tensor(0.))
(tensor(6.7031), tensor(3.7518), tensor(0.))
(tensor(14.4613), tensor(3.7476), tensor(0.))
(tensor(29.7728), tensor(3.7506), tensor(0.))
(tensor(62.1181), tensor(3.7499), tensor(0.))
(tensor(128.0619), tensor(3.7501), tensor(0.))
(tensor(262.3655), tensor(3.7501), tensor(0.))
(tensor(533.5971), tensor(3.7503), tensor(0.))
(tensor(1079.0902), tensor(3.7502), tensor(0.))
(tensor(2166.8831), tensor(3.7502), tensor(0.))
(tensor(4323.1865), tensor(3.7502), tensor(0.))
(tensor(8569.5469), tensor(3.7502), tensor(0.))
(tensor(16880.3711), tensor(3.7502), tensor(0.))
(tensor(33043.2891), tensor(3.7502), tensor(0.))
(tensor(64286.9141), tensor(3.7502), tensor(0.))
0,3,7,14,30,62,128,262,534,1079,2167,4323,8570,16880,33043,64287

For retention = 0.85:

(tensor(3.3262), tensor(3.7424), tensor(0.))
(tensor(8.9401), tensor(3.6960), tensor(0.))
(tensor(24.5108), tensor(3.6439), tensor(0.))
(tensor(66.4316), tensor(3.5932), tensor(0.))
(tensor(178.0517), tensor(3.5438), tensor(0.))
(tensor(476.5527), tensor(3.4936), tensor(0.))
(tensor(1268.2266), tensor(3.4436), tensor(0.))
(tensor(3359.2651), tensor(3.3937), tensor(0.))
(tensor(8858.8408), tensor(3.3436), tensor(0.))
(tensor(23259.6816), tensor(3.2936), tensor(0.))
(tensor(60813.8711), tensor(3.2436), tensor(0.))
(tensor(158362.1406), tensor(3.1936), tensor(0.))
(tensor(410787.4062), tensor(3.1436), tensor(0.))
(tensor(1061627.6250), tensor(3.0936), tensor(0.))
(tensor(2733930.7500), tensor(3.0436), tensor(0.))
0,5,14,38,102,275,735,1956,5182,13665,35878,93806,244274,633641,1637564,4217096

For retention = 0.8:

(tensor(3.3262), tensor(3.7424), tensor(0.))
(tensor(11.1564), tensor(3.6436), tensor(0.))
(tensor(37.9256), tensor(3.5408), tensor(0.))
(tensor(127.0566), tensor(3.4415), tensor(0.))
(tensor(426.3553), tensor(3.3415), tensor(0.))
(tensor(1430.4845), tensor(3.2415), tensor(0.))
(tensor(4800.1455), tensor(3.1415), tensor(0.))
(tensor(16114.7295), tensor(3.0415), tensor(0.))
(tensor(54157.2109), tensor(2.9415), tensor(0.))
(tensor(182300.8750), tensor(2.8415), tensor(0.))
(tensor(614981.5000), tensor(2.7415), tensor(0.))
(tensor(2080359.), tensor(2.6415), tensor(0.))
(tensor(7061487.5000), tensor(2.5415), tensor(0.))
(tensor(24067658.), tensor(2.4415), tensor(0.))
(tensor(82427336.), tensor(2.3415), tensor(0.))
0,7,24,80,269,903,3030,10166,34129,114700,386096,1302472,4406002,14955559,50973012,174573264
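
You can see from these runs that this is not a constant rescaling of the 90% intervals: the per-review growth factor of stability itself increases at lower requested retention, because each review then happens at a lower retrievability. A quick check using the stabilities printed above:

runs = {
    0.90: [3.3262, 6.7031, 14.4613, 29.7728, 62.1181],
    0.85: [3.3262, 8.9401, 24.5108, 66.4316, 178.0517],
    0.80: [3.3262, 11.1564, 37.9256, 127.0566, 426.3553],
}
for retention, s in runs.items():
    # Ratio of each stability to the previous one.
    print(retention, [round(b / a, 2) for a, b in zip(s, s[1:])])
# 0.9  -> roughly 2.0x per review
# 0.85 -> roughly 2.7x per review
# 0.8  -> roughly 3.4x per review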

The first version of FSRS4Anki has been released!

Today I fixed some bugs and added a feature. Here is the latest beta version:
