Anki Scheduler Broken?

I noticed that I’ve been reviewing similar cards for a LONG time and thought something might be wrong, so I set up a single-note deck called “test” with a simple front/back pattern and reviewed ahead while monitoring the ease and interval, along with my answer and the stated interval above the “Good” button.

It seems my cards get stuck at 1-day intervals, and I’m trying to figure out why. Here are my deck options:

New Cards
Learning steps 1m 10m
Graduating interval 1
Easy interval 4
Relearning steps 10m
Minimum interval 1
Maximum interval 36500
Starting ease 2.5
Easy bonus 1.3
Interval modifier 1
Hard interval 1.2
New interval 0

Here are the card stats as I progress through reviews:

| Due | Ease | Interval | Answer | Next Stated Interval (above Good) | Note |
|---|---|---|---|---|---|
| New #1000890 | (new) | (new) | Good | 10m | |
| 2023-02-28 | 0.00% | (learning) | Good | 1d | How can this be 1d already!? |
| 2023-03-01 | 250.00% | 1 day | Good | 1d | |
| 2023-03-01 | 250.00% | 1 day | Good | 1d | |
| 2023-03-01 | 250.00% | 1 day | | | |

Two things seem strange.

  1. Why would it jump from 10m to 1d interval so fast?
  2. It gets stuck at an interval of 1 day over and over. Why?

Any ideas? I’m using the V2 scheduler since I mainly use Anki on AnkiDroid, but it was easier to test this on the desktop version, where I can monitor the ease and interval.

The 1d step is what you have configured in your deck options. The repeated intervals on the same day are odd, though, and make me think you’re using add-ons, a filtered deck, or have changed the maximum interval in the options.
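Conceptually, graduation works like this (a simplified sketch of the learning-step logic, not Anki’s actual code): the final learning step does not get multiplied by the ease factor; pressing Good on it simply promotes the card to the configured graduating interval.

```rust
// Simplified sketch of learning-step graduation (NOT Anki's actual code).
// With learning steps [1m, 10m], pressing Good on the final step does not
// multiply 10m by the ease factor; the card graduates straight to the
// configured graduating interval (1 day here).
fn next_learning_interval(steps: &[u32], current_step: usize, graduating_interval_days: u32) -> String {
    if current_step + 1 < steps.len() {
        format!("{}m", steps[current_step + 1]) // advance to the next learning step
    } else {
        format!("{}d", graduating_interval_days) // graduate to the review queue
    }
}

fn main() {
    let steps = [1, 10]; // learning steps in minutes
    assert_eq!(next_learning_interval(&steps, 0, 1), "10m"); // Good on step 1 → 10m
    assert_eq!(next_learning_interval(&steps, 1, 1), "1d");  // Good on final step → 1d
    println!("ok");
}
```

So the 10m → 1d jump is just the card graduating, not an ease-based calculation.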

You mean the step from 10m to 1d is expected? I was under the impression that it should be 10m * the current ease (250%) or something like that. Can you point me to the location in the GitHub repo where the update to the interval takes place? I’d be curious to see the actual calculation (and hopefully this will help me debug my issue as well). I’m happy to build from source and try debugging with gdb to sort out my issue; I just need a tip in the right direction of where to start looking.

I don’t have any add-ons that I know of. I checked that the list was empty (Tools → Add-ons), though I’m not sure if they can exist elsewhere. The maximum interval is 36500. The decks I normally use are not filtered, but the “test” one used in this example is automatically entered into a filtered deck when I “review ahead” in order to test this out. Is there something about this filtering that would induce this behavior? Can I avoid it and test this more efficiently?

I notice the same behavior in my normal decks as well, though. For example:

  1. Open a normal deck to study and find CARD1.
  2. Find CARD1 in the card browser and note its current interval and ease.
  3. Press “Good” and note that the interval and ease are not updated.

Looking into this more, I realized two things:

  1. It appears Reviewing Ahead obeys different scheduling, so my “test” deck was not a good way to test this.
  2. In the desktop app card browser, I realized there is useful info about a card’s historical answers, ease, and status, shown by right-clicking the card and choosing “info…”. One snippet from one card is shown below:
| Date | Type | Rating | Interval | Ease | Time |
|---|---|---|---|---|---|
| 2023-03-01 @ 18:43 | Review | 3 | 3 days | 130.00% | 1m |
| 2023-02-26 @ 07:56 | Review | 3 | 3 days | 130.00% | 10.56s |
| 2023-02-23 @ 10:47 | Review | 3 | 3 days | 130.00% | 2.66s |
| 2023-02-20 @ 07:14 | Review | 3 | 3 days | 130.00% | 6.18s |
| 2023-02-17 @ 20:03 | Review | 3 | 3 days | 130.00% | 43.54s |
| 2023-02-14 @ 07:49 | Review | 3 | 3 days | 130.00% | 8.3s |
| 2023-02-10 @ 07:35 | Review | 3 | 4 days | 130.00% | 8.52s |
| 2023-02-05 @ 14:43 | Review | 3 | 5 days | 130.00% | 5.11s |
| 2023-01-29 @ 09:24 | Review | 3 | 7 days | 130.00% | 9.45s |
| 2023-01-24 @ 10:10 | Review | 3 | 5 days | 130.00% | 48.73s |
| 2023-01-21 @ 08:58 | Review | 3 | 3 days | 130.00% | 14.41s |
| 2023-01-19 @ 09:36 | Review | 3 | 2 days | 130.00% | 4.91s |
| 2023-01-18 @ 09:26 | Relearn | 3 | 1 day | 130.00% | 35.76s |
| 2023-01-17 @ 10:47 | Relearn | 3 | 1 day | 130.00% | 30.51s |
| 2023-01-16 @ 13:22 | Relearn | 3 | 6 hours | 130.00% | 2.49s |
| 2023-01-16 @ 11:34 | Relearn | 3 | 1 hour | 130.00% | 2.49s |
| 2023-01-16 @ 10:22 | Relearn | 1 | 10 minutes | 130.00% | 6.64s |
| 2023-01-16 @ 10:11 | Review | 1 | 10 minutes | 130.00% | 10.84s |
| 2022-12-28 @ 04:19 | Review | 3 | 20 days | 145.00% | 6.64s |
| 2022-12-14 @ 09:46 | Review | 3 | 13 days | 145.00% | 5.11s |
| 2022-12-07 @ 18:32 | Review | 2 | 7 days | 145.00% | 7.28s |
| 2022-11-28 @ 08:58 | Review | 3 | 9 days | 160.00% | 7.58s |
| 2022-11-24 @ 11:35 | Review | 3 | 4 days | 160.00% | 4.22s |
| 2022-11-22 @ 08:25 | Relearn | 3 | 1 day | 160.00% | 4.8s |

As you can see, from 2022-11-22 @ 08:25 up to 2023-01-29 @ 09:24 it appears normal, with resetting on relearn as expected and increasing intervals with “Good” (3 Rating). Then on 2023-02-05 @ 14:43 something weird happens: I answer Good (3 Rating), but the interval decreases from 7 days to 5 days.

My guess so far is that sometime between 2023-01-29 @ 09:24 and 2023-02-05 @ 14:43 I changed the deck settings. More specifically, I think I increased the longest steps under New Cards and Lapses. I can’t recall exactly, but based on the history from “info…” I’d guess I originally had relearning steps of 10m 1h 6h 1d, and that between those two dates I changed them to 10m 1h 6h 1d 2d 5d 10d 18d. What I think might be happening is that the 7-day interval at 2023-01-29 @ 09:24 is less than the largest relearning step (or learning step for new cards; the step lists are identical, by the way), and some logic within Anki hits an unforeseen state there, so the code responsible for updating the interval spits out nonsense. Why it settles from 7 days to 5 days to 4 days and then stays on 3 days is beyond me; I’d have to find this in the code to say more. Any help pointing me towards this interval-update logic is appreciated!

Please include a screenshot of the deck options for that card. Did you change any of the settings during the review period?

Everything should be the default settings (for the test deck).

Can you confirm this behavior is still odd even when “reviewing ahead”?

Here is the “info” on that card I’m using as a test:

Unless it’s helpful to look beyond it, just focus on the entries from 2023-03-02 @ 15:50, where I “forgot” the card from the browser and tried the sequence of tests again.

Your latest example looks normal: Filtered Decks - Anki Manual

Here is an example card from another deck where I was not using “review ahead” or filtered decks. What is especially strange is that the intervals appear to be increasing again as of TODAY. See the screenshot of one example below:

Settings for this deck are as follows:

Note this is a child deck of another deck with these options (I don’t know if this would affect anything, but posting just in case):

Custom scheduling is not pictured for either deck, but both are empty.

Any idea as to what could stall cards like this on a 2-day interval?

Another example from the same deck shown here:

Notice how the interval decreases from 2023-01-16 @ 10:07 until 2023-02-23 @ 10:12 despite the 3 Ratings and an Ease unchanged at 145%. Again, I’m not sure why the intervals are increasing again as of today. The only thing I can think of is that I must have changed some deck settings between 2023-01-16 @ 10:07 and 2023-02-23 @ 10:12 that caused this decreasing-interval behavior, and then sometime between 2023-02-23 @ 10:12 and 2023-03-02 @ 21:03 changed them to something that no longer caused the decrease.

I believe I had the interval modifier set to something like 0.65, based on what I had read in the manual. Is it expected behavior for intervals to decrease even when Good (a 3 Rating) is selected? This is again why I’d like to see the code governing this calculation/logic.

Starting to get a handle on how the project’s internals work. I had never used Rust, nor any Python-to-C or other external bindings in Python before, so there’s a fair learning curve, but it’s starting to come around. I was very impressed when I asked ChatGPT to find where the interval is updated in the GitHub repo, and it pointed me here. A good place to start, but I still needed to understand how the Rust code is bound. It looks like you’re using PyO3, if I’m not mistaken? Happy to try to reproduce this error once I understand the code base a bit better.

OK, so I’ve traced the “Good” button down to here, but I’d appreciate some help on how you would follow this beyond Python into the Rust code that governs the execution of `_run_command`. I think I can get there via rust-gdb, but I’m still struggling to get pretty-printing to work, so it’s a bit of a mess to look at.

OK, I think I found the Rust code I was interested in here. I still need to work out how to debug interactively to this point so I can evaluate expressions at runtime. When running tools/ from within rust-gdb, python doesn’t appear to have any of the Rust code as sources, so I’m not sure how to set a breakpoint that would land in the Rust code. Even print statements or the like would be helpful. Any pointers appreciated.

Just keeping notes for myself to follow up on tomorrow. It appears what I really need to be looking at is here; in particular, the possible suspects are:

  1. the constrain_passing_interval method
  2. the update to interval (line 151), with days_late being of particular interest
  3. something to do with fuzz?

Your settings have hit a corner case. With default settings, hard will always be at least one greater than the previous interval, and good a day higher than that, so that you make progress even when the card’s ease is low. You’ve set the hard factor to less than 1, meaning progress goes backwards when pressing hard. Anki can’t enforce the 1+ day increase in that case, and this has also resulted in the range for good being pulled down by one (though I’d expect to see it happen on a hard multiplier a bit lower than 0.8). I will address this in a future update; in the meantime, setting your multiplier above 1 should resolve it.
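To put rough numbers on that corner case (a simplified sketch of the constraint logic, not the actual scheduler code, and assuming the hard interval of 1.2 and interval modifier of 0.65 mentioned earlier in this thread):

```rust
// Simplified sketch of the corner case (NOT Anki's actual code).
// With default settings, hard >= previous + 1 and good >= hard + 1, so the
// interval always moves forward. An interval modifier below 1 breaks that,
// since the *effective* hard multiplier 1.2 * 0.65 = 0.78 is below 1.
fn constrain(interval: f32, minimum: u32, modifier: f32) -> u32 {
    ((interval * modifier).round() as u32).max(minimum).max(1)
}

fn main() {
    let (current, ease, hard_mult, modifier) = (7.0_f32, 1.45_f32, 1.2_f32, 0.65_f32);

    // hard: 7 * 1.2 * 0.65 = 5.46 → 5, i.e. *below* the current 7-day interval
    let hard = constrain(current * hard_mult, 0, modifier);

    // good's lower bound is hard + 1 = 6, so good's range is pulled down too:
    // 7 * 1.45 * 0.65 = 6.60 → rounds to 7, i.e. no progress at all
    let good = constrain(current * ease, hard + 1, modifier);

    println!("hard={hard} good={good}"); // hard=5 good=7
}
```

So pressing Hard would move the card backwards, and pressing Good can leave it parked on the same interval indefinitely.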

I’m not sure I follow. In the example I gave, I gave a 3 Rating every time after 2022-12-24 @ 00:11 within the Review state. During the period where the interval increases and then decreases, I never used the Hard button (Rating 2).

Reading through the code again, I have a few possible hypotheses. There appear to be two locations where the interval is updated in response to a Good (Rating 3) answer.

The first is when self.days_late() < 0, within the passing_early_review_intervals() function:

```rust
let good_interval = constrain_passing_interval(
    (elapsed * self.ease_factor).max(scheduled),
    // ... (remaining arguments truncated in this quote)
);
```

and the second is when self.days_late() >= 0, within the passing_nonearly_review_intervals() function:

```rust
let good_interval = constrain_passing_interval(
    (current_interval + days_late / 2.0) * self.ease_factor,
    hard_interval + 1,
    // ... (remaining arguments truncated in this quote)
);
```

The definition of the constrain_passing_interval() function is also important here:

```rust
/// Transform the provided hard/good/easy interval.
/// - Apply configured interval multiplier.
/// - Apply fuzz.
/// - Ensure it is at least `minimum`, and at least 1.
/// - Ensure it is at or below the configured maximum interval.
fn constrain_passing_interval(ctx: &StateContext, interval: f32, minimum: u32, fuzz: bool) -> u32 {
    let interval = interval * ctx.interval_multiplier;
    let (minimum, maximum) = ctx.min_and_max_review_intervals(minimum);
    if fuzz {
        ctx.with_review_fuzz(interval, minimum, maximum)
    } else {
        (interval.round() as u32).clamp(minimum, maximum)
    }
}
```

So I can expand the assignments to interval and good_interval and summarize them below for easier comparison.

passing_early_review_intervals() update:

```rust
interval = (elapsed * self.ease_factor).max(scheduled) * ctx.interval_multiplier;
good_interval = (interval.round() as u32).clamp(minimum, maximum)
```

passing_nonearly_review_intervals() update:

```rust
interval = (current_interval + days_late / 2.0) * self.ease_factor * ctx.interval_multiplier;
good_interval = ctx.with_review_fuzz(interval, minimum, maximum)
```

I still need to confirm, but I assume ctx.interval_multiplier refers to the interval modifier in the deck settings. I’m also not sure which gets updated first, scheduled (aka self.scheduled_days) or interval, but based on what I see here I’d guess interval is updated first and scheduled is set to that new value sometime later.

passing_early_review_intervals() update comments:

In the first case, passing_early_review_intervals(), interval cannot decrease before entering constrain_passing_interval, even if self.ease_factor < 100% or elapsed is negative, since .max(scheduled) holds it at its previously scheduled value (the previous interval) rather than letting it decrease.

On the other hand, the direct multiplication by ctx.interval_multiplier after entering constrain_passing_interval doesn’t seem appropriate here, as anything satisfying self.ease_factor * ctx.interval_multiplier < 1 would lead to either a decrease in interval or, at best, a rounding back up to the previous interval. You might want to either apply .max(scheduled) after multiplying by ctx.interval_multiplier, or change the minimum from 0 to scheduled, since I would never expect to press Good and have the interval decrease.

Also, the application of round could potentially cause the interval to get stuck forever on a single value (in the case where interval does not increase beyond 0.5 above the previous interval). Why not use a ceiling function (always round up) rather than round()? That way the interval would be guaranteed to increase by at least 1 without relying on hard_interval (as you do in the passing_nonearly_review_intervals version of the update). But maybe you want it to get stuck when reviewing early, so as not to let the interval increase indefinitely when reviewing ahead.
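To illustrate what I mean about round() creating a fixed point (my own sketch, not Anki’s code): once the multiplied value stays within 0.5 of the current interval, rounding snaps it back and the interval never moves.

```rust
fn main() {
    // With round(), a multiplier that adds less than 0.5 to the interval
    // never makes progress: round(5.0 * 1.09) = round(5.45) = 5, forever.
    let stuck = (5.0_f32 * 1.09).round() as u32;
    assert_eq!(stuck, 5); // Good pressed, interval unchanged

    // ceil() would guarantee at least one day of progress instead:
    let moved = (5.0_f32 * 1.09).ceil() as u32;
    assert_eq!(moved, 6);

    println!("round: {stuck}, ceil: {moved}");
}
```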

passing_nonearly_review_intervals() update comments:

Here days_late is guaranteed to be >= 0, so the only thing that could cause interval to decrease would be self.ease_factor being less than 100%. That was not the case based on the info in the screenshot, as ease_factor appears to be a constant 145% the whole time, so I don’t think this had anything to do with my issue.

Again though, after entering constrain_passing_interval you multiply by ctx.interval_multiplier, which can cause interval to decrease as per the comments above. This time it is not so straightforward, as we have to consider fuzz, and the minimum is changed to hard_interval + 1 (which I think is what your previous comment was about, now that I reread it).

Entering ctx.with_review_fuzz(interval, minimum, maximum), there appears to be another call to constrained_fuzz_bound, but after that I don’t see a line that modifies interval before a u32 value is returned. Within constrained_fuzz_bound I also don’t see any modification of interval, so I’m confused as to where fuzz is actually applied to the interval. As a result, I’m not sure whether fuzz contributes to this issue or not.
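For what it’s worth, I’m not sure this is how Anki does it, but a typical review-fuzz scheme picks a value inside a proportional band around the ideal interval and then clamps it. A hypothetical sketch (none of these names or constants are from the Anki source):

```rust
// Hypothetical fuzz sketch (NOT Anki's actual implementation): pick a value
// in a proportional band around the ideal interval, then clamp. `pick` stands
// in for the random choice: 0.0 = lower edge of the band, 1.0 = upper edge.
fn fuzzed_interval(interval: f32, minimum: u32, maximum: u32, pick: f32) -> u32 {
    let spread = (interval * 0.05).max(1.0); // e.g. ±5%, at least ±1 day
    let lower = interval - spread;
    let upper = interval + spread;
    let chosen = lower + (upper - lower) * pick;
    (chosen.round() as u32).clamp(minimum.max(1), maximum)
}

fn main() {
    // A 14.1-day ideal interval can legitimately come out as 13 days if the
    // pick lands at the low edge, which would mask a decreasing trend.
    assert_eq!(fuzzed_interval(14.1, 1, 36500, 0.0), 13);
    assert_eq!(fuzzed_interval(14.1, 1, 36500, 1.0), 15);
    println!("ok");
}
```

If the real fuzz works anything like this, a downward pick could compound the interval-modifier decrease.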


The direct multiplication by ctx.interval_multiplier definitely did contribute, if I’m correct in assuming it is the interval modifier from the deck settings. If self.ease_factor * ctx.interval_multiplier < 1, my interval would decrease. In the example screenshot from my last posts, I had ease_factor = 1.45 and I believe an interval modifier of 0.65. Assuming I was within passing_nonearly_review_intervals(), that would lead to an update of:

```rust
interval = (15 + 0 / 2.0) * 1.45 * 0.65; // 14.1375, a decrease of almost 1 day
good_interval = ctx.with_review_fuzz(interval, minimum, maximum)
```

Now, I’m not sure how fuzz actually modifies the value of interval, but I assume it adds some range around the value and rounds. If that fuzz were near zero (or potentially even negative), one can easily see the issue.

Continuing down my screenshot data for another example:

```rust
interval = (11 + 0 / 2.0) * 1.45 * 0.65; // 10.3675, a decrease of almost 1 day
good_interval = ctx.with_review_fuzz(interval, minimum, maximum)
```

Essentially, each Good answer multiplies the interval by 1.45 * 0.65 = 0.9425, a 5.75% decrease. Eventually the interval gets stuck at some low value because, between the rounding/fuzz and the 5.75% decrease, it can never drop far enough below 4.5 (in this example) and keeps rounding back up to 5.
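This decay can be simulated directly (my own sketch of the arithmetic, ignoring fuzz, days_late, and the hard_interval + 1 minimum):

```rust
// Simulate repeated Good answers with ease 1.45 and interval modifier 0.65
// (combined multiplier 0.9425), ignoring fuzz and days_late.
fn main() {
    let (ease, modifier) = (1.45_f32, 0.65_f32);
    let mut interval: u32 = 15;
    let mut history = vec![interval];
    for _ in 0..10 {
        interval = ((interval as f32) * ease * modifier).round() as u32;
        history.push(interval);
    }
    // The interval shrinks ~5.75% per review until rounding pins it:
    println!("{history:?}"); // [15, 14, 13, 12, 11, 10, 9, 8, 8, 8, 8]
}
```

With these numbers the fixed point happens to be 8 days (round(8 * 0.9425) = round(7.54) = 8); with other minimums in play the card can pin at a lower value, as in my screenshots.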

Then, after I increased the interval modifier back to 1.00, the combined multiplier of the ease_factor and interval modifier became greater than 1, and the intervals began to increase again.

I’d say this contradicts what is written in the documentation about the interval modifier, in that anything below 1.0 has the potential to trigger this issue.

I did not say you used the Hard button - please read what I wrote again. I’ve provided you a workaround, and a fix is already in the development code.

I’m sorry if I did something to offend you regarding the day boundaries thread, but I did in fact read it again, and I mentioned that in my post, which I spent a fair bit of time on, along with a logical review of the existing code’s shortcomings with respect to this issue and possible solutions. In return I got a dismissive tone, without even a pointer to the commit/branch where this claimed fix is implemented, and no feedback on the shortcomings or solutions I pointed out.

Gotta say I’m pretty disappointed by the response to this. I was hoping to contribute to Anki, but given the tone here, these things don’t sound too open to discussion :frowning:

I’m sorry, I did not mean to come across as unwelcoming, but I was a little frustrated. I took the time to diagnose the cause of your stalled steps, fix it, and give you a workaround until a new update, and you basically ignored what I’d written, and started hypothesizing about what the cause might be. If you still believe there are issues with the code, please bring them up on the issue tracker (the forums are good for reports about behavior you don’t expect; the issue tracker is good if you’re a developer and want to discuss the code).


I apologize if it came across that way, that was not my intent. I appreciate your time and effort, I just wanted to understand above and beyond being told what to do. I wanted to understand why it works or doesn’t work.

I didn’t ignore your workaround, I just did not understand why it would work, so I dove into the code further to understand why, and noted my progress in a rather lengthy post, which eventually led me to understanding what you meant, as noted below:

Without digging into the code to see that dependence on hard_interval + 1 I didn’t understand what you were proposing. Reading code or math makes a lot more sense to me personally than the descriptions in the docs.

So in summary, I think your workaround potentially fixes part of the problem, but not the whole thing. In particular, this issue could potentially still persist due to:

I’m starting to get a handle on the rust-gdb method for debugging/testing runtime calculations like this, so I will report back when I have a good way of testing this. I also noticed your Rust tests in the code, so I will have a look at how to use those to demonstrate what I am trying to convey here.

I have relied heavily on Anki on a daily basis since 2012, so I do dearly hope I can contribute. I’ll continue any further discussion on GitHub, specific to each issue, as I progress through my understanding.

I hope for your kind assistance hereafter.


A small rust test is probably the best way to demonstrate issues for discussion, as it’s easy to reproduce and lets us focus on what’s important.
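For example, something in this shape (a hypothetical stand-in; a real test would call Anki’s actual answer/state helpers rather than this toy calculation):

```rust
// Hypothetical shape of such a test (NOT Anki's API): a stand-in for the
// Good-interval calculation, plus a test documenting the surprising behavior.
fn good_interval(current: u32, ease: f32, modifier: f32) -> u32 {
    ((current as f32) * ease * modifier).round().max(1.0) as u32
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn good_with_low_interval_modifier_shrinks_interval() {
        // Documents the issue under discussion: ease 1.45 * modifier 0.65 < 1,
        // so a Good answer moves a 15-day interval *down* to 14.
        assert_eq!(good_interval(15, 1.45, 0.65), 14);
    }
}
```

A test like this keeps the discussion anchored to concrete inputs and outputs.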
