Anki 23.10.1 Release Candidate

Hi all,

A new RC is available: Releases · ankitects/anki · GitHub

Due to the change of toolkit and the short timeframe before the next stable release, I’d appreciate it if as many people as possible could test it.

Continuing the discussion from Anki 23.10 Release Candidate - #130 by dae:

I need this:

In the FSRS4Anki Helper add-on, the reschedule feature also applies fuzz, but the fuzz factor is generated in Python, which I think is inconsistent with the fuzz factor generated by the Rust backend. So I hope Anki could expose card.get_fuzz_factor() to add-ons.
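
To make the request concrete, here is a minimal, purely illustrative sketch (not the real FSRS4Anki Helper or Anki backend code) of the mismatch being described: the add-on has to draw its own fuzz factor with Python’s random module, while the Rust backend derives its own value, so the two can disagree. The card.get_fuzz_factor() call shown in the comment is the hypothetical API being requested, not something Anki exposes today.

```python
import random


def python_side_fuzz_factor() -> float:
    # Illustration only: the add-on currently draws its own 0..1 fuzz factor
    # in Python, independently of whatever value the Rust backend uses for the
    # same card, so rescheduled intervals can be fuzzed differently.
    return random.random()


# What the post is asking for (hypothetical API, not present in Anki today):
#
#     factor = card.get_fuzz_factor()  # the backend's own 0..1 value for this card
#
# which would let the add-on apply exactly the same fuzz as the scheduler does.
```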

Yes, it is working on macOS 10.15.

RTL

[screenshots: image occlusion buttons rendering oddly in RTL mode]

There are no checksums?

You want the 0…1 number, and not the actual delta days it resolves to for a given interval?

Logged on Image occlusion buttons look odd in RTL mode · Issue #2808 · ankitects/anki · GitHub

I usually only bother with them if I expect the rc to become the stable release without further changes.

The delta days is OK for me.
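
For anyone following the exchange: the 0…1 number is the per-card fuzz factor, and the delta days are what that factor resolves to once an interval and its fuzz range are known. A rough sketch of the mapping, using a made-up ±5% window rather than Anki’s real fuzz ranges:

```python
def fuzz_delta_days(interval: int, factor: float) -> int:
    """Resolve a 0..1 fuzz factor into a day offset for a given interval.

    The +/-5% window below is an arbitrary illustration, not Anki's actual fuzz table.
    """
    lo = round(interval * 0.95)
    hi = round(interval * 1.05)
    return lo + round(factor * (hi - lo)) - interval


# The same 0..1 factor resolves to a different day offset at each interval:
for ivl in (10, 100, 365):
    print(ivl, fuzz_delta_days(ivl, 0.25))
```

The same factor therefore yields different day offsets at different intervals, which is presumably why either form would serve the add-on, as long as it matches what the backend uses.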

Hello,
Is it possible to have the Anki 23.10.1 RC and Anki 2.1 coexist on the same Mac?

I want to take part in the test, but I’m afraid it will break my decks.

I suggest renaming “Maximum interval” (in Advanced) to “Maximum interval (days)”. I just talked to a user who thought that the value of “Maximum interval” is measured in hundredths of a day. He wasn’t a troll; he thought 36500 means 365 days.
On that note, I also suggest making the default value of max. interval 3650, or roughly 10 years, instead of 100 years. Or even 1825 (5 years). I never understood why the default value is so high. Nobody will be using Anki for 100 years, and the time savings from setting it so high are negligible.
EDIT: here’s my reasoning for lowering the max. interval

  1. Suppose you spend 10 seconds per review. If your max interval is one year, the minimum amount of time that you will spend per year on a card is, well, 10 seconds. If your max interval is 100 years, you will spend less than 10 seconds per year on it. But do you really care about saving 10 seconds per year? Probably not, and definitely not as much as you care about actually knowing the content of the card (see the sketch after this list).
  2. If your max. interval is 1 year and you immediately forget the card after reviewing it, you will spend at most one year of your life not knowing it. If your max interval is 100 years and you immediately forget the card after reviewing it, you will spend the rest of your life not knowing the card. This is, of course, undesirable.
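
Point 1 can also be put in rough numbers. A small sketch using the post’s own 10-seconds-per-review assumption, showing the per-card yearly time floor at different caps:

```python
# Back-of-the-envelope restatement of point 1 above.
SECONDS_PER_REVIEW = 10  # assumption taken from the post

for cap_years in (1, 5, 10, 100):
    # A card sitting at the cap is seen once per cap period (assuming it is
    # never failed), so this is the least it can cost per year.
    floor_seconds_per_year = SECONDS_PER_REVIEW / cap_years
    print(f"max interval {cap_years:>3} y -> floor of {floor_seconds_per_year:5.2f} s/year per card")
```

Going from a 1-year cap to a 100-year cap therefore saves less than 10 seconds per card per year, which is the “negligible savings” point above.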

EDIT 2: I just saw another user trying to convince me that 36500 is 365 days. Dear god…
@dae please rename “Maximum interval” to “Maximum interval (days)” or “Maximum interval (in days)”

It’s not really “so high”; the feature is effectively disabled by default. I agree that users can get confused, because representations of values like Ease and Difficulty differ in units.

BUG

I regularly use the Find and Replace feature to replace $nbsp; in my notes with a " " (space). In 23.10.1 rc1, doing the same results in an error message: “replacement name cannot contain a space”.

@dae I recently saw you saying that the issue with “Hard” being longer than “Good” was fixed in the native implementation of FSRS, but this is what happens if I set the learning steps to “1d”, without any short steps.

rc2 is now available: Releases · ankitects/anki · GitHub. I need to get a new release out soon due to the issue running on older Macs, so this will hopefully be renamed as stable in the next day or two if no issues are found.

Yes, if it’s a recent 2.1.x, you can rename the .app or put them in different folders.

None of the other interval options like graduating interval, easy interval, or minimum interval have (days), so we should either do it for all of them, or none. I am surprised to see you’ve come across two users who assumed it was hundredths of a day, as I don’t recall ever seeing that before. And we do already allow the user to click on the description or question mark to get help on an item, which shows not only the units it’s in, but also explains what it does. And if users are assuming that it’s limited to 1 year, they’ll likely figure out their mistake once they see a card being scheduled for longer than a year.

I presume you mean &nbsp;, but I can’t reproduce the problem with $nbsp; either. Presumably turning off ‘treat input as a regular expression’ would fix it, though this may be a bug, and is perhaps related to the content of your notes?

I’m sorry, I was mistaken here. I was thinking of the logic Anki has in review mode to ensure Hard < Good < Easy, but that doesn’t apply when transitioning from the learning to the review state.

Yes, I meant &nbsp;; $nbsp; was a typo in the post. IIRC, ‘treat input as a regular expression’ was not enabled when I saw that bug. Nonetheless, I am unable to reproduce it in rc2, irrespective of whether the option is enabled or not. I didn’t recheck in rc1, though. Anyway, thanks.

RMSE has been expressed in percent, making it easier to interpret.

Good, less to keep in mind while comparing.

The number of decimal places in log loss has been increased from 3 to 4.

More to remember and compare (6→7 digits). Is it really so sensitive that 0.0010 or 0.0020 matters? What should I think of a 0.0010 increase in log loss and 1% decrease in RMSE?

The description of “Compute optimal retention” should probably say what it uses as input. I guess it uses Maximum interval, Desired retention and FSRS parameters, but not the actual cards.

Is it really so sensitive that 0.0010 or 0.0020 matters?

Kind of. The first digit after the decimal point practically never changes (for a given deck/collection). The second digit may change sometimes, but rarely. Also, log-loss changing by 1% could be accompanied by RMSE changing by 10-15%. RMSE is a much more intuitive metric, so I suggest paying attention to RMSE instead of the log-loss.

1% decrease in RMSE?

Do you mean “RMSE went from 5.00% to 4.95%” or “RMSE went from 5.00% to 4.00%”? If it’s the former, it doesn’t matter. If it’s the latter, that’s a pretty good improvement.

Log loss: 0.2508, RMSE(bins): 4.06%
Log loss: 0.2516, RMSE(bins): 4.03%
The 2nd log loss is worse by 0.00317965 (as a relative change), the 2nd RMSE is better by 0.007444169 (as a relative change).
0.007444169 ÷ 0.00317965 = 2.341191326.

Log loss: 0.2047, RMSE(bins): 3.96%.
Log loss: 0.2054, RMSE(bins): 3.78%.
0.047619048÷0.003407984 = 13.972790952

Log loss: 0.4527, RMSE(bins): 9.41%.
Log loss: 0.4639, RMSE(bins): 7.36%.
0.278532609÷0.024143134 = 11.53672133
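
In case it is unclear how those ratios were computed: each one appears to be the relative change in RMSE divided by the relative change in log loss, with both changes taken relative to the second run’s value (that reading reproduces the posted numbers). A short script to check:

```python
# (log loss 1, RMSE 1, log loss 2, RMSE 2) for the three comparisons above
runs = [
    (0.2508, 4.06, 0.2516, 4.03),
    (0.2047, 3.96, 0.2054, 3.78),
    (0.4527, 9.41, 0.4639, 7.36),
]

for ll1, rmse1, ll2, rmse2 in runs:
    rel_ll = abs(ll2 - ll1) / ll2          # relative change in log loss
    rel_rmse = abs(rmse2 - rmse1) / rmse2  # relative change in RMSE
    print(f"log loss {rel_ll:.4f}, RMSE {rel_rmse:.4f}, ratio {rel_rmse / rel_ll:.2f}")
```

The ratio being well above 1 in each case is the earlier point that RMSE moves much more, in relative terms, than log loss does.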

Then I suppose renaming them all is the best option.

Yeah, but it seems a lot of users don’t know about it.

Click Optimize and watch the number of reviews constantly change.