## Technical Analysis
The current system handles Hard intervals differently depending on the learning phase. In the first learning step, the Hard interval is calculated as the average of the Again and Good intervals, which forces an unnecessary coupling of parameters. For later steps, the system returns the previous step’s interval, which works as intended and requires no modification.
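A minimal sketch of that behavior, assuming two learning steps given in minutes (the function name, 0-based indexing, and step values are illustrative, not Anki’s actual code):

```python
def current_hard_interval(step_index: int, steps_minutes: list) -> float:
    """Model of the behavior described above (step_index is 0-based)."""
    again = steps_minutes[0]                  # Again restarts at the first step
    if step_index == 0:
        good = steps_minutes[1] if len(steps_minutes) > 1 else again
        return (again + good) / 2             # first step: average of Again and Good
    return steps_minutes[step_index - 1]      # later steps: previous step's interval

print(current_hard_interval(0, [1, 10]))  # 5.5 minutes on the first step
print(current_hard_interval(1, [1, 10]))  # 1 minute on the second step
```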
## Core Problem
The averaging calculation in the first step creates two significant issues (see the worked example after this list):

- Creates illogical spacing for unknown cards requiring immediate review
- Forces coupling between the first and second learning step parameters:
  - The second parameter must be carefully chosen with the first in mind
  - Limits flexibility in setting optimal intervals
  - Restricts the ability to set a longer second step without affecting first-step timing
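As a quick worked example, assuming learning steps of 1m and 10m (illustrative values only), the averaging ties the first-step Hard interval to whatever the second step happens to be:

```python
steps = [1, 10]                    # assumed learning steps in minutes
print((steps[0] + steps[1]) / 2)   # 5.5 -> first-step Hard with a 10m second step

steps = [1, 30]                    # lengthen only the second step...
print((steps[0] + steps[1]) / 2)   # 15.5 -> ...and first-step Hard moves with it
```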
## User Impact
This change is particularly crucial for Auto Advance users who:

- Use only the Hard/Good buttons for efficiency
- Rely on intuitive and objective criteria for quick decision-making
- Need consistent and logical review intervals
## Proposed Solution
Modify only the first learning step’s Hard interval behavior:

```python
if learning_step == 1:
    hard_interval = again_interval  # Match Again interval for first step
else:
    hard_interval = previous_step_interval  # Keep current behavior
```
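A minimal before/after sketch under the same assumed 1m/10m steps (illustrative values, not Anki’s code):

```python
steps = [1, 10]                                        # assumed learning steps in minutes

current_first_step_hard = (steps[0] + steps[1]) / 2    # 5.5 minutes (averaging)
proposed_first_step_hard = steps[0]                    # 1 minute (matches Again)

print(current_first_step_hard, proposed_first_step_hard)
```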
This modification would:

- Create a logical progression for initial learning
- Allow independent parameter setting
- Enable more flexible learning step configurations
- Support an efficient Auto Advance workflow
- Maintain consistent review patterns for unknown cards
You should be using the Again/Good buttons for rating cards, not Hard/Good. Hard is supposed to be used as a “Pass”, so it makes sense for its interval to depend on what you have as the second step (which the Good button uses).
The solution doesn’t make sense to me. Did you write it yourself? Why should Again and Hard intervals be the same?
```python
if learning_step == 1:
    hard_interval = again_interval  # Match Again interval for first step
else:
    hard_interval = previous_step_interval  # Keep current behavior
```
According to research, memory consists of storage strength and retrieval strength. While storage strength does not decay, retrieval strength decreases over time.
Benefits of Hard/Good usage:

- Partial recall is more effective for learning than complete reset
- Spaced practice is 74% more effective than massed practice
- Active recall is 51% more effective than passive review
Therefore, for users processing large volumes of cards:

- Hard maintains storage strength while adjusting retrieval difficulty
- Again creates inefficient massed repetition
- Hard provides optimal spacing for partial recall
The “Forget to Learn” theory suggests that partial decay in retrieval strength can enhance long-term retention.
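A minimal sketch of that two-component idea, assuming a simple exponential decay of retrieval strength (the formula and the decay-rate relationship are illustrative assumptions, not part of any published model or of Anki’s scheduler):

```python
import math

def retrieval_strength(storage_strength: float, days_elapsed: float) -> float:
    """Toy model: retrieval strength decays over time, more slowly when storage strength is high."""
    return math.exp(-days_elapsed / storage_strength)

# Higher storage strength keeps retrievability usable for longer, which is the
# argument above for spacing reviews out rather than massing them.
for s in (1.0, 5.0):
    print(s, round(retrieval_strength(s, days_elapsed=2.0), 3))
```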
I don’t get what you’re trying to say. You suggested making Again and Hard the same for the first step. Now you say the Again interval is not good, yet you want Hard to be the same as Again?
I will ask with all due respect: are you writing this yourself? There seems to be an influx of AI-generated content in various communities.
Learning steps should follow user-defined parameters consistently. While every other step does this, the first step’s Hard interval creates an unexpected average, breaking this logical pattern.
I write these explanations from direct Anki experience, using AI as a supplementary tool for refinement. This reflects today’s reality, where AI-human collaboration has become essential: not to replace human thinking, but to enhance our problem-solving capabilities.
Now I don’t know what you want and what the AI wants. I believe that code was written by the LLM; it basically makes the Hard interval no different from Again. That’s less useful than what Anki does currently.
You don’t understand the simple logic and are calling this an AI problem. If Anki continues to operate like this, I will build my own app, and it will completely replace Anki within 10 years.
I support optimizing the learning step module, but my proposal is not more flexible learning step configurations. Instead, an ML-powered short-term memory model is the game changer.
However, short-term memory is far more complex than I initially thought.
I’ve already conducted extensive research on this topic:
I don’t think the average user could do better than me, so enabling more flexible learning step configurations is not a good solution.