I searched online for solutions to my problems with Anki intervals and found thread after thread on this forum in which members responded to support requests by telling users what they were experiencing. In particular, again and again people asking about unexpected interval behavior were told that basically nothing had changed and they were suddenly noticing the fuzz factor only because it’s now shown above the rating buttons.
This is an assumption, and it’s not necessarily correct. The way to provide support to users—if providing support is the goal—is to respond to what individual people say about their own experience with the software, not to assume what people are experiencing with the software.
Assuming what users are experiencing does not solve problems and frankly comes off as arrogant.
That isn’t the fuzz factor though, is it? The value above the answer buttons shows you when you’ll see the card again (when it will be scheduled) depending on which button you press.
Fuzz factor nudges the interval slightly but shouldn’t have a negative effect on retention. I think the reasoning for adding a fuzz factor was to prevent cards introduced on the same day with the same interval (e.g. Good pressed for both) from appearing together every single time. With fuzz, they’re now more spread out.
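For intuition, here’s a minimal sketch of the idea in Python. This is my own simplification – the fuzz range, the short-interval cutoff, and the function name are assumptions for illustration, not Anki’s actual constants (in reality the fuzz range varies with interval length):

```python
import random

def fuzz_interval(interval_days: int, fuzz_fraction: float = 0.05) -> int:
    """Nudge an interval by a small random amount (illustrative only)."""
    if interval_days < 3:
        # Very short intervals aren't worth fuzzing.
        return interval_days
    delta = max(1, round(interval_days * fuzz_fraction))
    return interval_days + random.randint(-delta, delta)

# Two cards graded Good with the same 10-day interval no longer land
# on exactly the same day every time:
print(fuzz_interval(10), fuzz_interval(10))  # e.g. 9 11
```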
In fact, a recent version had a bug that made cards less fuzzed, so if someone says they’re suddenly noticing fuzz when they never did before, I’m not sure what we here should tell them.
I’m sorry that you found those answers off-putting, but I don’t see you pointing out many instances where the assumptions were incorrect. We all miss on occasion [and sometimes users just don’t like the answer, even if it is correct], but if folks around here are frequently jumping to assumptions that turn out to be unhelpful, that’s definitely something we discourage. Otherwise, I think @ferophila captured it.
We deal with a significant volume of low-effort questions (thankfully fewer here than on Reddit or Discord) – didn’t check the manual or the FAQ, didn’t search the Forums (or even Google), ignored the guidance in the topic template, ignored the Discourse suggestions of other topics to look at before posting, XY Problems, etc. – or users who just don’t explain the issue very well. Once you’ve gotten familiar with the “typical” incorrect descriptions of an issue, it’s generally not worth the user’s time, or yours, to have a lot of back-and-forth before getting down to business.
Asking a bunch of clarifying questions up front delays getting the user to the resources and answers that will help them. From my perspective, even when I do ask clarifying questions up front for information I really need, users often don’t answer. So it doesn’t make sense to take the time to do that when it’s not absolutely essential, and we can give a 95%-of-the-time answer quickly. If that’s not the right answer for this user, it gives them a chance to clarify, and us a chance to expand the answer, though that does usually mean extra hours or days getting to a solution.
You picked a thorny one – where users often have to have their basic impressions corrected, and answers might start at the “I can’t help you solve your problem because what you think is happening isn’t really what is happening”-level. That is harder to do, but in the end, it’s the best answer for that situation.
That particular issue relates to the forced transition from the v2 to v3 scheduler. With v2, fuzz was applied after you graded your answer, so some users had the mistaken impression that the intervals they saw on the buttons were rigid and dependable, even sacred.
When the v3 scheduler started showing them the fuzz up front – on the buttons, before grading – some perceived that as a huge and outrageous change rather than the tiny change it really was. The user-education piece for that was (and continues to be!) tough.
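Roughly, the difference looks like this – again a sketch with made-up names, not Anki’s actual code:

```python
import random

def fuzz(interval: int) -> int:
    # Same illustrative fuzz idea as above.
    delta = max(1, round(interval * 0.05))
    return interval + random.randint(-delta, delta)

def v2_buttons(base_intervals: dict[str, int]) -> dict[str, int]:
    # v2-style: buttons display the raw intervals; fuzz is applied
    # only after grading, so what you see can differ from what
    # actually gets scheduled.
    return base_intervals

def v3_buttons(base_intervals: dict[str, int]) -> dict[str, int]:
    # v3-style: fuzz is rolled into what the buttons display, so the
    # number you see is the number you get.
    return {ease: fuzz(ivl) for ease, ivl in base_intervals.items()}

base = {"hard": 7, "good": 10, "easy": 14}
print(v2_buttons(base))  # always {'hard': 7, 'good': 10, 'easy': 14}
print(v3_buttons(base))  # e.g. {'hard': 7, 'good': 11, 'easy': 13}
```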