Image Occlusion - New Feature Concept

Hi, as a first year medical student I’m also a heavy Anki user.

When studying histology and anatomy, the first step is to learn the names of the structures. In Anki, the cards for this are usually either a custom image with an arrow pointing to the structure, or an IO (image occlusion) card hiding the description label.

However both card types have their flaws:

“Arrow” cards take a lot of time to create, since each one needs its own specific image, and it is easy to get overwhelmed while making them. This method also generates a large number of practically identical images, which results in bloated decks that take up disk space and are difficult to share. On the other hand, with multiple structures this method keeps the image easily legible, and it can be combined with a type-in card type to allow a quick check of correct spelling.

Classic IO cards speed up card creation immensely and do not create any redundant image files, but with multiple structures they quickly generate visual clutter and decrease the legibility of the card.

So the feature idea is to combine the best of both worlds: the speed and data efficiency of image occlusion with the legibility and feedback potential of the “arrow” method.

It could be built upon the current image occlusion toolkit, adding a new shape: an arrow with a linked text field. Creating an arrow (similar to the annotation feature in macOS’s Preview) would prompt you to type the name of the structure.

Unfortunately, I don’t have any experience with developing Anki add-ons, and I’m not sure whether these features are something the current IO could support. (u/Glutanimate will probably know.)

So what do you think? Would you find this feature useful? Or is there another way of achieving the same result?

Thanks for replies.


Hi, lines and potentially arrows are on their way as part of the upcoming annotation improvements. That will allow you to label completely unlabeled images, but the labels would still be part of the image.

I think supporting the use case you mention, where the image does not contain the labels and the prompt and answer are rendered separately, is not feasible with the current implementation. Consider that IO notes are effectively just an image cloze, and regular clozes also do not support this kind of separation between the context they appear in and the prompt location. In the text world, the solution would be a basic-like note type that splits the question and answer, and I think that also remains the most viable way to implement image-based prompts like this: one image field plus multiple numbered fields, one per label, conditionally generating a separate card for each.
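As a rough sketch of that note type (the field and prompt names here are illustrative, not an existing note type): one `Image` field plus `Label1`, `Label2`, … fields, with each card’s front template wrapped in a field conditional so the card is only generated when its label field is filled:

```html
<!-- Card 1, front template: only generated if Label1 is non-empty -->
{{#Label1}}
{{Image}}
<br>
Name structure 1:
{{type:Label1}}
{{/Label1}}

<!-- Card 1, back template -->
{{FrontSide}}
<hr id=answer>
{{Label1}}
```

Repeating this pattern with `Label2`, `Label3`, etc. yields one card per labelled structure, and the `{{type:...}}` filter doubles as the quick spelling check mentioned earlier in the thread.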

Generally speaking, since either using prelabelled images or adding the labels to the image yourself covers most use cases, it’s a bit hard to make the case for a completely different prompting approach like this. Consider also that this solution has disadvantages, like the answer not being visible at a glance and more visual parsing being needed.

If the visual clutter part is more secondary and the main thing you need is type-in-the-answer support, that’s something I’m hoping to add in the new IO add-on I’m working on, which will be based on the native implementation.

(I might be able to explore your exact use case as well in the future, but I have to focus on the core for now and getting the add-on out, as it’s already been too long)