Importing Deck Name from CSV

I am generating a CSV to import into Anki according to the docs. The import works in general, but it seems to ignore the deck column I specifically set up:

#deck column:1
Genki Kanji::Lesson 23::Lesson 22::Lesson 21::Lesson 20::Lesson 19::Lesson 18::Lesson 17::Lesson 16::Lesson 15::Lesson 14::Lesson 13::Lesson 12::Lesson 11::Lesson 10::Lesson 9::Lesson 8::Lesson 7::Lesson 6::Lesson 5::Lesson 4::Lesson 3,<h1>映</h1>,"<more html=""""/>"
[more rows cut...]

Expected Result:
the tree of decks is created and the cards are added as specified in each line, similar to this
Actual Result:
all cards are assigned to the deck manually selected in the import dialog
the header of the CSV file is ignored

I’m using:
Version ⁨2.1.54 (b6a7760c)⁩
Python 3.9.7 Qt 6.3.1 PyQt 6.3.1

As of Anki 2.1.54, the new importer must be enabled in the preferences. I failed to mention that in the manual.
However, it will still not do what you’d like: Currently, the deck column must contain existing deck names or ids, but creating decks for missing names is a suggestion worthy of consideration.

I actually looked into compiling an apkg first, but there doesn’t seem to be a good JavaScript SDK at all. If CSV doesn’t do the job either, I wanted to look at Anki Connect next, but I’m open to other ideas.
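For what it’s worth, Anki Connect doesn’t really need an SDK: it speaks plain JSON over HTTP. A minimal sketch (the `createDeck` and `addNote` actions exist in the AnkiConnect API; the default port 8765, the deck name, and the `Back` text here are just placeholder assumptions):

```javascript
// Build an AnkiConnect request body (API version 6).
function ankiRequest(action, params) {
  return { action, version: 6, params };
}

// Creating "a::b::c" also creates the missing parent decks.
const createDeck = ankiRequest("createDeck", {
  deck: "Genki Kanji::Lesson 3",
});

// Placeholder note using the stock "Basic" notetype.
const addNote = ankiRequest("addNote", {
  note: {
    deckName: "Genki Kanji::Lesson 3",
    modelName: "Basic",
    fields: { Front: "<h1>映</h1>", Back: "placeholder back side" },
  },
});

// Actually sending a request (Node 18+ ships a global fetch):
async function send(body) {
  const res = await fetch("http://127.0.0.1:8765", {
    method: "POST",
    body: JSON.stringify(body),
  });
  const { result, error } = await res.json();
  if (error) throw new Error(error);
  return result;
}

// await send(createDeck);
// await send(addNote);
```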

You can do this with ki! What are you using to generate the CSV? If you can modify this slightly to generate markdown, it may work.

I can help you out with this if you want.

It’s custom-made JavaScript (hence my search for an SDK :P), but writing files is naturally easier than generating CSV. Let me give ki a shot, thanks for the hint.
PS: Keep the ideas coming :smiley:
PPS: FileNotFoundError: [Errno 2] No such file or directory: 'tidy' @langfield :frowning:

sudo apt install tidy

fixed this error.
So with my own code I’d be “converting” my CSV to individual markdown files, containing front and back as ### headers. Do I need to create the model.json manually?

Wow thanks! You’ve totally found a missing dependency I forgot to add to the install instructions!

You do not need to create the models.json file manually unless you are generating notes with a previously-unseen-by-your-anki-installation notetype. What is the notetype of the notes you’re generating?

If you’re using an existing notetype, I would suggest simply copying the models.json file into the deck folder you’re generating.

Yes, this is exactly right. Keep in mind that if you want a note to live in a deck a::b::c, you must put it in a subdirectory a/b/c/<note>.md.

I was thinking of creating my own at some point, but atm I’m using Basic and render the html myself (which could contain css if needed).

Neat! When you get around to making a custom notetype, just create it in Anki and then ki pull the changes into your repository. Then the new notetype should be in your top-level models file.

Unlikely to happen since Anki can’t render lists for example, but thanks for the hint.

What do you mean it can’t render lists? Were you able to import your deck successfully?

I mean Anki on its own. I reformatted my templates (which can render lists :D) and it produced the file tree quite nicely. Now ki is

(     ●) Checking out repo at '4b888dac3b0a680b9e630d69baa286aa829620e4'...

That took a while last time as well, will report back if and when it finishes.

Wow! How large is your Anki collection? All this command does is copy the repository and run git init, so I’m surprised it’s taking so long. Got a lot of media?

A pull/push cycle (which I even need to do without content changes to the database apparently) takes about 40m. Recently I’ve been hitting [Errno 12] Cannot allocate memory a lot. A du -hs collection says 2.2GB.

That error means you’re running out of RAM, not hard drive space. You may want to enable some swap, though if you don’t have a spare partition available that may be a bit awkward.

Given that I have 8G+2G swap to process 2G of data, I assume it’s more likely a memory leak:

$ free -m
               total        used        free      shared  buff/cache   available
Mem:            7645         294         193          25        7157        7031
Swap:           2048          15        2032 

Well anyhow I will drop ki for scalability and performance reasons. Thanks for the attempt though.

@Zifix Yeah this is definitely a bug, one way or another. I’ve stress-tested ki on the largest decks I could find, and the max wall-clock time for a push-pull cycle was on the order of two minutes. So sorry you’ve run into this problem!

Any chance you’d be willing to share your collection (even privately) so I could build a test case out of it?

If you are willing to do this, I can let you know once I’ve sped it up enough that it will work for your use case!

It will force you to pull any time the hash of the database file changes. So if you open Anki and then close it, that will modify the .anki2 file, requiring you to pull again.

Thanks for bringing up the issue though, I’ll look into a more intelligent way to figure out if changes have been made. Perhaps if I can figure out a way to content-address only the contents of the database that we need to write to disk, we can avoid this. It would have to be a substantially quicker process than cloning a copy of the collection, though.

Actually this is the first custom deck I’m building. I suppose (don’t know where to find proper stats…) that AwesomeTTS bloated my sqlite quite a bit when I rendered voice for a few thousand cards :smiley:
Anki itself doesn’t have any perceivable problem with it. I can maybe offer a small degree of consolation: I didn’t get very far with Anki Connect either. I think it’s an incompatibility with the Anki version, and I reported a bug there as well.