[Bug] File headers not working correctly with .csv files

Steps to reproduce

  1. Open LibreOffice Calc and paste the following tab-separated data into it (note the two trailing tabs after #separator:Comma):
#separator:Comma		
This is a question.	And this is the answer.	Test::CSV
This is a question 2.	And this is the answer 2.	Test::CSV
This is a question 3.	And this is the answer 3.	Test::CSV

It will look like this: [screenshot]

  2. Save the file as test.csv and make it comma-separated (choose comma as the field delimiter): [screenshot]
  3. Open Anki.
  4. Import test.csv and notice that Anki thinks the file is colon-separated: [screenshot]
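
For reference, the same test.csv can be generated without LibreOffice. This Python sketch writes an equivalent file with the csv module; like the spreadsheet export, it pads the header row to three columns, which is exactly what produces the trailing commas:

import csv

# Write a test.csv equivalent to the LibreOffice export. The header row is
# padded to three columns, so the first line comes out as "#separator:Comma,,".
rows = [
    ["#separator:Comma", "", ""],
    ["This is a question.", "And this is the answer.", "Test::CSV"],
    ["This is a question 2.", "And this is the answer 2.", "Test::CSV"],
    ["This is a question 3.", "And this is the answer 3.", "Test::CSV"],
]

with open("test.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)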

Additional info

The test.csv looks like this:

#separator:Comma,,
This is a question.,And this is the answer.,Test::CSV
This is a question 2.,And this is the answer 2.,Test::CSV
This is a question 3.,And this is the answer 3.,Test::CSV

Removing the trailing commas in the first line (without modifying the tags Test::CSV) results in this:

#separator:Comma
This is a question.,And this is the answer.,Test::CSV
This is a question 2.,And this is the answer 2.,Test::CSV
This is a question 3.,And this is the answer 3.,Test::CSV

And when importing the fixed file: [screenshot]

As you can see, Anki now correctly identifies the separator, and the separator field is greyed out as well.
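
Until this is fixed, the manual edit can be automated. Here is a minimal Python sketch (test_fixed.csv is just a name I picked) that strips the trailing commas from the "#" header lines and leaves the data rows untouched:

# Strip spreadsheet padding (trailing commas) from Anki "#" header lines,
# leaving all data rows exactly as they are.
with open("test.csv", newline="") as f:
    lines = f.read().splitlines()

cleaned = [line.rstrip(",") if line.startswith("#") else line for line in lines]

with open("test_fixed.csv", "w", newline="") as f:
    f.write("\n".join(cleaned) + "\n")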

Expected Behavior

Anki should handle file headers in .csv files correctly. I shouldn't have to alter the first few lines by hand every time I want to import a .csv.

Actual Behavior

Anki doesn't apply the file headers correctly unless the file is manually modified first.
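
For what it's worth, tolerating the padding seems straightforward. A rough sketch of the idea (my own code, not Anki's actual parser): drop trailing separator characters from a header line before interpreting it.

# Sketch only, not Anki's real code: parse a "#key:value" header line while
# ignoring trailing spreadsheet padding. Naive in that it assumes a header
# value never legitimately ends with a separator character.
def parse_header_line(line):
    if not line.startswith("#"):
        return None
    body = line[1:].rstrip(",;\t ")  # padding added by spreadsheet exports
    if ":" not in body:
        return None
    key, value = body.split(":", 1)
    return key.strip(), value.strip()

assert parse_header_line("#separator:Comma,,") == ("separator", "Comma")
assert parse_header_line("#separator:Comma") == ("separator", "Comma")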

Version

Anki 25.02 (8fc822a6) (src) (ao)
Python 3.9.18 Qt 6.6.2 PyQt 6.6.1
Platform: Linux-6.12.12-amd64-x86_64-with-glibc2.40

===Add-ons (active)===
(add-on provided name [Add-on folder, installed at, version, is config changed])
AnkiWebView Inspector ['31746032', 2023-06-27T21:26, 'None', '']

===IDs of active AnkiWeb add-ons===
31746032

===Add-ons (inactive)===
(add-on provided name [Add-on folder, installed at, version, is config changed])
Image Occlusion Enhanced ['1374772155', 2022-04-09T09:15, 'None', '']
Review Heatmap ['1771074083', 2022-06-30T03:43, 'None', '']
Study Time Stats ['1247171202', 2024-02-24T17:59, 'None', '']
