@ganqqwerty this is a really great plugin! Any chance you could enable a GPT-4 option?
Yes, I will do it, and I'll also check out those custom assistants they recently added.
This is one of the best extensions out there. I really hope you add support for the newest version of Anki, plus an option to switch to GPT-4 instead of 3.5.
Great addon. Thank you for your work.
There is one annoying thing, though: once I start generating a block of cards, could you make it skip the flashcards that hit errors and finish the rest?
It's quite frustrating to leave your PC generating cards for hours, only to find out that everything was blocked in the first few minutes, so you have to start all over and hope for better luck.
Could you also make it clearer which flashcard the error occurred on? I find it exceedingly difficult to figure out.
With that, it would be perfect!
Thank you.
If you want to use GPT-4 (I don't recommend it at the moment, as it's quite expensive), here's how:
Tools > Add-ons > IntelliFiller ChatGPT > View Files (this opens the file manager).
Find the file named "data_request.py", open it with Notepad, and scroll down until you find this block:
    try:
        print("Request to ChatGPT: ", prompt)
        openai.api_key = config['apiKey']
        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}], max_tokens=2000)
        print("Response from ChatGPT", response)
        return response.choices[0].message.content.strip()
Find where it says model="gpt-3.5-turbo" and change it to model="gpt-4-1106-preview". All done.
I'm back! It seems that with 23.12.1 (1a1d4d54) it works again without any additional changes. I use Qt6 everywhere now. Could you check, please?
Subject: Automatically Triggering the Add-on upon Newly Created Cards
Hi there,
Firstly, huge thanks to the author(s) for this incredible add-on; it’s been an immense help to me.
I’ve recently developed a Python script that streamlines my card creation process by enabling me to generate a new card while reviewing. By simply selecting a word or sentence and pressing a shortcut, a new card is automatically created in the background, with the selected text transferred onto it. Now, I’m wondering if it’s feasible to integrate this functionality with the IntelliFiller add-on. Specifically, I’m interested in having the add-on automatically execute a pre-set prompt on each newly generated card.
I’d greatly appreciate any insights or guidance on this matter.
Best regards,
Abdallah
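One possible route (a sketch only, with assumptions): recent Anki versions expose a gui_hooks.add_cards_did_add_note hook, and the add-on's process_notes.py has a generate_for_multiple_notes(nid, prompt_config) function (see the patch further down in this thread). The import mechanism and the prompt_config contents below are guesses you would need to adapt to your own note type and prompt settings:

# Sketch: auto-run a preset IntelliFiller prompt on every newly added note.
import importlib
from aqt import gui_hooks

def run_intellifiller_on_new_note(note):
    # Assumed prompt_config shape and field names; adjust to your own setup.
    prompt_config = {
        "prompt": "Give a short definition of {{{Front}}}.",
        "targetField": "Back",
        "generateImage": False,  # only relevant if the image patch below is applied
    }
    # Assumption: the add-on folder 1416178071 is importable as a package from addons21.
    process_notes = importlib.import_module("1416178071.process_notes")
    process_notes.generate_for_multiple_notes(note.id, prompt_config)

gui_hooks.add_cards_did_add_note.append(run_intellifiller_on_new_note)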
Hi! It fails to load on start-up now.
Anki 24.04 (429bc9e1) (ao)
Python 3.9.18 Qt 6.6.2 PyQt 6.6.1
Platform: Windows-10-10.0.22631
When loading IntelliFiller ChatGPT:
Traceback (most recent call last):
  File "aqt.addons", line 247, in loadAddons
  File "C:\Users\gu\AppData\Roaming\Anki2\addons21\1416178071\__init__.py", line 11, in <module>
    from .settings_editor import SettingsWindow
  File "C:\Users\gu\AppData\Roaming\Anki2\addons21\1416178071\settings_editor.py", line 1, in <module>
    from PyQt5.QtGui import QGuiApplication
ModuleNotFoundError: No module named 'PyQt5.QtGui'
Let me check it out. It seems that I will need to switch to Qt6 completely…
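For reference, a minimal sketch of what that switch could look like for the failing import in settings_editor.py (assuming only QGuiApplication is needed from QtGui): try PyQt6 first and fall back to PyQt5, or import through aqt.qt, which re-exports the classes for whichever Qt build Anki ships.

# Sketch: version-agnostic Qt import for settings_editor.py.
try:
    from PyQt6.QtGui import QGuiApplication  # default Qt6 builds of recent Anki
except ImportError:
    from PyQt5.QtGui import QGuiApplication  # older / Qt5 builds

# Alternatively (assuming aqt.qt re-exports QGuiApplication on both builds):
# from aqt.qt import QGuiApplication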
Crashes when starting the app:
Anki 24.04.1 (ccd9ca1a) (ao)
Python 3.9.18 Qt 6.6.2 PyQt 6.6.1
Platform: Windows-10-10.0.19045
When loading IntelliFiller ChatGPT:
Traceback (most recent call last):
  File "aqt.addons", line 247, in loadAddons
  File "C:\Users\phili\AppData\Roaming\Anki2\addons21\1416178071\__init__.py", line 11, in <module>
    from .settings_editor import SettingsWindow
  File "C:\Users\phili\AppData\Roaming\Anki2\addons21\1416178071\settings_editor.py", line 1, in <module>
    from PyQt5.QtGui import QGuiApplication
ModuleNotFoundError: No module named 'PyQt5'
You need to install the Qt5 version of Anki. It is compatible with more add-ons than the default Qt6 one.
You can use GPT-4 with this patch, applied to the add-on after downloading it:
--- a/99999999/data_request.py
+++ b/99999999/data_request.py
@@ -9,6 +9,8 @@
 sys.path.append(vendor_dir)
 import openai
+import time
+from openai import error
 from html import unescape
@@ -38,10 +40,23 @@ def send_prompt_to_openai(prompt):
     try:
         print("Request to ChatGPT: ", prompt)
         openai.api_key = config['apiKey']
-        response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=[{"role": "user", "content": prompt}], max_tokens=2000)
-        print("Response from ChatGPT", response)
-        return response.choices[0].message.content.strip()
+
+        def try_call():
+            # gpt-3.5-turbo
+            # gpt-4o-mini: faster, cheaper, more precise, https://openai.com/api/pricing/
+            response = openai.ChatCompletion.create(model="gpt-4o-mini", messages=[{"role": "user", "content": prompt}], max_tokens=2000)
+            print("Response from ChatGPT", response)
+            return response.choices[0].message.content.strip()
+
+        maximum = 300
+        while maximum > 0:
+            maximum -= 1
+            try:
+                return try_call()
+
+            except error.RateLimitError as e:
+                time.sleep(1.0)  # gpt-4o has a token rate limit
     except Exception as e:
-        showWarning(f"An error occurred while processing the note: {str(e)}")
+        print(f"An error occurred while processing the note: {str(e)}", file=sys.stderr)
         return None
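Note that this patch targets the pre-1.0 openai SDK vendored with the add-on (openai.ChatCompletion and openai.error were removed in openai 1.x). If the vendored library were ever upgraded, the call would look roughly like this sketch instead (not part of the add-on as shipped; config, prompt, and time are the same names used inside send_prompt_to_openai above):

# Sketch for openai >= 1.0 only.
from openai import OpenAI, RateLimitError

client = OpenAI(api_key=config['apiKey'])
try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=2000,
    )
    return response.choices[0].message.content.strip()
except RateLimitError:
    time.sleep(1.0)  # then retry, as in the loop above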
By adding the following to the end of data_request.py, you can make this add-on generate images with the OpenAI DALL-E 3 engine. This replaces the default text response with an image generation result; remove the code again if you want to generate text responses:
def send_prompt_to_openai_image(prompt):
    config = mw.addonManager.getConfig(__name__)
    if config['emulate'] == 'yes':
        print("Fake request chatgpt: ", prompt)
        return f"This is a fake response for emulation mode for the prompt {prompt}."
    try:
        import requests
        import json
        import base64
        import pathlib
        import re  # used for the file-name clean-up below (may already be imported at module level)
        print("Request to ChatGPT: ", prompt)
        api_key = config['apiKey']
        media_dir = pathlib.Path(mw.col.media.dir())
        # https://platform.openai.com/docs/api-reference/images/create
        url = "https://api.openai.com/v1/images/generations"
        headers = {
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}"
        }
        data = {
            "prompt": prompt,
            "n": 1,
            "model": 'dall-e-3',
            "quality": 'hd',
            "response_format": "b64_json",
            "size": "1024x1024"  # Optional: can be adjusted to different sizes like "256x256", "512x512"
        }
        response = requests.post(url, headers=headers, data=json.dumps(data))
        if response.status_code == 200:
            response_json = response.json()
            revised_prompt = response_json["data"][0]["revised_prompt"]
            # Build a filesystem-safe file name from the revised prompt and the creation timestamp.
            file_name = f"{revised_prompt[:100]}-{response_json['created']}"
            invalid_chars = r'[\/:*?"<>|\n]'
            file_name = re.sub(invalid_chars, '', file_name)
            # with open(media_dir / f"{file_name}.json", mode="w", encoding="utf-8") as file:
            #     json.dump(response_json, file)
            image_info = (f"""<img alt="generated image" src="{file_name}.png">"""
                          f"""\n:::plugin:::\n{response_json['created']},\n{revised_prompt},\n{file_name}.png""")
            print(image_info)
            # Decode the base64 image and save it into Anki's media folder.
            image_data = base64.b64decode(response_json["data"][0]["b64_json"])
            with open(media_dir / f"{file_name}.png", mode="wb") as png:
                png.write(image_data)
            # dalle 3 has a request limit https://platform.openai.com/docs/guides/rate-limits/usage-tiers?context=tier-free
            # check your tier on https://platform.openai.com/settings/organization/limits
            # time.sleep(10.0)
            return image_info
        print(f"Failed to get response: {response.status_code}")
        print(response.text)
        print(data)
        return None
    except Exception as e:
        print(f"An error occurred while processing the note: {str(e)}", file=sys.stderr)
        return None

send_prompt_to_openai = send_prompt_to_openai_image
Edit:
With this other patch, you can also add a checkbox to the add-on's prompt dialog to toggle between image generation and text generation:
diff --git a/1416178071/__init__.py b/1416178071/__init__.py
index 67cf429..0eac73a 100644
--- a/1416178071/__init__.py
+++ b/1416178071/__init__.py
@@ -21,7 +21,7 @@ def get_common_fields(selected_nodes_ids):
         note = mw.col.getNote(nid)
         note_fields = set(note.keys())
         common_fields = common_fields.intersection(note_fields)
-    return list(common_fields)
+    return sorted(list(common_fields))
 def create_run_prompt_dialog_from_browser(browser, prompt_config):
     common_fields = get_common_fields(browser.selectedNotes())
diff --git a/1416178071/data_request.py b/1416178071/data_request.py
index 8595e03..716dd07 100644
--- a/1416178071/data_request.py
+++ b/1416178071/data_request.py
@@ -130,5 +130,3 @@ def send_prompt_to_openai_image(prompt):
         print(f"An error occurred while processing the note: {str(e)}", file=sys.stderr)
         return None
-
-# send_prompt_to_openai = send_prompt_to_openai_image
diff --git a/1416178071/process_notes.py b/1416178071/process_notes.py
index 0f3fa13..37bfd16 100644
--- a/1416178071/process_notes.py
+++ b/1416178071/process_notes.py
@@ -3,7 +3,7 @@ from PyQt5.QtWidgets import QDialog, QVBoxLayout, QProgressBar, QPushButton, QLa
 from aqt import mw
 from aqt.utils import showWarning
-from .data_request import create_prompt, send_prompt_to_openai
+from .data_request import create_prompt, send_prompt_to_openai, send_prompt_to_openai_image
 from .modify_notes import fill_field_for_note_in_editor, fill_field_for_note_not_in_editor
@@ -71,7 +71,10 @@ class ProgressDialog(QDialog):
 def generate_for_single_note(editor, prompt_config):
     """Generate text for a single note (editor note)."""
     prompt = create_prompt(editor.note, prompt_config)
-    response = send_prompt_to_openai(prompt)
+    if prompt_config["generateImage"]:
+        response = send_prompt_to_openai_image(prompt)
+    else:
+        response = send_prompt_to_openai(prompt)
     target_field = prompt_config['targetField']
     fill_field_for_note_in_editor(response, target_field, editor)
@@ -81,7 +84,11 @@ def generate_for_multiple_notes(nid, prompt_config):
     """Generate text for multiple notes."""
     note = mw.col.get_note(nid)
     prompt = create_prompt(note, prompt_config)
-    response = send_prompt_to_openai(prompt)
+    if prompt_config["generateImage"]:
+        response = send_prompt_to_openai_image(prompt)
+    else:
+        response = send_prompt_to_openai(prompt)
+
     fill_field_for_note_not_in_editor(response, note, prompt_config['targetField'])
diff --git a/1416178071/run_prompt_dialog.py b/1416178071/run_prompt_dialog.py
index fd24225..daee1a3 100644
--- a/1416178071/run_prompt_dialog.py
+++ b/1416178071/run_prompt_dialog.py
@@ -1,6 +1,6 @@
 import re
-from PyQt5.QtWidgets import QDialog, QVBoxLayout, QLabel, QPushButton, QTextEdit, QComboBox
+from PyQt5.QtWidgets import QDialog, QVBoxLayout, QLabel, QPushButton, QTextEdit, QComboBox, QCheckBox
 from aqt import mw
 from aqt.utils import showWarning
@@ -31,6 +31,9 @@ class RunPromptDialog(QDialog):
         layout.addWidget(QLabel("Target Field:"))
         layout.addWidget(self.target_field_editor)
+        self.enable_image_checkbox = QCheckBox("Enable Image")
+        layout.addWidget(self.enable_image_checkbox)
+
         run_button = QPushButton("Run")
         run_button.clicked.connect(self.try_to_accept)
@@ -40,6 +43,7 @@ def try_to_accept(self):
         self.prompt_config["prompt"] = self.prompt_editor.toPlainText()
         self.prompt_config["targetField"] = self.target_field_editor.currentText()
+        self.prompt_config["generateImage"] = self.enable_image_checkbox.isChecked()
         invalid_fields = get_invalid_fields_in_prompt(self.prompt_config["prompt"], self.possible_fields)
         if invalid_fields:
You are ChatGPT, a large language model trained by OpenAI based on the GPT-4 architecture. Let’s work this out step by step to make sure we have the right answer. If there is a flaw in my logic, point out the flaw, explain why someone might be mistaken, and explain the correct solution. You will receive a word and three example sentences. Your task is to generate an image that best describes the word by selecting one of the three example sentences. Prioritize choosing an example sentence that portrays a more natural environment, which includes elements such as trees, animals, mountains, rivers, oceans, lakes, plants, flowers, stars, outer space, planets, and other aspects of nature. If none of the sentences can naturally incorporate these elements of nature, select the sentence that best conveys the essence of the word, even if it involves an urban or indoor environment.
Word: {{{Verb}}}
Sentence 1: {{{VerbExample}}}
Sentence 2: {{{PastSimpleExample}}}
Sentence 3: {{{PastParticipleExample}}}
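(For context, the triple-brace placeholders above are filled in with the note's field values before the prompt is sent to the API. The add-on's create_prompt handles this; the snippet below is only an illustrative sketch, not the actual implementation.)

import re

def fill_placeholders(template: str, note_fields: dict) -> str:
    # Replace every {{{FieldName}}} with the corresponding note field value.
    return re.sub(r"\{\{\{(\w+)\}\}\}",
                  lambda m: note_fields.get(m.group(1), ""),
                  template)

# e.g. fill_placeholders("Word: {{{Verb}}}", {"Verb": "swim"}) -> "Word: swim"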
Anki 24.06.3 (d678e393) (ao)
Python 3.9.18 Qt 6.6.2 PyQt 6.6.1
Platform: macOS-15.0.1-x86_64-i386-64bit
When loading IntelliFiller ChatGPT:
Traceback (most recent call last):
  File "aqt.addons", line 247, in loadAddons
  File "/Users/xxxxxxx/Library/Application Support/Anki2/addons21/1416178071/__init__.py", line 11, in <module>
    from .settings_editor import SettingsWindow
  File "/Users/xxxxxxx/Library/Application Support/Anki2/addons21/1416178071/settings_editor.py", line 1, in <module>
    from PyQt5.QtGui import QGuiApplication
ModuleNotFoundError: No module named 'PyQt5'