RFC: Architecture for Runtime Privilege Dropping (Sandboxing) to Mitigate Supply Chain Risks

Hi Damien (@dae) and community,

I am opening this thread to discuss the feasibility and design of a runtime sandboxing mechanism for Anki Desktop.

The Problem: Supply Chain Vulnerability

Currently, Anki’s add-on ecosystem operates on a high-trust model. While this flexibility is Anki’s greatest strength, it is also a significant security liability.

  • The Risk: An add-on runs with the full permissions of the user. A malicious actor—or a legitimate developer whose AnkiWeb credentials are compromised—could push an update that instantly encrypts a user’s ~/Documents folder (ransomware), exfiltrates browser session cookies, or steals money from bank accounts and crypto wallets. With LLMs, writing such malware is easier than ever before.

  • The Ease of Attack: Because there is no permission scoping (e.g., “Add-on X wants Internet access”), a malicious update effectively grants remote code execution (RCE) on thousands of machines instantly. As Anki grows in popularity, the incentive for such supply-chain attacks increases.

The Proposal: Opt-in Privilege Dropping

I am not proposing a full rewrite or a move to strict OS app-store sandboxing (which would break too many things). Instead, I’d like to explore implementing a “Self-Sandboxing” step at startup. This is a standard security practice.

The core idea is to allow Anki to initialize normally, but then “lock the door” before loading any add-ons.

Proposed Architecture (Draft)

I am investigating using OS-native, dependency-free APIs to drop write privileges for the Anki process at runtime.

  1. Linux: Using Landlock (via ctypes, no new dependencies).

  2. macOS: Using sandbox_init (Seatbelt) with a blacklist profile.

  3. Windows: Using Job Objects or AppContainer restrictions.
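
To make the Linux option concrete, here is a minimal sketch of dropping filesystem write access via Landlock using only ctypes and the standard library. The syscall numbers are for x86_64, the access-right bits are from Landlock ABI v1, and the function name `drop_write_access` is my own; this is an illustration of the mechanism, not a proposed final implementation, and error handling is largely elided.

```python
import ctypes
import os
import struct

libc = ctypes.CDLL(None, use_errno=True)

# x86_64 syscall numbers (assumption; they differ on other architectures)
SYS_landlock_create_ruleset = 444
SYS_landlock_add_rule = 445
SYS_landlock_restrict_self = 446
PR_SET_NO_NEW_PRIVS = 38
LANDLOCK_RULE_PATH_BENEATH = 1

# Landlock ABI v1 access rights covering "write-like" operations
LANDLOCK_WRITE_RIGHTS = (
    (1 << 1)    # LANDLOCK_ACCESS_FS_WRITE_FILE
    | (1 << 4)  # LANDLOCK_ACCESS_FS_REMOVE_DIR
    | (1 << 5)  # LANDLOCK_ACCESS_FS_REMOVE_FILE
    | (1 << 7)  # LANDLOCK_ACCESS_FS_MAKE_DIR
    | (1 << 8)  # LANDLOCK_ACCESS_FS_MAKE_REG
    | (1 << 12) # LANDLOCK_ACCESS_FS_MAKE_SYM
)

def drop_write_access(allowed_dirs):
    """Irreversibly deny writes everywhere except under allowed_dirs."""
    # struct landlock_ruleset_attr { __u64 handled_access_fs; }
    ruleset_attr = struct.pack("Q", LANDLOCK_WRITE_RIGHTS)
    ruleset_fd = libc.syscall(
        SYS_landlock_create_ruleset, ruleset_attr, len(ruleset_attr), 0
    )
    if ruleset_fd < 0:
        return False  # kernel < 5.13 or Landlock disabled: fail open
    for path in allowed_dirs:
        dir_fd = os.open(path, os.O_PATH)
        # struct landlock_path_beneath_attr { __u64 allowed_access; __s32 parent_fd; }
        rule = struct.pack("Qi", LANDLOCK_WRITE_RIGHTS, dir_fd)
        libc.syscall(SYS_landlock_add_rule, ruleset_fd,
                     LANDLOCK_RULE_PATH_BENEATH, rule, 0)
        os.close(dir_fd)
    # Required before self-restriction for unprivileged processes
    libc.prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
    result = libc.syscall(SYS_landlock_restrict_self, ruleset_fd, 0)
    os.close(ruleset_fd)
    return result == 0
```

After `drop_write_access([base_folder, temp_folder])` succeeds, any write outside those directories fails at the kernel level with EACCES, regardless of what Python code attempts it.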

The Workflow:

  1. Anki starts, loads profiles, and initializes the GUI.

  2. (New Step) Anki checks for an --experimental-sandbox flag.

  3. If present, Anki invokes the OS API to permanently revoke its own ability to write files outside of the base and temp folders, and to restrict network access to https://ankiweb.net/ for syncing.

  4. Anki loads add-ons.

    • Result: If a compromised add-on tries to encrypt ~/Documents, the OS kernel denies the write.
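
The ordering above is the crucial invariant: privileges must be dropped after normal initialization but strictly before any third-party code runs. A toy sketch of that sequencing (all names here are illustrative, not Anki's actual internals):

```python
events = []  # records the startup ordering for illustration

def apply_sandbox():
    # Would invoke Landlock / sandbox_init / Job Objects here; irreversible.
    events.append("sandbox")

def load_addons():
    # Third-party code executes only after this point.
    events.append("addons")

def start(argv):
    # Steps 1-2: normal initialization, then check the opt-in flag.
    if "--experimental-sandbox" in argv:
        # Step 3: lock the door before any add-on code runs.
        apply_sandbox()
    # Step 4: add-ons load inside whatever sandbox is now active.
    load_addons()
```

Because the sandbox call precedes `load_addons()`, no add-on ever observes an unrestricted process.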

Long-Term Vision: A Permission Manifest System

While the immediate goal is a binary “Sandbox On/Off” switch, I believe this architectural change paves the way for a more granular, user-friendly permission system in the future (similar to Android or Chrome extensions).

If we can successfully implement the plumbing for privilege dropping, we could eventually explore:

  1. Add-on Manifests: Developers declare their needs in a manifest.json (e.g., ["network:api.openai.com", "clipboard-write"]).

  2. User Consent: AnkiWeb could display these requests prominently (“This add-on wants to access the Internet”).

  3. Enforcement: On startup, Anki would read the manifest and configure the sandbox for exactly the permissions that add-on declared, knowing the user explicitly granted them when installing it.
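
To illustrate the enforcement step, here is a sketch of translating a declared manifest into a default-deny policy. The manifest schema and the permission strings are invented for this example; only the `["network:...", "clipboard-write"]` shapes come from the proposal above.

```python
import json

def parse_manifest(text):
    """Map declared permissions to a sandbox policy (default deny)."""
    manifest = json.loads(text)
    policy = {"network_hosts": [], "clipboard_write": False}
    for perm in manifest.get("permissions", []):
        if perm.startswith("network:"):
            # e.g. "network:api.openai.com" -> allow only that host
            policy["network_hosts"].append(perm.split(":", 1)[1])
        elif perm == "clipboard-write":
            policy["clipboard_write"] = True
        else:
            # Unknown permissions are rejected, not silently granted
            raise ValueError(f"unknown permission: {perm}")
    return policy

example = '{"permissions": ["network:api.openai.com", "clipboard-write"]}'
```

Anything not declared stays denied, which is what makes the manifest meaningful to users at install time.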

I realize this requires server-side changes and is out of scope for now, but establishing the client-side sandboxing capabilities is the necessary first step toward that future.

Questions for the Team:

  1. Has this specific approach (runtime privilege dropping via ctypes) been explored or rejected before?

  2. Are there specific technical blockers (e.g., specific Qt helper processes or IPC mechanisms) that you anticipate would break under this model?

  3. If I were to develop a Proof of Concept (POC) for Linux/macOS first, would you be open to reviewing it and merging as an opt-in hardening feature?

  4. Do you have other ideas for moving toward a more secure Anki ecosystem?

I want to emphasize that my goal is not to impose restrictions on general users or break existing add-ons, but to provide a safety layer for security-conscious users who want to mitigate the risk of running third-party code, and to slowly pave the way toward a more secure Anki for everyone.

Looking forward to your thoughts.


That’s interesting. Do you know of any implementation of this in popular software?

I don’t think anyone has explored this. My understanding is that it’s not possible to make Python add-ons 100% secure, but any small security win we can get is good in my opinion (while allowing users to bypass the guards if needed).

Anki is gradually moving away from Qt to web technologies. I expect we’ll have a JS add-on system and Python add-ons will be less useful in the future. A manifest system would be great here.


Yes, “Self-Sandboxing” (or Privilege Dropping) is actually the standard architecture for modern secure software, though it is often invisible to the end user. Browsers are the most famous examples, but they are far from the only ones. Some concrete examples:

  • Chromium: On macOS, it uses the exact sandbox_init (Seatbelt) API I am proposing to lock down renderer processes immediately after initialization. (Source: Chromium Sandbox Design: https://www.chromium.org/developers/design-documents/sandbox/osx-sandboxing-design/)

  • OpenSSH: The industry standard for remote access pioneered Privilege Separation. It splits the daemon into a privileged monitor and an unprivileged child process. The child process handles network traffic but has zero privileges, so remote exploits are contained. (Source: https://www.citi.umich.edu/u/provos/ssh/privsep.html)

  • Systemd: The Linux service manager uses directives like ProtectHome=yes. Under the hood, this uses Linux Namespaces (similar to Landlock) to make user data completely invisible to system services. (Source: https://www.freedesktop.org/software/systemd/man/systemd.exec.html)

  • GNU tar: Even the standard tar utility recently added Landlock support to prevent malicious archives from overwriting system files during extraction. (Source: https://man7.org/linux/man-pages/man7/landlock.7.html)
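
As a concrete taste of the systemd example, these are real hardening directives as documented in systemd.exec; the service itself (`example-daemon`) is hypothetical, since Anki is a desktop app rather than a service, so this only illustrates the mechanism:

```ini
# Sketch of a hardened unit file; the service is hypothetical.
[Service]
ExecStart=/usr/bin/example-daemon
ProtectHome=yes        # user home directories appear empty to the service
ProtectSystem=strict   # the entire filesystem is read-only except declared paths
PrivateTmp=yes         # the service gets its own private /tmp
NoNewPrivileges=yes    # cannot regain privileges via setuid binaries
```

The parallel to this proposal is direct: the process keeps running normally, but the kernel silently refuses anything outside the declared policy.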

I 100% agree. Python is dynamic, so we can’t stop a malicious add-on from tampering with Anki’s internal state (e.g., corrupting the user’s Anki experience). However, we can stop it from destroying the user’s life outside of Anki.

  • The Goal: Prevent “mass ransomware” (add-ons that encrypt ~/Documents) and credential theft (stealing ~/.ssh keys), and block the unrestricted network access that most malware depends on.

  • The Non-Goal: Prevent an add-on from corrupting a single card in your deck (hard to stop in Python as you suggested).

This is “Defense in Depth”—even if the inner wall (Python) is open, the outer wall (OS Kernel) remains shut.

That’s interesting. What’s the timeline on this? This sounds like a massive project.

Also, regarding long-term add-on security, I found this while doing some research on similar projects: Deno (JS runtime), the modern successor to Node.js, is secure by default. A script cannot access the disk or network unless the user explicitly runs it with flags like --allow-read or --allow-net. (Source: https://docs.deno.com/runtime/fundamentals/security/)


The editor and main screen have many parts rewritten in TypeScript years ago. The plan now is to complete the transition and open the door for more code sharing between the different Anki clients.

There are already well-developed PRs for the editor and review screen:

Major screens remaining are the main decks screen and the browser.

As this will inevitably break a lot of add-ons, Anki will eventually need to offer an alternative JS API for add-ons.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.