Security
Found a bug?
Tell us first.
AZMX AI runs shells, reads and writes files, and talks to AI providers — so security matters. Please report privately before posting publicly.
Reporting
Email [email protected]. Include:
- What the issue is and what it lets an attacker do
- Steps to reproduce (a small proof-of-concept is great)
- Version, OS, and architecture
We'll get back to you within a few days. Once it's fixed, we'll credit you in the release notes — unless you'd rather stay anonymous.
Please don't open a public GitHub issue for security reports. Use the email above so a fix can ship before the details become public.
Supported versions
Until 1.0.0, only the latest minor release receives security fixes.
Update to the most recent release before reporting; the in-app
auto-updater keeps you current.
What's in scope
- The shipped AZMX AI app — anywhere untrusted input lands (terminal output, file content, AI tool results, credentials)
- Release artifacts at github.com/drvt69talati/azmx-ai-releases and the azmx.ai site
- The auto-updater
What's not
- Bugs in upstream dependencies (xterm.js, CodeMirror, AI SDKs…) — report those upstream; we'll ship the fix once it's released
- Anything that requires an already-compromised machine or a local attacker with shell access
- Old, unsupported versions
What we do to keep things safe
- API keys live in the OS keychain via the keyring layer — not on disk, not in localStorage, not in logs.
- No telemetry. AZMX AI only touches the network when you ask it to (AI requests, update checks, web preview).
- AI tool approval. File writes and shell commands proposed by the agent require your explicit approval before they run.
- No Node in the renderer. The frontend reaches the host only through an allow-listed set of native commands.
- Signed releases. Updates are verified with a minisign signature before they're applied.
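The allow-list and approval gates above can be sketched roughly as follows. This is an illustrative Python sketch, not AZMX AI's actual API: the command names, the `dispatch` function, and the set of side-effecting commands are all hypothetical.

```python
# Hypothetical sketch of two of the gates described above:
# 1. the frontend may only invoke commands on an allow-list, and
# 2. side-effecting commands proposed by the agent need explicit approval.

ALLOWED_COMMANDS = {"read_file", "write_file", "run_shell", "list_dir"}
NEEDS_APPROVAL = {"write_file", "run_shell"}

def dispatch(command: str, approved: bool) -> str:
    """Reject unknown commands; require user approval for side effects."""
    if command not in ALLOWED_COMMANDS:
        raise PermissionError(f"unknown command: {command}")
    if command in NEEDS_APPROVAL and not approved:
        raise PermissionError(f"{command} requires explicit user approval")
    return f"{command} ok"
```

The point of the split is that a compromised renderer can, at worst, call an allow-listed command, and even then anything that writes files or runs a shell still stops at the approval prompt.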
What we can't promise
- AZMX AI runs whatever you (or the agent, with your approval) tell it to run, with your permissions. That's the point of a terminal.
- AI providers see whatever you send them. Read their retention policies.
- Local model endpoints (LM Studio, OpenAI-compatible) are trusted at the network level — only point AZMX AI at servers you control.
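One concrete way to act on the last point: check that a configured OpenAI-compatible base URL points at the local machine before using it. This is a minimal illustrative sketch (the function name and the loopback-only policy are this example's assumptions, not an AZMX AI feature); the port in the usage example is arbitrary.

```python
# Hypothetical guard: accept a model endpoint only if its host is loopback,
# i.e. a server running on the machine you control.
from urllib.parse import urlparse

def is_loopback_endpoint(base_url: str) -> bool:
    host = urlparse(base_url).hostname or ""
    return host in {"localhost", "127.0.0.1", "::1"}

# is_loopback_endpoint("http://localhost:1234/v1")  -> True
# is_loopback_endpoint("http://example.com/v1")     -> False
```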
A machine-readable contact is published at /.well-known/security.txt.
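security.txt files follow RFC 9116, which requires at least a `Contact` and an `Expires` field. A minimal example using the contact from this policy (the `Expires` date below is a placeholder, not the project's actual value):

```text
Contact: mailto:[email protected]
Expires: 2026-01-01T00:00:00Z
```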