Static Analysis for AI-Generated Code
A purpose-built security scanner that catches the vulnerabilities AI coding assistants introduce. 59 rules across vibe code patterns, agent security, and LLM application risks — powered by tree-sitter AST analysis and optional AI-assisted triage.
59 Security Rules
6 Languages
3 Rule Categories
SARIF Output
The Problem
Why AI-generated code is a security risk
AI assistants generate code faster than humans can review it. Vibe coding — accepting AI suggestions with minimal scrutiny — is the new normal.
AI optimizes for working code, not secure code. The same hardcoded secret, the same SQL concatenation, the same permissive CORS — repeated across thousands of projects.
Semgrep, Snyk, and CodeQL don't understand prompt templates, agent tool definitions, or LLM output handling. New attack surfaces have zero coverage.
Autonomous agents make real-world decisions with file system access, database writes, and shell commands. No existing tool audits their permission boundaries.
Rules
59 rules targeting three AI-specific vulnerability domains
20 rules for patterns commonly generated by AI coding assistants: hardcoded secrets in prompts, SQL injection via concatenation, dangerous code execution, insecure defaults, and missing authentication.
15 rules for autonomous agent code: overly permissive tool definitions, unrestricted file system access, shell command execution from user input, missing audit logging, and no confirmation before destructive actions.
24 rules for production LLM applications: raw user input in system prompts, unsanitized HTML rendering, LLM-generated SQL execution, exposed API keys, missing prompt injection detection, insecure OAuth flows, deprecated SDK usage, and unvalidated tool outputs.
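To make the first category concrete, here is a hypothetical snippet (illustrative only, not taken from the VibeGuard rule set) showing the SQL-concatenation pattern such rules flag, next to the parameterized form a fix suggestion would point toward:

```python
# Illustrative example of a vibe-code pattern: untrusted input
# concatenated straight into SQL text (injection risk).
user_input = "alice'; DROP TABLE users; --"

# Flagged: the payload becomes part of the query string itself
unsafe_query = "SELECT * FROM users WHERE name = '" + user_input + "'"

# Safer: a placeholder query with the value bound separately,
# so the database driver handles quoting and escaping
safe_query = "SELECT * FROM users WHERE name = ?"
params = (user_input,)
```

Parameterization keeps data and query structure separate, which is why it is the standard remediation a scanner suggests for this class of finding.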
Integrate Mistral AI or local Ollama models to reduce false positives, re-rank severity based on context, and generate contextual fix suggestions. Enable with a single --ai flag — works with any supported model.
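Enabling triage could look like the following sketch. The `--ai` flag is described above; the `scan` subcommand, `--model` flag, and model identifiers are assumptions for illustration, not the documented CLI:

```shell
# Hypothetical invocations; only --ai is confirmed, other names are illustrative
vibeguard scan ./src --ai                       # AI-assisted triage, default model
vibeguard scan ./src --ai --model ollama:llama3 # assumed: route triage to a local Ollama model
```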
Real-time security feedback in your editor via the VibeGuard LSP server. Inline diagnostics, code actions, and fix suggestions as you type — works with VS Code, Neovim, and any LSP-compatible editor.
Write custom rules in YAML — no Rust knowledge required. Define AST node types, regex patterns, ancestor context constraints, and fix suggestions. Scaffold new rules with a single command.
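A custom rule might be sketched like this. The field names below are illustrative assumptions, not the actual VibeGuard rule schema; they mirror the capabilities listed above (AST node types, regex patterns, ancestor constraints, fix suggestions):

```yaml
# Hypothetical rule definition -- key names are assumed, not the real schema
id: custom-eval-usage
language: javascript
severity: high
message: "Avoid eval() on dynamic input"
node_type: call_expression        # tree-sitter AST node kind to match
pattern: "\\beval\\s*\\("         # regex applied to the matched node's text
ancestor: function_declaration    # only report matches inside function bodies
fix: "Use JSON.parse or a safe expression evaluator instead"
```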
Native SARIF v2.1.0 output for GitHub Security tab integration. Findings appear as inline PR annotations with severity, fix suggestions, and references. Also supports JSON and text output.
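For reference, a minimal SARIF v2.1.0 result has this shape (the structure follows the SARIF spec; the rule id, message, and file path below are invented for illustration):

```json
{
  "version": "2.1.0",
  "$schema": "https://json.schemastore.org/sarif-2.1.0.json",
  "runs": [{
    "tool": { "driver": { "name": "VibeGuard", "rules": [] } },
    "results": [{
      "ruleId": "vibe-sql-concat",
      "level": "error",
      "message": { "text": "SQL query built via string concatenation" },
      "locations": [{
        "physicalLocation": {
          "artifactLocation": { "uri": "src/db.ts" },
          "region": { "startLine": 42 }
        }
      }]
    }]
  }]
}
```

GitHub's Security tab ingests files in this shape and surfaces each `result` as an annotation at the referenced location.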
Drop-in GitHub Action for automated scanning on every push and pull request. Also supports GitLab CI, Bitbucket Pipelines, and pre-commit hooks. Single binary — no runtime dependencies.
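Wiring the scanner into CI might look like the workflow below. `actions/checkout` and `github/codeql-action/upload-sarif` are real actions; the VibeGuard action name, its inputs, and the output filename are assumptions, not a published interface:

```yaml
# Hypothetical workflow -- the vibeguard action coordinates are illustrative
name: vibeguard
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: vibeguard/vibeguard-action@v1   # assumed action name
        with:
          format: sarif                       # assumed input name
      - uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: vibeguard.sarif         # assumed output path
```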
Capabilities
What VibeGuard delivers
59
Built-in security rules
6
Languages supported
3
AI-specific categories
1
Single binary — no deps
5
Cross-platform builds
3
Output formats
Tech Stack
Rust workspace (5 crates), tree-sitter, serde, regex, YAML rule engine, .vibeguard.yaml config
tree-sitter-javascript, tree-sitter-typescript, tree-sitter-python, tree-sitter-go, tree-sitter-java, tree-sitter-rust
Mistral AI SDK, Ollama client, configurable model selection, false-positive filtering, severity re-ranking
clap, colored, serde_json, SARIF v2.1.0 output, Language Server Protocol
GitHub Actions (CI + cross-compile release), GitLab CI, Bitbucket Pipelines, pre-commit hooks
Linux (amd64/arm64), macOS (amd64/arm64), Windows (amd64), crates.io
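A `.vibeguard.yaml` project config might look like this sketch. Every key below is an assumption for illustration; consult the tool's documentation for the real schema:

```yaml
# Hypothetical .vibeguard.yaml -- key names are illustrative, not the real schema
rules:
  exclude:
    - js-console-log            # disable individual rules by id
severity_overrides:
  hardcoded-secret: critical    # re-rank a rule's default severity
ignore:
  - "**/node_modules/**"
  - "**/*.test.ts"
```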
Need help securing AI-generated code in production? Our consulting services complement VibeGuard.
70% of AI pilots never reach production. Get the playbook for the 30% that do.