🛡️ Prompt Injection Scanner

Paste your LLM system prompt below to scan for security vulnerabilities

Built by Empowerment AI – OWASP LLM Top 10 Coverage

Paste your entire system prompt / system message. Nothing leaves your browser; all analysis runs locally.

The scanner analyzes your system prompt text using 15+ detection rules across 5 security categories. It combines regex pattern matching (for secrets, URLs, connection strings) with structural heuristic analysis (for missing defenses, excessive permissions).
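Both kinds of rules can share one shape: a matcher plus metadata. The sketch below is illustrative only; the rule IDs, fields, and regexes are assumptions, not the scanner's actual internals.

```javascript
// Illustrative rule shapes -- not the scanner's real rule set.
const rules = [
  {
    id: "aws-access-key",
    category: "Sensitive Data Exposure",
    owasp: "LLM06",
    severity: "critical",
    // Regex rule: the well-known AWS access key ID format.
    test: (prompt) => /AKIA[0-9A-Z]{16}/.test(prompt),
  },
  {
    id: "missing-injection-defense",
    category: "Injection Defense Gaps",
    owasp: "LLM01",
    severity: "high",
    // Structural rule: fires when the prompt never mentions handling
    // untrusted input or resisting injection attempts.
    test: (prompt) => !/injection|untrusted/i.test(prompt),
  },
];

// A scan is just "run every rule, keep the ones that fire".
function scan(prompt) {
  return rules.filter((rule) => rule.test(prompt)).map((rule) => rule.id);
}
```

Note the asymmetry: pattern rules fire on the *presence* of something bad, while structural rules fire on the *absence* of a defense.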

Detection Categories

| Category | OWASP | What It Catches |
| --- | --- | --- |
| Sensitive Data Exposure | LLM06 | API keys, passwords, PII (SSNs, credit cards), database strings, internal URLs |
| Injection Defense Gaps | LLM01 | Missing anti-injection instructions, prompt leakage risk, instruction-only defenses |
| Excessive Agency | LLM08 | Unrestricted tool access, destructive actions without confirmation |
| Output Handling | LLM02 | Missing output sanitization, auto-execution of generated code |
| Attack Surface | LLM01/02 | Overly detailed context, multi-role prompts, HTML rendering |

Scoring

Each prompt starts at 100 points. Findings reduce the score based on severity: Critical (−25), High (−15), Medium (−8), Low (−3). The final score maps to a letter grade (A through F).
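In code, the scoring is a simple fold over the findings. The penalties below come from the description above; the grade cutoffs (standard 90/80/70/60 bands) and the zero floor are assumptions, so the tool's actual bands may differ.

```javascript
// Severity penalties taken from the scoring description above.
const PENALTIES = { critical: 25, high: 15, medium: 8, low: 3 };

// Assumed grade cutoffs -- standard 90/80/70/60 bands.
function toGrade(score) {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}

// Start at 100, subtract one penalty per finding, never drop below 0.
function scoreFindings(findings) {
  const score = Math.max(
    0,
    findings.reduce((s, f) => s - PENALTIES[f.severity], 100),
  );
  return { score, grade: toGrade(score) };
}
```

For example, one Critical plus one Medium finding yields 100 − 25 − 8 = 67, a D.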

Two Detection Methods

  • Pattern matching – Regex patterns detect known secret formats (AWS keys, GitHub tokens, Slack tokens, Bearer tokens), PII patterns (SSNs, credit cards), database URIs, and internal URLs.
  • Structural analysis – Heuristic functions check for the absence of security controls: no injection defense instructions, no prompt protection rules, no output format restrictions, no tool usage boundaries.
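Representative regexes for a few of the secret formats named above are sketched below. These are close to commonly published patterns for each format, but the scanner's exact expressions may differ.

```javascript
// Commonly published patterns for a few secret formats -- the scanner's
// actual regexes may be stricter or broader.
const SECRET_PATTERNS = {
  // AWS access key IDs: "AKIA" followed by 16 uppercase alphanumerics.
  awsAccessKey: /AKIA[0-9A-Z]{16}/,
  // Classic GitHub personal access tokens: "ghp_" plus 36 alphanumerics.
  githubToken: /ghp_[A-Za-z0-9]{36}/,
  // Bearer tokens in an Authorization-style header.
  bearerToken: /Bearer\s+[A-Za-z0-9\-._~+/]+=*/,
  // Database connection URIs (postgres, mysql, mongodb).
  databaseUri: /(postgres|mysql|mongodb(\+srv)?):\/\/\S+/,
};

// Return the names of every pattern that matches the given text.
function findSecrets(text) {
  return Object.entries(SECRET_PATTERNS)
    .filter(([, re]) => re.test(text))
    .map(([name]) => name);
}
```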

Privacy

Everything runs 100% in your browser using JavaScript. Your prompt text is never sent to any server. The source code is available on GitHub.

CLI & Library

This scanner is also available as a Node.js CLI tool and importable library for CI/CD integration. Clone the repo and run `node bin/pi-scan.js your-prompt.txt`, or import `scanPrompt()` directly in your code.

⚠️ Educational Purpose Only

This tool is for learning about AI security vulnerabilities and improving the security posture of your own LLM applications. Static analysis can catch common issues, but it cannot guarantee security; always use defense in depth. Never use the techniques or knowledge gained here maliciously against systems you don't own.