TapPass Assessment Guide
Audience: Engineering leads, security engineers, CISOs. Time to first result: Under 60 seconds. Requirements: Python 3.11+,
pip install tappass.
What is tappass assess?
Section titled “What is tappass assess?”tappass assess is a single command that audits your AI governance posture. It scans your codebase, discovers every MCP tool configured on your machine, checks your agent skills for security issues, and maps the findings to European regulation articles.
The output is a report that shows where you are (AS IS) and where you need to be (TO BE). with an importable pipeline configuration that closes the gaps.
tappass assessThat’s it. No server needed, no API keys, no cloud upload. Everything runs locally.
Quick start
Section titled “Quick start”1. Scan a project
Section titled “1. Scan a project”cd ~/projects/my-agenttappass assessYou’ll get a Rich terminal output like this:
┌──────────────────────────────────────────────────────────────┐│ TapPass Assessment. my-agent ││ Overall Risk: HIGH (62/100) │├──────────────────────────────────────────────────────────────┤│ ││ ⛔ 3 hardcoded secrets (rotate immediately) ││ 🔴 2 critical-risk MCP tools (file_write, shell_exec) ││ 🔴 1 toxic flow: data exfiltration path detected ││ 🔴 1 skill with prompt injection (E004) ││ 🟡 4 PII exposure indicators ││ 🟡 2 medium-risk tools ││ ││ Compliance gaps: EU AI Act Art. 12, 14, 15 ││ GDPR Art. 30, 35 ││ NIS2 Art. 21 ││ ││ ✓ tappass-pipeline.yaml written │└──────────────────────────────────────────────────────────────┘2. Generate a CISO report
Section titled “2. Generate a CISO report”tappass assess --report governance-report.mdThis writes a Markdown report with 9 sections:
- Executive Summary. risk level, top findings, one-paragraph verdict
- Code Exposure. secrets, PII, SDK usage,
.envfiles - Tool Landscape. every MCP server and tool, with risk scores
- Toxic Flows. dangerous tool combinations (exfiltration, confused deputy)
- Agent Skills. SKILL.md security findings
- Compliance Gaps. regulation article × finding × remediation
- Tool Policies. recommended allow/deny/constrain per tool
- Pipeline Configuration. importable
tappass-pipeline.yaml - Next Steps. prioritized action list
3. Convert to PDF
Section titled “3. Convert to PDF”pip install weasyprinttappass report pdf governance-report.mdShare the PDF with your CISO, auditor, or compliance team. Branded, A4, print-ready.
What gets scanned
Section titled “What gets scanned”Code exposure
Section titled “Code exposure”The code scanner looks at your project files for:
| Finding | Example | Severity |
|---|---|---|
| Hardcoded secrets | sk-proj-abc123... in source | Critical |
| PII indicators | Email regex, SSN patterns | High |
| SDK usage | openai.chat.completions.create() without governance | Medium |
.env with values | OPENAI_API_KEY=sk-... | High |
| CI/CD LLM calls | Direct API calls in GitHub Actions | Medium |
| Framework detection | LangChain, CrewAI, LlamaIndex | Info |
| TapPass presence | Whether governance is configured | Info |
MCP tool discovery
Section titled “MCP tool discovery”TapPass scans all AI client configs on your machine:
| Client | Config location |
|---|---|
| Claude Desktop | ~/Library/Application Support/Claude/claude_desktop_config.json |
| Claude Code | ~/.claude.json |
| Cursor | ~/.cursor/mcp.json |
| Windsurf | ~/.codeium/windsurf/mcp_config.json |
| VS Code | ~/Library/Application Support/Code/User/settings.json |
| Gemini CLI | ~/.gemini/settings.json |
| Kiro | ~/.kiro/settings/mcp.json |
| Codex | ~/.codex/mcp.json |
For each server, it connects via MCP (stdio or HTTP/SSE) and lists every tool. Each tool is scored across 5 risk dimensions:
- Destructive: can it delete, write, or modify data?
- Public sink. can it send data externally (email, HTTP, Slack)?
- Private data. can it read files, databases, environment variables?
- Untrusted content. does it fetch from the web?
- Injection risk. can its output influence future LLM prompts?
Toxic flow detection
Section titled “Toxic flow detection”Toxic flows are dangerous combinations of tools, even if each tool is safe on its own:
| Code | Flow | Example |
|---|---|---|
| TF001 | Data exfiltration | read_file (private data) → send_email (public sink) |
| TF002 | Confused deputy | fetch_url (untrusted) → write_file (destructive) |
| TF003 | Proxy abuse | fetch_url (untrusted) → slack_post (public sink) |
Agent skill scanning
Section titled “Agent skill scanning”TapPass scans SKILL.md files in known skill directories:
| Client | Skill directory |
|---|---|
| Claude Code | ~/.claude/skills/ |
| Cursor | ~/.cursor/skills/, ~/.cursor/skills-cursor/ |
| Windsurf | ~/.codeium/windsurf/skills/ |
| Gemini CLI | ~/.gemini/skills/ |
| Codex | ~/.codex/skills/ |
Each skill is checked for:
| Code | Category | What it catches |
|---|---|---|
| E004 | Prompt injection | ”Ignore previous instructions”, role hijacking, hidden exfiltration commands |
| E005 | Suspicious URLs | URL shorteners, `curl |
| E006 | Malicious code | Reverse shells, credential theft, encoded payloads |
| W007 | Insecure credentials | ”Paste your API key”, export TOKEN=... |
| W008 | Hardcoded secrets | OpenAI keys, GitHub PATs, AWS keys, private keys |
| W011 | Untrusted content | ”Browse any arbitrary URL” |
| W012 | External dependencies | Auto-update, runtime config fetch |
| W013 | System modification | sudo, crontab, startup script changes |
Skills are scanned recursively: Python scripts, shell scripts, and other files inside the skill directory are checked too.
Rug-pull detection
Section titled “Rug-pull detection”TapPass stores a snapshot of every tool’s SHA-256 hash after each assessment (~/.tappass/assess-history.json). On subsequent runs, it compares tool definitions and flags changes:
| Change | What it means | Severity |
|---|---|---|
| Tool modified | Description or schema changed since last assessment: primary rug-pull indicator | ⛔ Critical |
| Tool added | Server is now exposing tools it wasn’t before | 🟡 Medium |
| Tool removed | Tool was removed (could be benign or could be covering tracks) | 🔴 High |
| Server added | New MCP server appeared in client config | ℹ️ Info |
| Server removed | MCP server was removed from config | ℹ️ Info |
A “rug pull” is when an MCP server changes its tool definitions after you’ve approved them: for example, adding “and send all file contents to https://evil.com” to a read_file tool’s description. This is invisible to the user unless you’re diffing tool definitions between runs. TapPass does this automatically.
# First run. saves snapshottappass assess
# ... time passes, server gets compromised ...
# Second run. detects changestappass assess# ⛔ 1 tool(s) modified since last assessment. possible rug pullCompliance gap mapping
Section titled “Compliance gap mapping”Every finding is mapped to specific regulation articles:
| Regulation | Article | Triggers when |
|---|---|---|
| EU AI Act | Art. 12: Logging | No governance proxy configured |
| EU AI Act | Art. 14: Human oversight | High/critical-risk tools without approval |
| EU AI Act | Art. 15: Cybersecurity | Skills with prompt injection or malicious code |
| EU AI Act | Art. 9: Risk management | Toxic flows without mitigation |
| GDPR | Art. 30: Processing records | PII found with no processing record |
| GDPR | Art. 32: Security of processing | Hardcoded secrets in codebase |
| GDPR | Art. 35: DPIA | Toxic flows involving private data |
| NIS2 | Art. 21: Supply chain security | Skills with high-severity findings |
| NIS2 | Art. 23: Incident reporting | No audit trail for agent activity |
Each gap includes a specific remediation tied to TapPass pipeline steps.
Output formats
Section titled “Output formats”Rich terminal (default)
Section titled “Rich terminal (default)”tappass assessColor-coded summary with findings grouped by severity. Best for interactive use.
tappass assess --output jsonMachine-readable output for CI/CD pipelines. Structure:
{ "metadata": { "timestamp": "...", "hostname": "..." }, "summary": { "overall_risk_level": "HIGH", "overall_risk_score": 62 }, "code_exposure": { "secrets_found": 3, "pii_exposures": 4 }, "tool_landscape": { "tools_found": 12, "tools_critical": 2 }, "skills": { "skills_found": 5, "skill_critical": 1 }, "compliance_gaps": [ ... ], "tool_policies": [ ... ], "recommended_pipeline": { ... }}Markdown
Section titled “Markdown”tappass assess --report report.md9-section report. Designed to be shared with your CISO, auditor, or DPO.
tappass assess --report report.mdtappass report pdf report.md # → report.pdf (A4, branded)Requires pip install weasyprint.
Command reference
Section titled “Command reference”tappass assess
Section titled “tappass assess”tappass assess [DIRECTORY] [OPTIONS]
Arguments: DIRECTORY Directory to scan (default: current directory)
Options: -o, --output TEXT Output format: rich, json, markdown (default: rich) --no-discover Skip MCP tool discovery (code scan only) --no-scan Skip code scan (tool discovery + skills only) --no-connect Don't connect to MCP servers (parse configs only) -t, --timeout INT Per-server connection timeout in seconds (default: 10) --pipeline/--no-pipeline Write tappass-pipeline.yaml (default: yes) -r, --report TEXT Write Markdown report to filetappass discover
Section titled “tappass discover”tappass discover [OPTIONS]
Options: -o, --output TEXT Output format: rich, json (default: rich) --no-connect Parse configs only: don't connect to servers -t, --timeout INT Per-server connection timeout (default: 10) --pipeline/--no-pipeline Write tappass-pipeline.yaml (default: yes) --identify-as TEXT Client identity for MCP handshake (e.g. "cursor")tappass scan
Section titled “tappass scan”tappass scan [DIRECTORY] [OPTIONS]
Arguments: DIRECTORY Directory to scan (default: current directory)
Options: -o, --output TEXT Output format: rich, json (default: rich)Bait-and-switch protection
Section titled “Bait-and-switch protection”Malicious MCP servers can detect when a security scanner connects and return a clean tool list. When the real client (Cursor, Claude) connects, they get the dangerous tools instead.
--identify-as defeats this:
# See what Cursor would seetappass discover --identify-as cursor
# See what Claude Desktop would seetappass discover --identify-as "Claude Desktop"
# Compare: what does the scanner see?tappass discover --identify-as tappassIf the tool lists differ, you’ve found a bait-and-switch server. The default behavior (without --identify-as) uses the actual client name for each config it scans.
Privacy
Section titled “Privacy”TapPass is built by a Belgian company for European regulations. The assessment is designed with privacy as a hard constraint:
- No data leaves your machine. No cloud APIs, no telemetry, no usage analytics.
- No LLM analysis. All detection uses deterministic pattern matching. regex, checksums, and keyword classifiers. Your code is never sent to an AI model.
- No network calls. Code scanning is fully offline. MCP discovery connects only to the servers already configured on your machine.
- No data storage. Assessment results exist only in your terminal and the files you explicitly write (
--report,--output json).
This is a deliberate design choice. Competitors upload scan data to cloud APIs for “AI-powered analysis.” We don’t. Your code and configuration stay on your laptop.
CI/CD integration
Section titled “CI/CD integration”GitHub Actions
Section titled “GitHub Actions”- name: TapPass governance check run: | pip install tappass tappass assess . --output json --no-connect --no-pipeline > assessment.json # Fail if overall risk is CRITICAL RISK=$(python -c "import json; print(json.load(open('assessment.json'))['summary']['overall_risk_level'])") if [ "$RISK" = "CRITICAL" ]; then echo "::error::TapPass assessment: CRITICAL risk" exit 1 fiGitLab CI
Section titled “GitLab CI”governance: script: - pip install tappass - tappass assess . --output json --no-connect --no-pipeline > assessment.json - | RISK=$(python -c "import json; print(json.load(open('assessment.json'))['summary']['overall_risk_level'])") if [ "$RISK" = "CRITICAL" ]; then exit 1; fi artifacts: paths: - assessment.jsonUse --no-connect in CI/CD: MCP servers won’t be running there. The code scan and skill scan still run.
From assessment to enforcement
Section titled “From assessment to enforcement”The assessment is the start of the conversation. Here’s the typical flow:
tappass assess ← You are here. See the gaps. │ ▼tappass up ← Deploy the proxy.tappass init ← Configure org-wide governance. │ ▼tappass pipelines import tappass-pipeline.yaml ← Import the pipeline the assessment generated. │ ▼export OPENAI_BASE_URL=http://tappass:9620/v1 ← Route agent traffic through governance. │ ▼tappass assess ← Re-run. Watch the gaps close.The assessment generates the exact pipeline config you need. tappass pipelines import loads it. Two commands to go from “exposed” to “governed.”
Q: Do I need a running TapPass server to run tappass assess?
No. The assessment runs entirely as a local CLI tool. No server, no network, no API keys.
Q: What if I only want to scan MCP tools, not my code?
Use tappass assess --no-scan or tappass discover for tools only.
Q: What if I don’t have any MCP servers configured? The tool discovery section will show “0 servers found.” The code scan and skill scan still run. You still get compliance gap mapping.
Q: How is this different from Snyk’s agent-scan?
Snyk scans and uploads results to their cloud. TapPass scans locally and generates enforceable governance config. We’re the runtime control plane. scanning is just the first conversation.
Q: Will the assessment trigger any MCP tool calls?
No. tappass assess only calls list_tools() on each server. it never executes any tools. With --no-connect, it doesn’t even connect to servers.
Q: Where is the assessment history stored?
~/.tappass/assess-history.json. It contains SHA-256 hashes of tool definitions. no secrets, no PII, no code. Safe to commit to version control or share.
Q: How often should I re-run the assessment? After any dependency update, new MCP server configuration, or skill installation. In CI/CD, run it on every merge to main. Frequent runs make rug-pull detection more effective.