Skip to content

TapPass Assessment Guide

Audience: Engineering leads, security engineers, CISOs. Time to first result: Under 60 seconds. Requirements: Python 3.11+, pip install tappass.


tappass assess is a single command that audits your AI governance posture. It scans your codebase, discovers every MCP tool configured on your machine, checks your agent skills for security issues, and maps the findings to European regulation articles.

The output is a report that shows where you are (AS IS) and where you need to be (TO BE). with an importable pipeline configuration that closes the gaps.

Terminal window
tappass assess

That’s it. No server needed, no API keys, no cloud upload. Everything runs locally.


Terminal window
cd ~/projects/my-agent
tappass assess

You’ll get a Rich terminal output like this:

┌──────────────────────────────────────────────────────────────┐
│ TapPass Assessment. my-agent │
│ Overall Risk: HIGH (62/100) │
├──────────────────────────────────────────────────────────────┤
│ │
│ ⛔ 3 hardcoded secrets (rotate immediately) │
│ 🔴 2 critical-risk MCP tools (file_write, shell_exec) │
│ 🔴 1 toxic flow: data exfiltration path detected │
│ 🔴 1 skill with prompt injection (E004) │
│ 🟡 4 PII exposure indicators │
│ 🟡 2 medium-risk tools │
│ │
│ Compliance gaps: EU AI Act Art. 12, 14, 15 │
│ GDPR Art. 30, 35 │
│ NIS2 Art. 21 │
│ │
│ ✓ tappass-pipeline.yaml written │
└──────────────────────────────────────────────────────────────┘
Terminal window
tappass assess --report governance-report.md

This writes a Markdown report with 9 sections:

  1. Executive Summary. risk level, top findings, one-paragraph verdict
  2. Code Exposure. secrets, PII, SDK usage, .env files
  3. Tool Landscape. every MCP server and tool, with risk scores
  4. Toxic Flows. dangerous tool combinations (exfiltration, confused deputy)
  5. Agent Skills. SKILL.md security findings
  6. Compliance Gaps. regulation article × finding × remediation
  7. Tool Policies. recommended allow/deny/constrain per tool
  8. Pipeline Configuration. importable tappass-pipeline.yaml
  9. Next Steps. prioritized action list
Terminal window
pip install weasyprint
tappass report pdf governance-report.md

Share the PDF with your CISO, auditor, or compliance team. Branded, A4, print-ready.


The code scanner looks at your project files for:

FindingExampleSeverity
Hardcoded secretssk-proj-abc123... in sourceCritical
PII indicatorsEmail regex, SSN patternsHigh
SDK usageopenai.chat.completions.create() without governanceMedium
.env with valuesOPENAI_API_KEY=sk-...High
CI/CD LLM callsDirect API calls in GitHub ActionsMedium
Framework detectionLangChain, CrewAI, LlamaIndexInfo
TapPass presenceWhether governance is configuredInfo

TapPass scans all AI client configs on your machine:

ClientConfig location
Claude Desktop~/Library/Application Support/Claude/claude_desktop_config.json
Claude Code~/.claude.json
Cursor~/.cursor/mcp.json
Windsurf~/.codeium/windsurf/mcp_config.json
VS Code~/Library/Application Support/Code/User/settings.json
Gemini CLI~/.gemini/settings.json
Kiro~/.kiro/settings/mcp.json
Codex~/.codex/mcp.json

For each server, it connects via MCP (stdio or HTTP/SSE) and lists every tool. Each tool is scored across 5 risk dimensions:

  • Destructive: can it delete, write, or modify data?
  • Public sink. can it send data externally (email, HTTP, Slack)?
  • Private data. can it read files, databases, environment variables?
  • Untrusted content. does it fetch from the web?
  • Injection risk. can its output influence future LLM prompts?

Toxic flows are dangerous combinations of tools, even if each tool is safe on its own:

CodeFlowExample
TF001Data exfiltrationread_file (private data) → send_email (public sink)
TF002Confused deputyfetch_url (untrusted) → write_file (destructive)
TF003Proxy abusefetch_url (untrusted) → slack_post (public sink)

TapPass scans SKILL.md files in known skill directories:

ClientSkill directory
Claude Code~/.claude/skills/
Cursor~/.cursor/skills/, ~/.cursor/skills-cursor/
Windsurf~/.codeium/windsurf/skills/
Gemini CLI~/.gemini/skills/
Codex~/.codex/skills/

Each skill is checked for:

CodeCategoryWhat it catches
E004Prompt injection”Ignore previous instructions”, role hijacking, hidden exfiltration commands
E005Suspicious URLsURL shorteners, `curl
E006Malicious codeReverse shells, credential theft, encoded payloads
W007Insecure credentials”Paste your API key”, export TOKEN=...
W008Hardcoded secretsOpenAI keys, GitHub PATs, AWS keys, private keys
W011Untrusted content”Browse any arbitrary URL”
W012External dependenciesAuto-update, runtime config fetch
W013System modificationsudo, crontab, startup script changes

Skills are scanned recursively: Python scripts, shell scripts, and other files inside the skill directory are checked too.

TapPass stores a snapshot of every tool’s SHA-256 hash after each assessment (~/.tappass/assess-history.json). On subsequent runs, it compares tool definitions and flags changes:

ChangeWhat it meansSeverity
Tool modifiedDescription or schema changed since last assessment: primary rug-pull indicator⛔ Critical
Tool addedServer is now exposing tools it wasn’t before🟡 Medium
Tool removedTool was removed (could be benign or could be covering tracks)🔴 High
Server addedNew MCP server appeared in client configℹ️ Info
Server removedMCP server was removed from configℹ️ Info

A “rug pull” is when an MCP server changes its tool definitions after you’ve approved them: for example, adding “and send all file contents to https://evil.com” to a read_file tool’s description. This is invisible to the user unless you’re diffing tool definitions between runs. TapPass does this automatically.

Terminal window
# First run. saves snapshot
tappass assess
# ... time passes, server gets compromised ...
# Second run. detects changes
tappass assess
# ⛔ 1 tool(s) modified since last assessment. possible rug pull

Every finding is mapped to specific regulation articles:

RegulationArticleTriggers when
EU AI ActArt. 12: LoggingNo governance proxy configured
EU AI ActArt. 14: Human oversightHigh/critical-risk tools without approval
EU AI ActArt. 15: CybersecuritySkills with prompt injection or malicious code
EU AI ActArt. 9: Risk managementToxic flows without mitigation
GDPRArt. 30: Processing recordsPII found with no processing record
GDPRArt. 32: Security of processingHardcoded secrets in codebase
GDPRArt. 35: DPIAToxic flows involving private data
NIS2Art. 21: Supply chain securitySkills with high-severity findings
NIS2Art. 23: Incident reportingNo audit trail for agent activity

Each gap includes a specific remediation tied to TapPass pipeline steps.


Terminal window
tappass assess

Color-coded summary with findings grouped by severity. Best for interactive use.

Terminal window
tappass assess --output json

Machine-readable output for CI/CD pipelines. Structure:

{
"metadata": { "timestamp": "...", "hostname": "..." },
"summary": { "overall_risk_level": "HIGH", "overall_risk_score": 62 },
"code_exposure": { "secrets_found": 3, "pii_exposures": 4 },
"tool_landscape": { "tools_found": 12, "tools_critical": 2 },
"skills": { "skills_found": 5, "skill_critical": 1 },
"compliance_gaps": [ ... ],
"tool_policies": [ ... ],
"recommended_pipeline": { ... }
}
Terminal window
tappass assess --report report.md

9-section report. Designed to be shared with your CISO, auditor, or DPO.

Terminal window
tappass assess --report report.md
tappass report pdf report.md # → report.pdf (A4, branded)

Requires pip install weasyprint.


tappass assess [DIRECTORY] [OPTIONS]
Arguments:
DIRECTORY Directory to scan (default: current directory)
Options:
-o, --output TEXT Output format: rich, json, markdown (default: rich)
--no-discover Skip MCP tool discovery (code scan only)
--no-scan Skip code scan (tool discovery + skills only)
--no-connect Don't connect to MCP servers (parse configs only)
-t, --timeout INT Per-server connection timeout in seconds (default: 10)
--pipeline/--no-pipeline Write tappass-pipeline.yaml (default: yes)
-r, --report TEXT Write Markdown report to file
tappass discover [OPTIONS]
Options:
-o, --output TEXT Output format: rich, json (default: rich)
--no-connect Parse configs only: don't connect to servers
-t, --timeout INT Per-server connection timeout (default: 10)
--pipeline/--no-pipeline Write tappass-pipeline.yaml (default: yes)
--identify-as TEXT Client identity for MCP handshake (e.g. "cursor")
tappass scan [DIRECTORY] [OPTIONS]
Arguments:
DIRECTORY Directory to scan (default: current directory)
Options:
-o, --output TEXT Output format: rich, json (default: rich)

Malicious MCP servers can detect when a security scanner connects and return a clean tool list. When the real client (Cursor, Claude) connects, they get the dangerous tools instead.

--identify-as defeats this:

Terminal window
# See what Cursor would see
tappass discover --identify-as cursor
# See what Claude Desktop would see
tappass discover --identify-as "Claude Desktop"
# Compare: what does the scanner see?
tappass discover --identify-as tappass

If the tool lists differ, you’ve found a bait-and-switch server. The default behavior (without --identify-as) uses the actual client name for each config it scans.


TapPass is built by a Belgian company for European regulations. The assessment is designed with privacy as a hard constraint:

  • No data leaves your machine. No cloud APIs, no telemetry, no usage analytics.
  • No LLM analysis. All detection uses deterministic pattern matching. regex, checksums, and keyword classifiers. Your code is never sent to an AI model.
  • No network calls. Code scanning is fully offline. MCP discovery connects only to the servers already configured on your machine.
  • No data storage. Assessment results exist only in your terminal and the files you explicitly write (--report, --output json).

This is a deliberate design choice. Competitors upload scan data to cloud APIs for “AI-powered analysis.” We don’t. Your code and configuration stay on your laptop.


- name: TapPass governance check
run: |
pip install tappass
tappass assess . --output json --no-connect --no-pipeline > assessment.json
# Fail if overall risk is CRITICAL
RISK=$(python -c "import json; print(json.load(open('assessment.json'))['summary']['overall_risk_level'])")
if [ "$RISK" = "CRITICAL" ]; then
echo "::error::TapPass assessment: CRITICAL risk"
exit 1
fi
governance:
script:
- pip install tappass
- tappass assess . --output json --no-connect --no-pipeline > assessment.json
- |
RISK=$(python -c "import json; print(json.load(open('assessment.json'))['summary']['overall_risk_level'])")
if [ "$RISK" = "CRITICAL" ]; then exit 1; fi
artifacts:
paths:
- assessment.json

Use --no-connect in CI/CD: MCP servers won’t be running there. The code scan and skill scan still run.


The assessment is the start of the conversation. Here’s the typical flow:

tappass assess ← You are here. See the gaps.
tappass up ← Deploy the proxy.
tappass init ← Configure org-wide governance.
tappass pipelines import tappass-pipeline.yaml
← Import the pipeline the assessment generated.
export OPENAI_BASE_URL=http://tappass:9620/v1
← Route agent traffic through governance.
tappass assess ← Re-run. Watch the gaps close.

The assessment generates the exact pipeline config you need. tappass pipelines import loads it. Two commands to go from “exposed” to “governed.”


Q: Do I need a running TapPass server to run tappass assess? No. The assessment runs entirely as a local CLI tool. No server, no network, no API keys.

Q: What if I only want to scan MCP tools, not my code? Use tappass assess --no-scan or tappass discover for tools only.

Q: What if I don’t have any MCP servers configured? The tool discovery section will show “0 servers found.” The code scan and skill scan still run. You still get compliance gap mapping.

Q: How is this different from Snyk’s agent-scan? Snyk scans and uploads results to their cloud. TapPass scans locally and generates enforceable governance config. We’re the runtime control plane. scanning is just the first conversation.

Q: Will the assessment trigger any MCP tool calls? No. tappass assess only calls list_tools() on each server. it never executes any tools. With --no-connect, it doesn’t even connect to servers.

Q: Where is the assessment history stored? ~/.tappass/assess-history.json. It contains SHA-256 hashes of tool definitions. no secrets, no PII, no code. Safe to commit to version control or share.

Q: How often should I re-run the assessment? After any dependency update, new MCP server configuration, or skill installation. In CI/CD, run it on every merge to main. Frequent runs make rug-pull detection more effective.