Pentest-Tools.com MCP server

The MCP server connects your Pentest-Tools.com capabilities to Claude, Codex, Copilot, or Cursor. Run scans, manage targets, pull findings, and generate reports using natural language. No scripting required.

Run scans and pull findings without leaving your AI tool

You're already in Claude analyzing code or planning your next test, but when you need to run a scan, pull your latest findings, or validate something, you have to switch back to Pentest-Tools.com. Our MCP server removes that friction by letting your LLM client talk directly to your available Pentest-Tools.com capabilities in natural language.

You can rest assured that nothing happens without you noticing, as every execution requires your approval before it runs.


What you can do with the MCP server

Run vulnerability scans

Trigger Website Scanner, Network Scanner, Subdomain Finder, and Port Scanner from inside your AI client. Light or deep, authenticated or not. The LLM handles the parameters.

Example prompt: "Start a deep website scan on staging.example.com and focus on SQL injection."

Manage targets, workspaces, and scans

Add targets, create workspaces, check VPN profiles, delete scans. Almost everything available through the API works through the MCP.

Example prompt: "Create a workspace called Q2 External Assessment and add these three targets."

Generate and translate reports

Build reports filtered by vulnerability type, severity, or scan scope. Export them in PDF, HTML, JSON, CSV, XLSX, or DOCX format, then translate them into any language for regional stakeholders. Translated reports get imported back into the product automatically.

Example prompt: "Generate an executive report from my latest 10 scans, then translate it to German."

Triage findings faster

If you use an agentic IDE like Cursor or VS Code Copilot, you can connect the MCP server alongside your coding environment. Trigger scans on local apps, review findings, and act on them without leaving your editor.

Requires a VPN profile for local/internal targets.

Why use the Pentest-Tools.com MCP server

Resilient to API changes

The LLM reads fresh tool definitions every time it connects. If the API updates, your workflow keeps working without changes on your side.

Simple for the whole team

No documentation to read, no parameters to configure. Ask in natural language, get results. Junior team members and managers can use it from day one.

Predictable and safe

Every tool call needs your explicit approval before it runs. Strict JSON-Schema validation keeps execution reliable. You stay in control.

Open source

Run the server locally if you prefer. Install the package, host it yourself, and only API calls reach Pentest-Tools.com. Nothing else you share with your LLM passes through.

Works with the LLMs you already use

Ready-made configs for the most popular AI clients, including Claude Code, Cursor, VS Code, ChatGPT, and Gemini CLI. Choose between a hosted remote server (no install needed) or run it locally from the open-source package.

Built for security teams, MSPs, and the developers who fix vulnerabilities

Internal security teams

Run assessments, triage findings, and build reports from a single conversation. Less context-switching, faster turnaround.

MSSPs

Scale assessments across clients. Generate localized reports, standardize workflows, and let junior analysts contribute through natural language.

Devs who find and fix vulnerabilities

Test the code you're writing without leaving your IDE. Catch and fix vulnerabilities before they leave your machine.

What you need to get started

  • A Pentest-Tools.com account on any paid plan

  • Your API key (My Account -> API)

  • An MCP-compatible AI client

  • For local server: Python 3.10+ and the pentesttools[mcp] package
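For the local server, setup can be sketched in a few commands. The package name and the ptt mcp command come from this page; the environment variable name below is illustrative, so check the setup guide for the exact one your configuration expects.

```shell
# Install the package with the MCP extra (requires Python 3.10+)
pip install "pentesttools[mcp]"

# Make your API key available to the server
# (variable name is a placeholder; see the setup guide)
export PTT_API_KEY="your-api-key-here"

# Start the local MCP server your AI client will connect to
ptt mcp
```

Once the server is running, point your MCP-compatible client at it and it will read the available tool definitions automatically.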

Try it now

Connect your Pentest-Tools.com account to your AI tools in a few minutes. Full documentation and step-by-step setup guides are available.

Pentest-Tools.com MCP server FAQs

Which plans support the MCP server?

Available on all paid plans. You'll need to generate an API key from My Account -> API inside the product. Free plan users don't have API access, so the MCP server won't work on the free edition.

Local or remote server - which should I use?

The remote server requires no installation and works with just your API key and a compatible client. Use it if you want the fastest setup. The local server requires Python 3.10+ and the pentesttools[mcp] package, but keeps all communication between your LLM client and the Pentest-Tools.com API, with nothing passing through our hosted MCP server. If you're working with sensitive client data or operating under strict data handling policies, the local option gives you more control.
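As a sketch, a typical MCP client configuration for the two options might look like the fragment below. The exact file location and top-level key vary by client (many use an mcpServers object), and the server URL and API key here are placeholders, not real values; only the Authorization header and the ptt mcp command come from the setup notes above.

```json
{
  "mcpServers": {
    "pentest-tools-remote": {
      "url": "https://mcp.example.com/pentest-tools",
      "headers": { "Authorization": "Bearer YOUR_API_KEY" }
    },
    "pentest-tools-local": {
      "command": "ptt",
      "args": ["mcp"]
    }
  }
}
```

In practice you'd configure only one of the two entries: the remote one when you want zero-install setup, the local one when you want all traffic to flow directly between your machine and the Pentest-Tools.com API.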

Can I scan internal or private targets?

Yes. You'll need a VPN profile configured in your Pentest-Tools.com account to make your internal environment reachable by our scanners. To get access, you’ll need to buy the “Internal network scanning” add-on.

Is the MCP server open source?

Yes. The Python package is publicly available on GitHub at https://github.com/pentesttoolscom/pentesttools-pypi. You can inspect the code and host it yourself. When running locally, only standard API calls reach Pentest-Tools.com. Nothing else you share with your LLM passes through our infrastructure.

Which AI clients are supported?

Any client that supports the Model Context Protocol. We provide ready-made configuration guides for Claude, VS Code, and Gemini CLI. Other MCP-compatible clients will also work. You can connect using either the remote server URL with an Authorization header, or by running the local server via the ptt mcp command.

Can the MCP server run actions without my approval?

Not by default. Most MCP clients ask for your explicit confirmation before executing any tool call. You can see exactly which API call the LLM is about to make, and you can reject it. Some clients do allow you to disable these confirmation prompts, so only do that if you're confident in your setup.

What happens if the API changes or updates?

The LLM reads the full tool specification from the MCP server every time it connects. If we update the API or change parameters, the LLM picks up the new definitions automatically. You don't need to update any scripts, configs, or client-side code. This is one of the key advantages over writing direct API integrations.

Can I run multiple scans at the same time?

Yes, within limits. True parallel execution across multiple scans depends on your plan's parallel scan limit and on how your LLM client handles concurrent tool calls; most clients don't fully support concurrency yet.

Does the MCP server support authenticated scanning?

Yes. The Website Scanner supports login form, cookie, and header authentication through the MCP. Authenticated scanning involves more complex parameters than a standard scan, so you may need to be more explicit in your prompt about how the login works (URL, credentials, form fields). Recorded authentication is not supported, as the API doesn't support that method either.

What are the limitations I should know about?

A few things to keep in mind:

  • The report translation tool (translate_report) is capped at roughly 50 findings per hour due to API import limits.

  • Your scan capacity (parallel scans, queued scans, assets per cycle) depends on your subscription plan.

  • Some clients, including VS Code, can't display progress bars for long-running scans, though you'll still get status updates.

  • The MCP intentionally exposes a small number of tools to keep LLM performance reliable, since most clients degrade as the tool count grows.

Can I combine the MCP server with other MCP servers?

Yes. Since MCP is an open standard, you can configure multiple MCP servers in the same client. For example, connect the Pentest-Tools.com MCP alongside Linear to push findings directly into project tasks, or alongside GitHub to reference code while reviewing scan results. The more MCP servers you connect, the more you can automate in a single conversation.

Does my LLM see my scan data or credentials?

Your LLM client processes whatever data the MCP server returns, which includes scan results, findings, and report content. This is the same data you'd see in the API response. If you run the hosted version, Pentest-Tools.com sees both the API calls and the MCP tool calls. If you run locally, we only see API calls. Your API key is passed as an environment variable or HTTP header and is not shared with the LLM itself. However, if you use authenticated scanning and include credentials in your prompt, those credentials are visible to your LLM provider.