
MCP Servers: Connecting Claude to Real Infrastructure

7 April 2026
#Claude Code#AI#MCP#Infrastructure#DevTools

The last post covered skills and plugins: reusable expertise that tells Claude how to do things. A skill knows the methodology. It knows the steps. But it can only work with what Claude can already see.

That's the codebase in front of it. Files, terminal output, whatever you paste into the conversation. Everything else (your infrastructure, your databases, your running services) lives behind a wall Claude can't reach.

MCP servers remove that wall.

What MCP Actually Is

MCP stands for Model Context Protocol. The name is more intimidating than the concept.

An MCP server is a small program that gives Claude access to an external system. It exposes tools. Claude calls those tools the same way it runs a shell command or reads a file. The difference is that the tool might query a database, take a browser screenshot, or deploy a container to your NAS.

Add a server, and Claude gets new capabilities. Remove it, and those capabilities go away. The interface is standardised. Any MCP server works with any MCP-compatible client: Claude Code, the desktop app, the web app, or other AI tools that support the protocol.

You don't need to understand the protocol to use them. You install a server, point it at your infrastructure, and Claude starts using the tools it provides. The protocol handles the plumbing.

The Servers I Use

I have a handful of MCP servers that stay on permanently. Cloudflare, GitHub, Context7, and Chrome DevTools. The rest I enable when I need them and disable when I don't. Here's how I use the core ones, and a few of the situational servers that are worth mentioning.

Cloudflare — Where My Sites Run

This site runs on Cloudflare Workers. The blog data lives in D1. Images live in R2. The Cloudflare MCP server gives Claude direct access to all of it.

I can query the blog database mid-conversation. Check how many posts are published, pull content for review, verify a migration ran correctly. I can list Workers, check KV namespaces, manage R2 buckets. It's infrastructure management without leaving the terminal.

The moment this clicked for me was when I was writing the previous blog post. I needed to check the exact slug and status of a draft in D1. Instead of opening the Cloudflare dashboard, finding the database, writing a SQL query, I just asked. Claude queried D1, showed me the results, and we kept working. Trivial on its own. But those small context switches add up across a day of work.

GitHub — PR and Issue Management

GitHub is the other server I can't turn off. PR creation, issue management, code search across repos. It handles the full workflow. Create a branch, push changes, open a PR with a description, all from conversation. No tab switching to the GitHub UI for routine operations.

Context7 — Current Documentation on Demand

This one is easy to underestimate. Context7 fetches current library documentation in real time. When Claude is writing code that uses a specific library, it can pull the latest docs instead of relying on training data.

Training data has a cutoff. Libraries change. APIs get deprecated, new methods get added, configuration formats evolve. Without Context7, Claude sometimes generates code using outdated patterns. With it, Claude checks the current documentation first.

I use it constantly when working with Next.js, Cloudflare Workers, and Drizzle ORM. All three move fast. The difference between "this worked six months ago" and "this works now" matters when you're deploying to production.

Chrome DevTools — Live Browser Inspection

The Chrome DevTools MCP server connects Claude to a running browser. It can navigate pages, take screenshots, inspect the DOM, read console output, monitor network requests, and run Lighthouse audits.

For frontend work, this is the one that changes the workflow most. Instead of describing what you see on screen, Claude sees it directly. "The layout breaks on mobile" becomes Claude taking a screenshot, identifying the issue, fixing the CSS, and taking another screenshot to confirm. The feedback loop tightens from minutes to seconds.

I use it alongside the SEO skills from the last post. The visual analysis agent takes screenshots at desktop and mobile breakpoints. The performance agent runs Lighthouse. Having real browser data instead of guessing makes the analysis credible.

Enabled When Needed

The rest come and go depending on the task.

TrueNAS is the one I'm most proud of. I forked an existing server and heavily customised it. Rewrote authentication, added security validation that blocks privileged containers and dangerous mounts, built Docker Compose to TrueNAS Custom App conversion, added auto-reconnect for dead WebSocket connections. The fork has 22 tools, 165 tests, and 80% coverage. When I need to deploy a service to my NAS, I enable it. Claude reads a Docker Compose file, converts it to TrueNAS format, deploys it, and verifies it's running. One conversation instead of thirty minutes of tab switching.
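The Compose-to-Custom-App conversion can be sketched roughly like this. It's a simplified illustration, not the fork's actual code, and the payload field names on the TrueNAS side are assumptions rather than the real API schema:

```python
def compose_to_custom_app(service_name: str, service: dict) -> dict:
    """Map one Docker Compose service onto a Custom App payload (illustrative schema)."""
    image = service["image"]
    if ":" in image:
        repo, tag = image.rsplit(":", 1)
    else:
        repo, tag = image, "latest"
    return {
        "app_name": service_name,
        "image": {"repository": repo, "tag": tag},
        # Compose "HOST:CONTAINER" port strings become explicit forward objects
        "port_forwards": [
            {"container_port": int(c), "node_port": int(h)}
            for h, c in (p.split(":") for p in service.get("ports", []))
        ],
        "environment": service.get("environment", {}),
    }

service = {
    "image": "ghcr.io/example/app:1.2",
    "ports": ["8080:80"],
    "environment": {"TZ": "UTC"},
}
app = compose_to_custom_app("app", service)
```

The real conversion has to handle volumes, restart policies, and multi-service files, but the shape is the same: parse the Compose structure, translate each concept, and emit the payload the TrueNAS API expects.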

n8n handles workflow automation. It runs on my homelab. The MCP server lets Claude create, test, and manage workflows conversationally. Faster than clicking through the node editor for anything beyond a simple two-step automation. The documentation server alongside it means Claude pulls current n8n docs instead of guessing at node configurations.

Playwright provides browser automation for testing. Serena does semantic code navigation, understanding symbols and their relationships rather than just text search.

Building Your Own

If the system you need isn't covered by an existing server, you have two options: fork one that's close and adapt it, or build from scratch. I'd look for something close first. The TrueNAS server started as a fork. Most of my work was enhancing it to fit my setup, not writing MCP plumbing from zero.

The MCP SDK is available in Python and TypeScript. You define tools with names, descriptions, and parameters. Each tool is a function that does something and returns a result. The SDK handles the protocol, transport, and communication with the client.

from mcp.server.fastmcp import FastMCP

server = FastMCP("truenas")

@server.tool()
async def get_system_info() -> dict:
    """Get TrueNAS system information."""
    client = await get_client()  # authenticated TrueNAS API client, defined elsewhere
    info = await client.get_system_info()
    return {"hostname": info.hostname, "version": info.version}  # ...plus other fields

That's the shape of it. Define what the tool does, handle the API call, return structured data. The description matters because it's how Claude decides when to use the tool. Same principle as skill trigger descriptions from the last post.

The harder parts are authentication, error handling, and security validation. Exposing your NAS to an AI tool means thinking about what operations should be allowed. My server blocks privileged containers and dangerous filesystem mounts by default. That kind of guardrail belongs in the server, not in a prompt.
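A minimal sketch of that kind of guardrail, assuming a simple dict-based container spec. The field names and the list of blocked paths are illustrative, not my server's actual rules:

```python
# Host paths that should never be bind-mounted into a container.
DANGEROUS_HOST_PATHS = {"/", "/etc", "/root", "/boot", "/var/run/docker.sock"}

def validate_container_spec(spec: dict) -> list[str]:
    """Return security violations for a container spec; an empty list means it passes."""
    violations = []
    if spec.get("privileged"):
        violations.append("privileged containers are not allowed")
    for mount in spec.get("mounts", []):
        if mount.get("host_path") in DANGEROUS_HOST_PATHS:
            violations.append(f"mount of {mount['host_path']} is blocked")
    return violations

bad_spec = {
    "privileged": True,
    "mounts": [{"host_path": "/etc", "container_path": "/host-etc"}],
}
problems = validate_container_spec(bad_spec)
```

The point is that the check runs in the server, before any API call is made. A prompt can be talked around; a validation function can't.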

Test thoroughly. The mock mode in the TrueNAS server lets me run the full test suite without a live NAS connection. 165 tests might seem like overkill for an MCP server, but when the tool is managing your storage infrastructure, you want confidence it does what you expect.
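The idea behind mock mode can be sketched by injecting a stub client into the tool body. The names here are illustrative, not the fork's actual test code:

```python
import asyncio
from types import SimpleNamespace

class MockTrueNASClient:
    """Stands in for the live TrueNAS API client when no NAS is reachable."""
    async def get_system_info(self):
        return SimpleNamespace(hostname="truenas-test", version="25.04")

async def get_system_info_tool(client) -> dict:
    """The tool body under test, with the client passed in rather than created inline."""
    info = await client.get_system_info()
    return {"hostname": info.hostname, "version": info.version}

# The full test suite can run against the mock with no live connection.
result = asyncio.run(get_system_info_tool(MockTrueNASClient()))
```

Structuring tools so the client is injectable is what makes the whole suite runnable offline.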

If you build something useful, open source it. I published my TrueNAS fork because other homelabbers have the same needs. The original author gets contributions back, the ecosystem grows, and someone else doesn't have to solve the same problems from scratch.

What I Got Wrong

A few lessons from accumulating MCP servers.

Too many servers at once creates noise. Each server adds tools to Claude's context. A dozen servers can mean over 100 tools available in every conversation. Most of the time, you need three or four. The rest are consuming context for nothing. I'm more selective now about which servers are active globally versus enabled per project.

Security needs thought, not afterthoughts. An MCP server with write access to your NAS or your production database is powerful. It's also a risk if the tool descriptions are ambiguous or the guardrails are missing. Think about what you're exposing before you connect it. Least privilege applies here the same way it applies everywhere else.

Not all servers are equal quality. Some community servers are well-tested and maintained. Others are weekend projects with no error handling. Before connecting a server to anything important, read the code. Check the test coverage. Understand what it's doing with your credentials.

Server descriptions matter more than you'd expect. If the tool descriptions are vague, Claude either won't use the server when it should, or will use it when it shouldn't. Good descriptions include when to use the tool and what it returns. This is the same lesson as skill trigger descriptions. The description is the interface between Claude and the capability.
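For example, contrast a vague description with one that states when to use the tool and what it returns. Both functions are hypothetical, and in the real SDK each would sit under a @server.tool() decorator:

```python
async def query(q: str) -> list:
    """Run a query."""  # vague: a query of what? when? returning what?

async def query_blog_db(sql: str) -> list[dict]:
    """Run a read-only SQL query against the blog's D1 database.

    Use this to check post metadata (slug, status, publish date) or to
    verify a migration ran. Returns rows as a list of dicts.
    """
```

Claude only ever sees the name, the parameters, and that docstring. If the docstring doesn't say when the tool applies, Claude has to guess.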

What's Next

Skills tell Claude how to do things. MCP servers give Claude access to the systems where things happen. But in both cases, you're still driving. You ask, Claude acts, you review, you ask again.

The next step is agents. Claude working autonomously. Spawning subagents that run in parallel, delegating tasks, making decisions within defined boundaries. That's the final post in this series.
