What I've Learned Building Projects with Claude Code

I've been using Claude Code as my daily pair programmer for about a year now. Across multiple projects — Terraform modules managing 400+ Cloudflare zones, a full-stack price comparison platform with 4,800+ tests, automated workflows, a mobile file converter, and this website — it's become a core part of how I build software.
Here's what I've picked up along the way.
It Won't Replace Your Thinking
The biggest misconception is that you hand over a problem and get a solution back. It doesn't work like that. The better mental model is a pair programmer who's read every Stack Overflow answer but has never worked at your company.
Claude doesn't know your system's quirks. It doesn't know that your n8n Code node doesn't have fetch, or that TrueNAS will silently fail if you use update_custom_app instead of update_compose_config. You learn that through building — and then you teach it.
That's where the real workflow starts.
The CLAUDE.md File Changed Everything
Early on, I was repeating myself every conversation. "Don't use that API, it's deprecated." "Test coverage must stay above 80%." "The CI pipeline works like this." Every new chat started from zero.
So I started maintaining a CLAUDE.md file in each project — a plain markdown doc that acts as institutional memory. The rules, the gotchas, the architecture decisions, the hard-won lessons from previous sessions.
My Terraform project at work has a 500+ line CLAUDE.md. TechPartPrices has 860+ lines. They grow organically — every time I hit a wall and solve it, the fix goes into CLAUDE.md so I never hit it again.
The result — a single engineer delivering what would normally need a platform team. Not because AI wrote all the code, but because it remembered all the context.
It's Like Onboarding a Developer Every Morning
The best way I can describe it is onboarding a very skilled but brand-new hire — every single morning. They're talented, they learn fast, but they don't know your codebase yet.
CLAUDE.md is their onboarding doc. The better you write it, the faster they're productive. I structure mine with:
- Project overview — what this is, how it's deployed
- Commands — how to build, test, deploy
- Architecture — key patterns and why they exist
- Gotchas — the things that'll waste your afternoon if you don't know them
- Rules — test coverage thresholds, commit conventions, things that must not break
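A minimal skeleton following that structure — every entry here is illustrative placeholder content, not my actual files:

```markdown
# CLAUDE.md

## Project Overview
Next.js app, deployed to Cloudflare Workers.

## Commands
- `npm test` — full suite; coverage must stay above 80%
- `npm run deploy` — production deploy

## Architecture
- All DB access goes through the ORM; no raw SQL in route handlers.

## Gotchas
- The n8n Code node has no `fetch` — use the HTTP Request node instead.

## Rules
- Conventional commit messages. Never push directly to main.
```

The Gotchas section is the one that compounds in value — every solved wall goes in as a single line.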
Sounds like effort, but it pays for itself within a day.
Push Complexity Into Deterministic Code
Here's a pattern I learned the hard way — AI is roughly 90% accurate per step. Sounds great until you chain 5 steps together and you're at 59% accuracy. For 10 steps? 35%.
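The compounding is just repeated multiplication — a quick check of those numbers:

```python
# Per-step accuracy compounds multiplicatively across a chain of steps.
per_step = 0.90

for steps in (1, 5, 10):
    chain_accuracy = per_step ** steps
    print(f"{steps} steps: {chain_accuracy:.0%}")

# 5 steps → ~59%, 10 steps → ~35%
```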
The solution is to push complexity out of the AI's decision-making and into deterministic code. I use a three-layer approach:
- Directive layer — SOPs written in markdown that describe what should happen
- Orchestration layer — Claude reads the directives and decides what to do next
- Execution layer — Python scripts that do the actual work, reliably, every time
Claude orchestrates. Code executes. The AI decides what to do — the scripts guarantee how it's done.
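The shape of the loop can be sketched with stubs — the directive name, the `ask_claude` call, and the script registry are all hypothetical stand-ins for the real pieces:

```python
# Execution layer: deterministic scripts, registered by name.
# Each does exactly one thing, the same way every time.
def run_tests() -> str:
    return "tests passed"      # stub: would shell out to the real test runner

def deploy() -> str:
    return "deployed"          # stub: would call the real deploy script

SCRIPTS = {"run_tests": run_tests, "deploy": deploy}

def ask_claude(directive: str, history: list[str]) -> str:
    """Orchestration layer stub: the model reads the directive plus what has
    happened so far, then names the next script to run, or 'done'."""
    plan = ["run_tests", "deploy", "done"]
    return plan[len(history)]  # stand-in for a real model call

def orchestrate(directive: str) -> list[str]:
    history: list[str] = []
    while (step := ask_claude(directive, history)) != "done":
        # The AI chose *what* to do; deterministic code guarantees *how*.
        history.append(SCRIPTS[step]())
    return history

print(orchestrate("deploy-update.md"))  # → ['tests passed', 'deployed']
```

Deciding one step at a time, with deterministic execution in between, is what stops the per-step error rate from compounding.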
Verify Visually Before Testing
For anything with a UI, I verify visually before writing tests. The instinct with AI-assisted development is to write the code, write the tests, and move on. The problem — you can have a green test suite and a screen that looks like it was built during a power cut.
My workflow for frontend tasks: implement → screenshot → verify it looks right → then write the tests. For backend work, it's the opposite — tests first. But for UI, the eyes come first.
Fetch the Docs Every Time
One rule I don't break — before implementing anything involving a library or framework, fetch the latest documentation first. I use Context7 for this. It pulls current docs so Claude isn't working from training data that might be months out of date.
This single habit has saved me from countless issues with outdated API patterns, deprecated methods, and missed features. Tailwind v4 changed how utility classes work under the hood. Without current docs, Claude would happily write Tailwind v3 patterns that silently break.
Extending Claude with MCP Servers, Plugins, Skills, and Agents
Out of the box, Claude Code is a capable pair programmer. The real shift came when I started wiring it into my actual infrastructure — so it's not just writing code, it's operating my systems.
I run multiple plugins and MCP servers. Here's how they fit together.
MCP Servers — Connecting Claude to Your Infrastructure
MCP (Model Context Protocol) servers give Claude direct access to external tools and services. Instead of copying error messages back and forth, Claude can just look.
I built my own TrueNAS MCP server — forked an existing project and extended it to fit my homelab. It connects Claude to my NAS with 22 tools for checking app status, updating Docker Compose configs, managing ZFS snapshots, and monitoring storage. When I'm deploying Docker containers, Claude doesn't need me to SSH in and paste logs. It reads them directly.
My n8n MCP server does the same for workflow automation. Claude can create, update, validate, and test n8n workflows without me touching the UI. The blog post topic curator for TechPartPrices — a 49-node workflow with a two-trigger state machine, Telegram integration, and DALL-E image generation — was built almost entirely through Claude talking to the n8n API.
Context7 fetches live documentation for any library, so Claude is never working from stale training data. Prevents more bugs than any linter.
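Under the hood, an MCP server is essentially a catalogue of named tools behind a JSON-RPC 2.0 interface. A minimal sketch of the tool-dispatch shape — the `app_status` tool is a made-up stand-in, not one of the real TrueNAS tools:

```python
import json

# Hypothetical tool standing in for something like "check app status".
def app_status(app: str) -> str:
    return f"{app}: running"

TOOLS = {"app_status": app_status}

def handle(request_json: str) -> str:
    """Dispatch an MCP-style tools/call request (JSON-RPC 2.0)."""
    req = json.loads(request_json)
    if req["method"] == "tools/call":
        params = req["params"]
        text = TOOLS[params["name"]](**params["arguments"])
        result = {"content": [{"type": "text", "text": text}]}
        return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
    raise ValueError(f"unsupported method: {req['method']}")

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "app_status", "arguments": {"app": "jellyfin"}},
})
print(handle(request))
```

The model never sees the implementation — just the tool names, their schemas, and the text that comes back.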
Plugins — Specialised Capabilities
Plugins add focused tools for specific tasks. Chrome DevTools MCP lets Claude inspect live pages, take screenshots, and debug CSS issues directly in the browser — I used it today to diagnose a Tailwind v4 specificity issue. Playwright handles automated UI testing and visual verification. The GitHub plugin manages PRs and issues without leaving the terminal.
Serena gives Claude semantic code navigation — it can find symbols, trace references, and understand architecture without reading entire files. For large codebases, that's the difference between Claude being useful and Claude being lost.
Skills — Reusable Expertise
Skills are like playbooks. I've built a suite of 13 SEO skills that handle everything from technical audits to schema markup generation. Instead of explaining what an SEO audit involves every time, the skill encodes the methodology. Claude runs the audit, delegates to specialist sub-agents, and produces a scored report.
The pattern works for any domain expertise you find yourself repeating. Package it as a skill — it becomes a one-command operation.
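One way to picture a skill: the methodology lives in a document, and invoking the skill just hands the model that checklist. A toy sketch — the file format and parsing here are illustrative, not how Claude Code skills actually work internally:

```python
# A skill as a packaged playbook: methodology written once, invoked by name.
SKILLS = {
    "seo-audit": """\
# SEO Audit
- Crawl every indexable page
- Check title and meta description lengths
- Validate schema markup
- Produce a scored report
""",
}

def load_skill(name: str) -> list[str]:
    """Extract the checklist steps the model should follow."""
    doc = SKILLS[name]
    return [line[2:] for line in doc.splitlines() if line.startswith("- ")]

for step in load_skill("seo-audit"):
    print(step)
```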
Agents — Autonomous Problem Solving
Agents take this further. Instead of Claude doing one thing at a time, agents spin up specialised sub-processes that work in parallel. A frontend developer agent handles React and CSS. A software architect agent evaluates trade-offs. An SEO agent crawls pages and delegates to six specialist sub-agents simultaneously.
The key insight is delegation. When I run a full site audit, the main agent doesn't do all the work — it launches a technical SEO agent, a content quality agent, a performance agent, a schema agent, all running concurrently. What would take an hour of sequential analysis happens in minutes.
I've built custom agents for specific workflows too. TechPartPrices uses agents for code review, test generation, and deployment validation. Each agent has its own tools, its own system prompt, its own area of expertise — like having a small team, each focused on what they do best.
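The fan-out in that audit example can be sketched with a thread pool — the agent names and their findings here are placeholders, not real audit output:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder specialists: each would really be a sub-agent with its
# own tools and system prompt, not a plain function.
def technical_seo_agent(site: str) -> str:
    return f"{site}: 3 broken canonical tags"

def content_agent(site: str) -> str:
    return f"{site}: 2 thin pages"

def performance_agent(site: str) -> str:
    return f"{site}: LCP 2.1s"

SPECIALISTS = [technical_seo_agent, content_agent, performance_agent]

def run_audit(site: str) -> list[str]:
    """Main agent delegates: launch every specialist concurrently, merge results."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(agent, site) for agent in SPECIALISTS]
        return [f.result() for f in futures]

print(run_audit("example.com"))
```

The wall-clock win comes from the slowest specialist setting the pace, rather than the sum of all of them.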
How It All Connects
A typical session — I ask Claude to deploy an update. It uses the n8n MCP to check the current workflow state, the TrueNAS MCP to update the Docker Compose config, Context7 to verify the latest API patterns, and Playwright to confirm the UI still works. No tab switching. No copy-pasting. Just a conversation that drives real infrastructure.
That's the shift — Claude stops being a chatbot and becomes an interface to your entire development environment.
The Projects That Proved It
TechPartPrices — An Amazon price tracker covering 2,400+ products. Built with Next.js, Drizzle ORM, and Cloudflare D1. 4,800+ tests at 80%+ coverage, an n8n Telegram bot for admin, and an automated blog post topic curator. The CLAUDE.md file alone documents 18 n8n gotchas I discovered through trial and error.
This Website — The cyberpunk terminal UI, canvas pixelation effects, SVG cityscape, blog CMS with AI writing tools, and the deployment pipeline to Cloudflare Workers — all built with Claude Code from design to deployment.
What Doesn't Work
It's not all smooth.
It forgets. Every conversation starts fresh. Without CLAUDE.md, you're re-explaining everything. Institutional memory is your workaround for the lack of persistent context.
It's confidently wrong about edge cases. Especially newer APIs and platform-specific quirks. Claude will tell you with full confidence that an n8n node works a certain way — and it's just wrong. You learn to verify and document.
Chained reasoning degrades. The more steps in a chain, the less reliable the output. That's why the three-layer architecture exists — you don't ask AI to do 10 things in sequence. You ask it to decide the next thing, run deterministic code, then come back.
The Takeaway
Working with AI isn't about writing less code. It's about maintaining context, building institutional memory, and knowing when to let the AI think versus when to let code execute.
The engineers who'll get the most from these tools aren't the ones who type "build me an app." They're the ones who write solid CLAUDE.md files, verify before they trust, and treat the AI like what it is — a skilled colleague with amnesia.
After 20+ years of building for the web, the last year with Claude Code has been the most productive stretch of my career. Not because the AI did the work for me — but because it let me operate at a scale I couldn't reach alone.