Cloudflare Connect London 2026: The Agentic Internet is Here

I spent today at Cloudflare Connect in London. The Brewery, 60+ speakers, sold out. The event landed mid-way through Cloudflare's Agents Week, so the product announcements from the past few days were fresh and the demos were live. The theme across every session was the same. The internet is being rebuilt around agents.
Agents. Autonomous software that acts on your behalf, runs its own compute, pays for services, talks to other agents. Cloudflare is betting their platform on this shift. After a full day of sessions, I think the bet is right.
The Keynote: Welcome to the Agentic Internet
The opening keynote framed everything that followed. The internet we built was designed for humans browsing pages. The next internet is designed for agents acting on behalf of humans. One-to-one instead of one-to-many. Each agent is a unique instance, serving one user, running one task.
The infrastructure implications are massive. If every knowledge worker has agents running tasks for them, you're looking at hundreds of millions of concurrent compute instances. Traditional containers don't scale to that. Cloudflare's answer is isolates and their new Sandbox environments. Lightweight, persistent, secure. They sleep when idle and wake on demand. Agents only pay for active CPU time, not for sitting around waiting on an LLM response.
Sandboxes hit general availability alongside Cloudflare Containers. Agents get their own persistent compute with file systems, terminal access, git, dev servers. Credentials are injected at the network layer so agents never see raw secrets. It's a full development environment that an AI can operate independently.
The other announcement worth noting was the x402 Foundation, which stewards a standard that lets agents pay for the services they consume. Right now, agents can browse the web and call APIs, but there's no native way for them to transact. x402 is building that payment layer. Not exciting on its own, but it matters once agents start operating at scale.
Fast Path to AI: Securely Adopting Models and Agents
This session covered the security side of the agent transition. The pitch was practical. Companies want to use AI models and deploy agents, but the security surface is genuinely new. Prompt injection, data exfiltration through tool calls, agents accessing systems they shouldn't.
Cloudflare's approach layers AI Gateway in front of everything. Unified logging, rate limiting, caching, content filtering across multiple model providers. The argument is that you can't retrofit security onto agents the way you did with web apps. It needs to be embedded from the start. Access controls, identity, and authorization baked into the execution model.
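In practice, putting AI Gateway in front of a provider is mostly a URL change: requests go to Cloudflare's gateway endpoint instead of the provider directly, which is what makes the unified logging, caching, and rate limiting possible. A minimal sketch, assuming the documented `gateway.ai.cloudflare.com` base URL; the account and gateway IDs below are placeholders:

```typescript
// Build the AI Gateway endpoint that fronts a given model provider.
// Calls sent here are logged, cached, and rate-limited by the gateway
// before being forwarded on to the provider itself.
function gatewayUrl(accountId: string, gatewayId: string, provider: string): string {
  return `https://gateway.ai.cloudflare.com/v1/${accountId}/${gatewayId}/${provider}`;
}

// Example: point an OpenAI-compatible client at the gateway instead of
// the provider's own endpoint (IDs are placeholders).
const baseUrl = gatewayUrl("my-account-id", "my-gateway", "openai");
console.log(baseUrl);
```

The point of the pattern is that nothing downstream changes: the client keeps speaking the provider's API, but every request now passes through a choke point where policy can be enforced.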
As someone running Workers AI on my own site for blog drafts and image generation, this resonated. I have a Cloudflare Access JWT check protecting my admin routes. That pattern of auth-at-the-edge is exactly what they're proposing for agents, just at a much larger scale.
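The shape of that check is simple. Cloudflare Access validates the user at the edge and attaches a signed JWT to the request in the `Cf-Access-Jwt-Assertion` header; the Worker then checks it before serving anything protected. A minimal sketch of the decode-and-check step, assuming Node-style `Buffer` for base64url decoding; a production check must also verify the RS256 signature against the Access team's published public keys:

```typescript
// Decode the payload segment of a JWT (header.payload.signature) and
// check that its audience claim matches this application's Access AUD tag.
// This sketches only the claim check; real verification also validates
// the signature against the team's certs before trusting any claim.
function hasValidAudience(token: string, expectedAud: string): boolean {
  const parts = token.split(".");
  if (parts.length !== 3) return false;
  try {
    const payload = JSON.parse(Buffer.from(parts[1], "base64url").toString("utf8"));
    const aud: string[] = Array.isArray(payload.aud) ? payload.aud : [payload.aud];
    return aud.includes(expectedAud);
  } catch {
    return false; // malformed token: reject
  }
}
```

In a Worker, the token comes from `request.headers.get("Cf-Access-Jwt-Assertion")`, and a missing or failing check returns a 403 before any admin logic runs.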
I was convinced enough to act on it during the session. I have a TrueNAS MCP server that gives Claude direct access to my NAS. 22 tools, container deployments, storage management. It runs locally over stdio. I've wanted to make it remote for a while, accessible from anywhere, but the security story wasn't there. An MCP server that can deploy containers to your NAS shouldn't be exposed to the internet without serious access controls.
MCP Server Portals solve exactly this. They aggregate MCP servers behind a single Zero Trust endpoint. Identity provider authentication, device posture checks, per-tool access policies, full audit logging. You register your MCP server, attach it to a portal, and users connect through one URL protected by Cloudflare Access. The portal even collapses tools into a single code execution mode so the AI client sees a cleaner interface.
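From the client side, the end state is just one remote URL. As a sketch: a stdio-only MCP client such as Claude Desktop can reach a remote, Access-protected endpoint through the `mcp-remote` bridge, which handles the OAuth handshake on the client's behalf. The portal URL below is a placeholder, and the exact config shape depends on the client:

```json
{
  "mcpServers": {
    "homelab-portal": {
      "command": "npx",
      "args": ["mcp-remote", "https://example.cloudflareaccess.com/mcp/portal"]
    }
  }
}
```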
The other announcement that caught my attention was Cloudflare Mesh. It's private networking for users, devices, and agents. Think Tailscale, but built into Cloudflare's network. You run a lightweight connector on your server, it gets a private Mesh IP, and any device or Worker on your Mesh can reach it. No port forwarding. No public exposure. Traffic routes through Cloudflare's edge across 330+ cities, so NAT traversal just works.
This matters for me because my TrueNAS sits on my internal network. I don't want to expose it to the internet. Cloudflare Tunnels could do it, but Mesh is bidirectional and many-to-many instead of one-directional. Run a Mesh node on the NAS, connect my devices and Workers, and the MCP server becomes reachable over a private IP with full Zero Trust policy enforcement. The free tier covers 50 nodes and 50 users.
Mesh plus MCP Server Portals gets me from "local MCP server on my home network" to "remote MCP server accessible from anywhere, secured by Zero Trust, without exposing a single port."
I had my laptop open through most of the sessions, exploring the features as they were being discussed and working out how they'd fit into my own infrastructure. The AI Controls integration for my TrueNAS server was pushed before the session ended.
Ditching the Mac Mini: Moltworker and OpenClaw
This was the most entertaining talk of the day. OpenClaw (formerly Moltbot, formerly Clawdbot) is a self-hosted personal AI agent. It connects Claude to your files, APIs, and messaging platforms. The original deployment model was a Mac mini sitting under someone's desk. Always on, always running, always your problem when it crashed at 3am.
Moltworker is the Cloudflare Workers port. Sid Chatterjee walked through a retrospective of packaging OpenClaw to run in a Sandbox container on Cloudflare's network. No hardware. No maintenance. It sleeps when idle, wakes on request, and runs across 300+ data centers. The talk covered the full migration story: every pain point of self-hosting a persistent AI agent (power outages, OS updates that break things, the Mac mini's fan noise) is gone.
The Moltworker repo is open source. Cloudflare published it with the Sandbox SDK built in. R2 storage for persistence across container restarts. It's a reference implementation for how to deploy a personal agent on their platform.
The talk was honest about the engineering challenges. Adapting an agent designed for a persistent local machine to a sleep/wake serverless model wasn't straightforward. But the result is a self-hosted AI agent that you deploy with a single command and stop thinking about.
I had a sandbox running by the end of the talk. Forked the repo, configured it, deployed. I'd been tinkering with OpenClaw on an old laptop; the Sandbox deployment replaced that entirely. No hardware to keep running, no laptop lid that needs to stay open.
Kilo Code
John Fawcett from Kilo Code presented in one of Ade Oshineye's lightning sessions. It's an open-source AI coding agent. VS Code, JetBrains, CLI. Over 2 million users, 500+ model options, and an orchestrator mode that coordinates planner, coder, and debugger agents on complex tasks. It forked from Cline and Roo Code, raised $8 million, and recently launched KiloClaw, a hosted version that runs coding tasks in the cloud without tying up your local machine. The talk focused on orchestrating tens of coding agents per developer using Cloudflare Containers and Sandboxes.
The orchestrator concept is familiar. I built something similar with autonomous-coder, a multi-agent system that coordinates 7 specialised agents (frontend, backend, design, QA, DevOps, docs, research) with dependency graphs, heartbeats, and checkpoint recovery. 2,757 lines of coordination code. The hard lesson from building it: the coordination layer is more work than the agents themselves. Kilo Code is productising that same pattern. Break a complex task into subtasks, assign each to a specialised agent, coordinate the results. The difference is they've packaged it into something 2 million people can use without writing the orchestration from scratch.
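The core of that coordination layer is the dependency graph: a subtask can only start once everything it depends on has finished. A toy sketch of that scheduling step in TypeScript using Kahn's algorithm (the agent names are illustrative; the real systems layer heartbeats, retries, and checkpoint recovery on top of this):

```typescript
// Resolve a dependency graph of subtasks into a valid execution order
// (Kahn's algorithm). Each key is a subtask; its value lists the
// subtasks that must complete before it can start.
function executionOrder(deps: Record<string, string[]>): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const task of Object.keys(deps)) {
    indegree.set(task, deps[task].length);
    for (const dep of deps[task]) {
      dependents.set(dep, [...(dependents.get(dep) ?? []), task]);
    }
  }
  // Start with tasks that have no unmet dependencies.
  const ready = [...indegree.entries()].filter(([, d]) => d === 0).map(([t]) => t);
  const order: string[] = [];
  while (ready.length > 0) {
    const task = ready.shift()!;
    order.push(task);
    for (const next of dependents.get(task) ?? []) {
      indegree.set(next, indegree.get(next)! - 1);
      if (indegree.get(next) === 0) ready.push(next);
    }
  }
  if (order.length !== Object.keys(deps).length) throw new Error("dependency cycle");
  return order;
}

// Illustrative pipeline: design before frontend/backend, QA last.
const order = executionOrder({
  design: [],
  backend: ["design"],
  frontend: ["design"],
  qa: ["frontend", "backend"],
});
console.log(order); // design first, qa last
```

Tasks with no ordering constraint between them (frontend and backend here) are where the parallelism lives: an orchestrator can hand each one to a different agent at the same time.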
Media Industry Meetup: AI Crawl Control, Security and Monetization
This session was aimed at the publishing industry. Cloudflare now blocks all AI crawlers by default on new websites. That's the baseline. From there, publishers get three options for each crawler: allow free access, charge per request, or block entirely.
Pay-per-crawl is the interesting middle ground. It uses the HTTP 402 status code (the same standard the x402 Foundation is building on) to let publishers set a flat per-request price. The crawler authenticates, pays, gets the content. Cloudflare handles billing, aggregation, and distribution. Publishers like DMGT, Associated Press, and Condé Nast are already on board.
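Mechanically, the flow is a plain HTTP negotiation: the crawler's first request gets a 402 carrying the price, and a retry with payment proof attached gets the content. A toy TypeScript sketch of that gate; the header names here are illustrative assumptions, not the actual pay-per-crawl or x402 wire format:

```typescript
// Minimal sketch of a pay-per-crawl gate. Header names are illustrative.
interface GateResult {
  status: number;
  headers: Record<string, string>;
  body: string;
}

const PRICE_USD = "0.01"; // flat per-request price set by the publisher

function gateCrawler(reqHeaders: Record<string, string>): GateResult {
  const payment = reqHeaders["x-payment"]; // hypothetical payment-proof header
  if (!payment) {
    // No payment attached: answer 402 and advertise the price.
    return { status: 402, headers: { "x-price-usd": PRICE_USD }, body: "Payment Required" };
  }
  // Payment proof present: a real system would verify it with the
  // billing provider before releasing the content.
  return { status: 200, headers: {}, body: "<article>...</article>" };
}
```

In the real system, Cloudflare sits in the middle of this exchange, verifying the payment and handling billing, aggregation, and payout, so the publisher's only job is setting the price.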
The room was mostly media and publishing people, not developers. The questions were practical. How do you price access? How do you differentiate between a crawler training a model and one fetching a snippet for a citation? What happens when agents replace the traffic that ad revenue depends on?
Nobody had clean answers. But the questions are the right ones. The agentic internet needs economic infrastructure as much as it needs compute and security. Pricing models for agent access to content don't exist yet. They're being figured out now, and the publishers in that room were trying to work out if the numbers would add up.
What I Took Away
I run my site on Cloudflare Workers. Blog data in D1, images in R2, AI features powered by Workers AI. I've been on this platform for over a year. What struck me at Connect wasn't any single announcement. It was the coherence of the vision.
Every product slots into the agent story. Workers for compute, Durable Objects for state, Sandboxes for persistent environments, AI Gateway for security, R2 and D1 for storage. It's not a pivot. It's a logical extension of what they already built. The serverless edge platform designed for web applications turns out to be what AI agents need too.
Right now, agents still browse websites and fill in forms because that's the interface that exists. The agentic internet means building the native ones. MCP servers instead of screen scraping. Agent-to-agent authentication instead of OAuth flows designed for humans. Programmatic payments instead of checkout pages.
I left The Brewery thinking about what this means for the tools I use daily. My MCP servers already give Claude access to Cloudflare, GitHub, and my NAS. The Sandbox model could let those agents run persistently instead of dying when I close the terminal. Mesh and MCP Server Portals give me a path to making my TrueNAS MCP server remotely accessible without exposing my home network.
The infrastructure is shipping faster than I can explore it. Every session introduced something I wanted to try, and I ran out of day before I ran out of ideas. The agentic internet isn't a concept deck anymore. It's live, and the challenge now is finding the time to build on it.