MCP Server
Most AI agents (like Claude or Cursor) need an MCP server to read data. But deploying a web-based MCP server is surprisingly annoying because it requires Server-Sent Events (SSE).
In serverless environments, SSE is notoriously tricky: when your agent POSTs a message to /mcp/messages, there is no guarantee that the request lands on the same edge instance that holds your open SSE connection.
TriFrost solves this out of the box using built-in primitives, perfectly demonstrating the power of its architecture.
The Setup
We run our own MCP server directly on the TriFrost website. You can connect your local AI agent to it right now and let it browse our documentation directly.
Endpoint: https://trifrost.dev/mcp/sse
If you're using Claude Desktop, your config looks like this:
{
"mcpServers": {
"trifrost-docs": {
"command": "npx",
"args": ["-y", "@modelcontextprotocol/inspector", "https://trifrost.dev/mcp/sse"]
}
}
}

Note: Claude Desktop currently only supports `stdio` connections locally, so you need the inspector to proxy it to SSE.
How we built it
We didn't write any custom Durable Objects or deploy a distinct stateful microservice. The entire MCP integration lives alongside our standard HTTP routes and took under 100 lines of code.
Here's exactly how the architecture flows:
1. The Stream (`GET /mcp/sse`):
We use `ctx.file({stream})` to hold the connection open. Under the hood, we mount the official `@modelcontextprotocol/sdk` and start an async polling loop that continuously checks `ctx.cache` for new commands.
2. The Inbox (`POST /mcp/messages`):
When the AI agent sends a command (like `get_doc_by_slug`), it hits our stateless POST endpoint. This endpoint blindly drops the JSON-RPC payload into the distributed `DurableObjectCache` using `ctx.cache.set`.
That's it. Within one polling interval, the SSE isolate picks up the new payload from the cache, processes it, and streams the response back to your client.
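The two routes above boil down to a cache-as-mailbox pattern, which can be sketched in a few lines of TypeScript. Everything here is illustrative, not TriFrost's actual code: `MemoryCache` is a hypothetical stand-in for the distributed `ctx.cache`, and the `handle`/`write` callbacks stand in for the MCP SDK and the open SSE stream.

```typescript
// A JSON-RPC message as the agent would POST it to /mcp/messages.
type JsonRpcMessage = {jsonrpc: '2.0'; id: number; method: string; params?: unknown};

// Hypothetical stand-in for the distributed cache (e.g. a DurableObjectCache).
// The real cache is shared across edge instances; a Map models the idea.
class MemoryCache {
  private store = new Map<string, JsonRpcMessage[]>();

  // POST side: blindly append the incoming payload to the session's inbox.
  async push(sessionId: string, msg: JsonRpcMessage): Promise<void> {
    const inbox = this.store.get(sessionId) ?? [];
    inbox.push(msg);
    this.store.set(sessionId, inbox);
  }

  // SSE side: take (and clear) everything pending for this session.
  async drain(sessionId: string): Promise<JsonRpcMessage[]> {
    const inbox = this.store.get(sessionId) ?? [];
    this.store.set(sessionId, []);
    return inbox;
  }
}

// The loop the SSE handler runs while the stream is open: every tick,
// drain the inbox, hand each message to the MCP server, and write the
// result onto the stream. `ticks` bounds the loop so the sketch terminates.
async function pollLoop(
  cache: MemoryCache,
  sessionId: string,
  handle: (msg: JsonRpcMessage) => Promise<string>,
  write: (chunk: string) => void,
  ticks: number,
  intervalMs = 50,
): Promise<void> {
  for (let i = 0; i < ticks; i++) {
    for (const msg of await cache.drain(sessionId)) {
      write(await handle(msg));
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The key property is that `push` and `pollLoop` never need to run on the same instance: the cache is the only shared state, which is what lets the POST endpoint stay completely stateless.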
Stateless edge-compute, clustered cache, infinite streams. Zero configuration.