How I'm building the Mesh
Concrete decisions from principles to implementation. Every choice here traces back to at least one principle. Where a simpler option was available, I took it — the Mesh should be running quickly, not engineered to perfection before a single page is stored.
Build approach
Two-stage strategy: get to working quickly with stable Cloudflare primitives, then harden at specific trigger points.
Everything runs on Cloudflare. The entire stack — storage, compute, auth, MCP, web delivery — is Cloudflare-native. No external services except ClickUp (auth and notifications) and GitHub (safety-net backup). This is not a constraint imposed by the principles; it's the natural outcome of building on infrastructure that's already in use and already understood.
The prototype stage uses three proven, GA Cloudflare products: R2 for HTML content, D1 for metadata and search, Workers KV for sessions and tokens. These are stable, cheap, and well-documented. Cloudflare Artifacts — which the research identified as a strong long-term candidate — is in public beta as of May 2026 and introduces real risk as a core dependency for a prototype. I'll build around its concepts (versioning, fork/merge) but implement them differently for now, with a clear migration path documented below.
If a design decision requires a new infrastructure category, I resist it until the need is proven. The Mesh starts with R2 + D1 + KV + Workers — four products, one provider, no external APIs in the critical path except ClickUp OAuth. Each addition to this set needs to earn its place.
System architecture
Two layers: one Worker handles all logic and both surfaces; storage is R2 + D1 + KV.
One Worker, one hostname. mesh.smplrspace.io/mcp handles MCP tool calls authenticated via Bearer token; all other routes handle web requests authenticated via HTTP-only cookie. The Worker routes by URL path — no service binding, no inter-Worker round-trip, no second custom domain to manage. All business logic lives in the same codebase: access control, R2 reads and writes, D1 metadata, KV session checks, and async GitHub pushes via Cloudflare Queues.
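The routing decision is a pure function of the path; a minimal sketch, where `surfaceFor` and the `Surface` type are illustrative names rather than the real implementation:

```typescript
// Which surface handles a request, per the single-Worker routing rule:
// /mcp (and everything under it) is the MCP surface with Bearer-token auth;
// every other path is the web surface with HTTP-only cookie auth.
type Surface = "mcp" | "web";

function surfaceFor(pathname: string): Surface {
  return pathname === "/mcp" || pathname.startsWith("/mcp/") ? "mcp" : "web";
}
```

The Worker's fetch handler would call this once per request and dispatch to the matching auth check before any business logic runs.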
Content storage — R2
One HTML file per Mesh page. Path encodes scope. HTMLRewriter handles all operations on the content.
R2 is the canonical content layer. It stores the authoritative HTML for every Mesh page (see format research for why HTML over Markdown). Nothing else is the source of truth; every read and write goes through R2 via the Worker. D1 holds metadata about pages (titles, scope, timestamps) but never content — content lives in R2 only.
Path structure
Each Mesh page has two R2 objects written atomically on every write: an .html file (canonical, editable) and a .md file (derived, never edited directly — always regenerated from the HTML). The Markdown is not a fallback or an approximation; it is the definitive agent-readable form of that page, guaranteed to match what the FTS index contains because both are produced from the same HTMLRewriter pass at write time.
After the scope root, path segments are fully user-controlled. Orgs, teams, and users can create arbitrary directory structures to organise their pages. R2 keys are just strings — there is no meaningful distinction between a "directory" and a "page" in R2. The tree hierarchy is expressed in D1 via parent_id and reflected in the path itself.
```
org/{org-id}/handbook.html                            ← flat page
org/{org-id}/brand/guidelines.html                    ← nested under brand/
org/{org-id}/brand/guidelines.md                      ← derived counterpart

team-level pages — path after team root is user-controlled
org/{org-id}/teams/{team-id}/decisions.html
org/{org-id}/teams/{team-id}/projects/the-mesh/spec.html
org/{org-id}/teams/{team-id}/projects/the-mesh/spec.md

individual pages — same principle
org/{org-id}/users/{user-id}/notes.html
org/{org-id}/users/{user-id}/notes/2026/may.html

drafts — under org, HTML only (MD not generated until merge)
org/{org-id}/drafts/{draft-id}.html
```
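Key construction follows mechanically from the scope; a sketch under the layout above, where the `Scope` type and `r2Key` name are assumptions of this example:

```typescript
// Builds the R2 object key for a page. The part after the scope root
// (pagePath) is fully user-controlled, e.g. "projects/the-mesh/spec".
type Scope =
  | { kind: "org"; orgId: string }
  | { kind: "team"; orgId: string; teamId: string }
  | { kind: "user"; orgId: string; userId: string };

function r2Key(scope: Scope, pagePath: string, ext: "html" | "md"): string {
  const root =
    scope.kind === "org"
      ? `org/${scope.orgId}`
      : scope.kind === "team"
        ? `org/${scope.orgId}/teams/${scope.teamId}`
        : `org/${scope.orgId}/users/${scope.userId}`;
  return `${root}/${pagePath}.${ext}`;
}
```

Both objects of a page share the same key up to the extension, which is what lets the write path keep them in lockstep.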
Section addressing
A section is any container element carrying a data-mesh-id attribute — the stable identifier used for surgical edits, section-level MCP reads, search indexing, and diff computation. Heading level does not determine what is a section: an H4 with its own data-mesh-id becomes a subsection of its nearest enclosing data-mesh-id ancestor; an H4 without one is sub-content of that ancestor. Content before the first data-mesh-id container (page title and any intro paragraphs) is treated as a root section with a null ID.
Every write produces three artifacts from this structure. The data-mesh-id attribute is the link between all three — present only in the HTML, used as metadata in the FTS5 index, and absent from the Markdown entirely.
HTML — data-mesh-id on containers:

```html
<h1>Brand Guidelines</h1>          <!-- root, no ID -->
<p>Our visual identity...</p>

<div data-mesh-id="brand-colors">
  <h2>Brand Colors</h2>
  <p>Primary: #6B4FBB</p>
</div>

<div data-mesh-id="typography">
  <h2>Typography</h2>
  <h4>Body font</h4>               <!-- sub-content, not a section -->
  <p>IBM Plex Sans...</p>
</div>
```

Markdown — data-mesh-id stripped; clean prose:

```markdown
# Brand Guidelines

Our visual identity...

## Brand Colors

Primary: #6B4FBB

## Typography

#### Body font

IBM Plex Sans...
```

FTS5 rows — one row per data-mesh-id:

```
-- root section
section_id: NULL            heading: "Brand Guidelines"   body: "Our visual identity..."
-- section 2
section_id: "brand-colors"  heading: "Brand Colors"       body: "Primary: #6B4FBB"
-- section 3 (H4 folded in)
section_id: "typography"    heading: "Typography"         body: "Body font IBM Plex..."
```
The HTMLRewriter conversion strips all attributes — data-mesh-id, class names, style attributes — producing clean prose. Section IDs exist only in the HTML (as attributes) and in D1 (as the section_id column in both sections and search_content). An agent or human reading the .md file sees no trace of them.
Nested sections
When a data-mesh-id container is nested inside another, the HTMLRewriter segmentation pass uses a stack to attribute text to the correct section. On entering a data-mesh-id element, it is pushed onto the stack and a new buffer starts. Text is accumulated into the innermost buffer only. On exit, the buffer flushes to FTS5 and the element pops off the stack — the outer buffer resumes collecting only its own direct content. This means FTS5 rows never overlap: searching for a term in a subsection returns that subsection's row, not the parent's.
HTML — nested containers:

```html
<div data-mesh-id="typography">
  <h2>Typography</h2>
  <p>Our type system...</p>        <!-- direct content -->

  <div data-mesh-id="body-font">
    <h3>Body font</h3>
    <p>IBM Plex Sans</p>
  </div>

  <div data-mesh-id="heading-font">
    <h3>Heading font</h3>
    <p>IBM Plex Mono</p>
  </div>
</div>
```

sections table — parent_section_id tracks nesting:

```
section_id: "typography"    parent: null          depth: 0   heading: "Typography"
section_id: "body-font"     parent: "typography"  depth: 1   heading: "Body font"
section_id: "heading-font"  parent: "typography"  depth: 1   heading: "Heading font"
```

FTS5 rows — stack-segmented; no overlap:

```
section_id: "typography"    body: "Our type system..."   -- direct content only; subsections excluded
section_id: "body-font"     body: "IBM Plex Sans"
section_id: "heading-font"  body: "IBM Plex Mono"
```
Calling get_section(page_id, "typography") returns the full HTML subtree — including all nested subsections — converted to Markdown. The agent sees all the content under that section. To read only a specific subsection, call get_section(page_id, "body-font") directly. The get_page section index lists all sections with indentation reflecting depth, so agents always know which IDs are available and how they nest.
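That indented index can be rendered straight from the D1 `sections` rows; a minimal sketch, where the row shape and the bullet format are assumptions:

```typescript
// Renders the hierarchical section index that get_page prepends,
// from rows shaped like D1's `sections` table.
interface SectionRow {
  section_id: string;
  heading: string;
  depth: number; // 0 = top-level, matching the denormalised column in D1
}

function sectionIndex(rows: SectionRow[]): string {
  return rows
    .map((r) => `${"  ".repeat(r.depth)}- ${r.section_id}: ${r.heading}`)
    .join("\n");
}
```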
Why not Artifacts (yet)

Artifacts (reviewed in the storage research) maps beautifully onto the Mesh's requirements: git-native versioning, fork/diff/merge for the draft workflow, incremental fetches for token efficiency. The case for it is strong on paper. But it's in public beta as of May 2026, the multi-user auth model is undocumented for our use case, and the API stability guarantee is unclear. The canonical layer is the highest-risk dependency in the stack — I won't build the core data model on a product that might change its APIs or pricing before we've validated the Mesh itself. R2 is boring, stable, and fast. When Artifacts reaches GA and I've evaluated it hands-on, the migration is a storage swap — the Worker layer and data model don't change. That migration is documented below.
Write path

1. HTML written to {scope}/{slug}.html in R2.
2. HTMLRewriter converts that HTML to Markdown in the same Worker invocation.
3. Markdown written to {scope}/{slug}.md in R2.
4. D1 pages row updated; FTS5 index updated from the Markdown (not the HTML).
5. GitHub push queued async.

If step 3 fails, the whole write errors and retries — the two R2 objects are always in sync.

Read path — agents

Agents read {scope}/{slug}.md directly from R2. No HTMLRewriter pass at read time. The Markdown is pre-computed, stable, and guaranteed to match the FTS index because it came from the same conversion pass that populated the index. Fast and predictable.

Read path — web

The web UI fetches {scope}/{slug}.html from R2 and serves it inside the Mesh web chrome. HTMLRewriter is not involved here either — the HTML is served as-is. The only HTMLRewriter work happens on the write path (to generate the MD) and on the surgical-edit path (to splice a section).

Surgical edit path

1. Fetch the current .html from R2.
2. HTMLRewriter locates the data-mesh-id target and splices in the new content.
3. Write the modified HTML back to R2.
4. HTMLRewriter converts the updated HTML to Markdown.
5. Write the updated MD to R2.
6. FTS index updated from the new MD. Both R2 objects stay in sync.
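The ordering of the write path, and the guarantee that a failed Markdown write fails the whole operation, can be sketched with the bindings stubbed behind interfaces (interface names are assumptions; the real Worker uses its R2, D1, and Queues bindings directly):

```typescript
// Write path: canonical HTML first, derived Markdown second, index third,
// GitHub push queued last. Any thrown error aborts the remaining steps,
// so the .html/.md pair only ever advances together.
interface Store { put(key: string, body: string): Promise<void>; }
interface SearchIndex { update(pageId: string, md: string): Promise<void>; }
interface PushQueue { enqueue(msg: unknown): Promise<void>; }

async function writePage(
  pageId: string,
  htmlKey: string,
  mdKey: string,
  html: string,
  toMarkdown: (html: string) => string, // the HTMLRewriter pass, stubbed here
  r2: Store,
  fts: SearchIndex,
  queue: PushQueue,
): Promise<void> {
  await r2.put(htmlKey, html);     // 1. canonical HTML
  const md = toMarkdown(html);     // 2. derive MD in the same invocation
  await r2.put(mdKey, md);         // 3. derived MD; a throw here fails the write
  await fts.update(pageId, md);    // 4. FTS5 indexed from the MD, not the HTML
  await queue.enqueue({ pageId }); // 5. async GitHub push
}
```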
GitHub receives both the .html and .md files. Surgical edits produce minimal commits — just the changed lines. The GitHub history is therefore readable as a meaningful change log, not a stream of full-file rewrites.

Metadata — D1
One D1 database. Twelve tables, including the FTS5 virtual table. SQLite on the edge, bound to the Worker directly.
```sql
-- orgs
CREATE TABLE orgs (
  id TEXT PRIMARY KEY,                   -- Mesh org UUID
  slug TEXT UNIQUE NOT NULL,             -- URL-safe, e.g. 'smplrspace'
  name TEXT NOT NULL,
  clickup_workspace_id TEXT UNIQUE,      -- set on ClickUp OAuth; maps workspace → org
  created_at INTEGER NOT NULL
);

-- users
CREATE TABLE users (
  id TEXT PRIMARY KEY,                   -- Mesh UUID, global across orgs
  clickup_id TEXT UNIQUE NOT NULL,
  email TEXT NOT NULL,
  display_name TEXT,
  created_at INTEGER NOT NULL
);

-- org_memberships
CREATE TABLE org_memberships (
  user_id TEXT REFERENCES users(id),
  org_id TEXT REFERENCES orgs(id),
  is_admin INTEGER DEFAULT 0,
  created_at INTEGER NOT NULL,
  PRIMARY KEY (user_id, org_id)
);

-- team_memberships
CREATE TABLE team_memberships (
  user_id TEXT REFERENCES users(id),
  org_id TEXT REFERENCES orgs(id),
  team_id TEXT NOT NULL,                 -- ClickUp space or list ID
  can_merge_protected INTEGER DEFAULT 0,
  PRIMARY KEY (user_id, org_id, team_id)
);

-- pages
CREATE TABLE pages (
  id TEXT PRIMARY KEY,                   -- path-within-scope, e.g. 'projects/the-mesh/spec'
  org_id TEXT REFERENCES orgs(id),
  r2_key TEXT UNIQUE NOT NULL,           -- full R2 path including org and scope prefix
  scope TEXT NOT NULL,                   -- 'org' | 'team' | 'user'
  scope_id TEXT NOT NULL,                -- org-id, team-id, or user-id
  title TEXT NOT NULL,
  protected INTEGER DEFAULT 0,           -- always 1 for scope='org'
  parent_id TEXT REFERENCES pages(id),   -- explicit parent for query efficiency
  owner_id TEXT REFERENCES users(id),
  created_at INTEGER NOT NULL,
  updated_at INTEGER NOT NULL,
  updated_by TEXT REFERENCES users(id)
);

-- sections
CREATE TABLE sections (
  page_id TEXT REFERENCES pages(id),
  section_id TEXT NOT NULL,              -- matches data-mesh-id in HTML
  parent_section_id TEXT,                -- null for top-level sections
  depth INTEGER NOT NULL DEFAULT 0,      -- 0 = top-level; denormalised for query convenience
  heading TEXT,
  updated_at INTEGER NOT NULL,
  PRIMARY KEY (page_id, section_id)
);

-- drafts
CREATE TABLE drafts (
  id TEXT PRIMARY KEY,
  page_id TEXT REFERENCES pages(id),
  r2_draft_key TEXT NOT NULL,            -- drafts/{id}.html in R2
  base_r2_key TEXT NOT NULL,             -- R2 key of page at time of fork
  base_snapshot_hash TEXT NOT NULL,      -- ETag from R2 at fork time
  status TEXT NOT NULL DEFAULT 'open',   -- open | submitted | merged | rejected
  proposer_id TEXT REFERENCES users(id),
  reviewer_id TEXT REFERENCES users(id),
  description TEXT,
  feedback TEXT,
  created_at INTEGER NOT NULL,
  submitted_at INTEGER,
  resolved_at INTEGER
);

-- share_links
CREATE TABLE share_links (
  id TEXT PRIMARY KEY,                   -- random 24-char token
  org_id TEXT REFERENCES orgs(id),
  page_ids TEXT NOT NULL,                -- JSON array of page IDs
  purpose_label TEXT NOT NULL,           -- human-readable: "Onboarding — Acme, May 2026"
  created_by TEXT REFERENCES users(id),
  created_at INTEGER NOT NULL,
  expires_at INTEGER,                    -- nullable = never expires
  revoked_at INTEGER,
  last_accessed_at INTEGER
);

-- subscriptions
CREATE TABLE subscriptions (
  user_id TEXT REFERENCES users(id),
  target_type TEXT NOT NULL,             -- 'page' | 'section'
  target_id TEXT NOT NULL,               -- page_id or page_id:section_id
  channel TEXT NOT NULL DEFAULT 'clickup_dm', -- 'clickup_dm' | 'email'
  PRIMARY KEY (user_id, target_type, target_id)
);

-- channel_routes
CREATE TABLE channel_routes (
  id TEXT PRIMARY KEY,                   -- CUID
  org_id TEXT REFERENCES orgs(id),
  target_type TEXT NOT NULL,             -- 'page' | 'section'
  target_id TEXT NOT NULL,               -- page_id or page_id:section_id
  clickup_channel_id TEXT NOT NULL,
  created_by TEXT REFERENCES users(id)
);

-- backlinks
CREATE TABLE backlinks (
  id TEXT PRIMARY KEY,
  page_id TEXT REFERENCES pages(id),
  tool TEXT NOT NULL,                    -- 'clickup' | 'figma' | 'missive' | 'github'
  external_url TEXT NOT NULL,
  external_label TEXT,
  created_by TEXT REFERENCES users(id),
  created_at INTEGER NOT NULL
);

-- search_content (FTS5 virtual table)
CREATE VIRTUAL TABLE search_content
  USING fts5(page_id UNINDEXED, section_id UNINDEXED, scope UNINDEXED, scope_id UNINDEXED, heading, body);
-- Populated on every write. Queried with JOIN on pages for access gating.
```
Sessions — Workers KV
Read-heavy, write-rarely — KV is exactly right.
Web sessions: session:{token} → {user_id, exp}. Set on successful ClickUp OAuth callback. HTTP-only cookie carries the token. TTL matches the session expiry. Revocation: delete the key.

MCP tokens: mcp_token:{token} → {user_id, org_id, created_at}. Issued at the end of the ClickUp OAuth flow initiated from Claude's Connect button. Claude stores and manages the credential; the Mesh validates it on every tool call. Revocation: delete the key.

Share-link tokens: not in KV — they live in D1's share_links for the audit trail. Validation is a D1 query checking revoked_at IS NULL AND (expires_at IS NULL OR expires_at > ?). The token itself is a random 24-char string, not a signed JWT.

Rate-limit counters: ratelimit:oauth:{ip} — OAuth attempt counter, short TTL. ratelimit:write:{user_id} — write-rate counter per user. Prevents OAuth abuse and runaway agent write loops. Cloudflare Rate Limiting is the outer layer; KV handles per-entity granularity.

Auth — ClickUp OAuth
ClickUp OAuth for the prototype — fast to ship, already in production. Not a permanent commitment.
A user authenticates via ClickUp OAuth. The OAuth token carries a ClickUp workspace ID, which the Worker uses to resolve the user's Mesh org. That lookup creates a Mesh session in KV scoped to the org. All subsequent operations — web browsing, MCP tool calls, page edits, admin actions — are scoped to that org session. No separate Mesh account. No password. ClickUp identity is Mesh identity for now.
On session creation, the Worker loads org_memberships.is_admin and team_memberships rows for the resolved org. The Worker enforces scope access on every read and write — it never trusts the caller's stated scope. Access is checked per-operation, not at session creation.

On first OAuth from a new workspace, an orgs row is created (slug derived from workspace name) and the user gets an org_memberships row with is_admin = 1. Subsequent users from the same workspace join automatically via OAuth; the org admin assigns team memberships and elevated rights via the admin UI. Until assigned, new members have read-only access to org-level content.

Cloudflare Access could gate the web UI, but it cannot authenticate MCP sessions — there's no mechanism to propagate an Access identity into a Claude Desktop MCP configuration. Since the MCP and web surfaces must share one identity layer (Principle 14), Cloudflare Access cannot be the sole auth mechanism. ClickUp OAuth is the right choice: it covers both surfaces, it already works for the time-entries MCP, and it reuses a credential every Mesh org already has by definition — the Mesh is ClickUp-workspace-gated at the org level.
Tying org identity to a ClickUp workspace means every Mesh org must be a ClickUp customer. That's true for the initial use case but is the wrong constraint for a general-purpose product. The right long-term auth layer is provider-agnostic — email magic link, Google OAuth, or a purpose-built solution like WorkOS. The reason to start with ClickUp: it's already working in production for the time-entries MCP, the OAuth flow is understood, and it eliminates an entire auth build from phase 1. The migration path is clean: the session and token layer (KV) and the rights model (D1) don't change — only the OAuth provider and the org onboarding flow swap out.
HTML parsing — Cloudflare HTMLRewriter
No external parsing libraries. HTMLRewriter is the right tool for every HTML operation the Mesh needs.
Cloudflare HTMLRewriter is a streaming, SAX-style HTML parser built into the Workers runtime. It's fast, allocation-efficient, and handles the three operations the Mesh requires on every page: section extraction, Markdown conversion, and surgical editing. I won't import an external HTML parser — HTMLRewriter does the job and eliminates the cold-start cost and bundle size of a DOM library.
Operation 1 — Section extraction
Agent requests section brand-colors from page org/smplrspace/brand.html. The Worker fetches the full page from R2 and pipes it through an HTMLRewriter that attaches an element handler to [data-mesh-id="brand-colors"]. The handler captures the element and its subtree. Only the matched content is returned. The rest of the document is discarded mid-stream — no need to buffer the full file.
Operation 2 — Markdown generation (write-time, not read-time)
The HTML-to-Markdown conversion runs once per write — not on every agent read. Element handlers map tags to Markdown equivalents: h1–h6 → heading hashes, p → paragraph text, ul/li → dashes, ol/li → numbers, table/tr/td → pipe-delimited tables, code/pre → backtick blocks. Style attributes, class names, data-mesh-id attributes, and other non-semantic markup are stripped — the output is clean prose with no trace of the source HTML structure.
The same pass also segments content by data-mesh-id boundary using a stack. On entering a data-mesh-id element the handler pushes it onto the stack and opens a new section buffer. Text content is always accumulated into the innermost buffer only — never the outer one. On exit, the buffer is flushed as an FTS5 row (section ID, parent ID from the next item down the stack, depth, heading, body) and the element pops off the stack. The parent buffer resumes collecting its own direct content. Content before the first data-mesh-id container goes into a null-ID root buffer. One traversal produces three outputs: the full-page .md file written to R2, the per-section FTS5 rows written to D1, and the updated sections table rows with full nesting hierarchy.
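Outside the Workers runtime, the same stack discipline can be demonstrated by driving it with a pre-tokenised event stream instead of real HTMLRewriter callbacks; the event and row shapes here are assumptions of the sketch:

```typescript
// Stack-based segmentation: one frame per open data-mesh-id container;
// the root frame (id null) collects content outside any container.
type Ev =
  | { type: "enter"; sectionId: string }
  | { type: "text"; text: string }
  | { type: "exit" };

interface FtsRow {
  sectionId: string | null;
  parentId: string | null;
  depth: number;
  body: string;
}

function segment(events: Ev[]): FtsRow[] {
  const rows: FtsRow[] = [];
  const stack: { id: string | null; buf: string[] }[] = [{ id: null, buf: [] }];
  for (const ev of events) {
    if (ev.type === "enter") {
      stack.push({ id: ev.sectionId, buf: [] });
    } else if (ev.type === "text") {
      // text always lands in the innermost open buffer, so parent rows
      // never contain subsection text and FTS5 rows never overlap
      stack[stack.length - 1].buf.push(ev.text);
    } else {
      // on exit: flush a row; parent is whatever is now on top of the
      // stack, depth is the remaining nesting level
      const top = stack.pop()!;
      rows.push({
        sectionId: top.id,
        parentId: stack[stack.length - 1].id,
        depth: stack.length - 1,
        body: top.buf.join(" "),
      });
    }
  }
  // finally flush the root buffer (content before the first container)
  rows.push({ sectionId: null, parentId: null, depth: 0, body: stack[0].buf.join(" ") });
  return rows;
}
```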
The extraction contract: Mesh HTML must not bury semantically meaningful content inside style-only elements. A table of values must produce readable Markdown. A diagram must have a text caption that survives extraction. This is enforced by convention in how Claude authors Mesh pages, not by runtime validation. An agent reading the .md file and a search query against FTS5 will always see the same text because they're derived from the same pass.
Operation 3 — Surgical editing
Agent calls edit_section with a target data-mesh-id and new HTML content. The Worker fetches the full page from R2, pipes it through an HTMLRewriter that replaces the innerHTML of the matching element with the new content, and buffers the transformed document. The result is written back to R2. The diff between the old and new R2 objects is committed to GitHub as a minimal change — one section edited, not a full page rewrite.
Mesh pages will be non-trivial HTML documents — potentially 10–50KB per page. Buffering the full HTML into memory to perform DOM operations would be wasteful and slow. HTMLRewriter's streaming approach means section extraction never buffers more than the matched subtree, and Markdown conversion never holds more than a paragraph at a time in working memory. For a Workers environment with 128MB memory limits, this is the right approach.
Search — D1 FTS5
Full-text search built into D1's SQLite. No additional service. Access-gated at the index level.
D1's SQLite engine supports FTS5 — a full-text search virtual table with stemming, prefix search, and relevance ranking. The search_content FTS5 table is populated on every Mesh write from the stored Markdown — not from the HTML, and not from a separate extraction pass. By the time the FTS update runs, the Markdown has already been written to R2 as part of the write path, so the FTS index and the Markdown agents read are guaranteed to be identical. There is no risk of the search index reflecting a different representation of the content than what the MCP surface delivers.
Access gating is not post-filtering — it's a JOIN. Every search query is scoped to the pages the calling user is allowed to see. The query shape:
```sql
SELECT sc.page_id, sc.section_id, sc.heading, snippet(search_content, 5, '<b>', '</b>', '…', 20)
FROM search_content sc
JOIN pages p ON sc.page_id = p.id
WHERE search_content MATCH 'brand colors'
  AND (
    p.scope = 'org'
    OR (p.scope = 'team' AND p.scope_id IN ('eng', 'design'))
    OR (p.scope = 'user' AND p.scope_id = '{user_id}')
  )
ORDER BY rank LIMIT 10;
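In the Worker, the scope clause of that query would be composed with bound parameters rather than string interpolation; a sketch, with the helper name and return shape being assumptions:

```typescript
// Builds the access-gate clause for a search query. Team IDs and the user
// ID are returned as bind parameters, never interpolated into the SQL.
function scopeClause(
  userId: string,
  teamIds: string[],
): { sql: string; params: string[] } {
  const placeholders = teamIds.map(() => "?").join(", ");
  const teamPart = teamIds.length
    ? ` OR (p.scope = 'team' AND p.scope_id IN (${placeholders}))`
    : "";
  const sql =
    "(p.scope = 'org'" +
    teamPart +
    " OR (p.scope = 'user' AND p.scope_id = ?))";
  return { sql, params: [...teamIds, userId] };
}
```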
The section_id column is UNINDEXED — it is stored on each row as metadata but never matched against search queries. It is the data-mesh-id value sourced from D1's sections table at write time, not extracted from the Markdown text. Search matches only against heading and body. The section_id in results tells the caller exactly which section to pass to get_section or edit_section.
Results from the MCP search_mesh tool include the matching section content directly — not a list of page references. The agent receives usable content. The web UI search shows the heading and snippet with enough context to judge relevance before clicking through.
Vectorize (Cloudflare's vector database) would enable semantic search — useful when a user searches for "how we communicate externally" and the Mesh page says "tone of voice guidelines." That's a real improvement over keyword search. But it adds a second database, requires Workers AI for embedding generation on every write, and introduces embedding costs and latency on the write path. For a small team with a carefully structured Mesh, keyword FTS5 will cover the vast majority of queries — the Mesh isn't a large corpus of unstructured text, it's a structured set of pages with clear headings. I'll add Vectorize if FTS5 proves insufficient in practice.
MCP server — tools catalog
MCP protocol handled at mesh.smplrspace.io/mcp. Same infrastructure pattern as the time-entries MCP already in production.
The /mcp route implements OAuth 2.0 endpoints (/mcp/authorize, /mcp/token) so the Mesh can be registered as a Claude team connector. When a user clicks Connect, Claude initiates the OAuth flow: /mcp/authorize redirects to ClickUp OAuth, the callback issues a Mesh token, and Claude stores it. From then on, every MCP tool call arrives with a Bearer token that the Worker validates against KV before any tool executes. All business logic — R2 reads/writes, D1 queries, access control — runs in the same Worker, with direct storage bindings. 13 tools total.
| Tool | What it does |
|---|---|
| list_pages(scope?, team_id?) | Returns the page catalog accessible to the caller. Optional scope filter ('org', 'team', 'user'). Returns page IDs, titles, updated_at, and section headings — enough for an agent to decide what to load without fetching content. |
| get_page(page_id) | Returns the full page as Markdown, access-checked. The Worker prepends a hierarchical section index sourced from D1's sections table — section IDs and headings indented by depth, so the agent immediately sees the nesting structure and knows which IDs to pass to get_section or edit_section. Response also includes updated_at. The section index is injected by the Worker; it is not in the stored .md file. |
| get_section(page_id, section_id) | Returns a single section as Markdown. The agent uses this when it already knows which section it needs — no need to load the full page. Minimal token cost for targeted context loading. |
| list_sections(page_id) | Returns the section index for a page — IDs, headings, parent IDs, and depth — sourced from D1's sections table. Cheap call: no R2 fetch. Useful when the agent knows the page but wants to inspect its structure before deciding which section to read or edit. The same index is prepended by get_page, so this tool is mainly for agents that already have page content in context and need to refresh or check the section map without reloading the full page. |
| search_mesh(query, scope?, page_id?) | Full-text search via D1 FTS5. Returns matching sections with content inline — not a list of links. Optional scope to narrow to org/team/user. Optional page_id to search within a single page. Results include page title, section heading, updated_at, and the matching text. Access-gated at the query level. |
| get_freshness(page_ids[]) | Given a list of page IDs previously loaded into context, returns which ones have been updated since a given timestamp. Used by agents at session start to decide whether to reload stale context or use what's already in their window. Cheap call: D1 query only, no R2 fetch. |
| write_page(page_id, html, description?) | Create or replace a page. The HTML is the full page content. For protected pages, this creates a draft (not a direct write) — the response indicates draft status and draft ID. For open pages, the write is immediate. description is used as the GitHub commit message. |
| edit_section(page_id, section_id, html, description?) | Surgical section edit. Replaces only the targeted section. For protected pages: creates or updates a draft at section granularity. The agent only sends the changed section — the Worker splices it into the current page content. Produces a minimal diff in the GitHub history. |
| create_draft(page_id, description) | Explicitly fork a protected page into a draft before editing. Returns a draft_id. The agent then uses write_page or edit_section with the draft page ID. Useful when the agent wants to make multiple edits before submitting for review. |
| submit_draft(draft_id) | Marks a draft as submitted for review. Triggers notification to users with can_merge_protected = 1. The original page is untouched until a reviewer merges the draft via the web UI. |
| create_share_link(page_ids[], purpose, expires_in?) | Generates an external share link. purpose is required (e.g. "Onboarding — Acme Corp, May 2026"). expires_in is seconds from now; null = never expires. Returns a URL the caller can share. Stored in D1 for audit. |
| add_backlink(page_id, tool, url, label?) | Register a cross-tool link from a Mesh page to an external resource. Claude calls this when it detects a relationship between current work and a Mesh page — "I've linked this ClickUp task to the brand guidelines page." |
| list_drafts() | Returns drafts the calling user has submitted or is eligible to review. Used by reviewers to find pending changes. Admins see all submitted drafts. |
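get_freshness itself reduces to a timestamp comparison over D1 rows fetched by page ID; a minimal sketch, with the row shape assumed:

```typescript
// Returns the subset of previously-loaded pages that changed after `since`,
// so the agent knows exactly which context to reload.
interface PageMeta {
  id: string;
  updated_at: number;
}

function staleSince(pages: PageMeta[], since: number): PageMeta[] {
  return pages.filter((p) => p.updated_at > since);
}
```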
Every tool that retrieves content returns the content directly, not a reference to go fetch. search_mesh returns the matching sections. get_freshness returns which pages changed and when. The agent never has to chain tool calls to get usable information — one call per intent. This is what makes the grep analogy in Principle 19 work in practice.
Web UI — surfaces and routes
The non-Claude surface. Authenticated for team members, token-gated for external share links. Served from the same Mesh Worker at mesh.smplrspace.io — not Pages, since the auth logic lives in the Worker anyway.
/ — Home.

/p/{page-id} — Page view. Rendered HTML with an updated_at badge. Toggle button to switch between the rendered HTML view and the raw Markdown — the .md file is fetched from R2 on demand and displayed in a styled <pre> block. Useful for checking what agents see.

/drafts — Review queue.

/share/{token} — External share. Token validated against share_links. Serves only the pages in the link's page_ids array. Navigation is limited to within the shared scope. External agents hit the same route with Accept: text/markdown or ?format=md and receive Markdown, not HTML.

/admin — Admin UI.

/settings — User settings.

Content negotiation on share links
The same URL serves both human and agent audiences. The Worker inspects the request's Accept header. If it includes text/markdown, or if ?format=md is present, the response is the pre-computed Markdown — the .md object read straight from R2, the same content agents get over MCP. Otherwise, it's the rendered HTML page. External agents can be configured with the same URL a human would share — no special agent endpoint, no separate API.
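The negotiation check is deliberately small; a sketch, where `wantsMarkdown` is an illustrative helper name:

```typescript
// True when the request asks for Markdown, either via the ?format=md
// query parameter or a text/markdown entry in the Accept header
// (media-type parameters like ";q=0.9" are ignored).
function wantsMarkdown(accept: string | null, url: URL): boolean {
  if (url.searchParams.get("format") === "md") return true;
  return (accept ?? "")
    .split(",")
    .some((part) => part.trim().split(";")[0] === "text/markdown");
}
```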
The edit experience
Editing always happens through Claude — the web UI has no editor of its own. The guiding principle: there is no local version. Every edit Claude makes is written to the Mesh immediately. The preview always shows the live truth. There is no commit step, no staging area, no "push" — the change happens and the preview reflects it. This is the mental model that works for non-technical people.
The artifact approach in Claude.ai breaks this: Claude fetches a page, renders a local copy as an artifact, edits it, then separately writes back to the Mesh. The artifact is not the live Mesh — it's a draft that has to be committed. That staging step is natural for engineers and disorienting for everyone else.
The right approach in Claude.ai: Claude always writes to the Mesh immediately using edit_section or write_page — never accumulating changes in an artifact. The preview is a live browser tab open to mesh.smplrspace.io/p/{page-id} alongside the Claude conversation. What the user sees in that tab is always real and always current.
In Claude Code, the preview panel is sandboxed to local URLs and cannot directly fetch mesh.smplrspace.io. The path to an in-Claude live experience is a small CLI tool — mesh — that authenticates once, then starts a local proxy server that forwards all requests to the Mesh with the user's token injected. Claude Code's preview panel points to that localhost address and renders the full Mesh web UI: navigation, search, every page the user has access to. The user browses the Mesh directly in the panel, exactly as they would in a browser tab. When Claude makes an MCP write, the web UI's own live-update mechanism reflects it in the preview immediately — no page-scoping, no separate browser tab needed. Phase 5 builds this CLI.
Protected pages — draft/review/merge
The pull request workflow for non-technical people, built natively in the Mesh. Implemented with D1 drafts and R2 draft objects — not Artifacts branches.
When a write targets a protected page (all org-level pages, and any team page flagged as protected), the Worker intercepts the write and creates a draft instead. The original page in R2 is never touched until a reviewer approves the merge. The draft lives at drafts/{draft-id}.html in R2 and is tracked in D1's drafts table.
Diff computation
At review time, the Worker computes a diff between the base snapshot (the HTML content of the page at the time the draft was forked, stored as a hash in D1 and resolved from GitHub history) and the draft content (fetched from drafts/{draft-id}.html in R2). The diff is computed using a line-level diff algorithm running in the Worker — no external service. The review UI shows a human-readable side-by-side comparison, not a raw git diff. Changed lines are highlighted. Claude can summarise what changed in plain language if the reviewer asks.
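An LCS-based line diff is small enough to live in the Worker; a sketch (names are assumptions, and the real review UI renders a side-by-side view rather than this flat op list):

```typescript
// Classifies each line of the base (a) and draft (b) as unchanged,
// deleted, or added, using a longest-common-subsequence table.
type Op = { tag: "same" | "del" | "add"; line: string };

function lineDiff(a: string[], b: string[]): Op[] {
  const m = a.length, n = b.length;
  // dp[i][j] = LCS length of a[i..] and b[j..]
  const dp = Array.from({ length: m + 1 }, () => new Array<number>(n + 1).fill(0));
  for (let i = m - 1; i >= 0; i--)
    for (let j = n - 1; j >= 0; j--)
      dp[i][j] = a[i] === b[j] ? dp[i + 1][j + 1] + 1 : Math.max(dp[i + 1][j], dp[i][j + 1]);
  const out: Op[] = [];
  let i = 0, j = 0;
  while (i < m && j < n) {
    if (a[i] === b[j]) { out.push({ tag: "same", line: a[i] }); i++; j++; }
    else if (dp[i + 1][j] >= dp[i][j + 1]) { out.push({ tag: "del", line: a[i] }); i++; }
    else { out.push({ tag: "add", line: b[j] }); j++; }
  }
  while (i < m) out.push({ tag: "del", line: a[i++] });
  while (j < n) out.push({ tag: "add", line: b[j++] });
  return out;
}
```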
Merge operation
Reviewer clicks Merge. The Worker: (1) fetches the draft from R2, (2) writes it to the canonical page R2 key, (3) deletes the draft R2 object, (4) updates D1 drafts row to status merged, (5) updates D1 pages.updated_at, (6) re-indexes search content, (7) queues the GitHub push, (8) dispatches notifications to page subscribers, (9) notifies the draft submitter that their draft was merged. All of this happens in one API call.
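That ordering can be sketched with the side effects behind a dependency interface; interface and method names are assumptions of this example:

```typescript
// Merge orchestration: draft content is promoted to the canonical key,
// bookkeeping and re-indexing follow, and all outbound effects
// (GitHub push, subscriber and submitter notifications) are queued last.
interface MergeDeps {
  readDraft(draftId: string): Promise<string>;
  writeCanonical(pageKey: string, html: string): Promise<void>;
  deleteDraft(draftId: string): Promise<void>;
  markMerged(draftId: string, now: number): Promise<void>;
  touchPage(pageId: string, now: number): Promise<void>;
  reindex(pageId: string, html: string): Promise<void>;
  notify(event: { pageId: string; draftId: string }): Promise<void>;
}

async function mergeDraft(
  d: MergeDeps,
  draftId: string,
  pageId: string,
  pageKey: string,
  now: number,
): Promise<void> {
  const html = await d.readDraft(draftId);  // (1)
  await d.writeCanonical(pageKey, html);    // (2)
  await d.deleteDraft(draftId);             // (3)
  await d.markMerged(draftId, now);         // (4)
  await d.touchPage(pageId, now);           // (5)
  await d.reindex(pageId, html);            // (6)
  await d.notify({ pageId, draftId });      // (7)-(9), queued
}
```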
Cloudflare Artifacts' fork/diff/merge semantics are exactly the right primitive for protected pages. When Artifacts reaches GA stability and I've confirmed the multi-user auth model, this workflow should migrate to Artifacts — the draft becomes a branch, the merge is a native git merge, and the diff is a proper git diff. For now, D1 + R2 drafts give the same UX guarantee with zero beta dependency. The migration is a storage swap with no user-facing change.
Notifications
Subscription model. Async by default. ClickUp Chat DMs and email for phase 1.
On every confirmed write to R2 (after the main write path completes), the API Worker publishes a page-changed event to a Cloudflare Queue. A consumer Worker dequeues these events, queries D1 for subscribers to the affected page or section, and dispatches notifications to their registered channels.
Personal notifications are delivered as ClickUp Chat DMs, addressed via the clickup_id on the users row. No webhook URL to set up. Admin channel routing: admins can also map any page or section to a ClickUp Chat channel — updates to that target are posted to the channel in addition to subscriber DMs. Stored in the channel_routes D1 table; configured from the admin UI.
Push is not the only path: agents call get_freshness at session start. The response tells them which of their loaded pages have been updated since the timestamp they provide. This is the lightweight Tier 2 collaboration primitive (Principle 03) — no automation needed, just an efficient check at session start.
Each page-changed event carries {page_id, section_id?, changed_by, changed_at, diff_summary}.
Using ClickUp Chat DMs as the primary notification channel is a deliberate shortcut to validate the project, not the end state. It works for the initial use case — the team is already in ClickUp — but ties notification delivery to a single provider and assumes every Mesh org is a ClickUp customer. The right long-term model is provider-agnostic: configurable channels (Slack, Teams, email-only, in-app) that any org can wire up regardless of their tooling. The reason to start with ClickUp: no new credential, no extra integration work, the API is already in use for auth. The consumer Worker is channel-agnostic by design, so the migration path is clean — adding a new channel is a new dispatch case, nothing else changes.
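The channel-agnostic dispatch in the consumer Worker can be sketched as a single switch over channel type, with delivery calls stubbed out. The event shape follows the text; the channel types and function names are illustrative:

```typescript
// Sketch of channel-agnostic dispatch in the notification consumer Worker.
// Event shape follows the design doc; channel types are illustrative.
type PageChanged = {
  page_id: string; section_id?: string;
  changed_by: string; changed_at: string; diff_summary: string;
};
type Channel =
  | { type: "clickup_dm"; clickupId: string }
  | { type: "email"; address: string };

// Adding a provider later (Slack, Teams, webhook) is a new case here;
// nothing else in the consumer changes.
function renderDispatch(event: PageChanged, channel: Channel): { to: string; body: string } {
  const body = `${event.changed_by} updated ${event.page_id}: ${event.diff_summary}`;
  switch (channel.type) {
    case "clickup_dm": return { to: `clickup:${channel.clickupId}`, body };
    case "email":      return { to: `mailto:${channel.address}`, body };
  }
}
```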
GitHub sync — safety net
Every write to R2 queues a deferred GitHub push. Rapid edits coalesce — only the latest version in each 15-minute window is committed. Never blocks the Mesh write path.
After the API Worker confirms a successful R2 write, it enqueues a GitHub push event to a Cloudflare Queue with a delaySeconds of 900 (15 minutes). The consumer Worker dequeues the event, does a cheap HEAD request to R2 to get the current ETag, and compares it to the ETag recorded in the event. If they match, this is still the latest version — push it. If they don't match, a newer write happened and a newer event is already in the queue — drop this one silently. No git binary, no git daemon — just an HTTPS API call with the base64-encoded file content and a commit message derived from the write description.
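The coalescing check and the org-prefix stripping described here are both tiny pure functions; a sketch under the same assumptions (the event shape is illustrative):

```typescript
// Sketch of the 15-minute coalescing decision plus the repo path mapping.
// Event shape is illustrative; ETags come from R2.
type PushEvent = { r2Key: string; etagAtEnqueue: string };

// Strip the per-org prefix: the target repo is already org-scoped.
function repoPath(r2Key: string): string {
  return r2Key.replace(/^org\/[^/]+\//, "");
}

// True → the event still describes the latest version; push it.
// False → a newer write happened and carries its own queued event; drop silently.
function shouldPush(event: PushEvent, currentEtag: string): boolean {
  return event.etagAtEnqueue === currentEtag;
}
```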
Each org syncs to its own GitHub repository, so the org/{org-id}/ R2 prefix is redundant there. The consumer Worker strips it before constructing the GitHub path: org/smplrspace/brand/guidelines.html in R2 → brand/guidelines.html in the repo. Diff history is readable. Blame is meaningful. No tooling required to understand the repo structure.
Build sequence
Five phases. Phase 1 produces a working Mesh. Each phase adds a defined capability layer.
- D1 schema — users, team_memberships, pages, sections (no drafts, no share_links yet)
- ClickUp OAuth flow — /auth/clickup → /auth/callback → KV session → cookie
- R2 read/write — API Worker: write_page creates/replaces HTML; get_page returns Markdown via HTMLRewriter
- MCP route (mesh.smplrspace.io/mcp) — OAuth 2.0 endpoints + tools: get_page, write_page, list_pages, get_section
- Claude connector registration — admin registers the Mesh as a connector in the Claude team account; users connect via ClickUp OAuth
- Markdown extraction — HTMLRewriter converter, covering all common HTML elements
- Web UI (reader) — /, /p/{page-id}, auth flow. No search, no editing. Brand-styled HTML from R2 served in the Mesh chrome.
- D1 backlinks table — MCP tool add_backlink; backlink display on page view in web UI
- Zen mode — distraction-free reading on /p/{page-id}, no header or navigation
- D1 FTS5 search index — search_content virtual table; populated on every write via HTMLRewriter text extraction
- MCP tool: search_mesh — D1 FTS5 query with access gating; returns content inline
- Section addressing — HTMLRewriter-based extraction by data-mesh-id; MCP tool get_section
- MCP tool: edit_section — surgical edit via HTMLRewriter splice; updates D1 sections row and FTS index
- MCP tool: get_freshness — D1 query against pages.updated_at for a list of page IDs
- Web UI: search — /search route with FTS query, results with snippets, scoped to user's access
- D1 drafts table — full schema; API Worker intercepts writes to protected pages and creates drafts
- MCP tools: create_draft, submit_draft, list_drafts
- Web UI: /drafts review queue — diff view with merge/reject; line-level diff computed in Worker
- D1 share_links table — MCP tool create_share_link; /share/{token} route with content negotiation
- Web UI: /admin — user management (team assignments, merge rights), share link audit table with revoke
- GitHub sync — Cloudflare Queue + consumer Worker pushing to GitHub Contents API on every write; 15-minute coalescing window
- D1 subscriptions table — /settings page to subscribe to pages/sections
- Cloudflare Queues — page-changed events queued on every write; 30-second batching window
- Notification consumer Worker — ClickUp Chat DM and email dispatch; admin channel routing via channel_routes; protected-page merge broadcast
- Web UI: /recent — ordered list of pages by updated_at DESC; shows title, author of last change, and relative timestamp; scoped to the user's access
- Org-level protection defaults — all org-scope pages automatically flagged as protected; admin can open individual pages
- Durable Object WebSocket — push-based live updates on every MCP write; any web UI session open to an affected page refreshes automatically
- Local bridge CLI — mesh; authenticates once and starts an authenticated local proxy at localhost; Claude Code's preview panel renders the full Mesh web UI — navigation, search, live updates, the complete session in-panel
- Change highlighting — brief CSS transition on the modified section when a live update lands
- Staleness indicators — pages.updated_at shown as relative time on page view; configurable per-page review reminders
- Vectorize (optional) — semantic search layer on top of D1 FTS if keyword search proves insufficient in practice
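The phase 2 search items hinge on one query shape: an FTS5 match gated by the user's access. A sketch of how the search_mesh tool could build that query for a D1 binding — the table and column names beyond search_content, and the gating predicate, are assumptions:

```typescript
// Sketch of an access-gated FTS5 query for the search_mesh tool.
// `search_content` follows the build list; other names and the gating
// predicate are illustrative assumptions about the schema.
function buildSearchQuery(teamIds: string[]): string {
  // Org-scope pages are visible to everyone; team pages only to members.
  const gate = teamIds.length
    ? `(p.scope = 'org' OR p.team_id IN (${teamIds.map(() => "?").join(", ")}))`
    : `p.scope = 'org'`;
  return `
    SELECT p.id, snippet(search_content, 1, '<b>', '</b>', '…', 12) AS snip
    FROM search_content
    JOIN pages p ON p.id = search_content.page_id
    WHERE search_content MATCH ?
      AND ${gate}
    ORDER BY rank`;
}
// Bound as, roughly: db.prepare(buildSearchQuery(teams)).bind(query, ...teams)
```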
Cloudflare Artifacts — migration target
When to migrate, what changes, and what doesn't.
The research concluded that Cloudflare Artifacts is the right long-term canonical storage layer for the Mesh. Its git-native versioning provides real content history (not just GitHub-as-backup), its fork/diff/merge semantics are the ideal primitive for the draft/review/merge workflow, and its agent-first design matches what the Mesh needs. The reason I'm not starting with it is beta risk on the critical path, not architectural disagreement.
The migration triggers are:
- Artifacts reaches GA (not just public beta) with a documented API stability commitment
- I've confirmed the per-scope auth model — specifically, how to enforce that user A cannot read user B's individual content within the same repository
- I've evaluated cost at realistic op counts (small team, mixed read/write ratio)
When those conditions are met, the migration is a storage swap. The Worker layer stays exactly as built. The D1 metadata model stays. The HTMLRewriter operations stay. What changes: R2 object reads/writes become Artifacts file reads/writes via the Workers SDK; the draft workflow uses Artifacts branches instead of D1 draft rows + R2 draft objects; GitHub becomes redundant (Artifacts provides its own version history) and can be dropped or kept as an offsite backup.
By building the API Worker as a storage-agnostic layer (all R2 calls go through an internal content.get(key) / content.put(key, value) abstraction), the migration is a swap of the implementation behind that abstraction. No other component needs to change. This is a design constraint I'm enforcing from phase 1.
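That seam can be sketched as an interface plus the phase 1 implementation over an R2-like bucket binding (the bucket shape here is a minimal stand-in for the real R2 binding):

```typescript
// The storage-agnostic seam: phase 1 binds it to R2; an Artifacts build
// swaps only the class behind it. The bucket shape is a minimal stand-in.
interface ContentStore {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

class R2ContentStore implements ContentStore {
  constructor(private bucket: {
    get(key: string): Promise<{ text(): Promise<string> } | null>;
    put(key: string, value: string): Promise<unknown>;
  }) {}
  async get(key: string): Promise<string | null> {
    const obj = await this.bucket.get(key);
    return obj ? obj.text() : null;
  }
  async put(key: string, value: string): Promise<void> {
    await this.bucket.put(key, value);
  }
}
```

An ArtifactsContentStore implementing the same interface is the whole migration, as far as the rest of the Worker is concerned.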
Prototype shortcuts — ClickUp dependencies
A summary of where ClickUp is being used as a shortcut, why, and what the production equivalent looks like.
The Mesh uses ClickUp in three distinct ways: as an identity provider, as a notification delivery mechanism, and as a channel routing target. All three are deliberate shortcuts that let the prototype ship without building general-purpose infrastructure. None of them are the right long-term answer for a multi-tenant product that any org can adopt regardless of their tooling.
| Area | Prototype shortcut | Why it works for now | Production equivalent |
|---|---|---|---|
| Org identity | A ClickUp workspace ID maps 1:1 to a Mesh org. No org can exist without a ClickUp workspace behind it. | Every Smplrspace team member already has a ClickUp account. Zero onboarding friction. | Org creation is independent of any third-party provider. An admin creates a Mesh org directly — email, name, slug — and optionally connects external providers (ClickUp, Google, etc.) as identity sources. |
| User identity | User accounts are seeded by ClickUp OAuth. The clickup_id field on the users row is the canonical user handle for notification delivery and access control. | No registration flow to build. Users already exist in ClickUp; the Mesh inherits that directory. | Native Mesh identities with their own credentials. External OAuth (ClickUp, Google, GitHub) becomes an optional sign-in method, not the foundation. clickup_id becomes one of several optional external identity fields. |
| Personal notifications | Notifications are delivered as ClickUp Chat DMs, using the ClickUp API token already held by the Worker from OAuth. No additional integration required. | The API token is free — it exists for auth. Sending a DM is one extra API call with zero new credentials. | A configurable notification layer where each org (and each user) picks their channel: Slack DM, Teams message, email, in-app, or a custom webhook. No assumption about which communication tools the org uses. |
| Channel routing | Admins can route page/section updates to a ClickUp Chat channel. ClickUp Chat is the only supported channel type for broadcast routing. | Same reason as personal notifications — the API token is already there. | Channel routing supports any destination the org configures: Slack channels, Teams channels, generic incoming webhooks. The channel_routes table stores a destination URL and type, not a ClickUp-specific ID. |
| MCP connector | The Mesh is registered as a connector in Smplrspace's Claude team account. Other orgs cannot self-serve connect — an admin would need to register a new connector in their own Claude team account. | Smplrspace is the only org using the prototype. One connector registration is sufficient. | The Mesh is a publicly listed Claude connector. Any org can connect via the standard OAuth flow without needing to register anything themselves. |
The auth and notification layers are the two components most coupled to ClickUp. Both are isolated behind clean interfaces: the session and token layer (KV) doesn't know how a user was authenticated, and the notification consumer Worker dispatches by channel type — adding a new type is a new case in the dispatch logic, not a structural change. The D1 schema will need additive migrations (new auth provider fields, new channel destination format), but no destructive changes. The core of the Mesh — R2 storage, D1 metadata, HTMLRewriter, MCP tools, web UI — is entirely ClickUp-independent today.