DocsAI — Full PRD
Source of truth:
~/cofoundy/products/cofoundy-platform/docs-ai/PRD.md. This is the team-facing snapshot, last synced 2026-05-03. Do not edit this file directly — edit the root PRD and re-publish.
One-liner
AI-native client deliverable portal for agencies. AI agents publish markdown → clients see branded, access-controlled pages with view tracking.
The Problem
Every company using AI agents (Claude Code, Cursor, Copilot) generates markdown files that sit in git repos, invisible to anyone who doesn’t cat files in a terminal. The output is trapped.
For Cofoundy specifically:
- Claude Code writes meeting notes, brand guides, proposals, project docs, deliverables — all as .md files
- Those files live in repos that clients never see
- Today we send PDFs on WhatsApp or links to Google Docs — things get lost, there’s no single source, no branding, no access control
- Team members write CLAUDE.md files that are rich with knowledge, but nobody reads them because raw markdown in a repo is hard to navigate
For the market broadly:
- Every agency, consultancy, and dev shop using AI coding tools has this problem
- The gap between “AI generated a great document” and “client can see it professionally” is manual work: copy to Notion, export to PDF, upload to Drive, send link
- That manual step kills the speed advantage of AI
The Insight
Documents shouldn’t be edited by humans anymore. They should be authored by AI agents and published automatically. The human’s job is to review, approve, and share — not to format, upload, and manage links.
The defensible version isn’t “we render markdown better” — it’s “we are the only platform purpose-built for agencies to publish AI-generated deliverables for clients, with workflow depth that documentation platforms won’t build.”
Analogy: DocSend meets GitBook, but for AI-native agencies. Content stays in git (zero vendor lock-in), platform handles branding, access control, view tracking, and client-facing workflows.
Competitive Landscape (March 2026 Research)
Why not just use an existing tool?
| Tool | What it does well | Why it’s not this |
|---|---|---|
| GitBook ($65/site/mo, 450K users) | Markdown + git sync + RBAC + API. Covers ~90% of features. | Documentation platform, not client deliverables. No view tracking, no per-client branded spaces, no approval workflows. Could add an AI publish endpoint in weeks — biggest threat. |
| Mintlify ($18M from a16z, ~$10M ARR) | MDX-native, git-native, auto-generates docs from code. Customers: Anthropic, Vercel, Cursor. | API docs only. Narrow focus. But validates “markdown in, beautiful pages out” at 10x YoY growth. |
| Notion ($600M ARR, 100M+ users) | Notion Sites launched June 2024 — pages become websites with custom domains. | Proprietary block format, not markdown-native. No git sync. Agencies already use it but deliverable sharing is clunky. |
| Qwilr/PandaDoc ($35-59/user/mo) | Branded client-facing proposals with analytics. | Drag-and-drop editors. No markdown, no git, no AI agent API. Right workflow, wrong input format. |
| WordPress (43% of web) | Launched MCP write in March 2026 — AI agents can publish posts. | Overkill CMS for deliverable publishing. But massive distribution threat. |
The whitespace
No existing tool combines: markdown-from-git + branded client pages + view tracking + version diffs + approval workflows + AI agent API. GitBook has the tech but not the workflow. Qwilr has the workflow but not the tech. We need both. Version diffs are a hidden advantage: because content lives in git, we get diffing for free at the storage layer — the challenge is rendering it visually for non-technical clients (word-level inline diffs, not code diffs).
Emerging signal
5+ Show HN projects in 2025 solving “markdown → shareable URL” (mdto.page, md2.website, wrds.cc). One commenter explicitly requested an API “for agents to share content.” An OSS project (waynesutton/markdown-site, 598 stars) positions itself as “publishing for AI agents.” None became SaaS products with RBAC or agency workflows.
Adjacent layer (we sit above, not against)
A separate emerging category is agent-native primitive hosting — hosting and storage explicitly built for AI agents to publish and store files. Different category than Cofoundy Docs:
| Tool | What it does | Why it’s not a competitor |
|---|---|---|
| here.now | “Web hosting and storage for agents.” Agents publish static files (HTML/PDF/images) to {slug}.here.now. Anonymous tier (24h URL, no account). Drives for agent-to-agent file handoff. Stablecoin paywalls via Tempo. | Raw substrate. No branding, no client roles, no view tracking, no version diffs, no approval workflows, no agency multi-tenancy. We could even use it as one storage backend. |
| Cloudflare Pages / Vercel / Netlify | Static site deploy with custom domains, CI/CD. | Generic. No agent-author-first, no llms.txt-first, no client deliverable workflow. |
Lesson from here.now: Their agent-discovery surface (/llms.txt, /llms-full.txt, /openapi.json, /.well-known/agent.json, /.well-known/ai-plugin.json, hosted skill installable via npx skills add, ?mode=agent rendering, structured error contract with code/retry_after/docs_url) is best-in-class. We must match or exceed it on V1.1 — see “Agent-Native Surface” below. An “AI-native” product that ships less agent-discovery surface than a generic file-host is mispositioned.
Market Sizing
| Ring | Size | How we get there |
|---|---|---|
| Beachhead: Agencies using AI coding tools that need client deliverable publishing | 25K-70K firms globally, $60M-$340M/yr at $200-400/mo | Cofoundy = customer zero. Direct outreach to dev shops and consultancies. |
| Adjacent: Developer documentation (compete with GitBook/Mintlify) | $500M+/yr | Not recommended — incumbents too strong here. |
| Broader: Document management SaaS | $8-10B in 2025, 13-16% CAGR | Long-term expansion if agency wedge works. |
| AI coding tools market (context) | $3.5-7.4B in 2025, 21-28% CAGR | This market creates our users. Cursor: $0→$2B ARR in 18 months. |
The initial market is narrow but growing explosively with AI agent adoption.
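The beachhead revenue band in the table is simple arithmetic (firms × monthly price × 12); a quick check of the stated bounds:

```python
# Sanity check on the beachhead revenue range: firms x monthly price x 12.
def annual_revenue(firms: int, price_per_month: int) -> int:
    return firms * price_per_month * 12

low = annual_revenue(25_000, 200)    # conservative bound
high = annual_revenue(70_000, 400)   # optimistic bound

print(f"${low / 1e6:.0f}M - ${high / 1e6:.0f}M per year")  # $60M - $336M per year
```

The exact upper bound is $336M/yr, which the table rounds to $340M.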
Users & Personas
| Persona | How they interact | Primary need |
|---|---|---|
| AI Agent (Claude Code) | API / CLI skill (/publish) | Push markdown → get URL back |
| Team Member (Cofoundy ops) | Simple web UI | Browse, edit, organize docs. Light editing for non-technical team |
| Client (external) | Read-only branded view | See deliverables professionally. Download if needed |
| Public visitor | Blog / case study pages | SEO-friendly content, brand presence |
Priority order: AI Agent > Client reader > Team member > Public visitor
Core Concepts
Document
A markdown file in a project folder with YAML frontmatter:
`src/content/docs/{project}/{slug}.md`:

```markdown
---
title: "Brand Guidelines — Acme Corp"
role: client # team | client | public
version: 3
author: andre
created: 2026-03-22
tags: [branding, deliverable]
---
# Brand Guidelines
...
```
Note: project is determined by the folder name, not frontmatter. This prevents slug collisions across projects.
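A minimal sketch of the folder-derived identity and the schema check described above (the helper names are illustrative, not the shipped Zod validator):

```python
from pathlib import Path

REQUIRED = {"title", "role"}
ROLES = {"team", "client", "public"}

def doc_identity(path: str) -> tuple[str, str]:
    """Derive (project, slug) from the layout src/content/docs/{project}/{slug}.md.
    project comes from the folder name, never from frontmatter, so the same
    slug can exist in two different projects without colliding."""
    p = Path(path)
    return p.parent.name, p.stem

def validate_frontmatter(fm: dict) -> None:
    """Mirror the build-time checks: required fields plus a closed role set."""
    missing = REQUIRED - fm.keys()
    if missing:
        raise ValueError(f"missing required frontmatter: {sorted(missing)}")
    if fm["role"] not in ROLES:
        raise ValueError(f"role must be one of {sorted(ROLES)}")

project, slug = doc_identity("src/content/docs/client-acme/brand-guidelines.md")
print(project, slug)  # client-acme brand-guidelines
```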
Project Space
Documents are grouped by folder. Maps 1:1 to Cofoundy’s project structure:
- docs/cofoundy/ → docs.cofoundy.dev/cofoundy/ — internal docs (team only)
- docs/client-acme/ → docs.cofoundy.dev/client-acme/ — client deliverables
- docs/blog/ → docs.cofoundy.dev/blog/ — public content
Access Roles
| Role | Can see team docs | Can see client docs | Can see public docs |
|---|---|---|---|
| Admin (Cofoundy) | Yes | Yes (all clients) | Yes |
| Team (Cofoundy staff) | Yes | Yes (assigned clients) | Yes |
| Client (external) | No | Only their project | Yes |
| Public | No | No | Yes |
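The matrix above collapses into a small predicate. A sketch, assuming `assigned` holds the viewer's granted projects from the permissions table (names are illustrative):

```python
def can_view(role: str, doc_role: str, doc_project: str, assigned: set[str]) -> bool:
    """Authorization predicate for the role matrix above.
    assigned = projects granted to this viewer (the D1 permissions table)."""
    if doc_role == "public":
        return True
    if role == "admin":
        return True
    if role == "team":
        # team sees all team docs, and client docs for assigned clients
        return doc_role == "team" or doc_project in assigned
    if role == "client":
        return doc_role == "client" and doc_project in assigned
    return False

print(can_view("client", "client", "client-acme", {"client-acme"}))  # True
print(can_view("client", "team", "cofoundy", {"client-acme"}))       # False
```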
What We Have Now (shipped 2026-03-22 → 2026-03-25)
Goal
Cofoundy uses it internally for 1 week. Claude Code can publish. Team can read. Clients see deliverables. Access is per-project (PoLP).
Platform
- Astro 6 static site with Zod-validated content collections
- Full markdown rendering: GFM, syntax highlighting (Shiki dual-theme), tables
- Mermaid.js diagram rendering (client-side, theme-aware)
- Table of contents auto-generated per doc
- Nested folder routing: src/content/docs/{project}/{slug}.md
- Cofoundy branding
  - Header: Cofoundy logo (white/dark variants) + “docs” label
  - Footer: isologo + cofoundy.dev link
  - Fonts: Space Grotesk (headings), Inter (body), JetBrains Mono (code)
  - Colors: brand tokens matching @cofoundy/ui (#46A0D0 primary, #020b1b dark bg)
  - Dark/light theme switcher (persisted to localStorage, respects system preference)
  - Print-friendly CSS (clean PDF via browser print)
- Frontmatter schema (validated at build time)
  - title (required), role (required: team/client/public), version, author, created, tags
  - type: markdown (default) or pdf — PDF docs embed a pdf.js viewer
  - pdf_file: filename for PDF documents (stored in public/files/{project}/)
  - Project derived from folder name — no frontmatter duplication, no slug collisions
- PDF document viewer (pdf.js)
  - Canvas-based renderer (not browser iframe — consistent cross-browser)
  - Toolbar: prev/next page, zoom in/out, zoom level indicator, fullscreen, download
  - Keyboard navigation (arrow keys)
  - Dark container with white page + shadow (Skim-like aesthetic)
  - Supports PDF-only docs (frontmatter only, no markdown body)
Infrastructure
- Deploy: Cloudflare Pages + GitHub Actions
  - Repo: github.com/cofoundy/docs-ai (private)
  - Production: docs.cofoundy.dev (custom domain, SSL)
  - Auto-deploy on push to main via GitHub Actions (deploy.yml)
  - Uses wrangler@latest + org secrets CLOUDFLARE_API_TOKEN, CLOUDFLARE_ACCOUNT_ID
- Two-layer access control (authentication + authorization)
  - Authentication: Cloudflare Access (OTP login via email, 24h session cookie)
  - Authorization: D1 permissions middleware (functions/_middleware.ts)
  - Access model (Principle of Least Privilege): admin — all team docs, all client projects; team — all team docs, all client projects; client — only their project(s); public docs visible to all
  - D1 permissions table: (email, role, project) — source of truth for who sees what
  - CF Access apps: “DocsAI Team” (/team/*), “DocsAI Client” (/client/*) — gate for login
  - scripts/grant-access.sh — single command to: insert D1 permission + sync CF Access + send Resend email
  - scripts/sync-access.sh — syncs CF Access policies from access-config.json
  - Branded 403 page for unauthorized access attempts
- View analytics (D1 + Pages Functions)
  - Cloudflare D1 database: docs-ai-analytics (SQLite, WEUR region)
  - POST /_api/track — logs page view with session ID, viewer email (from CF Access JWT), referrer
  - GET /_api/views — aggregated analytics (team-only: views, unique viewers, avg time, last viewed)
  - Tracking beacon in every doc page (fetch on load, sendBeacon on leave for time-on-page)
  - Dashboard at /team/analytics — filterable by project, sortable table
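The /_api/views rollup can be approximated locally with SQLite (D1 is SQLite under the hood); the table and column names here are assumptions, not the production schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE views (
    doc TEXT, viewer TEXT, session TEXT, seconds INTEGER)""")
rows = [
    ("client-acme/proposal", "client@acme.com", "s1", 120),
    ("client-acme/proposal", "client@acme.com", "s2", 80),
    ("client-acme/proposal", "cto@acme.com",    "s3", 40),
]
con.executemany("INSERT INTO views VALUES (?, ?, ?, ?)", rows)

# Per-doc rollup: total views, unique viewers, average time on page,
# i.e. the numbers the /team/analytics dashboard surfaces.
agg = con.execute("""
    SELECT doc, COUNT(*) AS views,
           COUNT(DISTINCT viewer) AS unique_viewers,
           AVG(seconds) AS avg_seconds
    FROM views GROUP BY doc
""").fetchone()
print(agg)  # ('client-acme/proposal', 3, 2, 80.0)
```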
Publishing
- /publish skill (cofoundy-toolkit v1.21+)
  - Backed by a deterministic publish.py script (PEP 723 inline deps via uv) — not a prompt
  - Input: path to .md file + --project + --role + optional --notify <email> + optional --tags
  - Reads source, parses frontmatter, extracts the H1 from the body and strips it (single source of truth: frontmatter title is canonical, body must not duplicate)
  - Auto-increments version if a doc with the same slug already exists
  - Validates tags against a controlled vocabulary (Domain × Type × optional Subdomain — see SKILL.md)
  - Commits + pushes → GitHub Actions auto-deploys
  - Returns a role-prefixed URL (/team/..., /client/..., or /...)
  - --notify <email>: calls grant-access.sh, which grants the D1 permission + syncs CF Access + sends a branded Resend email
  - Supports --type pdf for PDF documents
  - --dry-run writes a preview to stderr without touching file or git
  - Fully autonomous e2e: one command publishes, grants access, notifies the client
- Index pages
  - Global index: sections for public, team, client projects with doc counts + role badges
  - Per-project index: doc list sorted by date
  - Team index links to the analytics dashboard
Content (15 docs published)
| Project | Role | Docs |
|---|---|---|
| cofoundy | team | Welcome, Team, Brand Book, Leads Guide (PDF) |
| starthack-2026 | team + public | README (public), Pitch Guardrails, Numbers Cheatsheet, Deployment |
| docs-ai | team + public | Public PRD, Full PRD |
| client-demo | client | Project Brief, Technical Proposal, Weekly Update #1 |
Not yet built
- Web UI for editing
- Versioning UI (version field exists in frontmatter)
- Version diff view (visual inline diff between versions — V2 #4)
- Search (Pagefind)
- REST API (skill uses git push directly)
- Approval button (V2 — track client sign-off on deliverables)
V1 Features — SHIPPED (2026-03-23 → 2026-03-25)
All V1 features have been built and deployed.
| Feature | Status | Notes |
|---|---|---|
| Client spaces | Shipped | docs.cofoundy.dev/client/{project}/ with per-project access |
| View analytics | Shipped | D1 database, tracking beacon, /team/analytics dashboard |
| GitHub Actions auto-deploy | Shipped | Push to main → auto-deploy |
| Cloudflare Access + D1 middleware | Shipped | Two-layer: CF Access (auth) + D1 middleware (authorization, PoLP) |
| PDF document viewer | Shipped | pdf.js canvas renderer with toolbar, fullscreen, keyboard nav |
| Client email notifications | Shipped | grant-access.sh — D1 insert + CF Access sync + Resend email |
| Per-project permissions (PoLP) | Shipped | D1 permissions table enforced by _middleware.ts on every request |
| Autonomous e2e publish | Shipped | /publish --notify email does everything: publish + grant access + notify |
Not yet built (deferred to V1.5)
- Versioning UI — version field exists in frontmatter but no history dropdown
- REST API — skill uses git push directly (API adds complexity without clear need yet)
- Search — Pagefind (static search, zero-server) — quick to add when content volume demands it
- Per-project isolation for clients at CDN level — middleware blocks, but HTML is served before 403 (acceptable for V1, fix with SSR in V2)
V1.1 — Agent-Native Surface (committed, parallel to V2.0)
Premise: We claim “AI-native”, so we must ship more agent-discovery surface than a generic file-host. here.now sets the bar; we must clear it, not meet it halfway.
| Feature | Path / Spec | Purpose |
|---|---|---|
| /llms.txt (root) | https://docs.cofoundy.dev/llms.txt | llmstxt.org spec — concise context for any agent reading our site. Lists projects + key URLs. |
| /llms-full.txt | /llms-full.txt | Expanded version — full doc index with summaries. |
| Scoped /llms.txt | /{project}/llms.txt, /team/{project}/llms.txt | Per-project context. An agent reading a client space gets only that client’s docs. |
| ?mode=agent | Any URL with ?mode=agent | Returns clean markdown + frontmatter JSON, no chrome (no nav, no theme toggle, no JS). |
| /openapi.json | OpenAPI 3.1 | Full API spec: list docs, get doc, search, publish, get analytics, manage permissions. Enables Cursor/Codex/any agent without a skill. |
| /.well-known/agent.json | Agent manifest | Cofoundy Docs declares itself as an agent-readable surface (per emerging W3C agent discovery patterns). |
| /.well-known/ai-plugin.json | OpenAI plugin manifest | ChatGPT plugins compatibility. |
| /.well-known/mcp.json | MCP server discovery | Points to mcp.docs.cofoundy.dev (MCP server). |
| /.well-known/skills/index.json | Hermes/Claude skill index | Lists hosted skills (just /publish for now). |
| MCP server | mcp.docs.cofoundy.dev | Tools: list_docs, read_doc, search, publish_doc, get_analytics, grant_access. |
| Hosted skill, npx-installable | npx @cofoundy/docs install | Installs the /publish skill into Claude Code/Cursor/Codex without cloning the toolkit. |
| Structured error contract | All API responses on error | {code, message, retry_after?, docs_url} — agent-friendly debugging. |
| Live-docs > cached-skill contract | Documented in skill + llms.txt | Meta-instruction: “If your local skill disagrees with /openapi.json, prefer live.” Prevents drift bugs. |
| Scoped agent tokens | TTL + path prefix | E.g., a 7-day token scoped to client-acme/ write-only. For agent-to-agent handoff and least-privilege automation. |
Anti-feature: Don’t invent a parallel agent protocol. Comply with llmstxt.org / well-known / OpenAPI / MCP standards and extend with our domain (deliverables) — never a private alternative.
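The scoped-token row above (TTL + path prefix) can be sketched as a stateless HMAC-signed credential, verifiable at the edge without a DB lookup. The claim names and signing scheme here are assumptions, not a shipped format:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"replace-with-workspace-secret"  # assumption: one secret per workspace

def mint_token(path_prefix: str, ttl_seconds: int, scope: str = "write") -> str:
    """Scoped agent token: TTL + path prefix + scope, HMAC-signed."""
    claims = {"prefix": path_prefix, "scope": scope,
              "exp": int(time.time()) + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"

def check_token(token: str, path: str, want_scope: str) -> bool:
    """Verify signature, then enforce expiry, path prefix, and scope."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    claims = json.loads(base64.urlsafe_b64decode(body))
    return (time.time() < claims["exp"]
            and path.startswith(claims["prefix"])
            and claims["scope"] == want_scope)

# A 7-day write token scoped to one client space, as in the table above.
t = mint_token("client-acme/", ttl_seconds=7 * 24 * 3600)
print(check_token(t, "client-acme/weekly-update.md", "write"))  # True
print(check_token(t, "client-beta/other.md", "write"))          # False
```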
V2.0 — Stack Migration (Committed)
Decision: Pull V2’s “Next.js rewrite” forward to V2.0 (immediately after V1.1 agent surface). Astro was correct for MVP validation. The next ceiling is visual richness + multi-tenancy + workflow loop, all of which the design system already serves natively.
What changes
| Layer | Astro (current) | Next.js (V2.0) |
|---|---|---|
| Framework | Astro 6 (static + minimal JS) | Next.js 15+ (App Router, RSC, SSG default for docs, SSR + middleware for /team/* and /client/*) |
| Content format | .md only | .md + .mdx (component imports in MDX) |
| Component library | 1 Astro component (TOC), inline JS for everything else | @cofoundy/ui consumed directly via transpilePackages |
| Styling | Vanilla CSS + tokens | Same tokens, exposed via @cofoundy/ui/styles + Tailwind v4 |
| PDF viewer | pdf.js inline | react-pdf or pdf.js dynamic import |
| Mermaid | Vanilla JS lightbox | React component wrapper |
| Auth/RBAC | CF Access + D1 middleware | Same backend; frontend reads JWT via Next middleware |
| Hosting | Cloudflare Pages | Cloudflare Pages with Next-on-Pages, OR Vercel (decide based on edge needs) |
Why now (not after Week 4 validation)
The premise of Week 4 validation is “does this resolve the agency deliverable problem?”. That premise can only be validated with a product that delivers the visual richness the positioning promises. Markdown-with-colors is not “documents that feel designed.” We migrate to the stack that lets us actually build the product before we ask if the product works.
Migration plan (1 day with /sprint, parallel dispatch)
- Next.js scaffold + App Router + routing parity
- DocLayout port → React component, MDX setup, remark-breaks port
- PDF viewer + html2pdf + Mermaid component
- Cloudflare Access middleware + D1 permissions port
- 26 docs migration (most stay .md, flagship docs go .mdx)
- Browser QA gate before DNS cutover
- Astro instance kept at legacy.docs.cofoundy.dev for 30 days as fallback
V2.1 — Visual Vocabulary (MDX Components)
Premise: Once on Next.js + @cofoundy/ui, MDX docs can compose any component from the design system. Authors get a vocabulary that goes beyond prose.
Components available in MDX (from @cofoundy/ui)
| Component | Use case |
|---|---|
| <PersonalNote author={...} avatar={...} signature={...}> | “From the desk of Andre” — signed personal intro at the top of a deliverable. Web-grade component (NOT the email version — shares tokens, different layout). |
| <MetadataCard items={[...]} /> | Replaces the **Para:**\n**Tiempo:**\n**Costo:** plain-text block. Renders as branded chips/cards. |
| <Callout variant="note\|tip\|warn\|danger"> | Already exists in CSS. Wrapped as an MDX component. |
| <ScopeList items={[...]} /> | Bulleted list with checkmarks (port from the email PersonalNote primitive). For “deliverable scope” sections. |
| <InfoBox label value link?> / <InfoBoxRow> | Stat cards inline (from email primitives, web-grade port). |
| <NextStepCallout label body> | Branded “next step” CTA (port from email). |
| <StatCard label value trend /> | KPI in a doc — e.g., a monthly client report shows “Revenue +18%” |
| <BarChart>, <FunnelChart>, <HorizontalBar>, <DonutChart> | Embedded charts. Pure CSS, no charting lib. |
| <Heatmap data={...} /> | GitHub-style activity grid — for engagement reports. |
| <Leaderboard items={[...]} /> | Ranked list for “top contributors / top clients / top deliverables” docs. |
| <ProgressBar current target /> | Project progress indicator. |
| <ActivityFeed events={[...]} /> | Timeline of events for project status docs. |
| <AnimatedNumber value duration /> | Counter animation for hero KPIs. |
| <Button>, <Badge>, <Avatar>, <Logo> | Branded primitives. |
| <ChatAboutDoc docId={current} /> | Embedded ChatWidgetFloating scoped to the current doc — agent-powered Q&A. The doc itself becomes interactive. |
| <ApprovalBlock requiredFrom={email}> | Sign-off CTA — pairs with the V2 approval workflow. |
Authoring patterns to encode
- Heuristic injection in publish.py — detect plain-text patterns like the **Para:**\n**Tiempo:**\n**Costo:** block and auto-wrap them in <MetadataCard> during publish. Authors keep writing markdown; the script enriches.
- MDX templates per doc-type — client-monthly-report.mdx, proposal.mdx, kickoff-recap.mdx as starting points with components pre-imported.
- Storybook of doc components — extend ui.cofoundy.dev with a /docs namespace showing every doc-component with copy-paste MDX snippets.
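The heuristic-injection idea can be sketched as a publish-time pass: runs of consecutive `**Key:** value` lines get wrapped in a `<MetadataCard>`, everything else passes through untouched. Illustrative only, not the real publish.py:

```python
import json
import re

LINE = re.compile(r"^\*\*([^*]+):\*\*\s*(.+)$")

def enrich(markdown: str) -> str:
    """Wrap runs of two or more `**Key:** value` lines in a <MetadataCard>.
    Authors keep writing plain markdown; the script enriches at publish time."""
    out, run = [], []

    def flush():
        if len(run) >= 2:  # only wrap real metadata blocks, not lone bold labels
            items = [{"label": k, "value": v} for k, v in run]
            out.append(f"<MetadataCard items={{{json.dumps(items)}}} />")
        else:
            out.extend(f"**{k}:** {v}" for k, v in run)
        run.clear()

    for line in markdown.splitlines():
        m = LINE.match(line)
        if m:
            run.append((m.group(1), m.group(2)))
        else:
            flush()
            out.append(line)
    flush()
    return "\n".join(out)

doc = "**Para:** Acme Corp\n**Tiempo:** 3 semanas\n**Costo:** $4.500\n\nBody text."
print(enrich(doc))
```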
V2.2 — AI Capabilities Beyond Authoring
Premise: “DocsAI” should mean AI throughout the lifecycle, not just at publish-time. Each doc URL accepts query params that invoke AI on-demand, server-side via Workers.
| Feature | URL pattern | Behavior |
|---|---|---|
| Translation on demand | ?lang=es, ?lang=en, ?lang=pt | LLM translates content server-side, cached at the edge. Important for LATAM agencies serving multi-country clients. |
| Re-explain a section | ?explain=section-3 | LLM rewrites a specific section in simpler terms. Lowers the “I don’t understand the technical proposal” barrier. |
| Executive summary mode | ?mode=executive | LLM generates a 1-paragraph TL;DR + 3-bullet decision summary at top of doc. Server-rendered, cached. |
| Inline Q&A | <ChatAboutDoc> MDX component (V2.1) | Agent answers grounded in the doc — no hallucinated context outside this URL. |
| Auto-tags + auto-summary on publish | Server-side at /publish | LLM generates tags (validated against vocabulary), summary frontmatter field, reading_time. Author can override. |
| Suggested improvements post-publish | Sent via email digest (weekly) | Based on analytics: “this doc was read 3 times by client@acme.com but they didn’t approve. Consider clarifying section 4.” |
| Smart redirect for old slugs | LLM-aided fuzzy match | Old URL → suggest closest current doc. |
Privacy gate: LLM ops on client and team roles must use models with no-train data agreements (Claude Enterprise, Azure OpenAI). Public docs can use cheaper models. Per-tenant model selection in V3.
V2 — Workflow Loop (the actual moat)
Premise: The PRD’s claim that “moat is workflow, not tech” requires features that are workflow-shaped, not document-shaped.
- Comments & approval — client can comment on a doc. Team gets notified. “Approve” button for deliverables that need sign-off. Records who approved, when, from which CF Access email. Webhook fires to the publishing agent on approve/comment — closes the loop. Agent can read feedback and trigger republish autonomously.
- Version diff view — visual inline diff between versions (word-level, Google Docs “suggesting” mode — not git-style). Diff selector dropdown: “Compare v2 ↔ v3” or “Show changes since last viewed.” Pairs with comments — review the diff, then approve.
- Agent-to-agent handoff — agent A drafts a deliverable, hands off to specialist agent B with a scoped token (e.g., legal-reviewer-agent, brand-reviewer-agent), then to publishing agent C. Each stage logged.
- Re-read frequency tracking — Qwilr-style insight surfaced in /team/analytics: “client@acme.com read your proposal 3 times in 24h before signing.” Strongest signal for sales engagement.
- Section-level analytics — heatmap per doc (which sections get read, which get skipped, which get re-read), scroll depth, time-per-section. Surfaced in the dashboard.
- AI vs human read differentiation — track UA + headers to know when an agent (Claude/Cursor/ChatGPT) read a doc vs a human. “Your client’s agent read this doc as context — they’re acting on it” is a separate signal from “your client read this.”
- Deliverable approval webhook — third-party integration: on approve, fire a webhook to the client’s CRM (HubSpot, Pipedrive), accounting system, or back to the publishing agent.
- White-label — client’s own branding on their space. Custom domain: docs.acmecorp.com → their project space. Branded header/footer/colors. Enterprise tier.
- Web editor — Monaco/CodeMirror with live preview + frontmatter form. Lower priority than agent surface — most authors will be agents, not humans.
- LaTeX compilation — /publish --type latex compiles .tex → branded PDF, uploads to platform. Niche but unique. Maintained.
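The word-level inline diff called for in “Version diff view” has a small core; a sketch using Python’s difflib, where the `<del>`/`<ins>` markup is an assumption about how the renderer would style changes for non-technical clients:

```python
from difflib import SequenceMatcher

def word_diff(old: str, new: str) -> str:
    """Word-level inline diff (Google-Docs-style, not git-style).
    Emits <del>/<ins> spans a client-facing page could style."""
    a, b = old.split(), new.split()
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op in ("replace", "delete"):
            out.append("<del>" + " ".join(a[i1:i2]) + "</del>")
        if op in ("replace", "insert"):
            out.append("<ins>" + " ".join(b[j1:j2]) + "</ins>")
        if op == "equal":
            out.extend(a[i1:i2])
    return " ".join(out)

print(word_diff("delivery in three weeks", "delivery in four weeks"))
# delivery in <del>three</del> <ins>four</ins> weeks
```

Because content lives in git, the two versions come free from storage; only this rendering layer is new work.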
V3 — Distribution & Open Source Track
Premise (Andre’s call, 2026-05-03): OSS positioning is the right play. Resolves three problems simultaneously: differentiation vs closed competitors (GitBook/Mintlify), distribution via npm/MCP/well-known surfaces, and defense against “Anthropic ships Artifacts-as-deliverables” risk.
Model — Vercel/Next.js, Cal.com, Sentry, PostHog pattern
| Layer | License | What it is |
|---|---|---|
| @cofoundy/docs-engine | MIT / Apache-2 | Core: MDX renderer, frontmatter validation, doc-component library, publish CLI, llms.txt generation, ?mode=agent handler, OpenAPI exporter, .well-known generators. Ships as npx @cofoundy/docs init. |
| Self-host instructions | OSS | Anyone can host their own Cofoundy Docs instance on Cloudflare/Vercel/Render. Bring own auth, own DB. |
| docs.cofoundy.com (Cofoundy Cloud) | Hosted SaaS | Multi-tenant hosting, RBAC, view tracking, custom domains, approval workflows, analytics, Resend email, agent-to-agent handoff, MCP server. Per-workspace billing. |
| @cofoundy/docs-mcp-server | OSS | Drop-in MCP server. Self-host or use ours. |
Distribution channels
- npx @cofoundy/docs init — scaffold a docs site (OSS, self-hosted)
- npx @cofoundy/docs-claim — claim a 24h anonymous publish to your account
- MCP server discovery via .well-known/mcp.json
- Hosted skill at npx @cofoundy/docs install (installs the /publish skill in any agent)
- Storybook at ui.cofoundy.dev/docs (visual catalog of MDX components)
- GitHub repo as community surface — issues drive roadmap
Anonymous publish + claim flow (here.now-inspired)
- Agent publishes with no auth → 24h URL on *.preview.docs.cofoundy.com
- URL emailed to anyone or shared as a link
- Recipient (or original author) can “claim” it: enter email → OTP → preview URL converts to a permanent URL under their workspace
- Lowers activation friction dramatically. Top of funnel.
Pricing model V3 (per-deliverable + workspace)
Reframe from per-workspace SaaS pricing ($29-99/mo) to:
| Tier | Price | Includes |
|---|---|---|
| OSS / Self-host | Free | Full engine, no Cofoundy Cloud features |
| Free Cloud | Free | 1 workspace, 5 deliverables/mo, *.docs.cofoundy.com subdomain, view tracking |
| Pro | $29/mo | 1 workspace, 50 deliverables/mo, custom domain, branded email, comments + approval |
| Agency | $99/mo | 3 workspaces, unlimited deliverables, white-label, agent-to-agent handoff, MCP server, SSO |
| Enterprise | Custom | Multi-tenant for re-sellers, SLA, dedicated support |
Per-deliverable metering aligns price to value (each deliverable = client touchpoint). Custom domain + white-label = hard upgrade trigger.
Technical Architecture (live)
```
Claude Code ──/publish skill──► docs-ai repo (Astro 6 + content collections)
      │                               │
      │                         git push to main
      │                               │
      │                  GitHub Actions auto-deploy
      │                               │
      │                     ┌─────────┴─────────┐
      │                     │                   │
      │                Astro build       Pages Functions
      │               (static HTML)      (Workers on CF)
      │                     │                   │
      │             Cloudflare Pages      ┌─────┤
      │                     │             │     │
      │            docs.cofoundy.dev      │  /_api/track
      │                     │             │  /_api/views
      │                     ▼             │
      │              _middleware.ts ◄─────┘
      │              (authorization)
      │                     │
      │          ┌──────────┼──────────┐
      │          │          │          │
      │       /public    /team/*    /client/*
      │       (open)   (CF Access) (CF Access)
      │                     │          │
      │                     ▼          ▼
      │              D1: permissions table
      │              (email, role, project)
      │                     │
      │              admin  → sees all
      │              team   → sees team + all clients
      │              client → sees only their project
      │
      └──► grant-access.sh ──► D1 insert + CF Access sync + Resend email
```
Stack (V1, current): Astro 6, Shiki, Mermaid.js, pdf.js, Cloudflare Pages + D1 + Workers, GitHub Actions, Resend.
Stack (V2.0 target, committed): Next.js 15+ (App Router, RSC), MDX, @cofoundy/ui design system, react-pdf, Mermaid wrapper, Cloudflare Pages (Next-on-Pages) + D1 + Workers, GitHub Actions, Resend, MCP server, Cloudflare R2 (asset storage for V3 multi-tenant).
Target architecture (V2.0 + V1.1 + V2.1)
```
┌─────────────── AGENT-NATIVE SURFACE (V1.1) ────────────────┐
│                                                            │
│  /llms.txt                      /openapi.json              │
│  /llms-full.txt                 /.well-known/agent.json    │
│  /{project}/llms.txt            /.well-known/ai-plugin.json│
│  /{project}/{slug}?mode=agent   /.well-known/mcp.json      │
│  mcp.docs.cofoundy.com (MCP server)                        │
│  npx @cofoundy/docs install (hosted skill)                 │
└────────────────────────────┬───────────────────────────────┘
                             │
                             ▼
Claude Code / Cursor / Codex /           Next.js 15 (App Router)
ChatGPT / Hermes / any-agent ─────────►  ├─ /[project]/[slug] (SSG, MDX rendered)
  │                                      ├─ /team/[project]/[slug] (SSR + middleware)
  │  /publish skill OR MCP               ├─ /client/[project]/[slug] (SSR + middleware)
  │  OR REST API                         ├─ /api/track (analytics beacon)
  │                                      ├─ /api/views (analytics dashboard)
  │                                      ├─ /api/publish (REST publish endpoint)
  │                                      ├─ /api/explain (AI re-explain a section)
  │                                      ├─ /api/translate (AI translate to lang=)
  │                                      ├─ /api/summary (AI executive summary)
  │                                      └─ /api/chat (AI Q&A grounded in doc)
  │                                              │
  │                                              ▼
  │                                  @cofoundy/ui (transpiled)
  │                                  ├─ analytics: StatCard, BarChart, Funnel, Heatmap
  │                                  ├─ docs: PersonalNote, MetadataCard, ScopeList,
  │                                  │        Callout, NextStepCallout, ApprovalBlock
  │                                  ├─ chat: ChatWidgetFloating (askAbout=docId)
  │                                  └─ ui: Button, Badge, Avatar, Logo, Sidebar
  │                                              │
  │                                              ▼
  │                              Cloudflare Pages (Next-on-Pages)
  │                                              │
  │                       ┌──────────────────────┼─────────────────┐
  │                       │                      │                 │
  │                D1: permissions         D1: analytics     R2: PDFs/assets
  │                D1: comments            D1: read_events   R2: workspace files
  │                D1: approvals           D1: ai_cache      (V3 multi-tenant)
  │                                              │
  │    webhooks ◄───────────────────────────────┤
  └─── on approve/comment                       │
       fire to publisher                        ▼
                                        Cloudflare Access
                                   (Auth: OTP, OAuth, SSO V3)
```
V3 layer (multi-tenant): Workspace router (subdomain or custom domain → workspace ID), per-workspace D1 partitioning OR Durable Objects, R2 prefix isolation, per-workspace Resend domain, billing via Stripe.
Risks (revised May 2026)
| Risk | Severity | Mitigation |
|---|---|---|
| Feature-swallowed by GitBook — they add an AI agent publish endpoint (weeks of work for them) and our core feature becomes a checkbox | HIGH | Build agency workflow depth (view tracking, approvals, client spaces, agent-to-agent handoff) that GitBook won’t prioritize. They serve dev docs, not agency deliverables. |
| Anthropic ships “Artifacts as deliverables” — Claude.ai already has Artifacts. A “branded artifacts with custom domain + access control” upgrade is one sprint for them. | HIGH | OSS positioning (V3) defangs this — if Cofoundy Docs is the open-source standard, Anthropic shipping a hosted version becomes a complement, not a kill. Same dynamic as Vercel surviving Next.js being Anthropic-agnostic. |
| Browser-native AI rendering — ChatGPT Atlas + Arc render llms.txt and structured data natively in the browser-as-agent UI. The “branded URL” stops being a differentiator if the browser-agent is the renderer. | MEDIUM | Compete on workflow, not rendering. Approval, comments, view tracking, custom domains, client identity are not browser features. Make ?mode=agent a first-class citizen so we work with browser-native rendering. |
| AI agents bypass us entirely — Claude Code can already generate full websites and deploy to Vercel in one loop. | MEDIUM | A branded site ≠ a professional deliverable portal with access control, analytics, versioning, approval. Value is workflow + identity surface, not rendering. |
| Standardization eats our protocol — llmstxt.org, MCP, OpenAI plugin spec, well-known agent surface. If we invent a private protocol, we get bypassed. | MEDIUM | Comply + extend. Treat llmstxt.org / MCP / OpenAPI / well-known as table stakes. Our extensions live in our domain (deliverables, approvals, agency workspaces), never in protocol. |
| Initial market too narrow — only 25K-70K firms globally fit the exact agency profile | MEDIUM | OSS distribution + per-deliverable freemium broadens top-of-funnel beyond agencies (solo consultants, internal teams, dev shops). The agency tier is the monetization wedge, not the entire market. |
| WordPress MCP — WordPress launched AI agent write in March 2026. 43% of the web. | LOW | WordPress is a CMS, not a deliverable portal. Different use case, different buyer. |
| Notion Sites expanding — 100M+ users, $600M ARR, adding custom domains | MEDIUM | Notion locks content in proprietary blocks. We keep content in git = zero vendor lock-in. Different philosophy, different user. |
| MCP commoditization — when every site exposes an MCP server, “agent-friendly” is no longer a feature. | LOW | Workflow depth + identity + branding move from differentiator to required. Same path as “responsive design” 2012 → 2018. Fine. |
| Core tech is commodity — replicable in 1-3 months | HIGH | Tech is not the moat. Agency workflow depth + agent ecosystem position + OSS distribution + design system integration is. Move fast — 6-12 month window. |
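The “make `?mode=agent` a first-class citizen” mitigation above can be made concrete with content negotiation: the same deliverable URL serves raw markdown to agents and the branded HTML shell to browsers. A minimal sketch, assuming a hypothetical handler shape (the function name, doc structure, and HTML wrapper are illustrative, not the real implementation):

```python
# Sketch: content negotiation for ?mode=agent. The same published URL
# answers two audiences: agents get raw markdown, browsers get HTML.
# All names here are illustrative assumptions, not the real handler.

def negotiate(query: dict, accept_header: str, doc: dict) -> tuple[str, str]:
    """Return (content_type, body) for a published doc."""
    wants_agent = (
        query.get("mode") == "agent"
        or "text/markdown" in accept_header
    )
    if wants_agent:
        # Agent path: serve the source markdown untouched.
        return ("text/markdown; charset=utf-8", doc["markdown"])
    # Browser path: wrap the rendered doc in the branded shell (placeholder).
    return ("text/html; charset=utf-8", f"<html><body>{doc['html']}</body></html>")

doc = {"markdown": "# Proposal\n\nScope...", "html": "<h1>Proposal</h1>"}
ctype, body = negotiate({"mode": "agent"}, "*/*", doc)
```

This is also what makes the browser-native-rendering risk survivable: a browser-as-agent that asks for markdown simply gets it.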
Open Questions
- Name — “DocsAI” is generic. Need something sharper. Candidates: Folio, Publi, Inkwell, Quill, Sendoff, Closing, Postmark (taken). Decide before the V3 OSS launch — domain availability matters.
- Positioning — Research says “DocSend for AI-native agencies” is more defensible than “Vercel for documents.” With the OSS track, a possible reframe: “the open-source deliverable layer for AI-native work.” Validate with agencies AND with the OSS community separately.
- Pricing — per-workspace vs per-deliverable — The V3 pricing table proposes a hybrid (workspace base + deliverable metering). Validate which dimension drives upgrades. Risk: per-deliverable feels punitive; per-workspace feels like an arbitrary cap. May need usage-based pricing with no hard cap.
- OSS license — MIT vs Apache-2 vs AGPL — MIT/Apache for maximum distribution; AGPL forces re-hosters to upstream improvements (Vercel chose MIT, Sentry moved to BSL after AGPL friction, Cal.com is on AGPL). Decide before the V3 launch.
- MCP-first vs API-first — Should the public surface be primarily MCP (agent-discoverable, native tool calls) or REST API (universal, language-agnostic)? Likely both, but resource ordering matters. MCP is younger; REST has 25 years of muscle memory.
- Anonymous publish flow — own subdomain or here.now as backend? — We could ship `*.preview.docs.cofoundy.com` ourselves, or proxy to here.now’s anonymous tier. The here.now route trades sovereignty for speed-to-ship.
- Multi-tenant isolation — D1 partitioning vs Durable Objects vs separate DBs per workspace — V3 architecture decision. D1 row-level isolation is cheapest; Durable Objects are best for per-tenant compute; separate DBs are cleanest but ops-heavy.
- AI model selection per tenant — Public docs can use cheap models. Client/team docs need no-train data agreements (Claude Enterprise, Azure OpenAI). Should this be tenant-configurable in the Pro/Agency tiers?
- File downloads beyond PDF — Should client spaces also serve raw files (zip of assets, brand packages)? This edges into here.now territory. Probably yes for V3 (workspace = also a file drive).
- Build vs. integrate (web editor) — Monaco/CodeMirror from scratch, or embed HackMD/CodiMD? Lower priority than the agent surface; most authors will be agents.
- Doc as software — versioning UX — Each publish = an immutable build (Vercel deploy mental model). Should clients see “v3 (current)” with “v2 (sept)” in a dropdown, or a Git-history view? Pairs with the V2 visual diff.
- Identity beyond email OTP — SSO (Google/Microsoft), GitHub, magic-link-via-WhatsApp (LATAM clients), wallet-based (web3 clients). Which to ship first depends on the client mix of the first 5 paying agencies.
- Browser extension / overlay — A Chrome/Atlas extension that detects “you’re on docs.cofoundy.dev” and shows agency-side tools (inline analytics, comment thread, approval shortcut). Speculative.
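For the multi-tenant isolation question above, the cheapest option (D1 row-level isolation) amounts to forcing every query through a helper that scopes by workspace. A minimal sketch using in-memory SQLite (D1 is SQLite-compatible); the schema, table name, and helper are assumptions for illustration, not the real data model:

```python
# Sketch of D1-style row-level tenant isolation: callers can only reach
# rows through a helper that injects workspace_id, so one workspace can
# never read another's docs. Schema and names are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (workspace_id TEXT, slug TEXT, body TEXT)")
db.executemany(
    "INSERT INTO docs VALUES (?, ?, ?)",
    [("acme", "proposal", "Acme scope"), ("globex", "proposal", "Globex scope")],
)

def docs_for(workspace_id: str) -> list[tuple]:
    # The workspace filter is mandatory: no code path issues unscoped SQL.
    return db.execute(
        "SELECT slug, body FROM docs WHERE workspace_id = ?", (workspace_id,)
    ).fetchall()

rows = docs_for("acme")
```

The trade-off named in the question holds: this is the cheapest option, but a single bug in the helper leaks across tenants, which is what Durable Objects or per-workspace DBs would structurally prevent.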
Strategic Decisions Log
2026-05-03 — V2.0 pull-forward + agent-native surface + OSS track
Context: During a session reviewing the @cofoundy/ui design system (audited as full tier-1 design system, not email-only as initially assumed) and benchmarking against here.now’s agent-discovery surface, three strategic calls were made by Andre (CEO).
| Decision | Rationale | Trade-off |
|---|---|---|
| Pull V2.0 (Next.js migration) forward — out of “if go after Week 4” conditional, into committed near-term work | Visual richness ceiling of Astro+plain-markdown blocks the validation premise. You cannot validate “documents that feel designed” with a stack that cannot consume the design system. The premise of Week 4 validation already presupposed a product that the V1 stack can’t deliver. | Eats Astro investment (PDF viewer, print CSS, Mermaid lightbox, scroll-spy TOC) — but /sprint parallel dispatch makes this a 1-day rebuild, not 1-2 weeks. Bundle size goes from ~0KB JS (Astro static) to ~150-200KB (React+Radix). Mitigated via SSG default + RSC for non-interactive sections. |
| Ship agent-native surface as V1.1 — llms.txt (root + scoped), ?mode=agent, .well-known/*, OpenAPI 3.1, MCP server, hosted skill via npx, structured error contract | An “AI-native” product that ships less agent-discovery surface than a generic file-host (here.now) is mispositioned. llmstxt.org is becoming standard; comply + extend, never invent parallel. | Adds scope before V2.0 stack work. Mitigated: most of these are static endpoints (llms.txt, well-known/*) generated at build time — additive, not architectural. |
| OSS track committed for V3 | Three problems solved at once: (1) differentiation vs GitBook/Mintlify (closed), (2) distribution via npm/MCP/well-known surfaces, (3) defense vs “Anthropic ships Artifacts-as-deliverables” risk — if Cofoundy Docs becomes the open standard, Anthropic shipping a hosted version becomes a complement. | OSS gives away the engine; monetization shifts to Cofoundy Cloud (hosting + RBAC + analytics + multi-tenant + email + auth + custom domains) — Vercel/Next.js, Cal.com, Sentry, PostHog model. License (MIT vs Apache vs AGPL) still open. |
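The “static endpoints generated at build time” mitigation in the table above could look roughly like this for `/llms.txt`: walk the published-doc index and emit the llmstxt.org shape (H1 title, blockquote summary, sections of described links). The doc-index structure and function name here are assumptions, not our actual build output; see llmstxt.org for the authoritative format:

```python
# Sketch: generate /llms.txt at build time from a published-doc index,
# following the llmstxt.org shape. The index structure is an assumption.

def build_llms_txt(site: str, title: str, summary: str, docs: list[dict]) -> str:
    lines = [f"# {title}", "", f"> {summary}", "", "## Docs"]
    for d in docs:
        # One link per published doc, with a one-line description.
        lines.append(f"- [{d['title']}]({site}{d['path']}): {d['desc']}")
    return "\n".join(lines) + "\n"

docs = [
    {"title": "Brand Guide", "path": "/client/acme/brand.md", "desc": "Visual identity"},
]
text = build_llms_txt(
    "https://docs.cofoundy.dev", "Cofoundy Docs",
    "AI-published client deliverables.", docs,
)
```

Because the output is a plain text file derived from the doc index, it can be written once per deploy, which is why the table calls it additive rather than architectural.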
Decision criteria not used (rejected): “Wait for Week 4 go/no-go before any V2 work.” Andre’s call: validation requires the product. Ship the product, validate concurrently.
Anti-decisions (explicitly NOT doing):
- Do NOT build raw file hosting beyond docs (here.now territory). Workspace files (V3) is for deliverable assets, not general storage.
- Do NOT invent a parallel agent protocol. Comply with llmstxt.org / well-known / OpenAPI / MCP and extend in our domain (deliverables).
- Do NOT compete on rendering performance. Compete on workflow + identity + agent-native distribution.
- Do NOT build a Notion-style editor before the agent surface and design system migration are done. Agents are the primary author; humans editing is V2 polish.
Validation Framework
Status (May 2026): Week 1 complete. Validation now runs concurrent with V1.1 + V2.0 build, not before it (per Strategic Decisions Log entry above). Weeks 2-4 below remain as the customer signal capture protocol; their gating role on V2 work is removed.
Week 1: Build MVP + V1 (2026-03-22 → 2026-03-25) — COMPLETE
- Astro site + `/publish` skill
- Cloudflare Pages deploy (`docs.cofoundy.dev`, custom domain, SSL)
- Cofoundy branding (logo, dark/light theme, brand tokens)
- GitHub Actions auto-deploy on push to main
- Path-based role routing (`/team/*`, `/client/*`, public at root)
- Cloudflare Access (Zero Trust) — team + client policies
- D1 permissions middleware (PoLP: admin > team > client per-project)
- `grant-access.sh` — one script: D1 insert + CF Access sync + Resend email
- View analytics (D1 + Pages Functions + tracking beacon + dashboard)
- PDF document viewer (pdf.js — toolbar, fullscreen, keyboard nav)
- `/publish --notify <email>` — autonomous e2e: publish + grant access + notify
- 15 real docs published across 5 projects (incl. 3 client demo deliverables)
- Unified Cloudflare API token provisioned (settings.json, GitHub, .env.required)
- Track: how often does the team use it vs. going to the repo directly?
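The PoLP ordering in the permissions middleware above (admin > team > client, with client access scoped per project) reduces to a simple rank check. The role names match the PRD, but the function and deny-by-default behavior are an illustrative sketch, not the shipped middleware:

```python
# Sketch of the PoLP check behind the D1 permissions middleware:
# admin > team > client, with client access granted per project.
# Illustrative only; the real check lives in Pages Functions + D1.

RANK = {"admin": 3, "team": 2, "client": 1}

def can_view(role: str, granted_projects: set[str], project: str) -> bool:
    rank = RANK.get(role, 0)
    if rank >= RANK["team"]:          # admin and team see every project
        return True
    if rank == RANK["client"]:        # clients only see granted projects
        return project in granted_projects
    return False                      # unknown role: deny by default
```

Keeping the hierarchy in one table means adding a future role (e.g. a read-only reviewer) is a one-line change rather than a new branch per route.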
Week 2: First Client Test (starting 2026-03-25)
- Pick 1 active client
- `/publish` their deliverables with `--notify client@email.com`
- Client receives email → clicks link → OTP login → sees doc
- Track in `/team/analytics`: did they open it? When? How long?
- Track: do they ask questions about it?
Week 3: Market Signal
- Talk to 5 other agencies/consultancies
- Ask: “How do you share deliverables with clients today?”
- Ask: “If Claude Code could auto-publish docs to a branded page with a link, would you use it?”
- Track: excitement level (1-10), current pain, willingness to pay
Week 4: Go/No-Go Decision
- If internal usage is high + client liked it + 3/5 agencies show interest → Go: build V1, pursue as product
- If internal usage is medium + client indifferent → Pivot: keep as internal tool, don’t productize
- If nobody uses it → Kill: the problem isn’t real enough
References & Research Sources
Future agents: this section lists the URLs/files behind the analysis in this PRD. If a referenced site has changed, refetch it and update the relevant section. Do not assume the synthesis below is permanent — it is dated 2026-05-03.
Live external sources to refetch when revisiting
| Source | URL / Path | What we extracted |
|---|---|---|
| here.now homepage | https://here.now | “Web hosting and storage for agents.” Positioning, hero copy, agent-icon grid. Treat as primitive layer, not competitor. |
| here.now/llms.txt | https://here.now/llms.txt | The reference implementation of agent-discovery surface for V1.1. Read it before proposing changes to our /llms.txt. |
| here.now/docs | https://here.now/docs | API surface (presigned upload, slug-based site CRUD, scoped Drive tokens). |
| here.now/pricing.md | https://here.now/pricing.md | Free/Hobby/Developer tiers + anonymous flow. Reference for V3 anonymous-publish + claim. |
| llmstxt.org spec | https://llmstxt.org | Spec for /llms.txt and /llms-full.txt. Comply, don’t reinvent. |
| MCP spec | https://modelcontextprotocol.io | For mcp.docs.cofoundy.com server design. |
| OpenAPI 3.1 | https://www.openapis.org/specification | For /openapi.json. |
| IETF Well-Known URIs | RFC 8615 | For /.well-known/* endpoints. |
Internal references (read before V2.0 sprint)
| Source | Path | What it tells you |
|---|---|---|
| @cofoundy/ui package | ~/cofoundy/packages/ui/ | The design system to consume. 28 UI primitives + 17 analytics + 13 chat + 12 messaging + 16 email components. Read packages/ui/CLAUDE.md first, then browse src/components/. |
| @cofoundy/ui Storybook | https://ui.cofoundy.dev | Live visual catalog (71 stories). Single best place to see component vocabulary. |
| /publish skill | ~/cofoundy/plugins/cofoundy-toolkit/skills/publish/ | Current publishing pipeline. SKILL.md + scripts/publish.py. Tag taxonomy lives here. |
| Mail skill (founders) | ~/cofoundy/plugins/cofoundy-founders/skills/mail/ | Reference for email-flavored visual vocabulary. PersonalNote, ScopeList, NextStepCallout, InfoBox patterns to port to web. |
| Workspace setup | ~/cofoundy/plugins/cofoundy-toolkit/commands/workspace-setup.md | How team members get the codebase. Update when onboarding flow changes. |
Why this section exists
A new agent picking up this PRD in a fresh session does not have the conversation history that produced it. They CAN refetch here.now and audit @cofoundy/ui — but they don’t necessarily know to. This section is the breadcrumb trail. Updates to V1.1, V2.1, or V3 features should refetch here.now/llms.txt first to check whether their patterns have evolved.
Why This Could Be a Startup
- Timing — AI coding agent adoption is growing at 25-28% CAGR. Cursor went $0→$2B ARR in 18 months. The volume of AI-generated markdown is exploding, and the “last mile” to non-technical stakeholders will only get worse.
- Wedge — Start with agencies (Cofoundy = customer zero). The sharpest pain: multiple clients, multiple projects, professional output matters. No markdown-native tool serves this workflow today.
- Counter-positioning — Content stays in git, not in proprietary blocks (vs Notion) or a managed editor (vs GitBook). Zero vendor lock-in on content. For teams already working in IDEs with AI agents, this aligns with existing workflows.
- Moat is workflow, not tech — Tech is commodity (1-3 months to replicate). The moat is agency workflow depth: per-client spaces, view tracking, approval flows, white-label — plus AI agent ecosystem positioning. Once `/publish` targets this platform from every major agent, switching costs are real.
- Revenue — Free for public docs. $29/mo per client workspace. $99/mo for white-label + custom domains. Benchmarked against GitBook ($65/site), DocSend ($10-65/user), Mintlify ($300/mo).
- Expansion — Agency deliverables → consulting firms → legal (contracts) → marketing (content) → any team that uses AI to write for external stakeholders.
What the research validated
- Mintlify ($10M ARR, 150% NRR, 10x YoY) proves “markdown in → beautiful pages out” works as a business
- 5+ Show HN projects in 2025 solving this exact problem — demand is real, no winner yet
- A consultant (Dale Rogers) reported reducing a 20-document engagement from weeks to days using markdown→deliverable automation — the exact workflow we enable
What the research warned
- 6-12 month window before incumbents close the gap
- “Thin wrapper” risk — VCs avoid products that sit as a rendering layer without owning content or distribution
- Must build agency workflow depth fast to be more than a rendering layer