Microapp is an AI-native company. That doesn't just mean we build with agents on the inside — it means every Microapp on the outside should be reachable by an agent as easily as by a human. A unit converter that only a person can open is half a tool. The same converter, exposed as a function an agent can call, is twice the tool.
The way we get there isn't by adding a server in front of the website. It's by changing the architecture: every Microapp is built from a single pure function (a tool engine) that the React widget and the MCP server both consume. One file, one calculation, two surfaces. All ~280 microapps are agent-callable by construction — not because of a manifest someone has to maintain.
If an agent can't call it, it isn't done. The website is the human interface; the API is the agent interface. Both are first-class — and both come from the same engine.
Why this matters
Within the next year or two, a large share of "tool use" online won't happen in browsers. It'll happen inside agent loops — Claude Code, Cursor, Continue, ChatGPT, plus the workflow tools that wrap them (Zapier, n8n, LangChain). Tools that aren't reachable by those agents stop existing for that audience.
Microapp's bet is to meet agents where they are. Not because we abandon humans — humans still come to microapp.io, click around, get the badge, become Members. But the same tools also surface inside whatever environment the agent is running in. Two distribution surfaces, one product.
The architecture, not a migration
The earlier draft of this chapter described a four-phase migration: ship 20 tools to MCP, then OpenAPI, then GPT, then "eventually" the other 260. That framing was wrong. Agent-callability isn't a feature we add — it's how we build.
The concrete commitment: every Microapp's calculation lives in a pure function at `src/lib/tool-engines/<slug>.ts`. The function exports a Zod input schema, a handler, and a short description. From that one file:
→ The React widget imports the handler. The widget is the human's chrome around the engine.
→ The MCP server imports the same handler. The MCP server is the agent's chrome around the engine.
→ The OpenAPI spec generates from the same Zod schema.
→ The GPT Store entry consumes the OpenAPI.
Add a new Microapp = drop one engine file. It's on the web AND in MCP AND in OpenAPI AND in the GPT, instantly. No second manifest to keep in sync, no drift, no "remember to also add it to the agent surface" step.
The full convention — file layout, exports, schema rules, a worked example — lives in the next chapter, Tool Engines.
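The full convention is next door, but the shape is easy to sketch now. A hypothetical engine file — names are illustrative, and where the real convention exports a Zod schema, this dependency-free sketch hand-rolls a `parseInput` instead:

```typescript
// Hypothetical src/lib/tool-engines/percentage-calculator.ts (illustrative).
// The real convention exports a Zod schema; this sketch hand-rolls validation
// to stay dependency-free.

export interface PercentageInput {
  value: number;   // the base value
  percent: number; // the percentage to take of it
}

export const description =
  "Calculate percent % of value. Returns a structured result, not prose.";

// Validate untrusted input; return a teaching error instead of throwing.
export function parseInput(
  raw: unknown,
): PercentageInput | { error: string } {
  const o = raw as Record<string, unknown>;
  if (typeof o?.value !== "number" || typeof o?.percent !== "number") {
    return {
      error:
        "value and percent must both be numbers; got " + JSON.stringify(raw),
    };
  }
  return { value: o.value, percent: o.percent };
}

// The pure handler: same inputs → same output, no hidden state.
export function handler(input: PercentageInput) {
  return { result: (input.value * input.percent) / 100, unit: "number" };
}
```

The React widget and the MCP server would both import `handler` from this one file — that is the whole trick.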
The phased build (of the surfaces, not the catalog)
Engines are the catalog — they're written as we build microapps. The phases below are about the surfaces that expose those engines to agents. Each new surface multiplies reach without touching individual tools.
- MCP server — generic adapter, 20 engines extracted. Planned · soft launch.
One Cloudflare Worker: ~30 lines of code that `import.meta.glob`s every `tool-engines/*.ts` file and exposes it via MCP. Initial launch covers the 20 highest-value tools (the engines we extract first from existing widgets). Free, no auth, attribution returned in `_meta`. Reaches Claude, Claude Code, Cursor, Continue, any MCP-compatible client.
- OpenAPI mirror — auto-generated from the same engines. After Phase 1 validates.
A build step walks every engine file, converts its Zod schema to JSON Schema, and emits an OpenAPI 3.1 spec. Unlocks GPT Actions, Zapier, n8n, LangChain tools — anything that reads OpenAPI. Same backend; one generator script.
- Custom GPT in the GPT Store. After Phase 2.
Wraps the OpenAPI from Phase 2. Distributes to ChatGPT users via the public store. Useful, but downstream — a consequence of Phase 1+2, not a separate engineering effort.
- Every new Microapp is agent-callable by construction. Convention from Day 1 of MCP launch.
Bob's build skill (and every future contributor) writes the engine file as part of building the tool — same way they write the React widget today. No "add this to MCP" step exists, because the MCP server reads the engine directly. Existing widgets migrate opportunistically when touched; new ones are agent-ready by default.
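Phase 1's generic adapter is small enough to sketch. Assuming a registry keyed by slug (the real Worker would build it with a bundler glob over `tool-engines/*.ts`; the two engines here are hand-registered stand-ins), `tools/list` and `tools/call` reduce to:

```typescript
// Core loop of the generic MCP adapter (illustrative). The real Worker
// populates `engines` via import.meta.glob; here two fake engines stand in.

type Engine = {
  description: string;
  handler: (input: unknown) => unknown;
};

const engines: Record<string, Engine> = {
  "percentage-calculator": {
    description: "Calculate a percentage of a value.",
    handler: (i: any) => ({ result: (i.value * i.percent) / 100 }),
  },
  "character-counter": {
    description: "Count characters in a string.",
    handler: (i: any) => ({ result: String(i.text).length }),
  },
};

// tools/list: advertise every engine — no per-tool manifest to maintain.
export function listTools() {
  return Object.entries(engines).map(([name, e]) => ({
    name,
    description: e.description,
  }));
}

// tools/call: route by tool name inside the request body.
export function callTool(name: string, input: unknown) {
  const engine = engines[name];
  if (!engine) {
    return { error: `unknown tool '${name}'; call tools/list for valid names` };
  }
  const result = engine.handler(input) as Record<string, unknown>;
  return { ...result, _meta: { source: "microapp.io" } };
}
```

Adding engine number 21 means adding one file — this adapter never changes.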
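Phase 2's generator is a similar walk over the same registry. A sketch, assuming each engine already carries a plain JSON Schema (the real build step would convert the engine's Zod schema first):

```typescript
// Sketch of the Phase 2 generator script (illustrative). One fake engine
// stands in for the full registry.

type EngineMeta = { description: string; inputSchema: object };

const engines: Record<string, EngineMeta> = {
  "tip-calculator": {
    description: "Calculate a tip amount.",
    inputSchema: {
      type: "object",
      properties: {
        bill: { type: "number", description: "Bill total" },
        percent: { type: "number", description: "Tip percentage" },
      },
      required: ["bill", "percent"],
    },
  },
};

// Emit one OpenAPI 3.1 document covering every engine.
export function generateOpenApi() {
  const paths: Record<string, unknown> = {};
  for (const [slug, meta] of Object.entries(engines)) {
    paths[`/api/tools/${slug}`] = {
      post: {
        operationId: slug.replace(/-/g, "_"),
        description: meta.description,
        requestBody: {
          required: true,
          content: { "application/json": { schema: meta.inputSchema } },
        },
        responses: { "200": { description: "Structured JSON result" } },
      },
    };
  }
  return {
    openapi: "3.1.0",
    info: { title: "Microapp Tools", version: "1.0.0" },
    servers: [{ url: "https://microapp.io" }],
    paths,
  };
}
```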
How agents find us
The MCP protocol doesn't define a central registry — discovery is plural. We ship to each surface separately, all pointing back to the same engines. The brand decision underneath: one domain, path-based, no subdomain. The API is part of microapp.io, not a separate API product.
URL layout
| URL | Audience | Notes |
|---|---|---|
| microapp.io/mcp | MCP clients | One endpoint, every tool. The protocol routes to the right handler by tool name inside the request body. Streamable HTTP transport. Clients add this URL once and immediately see all engines via tools/list. |
| microapp.io/api/tools/<slug> | REST callers | One endpoint per tool, REST convention. Each endpoint validates input with the engine's Zod schema and returns the same structured response the MCP server emits. |
| microapp.io/openapi.json | GPT Actions, Zapier, n8n, LangChain | Auto-generated from the engines at build time. One spec, all tools. Consumers pick what they need. |
| microapp.io/.well-known/mcp-server.json | Auto-discovery (future) | Self-advertising metadata so any client that learns about microapp.io can discover the MCP endpoint without manual config. |
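The whole layout above can be served by one route function in the Worker. A sketch with illustrative names (handler wiring omitted):

```typescript
// Path-based routing for the single Worker (illustrative).

export type Route =
  | { kind: "mcp" }
  | { kind: "rest"; slug: string }
  | { kind: "openapi" }
  | { kind: "well-known" }
  | { kind: "not-found" };

export function route(pathname: string): Route {
  if (pathname === "/mcp") return { kind: "mcp" };
  if (pathname === "/openapi.json") return { kind: "openapi" };
  if (pathname === "/.well-known/mcp-server.json") return { kind: "well-known" };
  // REST convention: one endpoint per tool, slug in the path.
  const m = pathname.match(/^\/api\/tools\/([a-z0-9-]+)$/);
  if (m) return { kind: "rest", slug: m[1] };
  return { kind: "not-found" };
}
```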
Why path-based, not mcp.microapp.io subdomain. Subdomain reads as "separate API product." Path reads as "Microapp has one home, agents are first-class here." We're an AI-native company — agent access isn't a side product, it's the same product. Same brand, same SSL cert, same DNS, less ops surface. Only spin up a subdomain if the API ever becomes its own line of business.
Discovery surfaces
Four places the world finds out the endpoint exists. Each is a separate motion at launch time; together they cover the whole agent ecosystem.
- microapp.io/agents — the canonical install page
One landing page with click-to-copy install instructions for each major client (Claude Desktop, Cursor, Continue, Claude Code's `claude mcp add microapp https://microapp.io/mcp`, Zed). One paragraph explaining what's available. The single URL we point people at from social, blog, and brand surfaces.
- Public MCP registries
Submit Microapp to Anthropic's mcp-registry, Smithery, Glama, MCP.so, and any equivalent that exists at launch time. Each submission is a 5-minute form: name, description, URL, category. These directories are how agents browse for tools today the way users browsed for apps in 2010.
- .well-known/mcp-server.json
The emerging convention for self-advertising MCP servers. Future clients that "scan" a domain for an MCP endpoint find ours immediately. Costs nothing to ship; future-proofs us when discovery becomes automatic.
- GPT Store
Wraps the OpenAPI from Phase 2 as a Custom GPT. Distributes to ChatGPT users via the public store. Different ecosystem, same engines underneath.
Auth + rate limits
Anonymous calls work. No API key required to use Microapp tools from an agent. Rate-limited at the Cloudflare edge (~60 req/min per IP), abusive ASNs blocked at the WAF before they reach the Worker. Every response carries a small _meta: { source: "microapp.io" } field — agents that want to credit, do.
Members get a key. Pass Authorization: Bearer <api_key> for 10× higher rate limits and the option to suppress attribution. Keys are tied to the user_profiles row that already exists. New, free Member benefit — no separate "Microapp for Agents" plan needed until volume justifies one.
Cost ceiling for the operator. Cloudflare Workers free tier covers 100k requests/day. At soft-launch volumes (low thousands/day) the cost is zero; at meaningful scale (single-digit millions/day) it lands under $20/month. Validate demand cheaply, scale only when traffic justifies it.
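The key-to-bucket decision in the two paragraphs above could be sketched like this. Assumptions: the lookup against `user_profiles` is faked with an in-memory set, and the 60/600 limits mirror the WAF numbers in the gateway section:

```typescript
// Bucket selection from the Authorization header (illustrative; the real
// lookup hits the user_profiles table, not a Set).

const validKeys = new Set(["demo-member-key"]);

export function selectBucket(
  authHeader: string | null,
):
  | { bucket: "anonymous"; limitPerMin: number }
  | { bucket: "member"; limitPerMin: number }
  | { bucket: "rejected"; status: number; message: string } {
  if (!authHeader) return { bucket: "anonymous", limitPerMin: 60 };
  const key = authHeader.replace(/^Bearer\s+/i, "");
  if (validKeys.has(key)) return { bucket: "member", limitPerMin: 600 };
  // Never silently downgrade — an agent debugging a bad key needs to know.
  return {
    bucket: "rejected",
    status: 401,
    message:
      "API key not found or revoked; visit microapp.io/account to manage keys",
  };
}
```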
The gateway — abuse protection
Every public free endpoint attracts noise. The gateway is the thin layer in front of every engine that says no to noise before the engine sees it. Designed to ship in one day — small enough not to delay Phase 1, complete enough to keep us out of trouble on day one.
Three layers. Each one solves a different attack pattern; together they form the full perimeter.
Layer 1 — Cloudflare edge (configuration, no code)
- Bot Fight Mode on.
Cloudflare's built-in bot detection catches known scrapers and signature-matched bots before they reach the Worker. Free, one toggle in the dashboard.
- WAF rate-limit rules.
60 req/min per IP for anonymous, 600 req/min for requests carrying a valid `Authorization` header. Evaluated at the edge — abuse never hits the Worker (or our budget).
- Geo / ASN blocks.
Short list of worst-offender ASNs (Tor exit nodes, known abuse networks) blocked at the WAF. Revisited monthly. Empty by default; populated reactively as patterns emerge.
Layer 2 — Worker middleware (the actual gatekeeper)
- Body size cap: 64 KB.
Any tool call with input larger than 64 KB returns a 413 with a teaching error: "input too large; max 64 KB. Most tool engines work on small inputs — if you genuinely need to process megabytes, this isn't the right tool." Catches resource-exhaustion attempts cheaply.
- Per-engine timeout: 5 seconds.
Deterministic engines run in well under 50 ms. The 5 s cap is a hard ceiling for pathological inputs (e.g., a regex catastrophic-backtracking attempt). Exceeded calls return a 504 with a clear timeout message.
- Concurrency cap: 10 simultaneous per IP.
Prevents one IP from queueing a massive parallel workload. Tracked in a Workers KV counter; decremented on response complete. Members with a key get a higher concurrency ceiling.
- Auth parse + bucket selection.
`Authorization: Bearer <key>` → look up Member, switch to Member rate bucket. No header → anonymous bucket. Invalid key → 401 with a clear message ("API key not found or revoked; visit microapp.io/account to manage keys"). Never silently downgrade — an agent debugging a bad key needs to know.
- Honeypot tools.
Register a fake engine named `internal-do-not-call` in the MCP manifest. Any IP that actually calls it gets banned for 24 h at the WAF level. Catches scrapers that enumerate `tools/list` and try every tool indiscriminately — humans and well-behaved agents never touch it.
- Event logging.
Every blocked request writes a row to the existing `events` table with `event_type: "MCP_BLOCKED"`, IP fingerprint, reason code, timestamp. A daily aggregate query surfaces emerging patterns — and gives us evidence if we ever need to escalate (e.g., file abuse reports with hosting providers).
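Two of the middleware checks above are easy to sketch (illustrative; the real Worker would also track per-IP concurrency in Workers KV, which is omitted here):

```typescript
// Sketch of two Layer-2 checks (illustrative).

const MAX_BODY_BYTES = 64 * 1024;

// Body size cap: 413 with a teaching error, before the engine ever runs.
export function checkBody(
  rawBody: string,
): { ok: true } | { ok: false; status: number; error: string } {
  if (new TextEncoder().encode(rawBody).length > MAX_BODY_BYTES) {
    return {
      ok: false,
      status: 413,
      error:
        "input too large; max 64 KB. Most tool engines work on small inputs — " +
        "if you genuinely need to process megabytes, this isn't the right tool.",
    };
  }
  return { ok: true };
}

// Per-engine timeout: hard ceiling via Promise.race (the caller maps the
// rejection to a 504 response).
export async function withTimeout<T>(work: Promise<T>, ms = 5000): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const deadline = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error("504: engine timed out after " + ms + " ms")),
      ms,
    );
  });
  try {
    return await Promise.race([work, deadline]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```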
Layer 3 — Response shaping
Every successful response carries a small _meta block:
- `source: "microapp.io"` — attribution string. Agents that want to credit, do.
- `rate_limit_remaining` — how many calls left in the current bucket. Well-behaved agents back off when this gets low.
- `reset_at` — ISO timestamp when the bucket refills.
- `learn_more` — link back to the human-interface URL of the tool. Free brand surface.
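Assembling that block is one line per field. A sketch (the `learn_more` URL shape — slug appended to the bare domain — is an assumption):

```typescript
// Response shaping: wrap an engine result with the _meta block (illustrative).

export function shapeResponse(
  result: Record<string, unknown>,
  slug: string,
  remaining: number, // calls left in the caller's rate bucket
  resetAt: Date,     // when the bucket refills
) {
  return {
    ...result,
    _meta: {
      source: "microapp.io",
      rate_limit_remaining: remaining,
      reset_at: resetAt.toISOString(),
      // Assumed URL shape for the human-interface page of the tool.
      learn_more: `https://microapp.io/${slug}`,
    },
  };
}
```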
Deliberate omissions
What we're not building in the minimum gateway, with rationale:
- ML-based bot detection. Costs money, marginal value at our scale. Revisit if abuse patterns slip past Bot Fight Mode.
- CAPTCHAs. Wrong tool for an agent API — they can't solve them, and they don't slow API abuse anyway.
- Per-Member usage analytics dashboards. Nice-to-have, lands later when the Member-API-key tier matures. The `events` table is enough audit for v1.
- Distributed-attack mitigation beyond Cloudflare defaults. Buy that when there's a target on us — not before.
The gateway is a perimeter, not a fortress. The job is to keep noise out and the costs sane — not to defeat a determined adversary. If we ever need fortress-grade defenses, that's a different conversation (and a different budget).
The wedge — 20 tools
Selected for: pure input/output (no client-side state), high LLM-hallucination rate (so the agent benefits from a verified answer), and broad usefulness across coding/writing/data tasks.
`unit-converter`, `percentage-calculator`, `date-difference`, `color-converter`, `hex-to-rgb`, `base64`, `sha256-generator`, `md5-generator`, `json-formatter`, `yaml-to-json`, `url-encoder`, `uuid-generator`, `regex-tester`, `word-counter`, `character-counter`, `password-generator`, `random-number`, `tip-calculator`, `loan-calculator`, `aspect-ratio`

These are starting candidates, not locked. Final list lives in the MCP server's manifest. Order changes based on which ones agents actually reach for once Phase 1 is live.
What makes a Microapp agent-ready
Six rules. Every new Microapp from now on is built against these — they're the "definition of done" for the agent interface, same way the Simplicity Pledge is the definition of done for the human interface.
- One job per call.
The function exposed to agents does one thing. `unit_converter(value, from_unit, to_unit)` — not `tools(action, ...)`. Agents reason better against narrow, well-named tools than against multiplexed dispatch functions.
- Schema-validated inputs.
Every parameter has a JSON Schema type, description, and (where applicable) enum of allowed values. Descriptions tell the agent when to use the parameter, not just what type it is. The agent reads these literally — write them like documentation, not like type hints.
- Deterministic, idempotent.
Same inputs → same output, every call. No randomness without a seed parameter. No hidden state. No "the last time you called this" memory. Agents replay calls to recover from errors — non-deterministic responses break that loop.
- Errors that teach.
Error messages target the agent, not the human. Bad: `"Invalid input"`. Good: `"value must be a number; got string '12kg' — did you mean to pass value=12, from_unit='kg'?"`. The agent reads the error and corrects the next call. Treat errors as instructions, not as 404s.
- Structured output by default.
Return JSON, not prose. Numeric results carry their unit in a separate field. Currency results carry the ISO code. Timestamps are ISO 8601 in UTC. Agents post-process the response — they shouldn't have to regex it.
- Attribution honest, not loud.
Each response includes a small `_meta` field with `"source": "microapp.io"` and a link to the human-interface version of the tool. Agents that want to credit can; agents that don't, won't. We don't bury attribution but we don't insert ads — Microapp's brand voice rule §7 ("free is a fact, not a slogan") applies on the agent surface too.
Business model
The free tier on the agent surface mirrors the website: open to everyone, rate-limited at the level that protects us from abuse. Members get the higher rate already implied by their membership. A new tier appears once agent-side traffic warrants it.
| Tier | Price | What you get |
|---|---|---|
| Anyone | Free | Open. Rate-limited per IP/key. Attribution returned in `_meta`. Same model as microapp.io for non-members. |
| Member | $99/yr | Higher rate limit. No surprise — the existing membership benefits already include "AI at near-cost." Personal API key tied to the account. |
| Microapp for Agents | TBD | Future tier for high-volume API consumers (apps building on top of Microapp). Lands after Phase 1 validates demand. Pricing set when the use case is concrete, not before. |
Why MCP first, not GPT Store first
GPT Store has bigger reach today — 100M+ ChatGPT users vs the smaller (but rapidly growing) installed base of Claude/Cursor/Continue. So why not lead there?
Because MCP is the universal layer. One MCP server gives us reach across the entire agent ecosystem — every client that adopts MCP picks us up automatically. The GPT Store, by contrast, locks the work to one platform. Building MCP first means Phase 2 (OpenAPI + GPT Store) is mostly free; building GPT Store first means we'd have to redo most of it to reach Claude users. The trajectory is also clear: MCP is becoming the agent-tool standard the way HTTP became the web standard. Bet on the protocol, not the platform.
Tools that aren't agent-callable stop existing for the agent audience. Microapp doesn't intend to stop existing.