
Entry point

Source: src/index.ts

The bootstrap file. It reads top-to-bottom in the exact order it executes at runtime — useful as the one place where the whole architecture is visible end to end.

Order matters

 1. verifyAllSecretsLoadable() ← fail fast if any required secret is missing
 2. loadConfig() ← Zod-validate non-secret env vars
 3. Fastify({...}) ← logger w/ redaction, body limit, request IDs
 4. helmet ← security headers (HSTS, X-Frame-Options, etc.)
 5. globalRateLimiterPlugin ← Layer 1 (per-IP) BEFORE auth — cap unauth abuse
 6. tenantAuthPlugin ← X-Api-Key → req.tenantId; covers /mcp/* + /auth/*/start
 7. tenantRateLimiterPlugin ← Layer 2 (per-tenant) AFTER auth — needs req.tenantId
 8. adminRoutes ← /admin/api-keys/rotate (X-Admin-Token gated)
 9. oauthRoutes ← /auth/:platform/start + /callback (Phase 2)
10. mountMcp(app) ← POST /mcp (per-request McpServer + transport)
11. GET /health ← unauthenticated process-liveness probe
12. startOAuthStateCleanup() ← in-process timer (replaced by Inngest in Phase 4)
13. startIpBlockCleanup() ← in-process timer
14. app.listen({ host, port }) ← 127.0.0.1:3001 — nginx terminates TLS upstream

If you reorder steps 4–10 you’ll break expected behaviour. Notably:

  • Helmet must be registered first of these so its security headers are attached before any handler can write a response.
  • Rate-limit (Layer 1) must come before auth so unauthenticated abuse is still capped.
  • Auth must come before admin/MCP/OAuth-start so req.tenantId is populated when those handlers run. /auth/:platform/callback intentionally runs without req.tenantId (browser-driven; the state row carries it).
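The ordering rules above can be modelled as a toy preHandler chain (plain TypeScript, not Fastify; the API key, tenant table, and per-IP cap here are illustrative, not the project's real values):

```typescript
// Toy model of the preHandler chain: each step sees the request in order.
type Req = { ip: string; apiKey?: string; tenantId?: string };

const ipHits = new Map<string, number>();

// Layer 1: per-IP rate limit. Runs before auth, so even requests with no
// (or a bad) API key count against the IP cap.
function globalRateLimit(req: Req): string | null {
  const hits = (ipHits.get(req.ip) ?? 0) + 1;
  ipHits.set(req.ip, hits);
  return hits > 3 ? "429 too many requests" : null;
}

// Auth: resolves X-Api-Key to a tenant id (lookup table is illustrative).
function tenantAuth(req: Req): string | null {
  const tenants: Record<string, string> = { "key-abc": "tenant-1" };
  const tenant = req.apiKey ? tenants[req.apiKey] : undefined;
  if (!tenant) return "401 unauthorized";
  req.tenantId = tenant;
  return null;
}

// Layer 2: per-tenant rate limit. Must run after auth — it keys on tenantId.
function tenantRateLimit(req: Req): string | null {
  if (!req.tenantId) throw new Error("misordered: tenantId not populated");
  return null;
}

function handle(req: Req): string {
  for (const step of [globalRateLimit, tenantAuth, tenantRateLimit]) {
    const rejected = step(req);
    if (rejected) return rejected;
  }
  return `200 ok for ${req.tenantId}`;
}
```

Swapping tenantRateLimit ahead of tenantAuth in the chain throws immediately — the same class of bug the ordering rules prevent in the real entry point.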

Each of the security plugins (steps 5, 6, 7) is exported through fastify-plugin’s fp(...), so its addHook('preHandler', ...) calls escape the plugin’s encapsulated scope and apply to every route on the parent instance — including POST /mcp, which mountMcp(app) registers directly on the parent. A bare async-function plugin would silently no-op for /mcp. See docs/phase-1-foundation.md §14 #6 for the smoke-test that surfaced this.
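Why fp(...) matters can be shown with a miniature model of Fastify's scope encapsulation (a toy, not Fastify's actual implementation — real fastify-plugin works by flagging the plugin so Fastify skips creating a child context):

```typescript
// Miniature model of Fastify plugin encapsulation (illustrative only).
type Hook = (route: string) => void;

class Scope {
  hooks: Hook[] = [];
  constructor(private parent?: Scope) {}
  addHook(h: Hook) { this.hooks.push(h); }
  // Hooks that run for a route: this scope's plus all ancestors'.
  hooksFor(): Hook[] {
    return [...(this.parent?.hooksFor() ?? []), ...this.hooks];
  }
}

class App {
  root = new Scope();
  register(plugin: (s: Scope) => void, opts?: { encapsulate?: boolean }) {
    // A bare async-function plugin gets a fresh child scope;
    // an fp(...)-wrapped plugin registers on the parent scope itself.
    const scope = opts?.encapsulate === false ? this.root : new Scope(this.root);
    plugin(scope);
  }
}

const app = new App();
const fired: string[] = [];

// Bare plugin: its hook stays trapped inside the child scope.
app.register((s) => s.addHook(() => fired.push("bare")));

// fp-style plugin (encapsulation broken): hook lands on the parent.
app.register((s) => s.addHook(() => fired.push("fp")), { encapsulate: false });

// POST /mcp is registered directly on the parent, so only parent-scope
// hooks run for it:
for (const h of app.root.hooksFor()) h("/mcp");
```

Only "fp" ends up in fired — the bare plugin's hook never runs for /mcp, which is exactly the silent no-op the smoke-test surfaced.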

Logger

Two redaction layers wired in the Fastify constructor:

```typescript
logger: {
  redact: {
    paths: [..."x-api-key"..., ..."authorization"...],
    remove: true,
  },
  serializers: { err: scrubTokens(...) },
}
```

The redact paths cover request headers; the err serializer covers arbitrary error objects via log-scrubber.ts. See log-scrubber.md for the redaction contract.
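The real redaction contract lives in log-scrubber.md; purely as an illustration of the serializer's shape, a token-scrubbing err serializer could look like this (the pattern list below is a placeholder assumption, not the project's actual one):

```typescript
// Illustrative sketch of a token-scrubbing err serializer.
// The real patterns live in log-scrubber.ts; these are placeholders.
const TOKEN_PATTERNS = [
  /Bearer\s+[A-Za-z0-9._-]+/g, // Authorization bearer tokens
  /sk-[A-Za-z0-9]{8,}/g,       // generic secret-key shapes
];

function scrubTokens(err: Error): { message: string; stack?: string } {
  const scrub = (s: string) =>
    TOKEN_PATTERNS.reduce((acc, re) => acc.replace(re, "[REDACTED]"), s);
  return { message: scrub(err.message), stack: err.stack && scrub(err.stack) };
}
```

The point of the second layer: redact.paths only catches known header locations, while a serializer like this catches secrets that leak into arbitrary error messages and stack traces.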

Bind address

```typescript
await app.listen({ host: '127.0.0.1', port: 3001 });
```

The Node process never binds to a public interface. nginx forwards from :443 to 127.0.0.1:3001 (Phase 5). This is non-negotiable: if you change host to 0.0.0.0, the server is one nginx config bug away from being directly reachable on the open internet.

Health check

GET /health is unauthenticated and returns:

```json
{ "ok": true, "version": "0.1.0", "uptimeSec": 1234 }
```

A 200 here means the Node process is alive. It does not mean Postgres is healthy or that platform adapters are reachable. The dependency-health endpoint lands in Phase 4 (/admin/health/inngest).
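A handler of roughly this shape produces that payload (field names and version string taken from the response above; the route wiring shown in the comment is a sketch, not the exact source):

```typescript
// Sketch of the /health payload builder: process-liveness only,
// no dependency checks (those arrive with Phase 4's admin health route).
function healthPayload(version = "0.1.0") {
  return {
    ok: true,
    version,
    uptimeSec: Math.floor(process.uptime()),
  };
}

// In the entry point this would back the route, e.g.:
// app.get("/health", async () => healthPayload());
```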

Body limit

bodyLimit: 64 * 1024 matches the nginx client_max_body_size 64k from Phase 5. Anything bigger gets rejected at nginx before reaching the Node process; this limit is a defence-in-depth backstop.
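As a toy illustration of the backstop (Fastify enforces this internally via its bodyLimit option and replies 413 itself; this standalone check is not the actual implementation):

```typescript
// Toy backstop: reject payloads over 64 KiB with 413, mirroring what
// Fastify's bodyLimit and nginx's client_max_body_size both enforce.
const BODY_LIMIT = 64 * 1024; // 65536 bytes

function checkBodySize(body: Buffer): { status: number } {
  return body.byteLength > BODY_LIMIT ? { status: 413 } : { status: 200 };
}
```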

What the entry point does NOT do

  • It does not connect to the DB explicitly. The pool is created lazily on first import of src/db/index.ts (which loads DB_PASSWORD synchronously via top-level await on the secrets loader, plus registers the pg TIMESTAMP type parser override — see database.md).
  • It does not register Inngest. That lands in Phase 4.
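The create-once-lazily idea behind the DB pool can be sketched as follows. Note the real src/db/index.ts uses top-level await at module scope (the pool materialises on first import); this sketch uses a memoised getter instead so it is self-contained, and loadSecret/FakePool are placeholders, not the project's real loader or pg.Pool:

```typescript
// Lazy create-once sketch: nothing connects until the first caller asks.
// (Placeholders throughout — see src/db/index.ts and database.md for the
// real top-level-await wiring and the pg TIMESTAMP parser override.)
async function loadSecret(name: string): Promise<string> {
  return process.env[name] ?? "dev-only-placeholder";
}

class FakePool {
  constructor(public readonly password: string) {}
}

let poolPromise: Promise<FakePool> | undefined;

function getPool(): Promise<FakePool> {
  // First call creates the pool; every later call reuses the same promise.
  poolPromise ??= loadSecret("DB_PASSWORD").then((pw) => new FakePool(pw));
  return poolPromise;
}
```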