Deneva MCP Tool — Architecture Plan
Overview
A multi-tenant MCP server (TypeScript / Node.js) that pulls data from Google Ads, Meta Ads, and TikTok Ads, caches it in PostgreSQL, and exposes structured MCP tools over Streamable HTTP transport. Deployed on a public-facing Linux server with GDPR + SOC 2 compliance requirements.
System Diagram
Internet
│
▼
┌──────────────────────────────────────────────────────────────────┐
│ nginx (TLS termination, WAF rules, rate limiting) │
│ TLS 1.3 only │ HSTS │ Security headers │ Request size limits │
└──────────────────────────┬───────────────────────────────────────┘
│ localhost only (127.0.0.1:3001)
▼
┌──────────────────────────────────────────────────────────────────┐
│ Linux Server │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ MCP Server (Fastify) │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────────┐ │ │
│ │ │ Security Middleware Stack (applied in order) │ │ │
│ │ │ 1. Request ID injection │ │ │
│ │ │ 2. Rate limiter (per IP + per tenant key) │ │ │
│ │ │ 3. Tenant API key auth (constant-time compare) │ │ │
│ │ │ 4. Input validation (Zod schemas) │ │ │
│ │ │ 5. Audit log writer │ │ │
│ │ └──────────────────────────────────────────────────────┘ │ │
│ │ │ │
│ │ ┌─────────────────┐ ┌──────────────────────────────┐ │ │
│ │ │ Tool Registry │ │ Platform Adapters │ │ │
│ │ │ (7 MCP tools) │ │ Google │ Meta │ TikTok │ │ │
│ │ └─────────────────┘ └──────────────────────────────┘ │ │
│ │ │ │
│ │ ┌──────────────────────────────────────────────────────┐ │ │
│ │ │ Credentials Service (envelope encryption) │ │ │
│ │ │ DEK per tenant, KEK from secrets store │ │ │
│ │ └──────────────────────────────────────────────────────┘ │ │
│ └────────────────────────┬───────────────────────────────────┘ │
│ │ SSL + pg_hba.conf (local only) │
│ ┌────────────────────────▼───────────────────────────────────┐ │
│ │ PostgreSQL 16 │ │
│ │ tenants │ platform_credentials │ metric_cache │ │
│ │ audit_log │ api_keys │ sync_log │ │
│ │ Row-level security enabled on all tenant-scoped tables │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Background Sync (Inngest functions + cron) │ │
│ │ Request signed with INNGEST_SIGNING_KEY │ │
│ └────────────────────────────────────────────────────────────┘ │
│ │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ Secrets Store (systemd-creds or Vault Agent) │ │
│ │ KEK, DB password, platform secrets — never in env │ │
│ └────────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────┘
Project Structure
deneva-mcp/
├── src/
│ ├── index.ts # Entry point, Fastify bootstrap
│ ├── config.ts # Zod-validated config (no secrets here)
│ ├── mcp/
│ │ ├── server.ts # MCP server instance, tool registration
│ │ └── tools/
│ │ ├── account-health.ts
│ │ ├── pmax-breakdown.ts
│ │ ├── quality-score.ts
│ │ ├── search-term-waste.ts
│ │ ├── budget-optimizer.ts
│ │ ├── auction-insights.ts
│ │ └── weekly-anomaly.ts
│ ├── adapters/
│ │ ├── adapter.interface.ts
│ │ ├── google/ { index, auth, queries }
│ │ ├── meta/ { index, auth, queries }
│ │ └── tiktok/ { index, auth, queries }
│ ├── cache/
│ │ ├── cache.service.ts
│ │ └── ttl-config.ts
│ ├── sync/
│ │ ├── functions.ts
│ │ └── inngest.ts
│ ├── db/
│ │ ├── schema.ts
│ │ ├── rls.sql # Row-level security policies
│ │ ├── migrations/
│ │ └── index.ts
│ ├── security/
│ │ ├── api-key.service.ts # Key generation, hashing, rotation
│ │ ├── credentials.service.ts # Envelope encryption for OAuth tokens
│ │ ├── secrets.loader.ts # Load KEK from systemd-creds or Vault
│ │ ├── rate-limiter.plugin.ts # Per-IP + per-tenant rate limiting
│ │ ├── audit-log.service.ts # Structured immutable audit events
│ │ └── tenant.middleware.ts # Auth + context injection
│ └── auth/
│ ├── oauth.routes.ts # /auth/:platform/start + /callback
│ └── oauth-state.service.ts # PKCE + state param management
├── docker-compose.yml
├── ecosystem.config.js
├── drizzle.config.ts
├── tsconfig.json
└── package.json
Tech Stack
| Layer | Choice | Notes |
|---|---|---|
| Runtime | Node.js 22 LTS | LTS, native fetch, --experimental-permission flag available |
| Language | TypeScript 5.5 | Strict mode, noUncheckedIndexedAccess enabled |
| MCP SDK | @modelcontextprotocol/sdk | Official SDK, Streamable HTTP |
| HTTP server | Fastify 5 | Fast, TS-native, helmet plugin for security headers |
| ORM | Drizzle ORM | Parameterized queries only, no raw string interpolation |
| Database | PostgreSQL 16 | RLS, SSL connections, encrypted at rest |
| Queue | Inngest | Signed webhook requests, durable retries |
| Crypto | Node.js crypto AES-256-GCM | Envelope encryption per tenant |
| Secrets | systemd-creds (or HashiCorp Vault) | KEK never in env vars or on disk unencrypted |
| Process manager | PM2 | Non-root user, no shell access |
| Logging | Pino | Structured JSON, PII fields redacted before write |
| Validation | Zod | All external inputs, env config, MCP tool params |
| Security headers | @fastify/helmet | CSP, HSTS, X-Frame-Options, etc. |
| Rate limiting | @fastify/rate-limit | Per-IP + per-tenant-key limits |
Security Architecture
1. Transport Security
nginx enforces TLS 1.3 minimum. TLS 1.2 is disabled. The Fastify process binds to 127.0.0.1 only — it is never directly reachable from the internet.
# /etc/nginx/sites-available/deneva-mcp
server {
listen 443 ssl;
server_name your-domain.com;
ssl_certificate /etc/letsencrypt/live/your-domain.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
ssl_protocols TLSv1.3; # TLS 1.2 explicitly disabled
# With OpenSSL, TLS 1.3 cipher suites are set via ssl_conf_command, not ssl_ciphers
ssl_conf_command Ciphersuites TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256;
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_stapling on;
ssl_stapling_verify on;
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
add_header X-Frame-Options DENY always;
add_header X-Content-Type-Options nosniff always;
add_header Referrer-Policy no-referrer always;
add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
# Hide nginx version
server_tokens off;
# Reject oversized requests before they hit Node.js
client_max_body_size 64k;
# Basic nginx-level rate limiting (coarse — Fastify enforces finer limits);
# the mcp_global and mcp_auth zones are defined via limit_req_zone in the http block
limit_req zone=mcp_global burst=30 nodelay;
location /mcp {
  proxy_pass http://127.0.0.1:3001;
  proxy_http_version 1.1;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_buffering off;  # Streamable HTTP responses must not be buffered
  proxy_hide_header X-Powered-By;
}
# OAuth callback — restricted path
location /auth/ {
proxy_pass http://127.0.0.1:3001;
limit_req zone=mcp_auth burst=5 nodelay;
}
# Block everything else
location / { return 444; }
}
# Redirect HTTP to HTTPS
server {
listen 80;
return 301 https://$host$request_uri;
}
2. API Key Security
Tenants authenticate with bearer-style API keys. The raw key is shown once at creation and never stored. Only a keyed HMAC-SHA256 hash is stored in the database.
// security/api-key.service.ts
import { createHmac, randomBytes, timingSafeEqual } from 'crypto';
const HMAC_SECRET = await secretsLoader.get('API_KEY_HMAC_SECRET');
export function hashApiKey(rawKey: string): string {
return createHmac('sha256', HMAC_SECRET).update(rawKey).digest('hex');
}
export function verifyApiKey(rawKey: string, storedHash: string): boolean {
const candidate = Buffer.from(hashApiKey(rawKey));
const stored = Buffer.from(storedHash);
// Constant-time comparison — prevents timing attacks
if (candidate.length !== stored.length) return false;
return timingSafeEqual(candidate, stored);
}
export function generateApiKey(): string {
// 32 bytes = 256 bits of entropy, base64url-encoded
return randomBytes(32).toString('base64url');
}
API keys carry an expiry date and a rotation mechanism. When a key is rotated, the old key remains valid for a configurable grace period (default: 24h) to allow clients to update without downtime.
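That rotation rule reduces to a small pure function; a sketch under stated assumptions (rotateApiKey and GRACE_MS are illustrative names, not the actual service API):

```typescript
// Illustrative sketch, not the real service: on rotation the old key gets a
// short expiry instead of immediate revocation, so clients can switch over.
const GRACE_MS = 24 * 60 * 60 * 1000; // default 24h grace window

interface ApiKeyRow {
  keyHash: string;
  expiresAt: Date | null; // null = no expiry scheduled
  revokedAt: Date | null;
}

function rotateApiKey(
  oldKey: ApiKeyRow,
  newKeyHash: string,
  now: Date = new Date(),
): { old: ApiKeyRow; replacement: ApiKeyRow } {
  return {
    // Old key stays valid until now + grace period, then expires naturally
    old: { ...oldKey, expiresAt: new Date(now.getTime() + GRACE_MS) },
    replacement: { keyHash: newKeyHash, expiresAt: null, revokedAt: null },
  };
}
```

A hard revocation (suspected compromise) would still set revokedAt immediately rather than going through the grace window.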
3. Envelope Encryption for OAuth Tokens
Each tenant gets its own Data Encryption Key (DEK). DEKs are encrypted with a Key Encryption Key (KEK) loaded from the secrets store at startup. This means:
- Compromising the database alone does not expose tokens (no KEK).
- Compromising the server alone does not expose tokens at rest (no database).
- Revoking a tenant means destroying their DEK — all their stored tokens become unreadable immediately.
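getOrCreateDek, used in the service below, is referenced but not shown. A hedged sketch of the wrap/unwrap it implies, reusing the iv(12) + tag(16) + ciphertext layout; kekWrap and kekUnwrap are illustrative names, not the actual API:

```typescript
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';

// Wrap a freshly generated tenant DEK under the KEK before persisting it.
function kekWrap(kek: Buffer, dek: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', kek, iv);
  const encrypted = Buffer.concat([cipher.update(dek), cipher.final()]);
  // Same layout as the token blobs: iv(12) + tag(16) + ciphertext
  return Buffer.concat([iv, cipher.getAuthTag(), encrypted]);
}

// Unwrap a stored DEK blob at use time; throws if the KEK or blob is wrong.
function kekUnwrap(kek: Buffer, blob: Buffer): Buffer {
  const decipher = createDecipheriv('aes-256-gcm', kek, blob.subarray(0, 12));
  decipher.setAuthTag(blob.subarray(12, 28));
  return Buffer.concat([decipher.update(blob.subarray(28)), decipher.final()]);
}
```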
// security/credentials.service.ts
import { randomBytes, createCipheriv, createDecipheriv } from 'crypto';
// KEK loaded once at startup from secrets store — never from env
const kek = await secretsLoader.get('CREDENTIAL_KEK'); // 32 bytes
export async function encryptToken(tenantId: string, plaintext: string): Promise<string> {
const dek = await getOrCreateDek(tenantId); // tenant-specific DEK, itself encrypted with KEK
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', dek, iv);
const encrypted = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
const tag = cipher.getAuthTag();
// Format: iv(12) + tag(16) + ciphertext — all base64
return Buffer.concat([iv, tag, encrypted]).toString('base64');
}
export async function decryptToken(tenantId: string, ciphertext: string): Promise<string> {
const dek = await getOrCreateDek(tenantId);
const buf = Buffer.from(ciphertext, 'base64');
const iv = buf.subarray(0, 12);
const tag = buf.subarray(12, 28);
const data = buf.subarray(28);
const decipher = createDecipheriv('aes-256-gcm', dek, iv);
decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(data), decipher.final()]).toString('utf8');
}
4. Secrets Management
Secrets (KEK, DB password, platform client secrets) must never live in .env files, environment variables, or on the filesystem unencrypted.
Recommended: systemd credentials (simplest for a single Linux server)
# Store secret at provisioning time — encrypted on disk by systemd
sudo systemd-creds encrypt --name=CREDENTIAL_KEK - /etc/deneva-mcp/creds/CREDENTIAL_KEK.cred
# In the systemd unit file:
# LoadCredential=CREDENTIAL_KEK:/etc/deneva-mcp/creds/CREDENTIAL_KEK.cred
# Node.js reads from /run/credentials/<unit>/ at runtime
// security/secrets.loader.ts
import { promises as fs } from 'fs';
export async function loadSecret(name: string): Promise<Buffer> {
  // systemd decrypts LoadCredential entries into a tmpfs directory and
  // exports its path as $CREDENTIALS_DIRECTORY at runtime
  const credPath = `${process.env.CREDENTIALS_DIRECTORY}/${name}`;
  return fs.readFile(credPath); // tmpfs — never written to persistent disk
}
Alternative for multi-server: HashiCorp Vault with AppRole auth. The Node.js process authenticates to Vault at startup and fetches secrets via the Vault HTTP API. Vault handles rotation, audit trails, and lease expiry natively.
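A hedged sketch of that Vault path: AppRole login followed by a KV v2 read. The mount path (secret/data/deneva-mcp), the env var names, and the injected httpJson helper are assumptions for illustration, not project code:

```typescript
// Minimal AppRole + KV v2 sketch. httpJson is injected so the flow is
// testable; in production it would be fetch + response.json().
type HttpJson = (
  url: string,
  init?: { method?: string; headers?: Record<string, string>; body?: string },
) => Promise<any>;

export async function loadFromVault(name: string, httpJson: HttpJson): Promise<string> {
  const addr = process.env.VAULT_ADDR ?? 'http://127.0.0.1:8200';
  // 1. AppRole login: role_id + secret_id exchanged for a short-lived token
  const login = await httpJson(`${addr}/v1/auth/approle/login`, {
    method: 'POST',
    body: JSON.stringify({
      role_id: process.env.VAULT_ROLE_ID,
      secret_id: process.env.VAULT_SECRET_ID,
    }),
  });
  // 2. KV v2 read: the secret payload is nested under data.data
  const secret = await httpJson(`${addr}/v1/secret/data/deneva-mcp`, {
    headers: { 'X-Vault-Token': login.auth.client_token },
  });
  return secret.data.data[name];
}
```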
5. OAuth 2.0 Hardening (PKCE + State)
All OAuth flows use PKCE and a server-generated state parameter to prevent CSRF and authorization code interception.
// auth/oauth-state.service.ts
import { createHash, randomBytes } from 'crypto';
export async function createOAuthState(tenantId: string, platform: string): Promise<{
state: string; codeVerifier: string; codeChallenge: string;
}> {
const state = randomBytes(32).toString('base64url');
const codeVerifier = randomBytes(32).toString('base64url');
const codeChallenge = createHash('sha256')
.update(codeVerifier).digest('base64url'); // S256 method
// Store in DB with 10-minute TTL — single use
await db.insert(oauthStates).values({
state, codeVerifier, tenantId, platform,
expiresAt: new Date(Date.now() + 10 * 60 * 1000),
});
return { state, codeVerifier, codeChallenge };
}
export async function consumeOAuthState(state: string): Promise<OAuthStateRow> {
const row = await db.delete(oauthStates)
.where(and(eq(oauthStates.state, state), gt(oauthStates.expiresAt, new Date())))
.returning().then(r => r[0]);
if (!row) throw new Error('Invalid or expired OAuth state');
return row; // deleted on read — single use enforced
}
6. Rate Limiting
Two layers: coarse at nginx, fine-grained at Fastify.
// security/rate-limiter.plugin.ts
await fastify.register(import('@fastify/rate-limit'), {
global: true,
max: 100, // per IP per minute (global default)
timeWindow: 60_000,
keyGenerator: (req) => req.ip,
errorResponseBuilder: () => ({ error: 'rate_limit_exceeded' }),
});
// Stricter limit on MCP tool calls per authenticated tenant
fastify.addHook('preHandler', async (req) => {
if (req.tenantId) {
await tenantRateLimiter.consume(req.tenantId, 1); // 300 req/min per tenant
}
});
// Very strict on auth endpoints — 5 attempts per 15 min per IP.
// @fastify/rate-limit applies per-route overrides through the route options, e.g.:
// fastify.get('/auth/:platform/start', { config: { rateLimit: { max: 5, timeWindow: 900_000 } } }, handler);
Auth failures increment a counter. After 10 failures from the same IP within 1 hour, that IP is blocked for 1 hour and the event is written to the audit log.
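A minimal in-memory sketch of that block rule (FailureTracker is an illustrative name; a production version would persist counters, e.g. in Postgres, so blocks survive restarts):

```typescript
// Illustrative sketch: sliding-window failure counting, in-memory only.
const WINDOW_MS = 60 * 60 * 1000; // 1 hour, both for counting and for the block
const MAX_FAILURES = 10;

class FailureTracker {
  private failures = new Map<string, number[]>();
  private blockedUntil = new Map<string, number>();

  recordFailure(ip: string, now: number): void {
    // Keep only failures inside the sliding 1-hour window
    const recent = (this.failures.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
    recent.push(now);
    this.failures.set(ip, recent);
    if (recent.length >= MAX_FAILURES) {
      // Block for 1 hour; production would also write auth.blocked_ip to the audit log
      this.blockedUntil.set(ip, now + WINDOW_MS);
    }
  }

  isBlocked(ip: string, now: number): boolean {
    return (this.blockedUntil.get(ip) ?? 0) > now;
  }
}
```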
7. Input Validation & Injection Prevention
Every MCP tool input is validated through a Zod schema before any business logic runs. Drizzle ORM uses parameterized queries exclusively — no raw SQL string interpolation anywhere in the codebase. A lint rule (no-restricted-syntax) bans db.execute(sql`…`) calls with template interpolation.
// Example: strict enum validation prevents any parameter pollution
const AccountHealthInput = z.object({
platform: z.enum(['google', 'meta', 'tiktok']),
dateRange: z.enum(['last_7_days', 'last_30_days', 'last_90_days']),
// No free-text fields — every param is a closed enum
});
Metric data from platform APIs is stored as JSONB but never executed or interpolated into queries. When returned to MCP clients, it is serialized with JSON.stringify — no template construction.
8. Audit Logging (SOC 2 CC6, CC7)
Every security-relevant event is written to an append-only audit_log table. The application DB user has INSERT permission only on this table — no UPDATE or DELETE. This makes the log tamper-evident from the application layer.
// DB schema
export const auditLog = pgTable('audit_log', {
id: uuid('id').primaryKey().defaultRandom(),
tenantId: uuid('tenant_id'), // null for system events
eventType: text('event_type').notNull(), // see enum below
actorIp: text('actor_ip'),
requestId: text('request_id'), // correlates with Pino logs
outcome: text('outcome').notNull(), // 'success' | 'failure'
metadata: jsonb('metadata'), // PII-free context
createdAt: timestamp('created_at').defaultNow().notNull(),
});
// Event types
type AuditEventType =
| 'api_key.auth_success' | 'api_key.auth_failure'
| 'api_key.created' | 'api_key.rotated' | 'api_key.revoked'
| 'oauth.flow_started' | 'oauth.flow_completed' | 'oauth.flow_failed'
| 'oauth.token_refreshed' | 'oauth.token_revoked'
| 'mcp.tool_called' | 'mcp.tool_failed'
| 'tenant.created' | 'tenant.deleted'
  | 'rate_limit.exceeded' | 'auth.blocked_ip';
Logs are retained for 12 months minimum (SOC 2 requirement). After 12 months, rows are moved to a cold archive table via a scheduled Inngest function, not deleted. The metadata JSONB field must never contain names, email addresses, or any personal data (GDPR data minimisation).
9. GDPR Controls
Data minimisation: The metric_cache table stores ad performance metrics — purely aggregated numerical data. It must never store campaign descriptions, audience names, or anything that could identify individuals.
Retention limits: metric_cache rows expire via expiresAt and are hard-deleted by a nightly Inngest function after 90 days. audit_log rows are archived (not deleted) after 12 months. sync_log rows are deleted after 30 days.
Right to erasure: Deleting a tenant cascades via foreign keys to platform_credentials, metric_cache, and sync_log. The tenant’s DEK is destroyed simultaneously — encrypted credential blobs in the DB become permanently unreadable even if rows are somehow recovered. Audit log rows for that tenant are anonymised (tenantId set to null) rather than deleted, to preserve the integrity of the security record.
export async function deleteTenant(tenantId: string): Promise<void> {
await db.transaction(async (tx) => {
await tx.delete(metricCache).where(eq(metricCache.tenantId, tenantId));
await tx.delete(platformCredentials).where(eq(platformCredentials.tenantId, tenantId));
await tx.delete(syncLog).where(eq(syncLog.tenantId, tenantId));
await destroyDek(tenantId); // DEK deleted from secrets store — tokens unreadable
await tx.update(auditLog) // Anonymise, don't delete
.set({ tenantId: null, metadata: sql`metadata - 'account_id'` })
.where(eq(auditLog.tenantId, tenantId));
await tx.delete(tenants).where(eq(tenants.id, tenantId));
});
}
Data processing documentation: Maintain a Record of Processing Activities (ROPA) noting that this service processes ad performance data (no personal data) on behalf of clients, stored in the EU (or document the actual server region).
10. Database Security
-- Two separate DB roles — principle of least privilege
CREATE ROLE mcp_app LOGIN PASSWORD '...'; -- application queries
CREATE ROLE mcp_admin LOGIN PASSWORD '...'; -- migrations only (CI/CD, not runtime)
-- mcp_app can read/write business tables but cannot alter schema
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO mcp_app;
GRANT USAGE ON ALL SEQUENCES IN SCHEMA public TO mcp_app;
-- audit_log is INSERT-only for the application
REVOKE UPDATE, DELETE ON audit_log FROM mcp_app;
-- RLS: tenants can only see their own rows
ALTER TABLE metric_cache ENABLE ROW LEVEL SECURITY;
ALTER TABLE platform_credentials ENABLE ROW LEVEL SECURITY;
ALTER TABLE sync_log ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON metric_cache
  USING (tenant_id = current_setting('app.current_tenant_id')::uuid);
The PostgreSQL instance listens on 127.0.0.1 only. All connections use SSL (sslmode=require). pg_hba.conf allows only local connections from the application user.
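The policy above reads app.current_tenant_id, which the application must set per transaction after authenticating the tenant; a sketch (the uuid is a placeholder):

```sql
-- Run at the start of every tenant-scoped transaction. SET LOCAL is
-- transaction-scoped, so the setting cannot leak across pooled connections.
BEGIN;
SET LOCAL app.current_tenant_id = '00000000-0000-0000-0000-000000000000';
SELECT count(*) FROM metric_cache;  -- RLS policy filters to this tenant's rows
COMMIT;
```

Note that RLS does not apply to the table owner or superusers by default, so business tables should be owned by mcp_admin, not mcp_app.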
11. Process & OS Hardening
# Run as a dedicated non-root user, no shell
useradd --system --no-create-home --shell /usr/sbin/nologin deneva-mcp
# Filesystem: app files owned by root, readable by service user — not writable
chown -R root:deneva-mcp /opt/deneva-mcp/dist
chmod -R 750 /opt/deneva-mcp/dist
# systemd unit hardening
[Service]
User=deneva-mcp
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/log/deneva-mcp
CapabilityBoundingSet=
RestrictAddressFamilies=AF_INET AF_INET6
12. Dependency Security
// package.json — run in CI on every PR and nightly
"scripts": {
"audit": "npm audit --audit-level=high",
"audit:fix": "npm audit fix"
}
Pin all dependencies to exact versions in package-lock.json. Set up Dependabot (or Renovate) for automated patch PRs. Fail the CI pipeline if npm audit reports high or critical vulnerabilities.
Database Schema (Drizzle)
export const tenants = pgTable('tenants', {
id: uuid('id').primaryKey().defaultRandom(),
name: text('name').notNull(),
createdAt: timestamp('created_at').defaultNow(),
deletedAt: timestamp('deleted_at'), // soft-delete before cascade
});
// Separate table for API keys — supports rotation (multiple active keys per tenant)
export const apiKeys = pgTable('api_keys', {
id: uuid('id').primaryKey().defaultRandom(),
tenantId: uuid('tenant_id').references(() => tenants.id).notNull(),
keyHash: text('key_hash').notNull().unique(), // HMAC-SHA256, not SHA-256
description: text('description'), // e.g. "Claude Desktop - prod"
lastUsedAt: timestamp('last_used_at'),
expiresAt: timestamp('expires_at'), // mandatory expiry
revokedAt: timestamp('revoked_at'),
createdAt: timestamp('created_at').defaultNow(),
});
export const platformCredentials = pgTable('platform_credentials', {
id: uuid('id').primaryKey().defaultRandom(),
tenantId: uuid('tenant_id').references(() => tenants.id).notNull(),
platform: text('platform').notNull(),
accountId: text('account_id').notNull(),
accessTokenEnc: text('access_token_enc').notNull(), // envelope-encrypted
refreshTokenEnc: text('refresh_token_enc'),
tokenExpiresAt: timestamp('token_expires_at'),
scopes: text('scopes').array(),
updatedAt: timestamp('updated_at').defaultNow(),
});
export const oauthStates = pgTable('oauth_states', {
state: text('state').primaryKey(), // random 32 bytes, base64url
codeVerifier: text('code_verifier').notNull(), // PKCE
tenantId: uuid('tenant_id').notNull(),
platform: text('platform').notNull(),
expiresAt: timestamp('expires_at').notNull(), // 10 min TTL
});
export const metricCache = pgTable('metric_cache', {
id: uuid('id').primaryKey().defaultRandom(),
tenantId: uuid('tenant_id').references(() => tenants.id).notNull(),
platform: text('platform').notNull(),
reportType: text('report_type').notNull(),
dateRangeKey: text('date_range_key').notNull(),
data: jsonb('data').notNull(),
fetchedAt: timestamp('fetched_at').defaultNow(),
expiresAt: timestamp('expires_at').notNull(),
// Hard-deleted by nightly Inngest function after 90 days
});
export const auditLog = pgTable('audit_log', {
id: uuid('id').primaryKey().defaultRandom(),
tenantId: uuid('tenant_id'), // nullable — anonymised on erasure
eventType: text('event_type').notNull(),
actorIp: text('actor_ip'),
requestId: text('request_id'),
outcome: text('outcome').notNull(),
metadata: jsonb('metadata'), // no PII
createdAt: timestamp('created_at').defaultNow().notNull(),
// INSERT only for mcp_app role — no UPDATE/DELETE
});
export const syncLog = pgTable('sync_log', {
id: uuid('id').primaryKey().defaultRandom(),
tenantId: uuid('tenant_id').references(() => tenants.id),
platform: text('platform').notNull(),
status: text('status').notNull(),
durationMs: integer('duration_ms'),
errorMessage: text('error_message'),
createdAt: timestamp('created_at').defaultNow(),
// Deleted after 30 days by nightly Inngest function
});
Platform Adapter Interface
export interface PlatformAdapter {
readonly platform: 'google' | 'meta' | 'tiktok';
exchangeCode(tenantId: string, code: string, codeVerifier: string): Promise<void>; // PKCE
ensureValidToken(tenantId: string): Promise<string>;
fetchAccountHealth(tenantId: string, range: DateRange): Promise<AccountHealthData>;
fetchCampaigns(tenantId: string, range: DateRange): Promise<CampaignData[]>;
fetchSearchTerms(tenantId: string, range: DateRange): Promise<SearchTermData[]>;
fetchAuctionInsights(tenantId: string, range: DateRange): Promise<AuctionData>;
fetchKeywordQualityScores(tenantId: string, range: DateRange): Promise<QSData[]>;
fetchAssetGroups(tenantId: string, range: DateRange): Promise<AssetGroupData[]>;
}
MCP Tools (7 tools)
| Tool Name | Platforms | Data Returned |
|---|---|---|
| get_account_health | All | Spend, ROAS, CPA, CTR — 90-day trends, per-campaign ranking |
| get_pmax_breakdown | Google | Asset group ROAS, hidden search categories |
| get_quality_score | Google | Per-keyword QS + spend, worst offenders flagged |
| get_search_term_waste | Google, Meta | Top 50 terms: cost vs conversions, negatives suggested |
| get_budget_optimizer | All | Current vs recommended spend split, projected ROAS delta |
| get_auction_insights | Google, Meta | Top 5 competitors, impression share trends |
| get_weekly_anomaly | All | Every metric that moved >15% week-over-week |
All tool inputs are closed enum sets — no free-text parameters that could carry injection payloads.
Authentication & Multi-Tenancy
Request lifecycle
Incoming request
│
├─ No / invalid X-Api-Key header
│ └─ 401, audit log: api_key.auth_failure, increment IP counter
│
├─ Key present → HMAC hash → lookup apiKeys table (constant-time)
│ ├─ Not found, expired, or revoked → 401
│ └─ Found → inject tenantId into request context
│ → update lastUsedAt (async, non-blocking)
│ → audit log: api_key.auth_success
│
├─ Rate limit check (per-tenant bucket)
│ └─ Exceeded → 429, audit log: rate_limit.exceeded
│
└─ Zod schema validation on tool params
      └─ Invalid → 400, no audit entry needed (not a security event)
OAuth flow with PKCE
Client / Admin Tool MCP Server Ad Platform
│ │ │
│ GET /auth/google/start?tenantId=... │
│──────────────────────▶│ │
│ │ generate state + PKCE verifier
│ │ store in oauth_states (10min TTL)
│ 302 → consent URL │ │
│ (includes state, │ │
│ code_challenge) │ │
│◀──────────────────────│ │
│ │ │
│ (user consents) │ │
│ │ POST /token │
│ GET /auth/google/ │ + code_verifier (PKCE) │
│ callback?code=&state=│◀───────────────────────────▶│
│──────────────────────▶│ tokens returned │
│ │ │
│ │ verify state (consume + delete)
│ │ envelope-encrypt tokens
│ │ audit log: oauth.flow_completed
│ 200 Connected ✓ │ │
│◀──────────────────────│                             │
Caching Strategy
MCP tool call
│
▼
metric_cache lookup (tenant + platform + report + date_range)
│
HIT, not expired ────────────────────────────▶ Return JSONB
│
MISS or expired
│
pg_advisory_lock (prevents thundering herd)
│
ensureValidToken() — refresh if needed, log if fails
│
Fetch from platform API
│
Validate response shape (Zod) before storing
│
Upsert metric_cache row
│
Release lock → Return data → audit log: mcp.tool_called
TTL config:
export const TTL_SECONDS = {
google: { account_health: 3600, search_terms: 7200, auction_insights: 14400 },
meta: { account_health: 3600, search_terms: 7200 },
tiktok: { account_health: 7200 },
};
Background Sync with Inngest
Inngest requests are verified using the INNGEST_SIGNING_KEY before any function executes. This prevents spoofed requests to /api/inngest from triggering sync jobs.
// src/sync/inngest.ts
export const inngest = new Inngest({
  id: 'deneva-mcp',
  eventKey: await secretsLoader.get('INNGEST_EVENT_KEY'),
});
// Signature verification belongs to the serve() handler, not the client:
// serve({ client: inngest, functions, signingKey: await secretsLoader.get('INNGEST_SIGNING_KEY') });
Two functions, plus two maintenance functions for GDPR data retention:
// Scheduled fan-out — every hour
export const syncScheduled = inngest.createFunction(
{ id: 'sync-scheduled-refresh' },
{ cron: '0 * * * *' },
async ({ step }) => { /* fan-out events per active tenant+platform */ }
);
// Per-tenant worker with automatic retry
export const syncRefreshTenant = inngest.createFunction(
{ id: 'sync-refresh-tenant', retries: 3 },
{ event: 'sync/refresh-tenant' },
async ({ event, step }) => { /* step-level cache refresh + sync_log write */ }
);
// GDPR: nightly hard-delete of expired metric_cache rows (>90 days)
export const purgeExpiredCache = inngest.createFunction(
{ id: 'gdpr-purge-cache' },
{ cron: '0 2 * * *' }, // 02:00 daily
async ({ step }) => {
await step.run('delete-expired', () =>
db.delete(metricCache).where(lt(metricCache.expiresAt, new Date()))
);
}
);
// Housekeeping: archive audit_log rows older than 12 months, delete sync_log >30 days
export const archiveLogs = inngest.createFunction(
{ id: 'housekeeping-archive-logs' },
{ cron: '0 3 * * *' },
async ({ step }) => { /* move audit rows to archive, delete old sync rows */ }
);
Deployment (Linux / PM2)
/opt/deneva-mcp/
├── dist/
├── ecosystem.config.js
└── (no .env file — secrets loaded from systemd-creds at runtime)
// ecosystem.config.js
module.exports = {
apps: [{
name: 'mcp-server',
script: './dist/index.js',
instances: 2,
exec_mode: 'cluster',
env_production: {
NODE_ENV: 'production',
PORT: 3001,
// No secrets here — loaded by secretsLoader from /run/credentials/
}
}]
};
# UFW firewall — only 443, 80, and 22 reachable from the internet
ufw default deny incoming
ufw allow 443/tcp
ufw allow 22/tcp # SSH — restrict to known IPs in production
ufw allow 80/tcp # Let's Encrypt HTTP challenge only
ufw enable
# PostgreSQL not reachable externally
# Port 5432 bound to 127.0.0.1 in postgresql.conf
Environment & Secrets
Non-sensitive config (safe in env / PM2 ecosystem.config.js):
NODE_ENV=production
PORT=3001
DATABASE_URL=postgresql://mcp_app@localhost:5432/deneva_mcp?sslmode=require
GOOGLE_OAUTH_REDIRECT_URI=https://your-domain.com/auth/google/callback
META_OAUTH_REDIRECT_URI=https://your-domain.com/auth/meta/callback
TIKTOK_OAUTH_REDIRECT_URI=https://your-domain.com/auth/tiktok/callback
Secrets (loaded via systemd-creds — never in env vars):
CREDENTIAL_KEK # 32-byte key encryption key
API_KEY_HMAC_SECRET # HMAC key for API key hashing
DB_PASSWORD # PostgreSQL password for mcp_app role
GOOGLE_CLIENT_SECRET
GOOGLE_DEVELOPER_TOKEN
META_APP_SECRET
TIKTOK_APP_SECRET
INNGEST_SIGNING_KEY
INNGEST_EVENT_KEY
Build & Dev Scripts
{
"scripts": {
"dev": "tsx watch src/index.ts",
"build": "tsc --project tsconfig.json",
"start": "node dist/index.js",
"db:migrate": "drizzle-kit migrate",
"db:studio": "drizzle-kit studio",
"inngest:dev": "npx inngest-cli@latest dev -u http://localhost:3001/api/inngest",
"audit": "npm audit --audit-level=high",
"typecheck": "tsc --noEmit"
}
}
Development Phases
Phase 1 — Secure Foundation (1–2 weeks)
Fastify server, Drizzle schema + RLS policies, secrets loader (file-based for dev, systemd-creds for prod), tenant middleware with HMAC key auth and constant-time comparison, rate limiting plugin, audit log service, one stub MCP tool verifying the full middleware stack end-to-end.
Phase 2 — Google Ads Adapter (1–2 weeks)
PKCE + state OAuth flow, envelope encryption for token storage, GAQL query builders, all 7 tools for Google Ads, cache layer with TTL config. Audit log events for all OAuth and tool-call outcomes.
Phase 3 — Meta + TikTok Adapters (1–2 weeks)
Meta Graph API and TikTok Marketing API adapters. Platform-specific tools return unsupported_platform cleanly. Token refresh edge cases handled (revoked tokens, expired scopes).
Phase 4 — Background Sync with Inngest (3–5 days)
Signed Inngest client, sync + maintenance functions (cache purge, log archival), sync_log writes, per-step error handling.
Phase 5 — Hardening & Compliance (1–2 weeks)
Full nginx TLS config, UFW rules, systemd unit with hardening flags, DB role separation + RLS verification, API key rotation endpoint, GDPR erasure endpoint, dependency audit in CI, penetration test of the public endpoints (at minimum: auth bypass, rate limit bypass, injection attempts).