Phase 2 — Google Ads Adapter
Detailed execution doc for Phase 2 of the MCP Marketing Tool Architecture Plan. Builds directly on Phase 1 — assumes everything in Phase 1’s Definition of Done is already shipped.
Estimated effort: 1–2 weeks for one engineer.
Phase 3 follow-up: (not yet written — will be docs/phase-3-meta-tiktok.md).
Implementation status (rolling)
| PR | Workstream | Landed | Notes |
|---|---|---|---|
| PR-1 | A (cache) + B-schema (tenant_deks + RLS) | ✅ | Migration 0001_modern_sharon_carter.sql. See “Deviations from spec” below. |
| PR-2 | B (envelope encryption service) | ✅ | src/security/credentials.service.ts + 6 integration tests. See “Deviations from spec” below. |
| PR-3 | Config + secrets wiring (Google client) | ✅ | config.ts Google fields + REQUIRED_SECRETS += GOOGLE_CLIENT_SECRET/GOOGLE_DEVELOPER_TOKEN + dev-secrets.sh placeholders. Real values still required for live OAuth (see runbook §4). |
| PR-4 | C (OAuth routes) + D-auth (token exchange/refresh) | ✅ | adapters/google/auth.ts + adapters/registry.ts + auth/oauth-routes.ts; tenant middleware now auths /auth/*/start; migration 0002 adds the unique index for the upsert. 9 integration tests. See “Deviations from spec” below. |
| PR-5 | D-queries (GAQL layer) | ✅ | google-ads-api@^23 installed; adapters/google/queries.ts ships the seven query functions, customers.ts ships resolveCustomerId, index.ts ships the GoogleAdapter class that implements the full PlatformAdapter interface. 12 integration tests with vi.mock('google-ads-api', ...). Code can land; live smoke test still blocked on Google Ads developer-token approval. |
| PR-6 | E (seven tools) | ✅ | mcp/tools/_helpers.ts ships makeTool (cache + adapter dispatch + audit + unsupported_platform); each of the seven mcp/tools/<tool>.ts is now ~15 lines. mcp/server.ts TOOLS registry imports all seven. Also fixes a latent Phase 1 bug: the transport was in stateful mode with a per-request session ID, which made tools/call return “Server not initialized” without an initialize handshake first; PR-6 switches to stateless mode (the spec’s documented one-shot curl pattern now actually works). 12 integration tests. See deviations below. |
| PR-7 | F (revoke) + G (DEK rotation) + H (cache metrics admin) | ✅ | adapters/google/revoke.ts ships best-effort upstream revoke + RLS-scoped local wipe; GoogleAdapter.revoke() implements the new interface method. admin-routes.ts gains POST /admin/tenants/:tenantId/disconnect/:platform and GET /admin/metrics/cache. scripts/rotate-dek.ts re-encrypts a tenant’s tokens under a fresh DEK in one outer transaction (credentials.service now accepts an optional tx for atomicity). 14 integration tests (revocation, dek-rotation, cache-metrics). See deviations below. |
Deviations from spec
- Tenant context in `cache.service.ts`. The doc’s `readOrFetch` opens a transaction only on cache miss. The implementation wraps the entire call in a transaction so it can run `SELECT set_config('app.current_tenant_id', $1, true)` on the same transaction as the read — required because `metric_cache` is RLS-isolated and the runtime connects as `mcp_app` (`NOBYPASSRLS`). Cost: one extra BEGIN/COMMIT on cache hits (~sub-ms locally). Benefit: pooled connections never leak tenant context between requests.
- KEK format in `credentials.service.ts`. Spec was `if (KEK.length !== 32) throw …` on the raw loaded buffer. The on-disk `CREDENTIAL_KEK` is base64-encoded 32 bytes (`openssl rand -base64 32` per `docs/setup-ubuntu.md`), so the raw buffer is ~45 bytes (44 base64 chars + newline) and would always fail that check. The implementation base64-decodes, trims, then validates 32 bytes. Same contract in dev and prod.
- DEK access path in `credentials.service.ts`. `tenant_deks` is RLS-isolated, so reads/writes from the `mcp_app` runtime must run inside a transaction with `app.current_tenant_id` set. `ensureDek(tenantId)` checks the in-process cache first; on a miss it opens one transaction to set context and resolve the row. Steady-state encryption is then in-memory only (no DB).
- DEK creation race. Spec used a plain `INSERT`, which would throw on PK conflict if two processes ever miss-and-create for the same tenant simultaneously. The implementation uses `INSERT … ON CONFLICT DO NOTHING` followed by a re-`SELECT` so the loser adopts the winner’s row instead of crashing. One DEK per tenant, always.
- `tenant.middleware` extended to `/auth/*/start`. Phase 1’s middleware only enforced auth on `/mcp/*`. The OAuth start route needs `req.tenantId` to know which tenant is connecting, so the middleware now matches `^\/auth\/[^/]+\/start(?:\?|$)` in addition. `/callback` stays unauthenticated by design — the state row carries `tenantId`.
- `ensureValidToken` uses two short transactions. Spec held one transaction across the network refresh call to Google. That would keep a pool connection idle for hundreds of ms during refresh, hurting throughput. Implementation: read the row in tx 1, exit, call Google (no tx held), update in tx 2. Race condition: concurrent refreshes for the same tenant will both hit Google; Google does not invalidate prior access tokens on refresh, so the loser’s token stays valid until expiry.
- `platform_credentials` unique index. Spec used `onConflictDoUpdate` on `(tenant_id, platform)` but the Phase 1 schema had no matching unique constraint. PR-4 adds migration `0002_new_synch.sql` with `platform_credentials_tenant_platform_idx`.
- pg `TIMESTAMP` type parser overridden. node-postgres parses `timestamp without time zone` as a JS Date in the host’s local timezone. Every timestamp we write is `Date.toISOString()` UTC, so this silently produces wrong UNIX-millis on any non-UTC host. `src/db/index.ts` now registers a UTC parser for OID 1114 — this affects every pg-driver connection in the process (Drizzle and the test pools alike). Prod was incidentally safe on default-Ubuntu (UTC), but the fix is robust against drift.
- `google-ads-api` auth model and the spec’s `oauth_access_token` field. The spec called for passing a pre-refreshed access token to `apiClient.Customer({ oauth_access_token: ... })`. The library at v23 does not expose `oauth_access_token` — the `CustomerOptions` type requires `refresh_token`, and the library maintains its own internal access-token LRU keyed by the refresh token. Implementation: `queries.ts` decrypts the stored refresh token via `decryptToken` and passes it directly. Trade-off accepted: per-query background refreshes issued by the SDK are not individually written to our `audit_log` (they’re implicit in the `mcp.tool_called` row written by the tool wrapper). Our own `ensureValidToken` is still the canonical refresh path for the OAuth callback and `resolveCustomerId`, both of which audit `oauth.token_refreshed`.
- `DateConstant` doesn’t include `LAST_90_DAYS`. `google-ads-api`’s enum stops at `LAST_30_DAYS`. `queries.ts` computes `from_date`/`to_date` strings instead so all three Phase 2 date ranges (7/30/90) go through a single code path.
- Customer ID resolution. Phase 2 simplification: `resolveCustomerId` picks the first entry from `customers:listAccessibleCustomers`. If the tenant’s OAuth identity has access to multiple customers under an MCC, only one is ever used. Phase 3 will add a customer-picker.
- `login_customer_id` mirrors `customer_id`. Direct accounts only. MCC traversal (manager-customer-id ≠ child-customer-id) is deferred to Phase 3.
- MCP transport switched to stateless mode (PR-6). Phase 1 set `sessionIdGenerator: () => randomUUID()` on `StreamableHTTPServerTransport`, but every request also built a fresh server + transport instance — so the generated session never got re-used and the next request always failed with “Server not initialized”. The smoke test in `setup-ubuntu.md` Step 11 was aspirational, not tested. PR-6 sets `sessionIdGenerator: undefined`, putting the transport in stateless mode where every POST is independent. The cast is `as any` because the SDK’s own type declares the field required, but its JSDoc explicitly shows `sessionIdGenerator: undefined` for stateless setups.
- credentials.service now accepts an optional `tx` for rotation atomicity (PR-7). Without it, the original rotation pseudocode in the spec had a snapshot-isolation bug: the outer transaction’s `DELETE tenant_deks` is not visible to a separately-opened tx (used by `encryptToken` → `ensureDek`), so re-encryption would silently use the OLD DEK and the committed delete would leave tokens unreadable. Implementation: `encryptToken`, `decryptToken`, `destroyDek`, and the internal `ensureDek` accept an optional `tx`. When provided, all DB hops happen on the SAME connection and see the outer tx’s pending state — rotation can DELETE the old row and immediately re-INSERT a new one in one atomic step. When omitted, behaviour is unchanged from PR-2.
- Rotation is a `tsx`-runnable script with an extracted function (PR-7). `scripts/rotate-dek.ts` parses `process.argv[2]`, calls `rotateDek(tenantId)`, and `process.exit(0)`s. The tests in `dek-rotation.test.ts` invoke their own copy of the function rather than the script’s argv driver. A comment in the test file warns to keep the two copies in lockstep; a future cleanup could pull the function into `src/security/dek-rotation.ts` and have the script + test both import it. Deferred — every other “tooling” script in the repo uses the same shape today.
- `/admin/tenants/.../disconnect/...` requires a valid v4 UUID. Zod 4’s `.uuid()` is strict about version/variant bits. Production traffic is fine (`gen_random_uuid()` produces proper v4), but test fixtures that use placeholder UUIDs (`00000000-0000-0000-...`) need to be valid v4 — a leading `4` in the third group and `8`/`9`/`a`/`b` in the fourth.
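The `DateConstant` deviation above replaces the enum with explicit dates. A minimal sketch of what such a helper could look like — `rangeDates` is a hypothetical name, and the boundary semantics (yesterday as the inclusive upper bound) are an assumption, not necessarily what `queries.ts` ships:

```typescript
// Hypothetical sketch: compute explicit from/to dates so 7/30/90-day ranges
// share one code path (google-ads-api's DateConstant stops at LAST_30_DAYS).
type DateRange = 'last_7_days' | 'last_30_days' | 'last_90_days'; // local copy of the adapter type

function rangeDates(range: DateRange, today = new Date()): { from_date: string; to_date: string } {
  const days = { last_7_days: 7, last_30_days: 30, last_90_days: 90 }[range];
  const to = new Date(today);
  to.setUTCDate(to.getUTCDate() - 1);            // assume yesterday is the inclusive upper bound
  const from = new Date(to);
  from.setUTCDate(from.getUTCDate() - (days - 1)); // inclusive span of exactly `days` days
  const fmt = (d: Date) => d.toISOString().slice(0, 10); // YYYY-MM-DD, as GAQL expects
  return { from_date: fmt(from), to_date: fmt(to) };
}
```

The `{ from_date, to_date }` pair then slots into the `report()` call in place of `date_constant`.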
Goal
Replace the get_account_health stub with a real Google Ads pipeline: a tenant connects via PKCE-protected OAuth, tokens land in platform_credentials envelope-encrypted, the seven MCP tools serve real data through a TTL-bounded cache, and Phase 1’s audit log captures every step.
If Phase 2 is done correctly, a tenant’s flow is: hit /auth/google/start → consent on Google → tokens stored encrypted → call any of the seven tools → cached responses on hit, fresh GAQL pulls on miss → audit log complete.
Definition of Done (high-level — full checklist in §11)
- A tenant can complete the Google Ads OAuth flow end-to-end and a row appears in `platform_credentials` with non-decryptable ciphertext at rest.
- Killing a tenant’s DEK makes their stored tokens unreadable; restoring it makes them readable again.
- All seven MCP tools return real Google Ads data on a connected tenant; the same tools return `unsupported_platform` for `meta`/`tiktok`.
- Cache hits short-circuit before any Google Ads API call; cache misses populate the row and emit a `mcp.tool_called` audit entry with `cache: 'miss'`.
- `POST /admin/tenants/:id/disconnect/google` revokes upstream and wipes the `platform_credentials` row.
- DEK rotation script re-encrypts every token under a new DEK with no service downtime.
- CI passes with mocked Google Ads client; envelope encryption, cache, and OAuth state hit real Postgres.
Workstream order & dependency graph
A. Cache layer ──────────────┐
│
B. Envelope encryption ──┬───┼──▶ D. Google Ads adapter ──▶ E. Seven tools
│ │ │
▼ │ ├──▶ F. Token revocation endpoint
C. OAuth flow │
(uses Phase 1's ├──▶ G. DEK rotation tooling (depends on B)
oauth-state) │
└──▶ H. Cache metrics (depends on A)
I. Tests run alongside everything

Critical path: A + B → C → D → E. F/G/H are parallelisable once their dependencies land.
Workstream A — Cache layer
A1. TTL config
File: src/cache/ttl-config.ts
export const TTL_SECONDS = {
google: {
account_health: 3600, // 1h
search_term_waste: 7200, // 2h
quality_score: 7200,
auction_insights: 14_400, // 4h
pmax_breakdown: 7200,
budget_optimizer: 3600,
weekly_anomaly: 7200,
},
meta: { account_health: 3600 }, // populated in Phase 3
tiktok: { account_health: 7200 },
} as const;
export type Platform = keyof typeof TTL_SECONDS;
export type ReportType = keyof (typeof TTL_SECONDS)['google'];
export function ttlFor(platform: Platform, reportType: ReportType): number {
const ttl = (TTL_SECONDS[platform] as Record<string, number>)[reportType];
if (!ttl) throw new Error(`No TTL configured for ${platform}/${reportType}`);
return ttl;
}

A2. Cache service with thundering-herd protection
File: src/cache/cache.service.ts
import { and, eq, gt, sql } from 'drizzle-orm';
import { db } from '../db/index.js';
import { metricCache } from '../db/schema.js';
import { ttlFor, type Platform, type ReportType } from './ttl-config.js';
import { recordCacheEvent } from './metrics.js';
export interface CacheKey {
tenantId: string;
platform: Platform;
reportType: ReportType;
dateRangeKey: string;
}
export async function readOrFetch<T>(
key: CacheKey,
fetcher: () => Promise<T>,
opts: { force?: boolean } = {},
): Promise<{ data: T; cache: 'hit' | 'miss' }> {
const now = new Date();
if (!opts.force) {
const [hit] = await db.select().from(metricCache).where(and(
eq(metricCache.tenantId, key.tenantId),
eq(metricCache.platform, key.platform),
eq(metricCache.reportType, key.reportType),
eq(metricCache.dateRangeKey, key.dateRangeKey),
gt(metricCache.expiresAt, now),
));
if (hit) {
recordCacheEvent('hit', key);
return { data: hit.data as T, cache: 'hit' };
}
}
// pg_advisory_xact_lock prevents the thundering-herd effect: if 50 concurrent
// requests miss the same key, only one fetches; the others wait, then read the
// freshly-written row on retry.
return await db.transaction(async (tx) => {
const lockKey = hashCacheKey(key); // bigint
await tx.execute(sql`SELECT pg_advisory_xact_lock(${lockKey})`);
// Re-check after acquiring lock (double-checked locking) — skipped on force,
// since the eager-sync worker explicitly wants to overwrite a still-fresh row.
if (!opts.force) {
const [retry] = await tx.select().from(metricCache).where(and(
eq(metricCache.tenantId, key.tenantId),
eq(metricCache.platform, key.platform),
eq(metricCache.reportType, key.reportType),
eq(metricCache.dateRangeKey, key.dateRangeKey),
gt(metricCache.expiresAt, new Date()),
));
if (retry) {
recordCacheEvent('hit', key);
return { data: retry.data as T, cache: 'hit' };
}
}
recordCacheEvent('miss', key);
const fresh = await fetcher();
const expiresAt = new Date(Date.now() + ttlFor(key.platform, key.reportType) * 1000);
await tx.insert(metricCache).values({
tenantId: key.tenantId, platform: key.platform,
reportType: key.reportType, dateRangeKey: key.dateRangeKey,
data: fresh as object, expiresAt,
}).onConflictDoUpdate({
target: [metricCache.tenantId, metricCache.platform, metricCache.reportType, metricCache.dateRangeKey],
set: { data: fresh as object, expiresAt, fetchedAt: new Date() },
});
return { data: fresh, cache: 'miss' as const };
});
}
function hashCacheKey(k: CacheKey): bigint {
// Stable 64-bit hash for advisory lock — any deterministic hash works.
const s = `${k.tenantId}|${k.platform}|${k.reportType}|${k.dateRangeKey}`;
let h = 0xcbf29ce484222325n; const prime = 0x100000001b3n;
for (const ch of Buffer.from(s)) { h ^= BigInt(ch); h = (h * prime) & 0xffffffffffffffffn; }
// pg_advisory_lock takes int8 → cast to signed range
return h > 0x7fffffffffffffffn ? h - 0x10000000000000000n : h;
}

Add a unique index on the cache key tuple via a Phase 2 migration — the `onConflictDoUpdate` requires it:

CREATE UNIQUE INDEX metric_cache_key_idx ON metric_cache (tenant_id, platform, report_type, date_range_key);
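Because the advisory-lock key must be a deterministic value inside Postgres’s signed int8 range, the hash is worth a standalone property check. The block below duplicates `hashCacheKey` (with a local key type) purely for illustration:

```typescript
// Local copy of hashCacheKey for a property check: FNV-1a over the joined key,
// masked to 64 bits, then folded into Postgres's signed int8 range.
interface CacheKeyLike { tenantId: string; platform: string; reportType: string; dateRangeKey: string }

function hashCacheKey(k: CacheKeyLike): bigint {
  const s = `${k.tenantId}|${k.platform}|${k.reportType}|${k.dateRangeKey}`;
  let h = 0xcbf29ce484222325n;
  const prime = 0x100000001b3n;
  for (const ch of Buffer.from(s)) { h ^= BigInt(ch); h = (h * prime) & 0xffffffffffffffffn; }
  return h > 0x7fffffffffffffffn ? h - 0x10000000000000000n : h;
}

const key = { tenantId: 't1', platform: 'google', reportType: 'account_health', dateRangeKey: 'last_7_days' };
const a = hashCacheKey(key);       // same key → same lock key, always
const b = hashCacheKey({ ...key });
```

Determinism is what makes concurrent misses for the same key serialise on the same `pg_advisory_xact_lock`; the signed-range fold is needed because `BigInt` values above 2^63−1 would not fit int8.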
A3. Acceptance
File: tests/integration/cache.test.ts
- Two concurrent calls for the same missing key issue exactly one fetcher call (use a counter mock).
- A second call within TTL hits the cache (no fetcher invocation).
- A second call after TTL expires re-fetches and updates `fetchedAt`.
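The first acceptance property — N concurrent misses, one fetch — can be illustrated with an in-memory single-flight analogue. This is illustrative only: the real `readOrFetch` serialises via `pg_advisory_xact_lock` in Postgres, not a process-local map:

```typescript
// In-memory analogue of the advisory-lock pattern: concurrent misses for the
// same key share one in-flight fetch instead of each calling the fetcher.
const inflight = new Map<string, Promise<unknown>>();

async function singleFlight<T>(key: string, fetcher: () => Promise<T>): Promise<T> {
  const existing = inflight.get(key);
  if (existing) return existing as Promise<T>; // someone else already fetching this key
  const p = fetcher().finally(() => inflight.delete(key));
  inflight.set(key, p);
  return p;
}
```

Note the difference in scope: this map only deduplicates within one process, while the Postgres lock deduplicates across every process sharing the database — which is why the real implementation also re-checks the cache after acquiring the lock.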
Workstream B — Envelope encryption
B1. Credentials service
File: src/security/credentials.service.ts
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';
import { eq } from 'drizzle-orm';
import { db } from '../db/index.js';
import { tenantDeks } from '../db/schema.js'; // new table — see B3
import { loadSecret } from './secrets.loader.js';
const KEK = await loadSecret('CREDENTIAL_KEK'); // 32 bytes
if (KEK.length !== 32) throw new Error('CREDENTIAL_KEK must be exactly 32 bytes');
interface DekRow { tenantId: string; dekEnc: Buffer; dekIv: Buffer; dekTag: Buffer; kekVersion: number }
const dekCache = new Map<string, Buffer>(); // tenantId → plaintext DEK (process-local only)
async function getOrCreateDek(tenantId: string): Promise<Buffer> {
const cached = dekCache.get(tenantId);
if (cached) return cached;
const [row] = await db.select().from(tenantDeks).where(eq(tenantDeks.tenantId, tenantId));
if (row) {
const dek = decryptWithKek(row.dekEnc, row.dekIv, row.dekTag);
dekCache.set(tenantId, dek);
return dek;
}
// First use for this tenant — generate a fresh DEK and wrap it under the KEK.
const dek = randomBytes(32);
const { ciphertext, iv, tag } = encryptWithKek(dek);
await db.insert(tenantDeks).values({
tenantId, dekEnc: ciphertext, dekIv: iv, dekTag: tag, kekVersion: 1,
});
dekCache.set(tenantId, dek);
return dek;
}
function encryptWithKek(plaintext: Buffer) {
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', KEK, iv);
const ciphertext = Buffer.concat([cipher.update(plaintext), cipher.final()]);
return { ciphertext, iv, tag: cipher.getAuthTag() };
}
function decryptWithKek(ciphertext: Buffer, iv: Buffer, tag: Buffer): Buffer {
const decipher = createDecipheriv('aes-256-gcm', KEK, iv);
decipher.setAuthTag(tag);
return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
}
// Format on disk: iv(12) | tag(16) | ciphertext, base64
export async function encryptToken(tenantId: string, plaintext: string): Promise<string> {
const dek = await getOrCreateDek(tenantId);
const iv = randomBytes(12);
const cipher = createCipheriv('aes-256-gcm', dek, iv);
const ct = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
return Buffer.concat([iv, cipher.getAuthTag(), ct]).toString('base64');
}
export async function decryptToken(tenantId: string, blob: string): Promise<string> {
const dek = await getOrCreateDek(tenantId);
const buf = Buffer.from(blob, 'base64');
const iv = buf.subarray(0, 12);
const tag = buf.subarray(12, 28);
const ct = buf.subarray(28);
const decipher = createDecipheriv('aes-256-gcm', dek, iv);
decipher.setAuthTag(tag);
return Buffer.concat([decipher.update(ct), decipher.final()]).toString('utf8');
}
export async function destroyDek(tenantId: string): Promise<void> {
await db.delete(tenantDeks).where(eq(tenantDeks.tenantId, tenantId));
dekCache.delete(tenantId);
}

Why a per-tenant DEK? Destroying a single row makes one tenant’s tokens permanently unreadable without re-encrypting anyone else’s data. This is the GDPR right-to-erasure mechanism (architecture doc §9).
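Note that the `KEK.length !== 32` check above is the spec-as-written; per the KEK-format deviation, the on-disk `CREDENTIAL_KEK` is base64 text, not raw bytes. A sketch of the decode-then-validate step (`decodeKek` is a hypothetical name — the shipped logic lives inline in `credentials.service.ts`):

```typescript
import { randomBytes } from 'node:crypto';

// Sketch: decode the base64 CREDENTIAL_KEK file contents to the raw 32-byte
// key, tolerating the trailing newline that `openssl rand -base64 32` leaves.
function decodeKek(onDisk: string): Buffer {
  const kek = Buffer.from(onDisk.trim(), 'base64');
  if (kek.length !== 32) throw new Error('CREDENTIAL_KEK must decode to exactly 32 bytes');
  return kek;
}

// Simulate what `openssl rand -base64 32 > secrets/CREDENTIAL_KEK` produces:
const fileContents = randomBytes(32).toString('base64') + '\n'; // 44 base64 chars + newline
const kek = decodeKek(fileContents); // raw 32-byte AES-256 key
```

The raw file is ~45 bytes, so validating the undecoded buffer against 32 would fail on every boot — decoding first keeps the dev and prod contract identical.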
B2. New tenant_deks table
Add to src/db/schema.ts:
export const tenantDeks = pgTable('tenant_deks', {
tenantId: uuid('tenant_id').primaryKey().references(() => tenants.id, { onDelete: 'cascade' }),
dekEnc: bytea('dek_enc').notNull(),
dekIv: bytea('dek_iv').notNull(),
dekTag: bytea('dek_tag').notNull(),
kekVersion: integer('kek_version').notNull().default(1), // for B-series KEK rotation
createdAt: timestamp('created_at').defaultNow().notNull(),
rotatedAt: timestamp('rotated_at'),
});

(`bytea` helper: `customType<{ data: Buffer; driverData: Buffer }>({ dataType: () => 'bytea' })`.)
Append to src/db/rls.sql (introduced in Phase 1 §B3) so tenant_deks is isolated for the same reason as platform_credentials — defence-in-depth on the table holding the keys to decrypt every tenant’s tokens:
ALTER TABLE tenant_deks ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation_deks ON tenant_deks
  USING (tenant_id = current_setting('app.current_tenant_id', true)::uuid);

B3. Acceptance
File: tests/integration/credentials.test.ts
- `encryptToken('t1', 'hello')` produces a value that does NOT contain `'hello'` and is not equal to a re-encryption with the same input (different IV).
- `decryptToken('t1', encryptToken('t1', 'hello'))` round-trips.
- `decryptToken('t2', encryptToken('t1', 'hello'))` throws (different DEK).
- `destroyDek('t1')` followed by `decryptToken('t1', ...)` throws.
- The plaintext DEK never appears in any DB row when inspected via raw `psql`.
- Connecting as `mcp_app` without setting `app.current_tenant_id` and selecting from `tenant_deks` returns zero rows (RLS engaged).
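The first four properties can be demonstrated standalone with `node:crypto` and the same `iv(12) | tag(16) | ciphertext` layout — a self-contained sketch, not the service itself (`seal`/`open` are hypothetical names):

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// AES-256-GCM with the doc's blob layout: iv(12) | tag(16) | ciphertext, base64.
function seal(dek: Buffer, plaintext: string): string {
  const iv = randomBytes(12); // fresh IV every call → re-encryption never matches
  const c = createCipheriv('aes-256-gcm', dek, iv);
  const ct = Buffer.concat([c.update(plaintext, 'utf8'), c.final()]);
  return Buffer.concat([iv, c.getAuthTag(), ct]).toString('base64');
}

function open(dek: Buffer, blob: string): string {
  const b = Buffer.from(blob, 'base64');
  const d = createDecipheriv('aes-256-gcm', dek, b.subarray(0, 12));
  d.setAuthTag(b.subarray(12, 28)); // GCM tag check makes wrong-DEK decrypts throw
  return Buffer.concat([d.update(b.subarray(28)), d.final()]).toString('utf8');
}
```

The wrong-DEK case fails at `d.final()` because the GCM auth tag never verifies — the same mechanism that makes `destroyDek` an effective erasure: without the DEK, the blobs are noise.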
Workstream C — OAuth flow with PKCE
C1. Google client config
Add to src/config.ts (Zod-validated):
export const config = z.object({
GOOGLE_CLIENT_ID: z.string().min(1),
GOOGLE_OAUTH_REDIRECT_URI: z.string().url(),
GOOGLE_AUTH_ENDPOINT: z.string().url().default('https://accounts.google.com/o/oauth2/v2/auth'),
GOOGLE_TOKEN_ENDPOINT: z.string().url().default('https://oauth2.googleapis.com/token'),
GOOGLE_SCOPES: z.string().default('https://www.googleapis.com/auth/adwords'),
}).parse(process.env);

`GOOGLE_CLIENT_SECRET` and `GOOGLE_DEVELOPER_TOKEN` are loaded via the secrets loader — never `process.env`.
C2. Auth start route
File: src/auth/oauth-routes.ts
import type { FastifyPluginAsync } from 'fastify';
import { z } from 'zod';
import { createOAuthState, consumeOAuthState } from './oauth-state.service.js';
import { config } from '../config.js';
import { writeAuditEvent } from '../security/audit-log.service.js';
import { getAdapter } from '../adapters/registry.js';
const StartParams = z.object({ platform: z.enum(['google', 'meta', 'tiktok']) });
export const oauthRoutes: FastifyPluginAsync = async (fastify) => {
fastify.get('/auth/:platform/start', async (req, reply) => {
const { platform } = StartParams.parse(req.params);
if (platform !== 'google') return reply.code(501).send({ error: 'unsupported_platform' }); // Phase 3 lifts this
if (!req.tenantId) return reply.code(401).send();
const { state, codeChallenge } = await createOAuthState(req.tenantId, platform);
await writeAuditEvent('oauth.flow_started', 'success', { tenantId: req.tenantId, platform });
const url = new URL(config.GOOGLE_AUTH_ENDPOINT);
url.searchParams.set('client_id', config.GOOGLE_CLIENT_ID);
url.searchParams.set('redirect_uri', config.GOOGLE_OAUTH_REDIRECT_URI);
url.searchParams.set('response_type', 'code');
url.searchParams.set('scope', config.GOOGLE_SCOPES);
url.searchParams.set('access_type', 'offline'); // get refresh_token
url.searchParams.set('prompt', 'consent'); // force refresh_token issuance
url.searchParams.set('state', state);
url.searchParams.set('code_challenge', codeChallenge);
url.searchParams.set('code_challenge_method', 'S256');
return reply.redirect(url.toString(), 302);
});
fastify.get('/auth/:platform/callback', async (req, reply) => {
const { platform } = StartParams.parse(req.params);
const Q = z.object({ code: z.string().min(1), state: z.string().min(1), error: z.string().optional() });
const q = Q.parse(req.query);
if (q.error) {
await writeAuditEvent('oauth.flow_failed', 'failure', { platform, reason: q.error });
return reply.code(400).send({ error: 'oauth_denied' });
}
const stateRow = await consumeOAuthState(q.state); // single-use, throws on miss/expiry
if (stateRow.platform !== platform) {
await writeAuditEvent('oauth.flow_failed', 'failure', { reason: 'state_platform_mismatch' });
return reply.code(400).send({ error: 'invalid_state' });
}
const adapter = getAdapter(platform);
await adapter.exchangeCode(stateRow.tenantId, q.code, stateRow.codeVerifier);
await writeAuditEvent('oauth.flow_completed', 'success', { tenantId: stateRow.tenantId, platform });
return reply.send({ status: 'connected', platform });
});
};

Note: the start route requires an authenticated tenant context. The `/auth/*` rate limit set up in Phase 1 §E3 already protects both routes (5 req per 15 min per IP).
C3. Acceptance
- Hitting `/auth/google/start` without a valid `X-Api-Key` returns 401.
- Hitting `/auth/google/start` with a valid key returns a 302 to Google with all required params and `code_challenge_method=S256`.
- Replaying a callback with the same `state` twice fails on the second call (single-use enforcement from Phase 1).
- Tampering with `state` (e.g. mismatched platform) fails with an `oauth_completed=false` audit event.
Workstream D — Google Ads adapter
D1. Adapter interface
File: src/adapters/adapter.interface.ts
export interface PlatformAdapter {
readonly platform: 'google' | 'meta' | 'tiktok';
exchangeCode(tenantId: string, code: string, codeVerifier: string): Promise<void>;
ensureValidToken(tenantId: string): Promise<string>;
fetchAccountHealth(tenantId: string, range: DateRange): Promise<AccountHealthData>;
fetchCampaigns(tenantId: string, range: DateRange): Promise<CampaignData[]>;
fetchSearchTerms(tenantId: string, range: DateRange): Promise<SearchTermData[]>;
fetchAuctionInsights(tenantId: string, range: DateRange): Promise<AuctionData>;
fetchKeywordQualityScores(tenantId: string, range: DateRange): Promise<QSData[]>;
fetchAssetGroups(tenantId: string, range: DateRange): Promise<AssetGroupData[]>;
revoke(tenantId: string): Promise<void>; // pulled in for §F
}
export type DateRange = 'last_7_days' | 'last_30_days' | 'last_90_days';

File: src/adapters/registry.ts
import { GoogleAdapter } from './google/index.js';
import type { PlatformAdapter } from './adapter.interface.js';
const adapters = new Map<string, PlatformAdapter>([
['google', new GoogleAdapter()],
// Phase 3: Meta, TikTok
]);
export function getAdapter(platform: string): PlatformAdapter {
const a = adapters.get(platform);
if (!a) throw new Error(`unsupported_platform: ${platform}`);
return a;
}

D2. OAuth code exchange + token persistence
File: src/adapters/google/auth.ts
import { eq } from 'drizzle-orm';
import { db } from '../../db/index.js';
import { platformCredentials } from '../../db/schema.js';
import { encryptToken, decryptToken } from '../../security/credentials.service.js';
import { loadSecret } from '../../security/secrets.loader.js';
import { config } from '../../config.js';
import { writeAuditEvent } from '../../security/audit-log.service.js';
const REFRESH_SKEW_SECONDS = 300; // refresh 5 min before actual expiry
export async function exchangeAuthCode(tenantId: string, code: string, codeVerifier: string): Promise<void> {
const clientSecret = (await loadSecret('GOOGLE_CLIENT_SECRET' as never)).toString('utf8');
const body = new URLSearchParams({
code,
client_id: config.GOOGLE_CLIENT_ID,
client_secret: clientSecret,
redirect_uri: config.GOOGLE_OAUTH_REDIRECT_URI,
grant_type: 'authorization_code',
code_verifier: codeVerifier,
});
const res = await fetch(config.GOOGLE_TOKEN_ENDPOINT, {
method: 'POST',
headers: { 'content-type': 'application/x-www-form-urlencoded' },
body,
});
if (!res.ok) throw new Error(`google_token_exchange_failed: ${res.status}`);
const tokens = (await res.json()) as { access_token: string; refresh_token: string; expires_in: number; scope: string };
const accessEnc = await encryptToken(tenantId, tokens.access_token);
const refreshEnc = await encryptToken(tenantId, tokens.refresh_token);
// Customer ID resolution — required for Google Ads API. Phase 2 simplification:
// store empty string, populate on first fetch via listAccessibleCustomers.
await db.insert(platformCredentials).values({
tenantId, platform: 'google', accountId: '',
accessTokenEnc: accessEnc, refreshTokenEnc: refreshEnc,
tokenExpiresAt: new Date(Date.now() + tokens.expires_in * 1000),
scopes: tokens.scope.split(' '),
}).onConflictDoUpdate({
target: [platformCredentials.tenantId, platformCredentials.platform],
set: {
accessTokenEnc: accessEnc, refreshTokenEnc: refreshEnc,
tokenExpiresAt: new Date(Date.now() + tokens.expires_in * 1000),
scopes: tokens.scope.split(' '),
updatedAt: new Date(),
},
});
}
export async function ensureValidToken(tenantId: string): Promise<string> {
  const rows = await db.select().from(platformCredentials)
    .where(eq(platformCredentials.tenantId, tenantId));
  const row = rows.find((r) => r.platform === 'google');
  if (!row) throw new Error('no_google_credentials');
const expiresAt = row.tokenExpiresAt!.getTime();
if (expiresAt - Date.now() > REFRESH_SKEW_SECONDS * 1000) {
return decryptToken(tenantId, row.accessTokenEnc);
}
// Lazy refresh
const refresh = await decryptToken(tenantId, row.refreshTokenEnc!);
const clientSecret = (await loadSecret('GOOGLE_CLIENT_SECRET' as never)).toString('utf8');
const body = new URLSearchParams({
refresh_token: refresh,
client_id: config.GOOGLE_CLIENT_ID,
client_secret: clientSecret,
grant_type: 'refresh_token',
});
const res = await fetch(config.GOOGLE_TOKEN_ENDPOINT, {
method: 'POST', headers: { 'content-type': 'application/x-www-form-urlencoded' }, body,
});
if (!res.ok) {
await writeAuditEvent('oauth.token_refreshed', 'failure', { tenantId, platform: 'google', status: res.status });
throw new Error(`google_token_refresh_failed: ${res.status}`);
}
const tokens = (await res.json()) as { access_token: string; expires_in: number };
const newAccessEnc = await encryptToken(tenantId, tokens.access_token);
await db.update(platformCredentials)
.set({ accessTokenEnc: newAccessEnc, tokenExpiresAt: new Date(Date.now() + tokens.expires_in * 1000), updatedAt: new Date() })
.where(eq(platformCredentials.tenantId, tenantId));
await writeAuditEvent('oauth.token_refreshed', 'success', { tenantId, platform: 'google' });
return tokens.access_token;
}

Add `'GOOGLE_CLIENT_SECRET'` and `'GOOGLE_DEVELOPER_TOKEN'` to the `REQUIRED_SECRETS` list in `secrets.loader.ts`. The Phase 1 startup check now also catches these. Also extend `scripts/dev-secrets.sh` (Phase 1 §C3) so dev startup doesn’t fail after Phase 2 lands — `src/adapters/google/queries.ts` reads both at module load time:

# append to scripts/dev-secrets.sh
[[ -f secrets/GOOGLE_CLIENT_SECRET ]] || echo -n 'google_test_client_secret' > secrets/GOOGLE_CLIENT_SECRET
[[ -f secrets/GOOGLE_DEVELOPER_TOKEN ]] || echo -n 'google_test_dev_token' > secrets/GOOGLE_DEVELOPER_TOKEN
D3. Query layer (GAQL)
File: src/adapters/google/queries.ts
Use the google-ads-api package; it produces typed results from GAQL strings. Build one query function per report type:
import { GoogleAdsApi } from 'google-ads-api';
import { loadSecret } from '../../security/secrets.loader.js';
import { config } from '../../config.js';
import { ensureValidToken } from './auth.js';
import type { DateRange } from '../adapter.interface.js';
const developerToken = (await loadSecret('GOOGLE_DEVELOPER_TOKEN' as never)).toString('utf8');
const clientSecret = (await loadSecret('GOOGLE_CLIENT_SECRET' as never)).toString('utf8');
const apiClient = new GoogleAdsApi({
client_id: config.GOOGLE_CLIENT_ID,
client_secret: clientSecret,
developer_token: developerToken,
});
const RANGE_GAQL: Record<DateRange, string> = {
last_7_days: 'segments.date DURING LAST_7_DAYS',
last_30_days: 'segments.date DURING LAST_30_DAYS',
last_90_days: 'segments.date DURING LAST_90_DAYS',
};
async function customer(tenantId: string, customerId: string) {
return apiClient.Customer({
customer_id: customerId,
    refresh_token: '__handled_via_ensureValidToken__', // adapter-level shim — see below
});
}
export async function queryAccountHealth(tenantId: string, customerId: string, range: DateRange) {
await ensureValidToken(tenantId); // refreshes if needed; library uses access_token via shim
const c = await customer(tenantId, customerId);
const rows = await c.report({
entity: 'campaign',
attributes: ['campaign.id', 'campaign.name', 'campaign.status'],
metrics: ['metrics.cost_micros', 'metrics.conversions_value', 'metrics.conversions',
'metrics.clicks', 'metrics.impressions'],
constraints: { 'campaign.status': 'ENABLED' },
date_constant: range === 'last_7_days' ? 'LAST_7_DAYS'
: range === 'last_30_days' ? 'LAST_30_DAYS'
: 'LAST_90_DAYS',
});
return rows; // shape captured in src/adapters/google/types.ts (Zod-validated)
}
// queryPmaxBreakdown, querySearchTerms, queryAuctionInsights, queryQualityScore,
// queryBudgetSplit, queryWeeklyAnomaly — same pattern, GAQL string varies per report.

Adapter-level shim for the npm client: `google-ads-api` expects to manage refresh tokens itself. Wrap it so we feed pre-refreshed access tokens via the `oauth_access_token` field. See google-ads-api#custom-headers — the package exposes `customer({ login_customer_id, customer_id, oauth_access_token })`. (Note: this part of the spec did not survive contact with v23 — the field does not exist there; see “Deviations from spec” above for the shipped approach.)
Replace the body of customer() with:
const accessToken = await ensureValidToken(tenantId);
return apiClient.Customer({
customer_id: customerId,
login_customer_id: customerId, // for MCC; same as customer_id for direct accounts
oauth_access_token: accessToken,
});

D4. Customer ID resolution
File: src/adapters/google/customers.ts
On first use after OAuth, we don’t yet know the customer ID. Add a one-shot resolution:
export async function resolveCustomerId(tenantId: string): Promise<string> {
const [row] = await db.select().from(platformCredentials)
.where(and(eq(platformCredentials.tenantId, tenantId), eq(platformCredentials.platform, 'google')));
if (!row) throw new Error('no_google_credentials');
if (row.accountId) return row.accountId;
const accessToken = await ensureValidToken(tenantId);
const res = await fetch('https://googleads.googleapis.com/v17/customers:listAccessibleCustomers', {
headers: {
'Authorization': `Bearer ${accessToken}`,
'developer-token': developerToken,
},
});
const json = (await res.json()) as { resourceNames: string[] };
const first = json.resourceNames[0]?.split('/')[1];
if (!first) throw new Error('no_accessible_customers');
// Scope the update to the google row so future meta/tiktok rows are untouched.
await db.update(platformCredentials).set({ accountId: first })
.where(and(eq(platformCredentials.tenantId, tenantId), eq(platformCredentials.platform, 'google')));
return first;
}

Phase 2 caveat: this picks the first accessible customer. Phase 3 will let tenants choose between multiple customers.
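`listAccessibleCustomers` returns resource names shaped like `customers/1234567890`; the `split('/')` above is extracting the bare ID. As a standalone sketch (the function name is illustrative, not part of `customers.ts`):

```typescript
// Extracts bare customer IDs from listAccessibleCustomers resource names.
// REST response shape: { resourceNames: ['customers/1234567890', ...] }
function extractCustomerIds(resourceNames: string[]): string[] {
  return resourceNames
    .map((rn) => rn.split('/')[1])
    .filter((id): id is string => Boolean(id));
}
```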
D5. Adapter top-level
File: src/adapters/google/index.ts
import type { PlatformAdapter, DateRange } from '../adapter.interface.js';
import { exchangeAuthCode, ensureValidToken } from './auth.js';
import { resolveCustomerId } from './customers.js';
import { revokeUpstream } from './revoke.js'; // §F
import * as q from './queries.js';
export class GoogleAdapter implements PlatformAdapter {
readonly platform = 'google' as const;
exchangeCode = exchangeAuthCode;
ensureValidToken = ensureValidToken;
async fetchAccountHealth(tenantId: string, range: DateRange) {
const customerId = await resolveCustomerId(tenantId);
return q.queryAccountHealth(tenantId, customerId, range);
}
// fetchCampaigns / fetchSearchTerms / etc — same shape
async revoke(tenantId: string) { await revokeUpstream(tenantId); }
}

D6. Acceptance
File: tests/integration/google-adapter.test.ts
Mock `google-ads-api` and `fetch` (for the OAuth token endpoint) at the module boundary; everything below the adapter is real:
- `exchangeCode` writes a row with non-plaintext token blobs.
- `ensureValidToken` returns the cached access token when expiry > 5min away.
- `ensureValidToken` calls the refresh endpoint and updates the row when expiry < 5min.
- A 401 from Google during refresh produces an `oauth.token_refreshed` audit row with `outcome: 'failure'` and the original error propagates.
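The 5-minute window these checks exercise reduces to a pure predicate. A minimal sketch (the constant and the name `needsRefresh` are illustrative; the real check lives inside `ensureValidToken`):

```typescript
const REFRESH_SKEW_MS = 5 * 60 * 1000;

// True when the stored access token is within 5 minutes of expiry (or past it)
// and should be refreshed before use.
function needsRefresh(tokenExpiresAt: Date, now: Date = new Date()): boolean {
  return tokenExpiresAt.getTime() - now.getTime() < REFRESH_SKEW_MS;
}
```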
Workstream E — The seven tools
E1. Tool template
Each tool follows the same structure: validate input, hit cache, on miss call adapter, return data + cache annotation. To avoid seven near-duplicate files, extract a helper:
File: src/mcp/tools/_helpers.ts
import type { z } from 'zod';
import { readOrFetch } from '../../cache/cache.service.js';
import { getAdapter } from '../../adapters/registry.js';
import { writeAuditEvent } from '../../security/audit-log.service.js';
import type { Platform, ReportType } from '../../cache/ttl-config.js';
export function makeTool<I extends { platform: Platform; dateRange: 'last_7_days' | 'last_30_days' | 'last_90_days' }, O>(opts: {
name: string;
description: string;
inputSchema: z.ZodType<I>;
reportType: ReportType;
supportedPlatforms: ReadonlyArray<Platform>;
fetcher: (adapter: ReturnType<typeof getAdapter>, input: I, ctx: { tenantId: string; requestId: string }) => Promise<O>;
}) {
return {
name: opts.name,
description: opts.description,
inputSchema: opts.inputSchema,
async handler(input: I, ctx: { tenantId: string; requestId: string }) {
if (!opts.supportedPlatforms.includes(input.platform)) {
await writeAuditEvent('mcp.tool_failed', 'failure', { tenantId: ctx.tenantId, requestId: ctx.requestId, tool: opts.name, reason: 'unsupported_platform' });
return { error: 'unsupported_platform', platform: input.platform } as const;
}
const adapter = getAdapter(input.platform);
const { data, cache } = await readOrFetch({
tenantId: ctx.tenantId,
platform: input.platform,
reportType: opts.reportType,
dateRangeKey: input.dateRange,
}, () => opts.fetcher(adapter, input, ctx));
await writeAuditEvent('mcp.tool_called', 'success', { tenantId: ctx.tenantId, requestId: ctx.requestId, tool: opts.name, cache });
return { data, cache };
},
};
}

E2. The seven tools
Files: src/mcp/tools/{account-health,pmax-breakdown,quality-score,search-term-waste,budget-optimizer,auction-insights,weekly-anomaly}.ts
Example — replace the Phase 1 stub:
// src/mcp/tools/account-health.ts
import { z } from 'zod';
import { makeTool } from './_helpers.js';
const Input = z.object({
platform: z.enum(['google', 'meta', 'tiktok']),
dateRange: z.enum(['last_7_days', 'last_30_days', 'last_90_days']),
});
export const accountHealthTool = makeTool({
name: 'get_account_health',
description: 'Spend, ROAS, CPA, CTR — 90-day trends and per-campaign ranking.',
inputSchema: Input,
reportType: 'account_health',
supportedPlatforms: ['google'], // Phase 3 adds meta, tiktok
fetcher: (adapter, input, ctx) => adapter.fetchAccountHealth(ctx.tenantId, input.dateRange),
});

Per-tool platform-support matrix (matches architecture doc §MCP Tools):
| Tool | Phase 2 platforms | Phase 3 additions |
|---|---|---|
| get_account_health | google | meta, tiktok |
| get_pmax_breakdown | google | — |
| get_quality_score | google | — |
| get_search_term_waste | google | meta |
| get_budget_optimizer | google | meta, tiktok |
| get_auction_insights | google | meta |
| get_weekly_anomaly | google | meta, tiktok |
Tools whose `supportedPlatforms` doesn’t yet include meta/tiktok return `{ error: 'unsupported_platform' }` cleanly — Phase 3 only edits those arrays.
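A stripped-down model of that guard (no cache, audit, or Zod; purely illustrative) shows why Phase 3 only needs to edit the arrays:

```typescript
type Platform = 'google' | 'meta' | 'tiktok';

// Simplified version of makeTool's platform check: unsupported platforms
// short-circuit with a typed error envelope and never reach the fetcher.
function dispatch<T>(
  supported: readonly Platform[],
  platform: Platform,
  fetcher: () => T,
) {
  if (!supported.includes(platform)) {
    return { error: 'unsupported_platform', platform } as const;
  }
  return { data: fetcher(), cache: 'miss' } as const;
}
```

Widening `supported` from `['google']` to `['google', 'meta']` is the entire per-tool change.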
E3. Registry update
File: src/mcp/server.ts
Replace the Phase 1 TOOLS array with all seven imports — pingTool stays for monitoring.
E4. Acceptance
- For each tool: a happy-path test with a connected tenant + mocked adapter returns `{ data, cache: 'miss' }` on the first call and `{ data, cache: 'hit' }` on the second.
- `unsupported_platform` test: every tool called with `meta` returns the error envelope without calling the adapter.
- An end-to-end test through the MCP transport hitting all seven tools sequentially produces exactly seven `mcp.tool_called` audit rows.
Workstream F — Token revocation endpoint
F1. Upstream revoke
File: src/adapters/google/revoke.ts
import { and, eq } from 'drizzle-orm';
import { db } from '../../db/index.js';
import { platformCredentials } from '../../db/schema.js';
import { decryptToken } from '../../security/credentials.service.js';
export async function revokeUpstream(tenantId: string): Promise<void> {
const [row] = await db.select().from(platformCredentials)
.where(and(eq(platformCredentials.tenantId, tenantId), eq(platformCredentials.platform, 'google')));
if (!row) return;
// Revocation is best-effort — we still wipe the row even if Google is
// unreachable or no refresh token was ever stored.
if (row.refreshTokenEnc) {
const refresh = await decryptToken(tenantId, row.refreshTokenEnc);
try {
await fetch(`https://oauth2.googleapis.com/revoke?token=${encodeURIComponent(refresh)}`, { method: 'POST' });
} catch { /* swallow — wipe still happens */ }
}
await db.delete(platformCredentials)
.where(and(eq(platformCredentials.tenantId, tenantId), eq(platformCredentials.platform, 'google')));
}

F2. Admin route
Add to src/auth/admin-routes.ts:
fastify.post('/admin/tenants/:tenantId/disconnect/:platform', async (req, reply) => {
if (req.headers['x-admin-token'] !== adminToken) return reply.code(401).send();
const P = z.object({ tenantId: z.string().uuid(), platform: z.enum(['google', 'meta', 'tiktok']) });
const p = P.parse(req.params);
const adapter = getAdapter(p.platform);
await adapter.revoke(p.tenantId);
await writeAuditEvent('oauth.token_revoked', 'success', { tenantId: p.tenantId, platform: p.platform });
return { status: 'disconnected' };
});

F3. Acceptance
- After disconnect, the `platform_credentials` row for that tenant+platform is gone.
- A subsequent tool call returns `no_google_credentials` (or the equivalent typed error).
- An `oauth.token_revoked` audit row exists.
Workstream G — DEK rotation tooling
G1. Rotation script
File: scripts/rotate-dek.ts (run as one-shot via tsx)
import { eq } from 'drizzle-orm';
import { randomBytes } from 'node:crypto';
import { db } from '../src/db/index.js';
import { tenantDeks, platformCredentials } from '../src/db/schema.js';
import { decryptToken, encryptToken } from '../src/security/credentials.service.js';
const tenantId = process.argv[2];
if (!tenantId) throw new Error('Usage: tsx scripts/rotate-dek.ts <tenantId>');
await db.transaction(async (tx) => {
// 1. Read all encrypted artifacts under the OLD DEK.
const [creds] = await tx.select().from(platformCredentials).where(eq(platformCredentials.tenantId, tenantId));
const accessPlain = creds ? await decryptToken(tenantId, creds.accessTokenEnc) : null;
const refreshPlain = creds?.refreshTokenEnc ? await decryptToken(tenantId, creds.refreshTokenEnc) : null;
// 2. Destroy the cached + persisted DEK so the next encrypt() call generates a new one.
await tx.delete(tenantDeks).where(eq(tenantDeks.tenantId, tenantId));
// 3. Clear the in-memory cache.
(await import('../src/security/credentials.service.js') as any).dekCache.delete?.(tenantId);
// 4. Re-encrypt.
if (creds && accessPlain) {
const accessEnc = await encryptToken(tenantId, accessPlain);
const refreshEnc = refreshPlain ? await encryptToken(tenantId, refreshPlain) : creds.refreshTokenEnc;
await tx.update(platformCredentials)
.set({ accessTokenEnc: accessEnc, refreshTokenEnc: refreshEnc, updatedAt: new Date() })
.where(eq(platformCredentials.tenantId, tenantId));
}
await tx.update(tenantDeks).set({ rotatedAt: new Date() }).where(eq(tenantDeks.tenantId, tenantId));
});
console.log(`Rotated DEK for tenant ${tenantId}`);

G2. Online-rotation note
For zero-downtime rotation across all tenants, run the script per tenant inside a loop. Each tenant’s transaction is independent — concurrent reads of that tenant’s tokens during rotation will block on the row-level lock acquired by UPDATE, so requests in-flight will simply wait a few ms.
For a future KEK rotation (rewrap every DEK without changing any DEK), the schema’s kekVersion column is the hook: write a separate script that reads with KEK v1, re-encrypts the DEK row under KEK v2, and bumps the version. Phase 5 owns that procedure.
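To make the KEK-rotation hook concrete: if `dek_enc` is AES-256-GCM-wrapped with an `iv | tag | ciphertext` layout (an assumption for this sketch; the real layout lives in `credentials.service.ts`), the rewrap is a pure function of the blob and the two KEKs, and the DEK plaintext never changes:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Assumed blob layout: iv (12 bytes) | auth tag (16 bytes) | ciphertext.
function wrapDek(dek: Buffer, kek: Buffer): Buffer {
  const iv = randomBytes(12);
  const cipher = createCipheriv('aes-256-gcm', kek, iv);
  const ct = Buffer.concat([cipher.update(dek), cipher.final()]);
  return Buffer.concat([iv, cipher.getAuthTag(), ct]);
}

function unwrapDek(blob: Buffer, kek: Buffer): Buffer {
  const iv = blob.subarray(0, 12);
  const tag = blob.subarray(12, 28);
  const ct = blob.subarray(28);
  const d = createDecipheriv('aes-256-gcm', kek, iv);
  d.setAuthTag(tag);
  return Buffer.concat([d.update(ct), d.final()]);
}

// KEK rotation: unwrap under v1, rewrap under v2. The DEK itself is untouched,
// so no platform_credentials row needs re-encrypting.
function rewrapDek(blob: Buffer, kekV1: Buffer, kekV2: Buffer): Buffer {
  return wrapDek(unwrapDek(blob, kekV1), kekV2);
}
```

The Phase 5 script would run `rewrapDek` per `tenant_deks` row and bump `kekVersion` in the same UPDATE.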
G3. Acceptance
- Encrypt a token, run the rotation script for that tenant, decrypt — round-trips.
- Inspect `tenant_deks.dek_enc`: byte-for-byte different before vs after rotation.
- Concurrent tool call during rotation: completes successfully (waits on lock, doesn’t fail).
Workstream H — Cache hit/miss metrics
H1. Metrics module
File: src/cache/metrics.ts
import type { CacheKey } from './cache.service.js';
interface Counter { hit: number; miss: number }
const counters = new Map<string, Counter>(); // key: `${platform}/${reportType}`
export function recordCacheEvent(kind: 'hit' | 'miss', key: Pick<CacheKey, 'platform' | 'reportType'>): void {
const k = `${key.platform}/${key.reportType}`;
const c = counters.get(k) ?? { hit: 0, miss: 0 };
c[kind] += 1;
counters.set(k, c);
}
export function snapshotCacheMetrics(): Record<string, Counter & { hitRate: number }> {
const out: Record<string, Counter & { hitRate: number }> = {};
for (const [k, c] of counters) {
const total = c.hit + c.miss;
out[k] = { ...c, hitRate: total === 0 ? 0 : c.hit / total };
}
return out;
}

H2. Admin route
Add to src/auth/admin-routes.ts:
fastify.get('/admin/metrics/cache', async (req, reply) => {
if (req.headers['x-admin-token'] !== adminToken) return reply.code(401).send();
return snapshotCacheMetrics();
});

H3. Acceptance
- After 3 hits and 1 miss on `google/account_health`, `GET /admin/metrics/cache` returns `{ "google/account_health": { hit: 3, miss: 1, hitRate: 0.75 } }`.
- Counters are process-local — Phase 5 will replace them with a Prometheus exporter or push them to an OTel collector.
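The expected `hitRate` follows directly from the counter arithmetic; a self-contained restatement of the same computation:

```typescript
interface Counter { hit: number; miss: number }

// Mirrors snapshotCacheMetrics' per-key hitRate: hits over total, 0 when empty.
function hitRate(c: Counter): number {
  const total = c.hit + c.miss;
  return total === 0 ? 0 : c.hit / total;
}
```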
Workstream I — Tests & verification
I1. Required new test files
| File | Covers |
|---|---|
| `tests/integration/cache.test.ts` | Workstream A |
| `tests/integration/credentials.test.ts` | Workstream B |
| `tests/integration/oauth.test.ts` | Workstream C |
| `tests/integration/google-adapter.test.ts` | Workstream D |
| `tests/integration/tools-google.test.ts` | Workstream E (one suite per tool) |
| `tests/integration/revocation.test.ts` | Workstream F |
| `tests/integration/dek-rotation.test.ts` | Workstream G |
| `tests/integration/cache-metrics.test.ts` | Workstream H |
I2. Mock boundary
The mock layer is `google-ads-api`’s `customer().report(...)` and the `fetch` calls inside `auth.ts` and `revoke.ts`. Everything below — Drizzle, crypto, cache, Fastify, OAuth state, audit log — runs against real Postgres in CI.
A small `tests/_mocks/google.ts` helper provides:
- `mockTokenExchange({ access_token, refresh_token, expires_in })`
- `mockTokenRefresh(...)`
- `mockReport(reportType, rows)`
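One way `mockTokenExchange` could be built without any vitest machinery (a sketch only; `jsonFetchStub` is hypothetical and the real helper may differ) is to hand back `fetch`-compatible `Response` objects, which requires Node 18+:

```typescript
// Builds a fetch-compatible stub that always resolves with the given JSON
// body, mimicking Google's token endpoint.
function jsonFetchStub(body: unknown, status = 200) {
  return async (): Promise<Response> =>
    new Response(JSON.stringify(body), {
      status,
      headers: { 'content-type': 'application/json' },
    });
}

const mockTokenExchange = (tokens: {
  access_token: string;
  refresh_token: string;
  expires_in: number;
}) => jsonFetchStub(tokens);
```

The suite would then install the stub via `vi.stubGlobal('fetch', ...)` or pass it into the module under test.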
I3. CI extension
The Phase 1 CI workflow already runs `npm test`. Phase 2 adds:
- A secrets-setup step that injects a fake Google client secret + dev token (`scripts/dev-secrets.sh` extended).
- A migration step that creates the `tenant_deks` table.
- No live-API job in CI by default.
§11 — Definition of Done (full checklist)
A. Cache layer
- `metric_cache` unique index added. (PR-1)
- `pg_advisory_xact_lock` prevents parallel fetcher invocations for the same key. (PR-1)
- TTL config covers all seven Google report types. (PR-1)
B. Envelope encryption
- `tenant_deks` table created with KEK-wrapped DEKs. (PR-1 — table + RLS only; service in PR-2)
- `encryptToken`/`decryptToken` round-trip; cross-tenant decrypt fails. (PR-2)
- `destroyDek` makes a tenant’s tokens permanently unreadable. (PR-2)
- Plaintext DEKs only ever live in process memory. (PR-2)
C. OAuth flow
- `/auth/google/start` returns 302 with all required PKCE params. (PR-4)
- `/auth/google/callback` exchanges the code, encrypts and stores tokens. (PR-4)
- State replay attack (using the same `state` twice) fails. (PR-4)
- `oauth.flow_started`, `oauth.flow_completed`, `oauth.flow_failed` audit rows on the right paths. (PR-4)
D. Google Ads adapter
- OAuth code exchange persists encrypted tokens and `tokenExpiresAt`. (PR-4)
- `ensureValidToken` returns the cached token when expiry > 5min away. (PR-4)
- `ensureValidToken` lazily refreshes when expiry < 5min and writes an audit row. (PR-4)
- First fetch resolves and caches the customer ID. (PR-5)
- All seven query functions return Zod-validated typed data. (PR-5)
E. Seven MCP tools
- All seven tools registered in the registry, replacing the Phase 1 stub. (PR-6)
- Each tool returns `{ data, cache: 'hit' | 'miss' }`. (PR-6)
- `meta` and `tiktok` requests return `unsupported_platform` cleanly. (PR-6)
- Each tool’s call produces an `mcp.tool_called` audit row including the `cache` field. (PR-6)
F. Token revocation
- `POST /admin/tenants/:id/disconnect/:platform` removes the `platform_credentials` row. (PR-7)
- Best-effort upstream revoke is attempted; failure does not block the local wipe. (PR-7)
- `oauth.token_revoked` audit row on success. (PR-7)
G. DEK rotation
- `scripts/rotate-dek.ts` re-encrypts a tenant’s tokens under a fresh DEK. (PR-7)
- `tenant_deks.dek_enc` byte-changes after rotation. (PR-7)
- Concurrent tool call during rotation does not fail. (PR-7 — outer transaction with `FOR UPDATE` on the `platform_credentials` row serializes concurrent writers; reads outside the tx see either old or new state cleanly.)
H. Cache metrics
- `GET /admin/metrics/cache` returns hit/miss/hitRate per platform+report. (PR-7)
I. Tests
- All new test files exist and pass in CI. (77/77 across 14 files)
- `secrets/dev` script seeds Google client secret + developer token placeholders. (PR-3)
§12 — Out of scope (deferred)
| Item | Phase |
|---|---|
| Meta + TikTok adapters and routes | 3 |
| Multi-customer-id selection UI / API | 3 |
| Eager refresh cron / Inngest sync functions | 4 |
| KEK rotation procedure (separate from DEK rotation) | 5 |
| Prometheus / OTel exporter (replace in-memory counters) | 5 |
| nginx OAuth callback IP allow-list | 5 |
| Penetration test of OAuth and admin routes | 5 |
§13 — Manual smoke test
# 0. Phase 1 already up: docker compose up -d postgres, secrets seeded, server running on 127.0.0.1:3001.
# 1. Apply Phase 2 migration (adds tenant_deks + cache unique index)
npm run db:migrate
# 2. Seed a tenant + API key, capture both
node scripts/seed-tenant.mjs
export KEY=...; export TENANT_ID=...
# 3. Start the OAuth flow — should redirect (302) to Google
curl -i -H "X-Api-Key: $KEY" http://127.0.0.1:3001/auth/google/start
# → HTTP/1.1 302 ... Location: https://accounts.google.com/o/oauth2/v2/auth?...code_challenge=...
# 4. Complete consent in a browser; the callback writes to platform_credentials.
# Verify the row is encrypted (no plaintext "ya29...." substring):
psql "postgresql://mcp_admin:dev_only_password@127.0.0.1:5432/deneva_mcp" \
-c "SELECT length(access_token_enc), substring(access_token_enc, 1, 20) FROM platform_credentials;"
# 5. Call get_account_health — first call should be a cache MISS and a real Google fetch
curl -X POST http://127.0.0.1:3001/mcp \
-H "X-Api-Key: $KEY" -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"get_account_health","arguments":{"platform":"google","dateRange":"last_7_days"}}}'
# → response includes "cache":"miss"
# 6. Same call again — cache HIT
curl -X POST http://127.0.0.1:3001/mcp -H "X-Api-Key: $KEY" -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"get_account_health","arguments":{"platform":"google","dateRange":"last_7_days"}}}'
# → response includes "cache":"hit"
# 7. Try the same tool with platform=meta — clean unsupported_platform error
curl -X POST http://127.0.0.1:3001/mcp -H "X-Api-Key: $KEY" -H "Content-Type: application/json" \
-d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"get_account_health","arguments":{"platform":"meta","dateRange":"last_7_days"}}}'
# → {"error":"unsupported_platform","platform":"meta"}
# 8. Inspect cache metrics
curl -H "x-admin-token: $ADMIN_TOKEN" http://127.0.0.1:3001/admin/metrics/cache
# → {"google/account_health":{"hit":1,"miss":1,"hitRate":0.5}}
# 9. Rotate the DEK
npx tsx scripts/rotate-dek.ts $TENANT_ID
# → "Rotated DEK for tenant ..."
# 10. Re-run step 6 — the cached row was encrypted with the OLD DEK, but cache row
# stores plaintext metric data, not tokens. Tokens are decrypted via the NEW DEK
# without issue. Expect the same response.
# 11. Disconnect the tenant
curl -X POST -H "x-admin-token: $ADMIN_TOKEN" http://127.0.0.1:3001/admin/tenants/$TENANT_ID/disconnect/google
# → {"status":"disconnected"}
psql "..." -c "SELECT count(*) FROM platform_credentials WHERE tenant_id = '$TENANT_ID';"
# → 0
# 12. Inspect the audit trail
psql "..." -c "SELECT event_type, outcome, count(*) FROM audit_log WHERE tenant_id = '$TENANT_ID' GROUP BY 1,2 ORDER BY 1,2;"
# → rows for: oauth.flow_started, oauth.flow_completed, oauth.token_refreshed (if expiry crossed),
# mcp.tool_called (×2), api_key.rotated (if you rotated), oauth.token_revoked

If every step produces the expected outcome, Phase 2 is shipped. Move on to Phase 3 (Meta + TikTok).