
Deneva MCP — Ubuntu server setup

Step-by-step setup for a single Ubuntu 22.04 / 24.04 LTS host. Phase 1 only — this brings up the secure foundation with two stub MCP tools. Real platform integrations land in Phase 2+.

Reading time: ~30 minutes. Hands-on time: ~45–60 minutes for someone familiar with Ubuntu and systemd, longer if it’s your first time.


What you’ll have at the end

  • Ubuntu host running PostgreSQL 16 (localhost only).
  • Node.js 22 LTS via NodeSource.
  • Deneva MCP service running under systemd as a non-root user, secrets encrypted with systemd-creds, listening on 127.0.0.1:3001.
  • nginx terminating TLS at :443, forwarding to the app (TLS 1.3 only, HSTS, security headers).
  • UFW allowing only 22, 80, 443 from the internet.
  • One tenant + one API key seeded; curl to https://your-domain.com/mcp returns 200.

Prerequisites

  • A fresh Ubuntu 22.04 or 24.04 LTS server you control (VM, bare-metal, cloud — your choice).
  • A domain name pointing to that server (A record). Replace your-domain.com everywhere below with your real domain.
  • A non-root SSH user with sudo access. Do not log in as root for these steps.
  • Outbound internet access (the steps install packages and request a Let’s Encrypt certificate).

Step 1 — System updates and base packages

sudo apt update && sudo apt upgrade -y
sudo apt install -y \
  curl ca-certificates gnupg \
  git build-essential \
  ufw \
  nginx \
  postgresql-client \
  jq

build-essential is needed to build native Node addons; postgresql-client gives us psql for one-time bootstrap operations against the DB.


Step 2 — Install Node.js 22 LTS (NodeSource)

curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs

node --version   # → v22.x.x
npm --version    # → 10.x.x

We install Node from NodeSource (not Ubuntu’s default repos) to get the current LTS — Ubuntu 22.04 ships an older Node by default.


Step 3 — Install PostgreSQL 16

Ubuntu 22.04’s default Postgres is 14; we want 16 for the RLS policy syntax we use.

# Add the official Postgres apt repo
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc \
  --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
echo "deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] \
http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" \
  | sudo tee /etc/apt/sources.list.d/pgdg.list
sudo apt update

# Install
sudo apt install -y postgresql-16

Verify:

sudo systemctl status postgresql   # should be active (running)
sudo -u postgres psql -c "SELECT version();"

Bind Postgres to localhost

Edit /etc/postgresql/16/main/postgresql.conf and set:

listen_addresses = '127.0.0.1'
ssl = on

The default Ubuntu install ships a self-signed snake-oil cert at /etc/ssl/certs/ssl-cert-snakeoil.pem. That keeps ssl=on working out of the box. For a real production deploy with Postgres on a different host you’d swap this for a CA-signed cert; on the single-host setup we’re documenting here, snake-oil + 127.0.0.1 is fine because traffic never leaves the host.

Edit /etc/postgresql/16/main/pg_hba.conf and ensure local connections require a password (replace any trust lines for local with scram-sha-256):

# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       all                 peer
host    all       all   127.0.0.1/32  scram-sha-256
host    all       all   ::1/128       scram-sha-256

Restart to apply (changing listen_addresses requires a full restart, not a reload):

sudo systemctl restart postgresql

Step 4 — Create database, roles, and apply schema

4a. Set passwords

Generate two strong passwords and an HMAC secret (32+ characters each). You’ll need all three during setup; of the two database passwords, only DB_PASSWORD ends up in the running service’s encrypted credentials.

ADMIN_PW="$(openssl rand -base64 32)"
APP_PW="$(openssl rand -base64 32)"
API_KEY_HMAC_SECRET="$(openssl rand -base64 32)"

echo "Save these somewhere safe (a password manager) — you'll need them again later:"
echo "  ADMIN_PW=$ADMIN_PW"
echo "  APP_PW=$APP_PW"
echo "  API_KEY_HMAC_SECRET=$API_KEY_HMAC_SECRET"

4b. Create database + admin role

sudo -u postgres psql <<SQL
CREATE ROLE mcp_admin LOGIN PASSWORD '$ADMIN_PW';
CREATE DATABASE deneva_mcp OWNER mcp_admin;
SQL

Quick sanity check:

PGPASSWORD="$ADMIN_PW" psql -h 127.0.0.1 -U mcp_admin -d deneva_mcp -c "SELECT 1"

Why host not local: the mcp_admin role doesn’t have an OS account on the host, so peer authentication via the local socket would fail. Forcing 127.0.0.1 routes through the host rule with scram-sha-256.
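You can also confirm that the ssl = on setting from Step 3 is actually in effect on this path; libpq’s default sslmode=prefer means the loopback connection should have negotiated TLS. A quick probe (assumes $ADMIN_PW is still set in your shell):

```shell
# ssl should read "t"; version shows the negotiated TLS version for this session.
PGPASSWORD="$ADMIN_PW" psql -h 127.0.0.1 -U mcp_admin -d deneva_mcp \
  -c "SELECT ssl, version FROM pg_stat_ssl WHERE pid = pg_backend_pid();"
```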


Step 5 — Create the service user and lay out files

# Dedicated non-root system user, no shell, no home directory.
sudo useradd --system --no-create-home --shell /usr/sbin/nologin deneva-mcp

# Application install path, owned by root, readable by deneva-mcp.
sudo mkdir -p /opt/deneva-mcp
sudo chown root:deneva-mcp /opt/deneva-mcp
sudo chmod 750 /opt/deneva-mcp

# Log directory — only the service user writes here.
sudo mkdir -p /var/log/deneva-mcp
sudo chown deneva-mcp:deneva-mcp /var/log/deneva-mcp
sudo chmod 750 /var/log/deneva-mcp

# Encrypted credentials directory, root-owned.
sudo mkdir -p /etc/deneva-mcp/creds
sudo chown root:root /etc/deneva-mcp /etc/deneva-mcp/creds
sudo chmod 700 /etc/deneva-mcp/creds

Step 6 — Get the source onto the server

# As your sudo user, clone into a working directory you control:
git clone <your-repo-url> ~/deneva-mcp-src
cd ~/deneva-mcp-src

# Install dependencies + run the build.
npm ci
npm run build

# Copy the runtime artifacts into /opt/deneva-mcp.
sudo cp -r dist node_modules package.json /opt/deneva-mcp/
sudo chown -R root:deneva-mcp /opt/deneva-mcp
sudo chmod -R 750 /opt/deneva-mcp

The clone lives in your home directory; only the built artifacts go to /opt/deneva-mcp. The service user does not have shell access, so it cannot run npm directly.


Step 7 — Encrypt production secrets with systemd-creds

The repo ships scripts/encrypt-prod-secrets.sh for this. It prompts for each secret value, encrypts it with systemd-creds encrypt, and writes *.cred files to /etc/deneva-mcp/creds/.

sudo bash ~/deneva-mcp-src/scripts/encrypt-prod-secrets.sh

It will ask for, in order:

  • CREDENTIAL_KEK — a fresh 32-byte random value: openssl rand -base64 32
  • API_KEY_HMAC_SECRET — a fresh 32-byte random value: openssl rand -base64 32
  • DB_PASSWORD — the $APP_PW you generated in Step 4a
  • DB_ADMIN_PASSWORD — the $ADMIN_PW you generated in Step 4a
  • INNGEST_SIGNING_KEY — a fresh 32-byte random value (real one wired in Phase 4): openssl rand -base64 32
  • GOOGLE_CLIENT_SECRET — the OAuth client secret from Google Cloud Console → Credentials. Required by Phase 2 PR-3 onwards.
  • GOOGLE_DEVELOPER_TOKEN — the Google Ads API developer token from ads.google.com/aw/apicenter. Until the approval lands, paste a placeholder like pending_dev_token — only PR-5’s GAQL calls actually use it.

When the script finishes, /etc/deneva-mcp/creds/ should contain seven .cred files, all root-owned, mode 600.

sudo ls -la /etc/deneva-mcp/creds/

What this gives you: the encrypted blobs are bound to the host’s TPM (or host key). Move the disk to another machine and the secrets cannot be decrypted. systemd reads them at service start and exposes the plaintext in /run/credentials/deneva-mcp.service/ — a tmpfs mount that exists only for the service’s lifetime.
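If you want to see the mechanism outside the repo script, systemd-creds can round-trip a throwaway value directly. (The tool requires systemd 250 or newer; check systemctl --version before relying on it, especially on 22.04 hosts.) A sketch, not part of the official setup:

```shell
# Encrypt a throwaway value the same way the script does, then decrypt it back.
echo -n "hello" | sudo systemd-creds encrypt --name=SMOKE_TEST - /tmp/SMOKE_TEST.cred
sudo systemd-creds decrypt /tmp/SMOKE_TEST.cred -

# Reports whether encryption on this host is bound to a TPM2 device.
sudo systemd-creds has-tpm2

sudo rm /tmp/SMOKE_TEST.cred
```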


Step 8 — Run migrations and apply RLS / roles

Migrations require DB_ADMIN_PASSWORD. Set up a temporary secrets/ dir in your clone (NOT in /opt/deneva-mcp — mcp_app doesn’t need it at runtime):

cd ~/deneva-mcp-src
mkdir -p secrets
echo -n "$ADMIN_PW" > secrets/DB_ADMIN_PASSWORD && chmod 600 secrets/DB_ADMIN_PASSWORD
echo -n "$APP_PW" > secrets/DB_PASSWORD && chmod 600 secrets/DB_PASSWORD
echo -n "$API_KEY_HMAC_SECRET" > secrets/API_KEY_HMAC_SECRET && chmod 600 secrets/API_KEY_HMAC_SECRET

# Run migrations as mcp_admin.
npm run db:migrate

# Apply role separation (creates mcp_app + grants).
# Must run as the postgres superuser — mcp_admin lacks CREATEROLE.
sudo -u postgres psql -d deneva_mcp \
  -v app_password="$APP_PW" -f src/db/roles.sql

# Apply RLS:
psql "postgresql://mcp_admin:${ADMIN_PW}@127.0.0.1:5432/deneva_mcp" \
  -f src/db/rls.sql

Sanity-check the table list:

PGPASSWORD="$ADMIN_PW" psql -h 127.0.0.1 -U mcp_admin -d deneva_mcp -c '\dt'
# → tenants, api_keys, platform_credentials, oauth_states, metric_cache, audit_log, sync_log
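For intuition about what src/db/rls.sql enforces, a tenant-isolation policy generally has this shape. This is an illustrative sketch only: the real policies live in src/db/rls.sql, and the setting name app.tenant_id is an assumption.

```sql
-- Sketch of the pattern; table and setting names are illustrative.
ALTER TABLE api_keys ENABLE ROW LEVEL SECURITY;

-- The service sets the tenant on each connection, e.g.
--   SET app.tenant_id = '<tenant uuid>';
-- and the policy then scopes every query to that tenant's rows.
CREATE POLICY tenant_isolation ON api_keys
  USING (tenant_id = current_setting('app.tenant_id')::uuid);
```

Note that table owners bypass RLS by default, which is why the troubleshooting table below says to test as mcp_app, never as mcp_admin.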

Step 9 — Seed the first tenant + API key

node scripts/seed-tenant.mjs "acme-corp"
# → Tenant: <uuid>
# → API key: <43-char base64url string>
# → Store this key now — it cannot be retrieved later.

Save the API key in a password manager now. The DB stores only the HMAC hash; there is no way to recover it.
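The key format and the hash-only storage can be illustrated with openssl. This is a sketch of the scheme, not the server’s actual code; the variable names are ours:

```shell
# A 32-byte random key, base64url-encoded: 43 chars, no padding.
API_KEY=$(openssl rand -base64 32 | tr '+/' '-_' | tr -d '=\n')
echo "${#API_KEY}"   # → 43

# The DB keeps only an HMAC digest of the key. Verifying a presented key
# means recomputing the HMAC with the server-side secret and comparing.
HMAC_SECRET="example-secret"   # illustrative; the real one is API_KEY_HMAC_SECRET
printf '%s' "$API_KEY" | openssl dgst -sha256 -hmac "$HMAC_SECRET" -hex
```

Because the digest is one-way, a DB leak exposes no usable keys, and a lost key can only be rotated, never recovered.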


Step 10 — Install the systemd unit

Create /etc/systemd/system/deneva-mcp.service:

[Unit]
Description=Deneva MCP server
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=simple
User=deneva-mcp
WorkingDirectory=/opt/deneva-mcp
ExecStart=/usr/bin/node dist/index.js
Environment=NODE_ENV=production
Environment=SYSTEMD_UNIT=deneva-mcp.service
Environment=PORT=3001
Environment=HOST=127.0.0.1

# Pin UTC so the pg TIMESTAMP type parser (src/db/index.ts) round-trips
# correctly. The parser already compensates for non-UTC hosts, but pinning
# UTC means the server clock and the stored timestamps match exactly.
Environment=TZ=UTC

# Phase 2 (Google Ads). The client ID is a public value (visible in browser
# OAuth URLs). The redirect URI must EXACTLY match what is registered in
# Google Cloud Console → Credentials → your OAuth 2.0 client.
Environment=GOOGLE_CLIENT_ID=<paste-public-client-id>
Environment=GOOGLE_OAUTH_REDIRECT_URI=https://app.deneva.io/auth/google/callback

# Encrypted credentials — decrypted into /run/credentials/deneva-mcp.service/<NAME>
LoadCredentialEncrypted=CREDENTIAL_KEK:/etc/deneva-mcp/creds/CREDENTIAL_KEK.cred
LoadCredentialEncrypted=API_KEY_HMAC_SECRET:/etc/deneva-mcp/creds/API_KEY_HMAC_SECRET.cred
LoadCredentialEncrypted=DB_PASSWORD:/etc/deneva-mcp/creds/DB_PASSWORD.cred
LoadCredentialEncrypted=INNGEST_SIGNING_KEY:/etc/deneva-mcp/creds/INNGEST_SIGNING_KEY.cred
LoadCredentialEncrypted=GOOGLE_CLIENT_SECRET:/etc/deneva-mcp/creds/GOOGLE_CLIENT_SECRET.cred
LoadCredentialEncrypted=GOOGLE_DEVELOPER_TOKEN:/etc/deneva-mcp/creds/GOOGLE_DEVELOPER_TOKEN.cred

# OS hardening (Phase 1 baseline; Phase 5 adds more flags)
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
ReadWritePaths=/var/log/deneva-mcp
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
CapabilityBoundingSet=
LockPersonality=true
RestrictRealtime=true
RestrictNamespaces=true

# Restart policy
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Activate it:

sudo systemctl daemon-reload
sudo systemctl enable --now deneva-mcp
sudo systemctl status deneva-mcp --no-pager

The status output should show active (running). Tail the logs:

sudo journalctl -u deneva-mcp -f

Look for deneva-mcp listening near the top.


Step 11 — Smoke-test the service (before nginx)

The service listens on 127.0.0.1:3001. From the same host:

# /health is unauthenticated
curl -s http://127.0.0.1:3001/health
# → {"ok":true,"version":"0.1.0","uptimeSec":N}

# /mcp without a key → 401
curl -i -X POST http://127.0.0.1:3001/mcp \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"ping","arguments":{}}}'
# → HTTP/1.1 401

# /mcp with the seeded key → 200.
# The Accept header is REQUIRED by the Streamable HTTP transport — it negotiates
# between a JSON response and an SSE stream and rejects clients that don't list
# both with 406.
export KEY="<paste the API key from Step 9>"
curl -i -X POST http://127.0.0.1:3001/mcp \
  -H "X-Api-Key: $KEY" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"ping","arguments":{}}}'
# → HTTP/1.1 200

Confirm the audit trail:

PGPASSWORD="$ADMIN_PW" psql -h 127.0.0.1 -U mcp_admin -d deneva_mcp \
  -c "SELECT event_type, outcome, count(*) FROM audit_log GROUP BY 1,2 ORDER BY 1,2;"

You should see at least:

api_key.auth_success | success | 1
api_key.auth_failure | failure | 1
mcp.tool_called      | success | 1

Step 12 — UFW firewall

Block everything inbound except 22, 80, 443. Order matters here — set defaults first, then open the ports, then enable.

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp    # SSH
sudo ufw allow 80/tcp    # Let's Encrypt HTTP-01 challenge
sudo ufw allow 443/tcp   # HTTPS — the only "real" public port
sudo ufw enable
sudo ufw status verbose

In production, restrict 22/tcp to your office / VPN IP range. Leaving SSH open to the world is a huge attack surface.
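A sketch of that restriction, assuming 203.0.113.0/24 stands in for your office or VPN range:

```shell
# Replace the world-open SSH rule with one scoped to a trusted range.
sudo ufw delete allow 22/tcp
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw status numbered
```

Run this from a session inside that range (or a console), so you cannot lock yourself out mid-change.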


Step 13 — TLS certificate (Let’s Encrypt)

Install certbot and the nginx plugin from the Ubuntu repos (the snap-based install also works if you prefer it; either way you get the same certbot CLI):

sudo apt install -y certbot python3-certbot-nginx

Provision a cert. Replace the placeholder values with your real domain and email:

sudo certbot certonly --nginx \
  -d your-domain.com \
  -m you@example.com --agree-tos --no-eff-email

This writes /etc/letsencrypt/live/your-domain.com/fullchain.pem (and privkey.pem). Renewal is automated — certbot.timer runs twice a day.


Step 14 — nginx reverse proxy

Replace any default /etc/nginx/sites-enabled/default:

sudo rm -f /etc/nginx/sites-enabled/default

Create /etc/nginx/conf.d/deneva-mcp-rate.conf (rate-limit zones must live in http {} scope, which the conf.d/ include puts you in):

limit_req_zone $binary_remote_addr zone=mcp_global:10m rate=30r/m;
limit_req_zone $binary_remote_addr zone=mcp_auth:10m rate=5r/m;

Create /etc/nginx/sites-available/deneva-mcp:

server {
    listen 443 ssl http2;
    server_name your-domain.com;

    ssl_certificate     /etc/letsencrypt/live/your-domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your-domain.com/privkey.pem;
    ssl_protocols TLSv1.3;
    # Do NOT add ssl_ciphers here — TLS 1.3 suites are managed by OpenSSL,
    # not nginx. Listing them in ssl_ciphers causes "no cipher match" at startup.
    ssl_prefer_server_ciphers off;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;

    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
    add_header X-Frame-Options DENY always;
    add_header X-Content-Type-Options nosniff always;
    add_header Referrer-Policy no-referrer always;
    add_header Permissions-Policy "geolocation=(), microphone=(), camera=()" always;
    server_tokens off;
    client_max_body_size 64k;

    location /mcp {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_hide_header X-Powered-By;
        limit_req zone=mcp_global burst=30 nodelay;
    }

    location /health {
        proxy_pass http://127.0.0.1:3001;
    }

    # Phase 2 will route /auth/* — strict rate limit on it from day one.
    location /auth/ {
        proxy_pass http://127.0.0.1:3001;
        limit_req zone=mcp_auth burst=5 nodelay;
    }

    # /admin/* is intentionally NOT proxied here — Phase 5 adds it back behind
    # an IP allow-list. In Phase 1 you reach /admin/* only from the host itself
    # (e.g. via `curl http://127.0.0.1:3001/admin/...`) or via SSH port-forward.

    # Anything else: black-hole.
    location / {
        return 444;
    }
}

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$host$request_uri;
}

Enable it and reload:

sudo ln -s /etc/nginx/sites-available/deneva-mcp /etc/nginx/sites-enabled/deneva-mcp
sudo nginx -t   # syntax check
sudo systemctl reload nginx

Step 15 — End-to-end smoke test through nginx

# /health over TLS
curl -s https://your-domain.com/health
# → {"ok":true,...}

# /mcp through nginx. The Accept header is required here too — without it the
# Streamable HTTP transport answers 406 (see Step 11).
export KEY="<the API key from Step 9>"
curl -i -X POST https://your-domain.com/mcp \
  -H "X-Api-Key: $KEY" -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"ping","arguments":{}}}'
# → HTTP/1.1 200, JSON body containing "ok":true

If both succeed, Phase 1 is shipping.


Day-2 operations

Logs

sudo journalctl -u deneva-mcp -f                     # live tail
sudo journalctl -u deneva-mcp --since "1 hour ago"   # last hour

Restart the service

sudo systemctl restart deneva-mcp

Inspect the audit log

PGPASSWORD="$ADMIN_PW" psql -h 127.0.0.1 -U mcp_admin -d deneva_mcp \
  -c "SELECT created_at, event_type, outcome, metadata FROM audit_log ORDER BY created_at DESC LIMIT 50;"

Rotate an API key

# (Phase 1: the admin token is the hex of API_KEY_HMAC_SECRET. Phase 5
#  replaces it with a separate ADMIN_TOKEN credential and IP-allow-list.)
ADMIN_TOKEN=$(sudo cat /run/credentials/deneva-mcp.service/API_KEY_HMAC_SECRET | xxd -p -c 256)

curl -X POST http://127.0.0.1:3001/admin/api-keys/rotate \
  -H "Content-Type: application/json" \
  -H "X-Admin-Token: $ADMIN_TOKEN" \
  -d '{"tenantId":"<uuid>","description":"prod rotation 2026-05-06"}'
# → { "apiKey": "<new key>", "graceUntil": "<24h from now>" }

Update the application

cd ~/deneva-mcp-src
git pull
npm ci
npm run build

# Snapshot the DB before applying any new migrations.
TS=$(date -u +%Y%m%dT%H%M%SZ)
sudo mkdir -p /var/backups/deneva-mcp
PGPASSWORD="$ADMIN_PW" pg_dump --format=custom --no-owner --no-privileges \
  -h 127.0.0.1 -U mcp_admin deneva_mcp \
  | sudo tee /var/backups/deneva-mcp/pre-migrate-${TS}.dump > /dev/null

# Apply migrations.
npm run db:migrate

# Roll out the new build.
sudo cp -r dist node_modules package.json /opt/deneva-mcp/
sudo chown -R root:deneva-mcp /opt/deneva-mcp
sudo systemctl restart deneva-mcp
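The snapshot is only useful if you can restore it. A hedged sketch of the reverse path, assuming the custom-format dump from above and the mcp_admin credentials (verify against your own ops runbook before relying on it):

```shell
# Stop the app so nothing writes mid-restore, then replay the dump.
sudo systemctl stop deneva-mcp
sudo cat /var/backups/deneva-mcp/pre-migrate-<TS>.dump \
  | PGPASSWORD="$ADMIN_PW" pg_restore -h 127.0.0.1 -U mcp_admin \
      --clean --if-exists --no-owner -d deneva_mcp
sudo systemctl start deneva-mcp
```

The sudo cat is needed because the dump was written via sudo tee and is root-owned; pg_restore itself runs as your user and reads from stdin.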

TLS renewal

Certbot installs a systemd timer. Verify:

sudo systemctl list-timers | grep certbot
sudo certbot renew --dry-run   # exercise the renewal path without changing anything

Troubleshooting

Symptom → first thing to check:

  • systemctl status deneva-mcp shows failed → journalctl -u deneva-mcp -n 200 — usually a missing .cred file or a typo in the unit.
  • 502 from nginx → the service isn’t listening on :3001. sudo ss -tlnp | grep 3001.
  • /mcp returns 401 even with the right key → rotation may have expired the key; audit_log will show auth_failure with a reason — check it.
  • permission denied for table audit_log from the application → roles weren’t applied. Re-run src/db/roles.sql.
  • RLS test fails locally / cross-tenant rows visible → the connection is running as mcp_admin (which bypasses RLS). Always test as mcp_app.
  • nginx complains about limit_req_zone not in scope → the directive must live in http {} scope, i.e. /etc/nginx/conf.d/*.conf, not inside a server {} block.
  • nginx -t fails: SSL_CTX_set_cipher_list(…) failed … no cipher match → ssl_ciphers does not apply to TLS 1.3 — OpenSSL manages those suites internally. Remove the ssl_ciphers directive entirely; it is not needed when ssl_protocols TLSv1.3 is set.
  • curl to /mcp hangs → UFW is blocking the port, or nginx never picked up the new config. Check sudo ufw status, then sudo nginx -t && sudo systemctl reload nginx.

What this guide does NOT do (Phase 2+)

  • Wire real Google / Meta / TikTok OAuth flows. (Phase 2.)
  • Set up Inngest for background sync. (Phase 4.)
  • Configure log shipping to a SIEM, set up alerting on auth.blocked_ip, install fail2ban. (Phase 5.)
  • Provision read replicas, automate full backups to off-host storage. (Phase 5.)

When those phases land, this guide will get matching addenda.


Quick reference

  • Application install → /opt/deneva-mcp/dist
  • systemd unit → /etc/systemd/system/deneva-mcp.service
  • Encrypted secrets → /etc/deneva-mcp/creds/*.cred
  • Decrypted secrets at runtime → /run/credentials/deneva-mcp.service/<NAME> (tmpfs, root-only)
  • Logs → journalctl -u deneva-mcp
  • nginx vhost → /etc/nginx/sites-available/deneva-mcp
  • TLS cert → /etc/letsencrypt/live/your-domain.com/
  • DB → localhost:5432, roles mcp_admin (ops) and mcp_app (runtime)
  • Bind → 127.0.0.1:3001 (nginx terminates TLS in front)