
Implementation Guide: Monitor brand mentions and competitor activity across channels and send weekly digests

A step-by-step guide to deploying an AI agent that monitors brand mentions and competitor activity across channels and sends weekly digests, built for Marketing & Creative Agency clients.

Hardware Procurement

Cloud VPS for n8n Self-Hosted Agent

Hetzner Cloud CPX31 (4 vCPU, 8 GB RAM, 160 GB NVMe SSD), Qty: 1

€15.29/month (~$16.50/month MSP cost) / $75–$125/month suggested resale (bundled into managed service)

Hosts the self-hosted n8n workflow automation platform, PostgreSQL database for workflow persistence, and Caddy reverse proxy. This single VPS runs the entire autonomous agent orchestration layer via Docker containers. One VPS can serve multiple agency clients (up to 10–15 clients per instance before scaling).

Cloud VPS for n8n (Alternative)

DigitalOcean Premium Droplet, 4 vCPU / 8 GB RAM / 160 GB SSD (s-4vcpu-8gb), Qty: 1

$48/month MSP cost / $75–$125/month suggested resale (bundled)

Alternative to Hetzner for MSPs preferring US-based cloud providers with better North American latency. Same function as above. Choose either Hetzner or DigitalOcean, not both.

Block Storage Volume (Backup)

Hetzner Cloud (or DigitalOcean), 100 GB Block Storage Volume, Qty: 1

$5–$10/month

Attached block storage for automated daily backups of n8n workflows, PostgreSQL database dumps, and archived digest reports. Provides disaster recovery capability.

Software Procurement

Brand24 Team Plan

Brand24 Team Plan (5 keywords, 5 users, 5,000 mentions/month)

$149/month (direct) per agency client; volume discounts available for 10+ accounts. Suggest resale at $225–$275/month.

Primary social listening and brand monitoring platform. Tracks brand mentions and competitor activity across social media (Twitter/X, Facebook, Instagram, TikTok, YouTube, Reddit, Telegram), news sites, blogs, forums, podcasts, and review sites. Provides sentiment analysis, mention volume, reach metrics, and API access for automated data extraction. Team plan includes 5 keywords (enough for 1 brand + 2–3 competitors), 5 users, and 5,000 mentions/month.

n8n Community Edition (Self-Hosted)

n8n GmbH, Community Edition

$0/month (self-hosted Community Edition). VPS hosting cost is the only expense (~$16.50/month on Hetzner).

Core workflow automation and AI agent orchestration platform. Runs the autonomous agent that: (1) pulls data from Brand24 API on schedule, (2) preprocesses and filters mentions, (3) sends data to GPT-5.4 mini for summarization and analysis, (4) generates branded HTML email digests, (5) delivers via SendGrid and Slack. Supports 500+ native integrations.

OpenAI API (GPT-5.4 mini)

OpenAI, GPT-5.4 mini

~$0.15/1M input tokens, $0.60/1M output tokens. Estimated $5–$15/month per client for weekly digest generation. Suggest resale at $25–$50/month bundled.

Large language model API used for: summarizing weekly mentions into executive digest, analyzing sentiment trends, identifying key themes and actionable insights, generating competitor comparison narratives, and creating the natural-language weekly report. GPT-5.4 mini provides the optimal cost/quality balance for this use case.
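As a sanity check on the budget, the token arithmetic can be sketched in shell. The mention volume and token counts below are assumptions for illustration, not measured figures; the raw model cost sits well inside the $5–$15/month estimate, which leaves headroom for retries and prompt overhead.

```shell
# Rough monthly model cost at the listed GPT-5.4 mini rates ($0.15/1M input,
# $0.60/1M output). Assumed volumes: 200 mentions/week at ~150 input tokens
# each, ~2,000 output tokens per digest, 4 digest runs/month.
awk 'BEGIN {
  in_tokens  = 200 * 150 * 4
  out_tokens = 2000 * 4
  cost = in_tokens / 1e6 * 0.15 + out_tokens / 1e6 * 0.60
  printf "~$%.2f/month\n", cost
}'
```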

SendGrid Free/Essentials Plan

Twilio SendGrid, SaaS: free tier (100 emails/day) or Essentials ($19.95/month for 50K emails)

$0–$19.95/month. The free tier's 100 emails/day easily covers weekly digests, which go out only once per week (up to 100 recipients per client on send day).

Transactional email delivery service for sending branded weekly digest emails. Provides delivery tracking, bounce handling, and professional email infrastructure. Avoids digest emails landing in spam folders.

Slack (Standard or existing workspace)

Slack Technologies (Salesforce), SaaS: client's existing workspace

$0 additional (uses client's existing Slack). If new: Free tier works for alerts.

Real-time alert delivery channel for high-priority mentions (e.g., mention volume spikes, negative sentiment surges, competitor campaign launches). The n8n agent sends formatted Slack messages to designated channels.

Google Sheets (Historical Archive)

Google, SaaS: client's existing Google Workspace

$0 additional (uses client's existing Google Workspace)

Historical archive of all processed mentions and weekly digest data. Serves as a lightweight database for trend analysis and allows the agency to build custom Looker Studio dashboards from the data.

Docker Engine + Docker Compose

Docker Inc. (open-source)

$0 (free)

Container runtime for deploying n8n, PostgreSQL, and Caddy on the VPS. Provides clean isolation, reproducible deployments, and easy updates.

Caddy Web Server

Caddy (open-source)

$0 (free)

Reverse proxy with automatic HTTPS (Let's Encrypt) for the n8n dashboard. Provides TLS encryption for the web interface without manual certificate management.

UptimeRobot

UptimeRobot, SaaS: free tier (50 monitors)

$0/month (free tier)

Monitors VPS uptime and n8n dashboard availability. Sends email/Slack alerts if the automation server goes down, enabling rapid MSP response.

Prerequisites

  • Stable broadband internet connection (25+ Mbps) at the agency office for dashboard access and Slack alerts
  • Active Slack workspace with admin permissions to install a custom Slack app (or permission from client IT to create one)
  • Business email domain with ability to configure SPF/DKIM DNS records for SendGrid email delivery (prevents digest emails from being flagged as spam)
  • Google Workspace account (or willingness to create one) for Google Sheets historical archive
  • Client must provide: (1) list of brand names and variations to monitor, (2) list of 2–4 key competitors, (3) relevant industry keywords, (4) Slack channel names for alerts, (5) email distribution list for weekly digests
  • Credit card for Brand24 subscription, OpenAI API billing, and VPS hosting
  • Domain name or subdomain available for n8n dashboard (e.g., automation.clientname.com or n8n.mspname.com) with DNS access to create A records
  • SSH key pair generated for VPS access (MSP technician's workstation)
  • Modern web browser (Chrome 120+ or Edge 120+) for Brand24 and n8n dashboard access
  • MSP must have: OpenAI Platform account (platform.openai.com) with API key and at least $10 prepaid credits loaded
  • SendGrid account created and verified with sender authentication completed for the client's email domain
  • MSP technician should have working knowledge of Docker, REST APIs, JSON, and basic HTML/CSS for email template customization

Installation Steps

Step 1: Provision and Secure the VPS

Create a Hetzner Cloud CPX31 instance (4 vCPU, 8 GB RAM, 160 GB NVMe SSD) running Ubuntu 22.04 LTS. This VPS will host the entire n8n automation platform. After provisioning, perform initial security hardening: create a non-root user, configure SSH key authentication, disable password login, set up UFW firewall, and enable automatic security updates.

bash
# After creating VPS in Hetzner Cloud Console, SSH in as root:
ssh root@YOUR_VPS_IP

# Create non-root user
adduser n8nadmin
usermod -aG sudo n8nadmin

# Copy SSH keys to new user
mkdir -p /home/n8nadmin/.ssh
cp ~/.ssh/authorized_keys /home/n8nadmin/.ssh/
chown -R n8nadmin:n8nadmin /home/n8nadmin/.ssh
chmod 700 /home/n8nadmin/.ssh
chmod 600 /home/n8nadmin/.ssh/authorized_keys

# Disable root login and password authentication
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd

# Configure UFW firewall
ufw default deny incoming
ufw default allow outgoing
ufw allow 22/tcp comment 'SSH'
ufw allow 80/tcp comment 'HTTP'
ufw allow 443/tcp comment 'HTTPS'
ufw enable

# Enable automatic security updates
apt update && apt install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

# Attach 100 GB block storage volume (created in Hetzner console)
# WARNING: mkfs.ext4 wipes the volume; only run it on a new, empty volume
mkfs.ext4 /dev/sdb
mkdir -p /mnt/backup
mount /dev/sdb /mnt/backup
echo '/dev/sdb /mnt/backup ext4 defaults 0 2' >> /etc/fstab
Note

Use Hetzner's Falkenstein (FSN1) or Nuremberg (NBG1) datacenter for EU clients, or Ashburn (ASH) for US-based agencies. Always use SSH key authentication — never password-based SSH. Record the VPS IP address and n8nadmin credentials in the MSP's password manager (e.g., IT Glue, Hudu).

Step 2: Install Docker Engine and Docker Compose

Install Docker Engine and Docker Compose v2 on the VPS. All services (n8n, PostgreSQL, Caddy) will run as Docker containers for clean isolation and easy management. Docker Compose orchestrates multi-container deployments from a single YAML file.

bash
# Log in as n8nadmin
ssh n8nadmin@YOUR_VPS_IP

# Install Docker Engine (official method)
sudo apt update
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add n8nadmin to docker group (avoids needing sudo for docker commands)
sudo usermod -aG docker n8nadmin
newgrp docker

# Verify installation
docker --version
docker compose version
Note

Docker Compose v2 is installed as a Docker plugin (docker compose) rather than standalone binary (docker-compose). Always use 'docker compose' (with a space) for v2 commands. Verify Docker version is 24.0+ and Compose is v2.20+.
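The version floor from the note can be enforced with a small guard before continuing. A sketch; the fallback sample string exists only so the parsing logic is visible on a machine without Docker.

```shell
# Parse the major version out of 'docker --version' and fail fast if it is
# older than the 24.0 floor this guide assumes.
ver_line=$(docker --version 2>/dev/null || echo 'Docker version 24.0.7, build afdd53b')
major=$(printf '%s\n' "$ver_line" | sed -E 's/^Docker version ([0-9]+).*/\1/')
if [ "$major" -ge 24 ]; then
  echo "docker ok ($ver_line)"
else
  echo "docker too old: $ver_line" >&2
  exit 1
fi
```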

Step 3: Create Project Directory and Docker Compose Configuration

Create the directory structure for the n8n deployment and write the Docker Compose file that defines all three services: n8n (automation engine), PostgreSQL (database), and Caddy (reverse proxy with automatic HTTPS). This is the core infrastructure definition.

bash
# Create project directory
mkdir -p ~/n8n-brand-monitor
cd ~/n8n-brand-monitor

# Create required subdirectories
mkdir -p caddy_data caddy_config n8n_data postgres_data backups

# Create .env file with secrets
cat > .env << 'ENVEOF'
# n8n Configuration
N8N_HOST=automation.yourmsp.com
N8N_PORT=5678
N8N_PROTOCOL=https
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=CHANGE_ME_TO_STRONG_PASSWORD
GENERIC_TIMEZONE=America/New_York
N8N_ENCRYPTION_KEY=GENERATE_A_RANDOM_32_CHAR_STRING
N8N_USER_MANAGEMENT_DISABLED=false

# PostgreSQL Configuration
POSTGRES_USER=n8n
POSTGRES_PASSWORD=CHANGE_ME_TO_STRONG_DB_PASSWORD
POSTGRES_DB=n8n
POSTGRES_NON_ROOT_USER=n8n
POSTGRES_NON_ROOT_PASSWORD=CHANGE_ME_TO_STRONG_DB_PASSWORD

# Domain for Caddy
DOMAIN_NAME=automation.yourmsp.com
ENVEOF

# Generate encryption key
openssl rand -hex 16
# Copy the output and paste it as N8N_ENCRYPTION_KEY in .env

# Create docker-compose.yml
cat > docker-compose.yml << 'DCEOF'
services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    volumes:
      - ./postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - n8n-net

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    environment:
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=${N8N_PORT}
      - N8N_PROTOCOL=${N8N_PROTOCOL}
      - WEBHOOK_URL=https://${DOMAIN_NAME}/
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=${POSTGRES_DB}
      - DB_POSTGRESDB_USER=${POSTGRES_USER}
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_METRICS=true
    volumes:
      - ./n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - n8n-net

  caddy:
    image: caddy:2-alpine
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
    environment:
      # Caddy needs this var for the {$DOMAIN_NAME} placeholder in the Caddyfile
      - DOMAIN_NAME=${DOMAIN_NAME}
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - ./caddy_data:/data
      - ./caddy_config:/config
    networks:
      - n8n-net

networks:
  n8n-net:
    driver: bridge
DCEOF

# Create Caddyfile
cat > Caddyfile << 'CADDYEOF'
{$DOMAIN_NAME} {
  reverse_proxy n8n:5678
  encode gzip
}
CADDYEOF
Critical

Replace all placeholder passwords in .env with strong, unique passwords. Store them in the MSP's password manager immediately. The N8N_ENCRYPTION_KEY is especially important — if lost, all stored credentials in n8n become unrecoverable. Back up the .env file securely. Update DOMAIN_NAME and N8N_HOST to match the actual subdomain you configure in Step 4.
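One way to generate the required secrets with nothing but coreutils (a sketch; any password manager's generator works equally well):

```shell
# 32 hex chars for N8N_ENCRYPTION_KEY (same shape as 'openssl rand -hex 16'),
# 24 alphanumeric chars for the database passwords.
enc_key=$(tr -dc 'a-f0-9' < /dev/urandom | head -c 32)
db_pass=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24)
echo "N8N_ENCRYPTION_KEY=$enc_key"
echo "POSTGRES_PASSWORD=$db_pass"
# Paste these values into .env, then store both in the password manager.
```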

Step 4: Configure DNS and Launch Services

Point the chosen subdomain to the VPS IP address via DNS, then launch all Docker services. Caddy will automatically obtain a Let's Encrypt TLS certificate for HTTPS. Verify n8n is accessible via the browser.

bash
# First: In your DNS provider (e.g., Cloudflare, Route 53, GoDaddy):
# Create an A record: automation.yourmsp.com -> YOUR_VPS_IP
# TTL: 300 seconds (5 minutes) for fast propagation
# If using Cloudflare, set proxy status to 'DNS only' (gray cloud) initially

# Wait for DNS propagation (usually 1-5 minutes)
dig automation.yourmsp.com +short
# Should return your VPS IP

# Launch all services
cd ~/n8n-brand-monitor
docker compose up -d

# Check all containers are running
docker compose ps

# View logs for troubleshooting
docker compose logs -f
# (Ctrl+C to exit logs)

# Verify n8n is accessible
curl -I https://automation.yourmsp.com
# Should return HTTP 200
Note

DNS propagation can take up to 48 hours in rare cases, but usually completes in under 5 minutes for new records. If Caddy fails to get a certificate, check that port 80 is open (UFW rule) and DNS is resolving correctly. The first launch may take 1-2 minutes as Docker pulls all images. Access n8n at https://automation.yourmsp.com and complete the initial setup wizard to create the admin account.

Step 5: Create Brand24 Account and Configure Monitoring Projects

Sign up for Brand24 Team plan and create monitoring projects for the client's brand and competitors. Configure keyword tracking with proper Boolean operators to capture relevant mentions while filtering noise. This is the data collection layer that feeds the autonomous agent.

1. Go to https://brand24.com and sign up for the Team plan ($149/month)
2. Create a new Project for the agency client
3. Configure monitored keywords using these patterns:
   • Primary brand keywords (example for agency 'Acme Creative'):
     - Required keyword: "Acme Creative"
     - Additional keywords: AcmeCreative, @acmecreative, acmecreative.com
     - Excluded keywords: acme hardware, acme anvil (filter false positives)
   • Competitor keywords (create a separate project or use additional keyword slots):
     - Competitor 1: "Rival Agency" OR @rivalagency
     - Competitor 2: "Other Co" OR otherco.com
     - Competitor 3: "Third Shop" OR @thirdshop
4. Configure sources: enable all (Social Media, News, Blogs, Forums, Videos, Podcasts, Reviews)
5. Configure the language filter to match the client's target markets (e.g., English only)
6. Enable sentiment analysis (automatic)
7. Set up the Slack integration: Brand24 > Integrations > Slack > Authorize
8. Note the Brand24 API key: Brand24 > Settings > API > Copy API key
Note

Brand24's Team plan includes 5 keywords and 5,000 mentions/month. For a typical agency tracking 1 brand + 3 competitors, this is usually sufficient. If the client's brand name is a common word (e.g., 'Atlas'), use quoted phrases and exclusion keywords aggressively to reduce noise. Document the exact keywords configured in the client's implementation notes. The API key is required for n8n integration in subsequent steps.

Step 6: Configure OpenAI API Access

Set up the OpenAI platform account, generate an API key, configure usage limits, and test API connectivity. This API powers the AI summarization and analysis in the weekly digest.

1. Go to https://platform.openai.com and log in (or create an account)
2. Navigate to Settings > Billing > Add payment method
3. Add $10–$25 of initial credit (auto-recharge recommended at a $10 threshold)
4. Set a monthly usage limit: Settings > Limits > $50/month (safety cap)
5. Generate an API key: API Keys > Create new secret key. Name: 'brand-monitor-production'. Permissions: 'All' (or restrict to Chat Completions only for tighter security)
6. Copy the API key immediately (it won't be shown again)

Test API connectivity from the VPS:
bash
curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-YOUR_API_KEY_HERE" \
  -d '{"model": "gpt-5.4-mini", "messages": [{"role": "user", "content": "Say hello"}], "max_tokens": 50}'
Note

IMPORTANT: Set a monthly usage limit ($50/month is generous for this use case — typical usage is $5–$15/month per client). This prevents unexpected charges from workflow bugs or loops. Store the API key in n8n's credential store, never in plaintext in workflows. GPT-5.4 mini is recommended over GPT-5.4 for cost efficiency — the summarization task doesn't require the full model's capabilities.

Step 7: Configure SendGrid Email Delivery

Set up SendGrid for transactional email delivery of the weekly digest. Configure sender authentication (SPF/DKIM/DMARC) to ensure digest emails reach inboxes rather than spam folders. Create an API key for n8n integration.

1. Sign up at https://sendgrid.com (free tier: 100 emails/day)
2. Complete Sender Authentication: Settings > Sender Authentication > Authenticate Your Domain. Enter the client's domain (e.g., acmecreative.com), add the provided DNS records (CNAME records for DKIM, TXT for SPF), and verify authentication.
3. Create a Verified Sender: Settings > Sender Authentication > Verify a Single Sender. From: digest@acmecreative.com (or insights@acmecreative.com). Reply-to: team@acmecreative.com
4. Generate an API key: Settings > API Keys > Create API Key. Name: 'n8n-brand-digest'. Permissions: 'Restricted Access' with Mail Send: Full Access only. Copy the API key.

Test email delivery from the VPS:
bash
# Test email delivery from VPS
curl --request POST \
  --url https://api.sendgrid.com/v3/mail/send \
  --header "Authorization: Bearer SG.YOUR_SENDGRID_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{"personalizations":[{"to":[{"email":"test@example.com"}]}],"from":{"email":"digest@acmecreative.com"},"subject":"Test Digest","content":[{"type":"text/plain","value":"SendGrid is working."}]}'
Note

DNS changes for sender authentication can take up to 48 hours. Start this step early in the project. If the client uses Google Workspace, you may need their IT admin to add the CNAME/TXT records. For the free tier, 100 emails/day is more than sufficient for weekly digests (even with 50+ recipients). If the client prefers digests to come from a generic address like insights@mspname.com, configure that domain instead.

Step 8: Create Slack App for Alert Delivery

Create a custom Slack app in the client's workspace for delivering real-time mention alerts and weekly digest notifications. This provides formatted, channel-specific alerts with richer formatting than email.

1. Go to https://api.slack.com/apps
2. Click 'Create New App' > 'From scratch'
3. App Name: 'Brand Monitor Bot'. Workspace: select the client's workspace
4. Navigate to 'OAuth & Permissions'
5. Add Bot Token Scopes: chat:write (send messages), chat:write.public (send to channels the bot hasn't been invited to), files:write (upload digest PDFs/images if needed)
6. Install App to Workspace > Allow
7. Copy the 'Bot User OAuth Token' (starts with xoxb-)
8. Create Slack channels in the client's workspace:
   - #brand-alerts: real-time high-priority mention alerts
   - #competitor-intel: competitor activity alerts
   - #weekly-digest: weekly digest summary posts
9. Invite the bot to each channel: in each channel, type /invite @Brand Monitor Bot

Test Slack bot connectivity from the VPS:
bash
# Test Slack message from VPS
curl -X POST https://slack.com/api/chat.postMessage \
  -H "Authorization: Bearer xoxb-YOUR_BOT_TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"channel": "#brand-alerts", "text": "🟢 Brand Monitor Bot is connected and working!"}'
Note

The client's Slack workspace admin must approve the app installation. If the client uses Microsoft Teams instead of Slack, use the n8n Microsoft Teams node instead — the workflow logic is the same, just swap the delivery node. Store the bot token securely in n8n's credential store.
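For richer alerts than plain text, the same chat.postMessage call accepts a Block Kit payload. A sketch of a critical-alert message follows; the channel, text, and URL values are illustrative.

```shell
# Build a Block Kit payload for a critical alert. This only constructs the
# JSON; send it with the same curl call shown above.
payload=$(cat <<'EOF'
{
  "channel": "#brand-alerts",
  "text": "CRITICAL: negative mention with reach 15,000",
  "blocks": [
    {
      "type": "header",
      "text": { "type": "plain_text", "text": "Critical brand mention" }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Sentiment:* Negative\n*Reach:* 15,000\n<https://example.com/mention|View mention>"
      }
    }
  ]
}
EOF
)
printf '%s\n' "$payload"
```

The top-level "text" field remains the notification fallback for clients that cannot render blocks.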

Step 9: Build the Core n8n Workflow: Brand24 Data Collection Agent

Log into the n8n dashboard and build the primary automation workflow. This workflow runs on a schedule (every 6 hours), pulls new mentions from Brand24's API, stores them in Google Sheets, evaluates urgency, and triggers real-time Slack alerts for high-priority mentions. This is the 'always-on monitoring' component.

  • Access n8n at https://automation.yourmsp.com and log in with the admin credentials created during first-launch setup
  • Add credentials in n8n (Settings > Credentials > Add Credential):
    - Brand24 API: type 'Header Auth', Name: 'Brand24 API', Header Name: 'Authorization', Value: 'Bearer YOUR_BRAND24_API_KEY'
    - OpenAI: type 'OpenAI', API Key: your OpenAI key
    - SendGrid: type 'SendGrid', API Key: your SendGrid key
    - Slack: type 'Slack API', Bot Token: your xoxb- token
    - Google Sheets: type 'Google Sheets OAuth2', follow the OAuth flow
  • Create a new workflow named 'Brand Monitor - Real-Time Collection'
  • Import the workflow JSON from the custom_ai_components section below
  • Configure each node with the correct credentials
  • Activate the workflow
Note

n8n stores credentials encrypted using the N8N_ENCRYPTION_KEY from your .env file. Never share workflow exports that contain credential IDs with untrusted parties. The workflow JSON is provided in the custom_ai_components section of this guide. After importing, you must manually link each node to the correct credential.

Step 10: Build the Weekly Digest Generation Workflow

Create the second n8n workflow that runs every Monday at 7:00 AM (client's timezone). This workflow aggregates the past 7 days of collected mentions from Google Sheets, sends them to GPT-5.4 mini for analysis and summarization, generates a branded HTML email digest, and delivers it via SendGrid and Slack.

1. In the n8n dashboard, create a new workflow named 'Brand Monitor - Weekly Digest'
2. Import the weekly digest workflow JSON from the custom_ai_components section
3. Configure the Cron trigger node to fire every Monday at 7:00 AM in the client's timezone
4. Link all credential nodes to the stored credentials
5. Update the email template HTML with client branding (logo URL, colors)
6. Set the recipient list in the SendGrid node
7. Test-run the workflow manually with sample data
8. Activate the workflow
Note

The digest workflow should be set to the client's local timezone (e.g., America/New_York). Consider sending the digest Sunday evening for agencies that do Monday morning standups. The HTML email template in the custom_ai_components section is customizable — update the logo, colors, and footer with the client's branding. Always test-run with real data before activating the schedule.
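Inside the digest workflow, the 7-day lookback window can be computed in a Function node along these lines (a sketch; n8n's built-in date expressions can produce the same values):

```javascript
// Compute the [start, end) window for "the past 7 days", anchored at the
// trigger time. Plain Date math, so it behaves identically inside an n8n
// Function node or standalone Node.js.
function digestWindow(now = new Date()) {
  const end = new Date(now);
  const start = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
  return { start: start.toISOString(), end: end.toISOString() };
}

// Example: a Monday 7:00 AM UTC run covers the previous Monday onward.
const w = digestWindow(new Date('2024-01-08T07:00:00Z'));
console.log(w.start); // 2024-01-01T07:00:00.000Z
console.log(w.end);   // 2024-01-08T07:00:00.000Z
```

Use the two ISO strings to filter the 'Raw Mentions' rows read from Google Sheets.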

Step 11: Configure Google Sheets Historical Archive

Set up the Google Sheets spreadsheet that serves as the data archive for all collected mentions. This sheet is populated by the real-time collection workflow and read by the weekly digest workflow. It also enables the client to build their own Looker Studio dashboards.

1. In the client's Google Workspace, create a new Google Sheet named 'Brand Monitor - [Client Name] - Mentions Archive'
2. Create the following sheets (tabs):
   - Tab 1, 'Raw Mentions': A: mention_id | B: date | C: source | D: platform | E: author | F: content | G: sentiment | H: sentiment_score | I: reach | J: url | K: brand_or_competitor | L: processed
   - Tab 2, 'Weekly Digests': A: week_start | B: week_end | C: total_mentions | D: avg_sentiment | E: top_positive | F: top_negative | G: competitor_summary | H: digest_html | I: sent_at
   - Tab 3, 'Competitor Tracker': A: date | B: competitor_name | C: mention_count | D: avg_sentiment | E: share_of_voice_pct | F: notable_mentions
3. Share the sheet with the n8n service account email (from the Google Sheets OAuth2 credential setup)
4. Copy the spreadsheet ID from the URL (the long string between /d/ and /edit) and use it in the n8n workflow's Google Sheets nodes
Note

Set up Google Sheets data validation on the sentiment column (Positive/Negative/Neutral) to ensure data consistency. Consider enabling version history on the sheet. For agencies managing multiple clients, create separate spreadsheets per client to maintain data isolation. The spreadsheet ID is needed for n8n node configuration.
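If you provision several clients, extracting the spreadsheet ID is easy to script (the URL below is a made-up example):

```shell
# Pull the ID (the segment between /d/ and the next slash) out of a Sheets URL.
url='https://docs.google.com/spreadsheets/d/1AbC_dEfGhIjKlMnOp/edit#gid=0'
sheet_id=$(printf '%s\n' "$url" | sed -E 's#.*/d/([^/]+)/.*#\1#')
echo "$sheet_id"   # 1AbC_dEfGhIjKlMnOp
```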

Step 12: Configure Alert Thresholds and Escalation Rules

Define the rules that determine when the autonomous agent sends real-time Slack alerts versus simply logging mentions for the weekly digest. This prevents alert fatigue while ensuring critical mentions get immediate attention.

  • In the n8n 'Brand Monitor - Real-Time Collection' workflow, configure the IF/Switch node with these alert rules:
  • CRITICAL ALERT (immediate Slack + email to stakeholders):
    - Mention sentiment is 'Negative' AND reach > 10,000
    - Mention source is a Tier 1 publication (configurable list)
    - Mention volume exceeds 3x the 7-day rolling average in any 1-hour window
    - Mention contains crisis keywords: 'lawsuit', 'scandal', 'recall', 'hack', 'breach'
  • HIGH PRIORITY (Slack alert within 15 minutes):
    - Any negative mention with reach > 1,000
    - Competitor mention that references the client's brand directly
    - Mentions from verified/influential accounts (blue checkmark indicators)
  • STANDARD (logged to Google Sheets, included in weekly digest): all other mentions
  • Configure these thresholds in the n8n workflow's Function node. The exact implementation is in the custom_ai_components section.
Note

Alert thresholds should be tuned during the first 2 weeks of operation. Start with conservative (higher) thresholds to avoid alert fatigue, then lower them based on client feedback. Document the threshold values in the client's runbook. The reach threshold of 10,000 for critical alerts works for most mid-size agencies — lower it for niche brands, raise it for well-known brands.
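The volume-spike rule from the CRITICAL tier (3x the 7-day rolling average) can be sketched as a Function-node helper; the counts below are illustrative, not real data:

```javascript
// Returns true when the current hour's mention count exceeds `multiplier`
// times the average of the trailing 7 days (168 hourly buckets).
function isVolumeSpike(hourlyCounts, currentHourCount, multiplier = 3) {
  const avg = hourlyCounts.reduce((a, b) => a + b, 0) / hourlyCounts.length;
  return currentHourCount > multiplier * avg;
}

const baseline = Array(168).fill(2);       // steady 2 mentions/hour all week
console.log(isVolumeSpike(baseline, 7));   // true  (7 > 3 * 2)
console.log(isVolumeSpike(baseline, 5));   // false (5 < 6)
```

Tuning the multiplier is the knob mentioned in the note above: raise it to quiet alerts, lower it for niche brands.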

Step 13: Set Up Automated Backups

Configure automated daily backups of the n8n workflows, PostgreSQL database, and configuration files to the attached block storage volume. This ensures recovery capability in case of VPS failure.

bash
# Create backup script
cat > ~/n8n-brand-monitor/backup.sh << 'BKEOF'
#!/bin/bash
set -e
BACKUP_DIR=/mnt/backup/n8n-$(date +%Y%m%d-%H%M%S)
mkdir -p $BACKUP_DIR

# Backup PostgreSQL database (run compose from the project dir so .env is found)
cd ~/n8n-brand-monitor
docker compose exec -T postgres pg_dump -U n8n n8n > $BACKUP_DIR/n8n_db.sql

# Backup n8n data directory
cp -r ~/n8n-brand-monitor/n8n_data $BACKUP_DIR/n8n_data

# Backup configuration files
cp ~/n8n-brand-monitor/.env $BACKUP_DIR/
cp ~/n8n-brand-monitor/docker-compose.yml $BACKUP_DIR/
cp ~/n8n-brand-monitor/Caddyfile $BACKUP_DIR/

# Compress
tar -czf $BACKUP_DIR.tar.gz -C /mnt/backup $(basename $BACKUP_DIR)
rm -rf $BACKUP_DIR

# Retain only last 30 days of backups
find /mnt/backup -name 'n8n-*.tar.gz' -mtime +30 -delete

echo "Backup completed: $BACKUP_DIR.tar.gz"
BKEOF

chmod +x ~/n8n-brand-monitor/backup.sh

# Schedule daily backup at 2:00 AM (log to the home dir; /var/log needs root)
(crontab -l 2>/dev/null; echo '0 2 * * * /home/n8nadmin/n8n-brand-monitor/backup.sh >> /home/n8nadmin/n8n-backup.log 2>&1') | crontab -

# Test backup script
~/n8n-brand-monitor/backup.sh
Note

Backups run at 2:00 AM server time daily. The 30-day retention keeps storage usage manageable. For critical clients, consider also copying backups off-server to an S3-compatible bucket (Hetzner Object Storage or Backblaze B2, ~$5/month for 100 GB). Test restore procedure at least once before going live.
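A miniature restore drill, mirroring how backup.sh packs its archives, can be run anywhere to validate the pack/unpack path before trusting it with real data (all paths are temporary):

```shell
# Pack a directory the same way backup.sh does, delete the original,
# unpack the archive, and confirm the contents survived the round trip.
workdir=$(mktemp -d)
mkdir -p "$workdir/n8n-test"
echo "select 1;" > "$workdir/n8n-test/n8n_db.sql"
tar -czf "$workdir/n8n-test.tar.gz" -C "$workdir" n8n-test
rm -rf "$workdir/n8n-test"
tar -xzf "$workdir/n8n-test.tar.gz" -C "$workdir"
cat "$workdir/n8n-test/n8n_db.sql"   # select 1;
rm -rf "$workdir"
```

The real restore follows the same shape: extract the dated archive, restore n8n_data and the config files, then pipe n8n_db.sql into psql inside the postgres container.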

Step 14: Configure UptimeRobot Monitoring

Set up external uptime monitoring to alert the MSP if the n8n server or any critical component goes down. This ensures the MSP can respond proactively before the client notices any service disruption.

1. Log into https://uptimerobot.com (free account)
2. Add a new monitor:
   - Monitor Type: HTTPS
   - Friendly Name: '[Client Name] Brand Monitor - n8n'
   - URL: https://automation.yourmsp.com/healthz
   - Monitoring Interval: 5 minutes
   - Alert Contacts: MSP team email + Slack webhook
3. Add a second monitor for Brand24 API connectivity:
   - Monitor Type: HTTP(s) - Keyword
   - Friendly Name: '[Client Name] Brand24 API'
   - URL: https://api.brand24.com/v3/projects (with auth header)
   - Keyword exists: 'projects'
   - Monitoring Interval: 30 minutes
4. Configure alert contacts: add the MSP team email address and the MSP Slack webhook for the #ops-alerts channel
Note

UptimeRobot's free tier supports up to 50 monitors at 5-minute intervals, which is more than sufficient. The n8n health endpoint /healthz returns a 200 status when the application is running. Consider also monitoring the SendGrid and OpenAI API status pages for upstream outage awareness.

Step 15: End-to-End Integration Testing

Run comprehensive end-to-end tests to verify every component of the pipeline works correctly: data collection from Brand24, AI processing, alert delivery, and digest generation. Document results for client handoff.

1. Test 1 (Brand24 API data collection): in n8n, open the 'Brand Monitor - Real-Time Collection' workflow and click 'Execute Workflow'. Verify that mentions appear in the execution output and are written to Google Sheets.
2. Test 2 (trigger a test alert): in n8n, find a negative mention from the test execution, manually set its reach to 15000 in the test data, and re-run the alert evaluation node. Verify that a Slack message appears in the #brand-alerts channel.
3. Test 3 (generate a test weekly digest): open the 'Brand Monitor - Weekly Digest' workflow and click 'Execute Workflow'. Verify that: a) GPT-5.4 mini returns a structured summary, b) the HTML email is correctly formatted, c) the email arrives in the test inbox via SendGrid, d) a Slack message posts to the #weekly-digest channel, e) the Google Sheets 'Weekly Digests' tab is updated.
4. Test 4 (Google Sheets data integrity): open the Google Sheet and confirm that the Raw Mentions tab has data with all columns populated, sentiment values are one of Positive, Negative, or Neutral, and URLs are clickable and point to actual mentions.
5. Test 5 (failure handling): temporarily revoke the Brand24 API key in n8n and run the collection workflow. Verify that an error notification is sent to the MSP Slack channel, then restore the correct API key.
Note

Document all test results with screenshots for the client handoff package. If any test fails, troubleshoot and re-test before proceeding. Pay special attention to the email digest rendering in different email clients (Gmail, Outlook, Apple Mail) — HTML email rendering varies significantly. Send test digests to multiple email providers.

Custom AI Components

Brand24 Real-Time Collection Agent

Type: workflow n8n workflow that runs every 6 hours, pulls new mentions from Brand24 API, classifies them by urgency, stores them in Google Sheets, and sends real-time Slack alerts for high-priority mentions. This is the always-on monitoring agent.

Implementation:

n8n workflow node configuration (Workflow Name: Brand Monitor - Real-Time Collection). The outline below describes each node; build them in the n8n editor, then export the finished workflow as JSON for reuse across clients.
text
Node Configuration:

1. CRON TRIGGER NODE
   - Type: Schedule Trigger
   - Rule: Every 6 hours (0 */6 * * *)
   - Timezone: Client's timezone (e.g., America/New_York)

2. HTTP REQUEST NODE - 'Fetch Brand24 Mentions'
   - Method: GET
   - URL: https://api.brand24.com/v3/search
   - Authentication: Header Auth (Brand24 API credential)
   - Query Parameters:
     - projectId: {{$env.BRAND24_PROJECT_ID}}
     - sinceDate: {{$now.minus(6, 'hours').toISO()}}
     - limit: 100
     - sort: date_desc
   - Response Format: JSON

3. IF NODE - 'Check for Results'
   - Condition: {{$json.data.length}} > 0
   - True: Continue to processing
   - False: End (no new mentions)

4. SPLIT IN BATCHES NODE - 'Process Each Mention'
   - Batch Size: 1

5. FUNCTION NODE - 'Classify Urgency'
   - Code:
   const mention = $input.item.json;
   const content = (mention.description || '').toLowerCase();
   const reach = mention.reach || 0;
   const sentiment = mention.sentiment || 0;
   
   // Crisis keywords
   const crisisKeywords = ['lawsuit', 'scandal', 'recall', 'hack', 'breach', 'fraud', 'fired', 'boycott', 'investigation'];
   const hasCrisisKeyword = crisisKeywords.some(kw => content.includes(kw));
   
   // Classify urgency
   let urgency = 'standard';
   let sentimentLabel = 'Neutral';
   
   if (sentiment < 0) sentimentLabel = 'Negative';
   else if (sentiment > 0) sentimentLabel = 'Positive';
   
   if (hasCrisisKeyword || (sentimentLabel === 'Negative' && reach > 10000)) {
     urgency = 'critical';
   } else if (sentimentLabel === 'Negative' && reach > 1000) {
     urgency = 'high';
   } else if (reach > 5000) {
     urgency = 'high';
   }
   
   return {
     json: {
       mention_id: mention.id,
       date: mention.date,
       source: mention.source_type,
       platform: mention.domain || mention.source_type,
       author: mention.author || 'Unknown',
       content: mention.description || '',
       sentiment: sentimentLabel,
       sentiment_score: sentiment,
       reach: reach,
       url: mention.url || '',
       brand_or_competitor: mention.query || 'Unknown',
       processed: false,
       urgency: urgency
     }
   };
   
6. GOOGLE SHEETS NODE - 'Log to Archive'
   - Operation: Append Row
   - Document ID: {{SPREADSHEET_ID}}
   - Sheet Name: Raw Mentions
   - Columns: Map all fields from Function node

7. SWITCH NODE - 'Route by Urgency'
   - Rules:
     - urgency equals 'critical' → Critical Alert path
     - urgency equals 'high' → High Alert path
     - Default → End (standard mentions logged only)

8. SLACK NODE - 'Critical Alert' (Critical path)
   - Channel: #brand-alerts
   - Message:
   🚨 *CRITICAL BRAND ALERT*
   
   *Source:* {{$json.platform}}
   *Sentiment:* {{$json.sentiment}} (Score: {{$json.sentiment_score}})
   *Reach:* {{$json.reach.toLocaleString()}} people
   *Author:* {{$json.author}}
   
   > {{$json.content.substring(0, 500)}}
   
   🔗 <{{$json.url}}|View Original>
   📊 Brand/Keyword: {{$json.brand_or_competitor}}
   
9. SLACK NODE - 'High Priority Alert' (High path)
   - Channel: #brand-alerts
   - Message:
   ⚠️ *High Priority Mention*
   
   *Source:* {{$json.platform}} | *Sentiment:* {{$json.sentiment}} | *Reach:* {{$json.reach.toLocaleString()}}
   *Author:* {{$json.author}}
   
   > {{$json.content.substring(0, 300)}}
   
   🔗 <{{$json.url}}|View Original>
   
10. ERROR TRIGGER NODE - 'On Error'
    - Connects to Slack node sending to MSP's #ops-alerts channel
    - Message: '🔴 Brand Monitor workflow error for [Client Name]: {{$json.error.message}}'
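One subtlety with the schedule above: the cron fires every 6 hours and sinceDate is computed as "now minus 6 hours", so a run that starts even slightly late can miss mentions at the window boundary. A small overlap buffer plus deduplication on mention_id closes the gap. The 15-minute buffer below is a suggested safety margin, not part of the workflow as configured above.

```javascript
// Sketch: compute the Brand24 sinceDate for a 6-hour collection window,
// with a 15-minute overlap buffer (a suggested safety margin, not in the
// workflow above). Mentions re-fetched inside the overlap are filtered
// out by mention_id before being appended to the sheet.
const WINDOW_HOURS = 6;
const OVERLAP_MINUTES = 15;

function computeSinceDate(now = new Date()) {
  const ms = (WINDOW_HOURS * 60 + OVERLAP_MINUTES) * 60 * 1000;
  return new Date(now.getTime() - ms).toISOString();
}

function dedupeByMentionId(mentions, seenIds) {
  return mentions.filter(m => !seenIds.has(m.mention_id));
}
```

In n8n this maps to replacing the sinceDate expression with `{{$now.minus({ hours: 6, minutes: 15 }).toISO()}}` and checking new mention IDs against the last batch logged to Google Sheets.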

Weekly Digest Generation Agent

Type: workflow. An n8n workflow that runs every Monday at 7:00 AM, aggregates the past 7 days of mentions from Google Sheets, sends them to GPT-5.4 mini for comprehensive analysis, generates a branded HTML email digest, and delivers it via SendGrid email and Slack.

Implementation:

n8n Workflow JSON (import via n8n UI)
Workflow Name: Brand Monitor - Weekly Digest

Node Configuration:

1. CRON TRIGGER NODE
   - Type: Schedule Trigger
   - Rule: Every Monday at 7:00 AM (0 7 * * 1)
   - Timezone: Client's timezone

2. GOOGLE SHEETS NODE - 'Fetch This Week Mentions'
   - Operation: Read Rows
   - Document ID: {{SPREADSHEET_ID}}
   - Sheet Name: Raw Mentions
   - Filters: date >= {{$now.minus(7, 'days').toISODate()}}

3. FUNCTION NODE - 'Aggregate Statistics'
   - Code:
   const mentions = $input.all().map(i => i.json);
   
   const totalMentions = mentions.length;
   const positiveMentions = mentions.filter(m => m.sentiment === 'Positive').length;
   const negativeMentions = mentions.filter(m => m.sentiment === 'Negative').length;
   const neutralMentions = mentions.filter(m => m.sentiment === 'Neutral').length;
   
   const totalReach = mentions.reduce((sum, m) => sum + (parseInt(m.reach) || 0), 0);
   
   // Group by brand/competitor
   const byBrand = {};
   mentions.forEach(m => {
     const key = m.brand_or_competitor || 'Unknown';
     if (!byBrand[key]) byBrand[key] = { count: 0, positive: 0, negative: 0, neutral: 0, reach: 0 };
     byBrand[key].count++;
     byBrand[key].reach += parseInt(m.reach) || 0;
     if (m.sentiment === 'Positive') byBrand[key].positive++;
     else if (m.sentiment === 'Negative') byBrand[key].negative++;
     else byBrand[key].neutral++;
   });
   
   // Group by platform
   const byPlatform = {};
   mentions.forEach(m => {
     const key = m.platform || 'Unknown';
     if (!byPlatform[key]) byPlatform[key] = 0;
     byPlatform[key]++;
   });
   
   // Top mentions by reach
   const topPositive = mentions
     .filter(m => m.sentiment === 'Positive')
     .sort((a, b) => (parseInt(b.reach) || 0) - (parseInt(a.reach) || 0))
     .slice(0, 5);
   
   const topNegative = mentions
     .filter(m => m.sentiment === 'Negative')
     .sort((a, b) => (parseInt(b.reach) || 0) - (parseInt(a.reach) || 0))
     .slice(0, 5);
   
   // Calculate share of voice
   const totalBrandMentions = Object.values(byBrand).reduce((sum, b) => sum + b.count, 0);
   const shareOfVoice = {};
   for (const [brand, data] of Object.entries(byBrand)) {
     shareOfVoice[brand] = ((data.count / totalBrandMentions) * 100).toFixed(1) + '%';
   }
   
   // Prepare mention summaries for GPT (limit to 50 most impactful)
   const topMentionsForGPT = mentions
     .sort((a, b) => (parseInt(b.reach) || 0) - (parseInt(a.reach) || 0))
     .slice(0, 50)
     .map(m => `[${m.sentiment}] [${m.platform}] [Reach: ${m.reach}] ${m.author}: ${m.content.substring(0, 200)}`)
     .join('\n');
   
   return {
     json: {
       stats: {
         totalMentions, positiveMentions, negativeMentions, neutralMentions, totalReach,
         sentimentRatio: totalMentions > 0 ? ((positiveMentions / totalMentions) * 100).toFixed(1) : '0',
         byBrand, byPlatform, shareOfVoice
       },
       topPositive: topPositive.map(m => ({ author: m.author, content: m.content.substring(0, 200), reach: m.reach, url: m.url, platform: m.platform })),
       topNegative: topNegative.map(m => ({ author: m.author, content: m.content.substring(0, 200), reach: m.reach, url: m.url, platform: m.platform })),
       mentionsForGPT: topMentionsForGPT,
       weekStart: $now.minus(7, 'days').toISODate(),
       weekEnd: $now.toISODate()
     }
   };
  • OPENAI NODE - 'Generate AI Analysis'
    - Model: gpt-5.4-mini
    - Temperature: 0.3
    - Max Tokens: 2000
    - System Prompt: see 'Digest Analysis System Prompt' component below

OpenAI Node — User Message: Generate AI Analysis

Generate the weekly brand monitoring digest for the period {{$json.weekStart}} to {{$json.weekEnd}}.

STATISTICS:
- Total mentions: {{$json.stats.totalMentions}}
- Positive: {{$json.stats.positiveMentions}} ({{$json.stats.sentimentRatio}}%)
- Negative: {{$json.stats.negativeMentions}}
- Neutral: {{$json.stats.neutralMentions}}
- Total reach: {{$json.stats.totalReach}}

SHARE OF VOICE:
{{JSON.stringify($json.stats.shareOfVoice, null, 2)}}

BY PLATFORM:
{{JSON.stringify($json.stats.byPlatform, null, 2)}}

BY BRAND/COMPETITOR:
{{JSON.stringify($json.stats.byBrand, null, 2)}}

TOP MENTIONS THIS WEEK:
{{$json.mentionsForGPT}}
  • FUNCTION NODE - 'Build HTML Email': Code: see 'Digest Email HTML Template' component below
SENDGRID NODE — 'Send Digest Email' configuration
- From: digest@clientdomain.com (configured sender)
- To: {{DIGEST_RECIPIENTS}} (comma-separated list)
- Subject: '📊 Weekly Brand Intelligence Digest | {{$json.weekStart}} – {{$json.weekEnd}}'
- Content Type: text/html
- Body: {{$json.emailHtml}}
SLACK NODE — 'Post Digest Summary to Slack' message template (Channel: #weekly-digest)

📊 *Weekly Brand Intelligence Digest*
📅 {{$json.weekStart}} – {{$json.weekEnd}}

*Key Numbers:*
• Total Mentions: {{$json.stats.totalMentions}}
• Sentiment: {{$json.stats.sentimentRatio}}% positive
• Total Reach: {{$json.stats.totalReach.toLocaleString()}}

*AI Summary:*
{{$json.aiAnalysis.substring(0, 1500)}}

📧 Full digest sent to the team via email.
GOOGLE SHEETS NODE — 'Log Digest Record' configuration
- Operation: Append Row
- Sheet: Weekly Digests
- Data: week_start, week_end, total_mentions, avg_sentiment, top_positive, top_negative, competitor_summary, digest_html, sent_at
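Before wiring the workflow end to end, the arithmetic in the 'Aggregate Statistics' Function node can be sanity-checked in isolation. This standalone snippet reproduces the sentiment-ratio and reach calculations on sample rows, including the parseInt fallback for malformed reach values.

```javascript
// Standalone check of the digest aggregation arithmetic from the
// 'Aggregate Statistics' Function node, run on sample rows.
const sample = [
  { sentiment: 'Positive', reach: '1200' },
  { sentiment: 'Positive', reach: '300' },
  { sentiment: 'Negative', reach: '5000' },
  { sentiment: 'Neutral', reach: 'n/a' }  // malformed reach falls back to 0
];

const positive = sample.filter(m => m.sentiment === 'Positive').length;

// Same reduce as the workflow: parseInt(m.reach) || 0 guards bad values
const totalReach = sample.reduce((sum, m) => sum + (parseInt(m.reach) || 0), 0);

// Sentiment ratio: positive mentions as a percentage of all mentions
const sentimentRatio = sample.length > 0
  ? ((positive / sample.length) * 100).toFixed(1)
  : '0';
```

Two positive rows out of four give a 50.0% ratio, and the 'n/a' reach is counted as zero rather than producing NaN.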

Digest Analysis System Prompt

Type: prompt. System prompt for GPT-5.4 mini that instructs it to analyze weekly brand mention data and produce a structured, actionable executive digest suitable for marketing agency leadership and their clients.

Implementation:

Digest Analysis System Prompt

You are a senior brand intelligence analyst working for a marketing agency. Your job is to analyze weekly brand monitoring data and produce a concise, insightful executive digest.

Your analysis must follow this exact structure:

## 1. EXECUTIVE SUMMARY (2-3 sentences)
The single most important takeaway from this week's data. Lead with insight, not statistics.

## 2. SENTIMENT OVERVIEW
Brief analysis of overall sentiment trends. Compare to what would be expected. Flag any concerning shifts.

## 3. TOP POSITIVE HIGHLIGHTS
List the 3-5 most impactful positive mentions with context on why they matter.

## 4. ATTENTION REQUIRED
List any negative mentions or emerging issues that need the team's attention. Include recommended actions.

## 5. COMPETITOR INTELLIGENCE
Analyze competitor mention volume and sentiment. Highlight any notable competitor activities, campaigns, or shifts in share of voice. Identify competitive threats or opportunities.

## 6. PLATFORM BREAKDOWN
Which platforms are driving the most conversation? Any platform-specific trends?

## 7. RECOMMENDED ACTIONS
Provide 3-5 specific, actionable recommendations based on this week's data. These should be things the marketing team can act on immediately.

RULES:
- Be concise. Each section should be 2-5 sentences maximum.
- Use plain, professional language suitable for agency executives.
- Always quantify when possible (mention counts, reach numbers, percentages).
- If data is insufficient for a section, say so briefly rather than fabricating insights.
- Never invent mentions or statistics not present in the provided data.
- Use bullet points for lists.
- Format competitor names in **bold**.
- End with a forward-looking statement about what to watch next week.
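For reference, the request the 'Generate AI Analysis' node issues can be sketched as a plain Chat Completions payload: the system prompt above plus the composed user message, with the model and sampling settings from this guide's node configuration. The payload shape follows OpenAI's chat completions API; the model name is taken from this guide, not verified against OpenAI's catalog.

```javascript
// Sketch of the Chat Completions payload behind 'Generate AI Analysis'.
// Model name and sampling settings come from this guide's node config;
// the payload shape follows OpenAI's chat completions API.
function buildDigestAnalysisRequest(systemPrompt, userMessage) {
  return {
    model: 'gpt-5.4-mini',
    temperature: 0.3,   // low temperature keeps the digest factual and stable
    max_tokens: 2000,
    messages: [
      { role: 'system', content: systemPrompt },
      { role: 'user', content: userMessage }
    ]
  };
}
```

Keeping the system prompt versioned alongside the workflow makes quarterly prompt tuning (see Maintenance) auditable.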

Digest Email HTML Template

Type: integration. A Function node that combines the AI analysis with statistics and branding into a professional, mobile-responsive HTML digest email. Customizable per client.

Implementation:

n8n Function Node Code (paste into the 'Build HTML Email' Function node)
const stats = $input.item.json.stats;
const aiAnalysis = $input.item.json.aiAnalysis || $('Generate AI Analysis').item.json.message.content;
const weekStart = $input.item.json.weekStart;
const weekEnd = $input.item.json.weekEnd;
const topPositive = $input.item.json.topPositive || [];
const topNegative = $input.item.json.topNegative || [];

// Convert markdown headers to HTML
const analysisHtml = aiAnalysis
  .replace(/## (\d+\..+)/g, '<h3 style="color:#1a1a2e;margin-top:24px;margin-bottom:8px;font-size:16px;">$1</h3>')
  .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>')
  .replace(/\n- /g, '\n<br>• ')
  .replace(/\n\* /g, '\n<br>• ')
  .replace(/\n/g, '<br>');

// CLIENT BRANDING - UPDATE THESE VALUES PER CLIENT
const CLIENT_NAME = 'Agency Name';  // CHANGE THIS
const LOGO_URL = 'https://via.placeholder.com/200x50?text=Logo';  // CHANGE THIS
const PRIMARY_COLOR = '#1a1a2e';  // CHANGE THIS
const ACCENT_COLOR = '#e94560';  // CHANGE THIS

const sentimentColor = parseFloat(stats.sentimentRatio) >= 60 ? '#27ae60' : parseFloat(stats.sentimentRatio) >= 40 ? '#f39c12' : '#e74c3c';

const emailHtml = `
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Weekly Brand Intelligence Digest</title>
</head>
<body style="margin:0;padding:0;background-color:#f4f4f8;font-family:Arial,Helvetica,sans-serif;">
  <table role="presentation" width="100%" cellpadding="0" cellspacing="0" style="background-color:#f4f4f8;">
    <tr><td align="center" style="padding:20px 10px;">
      <table role="presentation" width="600" cellpadding="0" cellspacing="0" style="background-color:#ffffff;border-radius:8px;overflow:hidden;box-shadow:0 2px 8px rgba(0,0,0,0.1);">
        
        <!-- Header -->
        <tr><td style="background-color:${PRIMARY_COLOR};padding:24px 32px;text-align:center;">
          <img src="${LOGO_URL}" alt="${CLIENT_NAME}" style="max-width:180px;height:auto;margin-bottom:12px;">
          <h1 style="color:#ffffff;margin:0;font-size:22px;font-weight:600;">Weekly Brand Intelligence Digest</h1>
          <p style="color:#ccccdd;margin:8px 0 0;font-size:14px;">${weekStart} — ${weekEnd}</p>
        </td></tr>
        
        <!-- Stats Bar -->
        <tr><td style="padding:24px 32px;background-color:#f8f9fa;">
          <table role="presentation" width="100%" cellpadding="0" cellspacing="0">
            <tr>
              <td width="25%" style="text-align:center;padding:8px;">
                <div style="font-size:28px;font-weight:700;color:${PRIMARY_COLOR};">${stats.totalMentions}</div>
                <div style="font-size:12px;color:#666;text-transform:uppercase;letter-spacing:0.5px;">Total Mentions</div>
              </td>
              <td width="25%" style="text-align:center;padding:8px;">
                <div style="font-size:28px;font-weight:700;color:${sentimentColor};">${stats.sentimentRatio}%</div>
                <div style="font-size:12px;color:#666;text-transform:uppercase;letter-spacing:0.5px;">Positive</div>
              </td>
              <td width="25%" style="text-align:center;padding:8px;">
                <div style="font-size:28px;font-weight:700;color:${ACCENT_COLOR};">${stats.negativeMentions}</div>
                <div style="font-size:12px;color:#666;text-transform:uppercase;letter-spacing:0.5px;">Negative</div>
              </td>
              <td width="25%" style="text-align:center;padding:8px;">
                <div style="font-size:28px;font-weight:700;color:${PRIMARY_COLOR};">${stats.totalReach.toLocaleString()}</div>
                <div style="font-size:12px;color:#666;text-transform:uppercase;letter-spacing:0.5px;">Total Reach</div>
              </td>
            </tr>
          </table>
        </td></tr>
        
        <!-- AI Analysis -->
        <tr><td style="padding:24px 32px;">
          ${analysisHtml}
        </td></tr>
        
        <!-- Share of Voice -->
        <tr><td style="padding:0 32px 24px;">
          <h3 style="color:${PRIMARY_COLOR};border-bottom:2px solid ${ACCENT_COLOR};padding-bottom:8px;">Share of Voice</h3>
          <table role="presentation" width="100%" cellpadding="8" cellspacing="0" style="font-size:14px;">
            ${Object.entries(stats.shareOfVoice || {}).map(([brand, pct]) => 
              `<tr style="border-bottom:1px solid #eee;"><td><strong>${brand}</strong></td><td style="text-align:right;">${pct}</td></tr>`
            ).join('')}
          </table>
        </td></tr>
        
        <!-- Footer -->
        <tr><td style="background-color:${PRIMARY_COLOR};padding:20px 32px;text-align:center;">
          <p style="color:#ccccdd;margin:0;font-size:12px;">Powered by AI Brand Intelligence | Managed by Your MSP</p>
          <p style="color:#999;margin:8px 0 0;font-size:11px;">This digest was automatically generated. Data sourced from social media, news, blogs, and forums.</p>
        </td></tr>
        
      </table>
    </td></tr>
  </table>
</body>
</html>
`;

return {
  json: {
    emailHtml,
    aiAnalysis,
    stats,
    weekStart,
    weekEnd,
    topPositive,
    topNegative
  }
};
  • Update CLIENT_NAME, LOGO_URL, PRIMARY_COLOR, and ACCENT_COLOR per client
  • LOGO_URL should point to a hosted image (use the client's website logo or upload to an image CDN)
  • Test rendering in https://litmus.com or https://www.htmlemailcheck.com before go-live
  • The template is mobile-responsive using inline styles (required for email clients)
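The markdown-to-HTML conversion in the template relies on a short regex chain, which is easy to verify in isolation. This snippet applies the same chain (with the inline h3 style omitted for brevity) to a sample analysis and confirms headers, bold competitor names, and bullets convert as expected.

```javascript
// Self-check of the markdown-to-HTML regex chain used in 'Build HTML Email'.
// Inline styles on the <h3> are omitted here for brevity.
const sampleAnalysis = '## 1. EXECUTIVE SUMMARY\nStrong week for **Acme**.\n- Mentions up 20%';

const html = sampleAnalysis
  .replace(/## (\d+\..+)/g, '<h3>$1</h3>')          // numbered section headers
  .replace(/\*\*(.+?)\*\*/g, '<strong>$1</strong>') // bold competitor names
  .replace(/\n- /g, '\n<br>• ')                     // hyphen bullets
  .replace(/\n\* /g, '\n<br>• ')                    // asterisk bullets
  .replace(/\n/g, '<br>');                          // remaining line breaks
```

If GPT starts emitting markdown the chain does not cover (tables, nested lists), extend the regexes or switch to a small markdown library rather than letting raw markdown leak into the email.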

Alert Urgency Classifier

Type: skill. JavaScript function that classifies incoming mentions into urgency tiers (critical, high, standard) based on configurable rules including sentiment, reach, crisis keywords, and source authority. Used in the real-time collection workflow's Function node.

Implementation:

// Alert Urgency Classifier - Configurable per client
// Place this in an n8n Function node

// ===== CLIENT-SPECIFIC CONFIGURATION =====
const CONFIG = {
  // Crisis keywords trigger immediate critical alerts
  crisisKeywords: [
    'lawsuit', 'scandal', 'recall', 'hack', 'breach', 'fraud',
    'fired', 'boycott', 'investigation', 'class action', 'data leak',
    'discrimination', 'harassment', 'SEC', 'FDA warning'
  ],
  
  // Tier 1 publications - mentions here are always high priority
  tier1Sources: [
    'nytimes.com', 'wsj.com', 'bbc.com', 'reuters.com', 'bloomberg.com',
    'techcrunch.com', 'theverge.com', 'mashable.com', 'adweek.com',
    'adage.com', 'marketingweek.com', 'digiday.com'
  ],
  
  // Thresholds
  criticalReachThreshold: 10000,  // Negative + this reach = critical
  highReachThreshold: 1000,       // Negative + this reach = high
  highReachAnysentiment: 5000,    // Any sentiment + this reach = high
  
  // Mention volume spike detection (3x rolling average)
  spikeMultiplier: 3.0
};

// ===== CLASSIFICATION LOGIC =====
function classifyMention(mention) {
  const content = (mention.content || '').toLowerCase();
  const reach = parseInt(mention.reach) || 0;
  const sentiment = mention.sentiment || 'Neutral';
  const source = (mention.platform || '').toLowerCase();
  const url = (mention.url || '').toLowerCase();
  
  let urgency = 'standard';
  let reasons = [];
  
  // Check crisis keywords
  const foundCrisis = CONFIG.crisisKeywords.filter(kw => content.includes(kw));
  if (foundCrisis.length > 0) {
    urgency = 'critical';
    reasons.push(`Crisis keyword detected: ${foundCrisis.join(', ')}`);
  }
  
  // Check negative + high reach
  if (sentiment === 'Negative' && reach > CONFIG.criticalReachThreshold) {
    urgency = 'critical';
    reasons.push(`Negative sentiment with reach ${reach.toLocaleString()}`);
  }
  
  // Check Tier 1 source
  const isTier1 = CONFIG.tier1Sources.some(s => url.includes(s));
  if (isTier1) {
    if (urgency !== 'critical') urgency = 'high';
    reasons.push(`Tier 1 source: ${url}`);
  }
  
  // Check negative + moderate reach
  if (sentiment === 'Negative' && reach > CONFIG.highReachThreshold && urgency === 'standard') {
    urgency = 'high';
    reasons.push(`Negative sentiment with reach ${reach.toLocaleString()}`);
  }
  
  // Check any sentiment + very high reach
  if (reach > CONFIG.highReachAnysentiment && urgency === 'standard') {
    urgency = 'high';
    reasons.push(`High reach mention: ${reach.toLocaleString()}`);
  }
  
  return {
    ...mention,
    urgency,
    alert_reasons: reasons.join('; ') || 'Standard monitoring - included in weekly digest'
  };
}

// Process the incoming mention
const mention = $input.item.json;
const classified = classifyMention(mention);

return { json: classified };
1. Update the crisisKeywords array with industry-specific terms for each client.
2. Update tier1Sources with publications relevant to the client's industry.
3. Adjust reach thresholds based on the client's brand size:
   - Small/niche brand: criticalReachThreshold: 5000, highReachThreshold: 500
   - Mid-size brand: criticalReachThreshold: 10000, highReachThreshold: 1000 (default)
   - Large/well-known brand: criticalReachThreshold: 50000, highReachThreshold: 5000
4. Document the configured thresholds in the client's runbook.
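A quick spot-check of the tiers, using a condensed version of the classification logic above with the default thresholds, mirrors the expected outcomes of Tests 3-5 in the validation plan.

```javascript
// Spot-check of the urgency tiers using a condensed classifyMention
// with the default thresholds from the CONFIG above.
const CONFIG = {
  crisisKeywords: ['lawsuit'],
  criticalReachThreshold: 10000,
  highReachThreshold: 1000
};

function classify(content, sentiment, reach) {
  // Crisis keywords always escalate to critical
  if (CONFIG.crisisKeywords.some(kw => content.toLowerCase().includes(kw))) return 'critical';
  // Negative sentiment escalates by reach
  if (sentiment === 'Negative' && reach > CONFIG.criticalReachThreshold) return 'critical';
  if (sentiment === 'Negative' && reach > CONFIG.highReachThreshold) return 'high';
  return 'standard';
}
```

The three calls below correspond to the crisis-keyword, negative-with-moderate-reach, and low-reach-positive scenarios used in Tests 3, 4, and 5.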

Competitor Share-of-Voice Calculator

Type: skill. Calculates and tracks share-of-voice percentages across the client's brand and all monitored competitors, with week-over-week trend analysis. Feeds into both the weekly digest and the Google Sheets Competitor Tracker tab.

Implementation:

n8n Function Node — Competitor Share-of-Voice Calculator
// Competitor Share-of-Voice Calculator
// n8n Function Node - runs as part of the Weekly Digest workflow

const mentions = $input.all().map(i => i.json);

// Group mentions by brand/competitor
const brandGroups = {};
mentions.forEach(m => {
  const brand = m.brand_or_competitor || 'Unknown';
  if (!brandGroups[brand]) {
    brandGroups[brand] = {
      name: brand,
      mentionCount: 0,
      totalReach: 0,
      positive: 0,
      negative: 0,
      neutral: 0,
      platforms: {},
      topMentions: []
    };
  }
  const group = brandGroups[brand];
  group.mentionCount++;
  group.totalReach += parseInt(m.reach) || 0;
  
  if (m.sentiment === 'Positive') group.positive++;
  else if (m.sentiment === 'Negative') group.negative++;
  else group.neutral++;
  
  const platform = m.platform || 'Unknown';
  group.platforms[platform] = (group.platforms[platform] || 0) + 1;
  
  // Track top mentions by reach
  group.topMentions.push({ content: m.content, reach: parseInt(m.reach) || 0, url: m.url });
});

// Calculate share of voice
const totalMentions = mentions.length;
const shareOfVoice = {};
const competitorReport = [];

for (const [brand, data] of Object.entries(brandGroups)) {
  const sovPct = totalMentions > 0 ? ((data.mentionCount / totalMentions) * 100) : 0;
  shareOfVoice[brand] = sovPct.toFixed(1) + '%';
  
  // Sort top mentions by reach, keep top 3
  data.topMentions.sort((a, b) => b.reach - a.reach);
  const notable = data.topMentions.slice(0, 3).map(m => m.content.substring(0, 100)).join(' | ');
  
  competitorReport.push({
    date: new Date().toISOString().split('T')[0],
    competitor_name: brand,
    mention_count: data.mentionCount,
    avg_sentiment: data.mentionCount > 0 
      ? ((data.positive - data.negative) / data.mentionCount).toFixed(2) 
      : '0.00',
    share_of_voice_pct: sovPct.toFixed(1),
    notable_mentions: notable,
    positive_count: data.positive,
    negative_count: data.negative,
    total_reach: data.totalReach,
    top_platform: Object.entries(data.platforms).sort((a, b) => b[1] - a[1])[0]?.[0] || 'N/A'
  });
}

// Sort by share of voice descending
competitorReport.sort((a, b) => parseFloat(b.share_of_voice_pct) - parseFloat(a.share_of_voice_pct));

return {
  json: {
    shareOfVoice,
    competitorReport,
    totalMentionsAnalyzed: totalMentions,
    brandGroups // Pass full data to next node for GPT analysis
  }
};
  • This node runs between the Google Sheets read and the OpenAI analysis in the Weekly Digest workflow
  • Output feeds into both the GPT prompt (for narrative analysis) and a Google Sheets append (Competitor Tracker tab)
  • Add a Google Sheets node after this to append competitorReport rows to the 'Competitor Tracker' tab
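The share-of-voice arithmetic itself is simple: each brand's mention count divided by the total across all tracked brands. A worked example with four mentions makes the expected output concrete.

```javascript
// Worked share-of-voice example: 3 of 4 mentions belong to the client's brand.
const mentions = [
  { brand_or_competitor: 'Acme' },
  { brand_or_competitor: 'Acme' },
  { brand_or_competitor: 'Acme' },
  { brand_or_competitor: 'CompetitorX' }
];

// Count mentions per brand
const counts = {};
mentions.forEach(m => {
  counts[m.brand_or_competitor] = (counts[m.brand_or_competitor] || 0) + 1;
});

// Convert counts to share-of-voice percentages, same formula as the calculator above
const shareOfVoice = {};
for (const [brand, count] of Object.entries(counts)) {
  shareOfVoice[brand] = ((count / mentions.length) * 100).toFixed(1) + '%';
}
```

With this input, Acme gets 75.0% and CompetitorX 25.0%; the percentages always sum to 100% of tracked mentions, which is a useful invariant to assert in Test 9.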

Testing & Validation

  • TEST 1 - Brand24 API Connectivity: In n8n, create a simple HTTP Request node pointing to https://api.brand24.com/v3/projects with the Brand24 Header Auth credential. Execute it and verify a 200 response with the project list JSON. Expected: JSON array containing the configured project with its ID and name.
  • TEST 2 - Mention Collection Pipeline: Manually execute the 'Brand Monitor - Real-Time Collection' workflow. Verify that (a) Brand24 returns mention data, (b) each mention passes through the urgency classifier, (c) all mentions are appended to the Google Sheets 'Raw Mentions' tab with all 12 columns populated, and (d) no errors appear in the execution log. Expected: At least 1 mention collected and logged.
  • TEST 3 - Critical Alert Delivery: Inject a test mention into the workflow with sentiment='Negative', reach=15000, and content containing the word 'lawsuit'. Verify that (a) the urgency classifier labels it 'critical', (b) a Slack message appears in #brand-alerts within 30 seconds, and (c) the message contains the red 🚨 critical alert formatting with mention details.
  • TEST 4 - High Priority Alert Delivery: Inject a test mention with sentiment='Negative' and reach=2000. Verify it is classified as 'high' priority and a ⚠️ formatted Slack alert appears in #brand-alerts.
  • TEST 5 - Standard Mention Logging (No Alert): Inject a test mention with sentiment='Positive' and reach=50. Verify it is classified as 'standard', logged to Google Sheets, but NO Slack alert is sent. This confirms alert fatigue prevention is working.
  • TEST 6 - Weekly Digest AI Analysis: Manually execute the Weekly Digest workflow. Verify that (a) Google Sheets data for the past 7 days is retrieved, (b) the statistics aggregation function produces correct counts (manually verify against the sheet), (c) GPT-5.4 mini returns a structured analysis following all 7 sections defined in the system prompt, and (d) the analysis contains no hallucinated statistics.
  • TEST 7 - Email Digest Delivery and Rendering: After the digest workflow completes, verify (a) the email arrives in the test recipient's inbox (check spam folder), (b) the HTML renders correctly in Gmail web, Outlook desktop, and Apple Mail, (c) all statistics in the email header match the Google Sheets data, (d) the Share of Voice table populates correctly, and (e) the client logo and brand colors display properly.
  • TEST 8 - Slack Digest Summary: Verify the weekly digest summary posts to the #weekly-digest Slack channel with correct statistics and a truncated AI summary.
  • TEST 9 - Google Sheets Competitor Tracker: After the digest workflow runs, verify the 'Competitor Tracker' tab has new rows for each tracked brand/competitor with correct mention counts, sentiment averages, and share-of-voice percentages.
  • TEST 10 - Error Handling and MSP Alerts: Temporarily replace the Brand24 API key with an invalid key. Run the collection workflow. Verify (a) the workflow fails gracefully without crashing n8n, (b) an error notification is sent to the MSP's Slack #ops-alerts channel with the error message, and (c) subsequent scheduled runs still trigger (the cron isn't disabled by the error). Restore the correct API key afterward.
  • TEST 11 - Backup and Recovery: Run the backup script manually (~/n8n-brand-monitor/backup.sh). Verify a .tar.gz file is created in /mnt/backup/. To test recovery: stop Docker containers, delete the postgres_data directory, restore from backup, restart containers, and verify n8n loads with all workflows and credentials intact.
  • TEST 12 - Uptime Monitoring: Verify UptimeRobot shows the n8n monitor as 'Up' with a green status. Temporarily stop the n8n container (docker compose stop n8n). Wait 5–10 minutes and verify UptimeRobot sends a down alert. Restart the container and verify an 'Up' recovery notification is received.
  • TEST 13 - Full Week Simulation: Let the system run autonomously for one full week (7 days) without manual intervention. At the end of the week, verify: (a) collection ran 28 times (4x/day × 7 days), (b) the Monday digest was generated and delivered on schedule, (c) no workflow errors occurred, (d) Google Sheets has a complete record of all mentions, and (e) any high/critical mentions generated appropriate real-time alerts.

Client Handoff

The client handoff session should be a 60-minute video call (recorded for future reference) covering the following:

1. System Overview (10 min): Walk through the architecture diagram showing how Brand24 feeds into n8n, which uses AI to generate alerts and digests. Explain the three Slack channels (#brand-alerts, #competitor-intel, #weekly-digest) and what to expect in each. Show the weekly digest email sample and explain each section.
2. Brand24 Dashboard Training (15 min): Log into Brand24 with the client's credentials. Show how to view live mentions, filter by sentiment/source/date, and understand the built-in analytics dashboard. Demonstrate how to add or remove keywords (noting this should be coordinated with the MSP to update the automation).
3. Slack Alerts Guide (10 min): Demonstrate the alert format differences between critical (🚨), high (⚠️), and summary posts. Explain how to @mention team members or react to alerts. Show how to mute or adjust notification preferences per channel.
4. Weekly Digest Walkthrough (10 min): Review a sample weekly digest email section by section. Explain the statistics bar, AI analysis sections, share-of-voice table, and recommended actions. Clarify that the AI recommendations are suggestions, not directives.
5. Google Sheets Archive Access (5 min): Show the client the shared Google Sheet with Raw Mentions, Weekly Digests, and Competitor Tracker tabs. Explain they can build their own Looker Studio dashboards from this data or use it for client reporting.
6. Change Request Process (5 min): Explain how to request changes: adding/removing tracked keywords, adjusting alert thresholds, modifying digest recipients, updating the competitor list. Provide the MSP support email/ticket system. Typical turnaround: 1–2 business days for keyword changes, 3–5 days for workflow modifications.
7. Escalation and Support (5 min): Provide MSP contact information for: (a) urgent issues (system down, no alerts received) — MSP support phone/email with 4-hour SLA, (b) non-urgent requests (keyword changes, threshold adjustments) — ticket system with 48-hour SLA, (c) quarterly review scheduling.

Documentation to Deliver

Success Criteria to Review Together

Maintenance

Weekly Maintenance (15 min/week, automated monitoring)

  • Review n8n execution logs for any failed workflows (check every Monday after digest delivery)
  • Verify UptimeRobot shows 99.9%+ uptime for the past week
  • Spot-check one Slack alert and the weekly digest for quality/accuracy
  • Monitor OpenAI API usage dashboard to ensure spending is within the $50/month cap

Monthly Maintenance (1–2 hours/month)

  • Review and rotate API keys if security policy requires it (OpenAI, Brand24, SendGrid, Slack)
  • Update n8n to the latest stable version
  • Review Brand24 mention volume — if consistently hitting the 5,000/month limit, discuss plan upgrade with client
  • Check Google Sheets row count — if approaching 100,000 rows, archive older data to a separate sheet
  • Review and clear n8n execution history older than 30 days to prevent database bloat: n8n Settings > Execution Data > Prune
  • Verify backup script is running daily and /mnt/backup has recent files
  • Send monthly service report to client: uptime percentage, total mentions processed, alerts delivered, digest delivery confirmations
To update n8n:

cd ~/n8n-brand-monitor && docker compose pull n8n && docker compose up -d n8n

Quarterly Maintenance (2–3 hours/quarter)

  • Schedule a 30-minute review call with the client to discuss: effectiveness of monitoring keywords (add/remove as needed), accuracy of sentiment analysis (adjust Brand24 filters if too many false positives), relevance of competitor list (businesses change, new competitors emerge), alert threshold tuning (too many alerts = fatigue, too few = missed insights), satisfaction with digest format and content
  • Review and update the GPT system prompt if digest quality has drifted or client wants different emphasis
  • Test disaster recovery: perform a backup restore to a test environment to verify recoverability
  • Review platform pricing changes and renegotiate if needed
  • Update crisis keyword list based on industry developments

Annual Maintenance

  • Full security audit: SSL certificates (auto-renewed by Caddy), server OS patches, Docker image vulnerabilities
  • VPS right-sizing: review CPU/RAM usage metrics — scale up or down based on actual utilization
  • Contract renewal discussions with client: review ROI, propose enhancements (e.g., adding new channels, more competitors, custom dashboards)
  • Evaluate whether Brand24 is still the best-value platform or if a migration to Mention, Mentionlytics, or Sprout Social is warranted
  • Upgrade Ubuntu to the newest LTS release when one is available (LTS versions ship every two years)
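For the VPS right-sizing review, a quick utilization snapshot captured a few times across a busy week is usually enough to decide whether to scale. A sketch using standard Linux tools; the per-container step is skipped when Docker is absent:

```shell
# VPS utilization snapshot for the annual right-sizing review.
date
uptime                         # load averages vs. number of vCPUs
free -h                        # RAM: compare "available" against total
df -h /                        # disk headroom on the root filesystem
# Per-container CPU/RAM, when Docker is present:
command -v docker >/dev/null && docker stats --no-stream || true
```

If load averages consistently sit well below the vCPU count and RAM "available" stays above ~50%, a smaller plan is likely sufficient; sustained load above the vCPU count argues for scaling up.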

SLA Considerations

  • Real-time alerts: 99.5% delivery uptime (maximum 3.65 hours downtime/month)
  • Weekly digest: delivered within 2 hours of scheduled time, 100% of weeks (52/52)
  • MSP response time for critical issues (system down): 4 hours during business hours; after-hours issues addressed the next business day
  • MSP response time for change requests: 2 business days for keyword/threshold changes, 5 business days for workflow modifications
  • Data retention: 24 months of mention archives maintained in Google Sheets
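As a sanity check, the 99.5% uptime target converts to the stated monthly downtime budget (730 ≈ average hours in a month, 8,760 h/year ÷ 12):

```shell
# Monthly downtime budget implied by a 99.5% uptime SLA.
awk 'BEGIN { printf "%.2f hours/month\n", (1 - 0.995) * 730 }'
# prints: 3.65 hours/month
```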

Escalation Path

1. Automated: UptimeRobot alerts the MSP ops team via Slack + email
2. Level 1: MSP junior technician checks n8n logs, restarts containers if needed (15 min resolution target)
3. Level 2: MSP senior technician investigates API failures, workflow errors, or data quality issues (2 hour resolution target)
4. Level 3: MSP architect for platform migrations, major workflow redesigns, or vendor escalations
5. Vendor support: Brand24 support (support@brand24.com), n8n community forum or paid support, OpenAI API status page (status.openai.com)

Alternatives

...

Fully Managed SaaS (No Self-Hosting)

Replace self-hosted n8n with n8n Cloud ($20/month starter), Zapier ($19.99/month), or Make.com ($9/month). All monitoring, orchestration, and delivery runs entirely on vendor-managed SaaS platforms with zero server management. Brand24's built-in weekly email reports can partially replace custom digests for simpler requirements.

Enterprise-Grade: Sprout Social or Brandwatch + Custom Integration

Use Sprout Social Advanced Listening ($999+/month add-on) or Brandwatch ($2,000–$5,000/month) as the monitoring platform instead of Brand24. These platforms have more sophisticated built-in analytics, deeper historical data, and native competitive benchmarking. Pair with Make.com or custom Python scripts for digest generation.

Open-Source DIY: Python + Scrapers + Self-Hosted LLM

Build a fully custom solution using Python scripts with social media APIs (Twitter/X API, Reddit API, NewsAPI), BeautifulSoup/Scrapy for web scraping, a local sentiment model (VADER or transformers), and an open-source LLM (Llama 3 or Mistral) running on a GPU VPS for digest generation. Orchestrate with CrewAI or LangChain.

Budget Swap: Mentionlytics Instead of Brand24

Replace Brand24 ($149/month) with Mentionlytics (from $69/month) for budget-conscious agencies. Mentionlytics includes sentiment analysis, competitor tracking, and AI-powered insights even on its lower pricing tiers. The rest of the stack remains the same (n8n + OpenAI + SendGrid + Slack).

CrewAI Multi-Agent Orchestration
