
Implementation Guide: Draft service follow-up communications and maintenance reminder campaigns
Step-by-step implementation guide for deploying AI to draft service follow-up communications and maintenance reminder campaigns for Automotive clients.
Hardware Procurement
Automation Server
$1,800 per unit (MSP cost) / $2,800–$3,200 suggested resale
Hosts the self-hosted n8n Community Edition workflow orchestrator, local DMS API bridge middleware, prompt template repository, and audit logging database. Provides on-premise control for data processing and eliminates per-execution cloud fees. Can serve multiple dealership clients if MSP centralizes operations.
Firewall / UTM Appliance
$400 hardware + $200/year FortiGuard subscription (MSP cost) / $800 hardware + $400/year suggested resale
Secures API traffic between on-premise systems and cloud services (OpenAI, Twilio, DMS APIs). Provides network segmentation to isolate customer PII processing from general dealership traffic, supporting FTC Safeguards Rule compliance. Enables outbound HTTPS whitelisting to approved API endpoints only.
Uninterruptible Power Supply
$550 per unit (MSP cost) / $750 suggested resale
Protects the automation server from power interruptions, ensuring workflow continuity and preventing data corruption during active DMS sync or campaign-generation operations.
Staff Workstations (if needed)
$900 per unit (MSP cost) / $1,200 suggested resale
Workstations for the service manager and marketing coordinator to review AI-generated content, approve campaigns, and manage the n8n dashboard. Only needed if existing dealership workstations are outdated or insufficient.
Software Procurement
n8n Community Edition
$0/month software cost; MSP resells as managed service at $100–$200/month
Core workflow orchestration engine. Connects DMS APIs to AI content generation APIs to email/SMS delivery platforms. Handles scheduling, branching logic, error handling, and audit logging for all automated communications.
OpenAI API (GPT-5.4 mini)
$0.15/1M input tokens, $0.60/1M output tokens; estimated $15–$50/month per dealership (2,000 customers)
Primary AI engine for generating personalized service follow-up emails, maintenance reminders, and campaign content. GPT-5.4 mini provides excellent quality at very low cost for template-based generation tasks.
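The per-token prices above translate into generation costs as follows. A minimal sketch; the token counts per message are illustrative assumptions (real prompts that embed service history and branding will be larger), so treat the result as a floor, not the full monthly estimate:

```python
# Rough cost model for GPT-5.4 mini at the prices quoted above:
# $0.15 per 1M input tokens, $0.60 per 1M output tokens.
# Token counts per message are assumptions for illustration.

def monthly_cost_usd(messages_per_month: int,
                     input_tokens_per_msg: int = 600,
                     output_tokens_per_msg: int = 350) -> float:
    input_cost = messages_per_month * input_tokens_per_msg / 1_000_000 * 0.15
    output_cost = messages_per_month * output_tokens_per_msg / 1_000_000 * 0.60
    return round(input_cost + output_cost, 2)

# 2,000 customers x ~4 touches/month = 8,000 generations
print(monthly_cost_usd(8000))
```

Even generous per-message token budgets keep raw generation cost in the low dollars per month; the $15–$50 estimate above leaves headroom for retries, longer prompts, and campaign experimentation.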
Anthropic API (Claude Haiku 4.5) — Fallback
$0.80/1M input tokens, $4.00/1M output tokens; estimated $20–$75/month per dealership if used as primary
Secondary/fallback AI engine. Provides diversity in content generation style and serves as redundancy if OpenAI experiences outages. Excellent at maintaining consistent brand voice across long campaigns. License type: Usage-based API (pay-per-token).
Twilio SendGrid Pro
$89.95/month for 100K emails (MSP cost) / $150–$200/month suggested resale
Transactional and marketing email delivery platform. Handles all email sending including post-service follow-ups, maintenance reminders, and seasonal campaigns. Pro tier includes dedicated IP address for deliverability and advanced analytics.
Twilio Programmable SMS
~$0.0079/segment + carrier surcharges (~$0.003–$0.005); estimated $40–$120/month per dealership / resell at $0.02–$0.03/msg
SMS delivery for maintenance reminders and service follow-ups. Supports A2P 10DLC registration for compliant business texting. Two-way messaging enables customers to reply to schedule appointments.
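The per-segment pricing above depends on message length. A minimal sketch of the segment math, assuming plain GSM-7 text (160 characters fit in one segment; concatenated messages carry 153 per segment — Unicode messages segment at 70/67 instead); the carrier fee uses the midpoint of the surcharge range quoted above:

```python
import math

SEGMENT_PRICE = 0.0079  # Twilio per-segment price quoted above
CARRIER_FEE = 0.004     # midpoint of the $0.003-$0.005 surcharge range

def segments(body: str) -> int:
    # GSM-7: one segment up to 160 chars, then 153 chars per segment
    if len(body) <= 160:
        return 1
    return math.ceil(len(body) / 153)

def message_cost(body: str) -> float:
    return segments(body) * (SEGMENT_PRICE + CARRIER_FEE)

reminder = ("Hi Sarah! Your Camry is due for its 30,000-mile service. "
            "Book online or call 555-0123. Reply STOP to opt out.")
print(segments(reminder), round(message_cost(reminder), 4))
```

Keeping AI-generated SMS bodies under 160 characters (including the opt-out suffix) halves cost versus two-segment messages, which is worth enforcing in the quality-check step later.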
PostgreSQL 16
$0/month
Local database on the automation server storing consent/opt-in records with timestamps, communication logs for compliance auditing, prompt templates, and campaign performance metrics.
Docker Engine
$0/month
Container runtime for hosting n8n and PostgreSQL on the automation server. Simplifies deployment, updates, and backup/restore operations.
Nginx Proxy Manager
$0/month
Reverse proxy with automatic SSL certificate management for securely exposing n8n webhook endpoints that receive DMS event notifications.
Prerequisites
- Active DMS subscription with API access enabled — CDK Global (Fortellis developer account and app registration), Reynolds & Reynolds (RCI Program certification initiated), or Tekion (Automotive Partner Cloud API key). Verify API access tier with the DMS vendor before beginning implementation.
- Dealership email domain (e.g., service@dealername.com) with DNS management access for configuring SPF, DKIM, and DMARC records required for email deliverability.
- Static public IP address at the dealership or MSP data center for whitelisting with DMS API providers and for hosting n8n webhook endpoints.
- Minimum 50 Mbps symmetric internet connection (100+ Mbps recommended) with low latency for API calls to OpenAI, Twilio, and DMS endpoints.
- Existing customer database with minimum fields: customer name, email, phone, vehicle year/make/model/VIN, last service date, services performed, and next recommended service. This data typically resides in the DMS.
- FTC Safeguards Rule compliance baseline: designated Qualified Individual, written information security plan, multi-factor authentication on all systems accessing customer PII, encryption at rest and in transit.
- A2P 10DLC brand and campaign registration initiated through Twilio (takes 1–4 weeks for approval). Required before any SMS messages can be sent at production volume.
- TCPA-compliant opt-in records: documented prior express written consent for promotional SMS, with timestamps. Service reminders may qualify as informational but marketing upsells require explicit consent.
- CAN-SPAM compliance: physical mailing address for the dealership to include in email footers, working unsubscribe mechanism, accurate sender identification.
- Dealership stakeholder sign-off on communication templates, brand voice guidelines, campaign frequency, and escalation procedures for AI-generated content review.
Installation Steps
Step 1: Provision and Secure the Automation Server
Rack and connect the Dell PowerEdge T360. Install Ubuntu Server 22.04 LTS. Configure the Fortinet FortiGate 40F firewall. Connect the APC UPS. Harden the OS per CIS benchmarks and enable the UFW firewall.
# Install Ubuntu Server 22.04 LTS from USB/iDRAC virtual media
# After OS install, update and harden:
sudo apt update && sudo apt upgrade -y
sudo apt install -y ufw fail2ban unattended-upgrades curl git
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 443/tcp
sudo ufw allow 5678/tcp comment 'n8n web UI - restrict to management IPs later'
sudo ufw enable
# Configure fail2ban for SSH
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
# Enable automatic security updates
sudo dpkg-reconfigure -plow unattended-upgrades
# Set timezone to dealership local time
sudo timedatectl set-timezone America/Chicago
Replace America/Chicago with the dealership's local timezone. Configure the FortiGate 40F to allow outbound HTTPS only to: api.openai.com, api.anthropic.com, api.sendgrid.com, api.twilio.com, and the DMS API endpoint (e.g., api.fortellis.io). Enable FortiGate logging for all API traffic. Set up iDRAC remote management for the PowerEdge on a separate management VLAN.
Step 2: Install Docker and Docker Compose
Install Docker Engine and Docker Compose on the automation server to containerize n8n, PostgreSQL, and Nginx Proxy Manager for simplified deployment and management.
# Install Docker Engine
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Install Docker Compose plugin
sudo apt install -y docker-compose-plugin
# Verify installation
docker --version
docker compose version
# Create project directory
sudo mkdir -p /opt/ai-comms-platform
sudo chown $USER:$USER /opt/ai-comms-platform
cd /opt/ai-comms-platform
Log out and back in after adding the user to the docker group so the membership takes effect. All subsequent commands assume you are in the /opt/ai-comms-platform directory.
Step 3: Create Docker Compose Configuration
Create the docker-compose.yml file that defines the n8n workflow engine, PostgreSQL database, and Nginx Proxy Manager containers with proper networking, volumes, and environment variables.
cat > /opt/ai-comms-platform/docker-compose.yml << 'EOF'
version: '3.8'

volumes:
  n8n_data:
  postgres_data:
  nginx_data:
  nginx_letsencrypt:

networks:
  ai-comms-net:
    driver: bridge

services:
  postgres:
    image: postgres:16-alpine
    container_name: ai-comms-postgres
    restart: always
    networks:
      - ai-comms-net
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n']
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    container_name: ai-comms-n8n
    restart: always
    networks:
      - ai-comms-net
    ports:
      - '5678:5678'
    environment:
      N8N_HOST: ${N8N_HOST}
      N8N_PORT: 5678
      N8N_PROTOCOL: https
      WEBHOOK_URL: https://${N8N_HOST}/
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_BASIC_AUTH_ACTIVE: 'true'
      N8N_BASIC_AUTH_USER: ${N8N_ADMIN_USER}
      N8N_BASIC_AUTH_PASSWORD: ${N8N_ADMIN_PASSWORD}
      GENERIC_TIMEZONE: ${TIMEZONE}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

  nginx-proxy:
    image: jc21/nginx-proxy-manager:latest
    container_name: ai-comms-nginx
    restart: always
    networks:
      - ai-comms-net
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    volumes:
      - nginx_data:/data
      - nginx_letsencrypt:/etc/letsencrypt
EOF
# Create environment file
cat > /opt/ai-comms-platform/.env << 'EOF'
POSTGRES_PASSWORD=CHANGE_ME_STRONG_PASSWORD_32CHARS
N8N_HOST=n8n.yourmsp.com
N8N_ENCRYPTION_KEY=CHANGE_ME_RANDOM_64CHAR_HEX_STRING
N8N_ADMIN_USER=mspadmin
N8N_ADMIN_PASSWORD=CHANGE_ME_STRONG_PASSWORD
TIMEZONE=America/Chicago
EOF
# Set secure permissions on .env
chmod 600 /opt/ai-comms-platform/.env
CRITICAL: Replace all CHANGE_ME values with strong, unique passwords. Generate the N8N_ENCRYPTION_KEY with: openssl rand -hex 32. The N8N_HOST should be a subdomain pointed at the server's static IP (e.g., n8n.yourmsp.com). This encryption key protects stored credentials — back it up securely. If it is lost, all n8n credentials must be re-entered.
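If a shell with openssl is not handy, equivalent values can be generated with Python's standard library. A minimal sketch, matching the formats the .env file expects:

```python
import secrets

# Equivalent of `openssl rand -hex 32` for N8N_ENCRYPTION_KEY:
encryption_key = secrets.token_hex(32)        # 64 hex characters

# A strong, shell-safe password for POSTGRES_PASSWORD / N8N_ADMIN_PASSWORD:
postgres_password = secrets.token_urlsafe(24)  # 32 URL-safe characters

print(len(encryption_key), len(postgres_password))
```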
Step 4: Create Database Initialization Scripts
Create SQL initialization scripts that set up the compliance tracking tables for consent records, communication logs, and campaign metrics alongside the n8n database.
mkdir -p /opt/ai-comms-platform/init-scripts
cat > /opt/ai-comms-platform/init-scripts/01-compliance-tables.sql << 'SQLEOF'
-- Compliance and audit database
-- PostgreSQL has no CREATE DATABASE IF NOT EXISTS; the entrypoint runs this once on first init
CREATE DATABASE compliance;
\c compliance;
CREATE TABLE IF NOT EXISTS consent_records (
id SERIAL PRIMARY KEY,
customer_id VARCHAR(100) NOT NULL,
dealership_id VARCHAR(50) NOT NULL,
channel VARCHAR(10) NOT NULL CHECK (channel IN ('email', 'sms')),
consent_type VARCHAR(20) NOT NULL CHECK (consent_type IN ('promotional', 'informational', 'transactional')),
consent_given BOOLEAN NOT NULL DEFAULT false,
consent_timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
consent_source VARCHAR(255),
ip_address INET,
opt_out_timestamp TIMESTAMPTZ,
opt_out_source VARCHAR(255),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_consent_customer ON consent_records(customer_id, channel);
CREATE INDEX idx_consent_dealership ON consent_records(dealership_id);
CREATE TABLE IF NOT EXISTS communication_log (
id SERIAL PRIMARY KEY,
dealership_id VARCHAR(50) NOT NULL,
customer_id VARCHAR(100) NOT NULL,
channel VARCHAR(10) NOT NULL,
message_type VARCHAR(50) NOT NULL,
subject_line TEXT,
ai_model_used VARCHAR(50),
ai_prompt_template VARCHAR(100),
content_hash VARCHAR(64),
sendgrid_message_id VARCHAR(255),
twilio_message_sid VARCHAR(255),
delivery_status VARCHAR(30),
sent_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
opened_at TIMESTAMPTZ,
clicked_at TIMESTAMPTZ,
bounced_at TIMESTAMPTZ,
unsubscribed_at TIMESTAMPTZ
);
CREATE INDEX idx_comlog_customer ON communication_log(customer_id);
CREATE INDEX idx_comlog_dealership_date ON communication_log(dealership_id, sent_at);
CREATE TABLE IF NOT EXISTS campaign_metrics (
id SERIAL PRIMARY KEY,
dealership_id VARCHAR(50) NOT NULL,
campaign_name VARCHAR(255) NOT NULL,
campaign_type VARCHAR(50) NOT NULL,
total_sent INTEGER DEFAULT 0,
total_delivered INTEGER DEFAULT 0,
total_opened INTEGER DEFAULT 0,
total_clicked INTEGER DEFAULT 0,
total_unsubscribed INTEGER DEFAULT 0,
total_bounced INTEGER DEFAULT 0,
appointments_booked INTEGER DEFAULT 0,
revenue_attributed NUMERIC(10,2) DEFAULT 0,
started_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
completed_at TIMESTAMPTZ
);
SQLEOF
The compliance database is separate from the n8n operational database for clarity and potential future separation. These tables satisfy FTC Safeguards audit-logging requirements and TCPA consent-documentation requirements. Back up this database daily.
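One design note on the schema: the content_hash VARCHAR(64) column is sized to hold a SHA-256 hex digest (exactly 64 hex characters). Hashing the exact body sent — an assumed convention here, not mandated by the source schema — lets auditors prove which content a customer received without storing duplicate bodies:

```python
import hashlib

def content_hash(body: str) -> str:
    # SHA-256 hex digest is always 64 characters, matching VARCHAR(64)
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

h = content_hash("Hi Sarah! Thanks for visiting today.")
print(len(h))  # -> 64
```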
Step 5: Launch the Platform Stack
Start all Docker containers, verify they are running correctly, and configure Nginx Proxy Manager with SSL for the n8n web interface.
cd /opt/ai-comms-platform
docker compose up -d
# Verify all containers are running
docker compose ps
# Check n8n logs for successful startup
docker compose logs n8n --tail 50
# Check PostgreSQL is healthy
docker compose logs postgres --tail 20
# Once the reverse proxy is serving n8n, close direct access to port 5678:
sudo ufw delete allow 5678/tcp
sudo ufw allow from 127.0.0.1 to any port 5678
After configuring Nginx Proxy Manager, access n8n only via https://n8n.yourmsp.com. The Nginx Proxy Manager admin interface (port 81) should be firewalled to MSP management IPs only. Change the default NPM admin credentials immediately on first login.
Step 6: Configure Email Authentication (SPF, DKIM, DMARC)
Set up proper email authentication records on the dealership's domain to ensure high deliverability for AI-generated communications. This is critical — without it, emails will land in spam.
dig TXT dealername.com +short
dig CNAME s1._domainkey.dealername.com +short
dig TXT _dmarc.dealername.com +short
DNS propagation takes 24–48 hours. Start with DMARC policy p=none for monitoring, then move to p=quarantine after 2 weeks of clean reports. Send the DMARC aggregate reports to the MSP's monitoring mailbox. Use https://mxtoolbox.com/SuperTool.aspx to verify all records. SendGrid domain authentication is mandatory for the Pro plan's dedicated IP to work properly.
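The monitoring workflow can confirm the published policy by parsing the DMARC TXT record returned by the dig command above. A minimal sketch; the sample record and report mailbox are illustrative:

```python
def parse_dmarc(record: str) -> dict:
    # DMARC records are semicolon-separated tag=value pairs
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

sample = "v=DMARC1; p=none; rua=mailto:dmarc-reports@yourmsp.com; pct=100"
policy = parse_dmarc(sample)
print(policy["p"])  # -> none
```

An alert when `p` is still `none` after the two-week monitoring window is a cheap way to keep the quarantine rollout on schedule.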
Step 7: Register A2P 10DLC Brand and Campaign with Twilio
Register the dealership as a brand and register the messaging campaign with Twilio for compliant A2P business SMS. This is legally required for all business texting in the US and takes 1–4 weeks for approval.
A2P 10DLC registration costs $4/brand + $15/campaign (one-time) plus $2/month for the phone number. Brand vetting typically takes 1–5 business days. Campaign registration takes an additional 1–3 weeks. DO NOT send SMS at volume before registration is approved — messages will be filtered and the number can be blocked. If the dealership needs to start sending immediately, consider Twilio's Toll-Free number verification as a faster alternative (3-5 business days).
Step 8: Configure API Credentials in n8n
Add all API credentials to n8n's credential store. These are encrypted at rest using the N8N_ENCRYPTION_KEY. Set up credentials for OpenAI, Anthropic (fallback), Twilio SendGrid, Twilio SMS, and the DMS API.
Create separate API keys for each service with minimal required permissions. For OpenAI, set a monthly spending limit in the dashboard (Settings -> Billing -> Usage limits) — start at $100/month and adjust. For SendGrid, create a key with only 'Mail Send' permission, not full access. Store a backup of all credentials in the MSP's password manager (e.g., IT Glue, Hudu). Never share credentials in plaintext via email or tickets.
Step 9: Build the DMS Data Extraction Workflow
Create the n8n workflow that periodically queries the DMS API to extract service records, customer information, and vehicle data. This workflow runs on a schedule (e.g., every 4 hours) and identifies customers who need follow-up communications based on service events and maintenance schedules.
# CDK Fortellis HTTP Request:
# Method: GET
# URL: https://api.fortellis.io/cdkdrive/service/v1/repair-orders
# Query Params: status=closed&closedAfter={{$now.minus(4,'hours').toISO()}}
# Authentication: OAuth2 (Fortellis credential)
# Headers: Accept: application/json, Request-Id: {{$workflow.id}}-{{$now.toMillis()}}
The DMS API integration is the most variable part of this implementation. CDK Fortellis has the most straightforward REST API. Tekion APC offers open self-serve APIs. Reynolds & Reynolds RCI requires a formal certification process that can take 4–8 weeks — plan accordingly and start the RCI application during Phase 1. If DMS API access is delayed, create a temporary CSV import workflow where the service manager exports data weekly from the DMS as a stopgap.
Step 10: Build the AI Content Generation Workflow
Create the main n8n workflow that takes queued customer records, generates personalized email and SMS content using the AI API, validates the output, and queues it for delivery. This is the core AI workflow.
- In n8n, create a new workflow: 'AI Content Generator'
- Trigger: runs every 30 minutes or on webhook from DMS Sync workflow
- Workflow nodes:
- 1. Postgres node → Fetch unprocessed records from queue
- 2. SplitInBatches node → Process 10 at a time to manage API rate limits
- 3. Function node → Build personalized prompt from template + customer data
- 4. HTTP Request node → POST to OpenAI API (or Anthropic fallback)
- 5. Function node → Parse AI response JSON, validate fields present
- 6. IF node → Quality check (length > 50 chars, no placeholder text left)
- 7. Postgres node → Log AI generation to communication_log
- 8. Set node → Format for delivery queue
- 9. Postgres node → Insert into delivery queue with scheduled send time
- Error handling: Add Error Trigger node for the workflow
- On OpenAI failure: retry 3x with exponential backoff
- On persistent failure: switch to Anthropic Claude Haiku
- Log all errors to compliance database
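The retry-then-fallback policy in the error-handling list above can be sketched in plain Python. The two generate callables are stand-ins for the OpenAI and Anthropic HTTP Request nodes; the delays mirror exponential backoff (1s, 2s, 4s):

```python
import time

def generate_with_fallback(prompt, primary, fallback,
                           retries=3, base_delay=1.0, sleep=time.sleep):
    # Retry the primary provider with exponential backoff, then fall back.
    last_error = None
    for attempt in range(retries):
        try:
            return primary(prompt), "openai"
        except Exception as exc:            # in n8n: the node's error output
            last_error = exc
            sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s
    try:
        return fallback(prompt), "anthropic"
    except Exception:
        raise RuntimeError(f"both providers failed: {last_error}")

# Usage with stubs: primary always fails, fallback succeeds.
calls = []
def flaky(p):
    calls.append(p)
    raise TimeoutError("rate limited")

result, provider = generate_with_fallback(
    "prompt", flaky, lambda p: "draft text", sleep=lambda s: None)
print(provider)  # -> anthropic
```

Logging which provider produced each message (the second return value) feeds the ai_model_used column in communication_log.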
# POST to https://api.openai.com/v1/chat/completions
{
"model": "gpt-5.4-mini",
"temperature": 0.7,
"max_tokens": 500,
"messages": [
{"role": "system", "content": "{{system_prompt}}"},
{"role": "user", "content": "{{user_prompt}}"}
],
"response_format": {"type": "json_object"}
}
Request JSON response format from the AI to get structured output (subject line, email body, and SMS body as separate fields); this makes parsing reliable. Set temperature to 0.7 for variety in content while maintaining professionalism. The quality-check node should verify: no empty fields, no leftover template variables like {{name}}, content length within bounds, and no inappropriate content. Add a Wait node between batches to respect API rate limits (GPT-5.4 mini: 30,000 TPM on free tier, 2M+ TPM on paid).
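The quality-check node's rules can be sketched as a pure function. Field names match the JSON structure requested from the model; the exact length thresholds are illustrative assumptions:

```python
import re

MIN_LEN = {"email_subject": 10, "email_body_html": 50, "sms_body": 50}
MAX_SMS_LEN = 320  # keep SMS to at most two segments
PLACEHOLDER = re.compile(r"\{\{\s*\w+\s*\}\}")  # leftover merge fields

def passes_quality_check(payload: dict) -> bool:
    for field, min_len in MIN_LEN.items():
        value = payload.get(field, "")
        if not isinstance(value, str) or len(value.strip()) < min_len:
            return False                    # missing, empty, or too short
        if PLACEHOLDER.search(value):
            return False                    # unfilled {{placeholder}}
    return len(payload["sms_body"]) <= MAX_SMS_LEN

print(passes_quality_check({
    "email_subject": "Thanks for visiting, Sarah!",
    "email_body_html": "<p>Hi Sarah, thanks for trusting us with your Camry today.</p>",
    "sms_body": "Hi {{name}}! Thanks for visiting today."}))  # -> False
```

Records that fail the check go back to the generation queue (or to human review) rather than the delivery queue.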
Step 11: Build the Email Delivery Workflow
Create the n8n workflow that takes approved content from the delivery queue and sends personalized emails through Twilio SendGrid with proper tracking, unsubscribe handling, and deliverability best practices.
POST https://api.sendgrid.com/v3/mail/send
Authorization: Bearer {{$credentials.sendgridApiKey}}
{
"personalizations": [{
"to": [{"email": "{{customer_email}}", "name": "{{customer_name}}"}],
"subject": "{{ai_subject_line}}",
"custom_args": {
"dealership_id": "{{dealership_id}}",
"campaign_type": "{{message_type}}",
"comm_log_id": "{{comm_log_id}}"
}
}],
"from": {"email": "service@dealername.com", "name": "Dealer Service Team"},
"reply_to": {"email": "service@dealername.com"},
"content": [{"type": "text/html", "value": "{{ai_email_body_html}}"}],
"tracking_settings": {
"open_tracking": {"enable": true},
"click_tracking": {"enable": true}
},
"asm": {
"group_id": 12345,
"groups_to_display": [12345, 12346]
}
}
The ASM (Advanced Suppression Manager) group_id must be configured in SendGrid first — create groups for 'Service Reminders' and 'Promotional Offers'. SendGrid Event Webhooks must be configured in Settings -> Mail Settings -> Event Webhook, pointing to the n8n webhook URL; this provides real-time tracking data. Set up IP warmup if using a new dedicated IP — start with 50 emails/day and increase gradually over 4 weeks. Use SendGrid's suppression management to automatically honor unsubscribes.
Step 12: Build the SMS Delivery Workflow
Create the n8n workflow for sending maintenance reminders and service follow-ups via Twilio Programmable SMS with TCPA compliance enforcement.
# In n8n, create a new workflow: 'SMS Delivery Engine'
# Trigger: Schedule every 15 minutes during send hours (9AM-8PM local — a conservative window inside the TCPA-permitted 8AM-9PM)
# Workflow nodes:
# 1. Postgres node -> Fetch queued SMS (channel='sms', status='pending')
# 2. Postgres node -> STRICT consent check:
# SELECT * FROM consent_records
# WHERE customer_id={{customer_id}}
# AND channel='sms'
# AND consent_given=true
# AND opt_out_timestamp IS NULL
# AND (consent_type='promotional' OR '{{message_type}}' IN ('post_service_followup','maintenance_reminder'))
# 3. IF node -> Consent verified? Continue : Skip + log refusal reason
# 4. Function node -> Ensure message includes opt-out: append 'Reply STOP to opt out'
# 5. Twilio node -> Send SMS
# From: +1XXXXXXXXXX (registered Messaging Service number)
# To: {{customer_phone}}
# Body: {{ai_sms_body}} Reply STOP to opt out.
# MessagingServiceSid: MGXXXXXXXXX (registered A2P campaign)
# 6. Postgres node -> Log twilio_message_sid and delivery status
# Inbound SMS handling workflow:
# Create webhook: https://n8n.yourmsp.com/webhook/twilio-inbound
# Configure in Twilio: Messaging Service -> Integration -> Webhook URL
# Nodes:
# 1. Webhook Trigger
# 2. Function node -> Parse inbound message
# 3. Switch node -> Check for STOP/CANCEL/UNSUBSCRIBE/START/HELP
# 4a. STOP branch: Postgres -> Set opt_out_timestamp, Twilio -> confirm opt-out
# 4b. START branch: Postgres -> Create new consent record (requires verification)
# 4c. HELP branch: Twilio -> Send help message with dealership phone number
#   4d. Other: Forward to dealership service desk or queue for human review
TCPA compliance is NON-NEGOTIABLE. Never send SMS without verified consent. Never send outside 8AM-9PM recipient local time. Always include opt-out language. Process STOP requests immediately — Twilio handles automatic STOP processing, but you should also update your database. The TCPA penalty is $500-$1,500 PER MESSAGE for violations; an Oklahoma dealership group paid $850,000 in settlement for texting without consent. Use Twilio's Messaging Service (not the raw API) to benefit from built-in carrier compliance features.
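Two of the TCPA guards above — the 8AM-9PM recipient-local send window and the inbound keyword classifier behind the Switch node — can be sketched as pure functions. The keyword sets extend the workflow's STOP/START/HELP list with Twilio's other default keywords (an assumption to match carrier behavior):

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

OPT_OUT = {"STOP", "STOPALL", "CANCEL", "UNSUBSCRIBE", "END", "QUIT"}
OPT_IN = {"START", "UNSTOP", "YES"}

def within_send_window(now_utc: datetime, recipient_tz: str) -> bool:
    # TCPA: no messages outside 8AM-9PM in the recipient's local time
    local = now_utc.astimezone(ZoneInfo(recipient_tz)).time()
    return time(8, 0) <= local <= time(21, 0)

def classify_inbound(body: str) -> str:
    word = body.strip().upper()
    if word in OPT_OUT:
        return "opt_out"
    if word in OPT_IN:
        return "opt_in"
    if word == "HELP":
        return "help"
    return "human_review"

noon_utc = datetime(2025, 3, 3, 12, 0, tzinfo=ZoneInfo("UTC"))
print(within_send_window(noon_utc, "America/Chicago"))  # 6:00 AM local -> False
print(classify_inbound("stop"))  # -> opt_out
```

Keying the window check to the recipient's timezone (derived from area code or DMS address data) matters for dealerships with customers across timezone borders.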
Step 13: Configure SendGrid Event Webhooks and Twilio Status Callbacks
Set up real-time event tracking so delivery status, opens, clicks, bounces, and unsubscribes flow back into the compliance database for reporting and automated list hygiene.
- SendGrid Event Webhook Configuration: In SendGrid dashboard: Settings -> Mail Settings -> Event Webhook
- HTTP POST URL: https://n8n.yourmsp.com/webhook/sendgrid-events
- Select events: Delivered, Opened, Clicked, Bounced, Spam Report, Unsubscribe, Group Unsubscribe
- Enable: Active
- Create n8n workflow 'SendGrid Event Processor':
- Webhook node (POST /webhook/sendgrid-events)
- SplitInBatches (events come in arrays)
- Switch node on event type: 'delivered': UPDATE communication_log SET delivery_status='delivered'
- Switch node on event type: 'open': UPDATE communication_log SET opened_at=NOW()
- Switch node on event type: 'click': UPDATE communication_log SET clicked_at=NOW()
- Switch node on event type: 'bounce': UPDATE communication_log SET bounced_at=NOW(), delivery_status='bounced'
- Switch node on event type: 'spamreport'/'unsubscribe': UPDATE consent_records SET opt_out_timestamp=NOW()
- Twilio Status Callback: In Twilio: Messaging Service -> Settings -> Status Callback URL
- URL: https://n8n.yourmsp.com/webhook/twilio-status
- Create matching n8n workflow to process Twilio delivery receipts
- Update communication_log with: queued, sent, delivered, undelivered, failed
SendGrid signs webhook payloads — enable webhook signature verification in the n8n workflow for security (Settings -> Mail Settings -> Signed Event Webhook Requests). For Twilio, validate the X-Twilio-Signature header against your auth token. Both of these prevent spoofed webhook calls from corrupting your data.
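The X-Twilio-Signature validation mentioned above follows Twilio's documented scheme: concatenate the full webhook URL with the POST parameters sorted by name, HMAC-SHA1 the result with the account's auth token, and base64-encode it (the twilio-python RequestValidator implements the same algorithm). A minimal sketch with an illustrative token:

```python
import base64
import hashlib
import hmac

def compute_twilio_signature(auth_token: str, url: str, params: dict) -> str:
    # URL + params sorted by key, HMAC-SHA1 with the auth token, base64
    payload = url + "".join(k + str(params[k]) for k in sorted(params))
    digest = hmac.new(auth_token.encode(), payload.encode(), hashlib.sha1)
    return base64.b64encode(digest.digest()).decode()

def is_valid(auth_token, url, params, signature) -> bool:
    expected = compute_twilio_signature(auth_token, url, params)
    return hmac.compare_digest(expected, signature)  # constant-time compare

sig = compute_twilio_signature(
    "test_token",
    "https://n8n.yourmsp.com/webhook/twilio-inbound",
    {"From": "+15550123", "Body": "STOP"})
print(is_valid("test_token", "https://n8n.yourmsp.com/webhook/twilio-inbound",
               {"From": "+15550123", "Body": "STOP"}, sig))  # -> True
```

In the n8n workflow this runs in a Function node before any database writes, so a spoofed webhook call never reaches consent_records.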
Step 14: Import and Configure Prompt Templates
Load the AI prompt templates into n8n's static data or a dedicated database table. These templates define the system prompt and user prompt structure for each communication type with merge field placeholders.
# Create prompt templates table in compliance database:
docker exec -it ai-comms-postgres psql -U n8n -d compliance -c "
CREATE TABLE IF NOT EXISTS prompt_templates (
id SERIAL PRIMARY KEY,
template_name VARCHAR(100) UNIQUE NOT NULL,
communication_type VARCHAR(50) NOT NULL,
channel VARCHAR(10) NOT NULL,
system_prompt TEXT NOT NULL,
user_prompt_template TEXT NOT NULL,
active BOOLEAN DEFAULT true,
version INTEGER DEFAULT 1,
created_at TIMESTAMPTZ DEFAULT NOW(),
updated_at TIMESTAMPTZ DEFAULT NOW()
);"
Keep prompt templates versioned. When updating a template, increment the version number and keep the old version for A/B testing comparisons. The service manager should review and approve all prompt templates before activation. Include dealership-specific branding elements (name, slogan, service hours, phone number, address) as variables in the system prompt that are populated per client.
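Filling the merge fields in user_prompt_template should fail loudly rather than send a prompt with gaps. A minimal sketch; the field names are illustrative, with the real set coming from the DMS extraction workflow:

```python
import re

PLACEHOLDER = re.compile(r"\{\{(\w+)\}\}")  # {{merge_field}} syntax

def render_template(template: str, fields: dict) -> str:
    def sub(match):
        name = match.group(1)
        if name not in fields or not str(fields[name]).strip():
            raise KeyError(f"missing merge field: {name}")
        return str(fields[name])
    return PLACEHOLDER.sub(sub, template)

tpl = "Customer {{first_name}} drives a {{vehicle_year}} {{vehicle_model}}."
print(render_template(tpl, {"first_name": "Sarah",
                            "vehicle_year": 2021,
                            "vehicle_model": "Camry"}))
# -> Customer Sarah drives a 2021 Camry.
```

Raising on a missing field routes the record to the error path instead of letting a literal {{first_name}} reach a customer.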
Step 15: Deploy Monitoring and Alerting
Set up monitoring for the automation platform to ensure workflows run reliably, API quotas are not exceeded, and any delivery failures are caught early.
# Install Uptime Kuma for lightweight monitoring
docker run -d --name uptime-kuma --network ai-comms-net -p 3001:3001 -v uptime-kuma:/app/data --restart always louislam/uptime-kuma:1
- Set up notification channels in Uptime Kuma: Email to msp-alerts@yourmsp.com
- Slack/Teams webhook to MSP operations channel
- SMS via Twilio to on-call technician
Restrict Uptime Kuma port 3001 to MSP management IPs only in the firewall. Set alert thresholds: workflow failure > 0 in 1 hour = immediate alert; email bounce rate > 5% = warning; API error rate > 10% = critical. The daily health report provides proactive visibility before the client notices any issues.
Step 16: Perform Initial Data Migration and Consent Import
Import existing customer consent records and historical service data into the system. This is critical for TCPA compliance — you must have documented consent before sending any messages.
# 1. Export existing opt-in data from current email/SMS systems
# (ActiveCampaign, Mailchimp, DealerSocket, etc.)
# Format as CSV: customer_id, email, phone, consent_type, consent_date, source
# 2. Create import script:
cat > /opt/ai-comms-platform/import-consent.py << 'PYEOF'
import csv
import psycopg2
from datetime import datetime

# NOTE: the stack does not publish Postgres port 5432 to the host. Either add
# a '5432:5432' port mapping to the postgres service temporarily, or run this
# script inside the container network.
conn = psycopg2.connect(
    host='localhost', port=5432,
    dbname='compliance', user='n8n',
    password='YOUR_POSTGRES_PASSWORD'  # use the value from .env
)
cur = conn.cursor()
with open('/tmp/consent_export.csv', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        cur.execute("""
            INSERT INTO consent_records
              (customer_id, dealership_id, channel, consent_type,
               consent_given, consent_timestamp, consent_source)
            VALUES (%s, %s, %s, %s, %s, %s, %s)
            ON CONFLICT DO NOTHING
        """, (
            row['customer_id'], row.get('dealership_id', 'DEFAULT'),
            row['channel'], row.get('consent_type', 'informational'),
            True, row.get('consent_date', datetime.now().isoformat()),
            'historical_import'
        ))
conn.commit()
cur.close()
conn.close()
print('Import complete')
PYEOF
# 3. Run import:
sudo apt install -y python3-psycopg2
python3 /opt/ai-comms-platform/import-consent.py
# 4. Verify import:
docker exec -it ai-comms-postgres psql -U n8n -d compliance -c "SELECT channel, consent_type, COUNT(*) FROM consent_records GROUP BY channel, consent_type;"
CRITICAL: If the dealership cannot provide documented consent records for SMS marketing, those customers must be re-opted-in before sending promotional texts. Service reminders may qualify as informational (a lower consent bar), but consult legal counsel. For email, an existing business relationship provides implicit consent under CAN-SPAM, but explicit opt-in is best practice. Import only customers who have NOT previously opted out, and cross-reference with any existing suppression lists.
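The suppression-list cross-reference above can be sketched as a pre-insert filter; the data shapes are illustrative, and phone matching normalizes to digits so formatting differences between systems do not leak suppressed numbers through:

```python
def filter_suppressed(rows, suppressed_emails, suppressed_phones):
    # Drop any row whose email or phone appears on a suppression list
    kept, dropped = [], []
    for row in rows:
        email = (row.get("email") or "").strip().lower()
        phone = "".join(ch for ch in (row.get("phone") or "") if ch.isdigit())
        if email in suppressed_emails or phone in suppressed_phones:
            dropped.append(row)
        else:
            kept.append(row)
    return kept, dropped

rows = [
    {"customer_id": "C1", "email": "sarah@example.com", "phone": "(555) 010-2345"},
    {"customer_id": "C2", "email": "opted.out@example.com", "phone": "555-010-9999"},
]
kept, dropped = filter_suppressed(rows, {"opted.out@example.com"}, set())
print(len(kept), len(dropped))  # -> 1 1
```

Logging the dropped rows (with the match reason) gives auditors evidence that suppression lists were honored during migration.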
Custom AI Components
Post-Service Follow-Up Prompt Template
Type: prompt
System and user prompts for generating personalized thank-you emails sent 2-4 hours after a service visit. Includes a satisfaction inquiry, next-service preview, and call-to-action for an online review. Generates both email and SMS variants in a single API call.
Implementation:
Post-Service Follow-Up – System Prompt
Post-Service Follow-Up – User Prompt Template
{
"email_subject": "Thanks for bringing in your 2021 Camry, Sarah!",
"email_body_html": "<p>Hi Sarah,</p><p>Thank you for trusting us with your 2021 Toyota Camry's oil change and tire rotation today. We appreciate your continued loyalty to our service team.</p><p>Your advisor Mike wanted to let you know that your next recommended service — a 30,000-mile inspection — will be due around August 2025. We'll send you a reminder as it approaches.</p><p>If you have a moment, we'd love to hear about your experience: <a href='https://g.page/r/dealer/review'>Leave a quick review</a></p><p>Thank you,<br>The Service Team at Valley Toyota</p>",
"sms_body": "Hi Sarah! Thanks for visiting Valley Toyota today for your Camry's service. Questions? Call us at 555-0123.",
"communication_tone": "warm, grateful, personal"
}
Maintenance Reminder Campaign Prompt Template
Type: prompt
System and user prompts for generating time-based and mileage-based maintenance reminders. Supports multiple reminder types: oil change, tire rotation, brake inspection, seasonal service, and manufacturer-recommended intervals. Creates urgency without being alarmist.
Implementation:
Maintenance Reminder Campaign — System Prompt
Maintenance Reminder Campaign — User Prompt Template
Declined Service Follow-Up Prompt Template
Type: prompt
Generates tactful follow-up messages for customers who declined recommended services during their visit. Educates about the importance of the declined service without being pushy. Sent 7 days after the service visit.
Implementation:
Declined Service Follow-Up Prompt Template
DMS Data Extraction Workflow
Type: workflow
n8n workflow that connects to the dealership's DMS via API (CDK Fortellis, Tekion APC, or Reynolds RCI) on a 4-hour schedule, extracts recently closed repair orders and customer service records, normalizes the data, categorizes customers into communication segments, and queues them for AI content generation.
Implementation:
# Workflow JSON (import into n8n and configure credentials)
{
"name": "DMS Data Sync - Service Records",
"nodes": [
{
"name": "Schedule Trigger",
"type": "n8n-nodes-base.scheduleTrigger",
"parameters": {
"rule": {
"interval": [{"field": "hours", "hoursInterval": 4}]
}
},
"position": [240, 300]
},
{
"name": "Get DMS Auth Token",
"type": "n8n-nodes-base.httpRequest",
"parameters": {
"method": "POST",
"url": "https://identity.fortellis.io/oauth2/aus1p1ixy7YL8cMq02p7/v1/token",
"sendHeaders": true,
"headerParameters": {
"parameters": [{"name": "Content-Type", "value": "application/x-www-form-urlencoded"}]
},
"sendBody": true,
"bodyParameters": {
"parameters": [
{"name": "grant_type", "value": "client_credentials"},
{"name": "scope", "value": "{{$credentials.fortellisScope}}"}
]
},
"authentication": "genericCredentialType",
"genericAuthType": "httpBasicAuth"
},
"position": [460, 300]
},
{
"name": "Fetch Closed Repair Orders",
"type": "n8n-nodes-base.httpRequest",
"parameters": {
"method": "GET",
"url": "https://api.fortellis.io/cdkdrive/service/v1/repair-orders",
"sendQuery": true,
"queryParameters": {
"parameters": [
{"name": "status", "value": "closed"},
{"name": "closedAfter", "value": "={{$now.minus(4, 'hours').toISO()}}"}
]
},
"sendHeaders": true,
"headerParameters": {
"parameters": [
{"name": "Authorization", "value": "Bearer {{$node['Get DMS Auth Token'].json.access_token}}"},
{"name": "Accept", "value": "application/json"},
{"name": "Request-Id", "value": "={{$workflow.id}}-{{$now.toMillis()}}"}
]
}
},
"position": [680, 300]
},
{
"name": "Normalize & Categorize",
"type": "n8n-nodes-base.function",
"parameters": {
"functionCode": "const repairOrders = items[0].json.repairOrders || items[0].json.data || [];\nconst results = [];\nconst now = new Date();\n\nfor (const ro of repairOrders) {\n const customer = {\n customer_id: ro.customerId || ro.customer?.id,\n customer_first_name: ro.customerFirstName || ro.customer?.firstName,\n customer_last_name: ro.customerLastName || ro.customer?.lastName,\n customer_email: ro.customerEmail || ro.customer?.email,\n customer_phone: ro.customerPhone || ro.customer?.phone,\n vehicle_year: ro.vehicleYear || ro.vehicle?.year,\n vehicle_make: ro.vehicleMake || ro.vehicle?.make,\n vehicle_model: ro.vehicleModel || ro.vehicle?.model,\n vehicle_vin: ro.vin || ro.vehicle?.vin,\n services_performed: (ro.lineItems || ro.services || []).filter(s => s.status === 'completed').map(s => s.description).join(', '),\n services_declined: (ro.lineItems || ro.services || []).filter(s => s.status === 'declined').map(s => s.description).join(', '),\n service_date: ro.closedDate || ro.completedDate,\n advisor_name: ro.advisorName || ro.serviceAdvisor?.name || 'your service advisor',\n ro_number: ro.repairOrderNumber || ro.id,\n next_service_recommendation: ro.nextService?.description || '',\n next_service_due: ro.nextService?.dueDate || ro.nextService?.dueMileage || '',\n estimated_mileage: ro.mileageIn || ro.vehicle?.currentMileage\n };\n\n // Determine communication type\n const hoursSinceClose = (now - new Date(customer.service_date)) / (1000*60*60);\n \n if (hoursSinceClose <= 6) {\n customer.comm_type = 'post_service_followup';\n customer.scheduled_send_delay_hours = 2;\n }\n \n if (customer.services_declined && customer.services_declined.length > 0) {\n const declinedRecord = {...customer, comm_type: 'declined_service_followup', scheduled_send_delay_hours: 168};\n results.push({json: declinedRecord});\n }\n \n // Only queue records that were assigned a communication type;\n // ROs closed more than 6 hours ago with no declined services are skipped.\n if (customer.comm_type) {\n results.push({json: customer});\n }\n}\n\nreturn results;"
},
"position": [900, 300]
},
{
"name": "Check Already Contacted",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "executeQuery",
"query": "SELECT COUNT(*) as cnt FROM communication_log WHERE customer_id='{{$json.customer_id}}' AND message_type='{{$json.comm_type}}' AND sent_at > NOW() - INTERVAL '7 days'"
},
"position": [1120, 300]
},
{
"name": "Not Already Contacted?",
"type": "n8n-nodes-base.if",
"parameters": {
"conditions": {
"number": [{"value1": "={{$json.cnt}}", "operation": "equal", "value2": 0}]
}
},
"position": [1340, 300]
},
{
"name": "Insert Into Queue",
"type": "n8n-nodes-base.postgres",
"parameters": {
"operation": "insert",
"table": "processing_queue",
"columns": "customer_id,customer_first_name,customer_email,customer_phone,vehicle_year,vehicle_make,vehicle_model,services_performed,services_declined,service_date,advisor_name,comm_type,scheduled_send_at,status",
"values": "={{$json.customer_id}},={{$json.customer_first_name}},={{$json.customer_email}},={{$json.customer_phone}},={{$json.vehicle_year}},={{$json.vehicle_make}},={{$json.vehicle_model}},={{$json.services_performed}},={{$json.services_declined}},={{$json.service_date}},={{$json.advisor_name}},={{$json.comm_type}},={{$now.plus($json.scheduled_send_delay_hours, 'hours').toISO()}},pending"
},
"position": [1560, 240]
}
],
"connections": {
"Schedule Trigger": {"main": [[{"node": "Get DMS Auth Token", "type": "main", "index": 0}]]},
"Get DMS Auth Token": {"main": [[{"node": "Fetch Closed Repair Orders", "type": "main", "index": 0}]]},
"Fetch Closed Repair Orders": {"main": [[{"node": "Normalize & Categorize", "type": "main", "index": 0}]]},
"Normalize & Categorize": {"main": [[{"node": "Check Already Contacted", "type": "main", "index": 0}]]},
"Check Already Contacted": {"main": [[{"node": "Not Already Contacted?", "type": "main", "index": 0}]]},
"Not Already Contacted?": {"main": [[{"node": "Insert Into Queue", "type": "main", "index": 0}], []]}
}
}
This workflow JSON is a structural template. After importing it into n8n, you must:
1. Attach the correct credentials to each node
2. Adjust the Fortellis API endpoints to match your specific CDK API subscription
3. Create the processing_queue table in PostgreSQL (see installation step 4 for the schema)
4. For Tekion APC: change the auth flow and base URL to Tekion's endpoints
5. For Reynolds RCI: implement their specific SOAP/REST bridge as needed
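The communication-type logic in the Normalize & Categorize node can be lifted into a standalone function for unit testing outside n8n. This is a minimal sketch, assuming the normalized field names used in the workflow above; `now` is injected so tests are deterministic, and records that earn no communication type are not queued:

```javascript
// Sketch of the categorization rules from the Normalize & Categorize node.
// Returns the list of queue records a single normalized repair order produces.
function categorize(customer, now = new Date()) {
  const results = [];
  const hoursSinceClose =
    (now - new Date(customer.service_date)) / (1000 * 60 * 60);

  // ROs closed within the last 6 hours get a post-service follow-up,
  // delayed 2 hours after close.
  if (hoursSinceClose <= 6) {
    results.push({
      ...customer,
      comm_type: 'post_service_followup',
      scheduled_send_delay_hours: 2,
    });
  }

  // Declined services trigger a separate follow-up 7 days (168 hours) later.
  if (customer.services_declined && customer.services_declined.length > 0) {
    results.push({
      ...customer,
      comm_type: 'declined_service_followup',
      scheduled_send_delay_hours: 168,
    });
  }

  return results;
}
```

A recent RO with a declined service should yield both records; an old RO with nothing declined should yield none.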
AI Content Generation Engine Workflow
Type: workflow Core n8n workflow that reads from the processing queue, retrieves the appropriate prompt template, populates it with customer data, calls the OpenAI API, validates the JSON response, and queues the generated content for delivery.
Implementation:
N8N WORKFLOW STRUCTURE:
Workflow Name: AI Content Generation Engine
Trigger: Schedule every 30 minutes
Node 1 - Schedule Trigger:
Type: scheduleTrigger
Config: every 30 minutes
Node 2 - Fetch Pending Queue Items:
Type: postgres
Query: SELECT * FROM processing_queue WHERE status='pending' AND scheduled_send_at <= NOW() ORDER BY scheduled_send_at ASC LIMIT 20
Node 3 - Check If Items Exist:
Type: if
Condition: items.length > 0
False path: NoOp (stop)
Node 4 - Split In Batches:
Type: splitInBatches
Batch Size: 5
Options: pause between batches: 2000ms
Node 5 - Get Prompt Template:
Type: postgres
Query: SELECT system_prompt, user_prompt_template FROM prompt_templates WHERE communication_type='{{$json.comm_type}}' AND channel='email' AND active=true ORDER BY version DESC LIMIT 1
Node 6 - Build Prompt:
Type: function
Code:
const template = items[0].json;
const customer = $('Split In Batches').item.json;
// Replace all template variables
let systemPrompt = template.system_prompt;
let userPrompt = template.user_prompt_template;
const replacements = {
'{{customer_first_name}}': customer.customer_first_name,
'{{vehicle_year}}': customer.vehicle_year,
'{{vehicle_make}}': customer.vehicle_make,
'{{vehicle_model}}': customer.vehicle_model,
'{{services_performed}}': customer.services_performed,
'{{services_declined}}': customer.services_declined || 'none',
'{{service_date}}': customer.service_date,
'{{advisor_name}}': customer.advisor_name,
'{{next_service_recommendation}}': customer.next_service_recommendation || 'Regular maintenance per manufacturer schedule',
'{{next_service_due_date_or_mileage}}': customer.next_service_due || 'Per manufacturer recommendation',
'{{estimated_current_mileage}}': customer.estimated_mileage || 'N/A',
'{{scheduling_url}}': $env.SCHEDULING_URL || 'https://dealername.com/schedule',
'{{google_review_url}}': $env.GOOGLE_REVIEW_URL || 'https://g.page/r/dealer/review',
'{{service_phone}}': $env.SERVICE_PHONE || '555-0123',
'{{dealership_name}}': $env.DEALERSHIP_NAME || 'Our Dealership'
};
for (const [key, value] of Object.entries(replacements)) {
systemPrompt = systemPrompt.replaceAll(key, value || '');
userPrompt = userPrompt.replaceAll(key, value || '');
}
return [{json: {
system_prompt: systemPrompt,
user_prompt: userPrompt,
customer_data: customer
}}];
Node 7 - Call OpenAI API:
Type: httpRequest
Method: POST
URL: https://api.openai.com/v1/chat/completions
Headers:
Authorization: Bearer {{$credentials.openaiApiKey}}
Content-Type: application/json
Body:
{
"model": "gpt-5.4-mini",
"temperature": 0.7,
"max_tokens": 600,
"response_format": {"type": "json_object"},
"messages": [
{"role": "system", "content": {{ JSON.stringify($json.system_prompt) }}},
{"role": "user", "content": {{ JSON.stringify($json.user_prompt) }}}
]
}
Retry on Fail: true
Max Retries: 3
Wait Between Retries: 5000ms
Node 8 - Parse AI Response:
Type: function
Code:
const aiResponse = JSON.parse(items[0].json.choices[0].message.content);
const customer = $('Build Prompt').item.json.customer_data;
// Quality validation
const errors = [];
if (!aiResponse.email_subject || aiResponse.email_subject.length < 10) errors.push('Missing/short subject');
if (!aiResponse.email_body_html || aiResponse.email_body_html.length < 50) errors.push('Missing/short email body');
if (!aiResponse.sms_body || aiResponse.sms_body.length < 20) errors.push('Missing/short SMS body');
if (aiResponse.sms_body && aiResponse.sms_body.length > 145) errors.push('SMS too long');
if (aiResponse.email_body_html && aiResponse.email_body_html.includes('{{')) errors.push('Unreplaced template variable in email');
if (aiResponse.sms_body && aiResponse.sms_body.includes('{{')) errors.push('Unreplaced template variable in SMS');
return [{
json: {
...aiResponse,
customer_id: customer.customer_id,
customer_email: customer.customer_email,
customer_phone: customer.customer_phone,
comm_type: customer.comm_type,
dealership_id: customer.dealership_id || 'default',
quality_passed: errors.length === 0,
quality_errors: errors.join('; '),
ai_model: 'gpt-5.4-mini'
}
}];
Node 9 - Quality Gate:
Type: if
Condition: $json.quality_passed == true
True: Continue to delivery queue
False: Log error and flag for human review
Node 10 - Queue for Email Delivery:
Type: postgres
Insert into delivery_queue: customer_id, channel='email', subject, body_html, comm_type, scheduled_at, status='pending'
Node 11 - Queue for SMS Delivery:
Type: postgres
Insert into delivery_queue: customer_id, channel='sms', body=sms_body, comm_type, scheduled_at, status='pending'
Node 12 - Update Processing Queue:
Type: postgres
UPDATE processing_queue SET status='generated' WHERE id={{$json.queue_id}}
ERROR HANDLING:
Error Trigger -> Function (check if OpenAI error) -> IF (rate limit?) -> Wait 60s -> Retry
Persistent failure -> Switch to Anthropic Claude:
URL: https://api.anthropic.com/v1/messages
Headers: x-api-key, anthropic-version: 2023-06-01
Body: {model: 'claude-haiku-4-5', max_tokens: 600, messages: [...]}
Still failing -> Mark as 'failed' in queue, send alert to MSP
Consent Verification Middleware
Type: integration A reusable n8n sub-workflow that verifies TCPA and CAN-SPAM consent before any message is sent. Called by both the Email and SMS delivery workflows. Returns a boolean consent status with the reason, and logs all consent checks for compliance auditing.
Implementation:
// Node 1: Determine Consent Requirement
// Input Parameters (passed from parent workflow):
// - customer_id (string)
// - channel ('email' | 'sms')
// - message_type ('post_service_followup' | 'maintenance_reminder' | 'declined_service_followup' | 'seasonal_campaign' | 'satisfaction_check' | 'recall_notification')
// - dealership_id (string)
const { channel, message_type } = items[0].json;
// TCPA consent classification
// Informational: service confirmations, recall notices, appointment reminders
// Promotional: marketing campaigns, upsells, seasonal offers
const informationalTypes = ['post_service_followup', 'recall_notification', 'satisfaction_check'];
const promotionalTypes = ['seasonal_campaign', 'declined_service_followup', 'maintenance_reminder'];
let required_consent_level;
if (informationalTypes.includes(message_type)) {
required_consent_level = 'informational'; // Lower bar - existing business relationship may suffice
} else {
required_consent_level = 'promotional'; // Requires explicit prior express written consent for SMS
}
// For email (CAN-SPAM): existing business relationship = implicit consent
// But explicit consent is always preferred
if (channel === 'email') {
required_consent_level = 'informational'; // CAN-SPAM is opt-out based, not opt-in
}
return [{json: {...items[0].json, required_consent_level}}];

-- Node 2: Query Consent Database
SELECT
consent_given,
consent_type,
consent_timestamp,
opt_out_timestamp,
CASE
WHEN opt_out_timestamp IS NOT NULL THEN 'opted_out'
WHEN consent_given = false THEN 'no_consent'
WHEN consent_type = 'promotional' THEN 'full_consent'
WHEN consent_type = 'informational' THEN 'informational_only'
WHEN consent_type = 'transactional' THEN 'transactional_only'
ELSE 'unknown'
END as consent_status
FROM consent_records
WHERE customer_id = '{{$json.customer_id}}'
AND channel = '{{$json.channel}}'
AND dealership_id = '{{$json.dealership_id}}'
ORDER BY consent_timestamp DESC
LIMIT 1

// Node 3: Evaluate Consent
const input = $('Determine Consent Requirement').item.json;
const record = items[0].json;
let allowed = false;
let reason = '';
if (!record || !record.consent_status) {
allowed = false;
reason = 'No consent record found for this customer/channel';
} else if (record.consent_status === 'opted_out') {
allowed = false;
reason = `Customer opted out on ${record.opt_out_timestamp}`;
} else if (record.consent_status === 'no_consent') {
allowed = false;
reason = 'Consent record exists but consent not given';
} else if (input.required_consent_level === 'promotional' && record.consent_type === 'informational') {
// Need promotional consent but only have informational
if (input.channel === 'email') {
allowed = true; // CAN-SPAM allows with opt-out mechanism
reason = 'Email: CAN-SPAM opt-out model applies. Informational consent sufficient.';
} else {
allowed = false;
reason = 'SMS promotional message requires explicit promotional consent. Customer only has informational consent.';
}
} else {
allowed = true;
reason = `Valid ${record.consent_type} consent from ${record.consent_timestamp}`;
}
return [{json: {
consent_allowed: allowed,
consent_reason: reason,
customer_id: input.customer_id,
channel: input.channel,
message_type: input.message_type,
checked_at: new Date().toISOString()
}}];

- Node 4 — Log Consent Check (Type: postgres): Insert into consent_audit_log: customer_id, channel, message_type, consent_allowed, consent_reason, checked_at
- Node 5 — Return Result (Type: respondToWebhook if called via webhook, or set output): Returns { consent_allowed: boolean, consent_reason: string }
CREATE TABLE consent_audit_log (
id SERIAL PRIMARY KEY,
customer_id VARCHAR(100) NOT NULL,
channel VARCHAR(10) NOT NULL,
message_type VARCHAR(50) NOT NULL,
consent_allowed BOOLEAN NOT NULL,
consent_reason TEXT,
checked_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_audit_customer ON consent_audit_log(customer_id, checked_at);
Campaign Performance Analytics Agent
Type: agent A weekly analytics workflow that aggregates campaign metrics, calculates key performance indicators (open rate, click rate, appointment conversion, revenue attribution), compares to industry benchmarks, and generates an executive summary report using AI for the dealership general manager and the MSP account manager.
Implementation:
N8N WORKFLOW: Weekly Campaign Analytics Report
Trigger: Schedule every Monday at 7:00 AM
Node 1 - Aggregate Weekly Metrics:
Type: postgres
Query:
SELECT
cm.campaign_type,
cm.dealership_id,
SUM(cm.total_sent) as total_sent,
SUM(cm.total_delivered) as total_delivered,
SUM(cm.total_opened) as total_opened,
SUM(cm.total_clicked) as total_clicked,
SUM(cm.total_unsubscribed) as total_unsubscribed,
SUM(cm.total_bounced) as total_bounced,
SUM(cm.appointments_booked) as appointments_booked,
SUM(cm.revenue_attributed) as revenue_attributed,
ROUND(SUM(cm.total_delivered)::numeric / NULLIF(SUM(cm.total_sent), 0) * 100, 1) as delivery_rate,
ROUND(SUM(cm.total_opened)::numeric / NULLIF(SUM(cm.total_delivered), 0) * 100, 1) as open_rate,
ROUND(SUM(cm.total_clicked)::numeric / NULLIF(SUM(cm.total_opened), 0) * 100, 1) as click_rate,
ROUND(SUM(cm.total_unsubscribed)::numeric / NULLIF(SUM(cm.total_sent), 0) * 100, 2) as unsubscribe_rate
FROM campaign_metrics cm
WHERE cm.started_at >= NOW() - INTERVAL '7 days'
GROUP BY cm.campaign_type, cm.dealership_id
ORDER BY cm.campaign_type;
Node 2 - Get Historical Comparison:
Type: postgres
Query: Same query but for previous 4 weeks for trend analysis
Node 3 - Build Analytics Prompt:
Type: function
Code:
const currentWeek = $('Aggregate Weekly Metrics').all();
const history = $('Get Historical Comparison').all();
const benchmarks = {
automotive_email_open_rate: 17.5, // industry average
automotive_email_click_rate: 2.1,
automotive_sms_response_rate: 12.0,
healthy_unsubscribe_rate: 0.5,
healthy_bounce_rate: 2.0
};
const prompt = `Analyze these weekly automotive service communication metrics and generate an executive summary report.
This Week's Performance:
${JSON.stringify(currentWeek, null, 2)}
Previous 4 Weeks Trend:
${JSON.stringify(history, null, 2)}
Industry Benchmarks:
- Automotive email open rate average: ${benchmarks.automotive_email_open_rate}%
- Automotive email click rate average: ${benchmarks.automotive_email_click_rate}%
- Healthy unsubscribe rate: <${benchmarks.healthy_unsubscribe_rate}%
- Healthy bounce rate: <${benchmarks.healthy_bounce_rate}%
Generate a JSON report with:
{
"executive_summary": "3-4 sentence overview for the dealership GM",
"highlights": ["list of positive trends"],
"concerns": ["list of any metrics below benchmark or negative trends"],
"recommendations": ["specific actionable recommendations"],
"roi_summary": "statement about appointments booked and revenue attributed this week",
"week_over_week_trend": "improving | stable | declining"
}`;
return [{json: {prompt, benchmarks, raw_metrics: currentWeek}}];
Node 4 - Generate AI Report:
Type: httpRequest (OpenAI GPT-5.4 for higher quality analysis)
Model: gpt-5.4
Temperature: 0.3 (more analytical, less creative)
Node 5 - Format HTML Report:
Type: function
Build professional HTML email with dealership branding, charts placeholders, and the AI-generated insights
Node 6 - Send to Dealership Manager:
Type: sendGrid
To: dealership_gm@dealername.com
Subject: 'Weekly AI Communications Performance Report - {{$now.format("MMM D, YYYY")}}'
Node 7 - Send to MSP Account Manager:
Type: sendGrid
To: account_manager@yourmsp.com
Subject: '[Client: DealerName] Weekly Performance - {{$now.format("MMM D, YYYY")}}'
Testing & Validation
- UNIT TEST - AI Content Quality: Send 10 test prompts (2 per communication type) through the AI Content Generation workflow using sample customer data. Verify all JSON responses parse correctly, email subjects are under 60 characters, email bodies are under 150 words, SMS bodies are under 145 characters, no template variables remain unreplaced, and tone is professional and on-brand.
- UNIT TEST - Consent Verification: Create test consent records covering all scenarios: active promotional consent, active informational-only consent, opted-out customer, no consent record, and expired consent. Run each through the Consent Verification sub-workflow and verify: promotional SMS blocked for informational-only consent, all channels blocked for opted-out customers, no-record customers are blocked, and active consent customers are approved.
- INTEGRATION TEST - DMS Data Extraction: Trigger the DMS Data Sync workflow manually and verify it successfully authenticates with the DMS API (Fortellis/Tekion/RCI), retrieves recent closed repair orders, correctly parses customer and vehicle data, properly categorizes communication types, and inserts records into the processing queue without duplicates.
- INTEGRATION TEST - Email Delivery End-to-End: Generate a test post-service follow-up email for a test customer (use MSP team email addresses). Verify the email arrives in inbox (not spam), SPF/DKIM/DMARC all pass (check email headers), SendGrid tracking works (open and click events fire back to the webhook), unsubscribe link works and updates the consent database within 10 seconds, and the email renders correctly on desktop and mobile.
- INTEGRATION TEST - SMS Delivery End-to-End: Send a test maintenance reminder SMS to MSP team phone numbers. Verify message delivers within 30 seconds, sender shows the registered 10DLC number, message content matches AI-generated text with opt-out appended, reply STOP triggers automatic opt-out in both Twilio and the local consent database, and Twilio delivery receipt webhook fires and updates communication_log.
- LOAD TEST - Batch Processing: Queue 200 test records in the processing queue and run the AI Content Generation and Delivery workflows. Verify all 200 are processed within 2 hours, API rate limits are respected (no 429 errors), error handling correctly retries failed API calls, batch splitting works correctly without data loss, and database records are complete and consistent.
- COMPLIANCE TEST - Opt-Out Timing: Trigger an unsubscribe via email and a STOP reply via SMS. Verify both channels cease all communications to that customer within the next workflow execution cycle (max 30 minutes). Verify the consent database is updated with opt-out timestamp and source. Attempt to queue a new message for the opted-out customer and verify it is blocked by the Consent Verification middleware.
- COMPLIANCE TEST - CAN-SPAM Requirements: Review 5 sample AI-generated emails and verify each contains: accurate From address and dealership name, non-deceptive subject line, physical mailing address of the dealership, working unsubscribe mechanism, and identification as an advertisement (if promotional).
- DELIVERABILITY TEST - Email Warmup: After configuring the dedicated SendGrid IP, send 50 emails on Day 1 to engaged contacts (MSP team + dealership staff). Check deliverability via SendGrid analytics. Gradually increase volume per SendGrid's IP warmup schedule over 4 weeks. Monitor bounce rate (must stay under 2%) and spam complaint rate (must stay under 0.1%).
- MONITORING TEST - Alert Verification: Deliberately cause each monitored failure condition: stop the n8n container (verify Uptime Kuma alerts within 2 minutes), use an invalid API key (verify error workflow triggers alert), simulate a high bounce rate in test data (verify daily health report flags the issue). Confirm all alerts reach the MSP operations channel via the configured notification method.
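The content-quality rules exercised by the first unit test can be scripted as a standalone validator. This sketch mirrors the checks in the Parse AI Response node, plus the 60-character subject limit from the test description above:

```javascript
// Sketch of the content-quality gate used in unit testing.
// Returns { passed, errors } for one AI-generated content payload.
function validateContent(c) {
  const errors = [];
  if (!c.email_subject || c.email_subject.length < 10) errors.push('Missing/short subject');
  if (c.email_subject && c.email_subject.length > 60) errors.push('Subject over 60 characters');
  if (!c.email_body_html || c.email_body_html.length < 50) errors.push('Missing/short email body');
  if (!c.sms_body || c.sms_body.length < 20) errors.push('Missing/short SMS body');
  if (c.sms_body && c.sms_body.length > 145) errors.push('SMS too long');
  // Any leftover {{variable}} means template substitution failed upstream.
  for (const field of ['email_subject', 'email_body_html', 'sms_body']) {
    if (c[field] && c[field].includes('{{')) {
      errors.push(`Unreplaced template variable in ${field}`);
    }
  }
  return { passed: errors.length === 0, errors };
}
```

Run it against each of the 10 sample generations; any `passed: false` result should be flagged for human review before the workflow goes live.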
Client Handoff
The client handoff should be conducted as a 90-minute on-site session with the service manager, marketing coordinator (if applicable), and general manager. Cover the following topics:
NEVER add customers to SMS lists without documented written consent.
SUCCESS CRITERIA TO REVIEW: Confirm the client can identify where to see campaign performance, knows who to call for support, understands the consent requirement for SMS, and can articulate the value proposition to their leadership (expected 15-25% increase in service rebooking rate, labor savings of 5-10 hours/week on manual follow-ups).
Maintenance
Ongoing Maintenance Responsibilities
Weekly (MSP Technician, 1–2 Hours)
- Review the automated weekly analytics report for anomalies (open rate drop >5%, bounce rate spike, unusual unsubscribe volume)
- Sample 10 AI-generated messages across all communication types for quality assurance and brand voice consistency
- Check n8n workflow execution logs for errors or warnings
- Verify DMS data sync is pulling current records (check last sync timestamp)
- Review Uptime Kuma monitoring dashboard for any downtime events
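The DMS sync-freshness check in the weekly list can be automated with a small helper. A sketch: the 8-hour staleness threshold (two missed cycles of the 4-hour sync schedule) is an assumption you can tune per client:

```javascript
// Sketch of a weekly health check: flag the DMS sync as stale when the
// last successful run is older than two scheduled 4-hour intervals.
function syncIsStale(lastSyncIso, now = new Date(), maxAgeHours = 8) {
  const ageHours = (now - new Date(lastSyncIso)) / (1000 * 60 * 60);
  return ageHours > maxAgeHours;
}
```

Feed it the most recent timestamp from the n8n execution log (or a sync-status table) and alert the operations channel when it returns true.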
Monthly (MSP Technician, 2–3 Hours)
- Review and optimize AI prompt templates based on performance data (A/B test subject lines, adjust tone)
- Update seasonal campaign prompts and promotional offers per dealership marketing calendar
- Audit consent database: verify opt-out processing is working, check for stale records
- Review OpenAI/Anthropic API usage and billing — adjust model selection if costs are trending above budget
- Update n8n to latest stable version
- Review SendGrid deliverability metrics and sender reputation score
- Backup PostgreSQL compliance database
docker compose pull n8n && docker compose up -d n8n
docker exec ai-comms-postgres pg_dump -U n8n compliance > /backup/compliance_$(date +%Y%m%d).sql
Quarterly (MSP Account Manager + Technician, 4–6 Hours)
- Comprehensive compliance audit: verify FTC Safeguards documentation is current, review all consent records for completeness, test opt-out flows end-to-end
- Client business review: present campaign ROI metrics (appointments booked, estimated revenue attributed), compare to baseline, recommend strategy adjustments
- Update AI models if new cost-effective options are available (e.g., new OpenAI mini model release)
- Review and update firewall rules, SSL certificates, and security patches on the automation server
- Test disaster recovery: restore PostgreSQL backup to verify integrity
- Review DMS API for any changes or deprecations announced by CDK/Tekion/Reynolds
Annually
- Full security assessment of the automation platform per FTC Safeguards requirements
- Renew FortiGate subscription licenses
- Review and renegotiate Twilio/SendGrid plans based on actual volume
- Update client's written information security plan
Model Retraining / Prompt Refresh Triggers
- Open rate drops below 15% for 3 consecutive weeks
- Unsubscribe rate exceeds 0.5% for any campaign type
- Client rebrands or changes marketing messaging
- New OEM co-op requirements are issued
- Major AI model update released (test new model in staging before production swap)
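The metric-based triggers above can be evaluated automatically from the weekly analytics rows. A sketch, assuming `weeks` is ordered newest-first and `open_rate` / `unsubscribe_rate` are percentages as in the analytics query:

```javascript
// Sketch: evaluate the prompt-refresh triggers against recent weekly metrics.
// Returns a list of trigger reasons (empty means no refresh needed).
function needsPromptRefresh(weeks) {
  const reasons = [];

  // Open rate below 15% for 3 consecutive weeks.
  if (weeks.length >= 3 && weeks.slice(0, 3).every(w => w.open_rate < 15)) {
    reasons.push('open_rate_below_15_for_3_weeks');
  }

  // Unsubscribe rate above 0.5% in the most recent week.
  if (weeks[0] && weeks[0].unsubscribe_rate > 0.5) {
    reasons.push('unsubscribe_rate_above_0.5');
  }

  return reasons;
}
```

The remaining triggers (rebrands, OEM co-op changes, model releases) are event-driven and stay manual; this helper only covers the two measurable thresholds.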
SLA Framework
- P1 (System outage, compliance breach, data leak): 1-hour response, 4-hour resolution target
- P2 (Delivery failures, DMS sync broken, high bounce rate): 4-hour response, 1 business day resolution
- P3 (Template changes, new campaign setup, reporting requests): 1 business day response, 3 business day resolution
- P4 (Feature requests, optimization suggestions): Next scheduled maintenance window
Escalation Path
Technician → Senior Engineer → MSP Account Manager → MSP Operations Director. For compliance incidents: immediately notify MSP compliance officer and client's designated Qualified Individual.
Alternatives
...
Kimoby Turnkey Platform
Replace the entire custom integration stack with Kimoby, a purpose-built automotive service communication platform. Kimoby provides pre-built DMS integrations (CDK, Reynolds, DealerTrack), automated service reminders, two-way texting, photo/video messaging, and AI-assisted content — all in a single SaaS platform managed by the vendor.
ActiveCampaign + OpenAI via Zapier
Use ActiveCampaign as the central CRM and marketing automation platform, connected to OpenAI via Zapier for AI content generation. ActiveCampaign handles email delivery, contact management, automation sequences, and basic SMS. Zapier orchestrates the AI API calls and DMS data sync.
Fullpath AI Marketing Ecosystem
Use Fullpath's automotive-specific CDP and AI-powered marketing platform. Fullpath unifies dealership data sources into a single customer data platform, then uses AI agents to generate ad copy, email campaigns, and audience targeting — with built-in OEM compliance for franchise dealers.
Cloud-Hosted n8n (No On-Premise Server)
Identical to the primary approach but replace the on-premise Dell PowerEdge server with n8n Cloud hosting or a small VPS (e.g., Hetzner, DigitalOcean, or AWS Lightsail). Eliminates on-premise hardware entirely.