
Implementation Guide: Draft service follow-up communications and maintenance reminder campaigns

Step-by-step implementation guide for deploying AI to draft service follow-up communications and maintenance reminder campaigns for Automotive clients.

Hardware Procurement

Automation Server

Dell Technologies
Dell PowerEdge T360 (Xeon E-2434, 32GB DDR5 ECC, 1TB NVMe SSD, iDRAC9)
Qty: 1

$1,800 per unit (MSP cost) / $2,800–$3,200 suggested resale

Hosts the self-hosted n8n Community Edition workflow orchestrator, local DMS API bridge middleware, prompt template repository, and audit logging database. Provides on-premise control for data processing and eliminates per-execution cloud fees. Can serve multiple dealership clients if MSP centralizes operations.

Firewall / UTM Appliance

Fortinet
FortiGate 40F (FG-40F)
Qty: 1

$400 hardware + $200/year FortiGuard subscription (MSP cost) / $800 hardware + $400/year suggested resale

Secures API traffic between on-premise systems and cloud services (OpenAI, Twilio, DMS APIs). Provides network segmentation to isolate customer PII processing from general dealership traffic, supporting FTC Safeguards Rule compliance. Enables outbound HTTPS whitelisting to approved API endpoints only.

Uninterruptible Power Supply

APC by Schneider Electric
APC Smart-UPS 1500VA (SMT1500)
Qty: 1

$550 per unit (MSP cost) / $750 suggested resale

Protects the automation server from power interruptions, ensuring workflow continuity and preventing data corruption during active DMS sync or campaign generation operations.

Staff Workstations (if needed)

Dell Technologies
Dell OptiPlex 7020 Micro (i5-14500T, 16GB DDR5, 256GB SSD)
Qty: 2

$900 per unit (MSP cost) / $1,200 suggested resale

Workstations for the service manager and marketing coordinator to review AI-generated content, approve campaigns, and manage the n8n dashboard. Only needed if existing dealership workstations are outdated or insufficient.

Software Procurement

n8n Community Edition (self-hosted)

n8n

$0/month software cost; MSP resells as managed service at $100–$200/month

Core workflow orchestration engine. Connects DMS APIs to AI content generation APIs to email/SMS delivery platforms. Handles scheduling, branching logic, error handling, and audit logging for all automated communications.

OpenAI API (GPT-5.4 mini)

OpenAI
GPT-5.4 mini

$0.15/1M input tokens, $0.60/1M output tokens; estimated $15–$50/month per dealership (2,000 customers)

Primary AI engine for generating personalized service follow-up emails, maintenance reminders, and campaign content. GPT-5.4 mini provides excellent quality at very low cost for template-based generation tasks.

Anthropic API (Claude Haiku)

Anthropic
Claude Haiku

$0.80/1M input tokens, $4.00/1M output tokens; estimated $20–$75/month per dealership if used as primary

Secondary/fallback AI engine. Provides diversity in content generation style and serves as redundancy if OpenAI experiences outages. Excellent at maintaining consistent brand voice across long campaigns. License type: Usage-based API (pay-per-token).

Twilio SendGrid (Pro plan)

Twilio
SendGrid Pro

$89.95/month for 100K emails (MSP cost) / $150–$200/month suggested resale

Transactional and marketing email delivery platform. Handles all email sending including post-service follow-ups, maintenance reminders, and seasonal campaigns. Pro tier includes dedicated IP address for deliverability and advanced analytics.

Twilio Programmable SMS

Twilio
Programmable Messaging (A2P 10DLC)

~$0.0079/segment + carrier surcharges (~$0.003–$0.005); estimated $40–$120/month per dealership / resell at $0.02–$0.03/msg

SMS delivery for maintenance reminders and service follow-ups. Supports A2P 10DLC registration for compliant business texting. Two-way messaging enables customers to reply to schedule appointments.

PostgreSQL 16

PostgreSQL Global Development Group
PostgreSQL 16

$0/month

Local database on the automation server storing consent/opt-in records with timestamps, communication logs for compliance auditing, prompt templates, and campaign performance metrics.

Docker Engine

Docker Inc.

$0/month

Container runtime for hosting n8n and PostgreSQL on the automation server. Simplifies deployment, updates, and backup/restore operations.

Nginx Proxy Manager

Open-source community

$0/month

Reverse proxy with automatic SSL certificate management for securely exposing n8n webhook endpoints that receive DMS event notifications.

Prerequisites

  • Active DMS subscription with API access enabled — CDK Global (Fortellis developer account and app registration), Reynolds & Reynolds (RCI Program certification initiated), or Tekion (Automotive Partner Cloud API key). Verify API access tier with the DMS vendor before beginning implementation.
  • Dealership email domain (e.g., service@dealername.com) with DNS management access for configuring SPF, DKIM, and DMARC records required for email deliverability.
  • Static public IP address at the dealership or MSP data center for whitelisting with DMS API providers and for hosting n8n webhook endpoints.
  • Minimum 50 Mbps symmetric internet connection (100+ Mbps recommended) with low latency for API calls to OpenAI, Twilio, and DMS endpoints.
  • Existing customer database with minimum fields: customer name, email, phone, vehicle year/make/model/VIN, last service date, services performed, and next recommended service. This data typically resides in the DMS.
  • FTC Safeguards Rule compliance baseline: designated Qualified Individual, written information security plan, multi-factor authentication on all systems accessing customer PII, encryption at rest and in transit.
  • A2P 10DLC brand and campaign registration initiated through Twilio (takes 1–4 weeks for approval). Required before any SMS messages can be sent at production volume.
  • TCPA-compliant opt-in records: documented prior express written consent for promotional SMS, with timestamps. Service reminders may qualify as informational but marketing upsells require explicit consent.
  • CAN-SPAM compliance: physical mailing address for the dealership to include in email footers, working unsubscribe mechanism, accurate sender identification.
  • Dealership stakeholder sign-off on communication templates, brand voice guidelines, campaign frequency, and escalation procedures for AI-generated content review.

Installation Steps

...

Step 1: Provision and Secure the Automation Server

Install and cable the Dell PowerEdge T360 (tower chassis). Install Ubuntu Server 22.04 LTS. Configure the Fortinet FortiGate 40F firewall. Connect the APC UPS. Harden the OS per CIS benchmarks and enable the UFW firewall.

bash
# Install Ubuntu Server 22.04 LTS from USB/iDRAC virtual media
# After OS install, update and harden:
sudo apt update && sudo apt upgrade -y
sudo apt install -y ufw fail2ban unattended-upgrades curl git
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow ssh
sudo ufw allow 443/tcp
sudo ufw allow 5678/tcp comment 'n8n web UI - restrict to management IPs later'
sudo ufw enable
# Configure fail2ban for SSH
sudo systemctl enable fail2ban
sudo systemctl start fail2ban
# Enable automatic security updates
sudo dpkg-reconfigure -plow unattended-upgrades
# Set timezone to dealership local time
sudo timedatectl set-timezone America/Chicago
Note

Replace America/Chicago with the dealership's local timezone. Configure FortiGate 40F to only allow outbound HTTPS to: api.openai.com, api.anthropic.com, api.sendgrid.com, api.twilio.com, and the DMS API endpoint (e.g., api.fortellis.io). Enable FortiGate logging for all API traffic. Set up iDRAC remote management for the PowerEdge with a separate management VLAN.

Step 2: Install Docker and Docker Compose

Install Docker Engine and Docker Compose on the automation server to containerize n8n, PostgreSQL, and Nginx Proxy Manager for simplified deployment and management.

bash
# Install Docker Engine
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
# Install Docker Compose plugin
sudo apt install -y docker-compose-plugin
# Verify installation
docker --version
docker compose version
# Create project directory
sudo mkdir -p /opt/ai-comms-platform
sudo chown $USER:$USER /opt/ai-comms-platform
cd /opt/ai-comms-platform
Note

Log out and back in after adding user to docker group. All subsequent commands assume you are in /opt/ai-comms-platform directory.

Step 3: Create Docker Compose Configuration

Create the docker-compose.yml file that defines the n8n workflow engine, PostgreSQL database, and Nginx Proxy Manager containers with proper networking, volumes, and environment variables.

bash
cat > /opt/ai-comms-platform/docker-compose.yml << 'EOF'
version: '3.8'

volumes:
  n8n_data:
  postgres_data:
  nginx_data:
  nginx_letsencrypt:

networks:
  ai-comms-net:
    driver: bridge

services:
  postgres:
    image: postgres:16-alpine
    container_name: ai-comms-postgres
    restart: always
    networks:
      - ai-comms-net
    environment:
      POSTGRES_DB: n8n
      POSTGRES_USER: n8n
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U n8n']
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    container_name: ai-comms-n8n
    restart: always
    networks:
      - ai-comms-net
    ports:
      - '5678:5678'
    environment:
      N8N_HOST: ${N8N_HOST}
      N8N_PORT: 5678
      N8N_PROTOCOL: https
      WEBHOOK_URL: https://${N8N_HOST}/
      DB_TYPE: postgresdb
      DB_POSTGRESDB_HOST: postgres
      DB_POSTGRESDB_PORT: 5432
      DB_POSTGRESDB_DATABASE: n8n
      DB_POSTGRESDB_USER: n8n
      DB_POSTGRESDB_PASSWORD: ${POSTGRES_PASSWORD}
      N8N_ENCRYPTION_KEY: ${N8N_ENCRYPTION_KEY}
      N8N_BASIC_AUTH_ACTIVE: 'true'
      N8N_BASIC_AUTH_USER: ${N8N_ADMIN_USER}
      N8N_BASIC_AUTH_PASSWORD: ${N8N_ADMIN_PASSWORD}
      GENERIC_TIMEZONE: ${TIMEZONE}
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy

  nginx-proxy:
    image: jc21/nginx-proxy-manager:latest
    container_name: ai-comms-nginx
    restart: always
    networks:
      - ai-comms-net
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    volumes:
      - nginx_data:/data
      - nginx_letsencrypt:/etc/letsencrypt
EOF
# Create environment file
cat > /opt/ai-comms-platform/.env << 'EOF'
POSTGRES_PASSWORD=CHANGE_ME_STRONG_PASSWORD_32CHARS
N8N_HOST=n8n.yourmsp.com
N8N_ENCRYPTION_KEY=CHANGE_ME_RANDOM_64CHAR_HEX_STRING
N8N_ADMIN_USER=mspadmin
N8N_ADMIN_PASSWORD=CHANGE_ME_STRONG_PASSWORD
TIMEZONE=America/Chicago
EOF
# Set secure permissions on .env
chmod 600 /opt/ai-comms-platform/.env
Critical

CRITICAL: Replace all CHANGE_ME values with strong, unique passwords. Generate the N8N_ENCRYPTION_KEY with: openssl rand -hex 32. The N8N_HOST should be a subdomain pointed to the server's static IP (e.g., n8n.yourmsp.com). This encryption key protects stored credentials — back it up securely. If lost, all n8n credentials must be re-entered.
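The required secrets can be generated on the server itself before editing `.env`; a quick sketch using openssl (the variable names match the `.env` keys above; hex lengths are a convenient choice, not a requirement):

```shell
# 64-char hex encryption key for n8n credential storage
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
# 32-char and 24-char random passwords for Postgres and the n8n admin user
POSTGRES_PASSWORD=$(openssl rand -hex 16)
N8N_ADMIN_PASSWORD=$(openssl rand -hex 12)
# Print for copy/paste into .env (or substitute with sed)
echo "N8N_ENCRYPTION_KEY=$N8N_ENCRYPTION_KEY"
echo "POSTGRES_PASSWORD=$POSTGRES_PASSWORD"
echo "N8N_ADMIN_PASSWORD=$N8N_ADMIN_PASSWORD"
```

Store the generated values in the MSP's password manager before starting the containers.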

Step 4: Create Database Initialization Scripts

Create SQL initialization scripts that set up the compliance tracking tables for consent records, communication logs, and campaign metrics alongside the n8n database.

bash
mkdir -p /opt/ai-comms-platform/init-scripts
cat > /opt/ai-comms-platform/init-scripts/01-compliance-tables.sql << 'SQLEOF'
-- Compliance and audit database
-- (PostgreSQL does not support CREATE DATABASE IF NOT EXISTS; this init script
-- only runs on first database initialization, so a plain CREATE is safe)
CREATE DATABASE compliance;
\c compliance

CREATE TABLE IF NOT EXISTS consent_records (
  id SERIAL PRIMARY KEY,
  customer_id VARCHAR(100) NOT NULL,
  dealership_id VARCHAR(50) NOT NULL,
  channel VARCHAR(10) NOT NULL CHECK (channel IN ('email', 'sms')),
  consent_type VARCHAR(20) NOT NULL CHECK (consent_type IN ('promotional', 'informational', 'transactional')),
  consent_given BOOLEAN NOT NULL DEFAULT false,
  consent_timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  consent_source VARCHAR(255),
  ip_address INET,
  opt_out_timestamp TIMESTAMPTZ,
  opt_out_source VARCHAR(255),
  created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);

CREATE INDEX idx_consent_customer ON consent_records(customer_id, channel);
CREATE INDEX idx_consent_dealership ON consent_records(dealership_id);

CREATE TABLE IF NOT EXISTS communication_log (
  id SERIAL PRIMARY KEY,
  dealership_id VARCHAR(50) NOT NULL,
  customer_id VARCHAR(100) NOT NULL,
  channel VARCHAR(10) NOT NULL,
  message_type VARCHAR(50) NOT NULL,
  subject_line TEXT,
  ai_model_used VARCHAR(50),
  ai_prompt_template VARCHAR(100),
  content_hash VARCHAR(64),
  sendgrid_message_id VARCHAR(255),
  twilio_message_sid VARCHAR(255),
  delivery_status VARCHAR(30),
  sent_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  opened_at TIMESTAMPTZ,
  clicked_at TIMESTAMPTZ,
  bounced_at TIMESTAMPTZ,
  unsubscribed_at TIMESTAMPTZ
);

CREATE INDEX idx_comlog_customer ON communication_log(customer_id);
CREATE INDEX idx_comlog_dealership_date ON communication_log(dealership_id, sent_at);

CREATE TABLE IF NOT EXISTS campaign_metrics (
  id SERIAL PRIMARY KEY,
  dealership_id VARCHAR(50) NOT NULL,
  campaign_name VARCHAR(255) NOT NULL,
  campaign_type VARCHAR(50) NOT NULL,
  total_sent INTEGER DEFAULT 0,
  total_delivered INTEGER DEFAULT 0,
  total_opened INTEGER DEFAULT 0,
  total_clicked INTEGER DEFAULT 0,
  total_unsubscribed INTEGER DEFAULT 0,
  total_bounced INTEGER DEFAULT 0,
  appointments_booked INTEGER DEFAULT 0,
  revenue_attributed NUMERIC(10,2) DEFAULT 0,
  started_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
  completed_at TIMESTAMPTZ
);
SQLEOF
Note

The compliance database is separate from the n8n operational database for clarity and potential future separation. These tables satisfy FTC Safeguards audit logging requirements and TCPA consent documentation requirements. Back up this database daily.
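The daily backup mentioned above can be a simple cron job running pg_dump inside the container; a sketch (create the script in the project directory and adjust the backup path and retention to MSP policy):

```shell
# Create a nightly backup script for the compliance database
cat > backup-compliance.sh << 'EOF'
#!/usr/bin/env bash
set -euo pipefail
BACKUP_DIR=/opt/ai-comms-platform/backups
mkdir -p "$BACKUP_DIR"
# Dump the compliance DB from the running container and compress it
docker exec ai-comms-postgres pg_dump -U n8n -d compliance \
  | gzip > "$BACKUP_DIR/compliance-$(date +%F).sql.gz"
# Keep 30 days of backups
find "$BACKUP_DIR" -name 'compliance-*.sql.gz' -mtime +30 -delete
EOF
chmod +x backup-compliance.sh
# Schedule daily at 2AM, e.g. in root's crontab:
# 0 2 * * * /opt/ai-comms-platform/backup-compliance.sh
```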

Step 5: Launch the Platform Stack

Start all Docker containers, verify they are running correctly, and configure Nginx Proxy Manager with SSL for the n8n web interface.

Start containers and verify startup logs
shell
cd /opt/ai-comms-platform
docker compose up -d
# Verify all containers are running
docker compose ps
# Check n8n logs for successful startup
docker compose logs n8n --tail 50
# Check PostgreSQL is healthy
docker compose logs postgres --tail 20
1. Access Nginx Proxy Manager at http://SERVER_IP:81
2. Default login: admin@example.com / changeme
3. Configure proxy host: n8n.yourmsp.com -> http://ai-comms-n8n:5678
4. Enable SSL with Let's Encrypt

Restrict direct access to port 5678
shell
sudo ufw delete allow 5678/tcp
# Note: Docker-published ports bypass UFW. To truly restrict n8n to localhost,
# also change the n8n port mapping in docker-compose.yml to '127.0.0.1:5678:5678'
# and re-run 'docker compose up -d n8n'. Nginx Proxy Manager still reaches n8n
# over the internal Docker network (ai-comms-n8n:5678).
Note

After configuring Nginx Proxy Manager, access n8n only via https://n8n.yourmsp.com. The Nginx Proxy Manager admin interface (port 81) should be firewalled to MSP management IPs only. Change the default NPM admin credentials immediately on first login.

Step 6: Configure Email Authentication (SPF, DKIM, DMARC)

Set up proper email authentication records on the dealership's domain to ensure high deliverability for AI-generated communications. This is critical — without it, emails will land in spam.

1. SPF Record — Add a TXT record for dealername.com: v=spf1 include:sendgrid.net include:_spf.google.com ~all
2. DKIM — In the SendGrid dashboard: Settings -> Sender Authentication -> Authenticate Your Domain. Follow the wizard to get CNAME records, then add to DNS: s1._domainkey.dealername.com CNAME s1.domainkey.uXXXXXX.wlXXX.sendgrid.net and s2._domainkey.dealername.com CNAME s2.domainkey.uXXXXXX.wlXXX.sendgrid.net
3. DMARC Record — Add a TXT record for _dmarc.dealername.com: v=DMARC1; p=quarantine; rua=mailto:dmarc@yourmsp.com; pct=100
4. SendGrid Link Branding (optional but recommended) — Settings -> Sender Authentication -> Link Branding. Creates CNAMEs so click/open tracking links use the dealer domain.
Verify all DNS records after propagation
shell
dig TXT dealername.com +short
dig CNAME s1._domainkey.dealername.com +short
dig TXT _dmarc.dealername.com +short
Note

DNS propagation takes 24–48 hours. Start with DMARC policy p=none for monitoring, then move to p=quarantine after 2 weeks of clean reports. Send the DMARC aggregate reports to the MSP's monitoring mailbox. Use https://mxtoolbox.com/SuperTool.aspx to verify all records. SendGrid domain authentication is mandatory for the Pro plan's dedicated IP to work properly.

Step 7: Register A2P 10DLC Brand and Campaign with Twilio

Register the dealership as a brand and register the messaging campaign with Twilio for compliant A2P business SMS. This is legally required for all business texting in the US and takes 1–4 weeks for approval.

1. Navigate to Messaging -> Trust Hub -> A2P Brand Registration. Register the dealership as a brand with: legal business name (as on the IRS EIN letter), EIN (Tax ID), business address, website URL, business type (Private/Public Company), and industry (Automotive).
2. After brand approval, register the campaign: navigate to Messaging -> Services -> Create Messaging Service. Campaign use case: Mixed (service reminders + marketing). Sample messages: 'Hi [Name], your [Vehicle] is due for an oil change. Reply STOP to opt out.' and 'Thank you for servicing your [Vehicle] at [Dealer]. How was your experience? Reply STOP to opt out.' Opt-in description: 'Customers provide consent via service intake form checkbox'. Opt-in keywords: START, YES. Opt-out keywords: STOP, CANCEL, UNSUBSCRIBE. Help keyword: HELP.
3. Purchase a local phone number: navigate to Phone Numbers -> Buy a Number, select a local area code matching the dealership location, and assign the number to the Messaging Service.
Note

A2P 10DLC registration costs $4/brand + $15/campaign (one-time) plus $2/month for the phone number. Brand vetting typically takes 1–5 business days. Campaign registration takes an additional 1–3 weeks. DO NOT send SMS at volume before registration is approved — messages will be filtered and the number can be blocked. If the dealership needs to start sending immediately, consider Twilio's Toll-Free number verification as a faster alternative (3-5 business days).

Step 8: Configure API Credentials in n8n

Add all API credentials to n8n's credential store. These are encrypted at rest using the N8N_ENCRYPTION_KEY. Set up credentials for OpenAI, Anthropic (fallback), Twilio SendGrid, Twilio SMS, and the DMS API.

1. Access n8n at https://n8n.yourmsp.com — Navigate to: Credentials -> Add Credential
2. OpenAI API — Type: OpenAI | API Key: sk-proj-XXXX (from https://platform.openai.com/api-keys) | Organization ID: org-XXXX (optional)
3. Anthropic API (fallback) — Type: HTTP Header Auth | Header Name: x-api-key | Header Value: sk-ant-XXXX (from https://console.anthropic.com/settings/keys)
4. SendGrid API — Type: SendGrid | API Key: SG.XXXX (from SendGrid -> Settings -> API Keys -> create with Mail Send permission)
5. Twilio API — Type: Twilio | Account SID: ACXXXX | Auth Token: XXXX (from the Twilio Console dashboard)
6. DMS API (example for CDK Fortellis) — Type: OAuth2 | Client ID and Client Secret: (from the Fortellis app registration) | Access Token URL: https://identity.fortellis.io/oauth2/aus1p1ixy7YL8cMq02p7/v1/token | Scope: (per Fortellis app permissions)
7. PostgreSQL (compliance DB) — Type: Postgres | Host: postgres (Docker internal hostname) | Port: 5432 | Database: compliance | User: n8n | Password: (from .env POSTGRES_PASSWORD)
Note

Create separate API keys for each service with minimal required permissions. For OpenAI, set a monthly spending limit in the dashboard (Settings -> Billing -> Usage limits) — start at $100/month and adjust. For SendGrid, create a key with only 'Mail Send' permission, not full access. Store a backup of all credentials in the MSP's password manager (e.g., IT Glue, Hudu). Never share credentials in plaintext via email or tickets.

Step 9: Build the DMS Data Extraction Workflow

Create the n8n workflow that periodically queries the DMS API to extract service records, customer information, and vehicle data. This workflow runs on a schedule (e.g., every 4 hours) and identifies customers who need follow-up communications based on service events and maintenance schedules.

1. In n8n, create a new workflow: 'DMS Data Sync'
2. Import the following workflow JSON via n8n's import feature
3. Schedule Trigger (every 4 hours)
4. HTTP Request node → DMS API (get recent closed repair orders)
5. Function node → Parse and normalize the DMS response
6. Postgres node → Check whether the customer is already in the recent comms log
7. IF node → Branch: new follow-up needed vs. already contacted
8. Function node → Categorize communication type: post_service_followup (RO closed today), satisfaction_check (RO closed 3 days ago), maintenance_reminder (next service due within 2 weeks), declined_service_followup (services declined on RO, 7 days later), seasonal_campaign (quarterly, all eligible customers)
9. Postgres node → Insert records into the processing queue table
CDK Fortellis example HTTP Request node configuration for n8n
http
# CDK Fortellis HTTP Request:
# Method: GET
# URL: https://api.fortellis.io/cdkdrive/service/v1/repair-orders
# Query Params: status=closed&closedAfter={{$now.minus(4,'hours').toISO()}}
# Authentication: OAuth2 (Fortellis credential)
# Headers: Accept: application/json, Request-Id: {{$workflow.id}}-{{$now.toMillis()}}
Note

The DMS API integration is the most variable part of this implementation. CDK Fortellis has the most straightforward REST API. Tekion APC offers open self-serve APIs. Reynolds & Reynolds RCI requires a formal certification process that can take 4–8 weeks — plan accordingly and start the RCI application during Phase 1. If DMS API access is delayed, create a temporary CSV import workflow where the service manager exports data weekly from the DMS as a stopgap.
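The 4-hour query window from the node configuration above can be sanity-checked from the shell; a sketch that rebuilds the same request URL, assuming GNU date is available (the endpoint and parameters are those shown in the example node, and the curl line requires a valid OAuth2 token):

```shell
# Reconstruct the repair-order query for the last 4 hours (UTC, ISO-8601)
BASE_URL="https://api.fortellis.io/cdkdrive/service/v1/repair-orders"
CLOSED_AFTER=$(date -u -d '4 hours ago' +%Y-%m-%dT%H:%M:%SZ)
REQUEST_URL="${BASE_URL}?status=closed&closedAfter=${CLOSED_AFTER}"
echo "$REQUEST_URL"
# With a token in $TOKEN, the request could then be exercised as:
# curl -s -H "Authorization: Bearer $TOKEN" -H "Accept: application/json" "$REQUEST_URL"
```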

Step 10: Build the AI Content Generation Workflow

Create the main n8n workflow that takes queued customer records, generates personalized email and SMS content using the AI API, validates the output, and queues it for delivery. This is the core AI workflow.

  • In n8n, create a new workflow: 'AI Content Generator'
  • Trigger: runs every 30 minutes or on webhook from the DMS Sync workflow
  • Workflow nodes:
    1. Postgres node → Fetch unprocessed records from queue
    2. SplitInBatches node → Process 10 at a time to manage API rate limits
    3. Function node → Build personalized prompt from template + customer data
    4. HTTP Request node → POST to OpenAI API (or Anthropic fallback)
    5. Function node → Parse AI response JSON, validate fields are present
    6. IF node → Quality check (length > 50 chars, no placeholder text left)
    7. Postgres node → Log AI generation to communication_log
    8. Set node → Format for delivery queue
    9. Postgres node → Insert into delivery queue with scheduled send time
  • Error handling: add an Error Trigger node for the workflow:
    - On OpenAI failure: retry 3x with exponential backoff
    - On persistent failure: switch to Anthropic Claude Haiku
    - Log all errors to the compliance database
HTTP Request node body (POST to https://api.openai.com/v1/chat/completions)
json
{
  "model": "gpt-5.4-mini",
  "temperature": 0.7,
  "max_tokens": 500,
  "messages": [
    {"role": "system", "content": "{{system_prompt}}"},
    {"role": "user", "content": "{{user_prompt}}"}
  ],
  "response_format": {"type": "json_object"}
}
Note

Request JSON response format from the AI to get structured output (subject line, email body, sms body as separate fields). This makes parsing reliable. Set temperature to 0.7 for variety in content while maintaining professionalism. The quality check node should verify: no empty fields, no leftover template variables like {{name}}, content length within bounds, and no inappropriate content. Add a Wait node between batches to respect API rate limits (GPT-5.4 mini: 30,000 TPM on free tier, 2M+ TPM on paid).
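The quality gate described above can be expressed as a small predicate; a bash sketch (the function name and sample strings are illustrative) that rejects output containing unresolved merge fields or fewer than 50 characters:

```shell
# Return 0 if generated text passes basic quality checks, 1 otherwise
check_content() {
  local text="$1"
  # Reject unresolved template variables like {{name}}
  case "$text" in *'{{'*'}}'*) return 1 ;; esac
  # Reject empty or suspiciously short output (< 50 chars)
  [ "${#text}" -ge 50 ] || return 1
  return 0
}

check_content "Hi {{name}}, your vehicle is due for service." && echo PASS || echo FAIL
check_content "Hi Maria, your 2021 Honda CR-V is due for an oil change. Book online or call us today." && echo PASS || echo FAIL
```

The same checks translate directly into the IF node's expression; additional rules (profanity lists, banned claims) can be appended as further patterns.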

Step 11: Build the Email Delivery Workflow

Create the n8n workflow that takes approved content from the delivery queue and sends personalized emails through Twilio SendGrid with proper tracking, unsubscribe handling, and deliverability best practices.

1. In n8n, create a new workflow: 'Email Delivery Engine'
2. Trigger: Schedule every 15 minutes during business hours (8AM-7PM local)
3. Postgres node → Fetch queued emails (channel='email', status='pending')
4. Postgres node → Verify consent: SELECT consent_given FROM consent_records WHERE customer_id={{customer_id}} AND channel='email' AND opt_out_timestamp IS NULL
5. IF node → consent_given == true ? Continue : Skip + log
6. SplitInBatches → Process 50 at a time
7. HTTP Request node → SendGrid v3 Mail Send API (see payload below)
8. Postgres node → Update communication_log with sendgrid_message_id and status
9. Wait node → 200ms between sends for rate limiting
10. Webhook workflow for SendGrid Event Webhooks: create a separate workflow triggered by SendGrid events (opens, clicks, bounces, unsubscribes) — URL: https://n8n.yourmsp.com/webhook/sendgrid-events — which parses events and updates the communication_log and consent_records tables
SendGrid v3 Mail Send API — HTTP Request node payload
http
POST https://api.sendgrid.com/v3/mail/send
Authorization: Bearer {{$credentials.sendgridApiKey}}

{
  "personalizations": [{
    "to": [{"email": "{{customer_email}}", "name": "{{customer_name}}"}],
    "subject": "{{ai_subject_line}}",
    "custom_args": {
      "dealership_id": "{{dealership_id}}",
      "campaign_type": "{{message_type}}",
      "comm_log_id": "{{comm_log_id}}"
    }
  }],
  "from": {"email": "service@dealername.com", "name": "Dealer Service Team"},
  "reply_to": {"email": "service@dealername.com"},
  "content": [{"type": "text/html", "value": "{{ai_email_body_html}}"}],
  "tracking_settings": {
    "open_tracking": {"enable": true},
    "click_tracking": {"enable": true}
  },
  "asm": {
    "group_id": 12345,
    "groups_to_display": [12345, 12346]
  }
}
Note

The ASM (Advanced Suppression Manager) group_id must be configured in SendGrid first — create groups for 'Service Reminders' and 'Promotional Offers'. SendGrid Event Webhooks must be configured in Settings -> Mail Settings -> Event Webhook pointing to the n8n webhook URL. This provides real-time tracking data. Set up IP warmup if using a new dedicated IP — start with 50 emails/day and increase gradually over 4 weeks. Use SendGrid's suppression management to automatically honor unsubscribes.
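The gradual IP warmup mentioned above can be planned with a simple doubling schedule; a sketch (the starting volume, doubling interval, and cap are assumptions to adjust against SendGrid's warmup guidance and the dealership's steady-state volume):

```shell
# Print a 28-day warmup plan: start at 50 emails/day, double every 4 days,
# capped at the dealership's steady-state daily volume
START=50
CAP=2000
VOL=$START
for DAY in $(seq 1 28); do
  printf 'Day %2d: send up to %d emails\n' "$DAY" "$VOL"
  if [ $((DAY % 4)) -eq 0 ] && [ "$VOL" -lt "$CAP" ]; then
    VOL=$((VOL * 2))
    [ "$VOL" -gt "$CAP" ] && VOL=$CAP
  fi
done
```

The daily cap feeds naturally into the SplitInBatches limit in the Email Delivery Engine during the warmup period.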

Step 12: Build the SMS Delivery Workflow

Create the n8n workflow for sending maintenance reminders and service follow-ups via Twilio Programmable SMS with TCPA compliance enforcement.

n8n SMS Delivery Engine workflow with inbound STOP/HELP handling
text
# In n8n, create a new workflow: 'SMS Delivery Engine'
# Trigger: Schedule every 15 minutes during the send window (9AM-8PM local, one hour inside the TCPA 8AM-9PM limits)

# Workflow nodes:
# 1. Postgres node -> Fetch queued SMS (channel='sms', status='pending')
# 2. Postgres node -> STRICT consent check:
#    SELECT * FROM consent_records
#    WHERE customer_id={{customer_id}}
#    AND channel='sms'
#    AND consent_given=true
#    AND opt_out_timestamp IS NULL
#    AND (consent_type='promotional' OR '{{message_type}}' IN ('post_service_followup','maintenance_reminder'))
# 3. IF node -> Consent verified? Continue : Skip + log refusal reason
# 4. Function node -> Ensure message includes opt-out: append 'Reply STOP to opt out'
# 5. Twilio node -> Send SMS
#    From: +1XXXXXXXXXX (registered Messaging Service number)
#    To: {{customer_phone}}
#    Body: {{ai_sms_body}} Reply STOP to opt out.
#    MessagingServiceSid: MGXXXXXXXXX (registered A2P campaign)
# 6. Postgres node -> Log twilio_message_sid and delivery status

# Inbound SMS handling workflow:
# Create webhook: https://n8n.yourmsp.com/webhook/twilio-inbound
# Configure in Twilio: Messaging Service -> Integration -> Webhook URL
# Nodes:
# 1. Webhook Trigger
# 2. Function node -> Parse inbound message
# 3. Switch node -> Check for STOP/CANCEL/UNSUBSCRIBE/START/HELP
# 4a. STOP branch: Postgres -> Set opt_out_timestamp, Twilio -> confirm opt-out
# 4b. START branch: Postgres -> Create new consent record (requires verification)
# 4c. HELP branch: Twilio -> Send help message with dealership phone number
# 4d. Other: Forward to dealership service desk or queue for human review
Note

TCPA compliance is NON-NEGOTIABLE. Never send SMS without verified consent. Never send outside 8AM-9PM recipient local time. Always include opt-out language. Process STOP requests immediately — Twilio handles automatic STOP processing but you should also update your database. The TCPA penalty is $500-$1,500 PER MESSAGE for violations. An Oklahoma dealership group paid $850,000 in settlement for texting without consent. Use Twilio's Messaging Service (not raw API) to benefit from built-in carrier compliance features.

Step 13: Configure SendGrid Event Webhooks and Twilio Status Callbacks

Set up real-time event tracking so delivery status, opens, clicks, bounces, and unsubscribes flow back into the compliance database for reporting and automated list hygiene.

  • SendGrid Event Webhook configuration: in the SendGrid dashboard, go to Settings -> Mail Settings -> Event Webhook
  • HTTP POST URL: https://n8n.yourmsp.com/webhook/sendgrid-events
  • Select events: Delivered, Opened, Clicked, Bounced, Spam Report, Unsubscribe, Group Unsubscribe
  • Enable: Active
  • Create n8n workflow 'SendGrid Event Processor':
    1. Webhook node (POST /webhook/sendgrid-events)
    2. SplitInBatches node (events arrive in arrays)
    3. Switch node on event type:
       - delivered: UPDATE communication_log SET delivery_status='delivered'
       - open: UPDATE communication_log SET opened_at=NOW()
       - click: UPDATE communication_log SET clicked_at=NOW()
       - bounce: UPDATE communication_log SET bounced_at=NOW(), delivery_status='bounced'
       - spamreport/unsubscribe: UPDATE consent_records SET opt_out_timestamp=NOW()
  • Twilio Status Callback: in Twilio, go to Messaging Service -> Settings -> Status Callback URL
  • URL: https://n8n.yourmsp.com/webhook/twilio-status
  • Create a matching n8n workflow to process Twilio delivery receipts and update communication_log with: queued, sent, delivered, undelivered, failed
Note

SendGrid signs webhook payloads — enable webhook signature verification in the n8n workflow for security (Settings -> Mail Settings -> Signed Event Webhook Requests). For Twilio, validate the X-Twilio-Signature header against your auth token. Both of these prevent spoofed webhook calls from corrupting your data.
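For the Twilio side, the expected signature is an HMAC-SHA1 of the full webhook URL concatenated with the POST parameters sorted by name (key and value appended with no separators), keyed with the auth token and Base64-encoded. A sketch of that computation with dummy values (a real validation compares the result to the X-Twilio-Signature header):

```shell
# Compute the signature Twilio would send for a sample inbound-SMS webhook
AUTH_TOKEN='dummy_auth_token_for_illustration'
URL='https://n8n.yourmsp.com/webhook/twilio-inbound'
# Params sorted by key, appended as name+value: Body=STOP, From=+15551230000
DATA="${URL}BodySTOPFrom+15551230000"
EXPECTED=$(printf '%s' "$DATA" | openssl dgst -sha1 -hmac "$AUTH_TOKEN" -binary | openssl base64)
echo "Expected X-Twilio-Signature: $EXPECTED"
# Reject the request if the header does not match $EXPECTED exactly
```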

Step 14: Import and Configure Prompt Templates

Load the AI prompt templates into n8n's static data or a dedicated database table. These templates define the system prompt and user prompt structure for each communication type with merge field placeholders.

Create prompt templates table in compliance database
bash
# Create prompt templates table in compliance database:
docker exec -it ai-comms-postgres psql -U n8n -d compliance -c "
CREATE TABLE IF NOT EXISTS prompt_templates (
  id SERIAL PRIMARY KEY,
  template_name VARCHAR(100) UNIQUE NOT NULL,
  communication_type VARCHAR(50) NOT NULL,
  channel VARCHAR(10) NOT NULL,
  system_prompt TEXT NOT NULL,
  user_prompt_template TEXT NOT NULL,
  active BOOLEAN DEFAULT true,
  version INTEGER DEFAULT 1,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);"
Seed one template per communication type and channel:

1. post_service_followup_email
2. post_service_followup_sms
3. maintenance_reminder_email
4. maintenance_reminder_sms
5. declined_service_followup_email
6. declined_service_followup_sms
7. seasonal_campaign_email
8. satisfaction_check_email
9. recall_notification_email
10. recall_notification_sms
Note

Keep prompt templates versioned. When updating a template, increment the version number and keep the old version for A/B testing comparisons. The service manager should review and approve all prompt templates before activation. Include dealership-specific branding elements (name, slogan, service hours, phone number, address) as variables in the system prompt that are populated per-client.
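A seed row can be staged the same way as the schema; a sketch of a second init script with one illustrative template (the prompt wording is an example to be replaced by service-manager-approved copy, not a vetted template):

```shell
# Stage a seed script for prompt_templates; move into init-scripts/ for first-run init
cat > 02-seed-templates.sql << 'SQLEOF'
\c compliance
INSERT INTO prompt_templates (template_name, communication_type, channel, system_prompt, user_prompt_template)
VALUES (
  'maintenance_reminder_email',
  'maintenance_reminder',
  'email',
  'You are a service advisor for {{dealership_name}} ({{dealership_phone}}). Write warm, concise emails. Respond in JSON with keys subject_line and email_body_html.',
  'Customer {{customer_name}} drives a {{vehicle_year}} {{vehicle_make}} {{vehicle_model}}; next recommended service: {{next_service}}, due {{due_date}}. Draft a reminder email.'
)
ON CONFLICT (template_name) DO NOTHING;
SQLEOF
# In production:
# mv 02-seed-templates.sql /opt/ai-comms-platform/init-scripts/
```

ON CONFLICT keys off the UNIQUE constraint on template_name, so re-running the seed never duplicates rows.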

Step 15: Deploy Monitoring and Alerting

Set up monitoring for the automation platform to ensure workflows run reliably, API quotas are not exceeded, and any delivery failures are caught early.

Install Uptime Kuma for lightweight monitoring
bash
# Install Uptime Kuma for lightweight monitoring
docker run -d --name uptime-kuma --network ai-comms-net -p 3001:3001 -v uptime-kuma:/app/data --restart always louislam/uptime-kuma:1
Configure monitors in Uptime Kuma (http://SERVER_IP:3001):
  • n8n health: HTTP GET https://n8n.yourmsp.com/healthz
  • PostgreSQL: TCP port 5432 on localhost
  • OpenAI API: HTTP GET https://api.openai.com/v1/models (with auth header)
  • SendGrid: HTTP GET https://api.sendgrid.com/v3/scopes (with auth header)
  • DMS API: HTTP GET [Fortellis/Tekion health endpoint]
Set up notification channels in Uptime Kuma:
  • Email to msp-alerts@yourmsp.com
  • Slack/Teams webhook to MSP operations channel
  • SMS via Twilio to on-call technician
Create n8n monitoring workflow 'Daily Health Report' — runs at 7AM daily:
  1. Postgres → Count messages sent yesterday, delivery rate, bounce rate
  2. Postgres → Count any failed workflow executions
  3. HTTP Request → Check OpenAI billing usage
  4. Function → Generate summary report
  5. SendGrid → Email report to MSP and dealership manager
Note

Restrict Uptime Kuma port 3001 to MSP management IPs only in the firewall. Set alert thresholds: workflow failure > 0 in 1 hour = immediate alert; email bounce rate > 5% = warning; API error rate > 10% = critical. The daily health report provides proactive visibility before the client notices any issues.
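Those thresholds can be encoded in a small helper, for example inside the daily-report Function node. A Python sketch (the severity names are illustrative):

```python
def classify_alert(failed_workflows_1h: int, bounce_rate_pct: float,
                   api_error_rate_pct: float) -> str:
    """Map the thresholds from the note onto a single severity level:
    API error rate > 10% = critical, any workflow failure in the last
    hour = immediate, bounce rate > 5% = warning."""
    if api_error_rate_pct > 10:
        return "critical"
    if failed_workflows_1h > 0:
        return "immediate"
    if bounce_rate_pct > 5:
        return "warning"
    return "ok"
```

Checking the most severe condition first means a single status string can drive the Uptime Kuma notification channel selection.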

Step 16: Import Historical Consent Records

Import existing customer consent records and historical service data into the system. This is critical for TCPA compliance — you must have documented consent before sending any messages.

bash
# 1. Export existing opt-in data from current email/SMS systems
# (ActiveCampaign, Mailchimp, DealerSocket, etc.)
# Format as CSV: customer_id, email, phone, consent_type, consent_date, source

# 2. Create import script:
cat > /opt/ai-comms-platform/import-consent.py << 'PYEOF'
import csv
import psycopg2
from datetime import datetime

conn = psycopg2.connect(
    host='localhost', port=5432,
    dbname='compliance', user='n8n',
    password='YOUR_POSTGRES_PASSWORD'
)
cur = conn.cursor()

with open('/tmp/consent_export.csv', 'r') as f:
    reader = csv.DictReader(f)
    for row in reader:
        cur.execute("""
            INSERT INTO consent_records 
            (customer_id, dealership_id, channel, consent_type, 
             consent_given, consent_timestamp, consent_source)
            VALUES (%s, %s, %s, %s, %s, %s, %s)
            ON CONFLICT DO NOTHING
        """, (
            row['customer_id'], row.get('dealership_id', 'DEFAULT'),
            row['channel'], row.get('consent_type', 'informational'),
            True, row.get('consent_date', datetime.now().isoformat()),
            'historical_import'
        ))

conn.commit()
cur.close()
conn.close()
print('Import complete')
PYEOF

# 3. Run import:
sudo apt install -y python3-psycopg2
python3 /opt/ai-comms-platform/import-consent.py

# 4. Verify import:
docker exec -it ai-comms-postgres psql -U n8n -d compliance -c "SELECT channel, consent_type, COUNT(*) FROM consent_records GROUP BY channel, consent_type;"
Critical

CRITICAL: If the dealership cannot provide documented consent records for SMS marketing, those customers must be re-opted-in before sending promotional texts. Service reminders may qualify as informational (lower consent bar) but consult legal counsel. For email, existing business relationship provides implicit consent for CAN-SPAM but explicit opt-in is best practice. Import only customers who have NOT previously opted out. Cross-reference with any existing suppression lists.
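The suppression-list cross-reference can be a one-line filter applied to the parsed CSV rows before the INSERT loop. A Python sketch (field names follow the import script above):

```python
def filter_importable(consent_rows, suppression_ids):
    """Drop customers who previously opted out (suppression list) before
    importing consent rows. `consent_rows` are dicts from csv.DictReader."""
    suppressed = set(suppression_ids)
    return [row for row in consent_rows
            if row["customer_id"] not in suppressed]
```

Run this over the reader output so opted-out customers never enter consent_records in the first place, rather than relying on a later check to block them.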

Custom AI Components

Post-Service Follow-Up Prompt Template

Type: prompt. System and user prompts for generating personalized thank-you emails sent 2-4 hours after a service visit. Includes satisfaction inquiry, next service preview, and call-to-action for online review. Generates both email and SMS variants in a single API call.

Implementation:

Post-Service Follow-Up – System Prompt

SYSTEM PROMPT:
---
You are a professional automotive service communication writer for {{dealership_name}}, located at {{dealership_address}}. Your tone is warm, professional, and appreciative — like a trusted service advisor, not a salesperson. You write concise, personalized messages that make customers feel valued.

Brand guidelines:
- Dealership name: {{dealership_name}}
- Service department hours: {{service_hours}}
- Service phone: {{service_phone}}
- Website: {{dealership_website}}
- Tagline: {{dealership_tagline}}
- OEM brand(s): {{oem_brands}}

Rules:
1. Never fabricate service details — only reference what is provided in the customer data.
2. Always be factually accurate about services performed.
3. Never include pricing or cost information in follow-ups.
4. Include a subtle invitation to leave a Google review (provide link) but do not be pushy.
5. Mention the next recommended service if provided.
6. Keep email body under 150 words. Keep SMS under 160 characters (1 segment).
7. Do not use ALL CAPS, excessive exclamation marks, or salesy language.
8. Always address the customer by first name.
9. Reference their specific vehicle (year, make, model).
10. Respond in valid JSON format only.
---

Post-Service Follow-Up – User Prompt Template

USER PROMPT TEMPLATE:
---
Generate a post-service follow-up for this customer:

Customer first name: {{customer_first_name}}
Vehicle: {{vehicle_year}} {{vehicle_make}} {{vehicle_model}}
Services performed: {{services_performed}}
Service date: {{service_date}}
Service advisor: {{advisor_name}}
Next recommended service: {{next_service_recommendation}}
Next service due: {{next_service_due_date_or_mileage}}
Google review link: {{google_review_url}}

Respond with this exact JSON structure:
{
  "email_subject": "Subject line for the email (under 60 characters)",
  "email_body_html": "HTML formatted email body with paragraphs using <p> tags. Include the Google review link as an <a> tag. Under 150 words.",
  "sms_body": "SMS text under 140 characters (leave room for opt-out append). Do not include the review link in SMS.",
  "communication_tone": "brief description of the tone used for QA"
}
---
Example Output
json
{
  "email_subject": "Thanks for bringing in your 2021 Camry, Sarah!",
  "email_body_html": "<p>Hi Sarah,</p><p>Thank you for trusting us with your 2021 Toyota Camry's oil change and tire rotation today. We appreciate your continued loyalty to our service team.</p><p>Your advisor Mike wanted to let you know that your next recommended service — a 30,000-mile inspection — will be due around August 2025. We'll send you a reminder as it approaches.</p><p>If you have a moment, we'd love to hear about your experience: <a href='https://g.page/r/dealer/review'>Leave a quick review</a></p><p>Thank you,<br>The Service Team at Valley Toyota</p>",
  "sms_body": "Hi Sarah! Thanks for visiting Valley Toyota today for your Camry's service. Questions? Call us at 555-0123.",
  "communication_tone": "warm, grateful, personal"
}
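Output like the example above should be checked against the prompt's constraints before it is queued. A Python sketch of the same checks the generation workflow applies at its quality gate, with limits taken from the prompt rules:

```python
def validate_followup(payload: dict) -> list:
    """Return a list of constraint violations (empty list = passes QA)."""
    errors = []
    if len(payload.get("email_subject", "")) > 60:
        errors.append("subject over 60 chars")
    sms = payload.get("sms_body", "")
    if len(sms) > 140:  # leave room for the opt-out append within 1 segment
        errors.append("SMS over 140 chars")
    if "{{" in payload.get("email_body_html", "") or "{{" in sms:
        errors.append("unreplaced template variable")
    return errors
```

Any non-empty result should route the message to human review rather than the delivery queue.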

Maintenance Reminder Campaign Prompt Template

Type: prompt. System and user prompts for generating time-based and mileage-based maintenance reminders. Supports multiple reminder types: oil change, tire rotation, brake inspection, seasonal service, and manufacturer-recommended intervals. Creates urgency without being alarmist.

Implementation:

Maintenance Reminder Campaign — System Prompt

SYSTEM PROMPT:
---
You are a helpful automotive maintenance reminder writer for {{dealership_name}}. Your goal is to remind customers about upcoming or overdue maintenance in a way that is informative, helpful, and encourages them to schedule an appointment. You are NOT a salesperson — you are a knowledgeable service advisor helping customers maintain their vehicle.

Brand guidelines:
- Dealership name: {{dealership_name}}
- Scheduling link: {{scheduling_url}}
- Service phone: {{service_phone}}
- Service hours: {{service_hours}}
- Current seasonal promotion (if any): {{current_promo}}

Rules:
1. Explain WHY the maintenance matters in 1 sentence (safety, longevity, performance).
2. Reference the specific vehicle and approximate mileage/time interval.
3. Include a clear call-to-action to schedule (link or phone number).
4. If there is a current promotion, mention it naturally — not as the primary reason to visit.
5. Email body: under 120 words. SMS: under 145 characters.
6. For overdue maintenance, use concerned but not alarming tone.
7. For upcoming maintenance, use proactive and helpful tone.
8. Never fabricate mileage numbers — use only what is provided.
9. Respond in valid JSON format only.
---

Maintenance Reminder Campaign — User Prompt Template

USER PROMPT TEMPLATE:
---
Generate a maintenance reminder for this customer:

Customer first name: {{customer_first_name}}
Vehicle: {{vehicle_year}} {{vehicle_make}} {{vehicle_model}}
Current estimated mileage: {{estimated_current_mileage}}
Last service date: {{last_service_date}}
Last service performed: {{last_service_type}}
Recommended service now: {{recommended_service}}
Service urgency: {{urgency_level}} (upcoming | due_now | overdue)
Days since last visit: {{days_since_last_visit}}
Current promotion: {{current_promo_text}} (or 'none')

Respond with this exact JSON structure:
{
  "email_subject": "Subject line under 60 characters",
  "email_body_html": "HTML email body under 120 words with <p> tags. Include scheduling link.",
  "sms_body": "SMS under 145 characters. Include phone number for scheduling.",
  "urgency_framing": "how urgency was communicated"
}
---
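The {{urgency_level}} input can be derived from DMS data before the prompt is built. A Python sketch with illustrative thresholds (the 500-mile window and 30-day grace period are assumptions to tune per dealership):

```python
def urgency_level(est_mileage, due_mileage, days_since_last_visit,
                  interval_days=180):
    """Classify a reminder as upcoming / due_now / overdue, matching the
    values the {{urgency_level}} merge field expects. Prefers mileage data
    when available, else falls back to elapsed time."""
    if due_mileage is not None and est_mileage is not None:
        if est_mileage >= due_mileage:
            return "overdue"
        if due_mileage - est_mileage <= 500:  # assumed due-soon window
            return "due_now"
        return "upcoming"
    if days_since_last_visit > interval_days + 30:  # assumed grace period
        return "overdue"
    if days_since_last_visit >= interval_days:
        return "due_now"
    return "upcoming"
```

Computing this outside the prompt keeps the model from inventing urgency, consistent with the rule that it must never fabricate mileage numbers.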

Declined Service Follow-Up Prompt Template

Type: prompt. Generates tactful follow-up messages for customers who declined recommended services during their visit. Educates about the importance of the declined service without being pushy. Sent 7 days after the service visit.

Implementation:

Declined Service Follow-Up Prompt Template

SYSTEM PROMPT:
---
You are a thoughtful automotive service communication writer for {{dealership_name}}. You are following up with a customer who declined a recommended service during their recent visit. Your tone is educational, caring, and absolutely non-pushy. You respect their decision and simply want to ensure they have the information to make the best choice for their vehicle.

Rules:
1. NEVER be pushy, guilt-tripping, or use fear tactics.
2. Briefly explain what the declined service does and why it matters.
3. Offer to answer questions — position the service advisor as a helpful resource.
4. Include a scheduling link for when they are ready.
5. Keep email under 130 words. SMS under 150 characters.
6. Do not mention pricing — they can call for a quote.
7. Only reference services that were actually declined (provided in data).
8. Respond in valid JSON format only.
---

USER PROMPT TEMPLATE:
---
Generate a declined-service follow-up for this customer:

Customer first name: {{customer_first_name}}
Vehicle: {{vehicle_year}} {{vehicle_make}} {{vehicle_model}}
Visit date: {{service_date}}
Services completed: {{services_completed}}
Services declined: {{services_declined}}
Service advisor: {{advisor_name}}
Scheduling link: {{scheduling_url}}
Service phone: {{service_phone}}

Respond with this exact JSON structure:
{
  "email_subject": "Subject line under 60 characters — do not mention 'declined'",
  "email_body_html": "HTML email body under 130 words. Educational tone about the declined service.",
  "sms_body": "SMS under 150 chars. Gentle, helpful.",
  "educational_point": "the key benefit/safety point communicated"
}
---

DMS Data Extraction Workflow

Type: workflow. n8n workflow that connects to the dealership's DMS via API (CDK Fortellis, Tekion APC, or Reynolds RCI) on a 4-hour schedule, extracts recently closed repair orders and customer service records, normalizes the data, categorizes customers into communication segments, and queues them for AI content generation.

Implementation:

N8N WORKFLOW JSON (simplified structure; import into n8n and configure credentials)
json

{
  "name": "DMS Data Sync - Service Records",
  "nodes": [
    {
      "name": "Schedule Trigger",
      "type": "n8n-nodes-base.scheduleTrigger",
      "parameters": {
        "rule": {
          "interval": [{"field": "hours", "hoursInterval": 4}]
        }
      },
      "position": [240, 300]
    },
    {
      "name": "Get DMS Auth Token",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": {
        "method": "POST",
        "url": "https://identity.fortellis.io/oauth2/aus1p1ixy7YL8cMq02p7/v1/token",
        "sendHeaders": true,
        "headerParameters": {
          "parameters": [{"name": "Content-Type", "value": "application/x-www-form-urlencoded"}]
        },
        "sendBody": true,
        "bodyParameters": {
          "parameters": [
            {"name": "grant_type", "value": "client_credentials"},
            {"name": "scope", "value": "{{$credentials.fortellisScope}}"}
          ]
        },
        "authentication": "genericCredentialType",
        "genericAuthType": "httpBasicAuth"
      },
      "position": [460, 300]
    },
    {
      "name": "Fetch Closed Repair Orders",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": {
        "method": "GET",
        "url": "https://api.fortellis.io/cdkdrive/service/v1/repair-orders",
        "sendQuery": true,
        "queryParameters": {
          "parameters": [
            {"name": "status", "value": "closed"},
            {"name": "closedAfter", "value": "={{$now.minus(4, 'hours').toISO()}}"}
          ]
        },
        "sendHeaders": true,
        "headerParameters": {
          "parameters": [
            {"name": "Authorization", "value": "Bearer {{$node['Get DMS Auth Token'].json.access_token}}"},
            {"name": "Accept", "value": "application/json"},
            {"name": "Request-Id", "value": "={{$workflow.id}}-{{$now.toMillis()}}"}
          ]
        }
      },
      "position": [680, 300]
    },
    {
      "name": "Normalize & Categorize",
      "type": "n8n-nodes-base.function",
      "parameters": {
        "functionCode": "const repairOrders = items[0].json.repairOrders || items[0].json.data || [];\nconst results = [];\nconst now = new Date();\n\nfor (const ro of repairOrders) {\n  const customer = {\n    customer_id: ro.customerId || ro.customer?.id,\n    customer_first_name: ro.customerFirstName || ro.customer?.firstName,\n    customer_last_name: ro.customerLastName || ro.customer?.lastName,\n    customer_email: ro.customerEmail || ro.customer?.email,\n    customer_phone: ro.customerPhone || ro.customer?.phone,\n    vehicle_year: ro.vehicleYear || ro.vehicle?.year,\n    vehicle_make: ro.vehicleMake || ro.vehicle?.make,\n    vehicle_model: ro.vehicleModel || ro.vehicle?.model,\n    vehicle_vin: ro.vin || ro.vehicle?.vin,\n    services_performed: (ro.lineItems || ro.services || []).filter(s => s.status === 'completed').map(s => s.description).join(', '),\n    services_declined: (ro.lineItems || ro.services || []).filter(s => s.status === 'declined').map(s => s.description).join(', '),\n    service_date: ro.closedDate || ro.completedDate,\n    advisor_name: ro.advisorName || ro.serviceAdvisor?.name || 'your service advisor',\n    ro_number: ro.repairOrderNumber || ro.id,\n    next_service_recommendation: ro.nextService?.description || '',\n    next_service_due: ro.nextService?.dueDate || ro.nextService?.dueMileage || '',\n    estimated_mileage: ro.mileageIn || ro.vehicle?.currentMileage\n  };\n\n  // Determine communication type\n  const hoursSinceClose = (now - new Date(customer.service_date)) / (1000*60*60);\n  \n  if (hoursSinceClose <= 6) {\n    customer.comm_type = 'post_service_followup';\n    customer.scheduled_send_delay_hours = 2;\n  }\n  \n  if (customer.services_declined && customer.services_declined.length > 0) {\n    const declinedRecord = {...customer, comm_type: 'declined_service_followup', scheduled_send_delay_hours: 168};\n    results.push({json: declinedRecord});\n  }\n  \n  results.push({json: customer});\n}\n\nreturn results;"
      },
      "position": [900, 300]
    },
    {
      "name": "Check Already Contacted",
      "type": "n8n-nodes-base.postgres",
      "parameters": {
        "operation": "executeQuery",
        "query": "SELECT COUNT(*) as cnt FROM communication_log WHERE customer_id='{{$json.customer_id}}' AND message_type='{{$json.comm_type}}' AND sent_at > NOW() - INTERVAL '7 days'"
      },
      "position": [1120, 300]
    },
    {
      "name": "Not Already Contacted?",
      "type": "n8n-nodes-base.if",
      "parameters": {
        "conditions": {
          "number": [{"value1": "={{$json.cnt}}", "operation": "equal", "value2": 0}]
        }
      },
      "position": [1340, 300]
    },
    {
      "name": "Insert Into Queue",
      "type": "n8n-nodes-base.postgres",
      "parameters": {
        "operation": "insert",
        "table": "processing_queue",
        "columns": "customer_id,customer_first_name,customer_email,customer_phone,vehicle_year,vehicle_make,vehicle_model,services_performed,services_declined,service_date,advisor_name,comm_type,scheduled_send_at,status",
        "values": "={{$json.customer_id}},={{$json.customer_first_name}},={{$json.customer_email}},={{$json.customer_phone}},={{$json.vehicle_year}},={{$json.vehicle_make}},={{$json.vehicle_model}},={{$json.services_performed}},={{$json.services_declined}},={{$json.service_date}},={{$json.advisor_name}},={{$json.comm_type}},={{$now.plus($json.scheduled_send_delay_hours, 'hours').toISO()}},pending"
      },
      "position": [1560, 240]
    }
  ],
  "connections": {
    "Schedule Trigger": {"main": [[{"node": "Get DMS Auth Token", "type": "main", "index": 0}]]},
    "Get DMS Auth Token": {"main": [[{"node": "Fetch Closed Repair Orders", "type": "main", "index": 0}]]},
    "Fetch Closed Repair Orders": {"main": [[{"node": "Normalize & Categorize", "type": "main", "index": 0}]]},
    "Normalize & Categorize": {"main": [[{"node": "Check Already Contacted", "type": "main", "index": 0}]]},
    "Check Already Contacted": {"main": [[{"node": "Not Already Contacted?", "type": "main", "index": 0}]]},
    "Not Already Contacted?": {"main": [[{"node": "Insert Into Queue", "type": "main", "index": 0}], []]}
  }
}
Note

This workflow JSON is a structural template. After importing into n8n, you must:
1. Attach the correct credentials to each node.
2. Adjust the Fortellis API endpoints to match your specific CDK API subscription.
3. Create the processing_queue table in PostgreSQL (see installation step 4 for schema).
4. For Tekion APC: change the auth flow and base URL to Tekion's endpoints.
5. For Reynolds RCI: implement their specific SOAP/REST bridge as needed.

AI Content Generation Engine Workflow

Type: workflow. Core n8n workflow that reads from the processing queue, retrieves the appropriate prompt template, populates it with customer data, calls the OpenAI API, validates the JSON response, and queues the generated content for delivery.

Implementation:

n8n Workflow: AI Content Generation Engine — full node-by-node structure with error handling
plaintext
N8N WORKFLOW STRUCTURE:

Workflow Name: AI Content Generation Engine
Trigger: Schedule every 30 minutes

Node 1 - Schedule Trigger:
  Type: scheduleTrigger
  Config: every 30 minutes

Node 2 - Fetch Pending Queue Items:
  Type: postgres
  Query: SELECT * FROM processing_queue WHERE status='pending' AND scheduled_send_at <= NOW() ORDER BY scheduled_send_at ASC LIMIT 20

Node 3 - Check If Items Exist:
  Type: if
  Condition: items.length > 0
  False path: NoOp (stop)

Node 4 - Split In Batches:
  Type: splitInBatches
  Batch Size: 5
  Options: pause between batches: 2000ms

Node 5 - Get Prompt Template:
  Type: postgres
  Query: SELECT system_prompt, user_prompt_template FROM prompt_templates WHERE communication_type='{{$json.comm_type}}' AND channel='email' AND active=true ORDER BY version DESC LIMIT 1

Node 6 - Build Prompt:
  Type: function
  Code:
  const template = items[0].json;
  const customer = $('Split In Batches').item.json;
  
  // Replace all template variables
  let systemPrompt = template.system_prompt;
  let userPrompt = template.user_prompt_template;
  
  const replacements = {
    '{{customer_first_name}}': customer.customer_first_name,
    '{{vehicle_year}}': customer.vehicle_year,
    '{{vehicle_make}}': customer.vehicle_make,
    '{{vehicle_model}}': customer.vehicle_model,
    '{{services_performed}}': customer.services_performed,
    '{{services_declined}}': customer.services_declined || 'none',
    '{{service_date}}': customer.service_date,
    '{{advisor_name}}': customer.advisor_name,
    '{{next_service_recommendation}}': customer.next_service_recommendation || 'Regular maintenance per manufacturer schedule',
    '{{next_service_due_date_or_mileage}}': customer.next_service_due || 'Per manufacturer recommendation',
    '{{estimated_current_mileage}}': customer.estimated_mileage || 'N/A',
    '{{scheduling_url}}': $env.SCHEDULING_URL || 'https://dealername.com/schedule',
    '{{google_review_url}}': $env.GOOGLE_REVIEW_URL || 'https://g.page/r/dealer/review',
    '{{service_phone}}': $env.SERVICE_PHONE || '555-0123',
    '{{dealership_name}}': $env.DEALERSHIP_NAME || 'Our Dealership'
  };
  
  for (const [key, value] of Object.entries(replacements)) {
    systemPrompt = systemPrompt.replaceAll(key, value || '');
    userPrompt = userPrompt.replaceAll(key, value || '');
  }
  
  return [{json: {
    system_prompt: systemPrompt,
    user_prompt: userPrompt,
    customer_data: customer
  }}];
  
Node 7 - Call OpenAI API:
  Type: httpRequest
  Method: POST
  URL: https://api.openai.com/v1/chat/completions
  Headers:
    Authorization: Bearer {{$credentials.openaiApiKey}}
    Content-Type: application/json
  Body:
  {
    "model": "gpt-5.4-mini",
    "temperature": 0.7,
    "max_tokens": 600,
    "response_format": {"type": "json_object"},
    "messages": [
      {"role": "system", "content": "{{$json.system_prompt}}"},
      {"role": "user", "content": "{{$json.user_prompt}}"}
    ]
  }
  Retry on Fail: true
  Max Retries: 3
  Wait Between Retries: 5000ms

Node 8 - Parse AI Response:
  Type: function
  Code:
  const aiResponse = JSON.parse(items[0].json.choices[0].message.content);
  const customer = $('Build Prompt').item.json.customer_data;
  
  // Quality validation
  const errors = [];
  if (!aiResponse.email_subject || aiResponse.email_subject.length < 10) errors.push('Missing/short subject');
  if (!aiResponse.email_body_html || aiResponse.email_body_html.length < 50) errors.push('Missing/short email body');
  if (!aiResponse.sms_body || aiResponse.sms_body.length < 20) errors.push('Missing/short SMS body');
  if (aiResponse.sms_body && aiResponse.sms_body.length > 145) errors.push('SMS too long');
  if (aiResponse.email_body_html && aiResponse.email_body_html.includes('{{')) errors.push('Unreplaced template variable in email');
  if (aiResponse.sms_body && aiResponse.sms_body.includes('{{')) errors.push('Unreplaced template variable in SMS');
  
  return [{
    json: {
      ...aiResponse,
      customer_id: customer.customer_id,
      customer_email: customer.customer_email,
      customer_phone: customer.customer_phone,
      comm_type: customer.comm_type,
      dealership_id: customer.dealership_id || 'default',
      quality_passed: errors.length === 0,
      quality_errors: errors.join('; '),
      ai_model: 'gpt-5.4-mini'
    }
  }];
  
Node 9 - Quality Gate:
  Type: if
  Condition: $json.quality_passed == true
  True: Continue to delivery queue
  False: Log error and flag for human review

Node 10 - Queue for Email Delivery:
  Type: postgres
  Insert into delivery_queue: customer_id, channel='email', subject, body_html, comm_type, scheduled_at, status='pending'

Node 11 - Queue for SMS Delivery:
  Type: postgres
  Insert into delivery_queue: customer_id, channel='sms', body=sms_body, comm_type, scheduled_at, status='pending'

Node 12 - Update Processing Queue:
  Type: postgres
  UPDATE processing_queue SET status='generated' WHERE id={{$json.queue_id}}

ERROR HANDLING:
  Error Trigger -> Function (check if OpenAI error) -> IF (rate limit?) -> Wait 60s -> Retry
  Persistent failure -> Switch to Anthropic Claude:
    URL: https://api.anthropic.com/v1/messages
    Headers: x-api-key, anthropic-version: 2023-06-01
    Body: {model: 'claude-haiku-4-5', max_tokens: 600, messages: [...]}
  Still failing -> Mark as 'failed' in queue, send alert to MSP
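The retry-then-fallback error path can be sketched as ordinary code; the provider callables here are stand-ins for the n8n HTTP Request nodes:

```python
import time

def generate_with_fallback(call_openai, call_anthropic, payload,
                           max_retries=3, base_delay=1.0):
    """Retry the primary provider with exponential backoff, then fall back
    to the secondary provider; raise only when both fail."""
    last_err = None
    for attempt in range(max_retries):
        try:
            return call_openai(payload)
        except Exception as err:  # in n8n this is the Error Trigger branch
            last_err = err
            time.sleep(base_delay * (2 ** attempt))
    try:
        return call_anthropic(payload)
    except Exception:
        # Mark the queue item 'failed' and alert the MSP at this point
        raise RuntimeError(f"both providers failed: {last_err}")
```

Exponential backoff matters for rate-limit errors specifically: immediate retries against a throttled API tend to fail again and burn quota.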

Consent Verification Sub-Workflow

Type: integration. A reusable n8n sub-workflow that verifies TCPA and CAN-SPAM consent before any message is sent. Called by both the Email and SMS delivery workflows. Returns a boolean consent status with the reason, and logs all consent checks for compliance auditing.

Implementation:

N8N SUB-WORKFLOW: Consent Verification
javascript
// Node 1: Determine Consent Requirement

// Input Parameters (passed from parent workflow):
// - customer_id (string)
// - channel ('email' | 'sms')
// - message_type ('post_service_followup' | 'maintenance_reminder' | 'declined_service_followup' | 'seasonal_campaign' | 'satisfaction_check' | 'recall_notification')
// - dealership_id (string)

const { channel, message_type } = items[0].json;

// TCPA consent classification
// Informational: service confirmations, recall notices, appointment reminders
// Promotional: marketing campaigns, upsells, seasonal offers
const informationalTypes = ['post_service_followup', 'recall_notification', 'satisfaction_check'];
const promotionalTypes = ['seasonal_campaign', 'declined_service_followup', 'maintenance_reminder'];

let required_consent_level;
if (informationalTypes.includes(message_type)) {
  required_consent_level = 'informational'; // Lower bar - existing business relationship may suffice
} else {
  required_consent_level = 'promotional'; // Requires explicit prior express written consent for SMS
}

// For email (CAN-SPAM): existing business relationship = implicit consent
// But explicit consent is always preferred
if (channel === 'email') {
  required_consent_level = 'informational'; // CAN-SPAM is opt-out based, not opt-in
}

return [{json: {...items[0].json, required_consent_level}}];
N8N SUB-WORKFLOW: Consent Verification
sql
-- Node 2: Query Consent Database

SELECT 
  consent_given,
  consent_type,
  consent_timestamp,
  opt_out_timestamp,
  CASE 
    WHEN opt_out_timestamp IS NOT NULL THEN 'opted_out'
    WHEN consent_given = false THEN 'no_consent'
    WHEN consent_type = 'promotional' THEN 'full_consent'
    WHEN consent_type = 'informational' THEN 'informational_only'
    WHEN consent_type = 'transactional' THEN 'transactional_only'
    ELSE 'unknown'
  END as consent_status
FROM consent_records
WHERE customer_id = '{{$json.customer_id}}'
  AND channel = '{{$json.channel}}'
  AND dealership_id = '{{$json.dealership_id}}'
ORDER BY consent_timestamp DESC
LIMIT 1
N8N SUB-WORKFLOW: Consent Verification
javascript
// Node 3: Evaluate Consent

const input = $('Determine Consent Requirement').item.json;
const record = items[0].json;

let allowed = false;
let reason = '';

if (!record || !record.consent_status) {
  allowed = false;
  reason = 'No consent record found for this customer/channel';
} else if (record.consent_status === 'opted_out') {
  allowed = false;
  reason = `Customer opted out on ${record.opt_out_timestamp}`;
} else if (record.consent_status === 'no_consent') {
  allowed = false;
  reason = 'Consent record exists but consent not given';
} else if (input.required_consent_level === 'promotional' && record.consent_type === 'informational') {
  // Need promotional consent but only have informational
  if (input.channel === 'email') {
    allowed = true; // CAN-SPAM allows with opt-out mechanism
    reason = 'Email: CAN-SPAM opt-out model applies. Informational consent sufficient.';
  } else {
    allowed = false;
    reason = 'SMS promotional message requires explicit promotional consent. Customer only has informational consent.';
  }
} else {
  allowed = true;
  reason = `Valid ${record.consent_type} consent from ${record.consent_timestamp}`;
}

return [{json: {
  consent_allowed: allowed,
  consent_reason: reason,
  customer_id: input.customer_id,
  channel: input.channel,
  message_type: input.message_type,
  checked_at: new Date().toISOString()
}}];
  • Node 4 — Log Consent Check (Type: postgres): Insert into consent_audit_log: customer_id, channel, message_type, consent_allowed, consent_reason, checked_at
  • Node 5 — Return Result (Type: respondToWebhook if called via webhook, or set output): Returns { consent_allowed: boolean, consent_reason: string }
Create consent_audit_log table and index
sql
CREATE TABLE consent_audit_log (
  id SERIAL PRIMARY KEY,
  customer_id VARCHAR(100) NOT NULL,
  channel VARCHAR(10) NOT NULL,
  message_type VARCHAR(50) NOT NULL,
  consent_allowed BOOLEAN NOT NULL,
  consent_reason TEXT,
  checked_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_audit_customer ON consent_audit_log(customer_id, checked_at);

Campaign Performance Analytics Agent

Type: agent. A weekly analytics workflow that aggregates campaign metrics, calculates key performance indicators (open rate, click rate, appointment conversion, revenue attribution), compares them to industry benchmarks, and uses AI to generate an executive summary report for the dealership general manager and the MSP account manager.

Implementation:

N8N Workflow: Weekly Campaign Analytics Report
plaintext
N8N WORKFLOW: Weekly Campaign Analytics Report
Trigger: Schedule every Monday at 7:00 AM

Node 1 - Aggregate Weekly Metrics:
  Type: postgres
  Query:
    SELECT 
    cm.campaign_type,
    cm.dealership_id,
    SUM(cm.total_sent) as total_sent,
    SUM(cm.total_delivered) as total_delivered,
    SUM(cm.total_opened) as total_opened,
    SUM(cm.total_clicked) as total_clicked,
    SUM(cm.total_unsubscribed) as total_unsubscribed,
    SUM(cm.total_bounced) as total_bounced,
    SUM(cm.appointments_booked) as appointments_booked,
    SUM(cm.revenue_attributed) as revenue_attributed,
    ROUND(SUM(cm.total_delivered)::numeric / NULLIF(SUM(cm.total_sent), 0) * 100, 1) as delivery_rate,
    ROUND(SUM(cm.total_opened)::numeric / NULLIF(SUM(cm.total_delivered), 0) * 100, 1) as open_rate,
    ROUND(SUM(cm.total_clicked)::numeric / NULLIF(SUM(cm.total_opened), 0) * 100, 1) as click_rate,
    ROUND(SUM(cm.total_unsubscribed)::numeric / NULLIF(SUM(cm.total_sent), 0) * 100, 2) as unsubscribe_rate
  FROM campaign_metrics cm
  WHERE cm.started_at >= NOW() - INTERVAL '7 days'
  GROUP BY cm.campaign_type, cm.dealership_id
  ORDER BY cm.campaign_type;
  
Node 2 - Get Historical Comparison:
  Type: postgres
  Query: Same query but for previous 4 weeks for trend analysis

Node 3 - Build Analytics Prompt:
  Type: function
  Code:
    const currentWeek = $('Aggregate Weekly Metrics').all();
  const history = $('Get Historical Comparison').all();
  
  const benchmarks = {
    automotive_email_open_rate: 17.5, // industry average
    automotive_email_click_rate: 2.1,
    automotive_sms_response_rate: 12.0,
    healthy_unsubscribe_rate: 0.5,
    healthy_bounce_rate: 2.0
  };
  
  const prompt = `Analyze these weekly automotive service communication metrics and generate an executive summary report.

This Week's Performance:
${JSON.stringify(currentWeek, null, 2)}

Previous 4 Weeks Trend:
${JSON.stringify(history, null, 2)}

Industry Benchmarks:
- Automotive email open rate average: ${benchmarks.automotive_email_open_rate}%
- Automotive email click rate average: ${benchmarks.automotive_email_click_rate}%
- Healthy unsubscribe rate: <${benchmarks.healthy_unsubscribe_rate}%
- Healthy bounce rate: <${benchmarks.healthy_bounce_rate}%

Generate a JSON report with:
{
  "executive_summary": "3-4 sentence overview for the dealership GM",
  "highlights": ["list of positive trends"],
  "concerns": ["list of any metrics below benchmark or negative trends"],
  "recommendations": ["specific actionable recommendations"],
  "roi_summary": "statement about appointments booked and revenue attributed this week",
  "week_over_week_trend": "improving | stable | declining"
}`;
  
  return [{json: {prompt, benchmarks, raw_metrics: currentWeek}}];
  
Node 4 - Generate AI Report:
  Type: httpRequest (OpenAI GPT-5.4 for higher quality analysis)
  Model: gpt-5.4
  Temperature: 0.3 (more analytical, less creative)
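The model's reply is best treated as untrusted input before it reaches the HTML formatter. A minimal validation sketch that could sit between Nodes 4 and 5 (the `validateReport` helper is an assumption, not part of the documented workflow; the required field names come from the JSON schema in the prompt above):

```javascript
// Hypothetical validation step between Node 4 and Node 5: parse the
// model's reply and fail fast if the JSON report is malformed or
// missing fields from the schema requested in the analytics prompt.
function validateReport(rawText) {
  const required = [
    'executive_summary', 'highlights', 'concerns',
    'recommendations', 'roi_summary', 'week_over_week_trend'
  ];
  let report;
  try {
    report = JSON.parse(rawText);
  } catch (e) {
    throw new Error(`AI report is not valid JSON: ${e.message}`);
  }
  const missing = required.filter((k) => !(k in report));
  if (missing.length > 0) {
    throw new Error(`AI report missing fields: ${missing.join(', ')}`);
  }
  const trends = ['improving', 'stable', 'declining'];
  if (!trends.includes(report.week_over_week_trend)) {
    throw new Error(`Unexpected trend value: ${report.week_over_week_trend}`);
  }
  return report;
}
```

Failing here routes the execution into n8n's error workflow rather than emailing a half-formed report to the dealership GM.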

Node 5 - Format HTML Report:
  Type: function
  Build professional HTML email with dealership branding, charts placeholders, and the AI-generated insights
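Node 5's formatter can stay small. A sketch assuming the validated report object from Node 4; `formatHtmlReport`, the dealership-name parameter, and the styling are illustrative placeholders, not the documented implementation:

```javascript
// Hypothetical Node 5 formatter: turn the report object into a simple
// branded HTML email body (chart placeholders omitted for brevity).
function formatHtmlReport(report, dealershipName) {
  const list = (items) => items.map((i) => `<li>${i}</li>`).join('');
  return `<html><body style="font-family: Arial, sans-serif;">
  <h1>${dealershipName} Weekly Communications Report</h1>
  <p>${report.executive_summary}</p>
  <h2>Highlights</h2><ul>${list(report.highlights)}</ul>
  <h2>Concerns</h2><ul>${list(report.concerns)}</ul>
  <h2>Recommendations</h2><ul>${list(report.recommendations)}</ul>
  <p><strong>ROI:</strong> ${report.roi_summary}</p>
  <p><strong>Trend:</strong> ${report.week_over_week_trend}</p>
  </body></html>`;
}
```

A production version should HTML-escape the AI-supplied strings before interpolating them, since the model output is not guaranteed to be markup-safe.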

Node 6 - Send to Dealership Manager:
  Type: sendGrid
  To: dealership_gm@dealername.com
  Subject: 'Weekly AI Communications Performance Report - {{$now.format("MMM D, YYYY")}}'

Node 7 - Send to MSP Account Manager:
  Type: sendGrid  
  To: account_manager@yourmsp.com
  Subject: '[Client: DealerName] Weekly Performance - {{$now.format("MMM D, YYYY")}}'

Testing & Validation

  • UNIT TEST - AI Content Quality: Send 10 test prompts (2 per communication type) through the AI Content Generation workflow using sample customer data. Verify all JSON responses parse correctly, email subjects are under 60 characters, email bodies are under 150 words, SMS bodies are under 145 characters, no template variables remain unreplaced, and tone is professional and on-brand.
  • UNIT TEST - Consent Verification: Create test consent records covering all scenarios: active promotional consent, active informational-only consent, opted-out customer, no consent record, and expired consent. Run each through the Consent Verification sub-workflow and verify: promotional SMS blocked for informational-only consent, all channels blocked for opted-out customers, no-record customers are blocked, and active consent customers are approved.
  • INTEGRATION TEST - DMS Data Extraction: Trigger the DMS Data Sync workflow manually and verify it successfully authenticates with the DMS API (Fortellis/Tekion/RCI), retrieves recent closed repair orders, correctly parses customer and vehicle data, properly categorizes communication types, and inserts records into the processing queue without duplicates.
  • INTEGRATION TEST - Email Delivery End-to-End: Generate a test post-service follow-up email for a test customer (use MSP team email addresses). Verify the email arrives in inbox (not spam), SPF/DKIM/DMARC all pass (check email headers), SendGrid tracking works (open and click events fire back to the webhook), unsubscribe link works and updates the consent database within 10 seconds, and the email renders correctly on desktop and mobile.
  • INTEGRATION TEST - SMS Delivery End-to-End: Send a test maintenance reminder SMS to MSP team phone numbers. Verify message delivers within 30 seconds, sender shows the registered 10DLC number, message content matches AI-generated text with opt-out appended, reply STOP triggers automatic opt-out in both Twilio and the local consent database, and Twilio delivery receipt webhook fires and updates communication_log.
  • LOAD TEST - Batch Processing: Queue 200 test records in the processing queue and run the AI Content Generation and Delivery workflows. Verify all 200 are processed within 2 hours, API rate limits are respected (no 429 errors), error handling correctly retries failed API calls, batch splitting works correctly without data loss, and database records are complete and consistent.
  • COMPLIANCE TEST - Opt-Out Timing: Trigger an unsubscribe via email and a STOP reply via SMS. Verify both channels cease all communications to that customer within the next workflow execution cycle (max 30 minutes). Verify the consent database is updated with opt-out timestamp and source. Attempt to queue a new message for the opted-out customer and verify it is blocked by the Consent Verification middleware.
  • COMPLIANCE TEST - CAN-SPAM Requirements: Review 5 sample AI-generated emails and verify each contains: accurate From address and dealership name, non-deceptive subject line, physical mailing address of the dealership, working unsubscribe mechanism, and identification as an advertisement (if promotional).
  • DELIVERABILITY TEST - Email Warmup: After configuring the dedicated SendGrid IP, send 50 emails on Day 1 to engaged contacts (MSP team + dealership staff). Check deliverability via SendGrid analytics. Gradually increase volume per SendGrid's IP warmup schedule over 4 weeks. Monitor bounce rate (must stay under 2%) and spam complaint rate (must stay under 0.1%).
  • MONITORING TEST - Alert Verification: Deliberately cause each monitored failure condition: stop the n8n container (verify Uptime Kuma alerts within 2 minutes), use an invalid API key (verify error workflow triggers alert), simulate a high bounce rate in test data (verify daily health report flags the issue). Confirm all alerts reach the MSP operations channel via the configured notification method.
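The first unit test above can be automated. A sketch of the quality gates, with thresholds taken directly from the test description; `checkContentQuality` and the message field names are assumptions:

```javascript
// Hypothetical quality gate for AI-generated content (thresholds from
// the unit test: subject <= 60 chars, body <= 150 words, SMS <= 145
// chars, no unreplaced {{variables}}). Returns a list of issues;
// an empty list means the message passes.
function checkContentQuality(msg) {
  const issues = [];
  if (msg.email_subject && msg.email_subject.length > 60) {
    issues.push('email subject exceeds 60 characters');
  }
  if (msg.email_body &&
      msg.email_body.split(/\s+/).filter(Boolean).length > 150) {
    issues.push('email body exceeds 150 words');
  }
  if (msg.sms_body && msg.sms_body.length > 145) {
    issues.push('SMS body exceeds 145 characters');
  }
  for (const field of ['email_subject', 'email_body', 'sms_body']) {
    if (msg[field] && /\{\{.+?\}\}/.test(msg[field])) {
      issues.push(`${field} contains an unreplaced template variable`);
    }
  }
  return issues;
}
```

Tone and brand voice still need a human reviewer; this only catches the mechanical failures.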

Client Handoff

The client handoff should be conducted as a 90-minute on-site session with the service manager, marketing coordinator (if applicable), and general manager. Cover the following topics:

1. SYSTEM OVERVIEW (15 min): Walk through the architecture diagram showing how data flows from DMS → AI → Email/SMS. Show the n8n dashboard at a high level. Explain that AI generates draft content but the system enforces quality checks and compliance rules automatically.
2. CAMPAIGN MANAGEMENT PORTAL (20 min): Demonstrate the n8n dashboard for viewing active workflows, checking queue status, and reviewing recent communications. Show how to pause a campaign in an emergency (disable workflow toggle). Show the weekly analytics report email and explain each metric.
3. CONTENT REVIEW & APPROVAL PROCESS (20 min): Walk through the prompt template library. Show 5-10 example AI-generated messages for each communication type. Explain how to request template changes (submit to MSP via ticket). Discuss the review cadence — MSP will sample 10 messages/week for quality assurance during the first month, then 10/month ongoing.
4. COMPLIANCE & OPT-OUT MANAGEMENT (20 min): Explain TCPA and CAN-SPAM requirements in plain language. Show the consent database and how opt-outs are processed automatically. Demonstrate the opt-out flow for both email and SMS. Emphasize: NEVER add customers to SMS lists without documented written consent. Provide the printed compliance checklist for service intake staff (consent checkbox on service forms).
5. ESCALATION & SUPPORT (15 min): Provide the MSP support contact card with: helpdesk email, phone number, emergency after-hours number. Define SLAs: P1 (system down/compliance issue) = 1 hour response; P2 (delivery issue) = 4 hour response; P3 (content/template change) = 1 business day. Leave behind the Client Operations Binder containing: system architecture diagram, credential recovery procedures (MSP-held), compliance checklist, campaign calendar template, weekly report interpretation guide, and escalation matrix.
Critical: NEVER add customers to SMS lists without documented written consent.

SUCCESS CRITERIA TO REVIEW: Confirm the client can identify where to see campaign performance, knows who to call for support, understands the consent requirement for SMS, and can articulate the value proposition to their leadership (expected 15-25% increase in service rebooking rate, labor savings of 5-10 hours/week on manual follow-ups).

Maintenance

Ongoing Maintenance Responsibilities

Weekly (MSP Technician, 1–2 Hours)

  • Review the automated weekly analytics report for anomalies (open rate drop >5%, bounce rate spike, unusual unsubscribe volume)
  • Sample 10 AI-generated messages across all communication types for quality assurance and brand voice consistency
  • Check n8n workflow execution logs for errors or warnings
  • Verify DMS data sync is pulling current records (check last sync timestamp)
  • Review Uptime Kuma monitoring dashboard for any downtime events
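The anomaly review in the first bullet can be scripted against the weekly metrics. A sketch, assuming rates are percentages and reusing the bounce and unsubscribe thresholds from the benchmarks earlier in the guide; `findAnomalies` and the metric object shape are assumptions:

```javascript
// Hypothetical weekly anomaly check: flags an open-rate drop of more
// than 5 points week-over-week, plus bounce/unsubscribe rates above the
// healthy thresholds (2% and 0.5%) used elsewhere in this guide.
function findAnomalies(current, previous) {
  const anomalies = [];
  const drop = previous.open_rate - current.open_rate;
  if (drop > 5) {
    anomalies.push(`open rate dropped ${drop.toFixed(1)} points week-over-week`);
  }
  if (current.bounce_rate > 2.0) {
    anomalies.push(`bounce rate ${current.bounce_rate}% exceeds 2% threshold`);
  }
  if (current.unsubscribe_rate > 0.5) {
    anomalies.push(`unsubscribe rate ${current.unsubscribe_rate}% exceeds 0.5% threshold`);
  }
  return anomalies;
}
```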

Monthly (MSP Technician, 2–3 Hours)

  • Review and optimize AI prompt templates based on performance data (A/B test subject lines, adjust tone)
  • Update seasonal campaign prompts and promotional offers per dealership marketing calendar
  • Audit consent database: verify opt-out processing is working, check for stale records
  • Review OpenAI/Anthropic API usage and billing — adjust model selection if costs are trending above budget
  • Update n8n to latest stable version
  • Review SendGrid deliverability metrics and sender reputation score
  • Backup PostgreSQL compliance database
Update n8n to latest stable version:

```bash
docker compose pull n8n && docker compose up -d n8n
```

Backup PostgreSQL compliance database:

```bash
docker exec ai-comms-postgres pg_dump -U n8n compliance > /backup/compliance_$(date +%Y%m%d).sql
```

Quarterly (MSP Account Manager + Technician, 4–6 Hours)

  • Comprehensive compliance audit: verify FTC Safeguards documentation is current, review all consent records for completeness, test opt-out flows end-to-end
  • Client business review: present campaign ROI metrics (appointments booked, estimated revenue attributed), compare to baseline, recommend strategy adjustments
  • Update AI models if new cost-effective options are available (e.g., new OpenAI mini model release)
  • Review and update firewall rules, SSL certificates, and security patches on the automation server
  • Test disaster recovery: restore PostgreSQL backup to verify integrity
  • Review DMS API for any changes or deprecations announced by CDK/Tekion/Reynolds

Annually

  • Full security assessment of the automation platform per FTC Safeguards requirements
  • Renew FortiGate subscription licenses
  • Review and renegotiate Twilio/SendGrid plans based on actual volume
  • Update client's written information security plan

Model Retraining / Prompt Refresh Triggers

  • Open rate drops below 15% for 3 consecutive weeks
  • Unsubscribe rate exceeds 0.5% for any campaign type
  • Client rebrands or changes marketing messaging
  • New OEM co-op requirements are issued
  • Major AI model update released (test new model in staging before production swap)
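The first two triggers lend themselves to an automated check. A sketch, assuming open rates are stored newest-first as percentages; `needsPromptRefresh` and the input shapes are assumptions:

```javascript
// Hypothetical refresh-trigger check for the first two conditions:
// open rate below 15% for 3 consecutive weeks, or any campaign type
// with an unsubscribe rate above 0.5%.
function needsPromptRefresh(weeklyOpenRates, unsubscribeRateByType) {
  const reasons = [];
  const lastThree = weeklyOpenRates.slice(0, 3); // newest-first
  if (lastThree.length === 3 && lastThree.every((r) => r < 15)) {
    reasons.push('open rate below 15% for 3 consecutive weeks');
  }
  for (const [type, rate] of Object.entries(unsubscribeRateByType)) {
    if (rate > 0.5) {
      reasons.push(`unsubscribe rate ${rate}% for ${type} exceeds 0.5%`);
    }
  }
  return reasons;
}
```

The remaining triggers (rebrands, OEM co-op changes, model releases) are event-driven and stay a human judgment call.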

SLA Framework

  • P1 (System outage, compliance breach, data leak): 1-hour response, 4-hour resolution target
  • P2 (Delivery failures, DMS sync broken, high bounce rate): 4-hour response, 1 business day resolution
  • P3 (Template changes, new campaign setup, reporting requests): 1 business day response, 3 business day resolution
  • P4 (Feature requests, optimization suggestions): Next scheduled maintenance window

Escalation Path

Technician → Senior Engineer → MSP Account Manager → MSP Operations Director. For compliance incidents: immediately notify MSP compliance officer and client's designated Qualified Individual.

Alternatives

...

Kimoby Turnkey Platform

Replace the entire custom integration stack with Kimoby, a purpose-built automotive service communication platform. Kimoby provides pre-built DMS integrations (CDK, Reynolds, DealerTrack), automated service reminders, two-way texting, photo/video messaging, and AI-assisted content — all in a single SaaS platform managed by the vendor.

ActiveCampaign + OpenAI via Zapier

Use ActiveCampaign as the central CRM and marketing automation platform, connected to OpenAI via Zapier for AI content generation. ActiveCampaign handles email delivery, contact management, automation sequences, and basic SMS. Zapier orchestrates the AI API calls and DMS data sync.

Fullpath AI Marketing Ecosystem

Use Fullpath's automotive-specific CDP and AI-powered marketing platform. Fullpath unifies dealership data sources into a single customer data platform, then uses AI agents to generate ad copy, email campaigns, and audience targeting — with built-in OEM compliance for franchise dealers.

Cloud-Hosted n8n (No On-Premise Server)

Identical to the primary approach but replace the on-premise Dell PowerEdge server with n8n Cloud hosting or a small VPS (e.g., Hetzner, DigitalOcean, or AWS Lightsail). Eliminates on-premise hardware entirely.
