
Implementation Guide: Analyze service bay utilization, technician efficiency, and parts margin by job type

Step-by-step implementation guide for deploying AI to analyze service bay utilization, technician efficiency, and parts margin by job type for Automotive clients.

Hardware Procurement

Ubiquiti UniFi G5 Turret Ultra Camera

Ubiquiti | UVC-G5-Turret-Ultra | Qty: 6

$99/unit MSP cost / $149 suggested resale

Primary bay occupancy monitoring camera — one per service bay. 2K resolution, compact indoor form factor, PoE-powered. Feeds into UniFi Protect for recording and into Leverege PitCrew or custom AI pipeline via RTSP for bay occupancy detection.

Ubiquiti UniFi G5 Turret Ultra Camera (Entrance/Exit)

Ubiquiti | UVC-G5-Turret-Ultra | Qty: 2

$99/unit MSP cost / $149 suggested resale

Vehicle entrance and exit monitoring to track bay ingress/egress times. Positioned at shop entrance and exit doors to capture vehicle arrival and departure events for total dwell-time calculation.

Ubiquiti UniFi Network Video Recorder

Ubiquiti | UNVR | Qty: 1

$299 MSP cost / $449 suggested resale

4-bay NVR supporting up to 18 cameras at 4K. Runs UniFi Protect with no recurring license fees. Stores 30 days of video locally for review and provides RTSP feeds to analytics pipeline. Requires separate HDD purchase.

Seagate IronWolf NAS HDD 4TB

Seagate | ST4000VN006 | Qty: 2

$95/unit MSP cost / $140 suggested resale

Surveillance-rated hard drives for the UNVR. Two 4TB drives in RAID-1 provide 4TB usable storage with redundancy — sufficient for 30 days of recording from 8 cameras at 2K resolution with motion-based recording.
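
As a sanity check on that sizing claim, the arithmetic below assumes an average bitrate of roughly 2 Mbps per 2K camera and a 40% motion-recording duty cycle; both figures are assumptions to revisit once real footage statistics are visible in UniFi Protect.

Back-of-envelope storage estimate (assumed bitrate and duty cycle)
python
# Rough storage estimate for motion-based recording; all inputs are assumptions
CAMERAS = 8
AVG_BITRATE_MBPS = 2.0      # assumed average 2K stream bitrate per camera
MOTION_DUTY_CYCLE = 0.40    # assumed fraction of the day actually recorded
RETENTION_DAYS = 30

gb_per_camera_day = AVG_BITRATE_MBPS / 8 * 86400 * MOTION_DUTY_CYCLE / 1000
total_tb = gb_per_camera_day * CAMERAS * RETENTION_DAYS / 1000
print(f"~{gb_per_camera_day:.1f} GB per camera per day, ~{total_tb:.1f} TB for {RETENTION_DAYS} days")
# With these assumptions: ~8.6 GB/camera/day and ~2.1 TB total, within the 4 TB usable RAID-1 capacity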

Ubiquiti UniFi Switch Pro 24 PoE

Ubiquiti | USW-Pro-24-PoE | Qty: 1

$479 MSP cost / $699 suggested resale

Managed Layer 3 PoE+ switch providing power and connectivity for all 8 cameras and the NVR. Supports VLAN segmentation to isolate camera/IoT traffic from POS and business networks per PCI DSS and FTC Safeguards Rule requirements. 400W PoE budget handles all cameras with headroom.

Ubiquiti UniFi Security Gateway or Dream Machine

Ubiquiti | UDM-Pro | Qty: 1

$379 MSP cost / $549 suggested resale (only if client lacks existing UniFi or enterprise-grade firewall)

Unified gateway providing firewall, VLAN routing, IDS/IPS, and VPN for secure remote MSP access. Required for proper network segmentation between camera VLAN, business VLAN, and POS VLAN. Skip if client already has adequate firewall/router.

CAT6 Plenum-Rated Ethernet Cable (1000ft spool)

Monoprice or equivalent | MP-CAT6-PLENUM-1000 | Qty: 1

$180 MSP cost / included in installation labor

Cabling runs from PoE switch to each camera location across service bays. Plenum-rated required for runs through drop ceilings and HVAC spaces common in shop environments.

Cable termination supplies and surface mount boxes

Various | RJ45 keystones, wall plates, conduit | Qty: 1

$75 total MSP cost / included in installation labor

Termination hardware for clean camera cable runs. Service bay environments require conduit or surface-mount raceways to protect cabling from shop hazards (lifts, tools, chemicals).

Dell PowerEdge T160 Tower Server

Dell Technologies | PET160-BASE | Qty: 1

$1,800 configured MSP cost (Xeon E-2314 2.8GHz, 32GB DDR4, 2×2TB SATA RAID-1) / $2,700 suggested resale

Optional on-premises edge compute server for shops requiring local data processing, Power BI Report Server, or custom Python analytics pipelines. Houses the ETL middleware, local database, and optional AI inference. Can be omitted for cloud-only deployments where all analytics run in Power BI Cloud and Leverege PitCrew SaaS.

APC Smart-UPS 1500VA

APC by Schneider Electric | SMT1500C | Qty: 1

$550 MSP cost / $750 suggested resale

Battery backup for NVR, edge server (if deployed), and PoE switch. Auto shops experience frequent power fluctuations from compressors, lifts, and welders. UPS protects analytics infrastructure and ensures continuous recording during brief outages.

Software Procurement

Tekmetric Shop Management System

Tekmetric | SaaS subscription (Grow plan)

$349/month per location

Primary operational data source. Provides repair order data, technician clock-in/out times, parts cost and sale prices, job type categorization, and customer information. REST API enables automated data extraction for the analytics pipeline. Grow plan includes labor guide and maintenance schedule required for book-time comparisons. If client already uses a competing SMS (Shopmonkey, Shop-Ware, Mitchell1), integrate with their existing platform instead.

Leverege PitCrew

Leverege | PitCrew

$300–$600/month per location (contact vendor for MSP partner pricing)

Purpose-built AI bay intelligence platform. Analyzes camera feeds to automatically track bay occupancy, vehicle dwell time, idle time between jobs, and technician presence. Integrates with existing IP cameras via RTSP — no proprietary hardware required. Provides real-time bay status dashboard, historical utilization reports, and anomaly alerting. Proven at scale with Discount Tire deploying across all locations.

Microsoft Power BI Pro

Microsoft | Per-seat SaaS (via Microsoft CSP) | Qty: 3 seats (owner, service manager, lead tech)

$14/user/month (MSP CSP cost ~$11-12/user, resell at $14)

Business intelligence platform for custom dashboards combining SMS data, PitCrew bay data, and accounting data into unified KPI views. Displays technician efficiency ratios, parts margin by job type, bay utilization heat maps, and trend analysis. MSP builds and maintains dashboards; client consumes reports and interactive visuals.

Microsoft 365 Business Basic

Microsoft | Per-seat SaaS (via Microsoft CSP)

$6/user/month

Provides Azure AD accounts required for Power BI Pro licensing, plus Teams for MSP communication with shop staff, SharePoint for document storage of compliance records and reports, and Exchange Online for automated report email delivery.

QuickBooks Online

Intuit | Per-company SaaS

$92/month

Accounting platform providing actual parts cost data, vendor payment records, and financial reconciliation. Integrates with Tekmetric/Shopmonkey for PO sync. Parts margin calculations in Power BI cross-reference the SMS parts sale price against QuickBooks COGS entries for accurate margin-by-job-type reporting. No new purchase is needed if the client already uses QuickBooks Online or Xero.

UniFi Protect

Ubiquiti | Included with UNVR

$0/month

Video management system running on the UNVR. Provides camera configuration, motion detection zones, video playback, and RTSP stream endpoints that feed into Leverege PitCrew. No per-camera licensing fees — a significant cost advantage over Milestone, Genetec, or other VMS platforms.

n8n Workflow Automation

n8n | Self-hosted (Docker) on edge server

$0 self-hosted on edge server / Zapier Professional at $49/month if cloud-only

Workflow automation platform connecting Tekmetric API, QuickBooks API, Leverege PitCrew API, and Power BI data refresh triggers. Automates nightly data extraction, transformation, and loading into the analytics data model. Self-hosted n8n is preferred for cost savings and data control.

PostgreSQL Database

PostgreSQL Global Development Group

$0 (self-hosted on edge server or $15/month on Azure Database for PostgreSQL Flexible Server, Burstable B1ms)

Central analytics data warehouse storing normalized data from SMS, accounting, and bay intelligence sources. Power BI connects directly to PostgreSQL for dashboard queries. Schema designed for automotive service KPIs with fact and dimension tables for repair orders, technicians, bays, job types, and parts.

Prerequisites

  • Active internet connection with minimum 50 Mbps symmetrical bandwidth (100 Mbps preferred). Verify with speed test at multiple times of day — shop hours may coincide with peak neighborhood usage.
  • Existing shop management system (Tekmetric, Shopmonkey, Shop-Ware, Mitchell1, or DMS) with API access enabled and at least 6 months of historical repair order data for baseline analytics.
  • Accounting software (QuickBooks Online or Xero) actively used with parts purchases recorded and categorized — required for accurate COGS and margin calculations.
  • Electrical power availability for the head-end equipment — all cameras are PoE-powered from the switch, but the switch, NVR, and edge server need standard 120V outlets in the server/network closet area.
  • Physical network closet or secure area for NVR, switch, edge server, and UPS — must be climate-controlled (not in the shop bay area where temperatures can exceed 100°F in summer).
  • Clear line of sight from camera mounting positions to each service bay — conduct site survey to identify optimal mounting points (typically 10-12 feet high on walls or ceiling, angled to capture full bay floor area).
  • Shop owner authorization and employee notification regarding camera installation — while service bays are not private spaces, best practice and some state laws require notification. Prepare a written camera policy document.
  • Client must designate a project champion — typically the service manager or shop owner — who will attend training, validate KPI definitions, and provide domain expertise on job type categorization and technician workflows.
  • FTC Safeguards Rule compliance assessment completed or in progress — if the shop handles any consumer financing (even referrals to lenders), they are classified as a financial institution and must comply. This project's network infrastructure must align with their Written Information Security Program (WISP).
  • Parts supplier electronic ordering integration active in the SMS (WorldPac SpeedDial, NexPart, or OEC RepairLink) — this ensures parts cost data flows automatically rather than requiring manual entry.

Installation Steps


Step 1: Site Survey and Network Assessment

Conduct an on-site assessment of the shop's physical layout, existing network infrastructure, camera mounting locations, and data source readiness. Document the number of service bays, their physical dimensions, existing network drops, power availability, and current SMS/accounting setup. Photograph each bay from proposed camera angles. Verify internet bandwidth with iperf3 or speedtest-cli. Map the existing network topology and identify where the new VLAN-segmented camera network will connect.

bash
speedtest-cli --simple
# Optionally verify throughput with iperf3 against an MSP-hosted server:
# iperf3 -c <iperf3-server-ip>
nmap -sn 192.168.1.0/24
# Document results in site survey template
Note

This step is critical — poor site surveys lead to costly return trips. Allocate 2-3 hours on-site. Bring a laptop, cable tester, and laser measure. Check ceiling height and material (drop ceiling vs. open rafters vs. concrete) as this affects camera mounting hardware needs. Verify the SMS platform and version — ask to see the settings/API page to confirm API access is available on their subscription tier.

Step 2: Procure and Stage Hardware

Order all hardware items per the procurement list. Upon receipt, stage all equipment in the MSP lab: unbox cameras, update firmware, pre-configure the UNVR with hard drives, and pre-configure the PoE switch with VLAN settings. Pre-configuring in the lab saves significant on-site time and reduces shop disruption.

  • On UNVR — after initial setup via UniFi OS Setup wizard: Access UniFi OS at https://<UNVR-IP>
  • Navigate to UniFi Protect > Settings > General
  • Set recording mode: Enhanced (motion-based with pre-roll)
  • Set retention: 30 days
  • On USW-Pro-24-PoE via UniFi Network Controller: Create VLANs:
  • VLAN 10 — Business Network (existing devices)
  • VLAN 20 — Camera/IoT Network (cameras, NVR, edge server analytics port)
  • VLAN 30 — POS/Payment Network (payment terminals - isolated)
  • Assign switch ports: Ports 1-8: VLAN 20 (untagged) — camera ports
  • Port 9: VLAN 20 (untagged) — NVR
  • Port 10: Trunk (VLAN 10, 20 tagged) — uplink to router/firewall
  • Ports 11-24: VLAN 10 (untagged) — business devices
Note

Firmware updates should be done in the lab on a stable connection — camera firmware updates over a shop's potentially unstable connection can brick devices. Label each camera with its assigned bay number using a label maker before going on-site. Pre-cut and test any patch cables needed.

Step 3: Install Network Infrastructure On-Site

Install the PoE switch, UPS, and edge server (if applicable) in the client's network closet. Run CAT6 cabling from the switch location to each camera mounting point. Use conduit or surface-mount raceways in the shop bay area to protect cables from physical damage. Terminate all cable runs with RJ45 keystones. Connect the PoE switch uplink to the existing router/firewall. If deploying a UDM-Pro, replace the existing router at this stage.

Cable run testing and firewall rule configuration
bash
# Test each cable run before mounting cameras:
# On laptop connected to switch port:
ping -c 4 <gateway-ip>
# Verify PoE delivery on each port via UniFi Controller:
# Network > Devices > USW-Pro-24-PoE > Ports tab
# Each camera port should show 'PoE: Delivering' after camera connection

# Configure firewall rules (on UDM-Pro or existing firewall):
# ALLOW: VLAN 20 -> Internet (for camera firmware updates, PitCrew cloud)
# ALLOW: VLAN 20 -> VLAN 10 (specific IP of edge server analytics port only)
# DENY: VLAN 20 -> VLAN 10 (all other)
# DENY: VLAN 20 -> VLAN 30 (complete isolation from POS)
# DENY: VLAN 30 -> VLAN 20 (complete isolation)
# ALLOW: MSP VPN -> VLAN 20 (remote management)
Note

Cable runs in auto shops face unique hazards: hydraulic lift arms, air hose reels, rolling tool carts, and chemical exposure. Always use conduit in the bay area — exposed CAT6 will be damaged within weeks. Route cables along walls at 10+ feet height where possible. Use cable trays above drop ceilings. Each cable run should be tested with a cable certifier (at minimum a continuity tester) before camera mounting.

Step 4: Mount and Configure Cameras

Mount each camera at its pre-determined location (identified during site survey). Optimal mounting height is 10-12 feet, angled 30-45 degrees downward to capture the full bay floor, lift, and work area. Connect cameras to their labeled PoE switch ports. Adopt each camera into UniFi Protect via the UNVR. Configure motion zones for each camera to focus on the vehicle work area and exclude irrelevant areas (walls, aisles). Enable RTSP streams for Leverege PitCrew integration.

1. Navigate to Devices > [Camera] > Settings in UniFi Protect (https://<UNVR-IP>/protect)
2. Set name: 'Bay-1', 'Bay-2', etc. or 'Entrance', 'Exit'
3. Set recording mode: Always (for bay cameras), Motion (for entrance/exit)
4. Configure Motion Zones: draw a zone covering the vehicle work area only. Set sensitivity: 70-80% (adjust based on testing). Exclude high-traffic walkways to reduce false triggers.
5. Enable RTSP: Navigate to Settings > Advanced > RTSP. Enable all quality streams. Note the RTSP URL: rtsp://<UNVR-IP>:7447/<stream-id>
Verify RTSP stream from UniFi camera
bash
# Verify RTSP feed externally:
ffplay rtsp://<UNVR-IP>:7447/<camera-stream-id>
# Or use VLC:
vlc rtsp://<UNVR-IP>:7447/<camera-stream-id>
Note

Camera angle is critical for AI-based bay occupancy detection. The camera must see the full bay floor to detect whether a vehicle is present. Avoid backlighting from bay doors — if a bay faces a large open door, the camera should be mounted on the same wall as the door, shooting inward, not facing the door. Test each camera feed during both open-door (daylight) and closed-door (night/artificial light) conditions to verify image quality. IR night vision on the G5 Turret Ultra is sufficient for most shop lighting conditions.

Step 5: Deploy Edge Server and Database (If On-Premises)

If deploying the Dell PowerEdge T160 for local processing, install Ubuntu Server 22.04 LTS, Docker, PostgreSQL, and n8n. Configure the analytics database schema designed for automotive service KPIs. For cloud-only deployments, provision Azure Database for PostgreSQL Flexible Server instead and skip the physical server setup.

Install and configure Docker, PostgreSQL, and n8n on Dell PowerEdge T160 running Ubuntu Server 22.04 LTS
bash
# Install Ubuntu Server 22.04 LTS on Dell PowerEdge T160
# After OS installation and initial setup:
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl git htop net-tools ufw

# Install Docker and Docker Compose:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER
sudo apt install -y docker-compose-plugin

# Create project directory:
sudo mkdir -p /opt/shop-analytics/init-scripts
sudo chown -R $USER:$USER /opt/shop-analytics
cd /opt/shop-analytics

# Postgres only creates shop_analytics on first start; add an init script for the
# separate n8n database referenced in the compose file below:
cat > init-scripts/000-create-n8n-db.sql << 'EOF'
CREATE DATABASE n8n OWNER analytics_admin;
EOF

# Create docker-compose.yml:
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  postgres:
    image: postgres:16
    container_name: shop-analytics-db
    environment:
      POSTGRES_DB: shop_analytics
      POSTGRES_USER: analytics_admin
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    volumes:
      - pgdata:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    ports:
      - '5432:5432'
    restart: unless-stopped

  n8n:
    image: n8nio/n8n:latest
    container_name: shop-analytics-n8n
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=analytics_admin
      - DB_POSTGRESDB_PASSWORD=${DB_PASSWORD}
    volumes:
      - n8ndata:/home/node/.n8n
    ports:
      - '5678:5678'
    depends_on:
      - postgres
    restart: unless-stopped

volumes:
  pgdata:
  n8ndata:
EOF

# Create environment file:
cat > .env << 'EOF'
DB_PASSWORD=<GENERATE_STRONG_PASSWORD_HERE>
N8N_USER=mspadmin
N8N_PASSWORD=<GENERATE_STRONG_PASSWORD_HERE>
EOF
chmod 600 .env

# Start services:
docker compose up -d

# Verify services running:
docker compose ps
docker compose logs --tail=20 postgres
docker compose logs --tail=20 n8n
Note

For cloud-only deployment, provision Azure Database for PostgreSQL Flexible Server (Burstable B1ms, 1 vCore, 2GB RAM, 32GB storage — approximately $15/month) and use Zapier instead of self-hosted n8n. The edge server approach is preferred for shops concerned about cloud data residency or wanting to minimize recurring costs. Always generate strong unique passwords — use 'openssl rand -base64 24' for each password field.

Step 6: Initialize Analytics Database Schema

Create the PostgreSQL database schema optimized for automotive service analytics. This schema supports the three core analytics domains: bay utilization, technician efficiency, and parts margin by job type. The schema includes fact tables for repair orders, labor lines, parts lines, and bay events, plus dimension tables for technicians, bays, job types, and time periods.

Create the PostgreSQL analytics schema initialization script and optionally apply it to a running container
bash
# Create the initialization SQL script:
cat > /opt/shop-analytics/init-scripts/001-create-schema.sql << 'EOSQL'
-- Automotive Service Analytics Schema
-- Version 1.0

CREATE SCHEMA IF NOT EXISTS analytics;

-- Dimension: Service Bays
CREATE TABLE analytics.dim_bays (
    bay_id SERIAL PRIMARY KEY,
    bay_number VARCHAR(10) NOT NULL,
    bay_type VARCHAR(50), -- 'general', 'alignment', 'quick_lube', 'heavy_repair', 'diagnostic'
    camera_id VARCHAR(100), -- RTSP stream identifier or PitCrew bay ID
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Dimension: Technicians
CREATE TABLE analytics.dim_technicians (
    tech_id SERIAL PRIMARY KEY,
    sms_tech_id VARCHAR(50) NOT NULL, -- ID from shop management system
    first_name VARCHAR(100),
    last_name VARCHAR(100),
    skill_level VARCHAR(20), -- 'A', 'B', 'C', 'apprentice'
    hourly_cost_rate DECIMAL(8,2), -- what shop pays the tech
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Dimension: Job Types
CREATE TABLE analytics.dim_job_types (
    job_type_id SERIAL PRIMARY KEY,
    job_type_code VARCHAR(20) NOT NULL,
    job_type_name VARCHAR(200) NOT NULL,
    category VARCHAR(100), -- 'maintenance', 'repair', 'diagnostic', 'tire', 'body'
    avg_book_hours DECIMAL(6,2),
    target_parts_margin_pct DECIMAL(5,2),
    created_at TIMESTAMP DEFAULT NOW()
);

-- Dimension: Date (for time-series analysis)
CREATE TABLE analytics.dim_date (
    date_key DATE PRIMARY KEY,
    day_of_week INTEGER,
    day_name VARCHAR(10),
    week_number INTEGER,
    month_number INTEGER,
    month_name VARCHAR(10),
    quarter INTEGER,
    year INTEGER,
    is_weekend BOOLEAN,
    is_holiday BOOLEAN DEFAULT false
);

-- Fact: Repair Orders
CREATE TABLE analytics.fact_repair_orders (
    ro_id SERIAL PRIMARY KEY,
    sms_ro_number VARCHAR(50) NOT NULL,
    date_key DATE REFERENCES analytics.dim_date(date_key),
    customer_id VARCHAR(50),
    vehicle_vin VARCHAR(17),
    vehicle_year INTEGER,
    vehicle_make VARCHAR(50),
    vehicle_model VARCHAR(50),
    total_labor_sale DECIMAL(10,2),
    total_parts_sale DECIMAL(10,2),
    total_parts_cost DECIMAL(10,2),
    total_ro_sale DECIMAL(10,2),
    ro_status VARCHAR(20), -- 'open', 'in_progress', 'completed', 'invoiced'
    created_at_sms TIMESTAMP,
    completed_at_sms TIMESTAMP,
    loaded_at TIMESTAMP DEFAULT NOW(),
    UNIQUE(sms_ro_number)
);

-- Fact: Labor Lines (one per job per tech per RO)
CREATE TABLE analytics.fact_labor_lines (
    labor_line_id SERIAL PRIMARY KEY,
    ro_id INTEGER REFERENCES analytics.fact_repair_orders(ro_id),
    tech_id INTEGER REFERENCES analytics.dim_technicians(tech_id),
    job_type_id INTEGER REFERENCES analytics.dim_job_types(job_type_id),
    date_key DATE REFERENCES analytics.dim_date(date_key),
    description TEXT,
    book_hours DECIMAL(6,2), -- standard/book time for the job
    actual_hours DECIMAL(6,2), -- clock time the tech spent
    billed_hours DECIMAL(6,2), -- hours billed to customer
    labor_rate DECIMAL(8,2), -- $/hour charged
    labor_sale DECIMAL(10,2), -- billed_hours * labor_rate
    clock_in TIMESTAMP,
    clock_out TIMESTAMP,
    loaded_at TIMESTAMP DEFAULT NOW()
);

-- Fact: Parts Lines (one per part per RO)
CREATE TABLE analytics.fact_parts_lines (
    parts_line_id SERIAL PRIMARY KEY,
    ro_id INTEGER REFERENCES analytics.fact_repair_orders(ro_id),
    job_type_id INTEGER REFERENCES analytics.dim_job_types(job_type_id),
    date_key DATE REFERENCES analytics.dim_date(date_key),
    part_number VARCHAR(50),
    part_description TEXT,
    vendor VARCHAR(100),
    quantity DECIMAL(8,2),
    cost_per_unit DECIMAL(10,2),
    sale_per_unit DECIMAL(10,2),
    total_cost DECIMAL(10,2),
    total_sale DECIMAL(10,2),
    margin_amount DECIMAL(10,2) GENERATED ALWAYS AS (total_sale - total_cost) STORED,
    margin_pct DECIMAL(5,2) GENERATED ALWAYS AS (
        CASE WHEN total_sale > 0 THEN ((total_sale - total_cost) / total_sale * 100) ELSE 0 END
    ) STORED,
    loaded_at TIMESTAMP DEFAULT NOW()
);

-- Fact: Bay Events (from PitCrew or camera AI)
CREATE TABLE analytics.fact_bay_events (
    event_id SERIAL PRIMARY KEY,
    bay_id INTEGER REFERENCES analytics.dim_bays(bay_id),
    date_key DATE REFERENCES analytics.dim_date(date_key),
    event_type VARCHAR(20) NOT NULL, -- 'vehicle_in', 'vehicle_out', 'idle_start', 'idle_end'
    event_timestamp TIMESTAMP NOT NULL,
    vehicle_detected BOOLEAN,
    tech_detected BOOLEAN,
    ro_id INTEGER REFERENCES analytics.fact_repair_orders(ro_id),
    source VARCHAR(50), -- 'pitcrew', 'custom_ai', 'manual'
    loaded_at TIMESTAMP DEFAULT NOW()
);

-- Aggregation: Daily Bay Utilization Summary
CREATE TABLE analytics.agg_daily_bay_utilization (
    id SERIAL PRIMARY KEY,
    bay_id INTEGER REFERENCES analytics.dim_bays(bay_id),
    date_key DATE REFERENCES analytics.dim_date(date_key),
    total_occupied_minutes INTEGER,
    total_idle_minutes INTEGER,
    total_available_minutes INTEGER, -- shop hours that day
    utilization_pct DECIMAL(5,2),
    vehicle_count INTEGER,
    avg_dwell_minutes INTEGER,
    loaded_at TIMESTAMP DEFAULT NOW(),
    UNIQUE(bay_id, date_key)
);

-- Populate date dimension for 3 years
INSERT INTO analytics.dim_date
SELECT
    d::date AS date_key,
    EXTRACT(DOW FROM d) AS day_of_week,
    TO_CHAR(d, 'Day') AS day_name,
    EXTRACT(WEEK FROM d) AS week_number,
    EXTRACT(MONTH FROM d) AS month_number,
    TO_CHAR(d, 'Month') AS month_name,
    EXTRACT(QUARTER FROM d) AS quarter,
    EXTRACT(YEAR FROM d) AS year,
    EXTRACT(DOW FROM d) IN (0,6) AS is_weekend,
    false AS is_holiday
FROM generate_series('2024-01-01'::date, '2026-12-31'::date, '1 day'::interval) d;

-- Create indexes for query performance
CREATE INDEX idx_fact_ro_date ON analytics.fact_repair_orders(date_key);
CREATE INDEX idx_fact_labor_date ON analytics.fact_labor_lines(date_key);
CREATE INDEX idx_fact_labor_tech ON analytics.fact_labor_lines(tech_id);
CREATE INDEX idx_fact_labor_job ON analytics.fact_labor_lines(job_type_id);
CREATE INDEX idx_fact_parts_date ON analytics.fact_parts_lines(date_key);
CREATE INDEX idx_fact_parts_job ON analytics.fact_parts_lines(job_type_id);
CREATE INDEX idx_fact_bay_events_bay ON analytics.fact_bay_events(bay_id, date_key);
CREATE INDEX idx_agg_bay_util ON analytics.agg_daily_bay_utilization(bay_id, date_key);
EOSQL

# If database is already running, apply manually:
docker exec -i shop-analytics-db psql -U analytics_admin -d shop_analytics < /opt/shop-analytics/init-scripts/001-create-schema.sql
Note

This schema is designed for Power BI DirectQuery or Import mode. For shops with fewer than 50 ROs/day, Import mode with scheduled refresh (4x daily) is recommended over DirectQuery for better dashboard performance. The generated columns for margin_amount and margin_pct ensure consistent calculation. Adjust the dim_date holiday flag for the shop's actual holiday schedule.

Step 7: Configure Shop Management System API Integration

Set up API access to the client's shop management system and configure the n8n ETL workflow to extract repair order, labor, parts, and technician data on a scheduled basis. This example uses Tekmetric's REST API; adapt endpoints for Shopmonkey, Shop-Ware, or other platforms.

1. Log into Tekmetric as admin
2. Navigate to Settings > Integrations > API
3. Generate an API key and note the Shop ID
4. For Shopmonkey shops instead: Navigate to Settings > Integrations > API Keys and generate a key with read access to work orders, line items, and technicians
5. Configure the n8n workflow (access at http://<edge-server-ip>:5678): import the pre-built workflow JSON (see custom_ai_components section)
6. Set credentials: Tekmetric — HTTP Header Auth with Bearer token; PostgreSQL — host=localhost, port=5432, db=shop_analytics; QuickBooks — OAuth2 credentials (see Step 8)
Tekmetric and Shopmonkey API test requests
bash
# Test Tekmetric API access:
curl -H 'Authorization: Bearer <TEKMETRIC_API_KEY>' \
  'https://sandbox.tekmetric.com/api/v1/shops/<SHOP_ID>/repair-orders?page=0&size=10'

# Expected response: JSON with repair order data including
# labor lines, parts lines, technician assignments, and status

# For Shopmonkey:
curl -H 'Authorization: Bearer <SHOPMONKEY_API_KEY>' \
  'https://api.shopmonkey.cloud/v3/work_orders?limit=10'
Note

Tekmetric API rate limits are generous for single-shop use. Schedule ETL runs during off-hours (e.g., 11 PM nightly for full sync, plus incremental every 2 hours during business hours). Shopmonkey and Shop-Ware have similar REST APIs with webhook support for real-time updates. For Mitchell1 Manager SE, data extraction may require CSV export automation rather than API — Mitchell1's API access is limited to their ProDemand repair content platform, not the shop management data. In that case, configure a scheduled task to export and parse CSV files.
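
For the Mitchell1 CSV-export path mentioned above, the sketch below shows one way a scheduled task could load an exported repair-order CSV into the analytics database. The file path and column names (RO_Number, RO_Date, LaborSale, PartsSale, PartsCost, Status) are hypothetical placeholders; map them to whatever the shop's actual export produces.

Sketch: load a Mitchell1 CSV export into fact_repair_orders (hypothetical column names)
python
import csv

import psycopg2  # pip install psycopg2-binary

# Hypothetical export path and column names; adjust to the shop's actual Mitchell1 export
CSV_PATH = "/opt/shop-analytics/imports/mitchell1_repair_orders.csv"

conn = psycopg2.connect(host="localhost", dbname="shop_analytics",
                        user="analytics_admin", password="<DB_PASSWORD>")
with conn, conn.cursor() as cur, open(CSV_PATH, newline="") as f:
    for row in csv.DictReader(f):
        cur.execute(
            """
            INSERT INTO analytics.fact_repair_orders
                (sms_ro_number, date_key, total_labor_sale, total_parts_sale,
                 total_parts_cost, total_ro_sale, ro_status)
            VALUES (%s, %s::date, %s, %s, %s, %s, %s)
            ON CONFLICT (sms_ro_number) DO UPDATE SET
                total_labor_sale = EXCLUDED.total_labor_sale,
                total_parts_sale = EXCLUDED.total_parts_sale,
                total_parts_cost = EXCLUDED.total_parts_cost,
                total_ro_sale    = EXCLUDED.total_ro_sale,
                ro_status        = EXCLUDED.ro_status
            """,
            (row["RO_Number"], row["RO_Date"], row["LaborSale"], row["PartsSale"],
             row["PartsCost"],
             # total_ro_sale approximated as labor + parts if the export has no RO total
             float(row["LaborSale"]) + float(row["PartsSale"]),
             row["Status"]),
        )
conn.close()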

Step 8: Configure QuickBooks Integration for Parts Cost Data

Connect QuickBooks Online to the analytics pipeline to pull accurate parts cost data (COGS) that supplements the SMS data. This is critical for accurate margin calculations, especially when SMS parts cost fields are not consistently updated. Set up OAuth2 authentication for QuickBooks API access via n8n.

1. Create a QuickBooks Developer account at developer.intuit.com
2. Create a new app with Accounting scope
3. Configure redirect URI: http://<edge-server-ip>:5678/rest/oauth2-credential/callback
4. Note the Client ID and Client Secret
5. In n8n: Go to Credentials > Add Credential > OAuth2 API
6. Set Grant Type: Authorization Code
7. Set Authorization URL: https://appcenter.intuit.com/connect/oauth2
8. Set Access Token URL: https://oauth.platform.intuit.com/oauth2/v1/tokens/bearer
9. Set Client ID: <from QuickBooks developer portal>
10. Set Client Secret: <from QuickBooks developer portal>
11. Set Scope: com.intuit.quickbooks.accounting
12. Click 'Connect' and authorize
Test QuickBooks Purchase data pull
bash
curl -G -H 'Authorization: Bearer <QB_ACCESS_TOKEN>' \
  -H 'Accept: application/json' \
  --data-urlencode "query=SELECT * FROM Purchase WHERE TxnDate >= '2024-01-01' MAXRESULTS 5" \
  'https://quickbooks.api.intuit.com/v3/company/<COMPANY_ID>/query'
Note

QuickBooks OAuth2 tokens expire every 60 minutes with a refresh token valid for 100 days. n8n handles token refresh automatically once configured. If the shop uses QuickBooks Desktop instead of Online, use the QuickBooks Web Connector or a middleware like Conductor (conductor.is) to bridge the gap. For Xero, the OAuth2 flow is similar — use Xero's Accounting API with the n8n Xero node.
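
As one way to stage the QuickBooks side of the margin cross-reference, the sketch below flattens a Purchase query response (like the one returned by the curl call above) into rows suitable for a staging table. The stg_qb_purchase_lines table name is a hypothetical addition to the schema; the JSON fields follow the standard QuickBooks Purchase object.

Sketch: flatten QuickBooks Purchase lines for a COGS staging table (illustrative)
python
import json

# Purchase query response saved to a file for illustration
with open("qb_purchases.json") as f:
    data = json.load(f)

rows = []
for purchase in data.get("QueryResponse", {}).get("Purchase", []):
    vendor = purchase.get("EntityRef", {}).get("name")
    txn_date = purchase.get("TxnDate")
    for line in purchase.get("Line", []):
        rows.append({
            "txn_date": txn_date,
            "vendor": vendor,
            "amount": line.get("Amount"),
            "description": line.get("Description"),
        })

# In the n8n workflow these rows would be inserted into a staging table
# (e.g. analytics.stg_qb_purchase_lines) and matched against fact_parts_lines
# where the SMS cost field is zero or missing.
print(f"Extracted {len(rows)} purchase lines")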

Step 9: Configure Leverege PitCrew Bay Intelligence

Onboard the shop location in Leverege PitCrew, connect camera feeds, define bay zones, and configure real-time bay status monitoring. PitCrew will become the primary source for bay utilization data, providing vehicle occupancy detection, dwell time tracking, and idle time measurement.

1. Contact the Leverege sales/partner team to provision the shop location. Provide: shop name, address, number of bays, bay layout diagram. Receive: PitCrew tenant URL, admin credentials, API key
2. Log into the PitCrew admin portal. Navigate to Location Setup > Cameras. Add each camera with its RTSP URL: Bay 1: rtsp://<UNVR-IP>:7447/<bay1-stream-id>, Bay 2: rtsp://<UNVR-IP>:7447/<bay2-stream-id>, and so on for all bays and the entrance/exit cameras
3. Define bay zones in PitCrew: for each camera, draw the polygon zone covering the vehicle area. PitCrew's AI model detects vehicles within these zones
4. Configure the PitCrew API webhook to push events to the analytics database: in PitCrew, go to Settings > Integrations > Webhooks. Add webhook URL: https://<edge-server-or-n8n-cloud>/webhook/pitcrew-events. Events: bay_occupied, bay_vacated, idle_detected
Test API data pull
bash
curl -H 'Authorization: Bearer <PITCREW_API_KEY>' \
  'https://api.leverege.com/pitcrew/v1/locations/<LOCATION_ID>/bays/status'
# Expected response: JSON with current status of each bay
Expected PitCrew bay status API response format
json
{ "bays": [{ "id": "bay-1", "status": "occupied", "since": "2024-...", "vehicle_detected": true }] }
Note

PitCrew onboarding typically takes 1-2 weeks as Leverege calibrates their AI models to the shop's specific camera angles and lighting conditions. During calibration, PitCrew may request camera angle adjustments — be prepared for minor repositioning. If PitCrew is not available or budget is constrained, the custom AI bay detection component (see custom_ai_components) can be deployed as an alternative, though it requires more MSP development effort. Ensure the UNVR's RTSP ports (7447) are accessible from the PitCrew cloud service — this may require a firewall rule allowing outbound RTSP or configuring PitCrew's on-premises connector.

Step 10: Build Power BI Analytics Dashboards

Create the Power BI dataset connecting to the PostgreSQL analytics database and build three primary dashboard pages: Bay Utilization, Technician Efficiency, and Parts Margin by Job Type. Import the pre-built .pbix template and customize for the client's specific bays, technicians, and job types.

1. Install Power BI Desktop on the MSP development workstation — download from https://powerbi.microsoft.com/desktop
2. Connect to the PostgreSQL data source: Get Data > PostgreSQL database | Server: <edge-server-ip> or <azure-postgres-host> | Database: shop_analytics | User: analytics_readonly (create this read-only user first)
3. Import all tables from the analytics schema
4. Create relationships in the Power BI data model: fact_repair_orders.date_key -> dim_date.date_key | fact_labor_lines.tech_id -> dim_technicians.tech_id | fact_labor_lines.job_type_id -> dim_job_types.job_type_id | fact_labor_lines.ro_id -> fact_repair_orders.ro_id | fact_parts_lines.ro_id -> fact_repair_orders.ro_id | fact_parts_lines.job_type_id -> dim_job_types.job_type_id | fact_bay_events.bay_id -> dim_bays.bay_id | agg_daily_bay_utilization.bay_id -> dim_bays.bay_id
5. Create key DAX measures (see custom_ai_components for the full measure set)
6. Publish to the Power BI Service workspace
7. Configure scheduled refresh: 4x daily (6am, 10am, 2pm, 6pm)
8. Share the dashboard with the client's Power BI Pro users
Create read-only Power BI user in PostgreSQL
bash
docker exec -i shop-analytics-db psql -U analytics_admin -d shop_analytics << 'EOF'
CREATE USER analytics_readonly WITH PASSWORD '<READONLY_PASSWORD>';
GRANT USAGE ON SCHEMA analytics TO analytics_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA analytics TO analytics_readonly;
ALTER DEFAULT PRIVILEGES IN SCHEMA analytics GRANT SELECT ON TABLES TO analytics_readonly;
EOF
Note

Use Import mode (not DirectQuery) for best performance with this data volume. Scheduled refresh 4x daily is sufficient — real-time bay status is handled by PitCrew's own dashboard. The Power BI dashboards provide the historical analytics and trend analysis layer. Always create a dedicated read-only PostgreSQL user for Power BI — never connect BI tools with admin credentials. The full .pbix template and DAX measures are provided in the custom_ai_components section.

Step 11: Configure Automated Data Pipeline (n8n ETL Workflows)

Import and configure the n8n workflows that automate data extraction from Tekmetric/Shopmonkey, QuickBooks, and PitCrew into the PostgreSQL analytics database. Three workflows handle the ETL: (1) SMS data sync every 2 hours, (2) QuickBooks parts cost sync nightly, and (3) PitCrew bay event ingestion via webhook.

  • Access n8n at http://<edge-server-ip>:5678 and log in with credentials configured in Step 5
  • Navigate to Workflows > Import from File
  • Import: sms-data-sync.json
  • Import: quickbooks-cost-sync.json
  • Import: pitcrew-webhook-ingest.json
  • Import: daily-aggregation.json
  • Configure credentials in each workflow: Tekmetric (HTTP Header Auth / Bearer token), PostgreSQL (Connection to shop_analytics database), QuickBooks (OAuth2 credential configured in Step 8)
  • Activate all workflows
  • Test each workflow manually first by clicking 'Execute Workflow'
Verify data is flowing into the analytics database
bash
docker exec -i shop-analytics-db psql -U analytics_admin -d shop_analytics << 'EOF'
SELECT COUNT(*) as ro_count FROM analytics.fact_repair_orders;
SELECT COUNT(*) as labor_count FROM analytics.fact_labor_lines;
SELECT COUNT(*) as parts_count FROM analytics.fact_parts_lines;
SELECT COUNT(*) as event_count FROM analytics.fact_bay_events;
EOF
Note

Run an initial historical backfill for the past 6-12 months of SMS data before activating the incremental sync. The backfill workflow (provided in custom_ai_components) paginates through all historical repair orders. Expect 30-60 minutes for the initial backfill depending on RO volume. Monitor n8n execution logs for the first week to catch any data mapping errors or API rate limit issues.
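
The backfill pagination mentioned above can be sketched as a simple loop; the version below assumes the Tekmetric endpoint from Step 7, writes only the RO header fields, and treats the response field names (content, repairOrderNumber, createdDate, repairOrderStatus) as assumptions to verify against the actual API payload. The full labor and parts mapping stays in the n8n workflows.

Sketch: paginated historical backfill of RO headers (response field names are assumptions)
python
import psycopg2  # pip install psycopg2-binary requests
import requests

API_KEY = "<TEKMETRIC_API_KEY>"
SHOP_ID = "<SHOP_ID>"
BASE = "https://shop.tekmetric.com/api/v1"  # production base URL per the Step 7 notes

conn = psycopg2.connect(host="localhost", dbname="shop_analytics",
                        user="analytics_admin", password="<DB_PASSWORD>")
page = 0
while True:
    resp = requests.get(
        f"{BASE}/shops/{SHOP_ID}/repair-orders",
        headers={"Authorization": f"Bearer {API_KEY}"},
        params={"page": page, "size": 100, "updatedDateStart": "2024-01-01T00:00:00Z"},
        timeout=30,
    )
    resp.raise_for_status()
    ros = resp.json().get("content", [])  # assumed name of the paged list field
    if not ros:
        break
    with conn, conn.cursor() as cur:
        for ro in ros:
            created = str(ro.get("createdDate", ""))[:10]
            if not created:
                continue  # skip records without a usable date
            cur.execute(
                """
                INSERT INTO analytics.fact_repair_orders (sms_ro_number, date_key, ro_status)
                VALUES (%s, %s::date, %s)
                ON CONFLICT (sms_ro_number) DO UPDATE SET ro_status = EXCLUDED.ro_status
                """,
                (str(ro.get("repairOrderNumber")), created,
                 str(ro.get("repairOrderStatus", ""))[:20]),
            )
    page += 1
conn.close()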

Step 12: Security Hardening and Compliance Configuration

Implement security controls required by the FTC Safeguards Rule and PCI DSS. This includes firewall rules, encryption, access controls, logging, and documentation. Generate the initial compliance documentation for the client's Written Information Security Program (WISP).

UFW firewall rules, PostgreSQL SSL setup, and automated security update configuration
bash
# Enable UFW firewall on edge server:
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow from <MSP_VPN_SUBNET> to any port 22 proto tcp comment 'SSH via VPN'
sudo ufw allow from <MSP_VPN_SUBNET> to any port 5678 proto tcp comment 'n8n via VPN'
sudo ufw allow from <POWER_BI_GATEWAY_IP> to any port 5432 proto tcp comment 'Power BI DB'
sudo ufw allow from <PITCREW_WEBHOOK_IPS> to any port 5678 proto tcp comment 'PitCrew webhooks'
sudo ufw enable

# Enable PostgreSQL SSL:
# Edit postgresql.conf in Docker volume:
docker exec -i shop-analytics-db bash -c "cat >> /var/lib/postgresql/data/postgresql.conf << 'EOF'
ssl = on
ssl_cert_file = '/var/lib/postgresql/data/server.crt'
ssl_key_file = '/var/lib/postgresql/data/server.key'
log_connections = on
log_disconnections = on
EOF"

# Generate self-signed cert (or use Let's Encrypt):
docker exec -i shop-analytics-db bash -c \
  'openssl req -new -x509 -days 365 -nodes -text \
   -out /var/lib/postgresql/data/server.crt \
   -keyout /var/lib/postgresql/data/server.key \
   -subj "/CN=shop-analytics-db" && \
   chmod 600 /var/lib/postgresql/data/server.key && \
   chown postgres:postgres /var/lib/postgresql/data/server.*'

docker compose restart postgres

# Configure automated security updates:
sudo apt install -y unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades

# Set up log forwarding (if MSP uses SIEM):
# Install rsyslog or configure Docker logging driver to forward to SIEM

Document the following in the client's WISP:

  • Network diagram with VLAN segmentation
  • Data flow diagram showing what customer data is collected and stored
  • Access control matrix (who can access which systems)
  • Encryption status (data at rest: PostgreSQL SSL; data in transit: HTTPS/TLS)
  • Incident response procedures for a camera system breach
  • Annual penetration testing schedule
  • Bi-annual vulnerability assessment schedule
Note

FTC Safeguards Rule compliance is not optional for any automotive business that handles consumer financing. Penalties are $51,744 per violation. Ensure the camera system cannot view payment card entry areas — reposition cameras if necessary. The WISP documentation should be stored in the client's SharePoint and reviewed quarterly. Schedule the first penetration test within 90 days of deployment. If the client does not already have a WISP, this is an excellent upsell opportunity for the MSP's compliance services.

Step 13: Load Historical Data and Validate Analytics

Execute the historical data backfill, validate data accuracy by cross-referencing Power BI dashboard figures against known SMS reports, and calibrate PitCrew bay detection accuracy. This is the critical QA phase before client handoff.

Historical backfill execution and SQL data validation queries
bash
# Run historical backfill (adjust date range as needed):
# In n8n, open 'SMS Historical Backfill' workflow
# Set parameters: start_date=2024-01-01, end_date=<today>
# Execute workflow and monitor progress

# Validate data counts against SMS:
docker exec -i shop-analytics-db psql -U analytics_admin -d shop_analytics << 'EOF'
-- Compare these numbers against SMS reports:
SELECT 
  COUNT(*) as total_ros,
  SUM(total_labor_sale) as total_labor_revenue,
  SUM(total_parts_sale) as total_parts_revenue,
  SUM(total_ro_sale) as total_revenue
FROM analytics.fact_repair_orders
WHERE date_key >= '2024-01-01' AND date_key < '2025-01-01';

-- Check technician efficiency data:
SELECT 
  t.first_name || ' ' || t.last_name as technician,
  COUNT(*) as job_count,
  ROUND(AVG(l.billed_hours / NULLIF(l.actual_hours, 0) * 100), 1) as avg_efficiency_pct,
  ROUND(SUM(l.billed_hours), 1) as total_billed,
  ROUND(SUM(l.actual_hours), 1) as total_actual
FROM analytics.fact_labor_lines l
JOIN analytics.dim_technicians t ON l.tech_id = t.tech_id
WHERE l.date_key >= '2024-01-01'
GROUP BY t.first_name, t.last_name
ORDER BY avg_efficiency_pct DESC;

-- Check parts margin by job type:
SELECT 
  jt.job_type_name,
  jt.category,
  COUNT(*) as parts_line_count,
  ROUND(SUM(p.total_sale), 2) as total_parts_sale,
  ROUND(SUM(p.total_cost), 2) as total_parts_cost,
  ROUND(AVG(p.margin_pct), 1) as avg_margin_pct
FROM analytics.fact_parts_lines p
JOIN analytics.dim_job_types jt ON p.job_type_id = jt.job_type_id
WHERE p.date_key >= '2024-01-01'
GROUP BY jt.job_type_name, jt.category
ORDER BY avg_margin_pct ASC;
EOF

# Compare Power BI dashboard totals against these SQL results
# Differences should be < 1% (rounding)
Note

Data validation is the most time-consuming step but absolutely critical. Common issues: (1) SMS has repair orders with $0 parts cost because the cost field was never filled — cross-reference with QuickBooks COGS data; (2) Technician clock times may be missing for older ROs if time-tracking was not enforced; (3) Job type categorization may be inconsistent — work with the service manager to define a clean job type taxonomy and create mapping rules. Budget 3-5 days for data validation and cleanup.
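
For the job type cleanup described in the note, one lightweight approach is a keyword-to-job-type mapping applied to the raw labor descriptions already loaded in the database. The keywords and codes below are illustrative only; build the real rule list with the service manager and the dim_job_types taxonomy.

Sketch: apply keyword-based job type mapping rules (illustrative keywords and codes)
python
import psycopg2  # pip install psycopg2-binary

# Illustrative keyword -> job_type_code rules; replace with the agreed taxonomy
MAPPING_RULES = [
    ("brake", "BRAKES"),
    ("oil change", "OIL_CHG"),
    ("alignment", "ALIGN"),
    ("diagnos", "DIAG"),
    ("tire", "TIRE"),
]

conn = psycopg2.connect(host="localhost", dbname="shop_analytics",
                        user="analytics_admin", password="<DB_PASSWORD>")
with conn, conn.cursor() as cur:
    for keyword, code in MAPPING_RULES:
        # Re-point labor lines whose description matches the keyword at the canonical job type
        cur.execute(
            """
            UPDATE analytics.fact_labor_lines l
            SET job_type_id = jt.job_type_id
            FROM analytics.dim_job_types jt
            WHERE jt.job_type_code = %s
              AND l.description ILIKE %s
            """,
            (code, f"%{keyword}%"),
        )
        print(f"{keyword!r} -> {code}: {cur.rowcount} labor lines updated")
conn.close()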

Step 14: Client Training and Go-Live

Conduct training sessions for the shop owner, service manager, and lead technician on using the Power BI dashboards, PitCrew bay status display, and understanding the KPIs. Set up the bay status display TV in the shop. Configure automated email reports.

Bay status display setup:

1. Mount a 43-55 inch commercial display TV in the shop, visible from the service manager's desk
2. Connect a Chromebit or Fire TV Stick with Silk browser
3. Navigate to the PitCrew dashboard URL in kiosk mode
4. Set the display to auto-refresh every 60 seconds

Automated email reports:

1. In Power BI Service: open the dashboard
2. Click the '...' menu > Subscribe to emails
3. Add subscribers: shop owner, service manager
4. Schedule: daily at 7:00 AM (before shop opens)
5. Include: bay utilization summary, tech efficiency, low-margin alerts

Mobile access for the owner:

1. Install the Power BI app on the owner's smartphone (iOS/Android)
2. Sign in with their Microsoft 365 account
3. Navigate to the shared workspace and pin key dashboards to favorites
Note

Training should be conducted in two sessions: (1) A 90-minute hands-on session with the owner and service manager covering all three dashboard pages, how to interpret KPIs, and how to take action on insights; (2) A 30-minute session with technicians explaining how their efficiency is measured and how it benefits them (more efficient techs can earn more through flat-rate pay). Emphasize that this is a tool to help them, not a surveillance system. Leave behind a printed quick-reference guide with KPI definitions and dashboard navigation instructions.

Custom AI Components

Automotive KPI DAX Measures for Power BI

Type: integration

Complete set of DAX measures implementing the three core automotive service analytics domains: Bay Utilization, Technician Efficiency, and Parts Margin by Job Type. These measures power all Power BI dashboard visuals and provide calculated KPIs aligned with industry benchmarks from NCM Associates and DRIVE.

Implementation:

Power BI DAX Measures
dax
-- Bay Utilization, Technician Efficiency, Parts Margin, and Composite KPIs

-- POWER BI DAX MEASURES --
-- Add these in Power BI Desktop > Modeling > New Measure

-- ===== BAY UTILIZATION MEASURES =====

Bay Utilization % = 
VAR _OccupiedMinutes = SUM(agg_daily_bay_utilization[total_occupied_minutes])
VAR _AvailableMinutes = SUM(agg_daily_bay_utilization[total_available_minutes])
RETURN
    IF(_AvailableMinutes > 0, DIVIDE(_OccupiedMinutes, _AvailableMinutes, 0), BLANK())

Avg Vehicles Per Bay Per Day = 
AVERAGE(agg_daily_bay_utilization[vehicle_count])

Avg Dwell Time (Minutes) = 
AVERAGE(agg_daily_bay_utilization[avg_dwell_minutes])

Idle Time % = 
VAR _IdleMinutes = SUM(agg_daily_bay_utilization[total_idle_minutes])
VAR _AvailableMinutes = SUM(agg_daily_bay_utilization[total_available_minutes])
RETURN
    IF(_AvailableMinutes > 0, DIVIDE(_IdleMinutes, _AvailableMinutes, 0), BLANK())

Bay Utilization Target = 0.85 -- 85% industry benchmark

Bay Utilization vs Target = [Bay Utilization %] - [Bay Utilization Target]

Total Shop Available Hours = 
VAR _TotalMinutes = SUM(agg_daily_bay_utilization[total_available_minutes])
RETURN DIVIDE(_TotalMinutes, 60, 0)

Lost Revenue from Idle Bays = 
VAR _IdleHours = DIVIDE(SUM(agg_daily_bay_utilization[total_idle_minutes]), 60, 0)
VAR _AvgLaborRate = AVERAGE(fact_labor_lines[labor_rate])
VAR _AvgEfficiency = [Overall Tech Efficiency %]
RETURN _IdleHours * _AvgLaborRate * _AvgEfficiency

-- ===== TECHNICIAN EFFICIENCY MEASURES =====

Total Billed Hours = SUM(fact_labor_lines[billed_hours])

Total Actual Hours = SUM(fact_labor_lines[actual_hours])

Total Book Hours = SUM(fact_labor_lines[book_hours])

Overall Tech Efficiency % = 
DIVIDE([Total Billed Hours], [Total Actual Hours], 0)

Tech Productivity % = 
DIVIDE([Total Billed Hours], [Total Shop Available Hours], 0)

Tech Proficiency % = 
DIVIDE([Total Billed Hours], [Total Book Hours], 0)

Effective Labor Rate = 
DIVIDE(SUM(fact_labor_lines[labor_sale]), [Total Actual Hours], 0)

Posted Labor Rate = 
AVERAGE(fact_labor_lines[labor_rate])

Labor Rate Recovery % = 
DIVIDE([Effective Labor Rate], [Posted Labor Rate], 0)

Avg RO Labor Revenue = 
DIVIDE(SUM(fact_repair_orders[total_labor_sale]), DISTINCTCOUNT(fact_repair_orders[ro_id]), 0)

Tech Efficiency Target = 1.0 -- 100% = billed equals actual (break-even for flat rate)

Top Performer Threshold = 1.25 -- 125% efficiency = A-tech benchmark

Jobs Per Tech Per Day = 
VAR _JobCount = COUNTROWS(fact_labor_lines)
VAR _WorkDays = DISTINCTCOUNT(fact_labor_lines[date_key])
VAR _TechCount = DISTINCTCOUNT(fact_labor_lines[tech_id])
RETURN DIVIDE(_JobCount, _WorkDays * _TechCount, 0)

Comebacks % = 
VAR _TotalROs = DISTINCTCOUNT(fact_repair_orders[ro_id])
VAR _Comebacks = CALCULATE(
    DISTINCTCOUNT(fact_repair_orders[ro_id]),
    FILTER(fact_repair_orders, fact_repair_orders[ro_status] = "comeback")
)
RETURN DIVIDE(_Comebacks, _TotalROs, 0)

-- ===== PARTS MARGIN MEASURES =====

Total Parts Revenue = SUM(fact_parts_lines[total_sale])

Total Parts Cost = SUM(fact_parts_lines[total_cost])

Total Parts Gross Profit = [Total Parts Revenue] - [Total Parts Cost]

Parts Margin % = 
DIVIDE([Total Parts Gross Profit], [Total Parts Revenue], 0)

Parts Margin Target = 0.50 -- 50% GPM industry target for independents

Parts Margin vs Target = [Parts Margin %] - [Parts Margin Target]

Parts-to-Labor Ratio = 
DIVIDE(SUM(fact_repair_orders[total_parts_sale]), SUM(fact_repair_orders[total_labor_sale]), 0)

Avg RO Parts Revenue = 
DIVIDE(SUM(fact_repair_orders[total_parts_sale]), DISTINCTCOUNT(fact_repair_orders[ro_id]), 0)

Avg RO Value (ARO) = 
DIVIDE(SUM(fact_repair_orders[total_ro_sale]), DISTINCTCOUNT(fact_repair_orders[ro_id]), 0)

Low Margin Parts Lines Count = 
CALCULATE(
    COUNTROWS(fact_parts_lines),
    fact_parts_lines[margin_pct] < 35
)

Low Margin Parts Revenue = 
CALCULATE(
    SUM(fact_parts_lines[total_sale]),
    fact_parts_lines[margin_pct] < 35
)

Margin Leakage Amount = 
VAR _TargetMarginRate = 0.50
RETURN
    SUMX(
        fact_parts_lines,
        VAR _ActualMargin = fact_parts_lines[margin_amount]
        VAR _TargetMargin = fact_parts_lines[total_sale] * _TargetMarginRate
        RETURN IF(_TargetMargin > _ActualMargin, _TargetMargin - _ActualMargin, 0)
    )

-- ===== COMPOSITE / SUMMARY MEASURES =====

Total Revenue = SUM(fact_repair_orders[total_ro_sale])

Total Gross Profit = 
[Total Parts Gross Profit] + SUM(fact_labor_lines[labor_sale]) - 
    SUMX(fact_labor_lines, fact_labor_lines[actual_hours] * RELATED(dim_technicians[hourly_cost_rate]))

Gross Profit % = DIVIDE([Total Gross Profit], [Total Revenue], 0)

RO Count = DISTINCTCOUNT(fact_repair_orders[ro_id])

Car Count = DISTINCTCOUNT(fact_repair_orders[vehicle_vin])

n8n SMS Data Sync Workflow

Type: workflow

Automated n8n workflow that extracts repair order, labor line, parts line, and technician data from Tekmetric's REST API every 2 hours during business hours and loads it into the PostgreSQL analytics database. Handles incremental sync using last-modified timestamps and includes error handling with MSP notification.

Implementation:

n8n workflow definition for Tekmetric SMS Data Sync
json
{
  "name": "SMS Data Sync - Tekmetric",
  "nodes": [
    {
      "parameters": {
        "rule": {
          "interval": [{ "field": "hours", "hoursInterval": 2 }]
        }
      },
      "name": "Schedule Trigger",
      "type": "n8n-nodes-base.scheduleTrigger",
      "position": [250, 300]
    },
    {
      "parameters": {
        "url": "=https://sandbox.tekmetric.com/api/v1/shops/{{$env.TEKMETRIC_SHOP_ID}}/repair-orders",
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth",
        "sendQuery": true,
        "queryParameters": {
          "parameters": [
            { "name": "updatedDateStart", "value": "={{$now.minus(3, 'hours').toISO()}}" },
            { "name": "size", "value": "100" },
            { "name": "page", "value": "0" }
          ]
        },
        "options": { "response": { "response": { "responseFormat": "json" } } }
      },
      "name": "Fetch Repair Orders",
      "type": "n8n-nodes-base.httpRequest",
      "position": [450, 300],
      "credentials": { "httpHeaderAuth": { "name": "Tekmetric API" } }
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO analytics.fact_repair_orders (sms_ro_number, date_key, customer_id, vehicle_vin, vehicle_year, vehicle_make, vehicle_model, total_labor_sale, total_parts_sale, total_parts_cost, total_ro_sale, ro_status, created_at_sms, completed_at_sms) VALUES ($1, $2::date, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13::timestamp, $14::timestamp) ON CONFLICT (sms_ro_number) DO UPDATE SET total_labor_sale = EXCLUDED.total_labor_sale, total_parts_sale = EXCLUDED.total_parts_sale, total_parts_cost = EXCLUDED.total_parts_cost, total_ro_sale = EXCLUDED.total_ro_sale, ro_status = EXCLUDED.ro_status, completed_at_sms = EXCLUDED.completed_at_sms RETURNING ro_id",
        "options": {}
      },
      "name": "Upsert Repair Orders",
      "type": "n8n-nodes-base.postgres",
      "position": [650, 300],
      "credentials": { "postgres": { "name": "Shop Analytics DB" } }
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO analytics.fact_labor_lines (ro_id, tech_id, job_type_id, date_key, description, book_hours, actual_hours, billed_hours, labor_rate, labor_sale, clock_in, clock_out) SELECT ro.ro_id, t.tech_id, COALESCE(jt.job_type_id, 1), ro.date_key, $1, $2, $3, $4, $5, $6, $7::timestamp, $8::timestamp FROM analytics.fact_repair_orders ro LEFT JOIN analytics.dim_technicians t ON t.sms_tech_id = $9 LEFT JOIN analytics.dim_job_types jt ON jt.job_type_code = $10 WHERE ro.sms_ro_number = $11 ON CONFLICT DO NOTHING",
        "options": {}
      },
      "name": "Insert Labor Lines",
      "type": "n8n-nodes-base.postgres",
      "position": [850, 200],
      "credentials": { "postgres": { "name": "Shop Analytics DB" } }
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO analytics.fact_parts_lines (ro_id, job_type_id, date_key, part_number, part_description, vendor, quantity, cost_per_unit, sale_per_unit, total_cost, total_sale) SELECT ro.ro_id, COALESCE(jt.job_type_id, 1), ro.date_key, $1, $2, $3, $4, $5, $6, $7, $8 FROM analytics.fact_repair_orders ro LEFT JOIN analytics.dim_job_types jt ON jt.job_type_code = $9 WHERE ro.sms_ro_number = $10 ON CONFLICT DO NOTHING",
        "options": {}
      },
      "name": "Insert Parts Lines",
      "type": "n8n-nodes-base.postgres",
      "position": [850, 400],
      "credentials": { "postgres": { "name": "Shop Analytics DB" } }
    },
    {
      "parameters": {
        "conditions": { "boolean": [{ "value1": "={{$json.error}}", "value2": true }] }
      },
      "name": "Error Check",
      "type": "n8n-nodes-base.if",
      "position": [1050, 300]
    },
    {
      "parameters": {
        "resource": "message",
        "channel": "#msp-alerts",
        "text": "=⚠️ SMS Data Sync Failed for {{$env.CLIENT_NAME}}\nError: {{$json.error}}\nTimestamp: {{$now.toISO()}}"
      },
      "name": "Alert MSP (Slack)",
      "type": "n8n-nodes-base.slack",
      "position": [1250, 200]
    }
  ],
  "connections": {
    "Schedule Trigger": { "main": [[{ "node": "Fetch Repair Orders", "type": "main", "index": 0 }]] },
    "Fetch Repair Orders": { "main": [[{ "node": "Upsert Repair Orders", "type": "main", "index": 0 }]] },
    "Upsert Repair Orders": { "main": [[{ "node": "Insert Labor Lines", "type": "main", "index": 0 }, { "node": "Insert Parts Lines", "type": "main", "index": 0 }]] },
    "Insert Labor Lines": { "main": [[{ "node": "Error Check", "type": "main", "index": 0 }]] },
    "Error Check": { "main": [[{ "node": "Alert MSP (Slack)", "type": "main", "index": 0 }], []] }
  },
  "settings": { "executionOrder": "v1", "saveManualExecutions": true, "saveDataSuccessExecution": "all" }
}
  • Replace sandbox URL with production: https://shop.tekmetric.com/api/v1/
  • For Shopmonkey: Change endpoint to https://api.shopmonkey.cloud/v3/work_orders and adapt field mappings (work_order_number, line_items array structure differs)
  • For Shop-Ware: Use their GraphQL API endpoint with appropriate queries
  • The workflow includes pagination logic — for the initial backfill, create a copy that iterates through all pages (set page parameter dynamically)
  • Tekmetric RO response includes nested labor[] and parts[] arrays that must be split into individual rows for the labor and parts fact tables
  • Configure n8n environment variables: TEKMETRIC_SHOP_ID, TEKMETRIC_API_KEY, CLIENT_NAME

Daily Bay Utilization Aggregation

Type: workflow

SQL-based aggregation workflow that runs nightly at 11 PM, computing daily bay utilization summaries from raw bay events. Calculates occupied time, idle time, vehicle count, and utilization percentage for each bay for each day. This pre-computed aggregation ensures fast Power BI dashboard performance.

Implementation:

Daily Aggregation SQL (run via n8n Postgres node on nightly schedule) — processes the previous day's bay events into the aggregation table
sql
-- Daily Aggregation SQL (run via n8n Postgres node on nightly schedule)
-- This processes the previous day's bay events into the aggregation table

WITH shop_hours AS (
    -- Define shop operating hours (adjust per client)
    SELECT 
        (CURRENT_DATE - INTERVAL '1 day')::date AS work_date,
        ((CURRENT_DATE - INTERVAL '1 day') + TIME '07:30:00')::timestamp AS open_time,
        ((CURRENT_DATE - INTERVAL '1 day') + TIME '18:00:00')::timestamp AS close_time,
        630 AS available_minutes -- 10.5 hours * 60 minutes
),
bay_transitions AS (
    SELECT 
        be.bay_id,
        be.event_type,
        be.event_timestamp,
        LEAD(be.event_timestamp) OVER (
            PARTITION BY be.bay_id ORDER BY be.event_timestamp
        ) AS next_event_timestamp,
        LEAD(be.event_type) OVER (
            PARTITION BY be.bay_id ORDER BY be.event_timestamp
        ) AS next_event_type
    FROM analytics.fact_bay_events be
    CROSS JOIN shop_hours sh
    WHERE be.date_key = sh.work_date
    AND be.event_timestamp BETWEEN sh.open_time AND sh.close_time
),
bay_durations AS (
    SELECT
        bay_id,
        event_type,
        EXTRACT(EPOCH FROM (COALESCE(next_event_timestamp, (SELECT close_time FROM shop_hours)) - event_timestamp)) / 60.0 AS duration_minutes
    FROM bay_transitions
),
bay_summary AS (
    SELECT
        bd.bay_id,
        (SELECT work_date FROM shop_hours) AS date_key,
        COALESCE(SUM(CASE WHEN bd.event_type = 'vehicle_in' THEN bd.duration_minutes END), 0)::integer AS total_occupied_minutes,
        COALESCE(SUM(CASE WHEN bd.event_type IN ('vehicle_out', 'idle_start') THEN bd.duration_minutes END), 0)::integer AS total_idle_minutes,
        (SELECT available_minutes FROM shop_hours) AS total_available_minutes,
        COUNT(CASE WHEN bd.event_type = 'vehicle_in' THEN 1 END) AS vehicle_count
    FROM bay_durations bd
    GROUP BY bd.bay_id
)
INSERT INTO analytics.agg_daily_bay_utilization 
    (bay_id, date_key, total_occupied_minutes, total_idle_minutes, total_available_minutes, utilization_pct, vehicle_count, avg_dwell_minutes)
SELECT
    bs.bay_id,
    bs.date_key,
    bs.total_occupied_minutes,
    bs.total_idle_minutes,
    bs.total_available_minutes,
    ROUND((bs.total_occupied_minutes::decimal / NULLIF(bs.total_available_minutes, 0)) * 100, 1) AS utilization_pct,
    bs.vehicle_count,
    CASE WHEN bs.vehicle_count > 0 
        THEN (bs.total_occupied_minutes / bs.vehicle_count)::integer 
        ELSE 0 
    END AS avg_dwell_minutes
FROM bay_summary bs
ON CONFLICT (bay_id, date_key) DO UPDATE SET
    total_occupied_minutes = EXCLUDED.total_occupied_minutes,
    total_idle_minutes = EXCLUDED.total_idle_minutes,
    utilization_pct = EXCLUDED.utilization_pct,
    vehicle_count = EXCLUDED.vehicle_count,
    avg_dwell_minutes = EXCLUDED.avg_dwell_minutes;

-- After aggregation, clean up raw events older than 90 days to manage storage:
DELETE FROM analytics.fact_bay_events 
WHERE date_key < CURRENT_DATE - INTERVAL '90 days';

PitCrew Webhook Event Ingestion

Type: integration n8n webhook endpoint that receives real-time bay occupancy events from Leverege PitCrew and inserts them into the analytics database. Handles event deduplication, bay ID mapping, and optional RO correlation when a vehicle can be matched to an active repair order.

Implementation:

n8n Workflow: PitCrew Webhook Ingest
json
{
  "name": "PitCrew Webhook Ingest",
  "nodes": [
    {
      "parameters": {
        "httpMethod": "POST",
        "path": "pitcrew-events",
        "authentication": "headerAuth",
        "options": { "responseCode": 200, "responseData": "{ \"status\": \"received\" }" }
      },
      "name": "PitCrew Webhook",
      "type": "n8n-nodes-base.webhook",
      "position": [250, 300]
    },
    {
      "parameters": {
        "jsCode": "// Map PitCrew event to our schema\nconst event = $input.first().json;\nconst bayMapping = {\n  'pitcrew-bay-1': 1,\n  'pitcrew-bay-2': 2,\n  'pitcrew-bay-3': 3,\n  'pitcrew-bay-4': 4,\n  'pitcrew-bay-5': 5,\n  'pitcrew-bay-6': 6\n};\n\nconst eventTypeMapping = {\n  'bay_occupied': 'vehicle_in',\n  'bay_vacated': 'vehicle_out',\n  'idle_detected': 'idle_start',\n  'activity_resumed': 'idle_end'\n};\n\nconst mapped = {\n  bay_id: bayMapping[event.bay_id] || null,\n  date_key: event.timestamp.substring(0, 10),\n  event_type: eventTypeMapping[event.event_type] || event.event_type,\n  event_timestamp: event.timestamp,\n  vehicle_detected: event.vehicle_detected || false,\n  tech_detected: event.technician_detected || false,\n  source: 'pitcrew'\n};\n\nif (!mapped.bay_id) {\n  throw new Error(`Unknown bay_id: ${event.bay_id}`);\n}\n\nreturn [{ json: mapped }];"
      },
      "name": "Map Event",
      "type": "n8n-nodes-base.code",
      "position": [450, 300]
    },
    {
      "parameters": {
        "operation": "executeQuery",
        "query": "INSERT INTO analytics.fact_bay_events (bay_id, date_key, event_type, event_timestamp, vehicle_detected, tech_detected, source) VALUES ($1, $2::date, $3, $4::timestamp, $5, $6, $7)",
        "options": {}
      },
      "name": "Insert Bay Event",
      "type": "n8n-nodes-base.postgres",
      "position": [650, 300],
      "credentials": { "postgres": { "name": "Shop Analytics DB" } }
    }
  ],
  "connections": {
    "PitCrew Webhook": { "main": [[{ "node": "Map Event", "type": "main", "index": 0 }]] },
    "Map Event": { "main": [[{ "node": "Insert Bay Event", "type": "main", "index": 0 }]] }
  }
}
  • The bayMapping object must be configured during PitCrew onboarding to match PitCrew's bay identifiers to our dim_bays.bay_id values
  • Webhook authentication uses a shared secret in the X-Webhook-Secret header
  • If PitCrew does not support outbound webhooks, configure a polling workflow that calls the PitCrew API every 5 minutes to fetch recent bay status changes
  • For high-volume shops, consider batching inserts (collect 10 events then bulk insert)
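
If the batching approach from the last bullet is adopted, the bulk insert itself is straightforward with psycopg2's execute_values. The sketch below is a standalone illustration rather than the n8n node itself; the hard-coded events stand in for a small batch collected from webhook calls.
python
# Sketch: bulk-insert a small batch of bay events with psycopg2.
# Assumes the analytics.fact_bay_events table defined earlier in this guide;
# the events list stands in for ~10 webhook payloads collected by n8n.
import os
import psycopg2
from psycopg2.extras import execute_values

events = [
    # (bay_id, date_key, event_type, event_timestamp, vehicle_detected, tech_detected, source)
    (1, "2025-01-06", "vehicle_in", "2025-01-06T08:12:00", True, False, "pitcrew"),
    (3, "2025-01-06", "vehicle_out", "2025-01-06T08:14:30", False, False, "pitcrew"),
]

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "localhost"),
    dbname=os.environ.get("DB_NAME", "shop_analytics"),
    user=os.environ.get("DB_USER", "analytics_admin"),
    password=os.environ.get("DB_PASSWORD"),
)
with conn, conn.cursor() as cur:
    execute_values(
        cur,
        """INSERT INTO analytics.fact_bay_events
           (bay_id, date_key, event_type, event_timestamp, vehicle_detected, tech_detected, source)
           VALUES %s""",
        events,
    )
conn.close()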

Parts Margin Anomaly Detection Agent

Type: agent A scheduled Python script that analyzes parts margin data to detect anomalies — unusually low margins on specific job types, sudden margin drops, and parts being sold below cost. Generates weekly alerts sent to the shop owner and MSP team with specific actionable recommendations to recover lost margin.

Implementation:

Parts Margin Anomaly Detection Agent
python
#!/usr/bin/env python3
# Weekly scheduled script
"""
Parts Margin Anomaly Detection Agent
Runs weekly via cron or n8n schedule.
Detects margin anomalies and generates alerts.

Dependencies: pip install psycopg2-binary pandas numpy
"""

import os
import json
import smtplib
import logging
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime, timedelta
from decimal import Decimal

import psycopg2
import psycopg2.extras
import pandas as pd
import numpy as np

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Configuration (set via environment variables)
DB_HOST = os.environ.get('DB_HOST', 'localhost')
DB_PORT = os.environ.get('DB_PORT', '5432')
DB_NAME = os.environ.get('DB_NAME', 'shop_analytics')
DB_USER = os.environ.get('DB_USER', 'analytics_admin')
DB_PASS = os.environ.get('DB_PASSWORD')
SMTP_HOST = os.environ.get('SMTP_HOST', 'smtp.office365.com')
SMTP_PORT = int(os.environ.get('SMTP_PORT', '587'))
SMTP_USER = os.environ.get('SMTP_USER')
SMTP_PASS = os.environ.get('SMTP_PASSWORD')
ALERT_RECIPIENTS = os.environ.get('ALERT_RECIPIENTS', '').split(',')
CLIENT_NAME = os.environ.get('CLIENT_NAME', 'Auto Shop')

TARGET_MARGIN_PCT = 50.0  # Industry target: 50% GPM
CRITICAL_MARGIN_PCT = 25.0  # Below this is critical
BELOW_COST_THRESHOLD = 0.0  # Sold below cost
TREND_LOOKBACK_WEEKS = 12  # Compare current week to trailing average
MIN_PARTS_LINES = 5  # Minimum lines to trigger job-type alert


def get_connection():
    return psycopg2.connect(
        host=DB_HOST, port=DB_PORT, dbname=DB_NAME,
        user=DB_USER, password=DB_PASS
    )


def detect_below_cost_parts(conn, start_date, end_date):
    """Find parts sold at or below cost in the period."""
    query = """
    SELECT p.part_number, p.part_description, p.vendor,
           p.cost_per_unit, p.sale_per_unit, p.quantity,
           p.total_cost, p.total_sale, p.margin_pct,
           jt.job_type_name, ro.sms_ro_number
    FROM analytics.fact_parts_lines p
    JOIN analytics.fact_repair_orders ro ON p.ro_id = ro.ro_id
    LEFT JOIN analytics.dim_job_types jt ON p.job_type_id = jt.job_type_id
    WHERE p.date_key BETWEEN %s AND %s
      AND p.margin_pct <= %s
    ORDER BY p.margin_amount ASC
    LIMIT 50
    """
    df = pd.read_sql(query, conn, params=[start_date, end_date, BELOW_COST_THRESHOLD])
    return df


def detect_low_margin_job_types(conn, start_date, end_date):
    """Find job types with margin significantly below target."""
    query = """
    SELECT jt.job_type_name, jt.category, jt.target_parts_margin_pct,
           COUNT(*) as parts_line_count,
           ROUND(SUM(p.total_sale)::numeric, 2) as total_parts_sale,
           ROUND(SUM(p.total_cost)::numeric, 2) as total_parts_cost,
           ROUND(AVG(p.margin_pct)::numeric, 1) as avg_margin_pct,
           ROUND((SUM(p.total_sale) - SUM(p.total_cost))::numeric, 2) as total_margin
    FROM analytics.fact_parts_lines p
    JOIN analytics.dim_job_types jt ON p.job_type_id = jt.job_type_id
    WHERE p.date_key BETWEEN %s AND %s
    GROUP BY jt.job_type_name, jt.category, jt.target_parts_margin_pct
    HAVING COUNT(*) >= %s
    ORDER BY AVG(p.margin_pct) ASC
    """
    df = pd.read_sql(query, conn, params=[start_date, end_date, MIN_PARTS_LINES])
    df['below_target'] = df['avg_margin_pct'] < TARGET_MARGIN_PCT
    df['margin_gap'] = TARGET_MARGIN_PCT - df['avg_margin_pct']
    df['potential_recovery'] = (df['margin_gap'] / 100) * df['total_parts_sale']
    return df[df['below_target']]


def detect_margin_trend_decline(conn, current_week_start):
    """Detect if this week's margin is declining vs trailing average."""
    lookback_start = current_week_start - timedelta(weeks=TREND_LOOKBACK_WEEKS)
    query = """
    SELECT
        DATE_TRUNC('week', p.date_key)::date as week_start,
        ROUND(AVG(p.margin_pct)::numeric, 1) as avg_margin_pct,
        ROUND(SUM(p.total_sale)::numeric, 2) as total_parts_sale,
        COUNT(*) as parts_lines
    FROM analytics.fact_parts_lines p
    WHERE p.date_key >= %s
    GROUP BY DATE_TRUNC('week', p.date_key)
    ORDER BY week_start
    """
    df = pd.read_sql(query, conn, params=[lookback_start])
    if len(df) < 4:
        return None, None  # Not enough data
    trailing_avg = df['avg_margin_pct'].iloc[:-1].mean()
    current_margin = df['avg_margin_pct'].iloc[-1]
    decline = trailing_avg - current_margin
    return current_margin, decline if decline > 3.0 else None  # Alert if >3% decline


def generate_alert_email(below_cost_df, low_margin_df, current_margin, margin_decline, period_str):
    """Generate HTML email with anomaly findings."""
    html = f"""
    <html><body style='font-family: Arial, sans-serif;'>
    <h2>📊 {CLIENT_NAME} — Parts Margin Alert Report</h2>
    <p>Period: {period_str}</p>
    """

    if margin_decline:
        html += f"""
        <div style='background: #fff3cd; border-left: 4px solid #ffc107; padding: 12px; margin: 12px 0;'>
            <h3>⚠️ Margin Trend Declining</h3>
            <p>Current week average margin: <strong>{current_margin:.1f}%</strong></p>
            <p>Decline from trailing 12-week average: <strong>{margin_decline:.1f} percentage points</strong></p>
            <p><em>Action: Review parts pricing matrix and vendor costs for recent changes.</em></p>
        </div>
        """

    if not below_cost_df.empty:
        html += f"""
        <div style='background: #f8d7da; border-left: 4px solid #dc3545; padding: 12px; margin: 12px 0;'>
            <h3>🚨 Parts Sold Below Cost ({len(below_cost_df)} items)</h3>
            <table border='1' cellpadding='6' cellspacing='0' style='border-collapse: collapse; font-size: 13px;'>
            <tr style='background: #dc3545; color: white;'>
                <th>Part #</th><th>Description</th><th>Cost</th><th>Sale</th><th>Margin %</th><th>Job Type</th><th>RO #</th>
            </tr>
        """
        for _, row in below_cost_df.head(20).iterrows():
            html += f"<tr><td>{row['part_number']}</td><td>{row['part_description'][:40]}</td><td>${row['cost_per_unit']:.2f}</td><td>${row['sale_per_unit']:.2f}</td><td>{row['margin_pct']:.1f}%</td><td>{row['job_type_name']}</td><td>{row['sms_ro_number']}</td></tr>"
        html += "</table></div>"

    if not low_margin_df.empty:
        total_recovery = low_margin_df['potential_recovery'].sum()
        html += f"""
        <div style='background: #fff3cd; border-left: 4px solid #ffc107; padding: 12px; margin: 12px 0;'>
            <h3>💰 Job Types Below {TARGET_MARGIN_PCT:.0f}% Target Margin</h3>
            <p>Estimated recoverable margin: <strong>${total_recovery:,.2f}</strong></p>
            <table border='1' cellpadding='6' cellspacing='0' style='border-collapse: collapse; font-size: 13px;'>
            <tr style='background: #ffc107;'>
                <th>Job Type</th><th>Category</th><th>Avg Margin</th><th>Parts Revenue</th><th>Gap to Target</th><th>Recovery Potential</th>
            </tr>
        """
        for _, row in low_margin_df.iterrows():
            html += f"<tr><td>{row['job_type_name']}</td><td>{row['category']}</td><td>{row['avg_margin_pct']:.1f}%</td><td>${row['total_parts_sale']:,.2f}</td><td>{row['margin_gap']:.1f}%</td><td>${row['potential_recovery']:,.2f}</td></tr>"
        html += "</table></div>"

    if below_cost_df.empty and low_margin_df.empty and not margin_decline:
        html += "<p style='color: green;'>✅ No margin anomalies detected this period. All job types within target range.</p>"

    html += """
    <hr><p style='font-size: 11px; color: #666;'>Generated by Shop Intelligence Analytics Platform | Managed by your MSP partner</p>
    </body></html>
    """
    return html


def send_email(subject, html_body):
    msg = MIMEMultipart('alternative')
    msg['Subject'] = subject
    msg['From'] = SMTP_USER
    msg['To'] = ', '.join(ALERT_RECIPIENTS)
    msg.attach(MIMEText(html_body, 'html'))
    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()
        server.login(SMTP_USER, SMTP_PASS)
        server.sendmail(SMTP_USER, ALERT_RECIPIENTS, msg.as_string())
    logger.info(f'Alert email sent to {len(ALERT_RECIPIENTS)} recipients')


def main():
    today = datetime.now().date()
    week_start = today - timedelta(days=today.weekday())
    week_end = today
    period_str = f"{week_start.strftime('%b %d')} - {week_end.strftime('%b %d, %Y')}"

    conn = get_connection()
    try:
        below_cost = detect_below_cost_parts(conn, week_start, week_end)
        low_margin = detect_low_margin_job_types(conn, week_start, week_end)
        current_margin, decline = detect_margin_trend_decline(conn, week_start)

        has_anomalies = not below_cost.empty or not low_margin.empty or decline
        if has_anomalies:
            subject = f'⚠️ {CLIENT_NAME} Parts Margin Alert — {period_str}'
        else:
            subject = f'✅ {CLIENT_NAME} Parts Margin OK — {period_str}'

        html = generate_alert_email(below_cost, low_margin, current_margin, decline, period_str)
        send_email(subject, html)

        logger.info(f'Analysis complete: {len(below_cost)} below-cost items, '
                    f'{len(low_margin)} low-margin job types, '
                    f'margin decline: {decline}')
    finally:
        conn.close()


if __name__ == '__main__':
    main()
1. Save as /opt/shop-analytics/scripts/margin_anomaly_agent.py
2. Install dependencies: pip install psycopg2-binary pandas numpy
3. Create a .env file with the required environment variables (DB_*, SMTP_*, ALERT_RECIPIENTS, CLIENT_NAME)
4. Add to crontab (run every Monday at 7 AM): 0 7 * * 1 cd /opt/shop-analytics/scripts && set -a && . ./.env && set +a && python3 margin_anomaly_agent.py (cron's default /bin/sh has no source builtin; set -a exports the sourced variables to the script) — OR trigger from n8n via an Execute Command node on a weekly schedule

Technician Efficiency Scoring Prompt

Type: prompt GPT-4 / Claude prompt template used in the monthly Technician Efficiency Report generator (the same insights also feed the Quarterly Business Review). Takes technician efficiency data from the analytics database and generates natural-language insights and recommendations for the shop owner. Called via the OpenAI or Anthropic API from n8n or a Python script.

Implementation:

System Prompt — Technician Efficiency Analyst

You are an expert automotive service department performance analyst working for a managed service provider. Your role is to analyze technician efficiency data and provide clear, actionable insights to auto shop owners and service managers. You understand these key automotive KPIs:
- Efficiency %: Billed hours / Clock hours (target: 100%+, A-techs hit 125%+)
- Productivity %: Billed hours / Available hours (target: 85%+)
- Proficiency %: Billed hours / Book hours (target: 100%+, indicates speed vs book time)
- Effective Labor Rate: Total labor revenue / Total clock hours
- Comeback %: Percentage of ROs that return for rework (target: <2%)
Provide insights that are specific, data-driven, and actionable. Avoid generic advice. Reference specific technicians by first name and specific job categories where applicable. Recommendations should include estimated financial impact.

User Prompt Template — Technician Efficiency Report

Analyze the following technician efficiency data for {{CLIENT_NAME}} for the period {{PERIOD_START}} to {{PERIOD_END}}.

Shop context:
- Number of bays: {{BAY_COUNT}}
- Shop hours: {{SHOP_HOURS}} (e.g., Mon-Fri 7:30am-6pm, Sat 8am-2pm)
- Posted labor rate: ${{POSTED_LABOR_RATE}}/hr
- Number of technicians: {{TECH_COUNT}}

Technician Performance Data:
{{TECH_DATA_TABLE}}
Format: Name | Skill Level | Billed Hours | Clock Hours | Efficiency % | Productivity % | Proficiency % | Effective Labor Rate | Comeback % | Top 3 Job Types

Overall shop metrics:
- Total billed hours: {{TOTAL_BILLED}}
- Total available hours: {{TOTAL_AVAILABLE}}
- Average bay utilization: {{BAY_UTIL_PCT}}%
- Average RO value: ${{AVG_RO_VALUE}}
- Total RO count: {{RO_COUNT}}

Please provide:
1. **Executive Summary** (2-3 sentences on overall shop performance)
2. **Individual Technician Analysis** (for each tech: strengths, areas for improvement, specific recommendations)
3. **Scheduling Optimization** (based on efficiency patterns and bay utilization, recommend scheduling changes)
4. **Training Recommendations** (identify specific skill gaps and suggest training focus areas)
5. **Financial Impact Summary** (estimate monthly revenue impact if recommendations are implemented)

Format as a professional report suitable for printing and discussing in a monthly shop meeting.
Example API Call
python
# Generate Technician Efficiency Report via OpenAI

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_tech_efficiency_report(tech_data, shop_context):
    prompt = USER_PROMPT_TEMPLATE.format(
        CLIENT_NAME=shop_context['name'],
        PERIOD_START=shop_context['period_start'],
        PERIOD_END=shop_context['period_end'],
        BAY_COUNT=shop_context['bay_count'],
        SHOP_HOURS=shop_context['shop_hours'],
        POSTED_LABOR_RATE=shop_context['labor_rate'],
        TECH_COUNT=len(tech_data),
        TECH_DATA_TABLE=format_tech_table(tech_data),
        TOTAL_BILLED=shop_context['total_billed'],
        TOTAL_AVAILABLE=shop_context['total_available'],
        BAY_UTIL_PCT=shop_context['bay_util_pct'],
        AVG_RO_VALUE=shop_context['avg_ro_value'],
        RO_COUNT=shop_context['ro_count']
    )
    
    response = client.chat.completions.create(
        model='gpt-4o',
        messages=[
            {'role': 'system', 'content': SYSTEM_PROMPT},
            {'role': 'user', 'content': prompt}
        ],
        temperature=0.3,
        max_tokens=3000
    )
    return response.choices[0].message.content

# Cost: ~$0.02-0.05 per report generation (GPT-4o pricing)
# Schedule: Monthly, run before the shop's monthly review meeting
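
Where the Anthropic API is used instead of OpenAI, the same system and user prompts map onto the Messages API. A minimal sketch follows; the model string is illustrative, so substitute whichever Claude model the client engagement standardizes on.
python
# Sketch: generate the same report via the Anthropic Messages API
# (pip install anthropic). The model name below is illustrative.
import anthropic

def generate_tech_efficiency_report_claude(prompt: str, system_prompt: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    message = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=3000,
        temperature=0.3,
        system=system_prompt,  # Claude takes the system prompt as a top-level field
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text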

Custom Bay Occupancy Detection (Fallback Alternative to PitCrew)

Type: skill Lightweight Python-based bay occupancy detection using YOLOv8 for vehicle detection on RTSP camera feeds. This is the fallback option if Leverege PitCrew is not available or budget does not allow. Runs on the edge server, processes one frame every 30 seconds per camera, and writes bay events to the PostgreSQL database.

Implementation:

Custom Bay Occupancy Detector
python
#!/usr/bin/env python3
# Fallback alternative to Leverege PitCrew. Deploy as a systemd service on
# the edge server.
"""
Custom Bay Occupancy Detector
Fallback alternative to Leverege PitCrew.
Uses YOLOv8 nano model for vehicle detection on RTSP camera feeds.

Requirements:
  pip install ultralytics opencv-python-headless psycopg2-binary numpy
  Hardware: CPU-only is sufficient for 30-second frame intervals
            GPU recommended if processing more frequently

Deploy: Run as a systemd service on the edge server.
"""

import os
import sys
import time
import logging
from datetime import datetime
from typing import Dict, Optional, Tuple

import cv2
import numpy as np
import psycopg2
from ultralytics import YOLO

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[logging.StreamHandler(), logging.FileHandler('/var/log/bay-detector.log')]
)
logger = logging.getLogger('bay_detector')

# Configuration
CAMERAS = {
    1: {'name': 'Bay 1', 'rtsp_url': os.environ.get('CAM_BAY1_RTSP', 'rtsp://192.168.20.10:7447/stream1'),
        'roi': [(100, 200), (900, 200), (900, 700), (100, 700)]},  # Polygon ROI points
    2: {'name': 'Bay 2', 'rtsp_url': os.environ.get('CAM_BAY2_RTSP', 'rtsp://192.168.20.11:7447/stream1'),
        'roi': [(100, 200), (900, 200), (900, 700), (100, 700)]},
    3: {'name': 'Bay 3', 'rtsp_url': os.environ.get('CAM_BAY3_RTSP', 'rtsp://192.168.20.12:7447/stream1'),
        'roi': [(100, 200), (900, 200), (900, 700), (100, 700)]},
    4: {'name': 'Bay 4', 'rtsp_url': os.environ.get('CAM_BAY4_RTSP', 'rtsp://192.168.20.13:7447/stream1'),
        'roi': [(100, 200), (900, 200), (900, 700), (100, 700)]},
    5: {'name': 'Bay 5', 'rtsp_url': os.environ.get('CAM_BAY5_RTSP', 'rtsp://192.168.20.14:7447/stream1'),
        'roi': [(100, 200), (900, 200), (900, 700), (100, 700)]},
    6: {'name': 'Bay 6', 'rtsp_url': os.environ.get('CAM_BAY6_RTSP', 'rtsp://192.168.20.15:7447/stream1'),
        'roi': [(100, 200), (900, 200), (900, 700), (100, 700)]},
}

DB_CONFIG = {
    'host': os.environ.get('DB_HOST', 'localhost'),
    'port': os.environ.get('DB_PORT', '5432'),
    'dbname': os.environ.get('DB_NAME', 'shop_analytics'),
    'user': os.environ.get('DB_USER', 'analytics_admin'),
    'password': os.environ.get('DB_PASSWORD'),
}

CHECK_INTERVAL = 30  # seconds between frame captures
VEHICLE_CLASSES = [2, 5, 7]  # COCO: car=2, bus=5, truck=7
CONFIDENCE_THRESHOLD = 0.45
STATE_CHANGE_CONFIRMATIONS = 3  # Require N consecutive consistent readings


class BayState:
    def __init__(self, bay_id: int):
        self.bay_id = bay_id
        self.occupied = False
        self.last_change = None
        self.confirmation_count = 0
        self.pending_state = None

    def update(self, vehicle_detected: bool) -> Optional[str]:
        """Update bay state. Returns event_type if state changed, None otherwise."""
        if vehicle_detected != self.occupied:
            if self.pending_state == vehicle_detected:
                self.confirmation_count += 1
            else:
                self.pending_state = vehicle_detected
                self.confirmation_count = 1

            if self.confirmation_count >= STATE_CHANGE_CONFIRMATIONS:
                self.occupied = vehicle_detected
                self.last_change = datetime.now()
                self.pending_state = None
                self.confirmation_count = 0
                return 'vehicle_in' if vehicle_detected else 'vehicle_out'
        else:
            self.pending_state = None
            self.confirmation_count = 0
        return None


class BayDetector:
    def __init__(self):
        logger.info('Loading YOLOv8n model...')
        self.model = YOLO('yolov8n.pt')  # Downloads automatically on first run (~6MB)
        self.bay_states: Dict[int, BayState] = {
            bay_id: BayState(bay_id) for bay_id in CAMERAS
        }
        self.db_conn = None
        self._connect_db()

    def _connect_db(self):
        try:
            self.db_conn = psycopg2.connect(**DB_CONFIG)
            self.db_conn.autocommit = True
            logger.info('Database connected')
        except Exception as e:
            logger.error(f'Database connection failed: {e}')
            self.db_conn = None

    def _ensure_db(self):
        if self.db_conn is None or self.db_conn.closed:
            self._connect_db()

    def capture_frame(self, rtsp_url: str) -> Optional[np.ndarray]:
        """Capture a single frame from RTSP stream."""
        try:
            cap = cv2.VideoCapture(rtsp_url)
            cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)
            ret, frame = cap.read()
            cap.release()
            if ret:
                return frame
            else:
                logger.warning(f'Failed to capture frame from {rtsp_url}')
                return None
        except Exception as e:
            logger.error(f'Error capturing from {rtsp_url}: {e}')
            return None

    def detect_vehicle_in_roi(self, frame: np.ndarray, roi_points: list) -> bool:
        """Detect if a vehicle is present within the ROI polygon."""
        results = self.model(frame, verbose=False, conf=CONFIDENCE_THRESHOLD)
        roi_polygon = np.array(roi_points, dtype=np.int32)

        for result in results:
            for box in result.boxes:
                cls = int(box.cls[0])
                if cls in VEHICLE_CLASSES:
                    x1, y1, x2, y2 = map(int, box.xyxy[0])
                    center_x = (x1 + x2) // 2
                    center_y = (y1 + y2) // 2
                    if cv2.pointPolygonTest(roi_polygon, (center_x, center_y), False) >= 0:
                        return True
        return False

    def record_event(self, bay_id: int, event_type: str):
        """Write bay event to database."""
        self._ensure_db()
        if not self.db_conn:
            logger.error('Cannot record event: no database connection')
            return
        try:
            now = datetime.now()
            with self.db_conn.cursor() as cur:
                cur.execute(
                    """INSERT INTO analytics.fact_bay_events 
                       (bay_id, date_key, event_type, event_timestamp, vehicle_detected, source)
                       VALUES (%s, %s, %s, %s, %s, 'custom_ai')""",
                    (bay_id, now.date(), event_type, now, event_type == 'vehicle_in')
                )
            logger.info(f'Bay {bay_id}: {event_type} at {now}')
        except Exception as e:
            logger.error(f'Database insert failed: {e}')

    def run(self):
        """Main detection loop."""
        logger.info(f'Starting bay detection for {len(CAMERAS)} bays, interval={CHECK_INTERVAL}s')
        while True:
            for bay_id, cam_config in CAMERAS.items():
                frame = self.capture_frame(cam_config['rtsp_url'])
                if frame is None:
                    continue

                vehicle_present = self.detect_vehicle_in_roi(frame, cam_config['roi'])
                event = self.bay_states[bay_id].update(vehicle_present)

                if event:
                    self.record_event(bay_id, event)

            time.sleep(CHECK_INTERVAL)


if __name__ == '__main__':
    detector = BayDetector()
    detector.run()

# --- SYSTEMD SERVICE FILE ---
# Save as /etc/systemd/system/bay-detector.service
# [Unit]
# Description=Bay Occupancy Detector
# After=network.target docker.service
# Requires=docker.service
#
# [Service]
# Type=simple
# User=analytics
# WorkingDirectory=/opt/shop-analytics/scripts
# EnvironmentFile=/opt/shop-analytics/.env
# ExecStart=/usr/bin/python3 /opt/shop-analytics/scripts/bay_detector.py
# Restart=always
# RestartSec=10
#
# [Install]
# WantedBy=multi-user.target
#
# Enable: sudo systemctl enable bay-detector && sudo systemctl start bay-detector

# --- ROI CALIBRATION TOOL ---
# Run once per camera to define the ROI polygon interactively:
# python3 -c "
# import cv2
# cap = cv2.VideoCapture('rtsp://...')
# ret, frame = cap.read()
# cap.release()
# points = []
# def click(event, x, y, flags, param):
#     if event == cv2.EVENT_LBUTTONDOWN:
#         points.append((x, y))
#         print(f'Point: ({x}, {y}), Total: {len(points)}')
# cv2.imshow('ROI', frame)
# cv2.setMouseCallback('ROI', click)
# cv2.waitKey(0)
# print(f'ROI: {points}')
# "
# Copy the printed ROI points into the CAMERAS config above.

# --- PERFORMANCE NOTES ---
# YOLOv8n on CPU (Intel Xeon E-2314): ~50-80ms per frame inference
# 6 cameras * 1 frame / 30 sec = 12 inferences/minute = very low CPU load
# Memory: ~500MB for model + frame buffers
# Disk: Negligible (no video storage, only event records)

Testing & Validation

  • NETWORK CONNECTIVITY TEST: From a laptop connected to the camera VLAN (VLAN 20), ping each camera IP address and verify response. Then from a laptop on the business VLAN (VLAN 10), attempt to ping camera IPs — this should FAIL, confirming VLAN isolation. Verify firewall blocks between VLAN 20 and VLAN 30 (POS) in both directions.
  • CAMERA FEED QUALITY TEST: For each of the 8 cameras, open the UniFi Protect live view and verify: (a) full bay floor is visible, (b) a vehicle on the lift is detectable, (c) image quality is sufficient in both open-bay-door (daylight) and closed-door (artificial light) conditions, (d) no dead pixels or obstructions, (e) IR night vision activates properly when lights are off.
  • RTSP STREAM TEST: From the edge server, use ffplay or VLC to open each camera's RTSP stream URL (rtsp://<UNVR-IP>:7447/<stream-id>). Verify all 8 streams are accessible and stable for at least 5 minutes without dropout. This confirms PitCrew or the custom AI detector will have reliable feed access.
  • SMS API DATA TEST: Execute the Tekmetric (or equivalent SMS) API call from the edge server and verify the response includes: repair order number, labor lines with tech assignments and hours, parts lines with cost and sale prices, and status fields. Compare 10 random repair orders from the API response against the SMS UI to confirm data accuracy.
  • ETL PIPELINE TEST: Trigger the n8n SMS Data Sync workflow manually. After completion, query the PostgreSQL database and verify: (a) fact_repair_orders has new records matching recent SMS data, (b) fact_labor_lines has tech assignments with hours populated, (c) fact_parts_lines has cost and sale prices populated, (d) no orphaned records (labor/parts lines without matching RO); an orphan-check sketch appears after this checklist. Compare total revenue in database vs SMS monthly report — should match within 1%.
  • PARTS MARGIN CALCULATION TEST: Select 5 repair orders spanning different job types. For each, manually calculate parts margin from the SMS ((sale price - cost) / sale price × 100). Compare against the Power BI dashboard parts margin display for those same ROs. Values should match exactly. If discrepancies exist, check whether QuickBooks COGS data is overriding SMS cost data incorrectly.
  • BAY OCCUPANCY DETECTION TEST: With PitCrew (or custom detector) active, perform a controlled test: drive a vehicle into Bay 1, wait 3 minutes, then remove it. Verify: (a) PitCrew/detector registers 'vehicle_in' event within 2 minutes of vehicle entering, (b) 'vehicle_out' event registers within 2 minutes of vehicle leaving, (c) events appear in the fact_bay_events database table with correct bay_id and timestamps.
  • BAY UTILIZATION DASHBOARD TEST: After running the daily aggregation on a day with known bay activity, verify the Power BI Bay Utilization page shows: (a) correct utilization percentage per bay (compare against manual time log if available), (b) heat map colors reflect relative utilization, (c) vehicle count per bay matches the number of ROs dispatched to that bay in the SMS, (d) idle time calculation is reasonable (total available hours minus occupied time).
  • TECHNICIAN EFFICIENCY DASHBOARD TEST: In Power BI, select a known technician and a known date range. Verify: (a) billed hours matches the SMS technician report, (b) actual/clock hours matches the SMS time clock, (c) efficiency percentage calculates correctly (billed/actual × 100), (d) effective labor rate is reasonable (total labor revenue / total clock hours for that tech).
  • PARTS MARGIN ANOMALY ALERT TEST: Manually insert a test parts line into the database with margin_pct of -10% (sold below cost). Run the margin anomaly detection agent manually. Verify: (a) the agent detects the below-cost part, (b) generates an email with the correct part details, (c) email is received by all configured recipients, (d) the email HTML renders correctly in Outlook/Gmail. Delete the test record after verification.
  • SECURITY COMPLIANCE TEST: (a) Verify PostgreSQL connections require SSL (attempt non-SSL connection — should fail), (b) Verify UFW is active and rules are correct (sudo ufw status verbose), (c) Verify camera VLAN cannot reach POS VLAN (attempt connection from camera network to payment terminal IP), (d) Verify VPN is required for remote MSP access (attempt SSH to edge server from outside VPN — should fail), (e) Verify no camera has line-of-sight to payment card entry areas.
  • POWER BI SCHEDULED REFRESH TEST: In Power BI Service, trigger a manual dataset refresh. Verify it completes successfully within 10 minutes. Check that all dashboard visuals update with the latest data. Configure and verify the automated 4x daily refresh schedule (6am, 10am, 2pm, 6pm) runs without errors for 3 consecutive days.
  • END-TO-END SMOKE TEST: Create a real repair order in the SMS with a known technician, bay assignment, and parts. Complete the RO through the normal workflow. Wait for the next ETL cycle (or trigger manually). Verify the complete data chain: RO appears in fact_repair_orders → labor lines appear with correct tech and hours → parts lines appear with correct margins → bay events show vehicle_in/out for the assigned bay → all three Power BI dashboards reflect the new data correctly.
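
A minimal sketch of the orphan-record portion of the ETL pipeline test, using the fact table names referenced in this guide; the revenue reconciliation is simpler to do by comparing the dashboard revenue figure against the SMS monthly report directly.
python
# Sketch: post-ETL orphan-record checks. Table names match this guide's schema.
import os
import psycopg2

CHECKS = {
    "orphaned labor lines": """
        SELECT COUNT(*) FROM analytics.fact_labor_lines l
        LEFT JOIN analytics.fact_repair_orders ro ON l.ro_id = ro.ro_id
        WHERE ro.ro_id IS NULL""",
    "orphaned parts lines": """
        SELECT COUNT(*) FROM analytics.fact_parts_lines p
        LEFT JOIN analytics.fact_repair_orders ro ON p.ro_id = ro.ro_id
        WHERE ro.ro_id IS NULL""",
}

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "localhost"),
    dbname=os.environ.get("DB_NAME", "shop_analytics"),
    user=os.environ.get("DB_USER", "analytics_admin"),
    password=os.environ.get("DB_PASSWORD"),
)
with conn.cursor() as cur:
    for label, sql in CHECKS.items():
        cur.execute(sql)
        count = cur.fetchone()[0]
        print(f"{label}: {count} (expect 0)")
conn.close()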

Client Handoff

The client handoff should be conducted as two separate sessions over 1-2 days, plus a follow-up session 2 weeks after go-live.

Session 1: Owner/Service Manager Training (90 minutes)

  • Walk through each of the three Power BI dashboard pages: Bay Utilization, Technician Efficiency, and Parts Margin by Job Type
  • Explain each KPI with industry benchmarks: Bay utilization target 85%, tech efficiency target 100%+, parts margin target 50% GPM
  • Demonstrate how to filter by date range, technician, job type, and bay number
  • Show how to use PitCrew real-time bay status display and its actionable insights
  • Review the automated email report they will receive daily at 7 AM
  • Install and configure Power BI mobile app on owner's smartphone
  • Show how to interpret the weekly Parts Margin Anomaly Alert emails
  • Discuss the monthly AI-generated technician efficiency report and the quarterly business review (QBR) they will receive

Session 2: Technician Briefing (30 minutes)

  • Explain at a high level what the system measures and why it benefits them
  • Emphasize that efficient technicians earn more under flat-rate pay and the system helps identify unfair work distribution
  • Show the bay status display and how it helps scheduling
  • Address any concerns about surveillance — cameras monitor bays for vehicle presence, not individual behavior

Documentation to Leave Behind

  • Printed KPI Quick Reference Card (laminated, posted in service manager office) defining all metrics with target ranges
  • Power BI Dashboard Navigation Guide (4-page PDF) with screenshots and step-by-step instructions
  • PitCrew User Guide (from Leverege)
  • Network diagram showing all installed equipment and IP addresses
  • Emergency contact card with MSP support phone/email and escalation procedures
  • Login credentials document (sealed envelope to shop owner) for all systems
  • Camera location map with bay assignments
  • FTC Safeguards Rule compliance documentation additions (network diagram, data flow diagram, access control matrix)

2-Week Follow-Up Session (60 minutes)

  • Review actual KPI data from the first 2 weeks of operation
  • Discuss any data quality issues discovered and remediate
  • Fine-tune job type categorization based on the client's actual mix
  • Adjust alert thresholds if too many or too few alerts are firing
  • Address any user questions or workflow friction
  • Confirm automated reports and refreshes are working correctly

Success Criteria to Review

Maintenance

Ongoing Managed Service Responsibilities:

Daily (Automated + Spot Check)

  • Monitor n8n workflow execution logs for ETL failures (automated Slack/email alerts on failure)
  • Verify Power BI dataset refresh completed successfully (check Power BI Service refresh history)
  • Confirm PitCrew bay detection is active (check for recent bay events in the database; a spot-check sketch follows this list)
  • Automated UPS battery status check via SNMP or APC management interface
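
For the bay-event spot check above, a short script like the following can run from the edge server and feed the MSP's existing alerting; it only assumes the analytics.fact_bay_events table defined earlier in this guide.
python
# Sketch: confirm bay detection is still producing events. Flags a problem if
# no rows have landed in analytics.fact_bay_events during the last 24 hours.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "localhost"),
    dbname=os.environ.get("DB_NAME", "shop_analytics"),
    user=os.environ.get("DB_USER", "analytics_admin"),
    password=os.environ.get("DB_PASSWORD"),
)
with conn.cursor() as cur:
    cur.execute(
        """SELECT COUNT(*) FROM analytics.fact_bay_events
           WHERE event_timestamp >= NOW() - INTERVAL '24 hours'"""
    )
    recent_events = cur.fetchone()[0]
conn.close()

if recent_events == 0:
    # Hook this into the MSP's existing alerting (email, Slack, PSA ticket).
    print("ALERT: no bay events recorded in the last 24 hours; check PitCrew/detector")
else:
    print(f"OK: {recent_events} bay events in the last 24 hours")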

Weekly

  • Review Parts Margin Anomaly Alert email — even if no anomalies, verify the report was generated and sent
  • Check NVR storage utilization — should remain below 80% with 30-day retention policy
  • Review n8n workflow execution statistics for performance degradation (increasing execution times may indicate API rate limiting or database growth issues)
  • Verify camera uptime via UniFi Protect — all cameras should show 99%+ uptime

Monthly

  • Generate and deliver the AI-powered Technician Efficiency Report via the GPT-4 prompt component
  • Conduct a 30-minute remote check-in with the service manager to review insights and gather feedback
  • Review and rotate any system passwords per security policy
  • Check for firmware updates for cameras, NVR, and PoE switch — apply during off-hours maintenance window (Sunday night)
  • Update Docker images (PostgreSQL, n8n) if security patches are available: docker compose pull && docker compose up -d
  • Review PostgreSQL database size and performance — vacuum and analyze tables if needed
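
For the database size and vacuum item above, a quick sketch using PostgreSQL's catalog functions; it assumes only the analytics schema used throughout this guide.
python
# Sketch: report the largest analytics tables and VACUUM ANALYZE each one.
# Uses standard PostgreSQL catalog functions; adjust the schema filter as needed.
import os
import psycopg2

conn = psycopg2.connect(
    host=os.environ.get("DB_HOST", "localhost"),
    dbname=os.environ.get("DB_NAME", "shop_analytics"),
    user=os.environ.get("DB_USER", "analytics_admin"),
    password=os.environ.get("DB_PASSWORD"),
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block
with conn.cursor() as cur:
    cur.execute(
        """SELECT relname, pg_size_pretty(pg_total_relation_size(c.oid))
           FROM pg_class c
           JOIN pg_namespace n ON n.oid = c.relnamespace
           WHERE n.nspname = 'analytics' AND c.relkind = 'r'
           ORDER BY pg_total_relation_size(c.oid) DESC"""
    )
    for table, size in cur.fetchall():
        print(f"{table}: {size}")
        cur.execute(f'VACUUM ANALYZE analytics."{table}"')
conn.close()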

Quarterly

  • Conduct Quarterly Business Review (QBR) on-site with shop owner: Present 90-day KPI trends with year-over-year comparisons (once available); Identify top 3 actionable improvement opportunities; Review ROI of analytics investment (incremental revenue from efficiency gains vs. monthly cost); Adjust job type taxonomy and target margins based on changing parts costs; Recalibrate bay detection zones if shop layout has changed
  • Perform vulnerability scan on all analytics infrastructure (FTC Safeguards requirement)
  • Update Power BI dashboards with any new measures or visuals requested by client
  • Review and update the FTC Safeguards compliance documentation

Semi-Annually

  • Conduct penetration test on the analytics infrastructure (FTC Safeguards Rule requirement)
  • Review and renew SSL certificates if not auto-renewing
  • Evaluate whether the analytics platform should be expanded (additional locations, new KPIs)
  • Review camera image quality — lenses may need cleaning in dusty shop environments

Annually

  • Full infrastructure review: evaluate hardware lifecycle (cameras typically 5-7 years, NVR 3-5 years, HDDs 3-4 years)
  • Review software licensing and subscription renewals
  • Renegotiate PitCrew and SMS subscription pricing based on usage
  • Present annual ROI summary to shop owner
  • Update all system documentation

Model/AI Retraining Triggers

  • PitCrew: Leverege handles model updates automatically via their cloud platform
  • Custom bay detector (if used): Retrain/recalibrate if: (a) camera angle changes, (b) bay layout changes, (c) detection accuracy drops below 90% (measured by weekly spot checks), (d) new vehicle types are regularly misdetected
  • Parts margin anomaly thresholds: Adjust quarterly based on supplier price changes and market conditions

SLA Considerations

  • Dashboard availability: 99% uptime (measured by Power BI Service status)
  • ETL pipeline: Maximum 4-hour data lag during business hours; 24-hour resolution for pipeline failures
  • Camera system: 8-hour on-site response for camera/NVR hardware failure; next-business-day for non-critical issues
  • Anomaly alerts: Sent within 24 hours of detection (weekly cycle)
  • QBR reports: Delivered within 5 business days of quarter end

Escalation Path

1. Automated alert fires → MSP L1 tech reviews within 2 hours
2. L1 cannot resolve → Escalate to L2 (BI developer/network engineer) within 4 hours
3. Vendor-specific issue → Open ticket with Leverege (PitCrew), Ubiquiti (cameras), Tekmetric (SMS API), or Microsoft (Power BI)
4. Critical business impact (all dashboards down) → MSP manager notified, 4-hour resolution target

Alternatives

...

Leverege PitCrew as Primary (Turnkey SaaS Approach)

Instead of building custom Power BI dashboards and the PostgreSQL data warehouse, rely primarily on Leverege PitCrew for bay utilization analytics and the shop management system's built-in reporting for technician efficiency and parts margins. PitCrew provides its own dashboards for bay intelligence. Tekmetric, Shopmonkey, and Shop-Ware all have built-in KPI dashboards covering tech efficiency and parts margins. The MSP role shifts from dashboard developer to configuration/optimization consultant.

Dealership DMS Integration (CDK/Reynolds) for Franchise Dealers

For franchise dealerships using CDK Global or Reynolds & Reynolds, replace the independent SMS integration with DMS-specific data extraction. Use CDK's Fortellis Data Extract API or Reynolds' data feeds to pull service department data. Consider TARGIT Automotive or Technosoft Automotive BI instead of custom Power BI builds, as these have pre-built dealership-specific dashboards.

Cloud-Only Deployment (No Edge Server)

Eliminate the Dell PowerEdge T160 edge server entirely. Run PostgreSQL on Azure Database for PostgreSQL Flexible Server ($15/mo), use Zapier instead of self-hosted n8n ($49/mo), and rely entirely on cloud services. Camera NVR remains on-site but all analytics processing is cloud-based.

Camera-Free Approach (SMS Data Only)

Skip camera installation and bay intelligence (PitCrew) entirely. Focus exclusively on SMS and accounting data analysis for technician efficiency and parts margin analytics. Bay utilization is estimated from repair order timestamps and technician clock-in/out data rather than measured via cameras.

Tableau Cloud Instead of Power BI

DRIVE KPI + Protractor (Specialized Automotive Analytics)

Instead of custom Power BI dashboards, use DRIVE KPI for financial performance analytics and Protractor for shop management with advanced parts matrix and labor margin tools. Both are purpose-built for automotive and require minimal customization.
