
Implementation Guide: Analyze sales mix by daypart, item, and server to optimize menu and staffing
Step-by-step implementation guide for deploying AI to analyze sales mix by daypart, item, and server to optimize menu and staffing for Restaurants & Food Service clients.
Hardware Procurement
UniFi Dream Machine Pro Max
$430 MSP distributor cost / $599 suggested resale
All-in-one gateway, firewall, switch, and UniFi controller. Provides VLAN segmentation to isolate POS traffic from guest WiFi and analytics/back-office traffic, which is critical for PCI DSS compliance. Manages the entire network from a single pane of glass.
UniFi U6+ Access Point
$70 MSP distributor cost / $99 suggested resale per unit
WiFi 6 access points providing wireless coverage for handheld POS devices, manager tablets, and back-office workstations. Two APs recommended for typical restaurant floor plan (front-of-house + back-of-house). Powered via PoE from the switch.
UniFi USW-Lite-8-PoE Switch
$80 MSP distributor cost / $109 suggested resale
8-port PoE managed switch providing power and connectivity to access points, POS terminals, and kitchen display systems. Supports VLAN tagging for network segmentation.
APC Back-UPS Pro 1500VA
$180 MSP distributor cost / $270 suggested resale
Uninterruptible power supply for networking equipment and POS system. Ensures payment processing continuity during power outages — critical for restaurants that cannot afford downtime during service. Provides 10–15 minutes of runtime, enough to ride out brief outages or shut equipment down gracefully.
Back-Office Mini PC
$450 MSP distributor cost / $650 suggested resale
Compact workstation for manager's office. Used for accessing analytics dashboards, running reports, and performing back-office administrative tasks. Mounted behind monitor via VESA bracket to save desk space. Runs Windows 11 Pro with Chrome/Edge for dashboard access.
Business-Class Monitor
$150 MSP distributor cost / $220 suggested resale
24-inch IPS display for the back-office workstation. Managers use this to review daily dashboards, weekly trend reports, and staff performance metrics. VESA-mountable.
iPad 10th Generation (if Lightspeed/Square POS)
$420 MAP pricing / $449 suggested resale
Only needed if client uses Lightspeed (iPad-only platform) or Square for Restaurants. Serves as an additional manager-access tablet for reviewing dashboards on the floor during service. Not needed if client uses Toast (proprietary hardware).
Cat6 Ethernet Cabling Kit
$50 materials cost / $200 installed (labor included)
Structured cabling for connecting POS terminals, KDS, switch, and access points via wired ethernet. Pre-terminated Cat6 patch cables and a small box of bulk Cat6 for custom runs. Critical for reliable POS connectivity vs. relying solely on WiFi.
Software Procurement
MarginEdge
$300/location/month (no setup fees)
Primary analytics and menu intelligence platform. Provides menu engineering matrix (Star/Dog/Puzzle/Plow Horse classification), invoice automation, recipe costing, and food cost tracking. Integrates directly with Toast, Square, Lightspeed, and other major POS systems. Compares purchased amounts, sold amounts, and remaining inventory to identify waste.
7shifts
$76.99/location/month
AI-assisted labor scheduling and server performance analytics. Tracks labor cost as a percentage of sales, provides shift-level analytics, and enables schedule optimization based on forecasted demand. Integrates with major POS systems for sales data and with payroll providers for cost reconciliation. The Works plan includes labor compliance, shift feedback, and manager logbook.
Metabase (Self-Hosted)
$0 software cost; $40–$60/month cloud hosting (MSP-managed VM)
Open-source business intelligence platform for building custom dashboards beyond what MarginEdge and 7shifts provide natively. Used for cross-platform consolidated views (combining POS, labor, and accounting data), custom daypart heatmaps, server leaderboards, and trend analysis. Self-hosted on MSP infrastructure for white-label branding and zero per-seat licensing cost.
PostgreSQL Database
$0 software cost (runs on same VM as Metabase)
Central data warehouse for aggregated POS transaction data, labor data, and menu engineering metrics. Metabase connects to this database for custom reporting. ETL scripts pull data from POS APIs and normalize it into a star schema optimized for restaurant analytics.
Microsoft 365 Business Basic
$6/user/month via CSP (MSP margin 10-15%)
Provides Exchange Online email for automated report delivery (daily digest emails to managers), OneDrive for document storage, and Teams for MSP-client communication channel. Most restaurants already have this; if not, provision 2–3 seats for manager and owner.
Cloud VM (Metabase + PostgreSQL Host)
$40–$60/month for 4 vCPU / 8GB RAM / 160GB SSD droplet
Hosts the self-managed Metabase instance and PostgreSQL database. DigitalOcean recommended for simplicity; Hetzner for cost optimization. MSP manages this infrastructure and charges the client as part of the managed analytics service, bundling the cost into the monthly fee.
n8n (Self-Hosted)
$0 software cost (runs on same VM or a small secondary container)
Workflow automation platform for orchestrating ETL pipelines, triggering daily email digests, sending anomaly alerts via Slack/email, and scheduling data sync jobs between POS APIs, PostgreSQL, and Metabase. Replaces the need for custom cron jobs or Zapier subscriptions.
Prerequisites
- Client operates a cloud-based POS system with API access: Toast (REST API via partner program), Square (open API), or Lightspeed (REST API). Legacy POS systems without API access (e.g., older Micros, Aloha versions) require upgrade before proceeding.
- Business-class internet connection with minimum 50 Mbps download / 10 Mbps upload. 100 Mbps+ recommended. Verify the ISP SLA includes an uptime guarantee. LTE failover strongly recommended for payment processing continuity.
- Existing electrical outlets near networking equipment locations (telecom closet or back office) and POS terminal positions. Dedicated 20-amp circuit preferred for POS + networking rack.
- Client provides a list of all current menu items with categories, current pricing, and any existing recipe cost data. If recipe costs are not documented, the MSP will work with the kitchen manager during Phase 3.
- Client provides access credentials for POS admin portal, accounting software (QuickBooks Online or Xero), and any existing scheduling platform.
- Client designates a primary point of contact (typically General Manager or Owner) who has authority to approve menu changes, scheduling adjustments, and who will be the daily dashboard user.
- Client has at least 3 months of historical POS transaction data available for initial analysis and baseline establishment. Six months or more is ideal for seasonal trend identification.
- All POS terminals are running current firmware/software versions. Toast terminals should be on Toast OS 3.x+; Square should be on current app version; Lightspeed should be on current Restaurant platform (not legacy).
- Client has a physical location for back-office workstation and networking equipment — a locked manager's office or closet is ideal for PCI compliance (restricting physical access to network infrastructure).
- Staff (servers, bartenders) are already logging into the POS with individual credentials — not a shared 'server' login. Individual logins are required for per-server analytics. If shared logins are in use, POS user setup must be completed first.
Installation Steps
Step 1: Site Survey and Network Assessment
Perform an on-site assessment of the restaurant's physical layout, existing network infrastructure, POS hardware, and electrical/cabling situation. Document the current POS model and software version, internet service provider and speed (run a speed test), existing networking equipment (router, switches, APs), number and placement of POS terminals, KDS screens, and printers. Identify optimal locations for access points using a WiFi survey tool (UniFi Design Center or Ekahau). Document the back-office area for workstation placement. Take photos of all equipment and cable runs.
speedtest-cli --simple
ssh admin@<gateway-ip> 'info'
nmap -sn 192.168.1.0/24 > site_survey_network_scan.txt
Schedule the site survey during a non-peak time (typically 2–4 PM on a weekday). Bring a laptop with UniFi Design Center installed for AP placement planning. If the restaurant has no existing structured cabling, add cabling installation to the project scope and budget ($200–$500 for typical single-location restaurant).
Step 2: Network Infrastructure Installation
Install and configure the Ubiquiti UniFi network stack. Mount the UDM-Pro-Max in the back office or telecom closet (rack-mount or shelf). Connect ISP modem/ONT to the WAN port. Install the USW-Lite-8-PoE switch. Run Cat6 cables to AP mounting locations. Mount and connect U6+ access points. Connect the UPS and plug in all networking equipment. Perform initial UniFi setup via the UniFi Network app or web interface at https://unifi.ui.com.
# enter in UniFi Controller > Settings > Networks
# VLANs:
VLAN 10: POS Network (192.168.10.0/24) — POS terminals, KDS, printers
VLAN 20: Management (192.168.20.0/24) — back-office PC, manager tablet
VLAN 30: Guest WiFi (192.168.30.0/24) — customer-facing, isolated
VLAN 40: IoT/Cameras (192.168.40.0/24) — security cameras, smart devices
# enter in UniFi Controller > Settings > Firewall & Security
# Firewall Rules:
Rule 1: ALLOW VLAN 10 (POS) → Internet (for cloud POS and payment processing)
Rule 2: ALLOW VLAN 20 (Mgmt) → VLAN 10 (POS) (for admin access to POS)
Rule 3: BLOCK VLAN 30 (Guest) → ALL other VLANs
Rule 4: BLOCK VLAN 40 (IoT) → VLAN 10 (POS)
Rule 5: ALLOW VLAN 20 (Mgmt) → Internet (for dashboard access)
# WiFi SSIDs:
SSID: [RestaurantName]-POS → VLAN 10 (WPA3-Enterprise or WPA2-PSK with strong key)
SSID: [RestaurantName]-Office → VLAN 20 (WPA2/3-PSK)
SSID: [RestaurantName]-Guest → VLAN 30 (WPA2-PSK, captive portal optional)
POS terminals should be on wired ethernet whenever physically possible — WiFi is acceptable only for handheld devices like Toast Go. Label all ethernet cables at both ends with VLAN assignment. Document the VLAN architecture and firewall rules in the client's IT documentation. Store the UniFi admin credentials in the MSP's password manager (e.g., IT Glue, Hudu). Enable UniFi remote management so the MSP can monitor and manage the network remotely.
Step 3: POS Audit and Data Export Verification
Log into the client's POS admin portal and verify that the system is properly configured for data extraction. Confirm all menu items are categorized correctly (appetizers, entrees, beverages, desserts, etc.), all servers have individual login credentials, dayparts are defined in the POS (breakfast, lunch, dinner, late-night — adjust to match the restaurant's actual operating hours), and historical transaction data is accessible via API or export. Run a test data export covering the most recent full week to verify data completeness.
curl -G 'https://toast-api-server/orders/v2/orders' \
-H 'Toast-Restaurant-External-ID: <restaurant-guid>' \
-H 'Authorization: Bearer <access-token>' \
-H 'Content-Type: application/json' \
--data-urlencode 'startDate=2025-01-01T00:00:00.000+0000' \
--data-urlencode 'endDate=2025-01-07T23:59:59.000+0000'
curl -X POST 'https://connect.squareup.com/v2/orders/search' \
-H 'Square-Version: 2025-01-23' \
-H 'Authorization: Bearer <access-token>' \
-H 'Content-Type: application/json' \
-d '{
"location_ids": ["<location-id>"],
"query": {
"filter": {
"date_time_filter": {
"created_at": {
"start_at": "2025-01-01T00:00:00Z",
"end_at": "2025-01-07T23:59:59Z"
}
}
}
}
}'
# retrieve transactions via Lightspeed Restaurant API
curl -X GET 'https://api.lightspeedrestaurant.com/transactions' \
-H 'Authorization: Bearer <access-token>' \
-H 'Content-Type: application/json'
Toast API access requires enrollment in the Toast Developer Partner program — this can take 1–2 weeks for approval. Square API is open and self-service — create an application at https://developer.squareup.com. Lightspeed API access requires contacting Lightspeed support or applying through their developer portal. If API access is not immediately available, most POS systems support CSV export of transaction data as a stopgap. Verify that daypart definitions in the POS match the restaurant's actual meal periods; if not, work with the manager to reconfigure.
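If the CSV-export stopgap is used, the export still has to be normalized into the same line-item shape the API pipeline produces. A minimal sketch — the column headers here are hypothetical and must be adjusted to the actual POS export format:

```python
import csv
import io

# Hypothetical column names — adjust to the actual POS CSV export headers.
def normalize_pos_csv(csv_text):
    """Normalize a POS transaction CSV export into warehouse-ready line items."""
    rows = []
    for r in csv.DictReader(io.StringIO(csv_text)):
        qty = int(r["Qty"])
        unit_price = float(r["Unit Price"])
        discount = float(r.get("Discount", 0) or 0)
        gross = round(qty * unit_price, 2)
        rows.append({
            "order_id": r["Order ID"],
            "order_date": r["Date"],  # expected as YYYY-MM-DD
            "item_name": r["Item"],
            "quantity": qty,
            "unit_price": unit_price,
            "gross_amount": gross,
            "discount_amount": discount,
            "net_amount": round(gross - discount, 2),
        })
    return rows

sample = """Order ID,Date,Item,Qty,Unit Price,Discount
1001,2025-01-03,Margherita Pizza,2,14.00,0
1001,2025-01-03,House Salad,1,9.50,1.50
"""
line_items = normalize_pos_csv(sample)
```

The output dicts mirror the fact_order_item columns, so the same load step works whether the data arrived via API or CSV.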
Step 4: Analytics Platform Deployment — MarginEdge Setup
Create the client's MarginEdge account, configure the POS integration, and onboard the restaurant. MarginEdge handles invoice scanning, recipe costing, and menu engineering analytics natively. The MSP should be listed as the technology partner on the account for ongoing management access. Connect the POS system (Toast, Square, or Lightspeed) via MarginEdge's built-in integrations. Upload the client's vendor invoices for the past 30 days to establish food cost baselines. Enter recipe cards for the top 20 menu items by sales volume.
MarginEdge charges $300/month per location with no setup fees — the MSP should negotiate a partner/referral arrangement for recurring commission. It typically takes 2–4 weeks of invoice uploads for food cost data to become statistically meaningful. The menu engineering matrix (Star/Dog/Puzzle/Plow Horse) requires both sales volume and food cost data — it will not be fully functional until recipe cards and invoices are entered. MarginEdge's onboarding team provides complimentary setup assistance — schedule a joint call with the MSP and client for initial configuration.
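The recipe-card arithmetic behind the food cost figures is simple enough to sanity-check by hand while entering cards. A sketch with hypothetical ingredient costs (mirroring the food_cost and food_cost_pct columns in the warehouse schema):

```python
def recipe_food_cost(ingredients):
    """Sum ingredient costs: each entry is (unit_cost, quantity_used)."""
    return round(sum(cost * qty for cost, qty in ingredients), 2)

def food_cost_pct(food_cost, menu_price):
    """Food cost as a percentage of menu price."""
    return round(food_cost / menu_price * 100, 1)

# Hypothetical burger recipe: (cost per unit, units used)
burger = [
    (0.45, 1),    # bun
    (3.20, 0.5),  # ground beef, per lb
    (0.30, 2),    # cheese slices
    (0.15, 3),    # toppings
]
cost = recipe_food_cost(burger)
pct = food_cost_pct(cost, menu_price=12.95)
```

A food cost percentage near 24% on a $12.95 item is comfortably inside the typical full-service target range, which is the kind of quick check worth doing before trusting the automated matrix.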
Step 5: Labor Analytics Platform Deployment — 7shifts Setup
Create the client's 7shifts account on The Works plan, configure the POS integration for sales data ingestion, and set up all employee profiles. 7shifts will provide labor cost analytics, schedule optimization based on sales forecasts, and per-server performance tracking. Import the employee roster, set labor targets (typically 25–35% of sales for full-service restaurants), and configure shift templates based on current scheduling patterns.
7shifts The Works plan at $76.99/location/month includes labor compliance features — essential in cities with predictive scheduling laws (NYC, Chicago, Seattle, San Francisco). If the client already uses a scheduling platform (HotSchedules, Homebase, etc.), evaluate whether migration is justified. Migration is recommended if the existing platform lacks POS integration or sales-to-labor analytics. Import historical schedules if available to establish baseline staffing patterns. Enable the 7shifts mobile app for all staff to reduce scheduling friction.
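The labor-target check that 7shifts automates reduces to one ratio; a sketch with hypothetical numbers, using the 25–35% full-service band quoted above:

```python
def labor_pct(labor_cost, net_sales):
    """Labor cost as a percentage of net sales."""
    return round(labor_cost / net_sales * 100, 1)

def within_target(pct, low=25.0, high=35.0):
    """Full-service target band from this guide: 25-35% of sales."""
    return low <= pct <= high

# Hypothetical single-day figures
pct = labor_pct(labor_cost=2480.00, net_sales=8600.00)
ok = within_target(pct)
```

Running this per daypart rather than per day is where the staffing insight usually comes from: a day can land inside the band overall while a slow lunch runs well above it.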
Step 6: Cloud Infrastructure Provisioning for Custom Analytics
Provision a cloud VM to host the self-managed Metabase instance and PostgreSQL data warehouse. This infrastructure serves as the MSP's value-add analytics layer — hosting custom dashboards that go beyond what MarginEdge and 7shifts provide natively. The VM is managed by the MSP and costs are bundled into the monthly managed service fee.
ssh root@<droplet-ip>
apt update && apt upgrade -y
curl -fsSL https://get.docker.com | sh
apt install -y docker-compose-plugin
mkdir -p /opt/restaurant-analytics
cd /opt/restaurant-analytics
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  postgres:
    image: postgres:16
    container_name: restaurant_db
    restart: always
    environment:
      POSTGRES_DB: restaurant_analytics
      POSTGRES_USER: analytics_admin
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./init-scripts:/docker-entrypoint-initdb.d
    ports:
      - '127.0.0.1:5432:5432'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U analytics_admin -d restaurant_analytics']
      interval: 10s
      timeout: 5s
      retries: 5
  metabase:
    image: metabase/metabase:latest
    container_name: metabase
    restart: always
    environment:
      MB_DB_TYPE: postgres
      MB_DB_DBNAME: metabase_appdb  # create this database via an init script in ./init-scripts
      MB_DB_PORT: 5432
      MB_DB_USER: analytics_admin
      MB_DB_PASS: ${POSTGRES_PASSWORD}
      MB_DB_HOST: postgres
      MB_SITE_URL: https://analytics.${MSP_DOMAIN}
      JAVA_TOOL_OPTIONS: '-Xmx4g'
    ports:
      - '127.0.0.1:3000:3000'
    depends_on:
      postgres:
        condition: service_healthy
  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: always
    environment:
      N8N_BASIC_AUTH_ACTIVE: 'true'
      N8N_BASIC_AUTH_USER: ${N8N_USER}
      N8N_BASIC_AUTH_PASSWORD: ${N8N_PASSWORD}
      WEBHOOK_URL: https://automation.${MSP_DOMAIN}
    volumes:
      - n8n_data:/home/node/.n8n
    ports:
      - '127.0.0.1:5678:5678'
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: always
    environment:
      MSP_DOMAIN: ${MSP_DOMAIN}  # required so the {$MSP_DOMAIN} placeholders in the Caddyfile resolve
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile
      - caddy_data:/data
      - caddy_config:/config
volumes:
  postgres_data:
  n8n_data:
  caddy_data:
  caddy_config:
EOF
cat > .env << 'EOF'
POSTGRES_PASSWORD=<generate-strong-password-here>
N8N_USER=msp_admin
N8N_PASSWORD=<generate-strong-password-here>
MSP_DOMAIN=yourmsp.com
EOF
cat > Caddyfile << 'EOF'
analytics.{$MSP_DOMAIN} {
reverse_proxy metabase:3000
header {
Strict-Transport-Security max-age=31536000
X-Content-Type-Options nosniff
X-Frame-Options SAMEORIGIN
}
}
automation.{$MSP_DOMAIN} {
reverse_proxy n8n:5678
}
EOF
docker compose up -d
docker compose ps
Use strong randomly-generated passwords for all services — store them in the MSP's password manager (IT Glue, Hudu, Passportal). Set up DNS A records for analytics.yourmsp.com and automation.yourmsp.com pointing to the droplet's IP before starting Caddy (it needs DNS to issue TLS certificates via Let's Encrypt). Enable DigitalOcean's automated backups ($8/month extra) for disaster recovery. Configure DigitalOcean's firewall to allow only ports 80, 443, and 22 (SSH from MSP IP range only). This single VM comfortably supports 5–10 restaurant clients before needing to scale.
Step 7: Database Schema Creation for Restaurant Analytics
Create the PostgreSQL schema that will serve as the central data warehouse for restaurant analytics. This star schema is optimized for daypart analysis, menu mix reporting, and server performance tracking. The schema supports multiple restaurant clients on a single database instance using a tenant_id pattern.
docker exec -it restaurant_db psql -U analytics_admin -d restaurant_analytics
CREATE SCHEMA IF NOT EXISTS restaurant;
-- Dimension: Restaurants (multi-tenant support)
CREATE TABLE restaurant.dim_restaurant (
restaurant_id SERIAL PRIMARY KEY,
tenant_name VARCHAR(255) NOT NULL,
pos_type VARCHAR(50) NOT NULL, -- 'toast', 'square', 'lightspeed'
timezone VARCHAR(50) DEFAULT 'America/New_York',
created_at TIMESTAMP DEFAULT NOW()
);
-- Dimension: Dayparts
CREATE TABLE restaurant.dim_daypart (
daypart_id SERIAL PRIMARY KEY,
restaurant_id INT REFERENCES restaurant.dim_restaurant(restaurant_id),
daypart_name VARCHAR(50) NOT NULL, -- 'Breakfast', 'Lunch', 'Dinner', 'Late Night'
start_time TIME NOT NULL,
end_time TIME NOT NULL
);
-- Dimension: Menu Items
CREATE TABLE restaurant.dim_menu_item (
menu_item_id SERIAL PRIMARY KEY,
restaurant_id INT REFERENCES restaurant.dim_restaurant(restaurant_id),
pos_item_id VARCHAR(255), -- external ID from POS system
item_name VARCHAR(255) NOT NULL,
category VARCHAR(100), -- 'Appetizer', 'Entree', 'Beverage', 'Dessert'
subcategory VARCHAR(100),
current_price DECIMAL(10,2),
food_cost DECIMAL(10,2), -- from MarginEdge/recipe costing
food_cost_pct DECIMAL(5,2), -- food_cost / current_price * 100
is_active BOOLEAN DEFAULT TRUE,
updated_at TIMESTAMP DEFAULT NOW()
);
-- Dimension: Servers/Employees
CREATE TABLE restaurant.dim_server (
server_id SERIAL PRIMARY KEY,
restaurant_id INT REFERENCES restaurant.dim_restaurant(restaurant_id),
pos_employee_id VARCHAR(255),
first_name VARCHAR(100),
last_name VARCHAR(100),
role VARCHAR(50), -- 'Server', 'Bartender', 'Host'
is_active BOOLEAN DEFAULT TRUE,
hire_date DATE
);
-- Dimension: Date
CREATE TABLE restaurant.dim_date (
date_id DATE PRIMARY KEY,
day_of_week INT, -- 0=Sunday, 6=Saturday (PostgreSQL EXTRACT(DOW) convention)
day_name VARCHAR(10),
week_number INT,
month_number INT,
month_name VARCHAR(20),
quarter INT,
year INT,
is_weekend BOOLEAN,
is_holiday BOOLEAN DEFAULT FALSE
);
-- Fact: Order Line Items (the core transaction table)
CREATE TABLE restaurant.fact_order_item (
order_item_id BIGSERIAL PRIMARY KEY,
restaurant_id INT REFERENCES restaurant.dim_restaurant(restaurant_id),
order_id VARCHAR(255) NOT NULL, -- POS order ID
order_date DATE REFERENCES restaurant.dim_date(date_id),
order_timestamp TIMESTAMP NOT NULL,
daypart_id INT REFERENCES restaurant.dim_daypart(daypart_id),
menu_item_id INT REFERENCES restaurant.dim_menu_item(menu_item_id),
server_id INT REFERENCES restaurant.dim_server(server_id),
quantity INT NOT NULL DEFAULT 1,
unit_price DECIMAL(10,2),
gross_amount DECIMAL(10,2), -- quantity * unit_price
discount_amount DECIMAL(10,2) DEFAULT 0,
net_amount DECIMAL(10,2), -- gross - discount
void_flag BOOLEAN DEFAULT FALSE,
comp_flag BOOLEAN DEFAULT FALSE,
order_type VARCHAR(20) DEFAULT 'dine-in' -- 'dine-in', 'takeout', 'delivery'
);
-- Fact: Labor/Shifts
CREATE TABLE restaurant.fact_labor (
shift_id BIGSERIAL PRIMARY KEY,
restaurant_id INT REFERENCES restaurant.dim_restaurant(restaurant_id),
server_id INT REFERENCES restaurant.dim_server(server_id),
shift_date DATE REFERENCES restaurant.dim_date(date_id),
clock_in TIMESTAMP,
clock_out TIMESTAMP,
hours_worked DECIMAL(5,2),
hourly_rate DECIMAL(8,2),
labor_cost DECIMAL(10,2),
role_during_shift VARCHAR(50)
);
-- Populate dim_date for next 3 years
INSERT INTO restaurant.dim_date (date_id, day_of_week, day_name, week_number, month_number, month_name, quarter, year, is_weekend)
SELECT d::date,
EXTRACT(DOW FROM d)::INT,
TO_CHAR(d, 'Day'),
EXTRACT(WEEK FROM d)::INT,
EXTRACT(MONTH FROM d)::INT,
TO_CHAR(d, 'Month'),
EXTRACT(QUARTER FROM d)::INT,
EXTRACT(YEAR FROM d)::INT,
EXTRACT(DOW FROM d) IN (0, 6)
FROM generate_series('2023-01-01'::date, '2027-12-31'::date, '1 day') AS d;
-- Create indexes for common query patterns
CREATE INDEX idx_fact_order_date ON restaurant.fact_order_item(order_date);
CREATE INDEX idx_fact_order_daypart ON restaurant.fact_order_item(daypart_id);
CREATE INDEX idx_fact_order_item ON restaurant.fact_order_item(menu_item_id);
CREATE INDEX idx_fact_order_server ON restaurant.fact_order_item(server_id);
CREATE INDEX idx_fact_order_restaurant ON restaurant.fact_order_item(restaurant_id);
CREATE INDEX idx_fact_labor_date ON restaurant.fact_labor(shift_date);
CREATE INDEX idx_fact_labor_server ON restaurant.fact_labor(server_id);
-- Create materialized views for common dashboard queries
-- Daily sales summary by daypart
CREATE MATERIALIZED VIEW restaurant.mv_daily_daypart_summary AS
SELECT
foi.restaurant_id,
foi.order_date,
dp.daypart_name,
COUNT(DISTINCT foi.order_id) AS order_count,
SUM(foi.net_amount) AS total_sales,
AVG(foi.net_amount) AS avg_item_value,
SUM(foi.quantity) AS items_sold
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_daypart dp ON foi.daypart_id = dp.daypart_id
WHERE foi.void_flag = FALSE
GROUP BY foi.restaurant_id, foi.order_date, dp.daypart_name;
-- Menu item performance (for menu engineering matrix)
CREATE MATERIALIZED VIEW restaurant.mv_menu_item_performance AS
SELECT
foi.restaurant_id,
mi.item_name,
mi.category,
mi.current_price,
mi.food_cost,
mi.food_cost_pct,
mi.current_price - COALESCE(mi.food_cost, 0) AS contribution_margin,
SUM(foi.quantity) AS total_qty_sold,
SUM(foi.net_amount) AS total_revenue,
AVG(foi.quantity) AS avg_daily_qty,
COUNT(DISTINCT foi.order_date) AS days_on_menu
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_menu_item mi ON foi.menu_item_id = mi.menu_item_id
WHERE foi.void_flag = FALSE
GROUP BY foi.restaurant_id, mi.item_name, mi.category, mi.current_price, mi.food_cost, mi.food_cost_pct;
-- Server performance scoreboard
CREATE MATERIALIZED VIEW restaurant.mv_server_performance AS
SELECT
foi.restaurant_id,
s.first_name || ' ' || LEFT(s.last_name, 1) || '.' AS server_name,
foi.order_date,
COUNT(DISTINCT foi.order_id) AS orders_served,
SUM(foi.net_amount) AS total_sales,
SUM(foi.net_amount) / NULLIF(COUNT(DISTINCT foi.order_id), 0) AS avg_check_size,
SUM(foi.quantity)::DECIMAL / NULLIF(COUNT(DISTINCT foi.order_id), 0) AS avg_items_per_order
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_server s ON foi.server_id = s.server_id
WHERE foi.void_flag = FALSE
GROUP BY foi.restaurant_id, server_name, foi.order_date;
-- Grant Metabase read-only access
CREATE ROLE metabase_reader WITH LOGIN PASSWORD '<generate-password>';
GRANT USAGE ON SCHEMA restaurant TO metabase_reader;
GRANT SELECT ON ALL TABLES IN SCHEMA restaurant TO metabase_reader;
ALTER DEFAULT PRIVILEGES IN SCHEMA restaurant GRANT SELECT ON TABLES TO metabase_reader;
The schema uses a star schema pattern optimized for BI queries. Materialized views are pre-computed for the most common dashboard queries — refresh them daily via a cron job or n8n workflow. The multi-tenant design (restaurant_id) allows the MSP to serve multiple clients from a single database. Create a read-only metabase_reader role for Metabase connections — never give Metabase write access to the analytics database.
Step 8: ETL Pipeline Configuration with n8n
Configure n8n workflows to extract data from the POS API, transform it into the warehouse schema, and load it into PostgreSQL. Set up three primary workflows: (1) nightly full sync of the previous day's transactions, (2) menu item and employee dimension sync, and (3) materialized view refresh. These workflows run automatically on a schedule.
Workflow 1: Nightly POS Transaction Sync
// maps POS API response to warehouse schema
const orders = $input.all();
const orderItems = [];
for (const order of orders) {
  // iterate every check on the order, not just the first —
  // split checks are common in full-service restaurants
  for (const check of order.json.checks || []) {
    for (const item of check.selections) {
      orderItems.push({
        json: {
          order_id: order.json.guid,
          order_timestamp: order.json.openedDate,
          order_date: order.json.openedDate.substring(0, 10),
          pos_item_id: item.item.guid,
          item_name: item.displayName,
          quantity: item.quantity,
          unit_price: item.price / 100,
          gross_amount: (item.price * item.quantity) / 100,
          discount_amount: (item.appliedDiscounts?.reduce((s, d) => s + d.discountAmount, 0) || 0) / 100,
          pos_employee_id: order.json.server?.guid,
          void_flag: item.voided || false,
          order_type: order.json.diningOption?.name || 'dine-in'
        }
      });
    }
  }
}
return orderItems;
-- Materialized view refresh queries executed after upsert
-- Note: REFRESH ... CONCURRENTLY requires a unique index on each materialized view
-- (e.g. CREATE UNIQUE INDEX ON restaurant.mv_daily_daypart_summary (restaurant_id, order_date, daypart_name));
-- without one, drop the CONCURRENTLY keyword for a blocking refresh.
REFRESH MATERIALIZED VIEW CONCURRENTLY restaurant.mv_daily_daypart_summary;
REFRESH MATERIALIZED VIEW CONCURRENTLY restaurant.mv_menu_item_performance;
REFRESH MATERIALIZED VIEW CONCURRENTLY restaurant.mv_server_performance;
Workflow 2: Weekly Dimension Sync
- Runs every Monday at 2:00 AM
- Syncs menu items and employee roster from POS to dim tables
Workflow 3: Daily Email Digest
- Triggers at 7:00 AM after ETL completes
- Queries mv_daily_daypart_summary for yesterday's numbers
- Sends formatted HTML email to restaurant manager via SMTP
Configure n8n Credentials
The n8n workflow details above use Toast API as the example — adjust the HTTP Request node URL, headers, and the transformation function for Square or Lightspeed APIs. Toast API returns monetary values in cents (divide by 100); Square uses similar conventions. Test each workflow manually before enabling the schedule trigger. Monitor the n8n execution log for the first 7 days to catch any data mapping issues. The daypart_id assignment in the transformation step requires a lookup against the dim_daypart table — use the order timestamp and the configured daypart time ranges to assign the correct daypart.
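The daypart lookup described above can be sketched in plain Python (the time ranges here are hypothetical — in practice load them from dim_daypart). The one subtlety is a range that crosses midnight, like Late Night, which needs a wrap-around case:

```python
from datetime import datetime, time

# Hypothetical daypart ranges; in practice load these from restaurant.dim_daypart.
DAYPARTS = [
    ("Breakfast", time(7, 0), time(11, 0)),
    ("Lunch", time(11, 0), time(16, 0)),
    ("Dinner", time(16, 0), time(22, 0)),
    ("Late Night", time(22, 0), time(2, 0)),  # crosses midnight
]

def assign_daypart(order_timestamp):
    """Map an order timestamp to its daypart using [start, end) ranges."""
    t = order_timestamp.time()
    for name, start, end in DAYPARTS:
        if start <= end:
            # normal same-day range
            if start <= t < end:
                return name
        elif t >= start or t < end:
            # range wraps past midnight (e.g. 22:00-02:00)
            return name
    return None

dp_lunch = assign_daypart(datetime(2025, 1, 3, 12, 30))
dp_late = assign_daypart(datetime(2025, 1, 4, 0, 45))
```

Apply the same logic in the n8n transformation step (or as a SQL CASE during load) so every fact_order_item row lands with a daypart_id before the materialized views are refreshed.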
Step 9: Metabase Dashboard Configuration
Configure Metabase with the PostgreSQL data source and build five core dashboards that restaurant managers will use daily. Connect Metabase to PostgreSQL using the read-only metabase_reader account. Create a Collection (folder) for the client restaurant and build each dashboard with pre-configured filters for date range, daypart, and category.
Dashboard 1: Daily Sales Overview
- KPI: Yesterday's Total Revenue (single number)
- KPI: Yesterday's Order Count
- KPI: Average Check Size
- Bar Chart: Revenue by Daypart (Breakfast/Lunch/Dinner/Late Night)
- Line Chart: Daily Revenue Trend (last 30 days)
- Table: Top 10 Items by Revenue Yesterday
SELECT daypart_name, total_sales
FROM restaurant.mv_daily_daypart_summary
WHERE restaurant_id = {{restaurant_id}}
AND order_date = CURRENT_DATE - INTERVAL '1 day'
ORDER BY CASE daypart_name
WHEN 'Breakfast' THEN 1
WHEN 'Lunch' THEN 2
WHEN 'Dinner' THEN 3
WHEN 'Late Night' THEN 4 END;
Dashboard 2: Menu Engineering Matrix
- Scatter Plot: X=Qty Sold (popularity), Y=Contribution Margin (profitability) — color-coded quadrants: Star (high/high), Puzzle (low/high), Plow Horse (high/low), Dog (low/low)
- Table: All items with classification, sorted by contribution margin
- Filter: Date range, Category
WITH avgs AS (
SELECT AVG(total_qty_sold) AS avg_qty, AVG(contribution_margin) AS avg_cm
FROM restaurant.mv_menu_item_performance
WHERE restaurant_id = {{restaurant_id}}
)
SELECT p.item_name, p.category, p.total_qty_sold, p.contribution_margin,
CASE
WHEN p.total_qty_sold >= a.avg_qty AND p.contribution_margin >= a.avg_cm THEN 'Star'
WHEN p.total_qty_sold < a.avg_qty AND p.contribution_margin >= a.avg_cm THEN 'Puzzle'
WHEN p.total_qty_sold >= a.avg_qty AND p.contribution_margin < a.avg_cm THEN 'Plow Horse'
ELSE 'Dog'
END AS menu_class
FROM restaurant.mv_menu_item_performance p
CROSS JOIN avgs a
WHERE p.restaurant_id = {{restaurant_id}};
Dashboard 3: Server Performance Leaderboard
- Table: Server rankings by avg check size (last 7 days)
- Bar Chart: Total sales by server (last 7 days)
- Line Chart: Server's avg check size trend over time
- KPI: Top performer name + metric
- Filter: Date range, individual server
Dashboard 4: Daypart Deep Dive
- Heatmap: Sales by hour of day × day of week
- Line Chart: Daypart revenue trends over 12 weeks
- Stacked Bar: Item category mix by daypart
- Table: Fastest/slowest growing dayparts
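The heatmap underlying Dashboard 4 is just an hour-of-day × day-of-week aggregation of net sales. A sketch over in-memory order rows (the data is hypothetical — in Metabase the equivalent runs as SQL against fact_order_item):

```python
from collections import defaultdict
from datetime import datetime

def heatmap(orders):
    """Aggregate net sales into (day_name, hour) buckets for a heatmap grid."""
    grid = defaultdict(float)
    for ts, net_amount in orders:
        grid[(ts.strftime("%a"), ts.hour)] += net_amount
    return dict(grid)

# Hypothetical orders: (timestamp, net_amount)
orders = [
    (datetime(2025, 1, 3, 18, 15), 42.50),  # Friday dinner
    (datetime(2025, 1, 3, 18, 40), 31.00),  # Friday dinner
    (datetime(2025, 1, 4, 12, 5), 18.75),   # Saturday lunch
]
grid = heatmap(orders)
```

The same bucketing in SQL is a GROUP BY on EXTRACT(DOW FROM order_timestamp) and EXTRACT(HOUR FROM order_timestamp), which Metabase renders as a pivoted heatmap.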
Dashboard 5: Weekly Management Summary
- Week-over-week revenue comparison
- Top 5 growing and declining items
- Labor cost % vs target
- Server coaching opportunities (lowest avg check)
- Menu engineering changes recommended
Metabase supports native questions (GUI-built queries) and custom SQL questions. Use native questions for simple KPIs and SQL for the more complex menu engineering matrix and heatmap visualizations. Set up Metabase subscriptions (Settings > Subscriptions) to email Dashboard 1 daily at 7 AM and Dashboard 5 weekly on Monday mornings. Create a 'Manager' user group in Metabase with access only to the client's collection — do not give client users admin access. The MSP admin retains full Metabase admin privileges. Enable Metabase's 'public sharing' for specific dashboards if the owner wants to share read-only views with investors or partners.
Step 10: Security Hardening and PCI Compliance Verification
Harden the entire deployment for PCI DSS v4.0 compliance and restaurant-specific security requirements. This step covers network security, application security, access controls, and data handling practices.
Network Security (UniFi)
Cloud VM Security
ufw default deny incoming
ufw default allow outgoing
ufw allow from <msp-office-ip>/32 to any port 22 proto tcp comment 'SSH from MSP'
ufw allow 80/tcp comment 'HTTP for Caddy ACME'
ufw allow 443/tcp comment 'HTTPS'
ufw enable
sed -i 's/^#PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^PasswordAuthentication yes/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart sshd
apt install -y fail2ban
cat > /etc/fail2ban/jail.local << 'EOF'
[sshd]
enabled = true
port = ssh
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600
EOF
systemctl enable fail2ban && systemctl start fail2ban
apt install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades
Application Security
Data Security
SELECT column_name, table_name FROM information_schema.columns
WHERE table_schema = 'restaurant'
AND (column_name ILIKE '%card%' OR column_name ILIKE '%pan%' OR column_name ILIKE '%credit%');
-- This should return zero rows
Backup Configuration
cat > /opt/restaurant-analytics/backup.sh << 'SCRIPT'
#!/bin/bash
BACKUP_DIR=/opt/restaurant-analytics/backups
mkdir -p $BACKUP_DIR
docker exec restaurant_db pg_dump -U analytics_admin restaurant_analytics | gzip > $BACKUP_DIR/backup_$(date +%Y%m%d).sql.gz
# Keep last 30 days
find $BACKUP_DIR -name 'backup_*.sql.gz' -mtime +30 -delete
# Upload to offsite storage (configure rclone or s3cmd)
# rclone copy $BACKUP_DIR/backup_$(date +%Y%m%d).sql.gz remote:backups/
SCRIPT
chmod +x /opt/restaurant-analytics/backup.sh
(crontab -l 2>/dev/null; echo '0 4 * * * /opt/restaurant-analytics/backup.sh') | crontab -
PCI DSS v4.0 requires MFA for all non-console administrative access — enforce this on Metabase, n8n, UniFi Controller, and the cloud VM. The analytics database should NEVER contain raw cardholder data — only aggregated sales amounts, item details, and server IDs. POS API integrations typically return payment type (cash/card) but not full card numbers. Document all security configurations in the client's compliance file. If the client accepts credit cards, they are already subject to PCI DSS — this project does not increase their PCI scope as long as no cardholder data enters the analytics pipeline. Schedule a quarterly security review as part of the ongoing managed service.
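Beyond checking column names, a quick way to verify that no stray card numbers have leaked into free-text fields (order notes, comments) is a Luhn-checksum scan. The sketch below is illustrative: the function names, and the idea of scanning exported text columns, are assumptions rather than part of the deployed stack.

```javascript
// Luhn checksum: a 13-19 digit run that passes is a *potential* PAN and
// warrants manual review; a pass alone is not proof of cardholder data.
function luhnValid(digits) {
  let sum = 0;
  let double = false;
  for (let i = digits.length - 1; i >= 0; i--) {
    let d = digits.charCodeAt(i) - 48;
    if (double) {
      d *= 2;
      if (d > 9) d -= 9; // digit sum of the doubled value
    }
    sum += d;
    double = !double;
  }
  return sum % 10 === 0;
}

// Scan a text value (e.g., an exported order note) for card-number-like runs.
function findPanCandidates(text) {
  const runs = String(text).replace(/[ -]/g, '').match(/\d{13,19}/g) || [];
  return runs.filter(luhnValid);
}
```

Run over exported note/comment columns during the quarterly review, this gives a cheap second line of defense alongside the information_schema query above.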
Step 11: Historical Data Backfill and Validation
Import historical POS data (minimum 3 months, ideally 6–12 months) into the data warehouse to establish baselines and enable trend analysis from day one. Run the n8n ETL workflow in backfill mode for each historical date range, then validate data accuracy by cross-referencing totals against POS native reports.
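For the backfill itself, chunking the historical window keeps each n8n run within typical POS API pagination and rate limits. A minimal sketch of a range generator is below; the chunk size, function name, and ISO date format are assumptions to adapt to the specific POS API's query parameters.

```javascript
// Split a historical window into date-range chunks (default 7 days each)
// so each backfill run of the ETL workflow stays small and retryable.
function backfillRanges(startDate, endDate, chunkDays = 7) {
  const ranges = [];
  const DAY = 24 * 60 * 60 * 1000;
  let cursor = new Date(startDate + 'T00:00:00Z').getTime(); // UTC avoids DST gaps
  const end = new Date(endDate + 'T00:00:00Z').getTime();
  while (cursor <= end) {
    const chunkEnd = Math.min(cursor + (chunkDays - 1) * DAY, end);
    ranges.push({
      start: new Date(cursor).toISOString().slice(0, 10),
      end: new Date(chunkEnd).toISOString().slice(0, 10),
    });
    cursor = chunkEnd + DAY;
  }
  return ranges;
}
```

For example, `backfillRanges('2024-01-01', '2024-06-30')` yields weekly start/end pairs to feed into the workflow's date parameters one run at a time.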
scp historical_orders.csv root@<droplet-ip>:/opt/restaurant-analytics/
docker exec -i restaurant_db psql -U analytics_admin -d restaurant_analytics << 'SQL'
CREATE TEMP TABLE staging_orders (
order_id VARCHAR(255),
order_timestamp TIMESTAMP,
item_name VARCHAR(255),
quantity INT,
unit_price DECIMAL(10,2),
gross_amount DECIMAL(10,2),
discount_amount DECIMAL(10,2),
server_name VARCHAR(255),
void_flag BOOLEAN,
order_type VARCHAR(20)
);
COPY staging_orders FROM '/opt/restaurant-analytics/historical_orders.csv'
WITH (FORMAT csv, HEADER true);
-- Transform and insert into fact table (with dimension lookups)
INSERT INTO restaurant.fact_order_item (
restaurant_id, order_id, order_date, order_timestamp,
daypart_id, menu_item_id, server_id,
quantity, unit_price, gross_amount, discount_amount, net_amount,
void_flag, order_type
)
SELECT
1, -- restaurant_id
s.order_id,
s.order_timestamp::date,
s.order_timestamp,
dp.daypart_id,
mi.menu_item_id,
sv.server_id,
s.quantity,
s.unit_price,
s.gross_amount,
s.discount_amount,
s.gross_amount - s.discount_amount,
s.void_flag,
s.order_type
FROM staging_orders s
LEFT JOIN restaurant.dim_daypart dp
ON dp.restaurant_id = 1
AND (
  -- standard daypart window
  (dp.start_time <= dp.end_time AND s.order_timestamp::time BETWEEN dp.start_time AND dp.end_time)
  -- overnight daypart (e.g., Late Night) wraps past midnight, so BETWEEN alone would never match
  OR (dp.start_time > dp.end_time AND (s.order_timestamp::time >= dp.start_time OR s.order_timestamp::time <= dp.end_time))
)
LEFT JOIN restaurant.dim_menu_item mi
ON mi.restaurant_id = 1 AND mi.item_name = s.item_name
LEFT JOIN restaurant.dim_server sv
ON sv.restaurant_id = 1
AND sv.first_name || ' ' || sv.last_name = s.server_name;
SQL
docker exec -i restaurant_db psql -U analytics_admin -d restaurant_analytics << 'SQL'
REFRESH MATERIALIZED VIEW restaurant.mv_daily_daypart_summary;
REFRESH MATERIALIZED VIEW restaurant.mv_menu_item_performance;
REFRESH MATERIALIZED VIEW restaurant.mv_server_performance;
SQL
docker exec -i restaurant_db psql -U analytics_admin -d restaurant_analytics << 'SQL'
-- Compare monthly totals with POS reports
SELECT
DATE_TRUNC('month', order_date) AS month,
COUNT(DISTINCT order_id) AS order_count,
SUM(net_amount) AS total_revenue
FROM restaurant.fact_order_item
WHERE restaurant_id = 1 AND void_flag = FALSE
GROUP BY DATE_TRUNC('month', order_date)
ORDER BY month;
SQL
Data validation is critical — cross-reference the monthly totals from the database query with the POS system's native monthly sales reports. Acceptable variance is ≤1% (small differences can occur due to timezone handling and void/comp processing). If variance exceeds 1%, investigate the ETL transformation logic. Common issues: timezone mismatches causing orders to fall on wrong dates, voided items being included/excluded inconsistently, and server name formatting differences between POS exports and dimension table entries. After backfill, the menu engineering matrix and trend dashboards will immediately show meaningful data.
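The cross-reference against POS reports can be scripted rather than eyeballed. A minimal sketch follows, assuming monthly totals keyed by 'YYYY-MM' from both sources; the function name and input shape are illustrative.

```javascript
// Compare warehouse monthly totals against POS-reported totals and flag
// months exceeding the tolerance (1% per the validation guidance above).
function validateMonthlyTotals(dbTotals, posTotals, tolerancePct = 1.0) {
  return Object.keys(posTotals).map(month => {
    const db = dbTotals[month] ?? 0; // missing month counts as zero loaded
    const pos = posTotals[month];
    const variancePct = pos === 0 ? 0 : Math.abs(db - pos) / pos * 100;
    return {
      month,
      db,
      pos,
      variancePct: Math.round(variancePct * 100) / 100,
      ok: variancePct <= tolerancePct,
    };
  });
}
```

Any month with `ok: false` points you at the ETL issues listed above (timezones, void handling, name mismatches) before sign-off.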
Step 12: Manager Training and Go-Live
Conduct a 90-minute on-site training session with the restaurant's management team (GM, Assistant GM, and Owner). Walk through each dashboard, explain the menu engineering matrix classifications, demonstrate how to read the server leaderboard, and show how daily email digests work. Set up manager's browser with bookmarked dashboards and saved logins. Provide a printed quick-reference guide.
- Bookmark https://analytics.yourmsp.com on manager's PC and mobile browser
- Configure saved login in browser password manager
- Add dashboard URL to manager's phone home screen (PWA)
- Deliver printed Quick Reference Card (laminated, posted in office)
Schedule training during a slow period (Monday or Tuesday, 2–4 PM). Emphasize that server performance data should be used for coaching and positive reinforcement, not punitive action — this is both a best practice and a legal consideration under state employee monitoring laws. Provide the printed quick-reference card with: dashboard URL, login credentials, explanation of the menu engineering quadrants with recommended actions, and MSP support contact info. Take a photo of the training whiteboard notes and include in the client's documentation.
Custom AI Components
Daypart Revenue Classifier
Type: integration
An n8n workflow node that assigns each POS transaction to the correct daypart based on the order timestamp and the restaurant's configured daypart boundaries. This handles edge cases like orders that span daypart boundaries (e.g., a table seated at 2:45 PM during lunch but ordering at 3:05 PM during dinner) by using the first item's order time as the daypart determinant.
Implementation:
const DAYPARTS = [
{ name: 'Breakfast', id: 1, start: '06:00', end: '10:59' },
{ name: 'Lunch', id: 2, start: '11:00', end: '14:59' },
{ name: 'Dinner', id: 3, start: '15:00', end: '21:59' },
{ name: 'Late Night', id: 4, start: '22:00', end: '05:59' }
];
function timeToMinutes(timeStr) {
const [hours, minutes] = timeStr.split(':').map(Number);
return hours * 60 + minutes;
}
function classifyDaypart(timestamp) {
const date = new Date(timestamp);
const hours = date.getHours().toString().padStart(2, '0');
const minutes = date.getMinutes().toString().padStart(2, '0');
const timeStr = `${hours}:${minutes}`;
const timeMinutes = timeToMinutes(timeStr);
for (const dp of DAYPARTS) {
const startMin = timeToMinutes(dp.start);
const endMin = timeToMinutes(dp.end);
// Handle overnight daypart (Late Night wraps past midnight)
if (startMin > endMin) {
if (timeMinutes >= startMin || timeMinutes <= endMin) {
return { daypart_id: dp.id, daypart_name: dp.name };
}
} else {
if (timeMinutes >= startMin && timeMinutes <= endMin) {
return { daypart_id: dp.id, daypart_name: dp.name };
}
}
}
// Default to dinner if no match
return { daypart_id: 3, daypart_name: 'Dinner' };
}
const items = $input.all();
const results = items.map(item => {
const dp = classifyDaypart(item.json.order_timestamp);
return {
json: {
...item.json,
daypart_id: dp.daypart_id,
daypart_name: dp.daypart_name
}
};
});
return results;
Menu Engineering Matrix Calculator
Type: workflow
A SQL-based analytical component that classifies every menu item into one of four quadrants (Star, Plow Horse, Puzzle, Dog) based on popularity (quantity sold relative to average) and profitability (contribution margin relative to average). This runs as a Metabase saved question that powers the Menu Engineering Matrix dashboard. The classification drives actionable recommendations: promote Stars, re-engineer Puzzles, reprice Plow Horses, and consider removing Dogs.
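The quadrant logic reduces to two threshold comparisons against menu-wide averages. A toy sketch of the same classification the SQL performs (illustrative only; in production the thresholds come from the averages CTE in the query):

```javascript
// Menu engineering quadrant: popularity (quantity sold) and profitability
// (contribution margin) each compared against the menu-wide average.
function classifyMenuItem(qtySold, margin, avgQty, avgMargin) {
  if (qtySold >= avgQty) {
    return margin >= avgMargin ? 'Star' : 'Plow Horse';
  }
  return margin >= avgMargin ? 'Puzzle' : 'Dog';
}
```

For example, an item selling above average with below-average margin is a Plow Horse: popular, but underpriced or over-costed.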
Implementation:
-- Metabase SQL Native Question: Menu Engineering Matrix
-- Parameters: {{restaurant_id}}, {{start_date}}, {{end_date}}
-- Save as: 'Menu Engineering Matrix - [Client Name]'
WITH item_metrics AS (
SELECT
mi.menu_item_id,
mi.item_name,
mi.category,
mi.current_price,
COALESCE(mi.food_cost, 0) AS food_cost,
mi.current_price - COALESCE(mi.food_cost, 0) AS contribution_margin,
COALESCE(mi.food_cost_pct, 0) AS food_cost_pct,
SUM(foi.quantity) AS total_qty_sold,
SUM(foi.net_amount) AS total_revenue,
COUNT(DISTINCT foi.order_date) AS days_sold,
SUM(foi.quantity)::DECIMAL / NULLIF(COUNT(DISTINCT foi.order_date), 0) AS avg_daily_qty
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_menu_item mi ON foi.menu_item_id = mi.menu_item_id
WHERE foi.restaurant_id = {{restaurant_id}}
AND foi.order_date BETWEEN {{start_date}} AND {{end_date}}
AND foi.void_flag = FALSE
AND mi.is_active = TRUE
GROUP BY mi.menu_item_id, mi.item_name, mi.category, mi.current_price, mi.food_cost, mi.food_cost_pct
),
averages AS (
SELECT
AVG(total_qty_sold) AS avg_popularity,
AVG(contribution_margin) AS avg_margin
FROM item_metrics
),
classified AS (
SELECT
im.*,
a.avg_popularity,
a.avg_margin,
CASE
WHEN im.total_qty_sold >= a.avg_popularity AND im.contribution_margin >= a.avg_margin THEN 'Star'
WHEN im.total_qty_sold >= a.avg_popularity AND im.contribution_margin < a.avg_margin THEN 'Plow Horse'
WHEN im.total_qty_sold < a.avg_popularity AND im.contribution_margin >= a.avg_margin THEN 'Puzzle'
WHEN im.total_qty_sold < a.avg_popularity AND im.contribution_margin < a.avg_margin THEN 'Dog'
END AS menu_class,
CASE
WHEN im.total_qty_sold >= a.avg_popularity AND im.contribution_margin >= a.avg_margin
THEN 'Maintain visibility. Feature prominently. Do not discount.'
WHEN im.total_qty_sold >= a.avg_popularity AND im.contribution_margin < a.avg_margin
THEN 'Increase price cautiously. Reduce portion cost. Pair with high-margin sides.'
WHEN im.total_qty_sold < a.avg_popularity AND im.contribution_margin >= a.avg_margin
THEN 'Increase visibility. Add to specials. Rename or reposition on menu.'
WHEN im.total_qty_sold < a.avg_popularity AND im.contribution_margin < a.avg_margin
THEN 'Consider removing. Replace with new item. If kept, raise price significantly.'
END AS recommended_action
FROM item_metrics im
CROSS JOIN averages a
)
SELECT
item_name,
category,
current_price,
food_cost,
contribution_margin,
food_cost_pct,
total_qty_sold,
total_revenue,
avg_daily_qty,
menu_class,
recommended_action,
avg_popularity AS benchmark_popularity,
avg_margin AS benchmark_margin
FROM classified
ORDER BY
CASE menu_class
WHEN 'Star' THEN 1
WHEN 'Puzzle' THEN 2
WHEN 'Plow Horse' THEN 3
WHEN 'Dog' THEN 4
END,
total_revenue DESC;
Server Performance Scoring Agent
Type: workflow
A SQL-based scoring system that calculates a composite performance score for each server based on multiple weighted metrics: average check size (35% weight), items per order / upsell rate (25%), total revenue generated (20%), consistency score based on standard deviation (10%), and void/comp rate (10%). The composite score enables fair, data-driven server coaching conversations. Results are displayed on the Server Performance Leaderboard dashboard.
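The scoring arithmetic is easiest to sanity-check outside SQL. A minimal sketch of the min-max normalization and the weighted blend (function names are illustrative; the fall-back to 50 matches the SQL's ELSE branch when every server shares the same value):

```javascript
// Min-max normalize one metric across all servers onto a 0-100 scale.
function minMaxNorm(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  return values.map(v => (max - min > 0 ? ((v - min) / (max - min)) * 100 : 50));
}

// Weighted composite: check size 35%, items/order 25%, revenue 20%,
// consistency 10%, void/comp 10%. Inverted metrics (consistency,
// void/comp) are assumed already flipped so higher = better.
function compositeScore(norm) {
  return Math.round((
    norm.checkSize * 0.35 +
    norm.upsell * 0.25 +
    norm.revenue * 0.20 +
    norm.consistency * 0.10 +
    norm.voidComp * 0.10
  ) * 10) / 10;
}
```

Because the weights sum to 1.0, a server who leads every normalized metric scores exactly 100.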
Implementation:
-- Metabase SQL Native Question: Server Performance Composite Score
-- Parameters: {{restaurant_id}}, {{start_date}}, {{end_date}}
-- Save as: 'Server Scorecard - [Client Name]'
WITH server_raw AS (
SELECT
s.server_id,
s.first_name || ' ' || LEFT(s.last_name, 1) || '.' AS server_display_name,
s.role,
COUNT(DISTINCT foi.order_id) AS total_orders,
SUM(foi.net_amount) AS total_revenue,
SUM(foi.net_amount) / NULLIF(COUNT(DISTINCT foi.order_id), 0) AS avg_check_size,
SUM(foi.quantity)::DECIMAL / NULLIF(COUNT(DISTINCT foi.order_id), 0) AS avg_items_per_order,
SUM(CASE WHEN foi.void_flag THEN 1 ELSE 0 END)::DECIMAL / NULLIF(COUNT(*), 0) * 100 AS void_pct,
SUM(CASE WHEN foi.comp_flag THEN 1 ELSE 0 END)::DECIMAL / NULLIF(COUNT(*), 0) * 100 AS comp_pct,
STDDEV_POP(foi.net_amount) AS revenue_stddev,
COUNT(DISTINCT foi.order_date) AS days_worked
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_server s ON foi.server_id = s.server_id
WHERE foi.restaurant_id = {{restaurant_id}}
AND foi.order_date BETWEEN {{start_date}} AND {{end_date}}
AND s.role IN ('Server', 'Bartender')
GROUP BY s.server_id, s.first_name, s.last_name, s.role
HAVING COUNT(DISTINCT foi.order_date) >= 3 -- minimum 3 shifts to be scored
),
normalized AS (
SELECT *,
-- Normalize each metric to 0-100 scale using min-max normalization
CASE WHEN MAX(avg_check_size) OVER() - MIN(avg_check_size) OVER() > 0
THEN (avg_check_size - MIN(avg_check_size) OVER()) /
(MAX(avg_check_size) OVER() - MIN(avg_check_size) OVER()) * 100
ELSE 50 END AS norm_check_size,
CASE WHEN MAX(avg_items_per_order) OVER() - MIN(avg_items_per_order) OVER() > 0
THEN (avg_items_per_order - MIN(avg_items_per_order) OVER()) /
(MAX(avg_items_per_order) OVER() - MIN(avg_items_per_order) OVER()) * 100
ELSE 50 END AS norm_upsell,
CASE WHEN MAX(total_revenue) OVER() - MIN(total_revenue) OVER() > 0
THEN (total_revenue - MIN(total_revenue) OVER()) /
(MAX(total_revenue) OVER() - MIN(total_revenue) OVER()) * 100
ELSE 50 END AS norm_revenue,
-- Consistency: lower stddev is better, so invert
CASE WHEN MAX(revenue_stddev) OVER() - MIN(revenue_stddev) OVER() > 0
THEN 100 - ((revenue_stddev - MIN(revenue_stddev) OVER()) /
(MAX(revenue_stddev) OVER() - MIN(revenue_stddev) OVER()) * 100)
ELSE 50 END AS norm_consistency,
-- Void/comp: lower is better, so invert
CASE WHEN MAX(void_pct + comp_pct) OVER() - MIN(void_pct + comp_pct) OVER() > 0
THEN 100 - ((void_pct + comp_pct - MIN(void_pct + comp_pct) OVER()) /
(MAX(void_pct + comp_pct) OVER() - MIN(void_pct + comp_pct) OVER()) * 100)
ELSE 50 END AS norm_void_comp
FROM server_raw
),
scored AS (
SELECT *,
-- Weighted composite score (weights sum to 1.0)
ROUND((
norm_check_size * 0.35 +
norm_upsell * 0.25 +
norm_revenue * 0.20 +
norm_consistency * 0.10 +
norm_void_comp * 0.10
)::DECIMAL, 1) AS composite_score
FROM normalized
)
SELECT
ROW_NUMBER() OVER(ORDER BY composite_score DESC) AS rank,
server_display_name,
role,
days_worked,
total_orders,
ROUND(total_revenue::DECIMAL, 2) AS total_revenue,
ROUND(avg_check_size::DECIMAL, 2) AS avg_check_size,
ROUND(avg_items_per_order::DECIMAL, 1) AS avg_items_per_order,
ROUND(void_pct::DECIMAL, 2) AS void_pct,
ROUND(comp_pct::DECIMAL, 2) AS comp_pct,
composite_score,
CASE
WHEN composite_score >= 80 THEN 'Top Performer'
WHEN composite_score >= 60 THEN 'Solid'
WHEN composite_score >= 40 THEN 'Developing'
ELSE 'Coaching Needed'
END AS performance_tier
FROM scored
ORDER BY composite_score DESC;
Sales Anomaly Detection Alert
Type: workflow
An n8n workflow that runs daily after the ETL pipeline completes, compares yesterday's sales metrics against trailing 4-week averages, and sends an alert to the restaurant manager if any daypart's revenue deviates by more than 20% from the expected range. Also flags individual menu items with sudden popularity spikes or drops (>30% deviation). Alerts are sent via email and optionally Slack.
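The alert decision itself is a one-liner worth unit-testing. A sketch of the per-daypart deviation check (illustrative function name; the ±20% threshold matches the SQL below):

```javascript
// Flag a daypart when yesterday's revenue deviates more than the
// threshold from the trailing same-day-of-week baseline average.
function anomalyStatus(yesterdaySales, baselineAvg, thresholdPct = 20) {
  if (!baselineAvg) return { pctDeviation: null, status: 'NO BASELINE' };
  const pctDeviation = Math.round(((yesterdaySales - baselineAvg) / baselineAvg) * 1000) / 10;
  return {
    pctDeviation,
    status: Math.abs(pctDeviation) > thresholdPct ? 'ALERT' : 'NORMAL',
  };
}
```

The zero-baseline guard mirrors the SQL's NULLIF: a daypart with no baseline history should never page the manager.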
Implementation:
-- Daypart anomaly detection query (yesterday vs 4-week same-day baseline)
WITH yesterday AS (
SELECT
daypart_name,
total_sales,
order_count,
items_sold
FROM restaurant.mv_daily_daypart_summary
WHERE restaurant_id = 1
AND order_date = CURRENT_DATE - INTERVAL '1 day'
),
baseline AS (
SELECT
daypart_name,
AVG(total_sales) AS avg_sales,
STDDEV_POP(total_sales) AS stddev_sales,
AVG(order_count) AS avg_orders
FROM restaurant.mv_daily_daypart_summary
WHERE restaurant_id = 1
AND order_date BETWEEN CURRENT_DATE - INTERVAL '29 days' AND CURRENT_DATE - INTERVAL '2 days'
AND EXTRACT(DOW FROM order_date) = EXTRACT(DOW FROM CURRENT_DATE - INTERVAL '1 day')
GROUP BY daypart_name
)
SELECT
y.daypart_name,
y.total_sales AS yesterday_sales,
ROUND(b.avg_sales::DECIMAL, 2) AS baseline_avg,
ROUND(((y.total_sales - b.avg_sales) / NULLIF(b.avg_sales, 0) * 100)::DECIMAL, 1) AS pct_deviation,
CASE
WHEN ABS((y.total_sales - b.avg_sales) / NULLIF(b.avg_sales, 0)) > 0.20 THEN 'ALERT'
ELSE 'NORMAL'
END AS status
FROM yesterday y
JOIN baseline b ON y.daypart_name = b.daypart_name;
<h2>Daily Sales Anomaly Report</h2>
<p>The following dayparts showed unusual sales activity yesterday
compared to the same day-of-week average over the past 4 weeks:</p>
<table border='1' cellpadding='8' style='border-collapse:collapse'>
<tr style='background:#f0f0f0'>
<th>Daypart</th><th>Yesterday</th><th>4-Week Avg</th><th>Deviation</th>
</tr>
{{#each $json}}
<tr style='{{#if (eq status "ALERT")}}background:#fff3cd{{/if}}'>
<td>{{daypart_name}}</td>
<td>${{yesterday_sales}}</td>
<td>${{baseline_avg}}</td>
<td>{{pct_deviation}}%</td>
</tr>
{{/each}}
</table>
<p style='color:#666; font-size:12px'>This alert triggers when any daypart
deviates ±20% from the trailing 4-week same-day average. Review the full
dashboard at <a href='https://analytics.yourmsp.com'>analytics.yourmsp.com</a>.</p>
WITH yesterday_items AS (
SELECT
mi.item_name,
SUM(foi.quantity) AS qty_yesterday
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_menu_item mi ON foi.menu_item_id = mi.menu_item_id
WHERE foi.restaurant_id = 1
AND foi.order_date = CURRENT_DATE - INTERVAL '1 day'
AND foi.void_flag = FALSE
GROUP BY mi.item_name
),
baseline_items AS (
SELECT
mi.item_name,
AVG(daily_qty) AS avg_daily_qty
FROM (
SELECT mi.item_name, foi.order_date, SUM(foi.quantity) AS daily_qty
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_menu_item mi ON foi.menu_item_id = mi.menu_item_id
WHERE foi.restaurant_id = 1
AND foi.order_date BETWEEN CURRENT_DATE - INTERVAL '29 days' AND CURRENT_DATE - INTERVAL '2 days'
AND foi.void_flag = FALSE
GROUP BY mi.item_name, foi.order_date
) sub
GROUP BY mi.item_name
HAVING COUNT(*) >= 5
)
SELECT
y.item_name,
y.qty_yesterday,
ROUND(b.avg_daily_qty::DECIMAL, 1) AS avg_daily_qty,
ROUND(((y.qty_yesterday - b.avg_daily_qty) / NULLIF(b.avg_daily_qty, 0) * 100)::DECIMAL, 1) AS pct_change
FROM yesterday_items y
JOIN baseline_items b ON y.item_name = b.item_name
WHERE ABS((y.qty_yesterday - b.avg_daily_qty) / NULLIF(b.avg_daily_qty, 0)) > 0.30
ORDER BY ABS(pct_change) DESC
LIMIT 10;
Hourly Sales Heatmap Generator
Type: prompt
A Metabase SQL query that generates an hour-of-day × day-of-week sales heatmap, which is the primary tool for staffing optimization. Restaurant managers use this to visually identify peak hours that need more staff and dead zones where staff can be reduced. The heatmap data feeds both the Daypart Deep Dive dashboard and the labor scheduling recommendations.
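The staffing tiers are driven by percentile cutoffs over the heatmap cells. A sketch of the same logic, using linear-interpolation percentiles to approximate PERCENTILE_CONT (function names are illustrative):

```javascript
// Linear-interpolation percentile over a pre-sorted ascending array,
// matching PostgreSQL's PERCENTILE_CONT semantics.
function percentile(sorted, p) {
  const idx = (sorted.length - 1) * p;
  const lo = Math.floor(idx);
  const hi = Math.ceil(idx);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
}

// Tier one heatmap cell against the 80th/50th/20th percentiles of all cells.
function staffingTier(cellSales, allCellSales) {
  const sorted = [...allCellSales].sort((a, b) => a - b);
  if (cellSales >= percentile(sorted, 0.80)) return 'Peak - Full Staff';
  if (cellSales >= percentile(sorted, 0.50)) return 'Moderate - Standard Staff';
  if (cellSales >= percentile(sorted, 0.20)) return 'Low - Reduced Staff';
  return 'Minimal - Skeleton Crew';
}
```

Relative cutoffs (rather than fixed dollar thresholds) let the same tiering logic work for a high-volume and a low-volume location without reconfiguration.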
Implementation:
-- Visualization: Pivot Table or Heatmap (rows=hour, columns=day_of_week).
-- Parameters: {{restaurant_id}}, {{start_date}}, {{end_date}}. Save as:
-- 'Hourly Sales Heatmap - [Client Name]'
WITH hourly_sales AS (
SELECT
EXTRACT(DOW FROM foi.order_timestamp)::INT AS day_of_week_num,
CASE EXTRACT(DOW FROM foi.order_timestamp)::INT
WHEN 0 THEN 'Sunday'
WHEN 1 THEN 'Monday'
WHEN 2 THEN 'Tuesday'
WHEN 3 THEN 'Wednesday'
WHEN 4 THEN 'Thursday'
WHEN 5 THEN 'Friday'
WHEN 6 THEN 'Saturday'
END AS day_of_week,
EXTRACT(HOUR FROM foi.order_timestamp)::INT AS hour_of_day,
SUM(foi.net_amount) AS total_sales,
COUNT(DISTINCT foi.order_id) AS order_count,
COUNT(DISTINCT foi.order_date) AS num_days_in_range
FROM restaurant.fact_order_item foi
WHERE foi.restaurant_id = {{restaurant_id}}
AND foi.order_date BETWEEN {{start_date}} AND {{end_date}}
AND foi.void_flag = FALSE
GROUP BY 1, 2, 3
),
avg_hourly AS (
SELECT
day_of_week_num,
day_of_week,
hour_of_day,
ROUND((total_sales / NULLIF(num_days_in_range, 0))::DECIMAL, 2) AS avg_hourly_sales,
ROUND((order_count::DECIMAL / NULLIF(num_days_in_range, 0))::DECIMAL, 1) AS avg_hourly_orders,
total_sales,
order_count
FROM hourly_sales
)
SELECT
-- Format hour for display
CASE
WHEN hour_of_day = 0 THEN '12 AM'
WHEN hour_of_day < 12 THEN hour_of_day || ' AM'
WHEN hour_of_day = 12 THEN '12 PM'
ELSE (hour_of_day - 12) || ' PM'
END AS hour_label,
hour_of_day,
day_of_week,
day_of_week_num,
avg_hourly_sales,
avg_hourly_orders,
-- Staffing recommendation based on relative volume
CASE
WHEN avg_hourly_sales >= (SELECT PERCENTILE_CONT(0.80) WITHIN GROUP (ORDER BY avg_hourly_sales) FROM avg_hourly)
THEN 'Peak - Full Staff'
WHEN avg_hourly_sales >= (SELECT PERCENTILE_CONT(0.50) WITHIN GROUP (ORDER BY avg_hourly_sales) FROM avg_hourly)
THEN 'Moderate - Standard Staff'
WHEN avg_hourly_sales >= (SELECT PERCENTILE_CONT(0.20) WITHIN GROUP (ORDER BY avg_hourly_sales) FROM avg_hourly)
THEN 'Low - Reduced Staff'
ELSE 'Minimal - Skeleton Crew'
END AS staffing_recommendation
FROM avg_hourly
WHERE avg_hourly_sales > 0
ORDER BY hour_of_day, day_of_week_num;
In Metabase, configure this as a Pivot Table: Rows: hour_label (sorted by hour_of_day) | Columns: day_of_week (sorted by day_of_week_num) | Values: avg_hourly_sales (with conditional formatting / color scale). This creates a visual heatmap where dark cells = high sales hours.
Weekly Management Digest Email
Type: workflow
An n8n workflow that compiles a comprehensive weekly summary email sent every Monday morning at 7:00 AM. The email includes: total revenue vs prior week, revenue by daypart with week-over-week trends, top 5 rising and declining menu items, server leaderboard top 3 and bottom 3, menu engineering changes (items that shifted quadrants), and a labor cost summary. This replaces the need for managers to log into dashboards for the weekly overview.
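Two pieces of digest arithmetic are worth isolating and testing: the week-over-week percentage change (with a guard for a zero prior week, like the SQL's NULLIF) and the top-movers selection. A minimal sketch with illustrative names:

```javascript
// Week-over-week change as a percentage, rounded to one decimal place.
// Returns null when the prior week had no revenue to compare against.
function wowChangePct(thisWeek, lastWeek) {
  if (!lastWeek) return null;
  return Math.round(((thisWeek - lastWeek) / lastWeek) * 1000) / 10;
}

// Top N movers by absolute quantity change, feeding the
// rising/declining item lists in the digest email.
function topMovers(items, n = 5) {
  return [...items]
    .sort((a, b) => Math.abs(b.qtyChange) - Math.abs(a.qtyChange))
    .slice(0, n);
}
```

Sorting by absolute change surfaces sharp declines alongside spikes, which is usually the more actionable half of the list.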
Implementation:
WITH this_week AS (
SELECT
SUM(net_amount) AS total_revenue,
COUNT(DISTINCT order_id) AS total_orders,
SUM(net_amount) / NULLIF(COUNT(DISTINCT order_id), 0) AS avg_check
FROM restaurant.fact_order_item
WHERE restaurant_id = 1
AND order_date BETWEEN CURRENT_DATE - INTERVAL '7 days' AND CURRENT_DATE - INTERVAL '1 day'
AND void_flag = FALSE
),
last_week AS (
SELECT
SUM(net_amount) AS total_revenue,
COUNT(DISTINCT order_id) AS total_orders,
SUM(net_amount) / NULLIF(COUNT(DISTINCT order_id), 0) AS avg_check
FROM restaurant.fact_order_item
WHERE restaurant_id = 1
AND order_date BETWEEN CURRENT_DATE - INTERVAL '14 days' AND CURRENT_DATE - INTERVAL '8 days'
AND void_flag = FALSE
)
SELECT
tw.total_revenue AS this_week_revenue,
lw.total_revenue AS last_week_revenue,
ROUND(((tw.total_revenue - lw.total_revenue) / NULLIF(lw.total_revenue, 0) * 100)::DECIMAL, 1) AS revenue_change_pct,
tw.total_orders AS this_week_orders,
ROUND(tw.avg_check::DECIMAL, 2) AS this_week_avg_check,
ROUND(lw.avg_check::DECIMAL, 2) AS last_week_avg_check
FROM this_week tw, last_week lw;
WITH this_week_items AS (
SELECT mi.item_name, SUM(foi.quantity) AS qty
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_menu_item mi ON foi.menu_item_id = mi.menu_item_id
WHERE foi.restaurant_id = 1
AND foi.order_date BETWEEN CURRENT_DATE - INTERVAL '7 days' AND CURRENT_DATE - INTERVAL '1 day'
AND foi.void_flag = FALSE
GROUP BY mi.item_name
),
last_week_items AS (
SELECT mi.item_name, SUM(foi.quantity) AS qty
FROM restaurant.fact_order_item foi
JOIN restaurant.dim_menu_item mi ON foi.menu_item_id = mi.menu_item_id
WHERE foi.restaurant_id = 1
AND foi.order_date BETWEEN CURRENT_DATE - INTERVAL '14 days' AND CURRENT_DATE - INTERVAL '8 days'
AND foi.void_flag = FALSE
GROUP BY mi.item_name
)
SELECT
COALESCE(tw.item_name, lw.item_name) AS item_name,
COALESCE(tw.qty, 0) AS this_week_qty,
COALESCE(lw.qty, 0) AS last_week_qty,
COALESCE(tw.qty, 0) - COALESCE(lw.qty, 0) AS qty_change,
ROUND(((COALESCE(tw.qty, 0) - COALESCE(lw.qty, 0))::DECIMAL / NULLIF(lw.qty, 0) * 100)::DECIMAL, 1) AS pct_change
FROM this_week_items tw
FULL OUTER JOIN last_week_items lw ON tw.item_name = lw.item_name
ORDER BY qty_change DESC;
Email template structure:
Testing & Validation
-- verify loaded row count against POS reported item count for the prior day
SELECT COUNT(*) FROM restaurant.fact_order_item WHERE order_date = CURRENT_DATE - INTERVAL '1 day';
# confirm PostgreSQL port is not exposed to the public internet (expected
# result: 'filtered' or 'closed')
nmap -p 5432 <droplet-ip>
Client Handoff
The client handoff should be conducted as a 90-minute on-site session with the restaurant owner and general manager, covering the following:
1. Dashboard Training (30 minutes)
- Login and navigation on desktop and mobile (bookmarked URLs, saved credentials)
- How to use date range filters and daypart filters
- Reading the Daily Sales Overview: KPIs, trends, top items
- Understanding the Hourly Sales Heatmap and its staffing implications
2. Menu Engineering Training (20 minutes)
- Explanation of the 4-quadrant menu engineering matrix (Star, Puzzle, Plow Horse, Dog)
- What actions to take for each quadrant (promote Stars, reprice Plow Horses, increase visibility of Puzzles, evaluate Dogs for removal)
- How to track the impact of menu changes over time using the trend views
- Demonstrate MarginEdge's native menu engineering view alongside the custom Metabase dashboards
3. Server Performance & Labor (15 minutes)
- How to read the Server Performance Leaderboard and composite scores
- Best practices for using data in coaching conversations (focus on positive reinforcement and specific behaviors like upselling)
- Privacy considerations: who should have access to server-level data (managers only, never posted publicly)
- How 7shifts labor targets connect to the heatmap staffing recommendations
4. Email Digests & Alerts (10 minutes)
- What the daily anomaly alert looks like and when it triggers (±20% deviation)
- What the weekly management digest contains and when it arrives (Monday 7 AM)
- How to request changes to alert thresholds or report recipients
5. Documentation Package (leave behind)
- Laminated Quick Reference Card with: dashboard URL, login process, menu engineering quadrant definitions with action items, MSP support contact, and escalation path
- Digital folder (shared via OneDrive/Google Drive) containing: network diagram, VLAN documentation, all account credentials (encrypted), vendor contact list, and this implementation guide
- 30-day post-go-live support plan details
6. Success Criteria Review (15 minutes)
Maintenance
Ongoing MSP Responsibilities:
Daily (Automated)
- n8n ETL pipeline runs at 3:00 AM — monitor for failures via n8n execution log
- Materialized views refresh at 3:30 AM — verify via automated health check
- Anomaly detection email triggers at 7:15 AM — confirm delivery
- Monitor cloud VM uptime via DigitalOcean alerting or external uptime monitor (UptimeRobot, free tier)
Weekly
- Review n8n workflow execution history for any failed runs or errors (every Monday)
- Check PostgreSQL database size and growth rate — plan storage expansion if approaching 80% capacity
- Verify Metabase is accessible and dashboards load within performance thresholds (<5 seconds)
- Review UniFi network controller alerts for any connectivity issues, AP disconnects, or threat detections
Monthly (Included in Managed Service)
- 30-minute virtual review meeting with restaurant manager to discuss insights, answer questions, and identify new reporting needs
- Update menu item dimension table if menu changes occurred (new items, discontinued items, price changes)
- Review and adjust daypart definitions if operating hours changed
- Apply OS security patches to the cloud VM (unattended-upgrades handles most, but verify monthly)
- Update Docker images (Metabase, n8n, PostgreSQL) if security updates are available — test in staging first
- Export monthly analytics summary report (PDF) for client records
Quarterly
- Comprehensive security review: verify VLAN isolation, check firewall rules, rotate passwords, verify MFA enforcement
- PCI compliance check: run ASV scan if applicable, verify no cardholder data in analytics pipeline
- Performance optimization: VACUUM ANALYZE on PostgreSQL tables, review slow query log, optimize materialized views if needed
- Review and adjust server performance scoring weights based on manager feedback
- Evaluate new POS firmware/software updates and apply during a scheduled maintenance window
- Review MarginEdge and 7shifts for new features that should be enabled
Annually
- Full platform review: assess whether current tools still meet the client's needs as they grow
- Contract renewal and pricing review
- Archive data older than 3 years to cold storage (if not needed for trending)
- Update disaster recovery documentation and test backup restoration
Escalation Path
- Level 1 (MSP Help Desk): Dashboard access issues, password resets, email digest questions — respond within 4 hours during business hours
- Level 2 (MSP Engineer): ETL failures, data discrepancies, network issues, POS integration problems — respond within 2 hours
- Level 3 (MSP Architect / Vendor Support): Platform outages, major data corruption, security incidents — respond within 1 hour, engage vendor support as needed
- Critical (POS Down): If the POS system or payment processing is affected, treat as P1 with 30-minute response SLA — this is revenue-impacting
Model/Analytics Retraining Triggers
- Menu overhaul (>20% of items changed): Re-run menu engineering baseline and update dimension tables
- Seasonal shift (daylight saving, summer/winter hours): Adjust daypart boundaries
- Staffing model change (new roles, tip pooling changes): Recalibrate server scoring weights
- New location added: Provision new restaurant_id in database, configure POS integration, clone dashboard collection
Alternatives
All-in-One Platform Approach (Restaurant365 or CrunchTime)
Instead of the multi-tool approach (MarginEdge + 7shifts + Metabase), deploy a single all-in-one platform like Restaurant365 ($249+/location/month) or CrunchTime that provides accounting, inventory, labor, and analytics in a unified system. This eliminates the need for custom Metabase dashboards, PostgreSQL data warehouse, and n8n ETL pipelines. The MSP's role shifts to platform configuration and ongoing management rather than infrastructure hosting.
Toast Ecosystem-Only Approach
For clients already on Toast POS, leverage Toast's native analytics (Toast Pulse), xtraCHEF by Toast for menu costing ($179/location/month), and Toast Payroll for labor data. This keeps everything within a single vendor ecosystem and leverages Toast's built-in reporting for daypart analysis and server performance, supplemented only by 7shifts for advanced scheduling.
Power BI Pro Custom Dashboard Approach
Replace the self-hosted Metabase + PostgreSQL stack with Microsoft Power BI Pro ($14/user/month) connecting directly to POS APIs via Power BI dataflows or a lightweight Azure SQL Database. Build all dashboards in Power BI with its rich visualization library. This leverages the MSP's existing Microsoft CSP relationship and avoids managing open-source infrastructure.
Lightspeed Advanced Insights Native Approach
For clients on Lightspeed Restaurant POS, leverage Lightspeed's built-in Advanced Insights module (included in all plans) which provides the 'Magic Menu Quadrant' — Lightspeed's proprietary version of the menu engineering matrix that classifies items as 'Greatest Hits,' 'One-Hit Wonders,' 'Underperformers,' and 'Hidden Gems.' Supplement with 7shifts for labor analytics.
AI-First Forecasting Approach (Lineup.ai + MarginEdge)
Add Lineup.ai as the primary forecasting engine alongside MarginEdge for menu costing. Lineup.ai uses machine learning to generate hourly sales forecasts, item-level demand predictions, and optimized labor schedules. This adds predictive capability beyond the descriptive analytics of the primary approach.