
Implementation Guide: Monitor food cost variances in real time and flag suppliers with price anomalies
Step-by-step implementation guide for deploying AI to monitor food cost variances in real time and flag suppliers with price anomalies for Restaurants & Food Service clients.
Hardware Procurement
Back-Office Workstation
$550 MSP cost / $800 suggested resale per unit
Primary back-office workstation for dashboard access, invoice scanning station, AI agent monitoring, and daily food cost review. Intel i5-14400T, 16GB DDR5, 512GB NVMe SSD, Wi-Fi 6, Windows 11 Pro. Positioned at the manager's desk or back office.
High-Speed Invoice Scanner
$500 MSP cost / $675 suggested resale per unit
Rapid digitization of paper supplier invoices, delivery tickets, and credit memos. 40 ppm duplex scanning with 4.3-inch touchscreen, Wi-Fi connectivity, and intelligent document extraction that creates searchable PDFs and exports data fields to the food cost platform.
Mobile Inventory Tablet
$330 MSP cost / $425 suggested resale per unit
Used by kitchen staff and managers for mobile inventory counting during walk-in checks, delivery receiving verification, quick cost lookups on the food cost platform's mobile app, and reviewing AI agent alerts on the floor.
Business Router/Firewall
$350 MSP cost / $500 suggested resale per unit
Core network appliance providing VLAN segmentation between POS/payment network and back-office/AI network, firewall rules, IDS/IPS, and VPN access for MSP remote management. Ensures PCI DSS 4.0 network segmentation compliance.
Managed PoE Switch
$110 MSP cost / $160 suggested resale per unit
Managed Layer 2 switch for back-office network with PoE for access points. Supports VLAN tagging to isolate POS traffic from back-office/AI agent traffic.
Wireless Temperature Sensors (4-pack)
$180 MSP cost (4 sensors) / $260 suggested resale
Optional IoT sensors for walk-in cooler and freezer monitoring. Correlates temperature excursion events with spoilage-related cost variances, providing the AI agent with environmental context for waste anomaly detection.
Software Procurement
...
MarketMan Growth Plan
$249/month per location
Core food cost and inventory management platform. Provides real-time recipe cost calculation, purchase order management, supplier catalog, invoice digitization and line-item extraction, actual vs. theoretical food cost tracking, and API access for the AI agent layer. No setup fees, no per-user charges. Dedicated onboarding team included.
n8n Cloud Pro Plan
$50/month for 10,000 workflow executions
AI agent orchestration platform. Visual workflow builder that connects MarketMan API data to anomaly detection logic, LLM analysis, and alerting channels. Supports autonomous AI agent nodes that make decisions, interact with APIs, and execute multi-step tasks without human intervention. Provides webhook triggers, scheduled executions, error handling, and execution logging.
OpenAI API (GPT-4.1-mini)
$15–$75/month ($0.40/M input tokens, $1.60/M output tokens)
Large language model API powering invoice data parsing, supplier price anomaly narrative generation, cost variance root cause analysis, and natural-language alert composition. GPT-4.1-mini provides the optimal cost-performance ratio for structured data analysis tasks with its 1M+ token context window.
OpenAI API (GPT-4o-mini) — Classification Tier
$5–$15/month ($0.15/M input tokens, $0.60/M output tokens)
Ultra-low-cost model used for simple classification tasks: categorizing invoice line items, routing alerts by severity level, and binary anomaly yes/no decisions before escalating to GPT-4.1-mini for detailed analysis.
Slack Pro Plan
$8.75/user/month (or use free tier for small teams)
Primary alerting channel for AI agent notifications. Dedicated #food-cost-alerts channel receives real-time anomaly flags, daily cost variance summaries, and weekly supplier benchmarking reports. Supports threaded discussions for action tracking.
PostgreSQL Database (managed)
$15/month
Persistent storage for historical price data, anomaly detection baselines, agent decision logs, and audit trail. Used by n8n workflows to query historical supplier pricing for z-score anomaly calculations and trend analysis.
Twilio SMS API
$5–$15/month (~$0.0079/SMS sent)
SMS alerting for critical price anomalies that exceed configurable thresholds. Ensures restaurant owners/managers receive urgent notifications even when not monitoring Slack or email. Used as escalation path for high-severity alerts.
Prerequisites
- Active POS system (Toast, Square, Lightspeed, Revel, Clover, or similar) with API access enabled or data export capability. Toast requires API credentials (Client ID and Client Secret) from the Toast Developer Portal.
- Active accounting software subscription (QuickBooks Online, Xero, or Sage) with administrator credentials for integration setup.
- All menu items digitized with complete recipe cards: every ingredient, portion size (in consistent units — oz, lb, each), and current unit cost. Minimum 80% recipe coverage required before Phase 3.
- Vendor/supplier catalog with item names, SKUs, pack sizes, and unit-of-measure mappings. Must cover all active suppliers (typically 5–15 for an independent restaurant).
- Minimum 3 months of historical purchase invoices (paper or digital) for baseline anomaly detection training. 6 months preferred for seasonal adjustment.
- Dedicated business email address for invoice forwarding (e.g., invoices@restaurantname.com) — most food cost platforms accept emailed invoices for automatic processing.
- Reliable internet connection: minimum 25 Mbps download / 10 Mbps upload per location. Verify with speed test during site survey.
- Designated back-office area with power outlet and ethernet connectivity for workstation and scanner placement.
- Restaurant manager or owner identified as primary point of contact with authority to approve supplier changes and operational process modifications.
- OpenAI API account with billing configured and API key generated. Organization-level account recommended for usage tracking.
- Slack workspace created (or Teams/email distribution list) with #food-cost-alerts channel and all relevant stakeholders invited.
- PCI DSS network segmentation: confirm that POS/payment processing is on a separate network segment from back-office systems. If not, VLAN configuration is prerequisite to deployment.
Installation Steps
...
Step 1: Site Survey and Network Assessment
Conduct a physical site survey of the restaurant to assess existing network infrastructure, identify placement for back-office workstation and scanner, verify internet speeds, document current POS system details, and evaluate network segmentation status. Map all existing technology including POS terminals, kitchen display systems, printers, and any existing inventory management tools.
speedtest-cli --simple
nmap -sn 192.168.1.0/24
- Document: POS vendor, model, software version
- Document: Router make/model, current VLAN config
- Document: Number of network drops in back office
- Photo: Proposed workstation/scanner placement
Schedule site survey during off-peak hours (2-4 PM typical). Bring a laptop with Ethernet adapter for wired speed test. If internet speed is below 25/10 Mbps, recommend ISP upgrade before proceeding. Check if existing router supports VLANs — if not, the Ubiquiti UDM-Pro becomes mandatory rather than optional.
Step 2: Network Infrastructure Deployment
Install and configure the Ubiquiti UniFi Dream Machine Pro and USW-Lite-8-PoE switch. Create VLAN segmentation to isolate POS/payment traffic (VLAN 10) from back-office/AI operations traffic (VLAN 20). Configure firewall rules, enable IDS/IPS, and set up VPN for MSP remote management access.
# Access UDM-Pro at https://192.168.1.1 after initial setup
# Create Networks in UniFi Controller:
# VLAN 10 - POS Network: 10.10.10.0/24 (POS terminals, payment devices)
# VLAN 20 - Back Office: 10.20.20.0/24 (workstation, scanner, tablets)
# VLAN 30 - IoT: 10.30.30.0/24 (temperature sensors, optional)
# Firewall Rules:
# Allow VLAN 20 -> Internet (HTTPS 443 outbound only for SaaS/API)
# Block VLAN 20 -> VLAN 10 (no back-office access to POS network)
# Allow VLAN 20 -> VLAN 10 on specific ports ONLY if POS API is on-premise
# Block all inter-VLAN traffic by default
# Enable Threat Management (IDS/IPS) on WAN interface
# Configure Site-to-Site VPN or WireGuard for MSP remote access
# Set DHCP reservations for workstation, scanner, and tablet
PCI DSS 4.0 requires network segmentation between the cardholder data environment and all other systems. Document all firewall rules for the compliance audit trail. Test that POS terminals can still reach their payment processor after VLAN changes. If the restaurant has existing Meraki or SonicWall equipment, adapt the VLAN configuration to that platform instead.
Step 3: Workstation and Scanner Hardware Setup
Unbox and configure the Lenovo ThinkCentre M70q Gen 5 and Ricoh ScanSnap iX1600. Install Windows 11 Pro updates, join the back-office VLAN, install ScanSnap Home software, configure scan profiles for invoices, and set up the scanner to auto-forward scanned documents to the food cost platform's invoice email address.
# Windows 11 Pro initial setup — use local admin account, not Microsoft account
# Install all Windows Updates
winget install --id Notepad++.Notepad++ -e
winget install --id Google.Chrome -e
winget install --id SlackTechnologies.Slack -e
winget install --id Python.Python.3.12 -e
# Download and install ScanSnap Home from https://www.pfu.ricoh.com/global/scanners/scansnap/dl/
# In ScanSnap Home, create scan profile:
# Name: 'Supplier Invoice'
# Color Mode: Auto
# Resolution: 300 dpi (optimal for OCR)
# File Format: Searchable PDF
# Destination: Email to invoices@restaurantname.com
# Also save to: C:\Invoices\{Year}\{Month}\ (local backup)
# Create second profile:
# Name: 'Delivery Ticket'
# Same settings but also CC manager email
# Configure Windows Defender + set up auto-updates
# Set power plan to 'High Performance' — never sleep during business hours
# Install RMM agent (ConnectWise/Datto/NinjaRMM per MSP standard)
The ScanSnap iX1600's 4.3-inch touchscreen allows kitchen staff to select the correct scan profile without using the computer. Train the client to use the 'Supplier Invoice' profile for all incoming invoices. Consider setting up a third scan profile for credit memos. Ensure ScanSnap Home OCR language is set to English. The local backup folder provides redundancy in case email delivery fails.
Step 4: Food Cost Platform Deployment — MarketMan Setup
Create the client's MarketMan account on the Growth plan ($249/mo per location). Complete initial onboarding including POS integration, supplier catalog setup, recipe/menu item entry, and invoice workflow configuration. MarketMan's onboarding team assists with this phase — schedule the kickoff call.
Recipe entry is the most time-consuming step — budget 2-4 hours for a typical 50-item menu. Request the client's existing recipe documentation (even if it's handwritten or in a spreadsheet). MarketMan's onboarding team can assist with bulk import via CSV template. The Growth plan includes unlimited users and API access, which is required for the AI agent layer. Critical: verify that pack sizes and units of measure are consistent between supplier catalog and recipes (e.g., pounds vs. ounces) — mismatched units are the #1 source of food cost calculation errors.
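The pounds-vs-ounces failure mode is worth making concrete. A minimal sketch (the helper and prices are illustrative, not MarketMan code) showing how a recipe costed per ounce against a catalog item priced per pound silently doubles the computed food cost:

```javascript
// Hypothetical helper: convert a supplier price to a per-ounce cost so
// catalog entries and recipe cards can be compared on a common basis.
// Conversion factors cover the weight units used in this deployment.
const OZ_PER_UNIT = { oz: 1, lb: 16 };

function pricePerOunce(unitPrice, unitOfMeasure) {
  const factor = OZ_PER_UNIT[unitOfMeasure.toLowerCase()];
  if (!factor) throw new Error(`Unknown unit: ${unitOfMeasure}`);
  return unitPrice / factor;
}

// A catalog item bought at $4.00/lb works out to $4.00 / 16 = $0.25/oz.
// If the recipe card was entered as $0.50/oz, it overstates cost by 2x.
const catalogCost = pricePerOunce(4.00, 'lb'); // 0.25
const recipeCost = pricePerOunce(0.50, 'oz');  // 0.50
console.log(catalogCost, recipeCost, recipeCost / catalogCost);
```

Running a spot check like this against a handful of high-volume ingredients during onboarding catches most unit mismatches before they corrupt the baseline.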
Step 5: Historical Data Ingestion and Baseline Establishment
Scan and process 3–6 months of historical invoices through MarketMan to build the pricing baseline required for anomaly detection. This data accumulation phase typically runs 4–8 weeks for ongoing invoices, but historical invoices can be batch-processed in 1–2 weeks.
- Batch scan historical invoices using ScanSnap iX1600: Sort invoices by supplier and date
- Use 'Supplier Invoice' scan profile
- Process in batches of 20-30 invoices per scanning session
- MarketMan's AI extracts line items, prices, dates automatically
- Review and approve each invoice in MarketMan dashboard (first 50 invoices need careful review to train the system's item matching)
- Verify data quality after ingestion — In MarketMan: Reports > Purchase History > Filter by date range
- Check for missing suppliers (compare against actual supplier list)
- Check for items with $0 cost (failed extraction)
- Check for duplicate invoices (same invoice number + date)
- Check for unmapped items (items not linked to recipes)
# these endpoints will be called by the n8n agent. Document API key
# location.
GET https://api.marketman.com/v3/buyers/inventory/items
GET https://api.marketman.com/v3/buyers/invoices
The minimum viable baseline is 3 months of invoice data covering all active suppliers. 6 months is ideal as it captures seasonal price fluctuations (e.g., produce prices vary significantly by season). Assign a restaurant staff member to review and approve extracted invoice data daily during this phase — MarketMan's AI extraction improves over time with corrections. Flag any suppliers who deliver handwritten invoices — these may need manual entry or Ottimate for enhanced handwriting recognition.
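The statistics this baseline feeds are simple: a per-item mean and sample standard deviation over the window, against which each new invoice price is scored. A short sketch with illustrative prices (mirroring the AVG/STDDEV logic configured in Step 9):

```javascript
// Baseline statistics the anomaly detector derives from historical
// invoice prices. Matches Postgres AVG/STDDEV semantics (STDDEV is the
// sample standard deviation, dividing by n - 1).
function baseline(prices) {
  const n = prices.length;
  const avg = prices.reduce((s, p) => s + p, 0) / n;
  const variance = prices.reduce((s, p) => s + (p - avg) ** 2, 0) / (n - 1);
  return { avg, stdDev: Math.sqrt(variance), sampleCount: n };
}

// Example: chicken breast at $2.80-$3.20/lb across six invoices.
const b = baseline([2.80, 2.90, 3.00, 2.95, 3.10, 3.20]);
// A new invoice at $3.80/lb scores far above the critical z threshold:
const z = (3.80 - b.avg) / b.stdDev;
```

This is why the 3-invoice minimum matters: with fewer samples the standard deviation is unstable and z-scores are meaningless.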
Step 6: Database Setup for Agent State and Historical Analytics
Deploy a managed PostgreSQL database on DigitalOcean to store historical pricing data, anomaly detection baselines, agent decision logs, and audit trails. This database serves as the persistent memory for the AI agent, enabling z-score calculations, trend analysis, and compliance logging.
# 1. Create DigitalOcean Managed PostgreSQL Database
# Plan: Basic, 1 vCPU, 1GB RAM, 10GB storage ($15/mo)
# Region: Select nearest to restaurant location (e.g., nyc3, sfo3)
# Database name: foodcost_agent
# Note the connection string, host, port, user, password
# 2. Connect and create schema:
psql "postgresql://doadmin:<password>@<host>:25060/foodcost_agent?sslmode=require"
CREATE TABLE supplier_prices (
id SERIAL PRIMARY KEY,
supplier_name VARCHAR(255) NOT NULL,
item_name VARCHAR(255) NOT NULL,
item_sku VARCHAR(100),
unit_of_measure VARCHAR(50) NOT NULL,
unit_price DECIMAL(10,4) NOT NULL,
invoice_date DATE NOT NULL,
invoice_number VARCHAR(100),
pack_size VARCHAR(100),
location_id VARCHAR(50),
ingested_at TIMESTAMP DEFAULT NOW(),
source VARCHAR(50) DEFAULT 'marketman'
);
CREATE TABLE price_baselines (
id SERIAL PRIMARY KEY,
supplier_name VARCHAR(255) NOT NULL,
item_name VARCHAR(255) NOT NULL,
avg_price DECIMAL(10,4) NOT NULL,
std_dev DECIMAL(10,4) NOT NULL,
min_price DECIMAL(10,4),
max_price DECIMAL(10,4),
sample_count INTEGER NOT NULL,
period_start DATE NOT NULL,
period_end DATE NOT NULL,
calculated_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE anomaly_log (
id SERIAL PRIMARY KEY,
detected_at TIMESTAMP DEFAULT NOW(),
anomaly_type VARCHAR(50) NOT NULL,
severity VARCHAR(20) NOT NULL,
supplier_name VARCHAR(255),
item_name VARCHAR(255),
expected_price DECIMAL(10,4),
actual_price DECIMAL(10,4),
z_score DECIMAL(6,3),
variance_pct DECIMAL(6,2),
description TEXT,
ai_analysis TEXT,
action_taken VARCHAR(100) DEFAULT 'pending',
resolved_at TIMESTAMP,
resolved_by VARCHAR(100)
);
CREATE TABLE food_cost_variance (
id SERIAL PRIMARY KEY,
report_date DATE NOT NULL,
location_id VARCHAR(50),
category VARCHAR(100),
theoretical_cost DECIMAL(12,2),
actual_cost DECIMAL(12,2),
variance_amount DECIMAL(12,2),
variance_pct DECIMAL(6,2),
sales_total DECIMAL(12,2),
theoretical_pct DECIMAL(6,2),
actual_pct DECIMAL(6,2),
ai_analysis TEXT,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE agent_activity_log (
id SERIAL PRIMARY KEY,
timestamp TIMESTAMP DEFAULT NOW(),
agent_name VARCHAR(100),
action_type VARCHAR(50),
input_summary TEXT,
output_summary TEXT,
tokens_used INTEGER,
cost_usd DECIMAL(8,6),
success BOOLEAN DEFAULT TRUE,
error_message TEXT
);
CREATE INDEX idx_supplier_prices_lookup ON supplier_prices(supplier_name, item_name, invoice_date);
CREATE INDEX idx_anomaly_log_date ON anomaly_log(detected_at, severity);
CREATE INDEX idx_food_cost_variance_date ON food_cost_variance(report_date, location_id);
DigitalOcean managed PostgreSQL includes automated backups, failover, and SSL-encrypted connections. The free backup retention is 7 days. For compliance, consider enabling point-in-time recovery. The database will grow approximately 50-100MB per year for a single location processing 200-500 invoices/month. Scale to the $30/mo plan (2 vCPU, 2GB RAM) if adding more than 5 locations.
Step 7: n8n Cloud Deployment and Configuration
Set up the n8n Cloud Pro account, configure credentials for all integrated services (MarketMan API, OpenAI API, PostgreSQL, Slack, Twilio), and establish the base workflow environment. n8n serves as the agent orchestration platform — all autonomous workflows run here.
n8n Cloud Pro includes 10,000 executions/month which is sufficient for a single location running hourly checks plus daily reports. Each workflow execution counts as one execution regardless of the number of nodes. If self-hosting is preferred (for data sovereignty or cost), deploy n8n on a DigitalOcean Droplet: 2 vCPU, 4GB RAM, 80GB SSD ($24/mo) using docker-compose. The MarketMan API key has rate limits — check current limits in their developer docs and implement backoff in workflows.
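One way to implement the recommended backoff inside an n8n Code node is an exponential-delay retry wrapper. This is a sketch, not MarketMan-specific code — `withBackoff` is a hypothetical helper and the retry counts are assumptions to tune against MarketMan's published limits:

```javascript
// Retry a failing async call with exponential backoff: 500ms, 1s, 2s, 4s...
// `sleep` is injectable so the logic can be tested without real delays.
async function withBackoff(fn, { retries = 4, baseMs = 500, sleep } = {}) {
  sleep = sleep || (ms => new Promise(resolve => setTimeout(resolve, ms)));
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn(); // e.g., the MarketMan HTTP fetch
    } catch (err) {
      if (attempt >= retries) throw err; // exhausted: surface the error
      await sleep(baseMs * 2 ** attempt);
    }
  }
}
```

n8n's HTTP Request node also has built-in retry options; use whichever fits the workflow, but keep total retry time under the workflow's execution timeout.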
Step 8: Deploy AI Agent Workflows in n8n
Build and deploy the four core n8n workflows that comprise the autonomous agent system: (1) Invoice Data Ingestion, (2) Supplier Price Anomaly Detection, (3) Food Cost Variance Analysis, and (4) Weekly Supplier Benchmarking Report. These workflows are detailed in the custom_ai_components section and should be imported or built node-by-node in the n8n editor.
- Import workflows via n8n REST API or manually build in editor
- Workflow 1: Invoice Data Ingestion (runs every 2 hours) — Trigger: Schedule (every 2 hours during business hours 6AM-10PM) | Nodes: MarketMan API fetch -> Transform -> PostgreSQL insert -> Error handler
- Workflow 2: Supplier Price Anomaly Detection (runs after each ingestion) — Trigger: Webhook from Workflow 1 completion | Nodes: PostgreSQL query baselines -> Calculate z-scores -> Filter anomalies -> GPT-4.1-mini analysis -> Severity classification -> Slack/Twilio alert
- Workflow 3: Daily Food Cost Variance (runs daily at 6AM) — Trigger: Schedule (daily 6:00 AM) | Nodes: MarketMan sales data -> MarketMan purchase data -> Calculate actual vs theoretical -> GPT-4.1-mini root cause analysis -> PostgreSQL log -> Slack daily summary
- Workflow 4: Weekly Supplier Benchmarking (runs Monday 7AM) — Trigger: Schedule (weekly Monday 7:00 AM) | Nodes: PostgreSQL historical query -> Cross-supplier comparison -> GPT-4.1 detailed analysis -> Format report -> Slack + Email
- After building each workflow: (1) Run manual test with sample data, (2) Review execution log for errors, (3) Activate workflow, (4) Monitor first 3 automated runs
Build workflows incrementally — get Workflow 1 (data ingestion) running and verified before building Workflow 2 (anomaly detection) since it depends on ingested data. Use n8n's built-in error handling nodes to catch API failures and send error alerts to the MSP's monitoring channel. Set execution timeout to 120 seconds per workflow to prevent runaway API calls. The detailed workflow specifications including all node configurations are provided in the custom_ai_components section.
Step 9: Configure Anomaly Detection Thresholds
Set initial anomaly detection thresholds based on the client's historical data and restaurant industry benchmarks. These thresholds determine when the AI agent triggers alerts and at what severity level. Thresholds will be tuned during Phase 4 based on real-world performance.
-- Insert initial threshold configuration into PostgreSQL:
INSERT INTO price_baselines (supplier_name, item_name, avg_price, std_dev, min_price, max_price, sample_count, period_start, period_end)
SELECT
supplier_name,
item_name,
AVG(unit_price) as avg_price,
STDDEV(unit_price) as std_dev,
MIN(unit_price) as min_price,
MAX(unit_price) as max_price,
COUNT(*) as sample_count,
MIN(invoice_date) as period_start,
MAX(invoice_date) as period_end
FROM supplier_prices
WHERE invoice_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY supplier_name, item_name
HAVING COUNT(*) >= 3;
- Z_SCORE_WARNING = 1.5 (price is 1.5 std deviations above mean)
- Z_SCORE_CRITICAL = 2.5 (price is 2.5 std deviations above mean)
- VARIANCE_PCT_WARNING = 5.0 (item price >5% above rolling average)
- VARIANCE_PCT_CRITICAL = 15.0 (item price >15% above rolling average)
- FOOD_COST_VARIANCE_WARNING = 2.0 (actual food cost >2% above theoretical)
- FOOD_COST_VARIANCE_CRITICAL = 5.0 (actual food cost >5% above theoretical)
- MIN_SAMPLES_FOR_BASELINE = 3 (minimum invoice entries before anomaly detection activates)
- BASELINE_WINDOW_DAYS = 90 (rolling 90-day window for baseline calculation)
These are conservative starting thresholds designed to minimize false positives during the initial deployment. Common tuning adjustments: seasonal produce items (lettuce, tomatoes, avocados) need wider thresholds (z-score 2.0/3.0) due to natural price volatility. Protein items (chicken, beef) can use tighter thresholds (z-score 1.2/2.0) as pricing tends to be more stable on contract. Document all threshold changes in the agent_activity_log table for audit purposes. Schedule a threshold review session 30 days after go-live.
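The per-category tuning described above can be expressed as a small threshold lookup. A sketch (category names and the override values are illustrative; the defaults match the Step 9 configuration):

```javascript
// Per-category z-score thresholds. Produce gets wider bands due to
// seasonal volatility; proteins get tighter bands on contract pricing.
const THRESHOLDS = {
  default: { warning: 1.5, critical: 2.5 },
  produce: { warning: 2.0, critical: 3.0 },
  protein: { warning: 1.2, critical: 2.0 },
};

function severityFor(category, zScore) {
  const t = THRESHOLDS[category] || THRESHOLDS.default;
  const z = Math.abs(zScore);
  if (z >= t.critical) return 'critical';
  if (z >= t.warning) return 'warning';
  return 'info';
}
```

For example, a z-score of 2.2 is only a warning on a produce item but critical on a protein item; any category not in the map falls back to the defaults.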
Step 10: Tablet and Mobile Access Configuration
Configure the iPad for mobile inventory counts, delivery receiving, and alert monitoring. Install MarketMan mobile app, Slack, and configure MDM enrollment for MSP remote management.
The iPad is primarily used by the kitchen manager during delivery receiving — they can scan delivery ticket barcodes and verify quantities/prices against POs in MarketMan. Consider purchasing a ruggedized case (OtterBox Defender ~$50) for kitchen environment protection. If budget is constrained, the tablet can be deferred — all functions are accessible via the workstation web browser.
Step 11: IoT Temperature Sensor Deployment (Optional)
Install MOCREO wireless temperature sensors in walk-in cooler, walk-in freezer, and any critical cold-storage areas. Configure cloud monitoring and create correlation alerts in n8n that link temperature excursions with spoilage-related cost variances.
Temperature monitoring is optional but highly valuable for correlating spoilage events with cost variances. When the AI agent detects an unexplained increase in waste costs for perishable items, it can cross-reference temperature logs to determine if a cold-chain failure was the root cause. MOCREO sensors have a 1-year battery life and connect via the gateway hub to Wi-Fi. For HACCP-regulated environments, consider upgrading to ComplianceMate which provides automated compliance logging and health department-ready reports.
Step 12: End-to-End System Testing and Go-Live
Execute a comprehensive test plan covering every component of the system from invoice scanning through AI anomaly detection and alerting. Verify data accuracy, alert delivery, and system resilience. This step should be completed over 3-5 business days with active monitoring.
SELECT * FROM agent_activity_log ORDER BY timestamp DESC LIMIT 20;
Do not activate all workflows simultaneously on day one. Recommended go-live sequence: Day 1 — activate data ingestion only. Day 2 — activate anomaly detection (monitor for false positives). Day 3 — activate daily variance. Day 5 — activate weekly benchmarking. Keep a spreadsheet tracking all test results for client sign-off. Any false positives in the first week should be documented and thresholds adjusted in Step 9 configuration.
Custom AI Components
Invoice Data Ingestion Agent
Type: workflow. An n8n workflow that runs every 2 hours during business hours (6 AM–10 PM) to pull newly processed invoices from the MarketMan API, transform the data into a standardized schema, store it in the PostgreSQL database for historical analysis, and trigger downstream anomaly detection. This agent handles data quality checks including duplicate detection, missing field identification, and unit-of-measure normalization.
Implementation
Workflow Configuration
- Name: 01-invoice-data-ingestion
- Trigger: Schedule — Every 2 hours, Mon-Sun, 6AM-10PM (restaurant timezone)
- Timeout: 120 seconds
- Error Workflow: Link to error-handler-msp-notify workflow
Node-by-Node Specification
Node 1: Schedule Trigger
- Type: Schedule Trigger
- Cron: 0 6-22/2 * * *
- Timezone: Set to client's local timezone
Node 2: Get Last Sync Timestamp
- Type: Postgres
- Operation: Execute Query
- Credential: PostgreSQL (foodcost_agent)
SELECT COALESCE(MAX(ingested_at), NOW() - INTERVAL '24 hours') as last_sync FROM supplier_prices;
Node 3: Fetch New Invoices from MarketMan
- Type: HTTP Request
- Method: GET
- URL: https://api.marketman.com/v3/buyers/invoices
- Authentication: HTTP Header Auth (MarketMan API)
- Query Parameters: dateFrom: {{ $json.last_sync | date: 'YYYY-MM-DD' }}, dateTo: {{ $now.format('YYYY-MM-DD') }}, status: approved
- Response format: JSON
Node 4: Check for Results
- Type: IF
- Condition: {{ $json.invoices.length > 0 }}
- True: Continue to Node 5
- False: End workflow (no new data)
Node 5: Split Invoice Line Items
- Type: Item Lists
- Operation: Split Out
- Field to Split: invoices[*].lineItems
- Include: Parent invoice fields (invoiceNumber, supplierName, invoiceDate)
Node 6: Transform and Normalize
- Type: Code
- Language: JavaScript
const items = $input.all();
const normalized = items.map(item => {
const data = item.json;
// Normalize unit of measure
const uomMap = {
'LB': 'lb', 'LBS': 'lb', 'POUND': 'lb', 'POUNDS': 'lb',
'OZ': 'oz', 'OUNCE': 'oz', 'OUNCES': 'oz',
'CS': 'case', 'CASE': 'case', 'CASES': 'case',
'EA': 'each', 'EACH': 'each', 'PC': 'each', 'PCS': 'each',
'GAL': 'gal', 'GALLON': 'gal', 'GALLONS': 'gal',
'BG': 'bag', 'BAG': 'bag', 'BAGS': 'bag'
};
const rawUom = (data.unitOfMeasure || 'each').toUpperCase().trim();
const normalizedUom = uomMap[rawUom] || rawUom.toLowerCase();
// Calculate unit price
const quantity = parseFloat(data.quantity) || 1;
const totalPrice = parseFloat(data.totalPrice) || 0;
const unitPrice = quantity > 0 ? (totalPrice / quantity) : totalPrice;
return {
json: {
supplier_name: (data.supplierName || '').trim(),
item_name: (data.itemName || '').trim(),
item_sku: (data.sku || '').trim(),
unit_of_measure: normalizedUom,
unit_price: Math.round(unitPrice * 10000) / 10000,
invoice_date: data.invoiceDate,
invoice_number: (data.invoiceNumber || '').trim(),
pack_size: (data.packSize || '').trim(),
location_id: (data.locationId || 'default').trim(),
source: 'marketman'
}
};
});
return normalized;
Node 7: Duplicate Check
- Type: Postgres
- Operation: Execute Query
SELECT COUNT(*) as cnt FROM supplier_prices
WHERE invoice_number = '{{ $json.invoice_number }}'
AND item_name = '{{ $json.item_name }}'
AND supplier_name = '{{ $json.supplier_name }}';
Node 8: Filter Duplicates
- Type: IF
- Condition: {{ $json.cnt == 0 }}
- True: Continue to insert
- False: Skip (duplicate)
Node 9: Insert into PostgreSQL
- Type: Postgres
- Operation: Insert
- Table: supplier_prices
- Columns: supplier_name, item_name, item_sku, unit_of_measure, unit_price, invoice_date, invoice_number, pack_size, location_id, source
- On conflict: Do nothing
Node 10: Log Agent Activity
- Type: Postgres
- Operation: Insert
- Table: agent_activity_log
- agent_name: invoice_ingestion
- action_type: data_sync
- input_summary: Processed {{ $items.length }} line items from MarketMan
- output_summary: Inserted {{ $json.insertCount }} new records, {{ $json.skipCount }} duplicates skipped
- tokens_used: 0
- cost_usd: 0
- success: true
Node 11: Trigger Anomaly Detection
- Type: HTTP Request (webhook to Workflow 2)
- Method: POST
- URL: <workflow-2-webhook-url>
- Body: { "trigger": "new_invoices", "count": {{ $json.insertCount }} }
Supplier Price Anomaly Detection Agent
Type: agent. An autonomous AI agent that analyzes newly ingested invoice data against historical baselines to detect supplier pricing anomalies. Uses z-score statistical analysis for initial screening, then employs GPT-4.1-mini to generate natural-language analysis of flagged anomalies, classify severity, and compose actionable alerts. Handles three anomaly types: (1) single-item price spikes, (2) systematic supplier-wide price increases, and (3) cross-supplier price arbitrage opportunities.
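Anomaly type (2) needs a different test than the per-item z-score: when most of a supplier's items move together, the median variance across the invoice is the better signal. A sketch of that check (the helper name and 5% threshold are illustrative):

```javascript
// Flag a supplier-wide systematic increase: if the median price variance
// across a supplier's line items exceeds the threshold, the cause is
// likely across-the-board (fuel surcharge, margin change) rather than a
// single-item spike. Median resists distortion by one outlier item.
function supplierWideIncrease(variancePcts, thresholdPct = 5.0) {
  const sorted = [...variancePcts].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median = sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
  return { median, flagged: median >= thresholdPct };
}
```

For example, four items up 5.8-7.3% flags the supplier, while one item up 12% among otherwise flat prices does not; the single spike is caught by the per-item z-score path instead.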
n8n Workflow: Supplier Price Anomaly Detection Agent
- Name: 02-supplier-price-anomaly-detection
- Trigger: Webhook (called by Workflow 1 after ingestion) + Schedule fallback (daily 11 PM)
- Timeout: 180 seconds
Node Specification
Node 1: Webhook Trigger
- Type: Webhook
- Method: POST
- Path: /anomaly-check
- Authentication: Header Auth (shared secret)
Node 2: Fetch Recent Prices with Baselines
- Type: Postgres
WITH recent_prices AS (
SELECT
sp.supplier_name,
sp.item_name,
sp.unit_price,
sp.invoice_date,
sp.invoice_number,
sp.unit_of_measure
FROM supplier_prices sp
WHERE sp.ingested_at >= NOW() - INTERVAL '4 hours'
),
baselines AS (
SELECT
supplier_name,
item_name,
AVG(unit_price) as avg_price,
STDDEV(unit_price) as std_dev,
COUNT(*) as sample_count,
MAX(unit_price) as hist_max,
MIN(unit_price) as hist_min
FROM supplier_prices
WHERE invoice_date >= CURRENT_DATE - INTERVAL '90 days'
AND ingested_at < NOW() - INTERVAL '4 hours'
GROUP BY supplier_name, item_name
HAVING COUNT(*) >= 3
)
SELECT
rp.*,
b.avg_price,
b.std_dev,
b.sample_count,
b.hist_max,
b.hist_min,
CASE
WHEN b.std_dev > 0 THEN ROUND(((rp.unit_price - b.avg_price) / b.std_dev)::numeric, 3)
ELSE 0
END as z_score,
CASE
WHEN b.avg_price > 0 THEN ROUND((((rp.unit_price - b.avg_price) / b.avg_price) * 100)::numeric, 2)
ELSE 0
END as variance_pct
FROM recent_prices rp
LEFT JOIN baselines b ON rp.supplier_name = b.supplier_name AND rp.item_name = b.item_name
ORDER BY z_score DESC;
Node 3: Filter Anomalies
- Type: IF
- Condition: {{ Math.abs($json.z_score) >= 1.5 || Math.abs($json.variance_pct) >= 5.0 }}
- True: Route to anomaly processing
- False: Route to no-anomaly exit
Node 4: Classify Severity
- Type: Code
- Language: JavaScript
const items = $input.all();
return items.map(item => {
const d = item.json;
let severity = 'info';
const absZ = Math.abs(d.z_score);
const absVar = Math.abs(d.variance_pct);
if (absZ >= 2.5 || absVar >= 15.0) {
severity = 'critical';
} else if (absZ >= 1.5 || absVar >= 5.0) {
severity = 'warning';
}
return { json: { ...d, severity } };
});
Node 5: Batch Anomalies for AI Analysis
- Type: Code
- Language: JavaScript
const anomalies = $input.all().map(i => i.json);
const batchPrompt = anomalies.map(a =>
`- ${a.supplier_name} | ${a.item_name}: $${a.unit_price}/${a.unit_of_measure} ` +
`(avg: $${a.avg_price}, z-score: ${a.z_score}, variance: ${a.variance_pct}%, ` +
`severity: ${a.severity}, invoice: ${a.invoice_number}, date: ${a.invoice_date})`
).join('\n');
return [{ json: { anomalies, batchPrompt, count: anomalies.length } }];
Node 6: GPT-4.1-mini Anomaly Analysis
- Type: OpenAI (or HTTP Request to OpenAI API)
- Model: gpt-4.1-mini
- Temperature: 0.2
- Max Tokens: 2000
Node 6 System Prompt — Food Cost Analyst Agent
Node 6 User Message — Anomaly Analysis Request
Node 7: Parse AI Response
- Type: Code
- Language: JavaScript
const aiResponse = JSON.parse($json.message.content);
const anomalies = $json.anomalies;
const enriched = anomalies.map((anomaly, i) => {
const analysis = aiResponse.find(a =>
a.item_name === anomaly.item_name && a.supplier_name === anomaly.supplier_name
) || aiResponse[i] || {};
return {
json: {
...anomaly,
ai_explanation: analysis.explanation || 'Analysis unavailable',
ai_root_cause: analysis.root_cause || 'Unknown',
ai_recommended_action: analysis.recommended_action || 'Review manually',
ai_urgency: analysis.urgency || anomaly.severity,
ai_monthly_impact: analysis.estimated_monthly_impact_usd || 0
}
};
});
return enriched;
Node 8: Log Anomalies to Database
- Type: Postgres
- Operation: Insert (for each anomaly)
- Table: anomaly_log
- anomaly_type: 'price_spike'
- severity: {{ $json.severity }}
- supplier_name, item_name, expected_price (avg_price), actual_price (unit_price)
- z_score, variance_pct
- description: {{ $json.ai_explanation }}
- ai_analysis: {{ $json.ai_recommended_action }}
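One caveat for Node 7: a bare JSON.parse will throw if the model wraps its answer in a markdown code fence, which can happen even at low temperature. A hardened variant is sketched below; the fence-stripping logic is an assumption, not part of the original workflow:

```javascript
// Sketch: tolerate ```json fences and surrounding prose before
// parsing the model's JSON array. Returns [] on total failure so
// downstream nodes fall back to defaults instead of crashing.
function parseAiArray(content) {
  // Strip markdown code fences if present.
  const stripped = content.replace(/```(?:json)?/gi, '').trim();
  // Fall back to the first [...] span if extra prose surrounds it.
  const match = stripped.match(/\[[\s\S]*\]/);
  const candidate = match ? match[0] : stripped;
  try {
    const parsed = JSON.parse(candidate);
    return Array.isArray(parsed) ? parsed : [];
  } catch (e) {
    return [];
  }
}
```

Swapping this in for the direct JSON.parse call lets Node 7 degrade to its 'Analysis unavailable' defaults rather than failing the whole run.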
Node 9: Route by Severity
- Type: Switch
- Field: {{ $json.severity }}
- critical → Node 10 (Slack + SMS)
- warning → Node 11 (Slack only)
Node 10: Critical Alert — Slack + Twilio SMS
- Type: Slack (send message)
- Channel: #food-cost-alerts
Node 10 Slack Message Template — Critical Alert
- Then: Twilio (send SMS)
- To: {{ manager_phone_number }}
- Body:
ALERT: {{ $json.supplier_name }} - {{ $json.item_name }} priced {{ $json.variance_pct }}% above average (${{ $json.unit_price }} vs ${{ $json.avg_price }}). Est. impact: ${{ $json.ai_monthly_impact }}/mo. Check Slack for details.
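Long supplier or item names can push this body past the 160-character single-segment SMS limit. A small guard in a Code node before the Twilio step keeps it to one segment (a sketch; field names match the anomaly object used in this workflow):

```javascript
// Sketch: compose the critical-alert SMS and hard-cap it at 160
// characters so it always fits a single GSM segment.
function composeSms(a) {
  const body =
    `ALERT: ${a.supplier_name} - ${a.item_name} priced ` +
    `${a.variance_pct}% above avg ($${a.unit_price} vs $${a.avg_price}). ` +
    `Est. impact: $${a.ai_monthly_impact}/mo. See Slack.`;
  // Truncate with an ellipsis rather than let the carrier split
  // the message into multiple segments.
  return body.length <= 160 ? body : body.slice(0, 157) + '...';
}
```

This also keeps the message compliant with the SMS escalation test in the Testing & Validation section, which requires alerts under 160 characters.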
Node 11: Warning Alert — Slack Only
- Type: Slack
- Channel: #food-cost-alerts
Node 11 Slack Message Template — Warning Alert
Node 12: Log Agent Activity
- Type: Postgres — Insert into agent_activity_log
- agent_name: anomaly_detection
- action_type: anomaly_scan
- input_summary: Scanned {{ items_checked }} items
- output_summary: Found {{ critical_count }} critical, {{ warning_count }} warning anomalies
- tokens_used: {{ $node['GPT Analysis'].json.usage.total_tokens }}
- cost_usd: calculated from token usage
Daily Food Cost Variance Analyzer
Type: agent. Runs daily at 6 AM to calculate the previous day's actual vs. theoretical food cost variance by category (proteins, produce, dairy, dry goods, beverages). Pulls sales mix data from MarketMan (via POS integration) and purchase/inventory data to compute theoretical costs (what food should have cost based on recipes sold) vs. actual costs (what was actually spent). Uses GPT-4.1-mini to analyze root causes of variance and generate a daily summary for restaurant management.
Implementation:
n8n Workflow: Daily Food Cost Variance Analyzer
Workflow Configuration
- Name: 03-daily-food-cost-variance
- Trigger: Schedule — Daily at 6:00 AM local time
- Timeout: 180 seconds
Node Specification
Node 1: Schedule Trigger
- Type: Schedule Trigger
- Cron: 0 6 * * *
Node 2: Fetch Yesterday's Sales Data
- Type: HTTP Request
- Method: GET
- URL: https://api.marketman.com/v3/buyers/reports/salesMix
- Query Parameters: dateFrom: {{ $now.minus(1, 'day').format('YYYY-MM-DD') }}, dateTo: {{ $now.minus(1, 'day').format('YYYY-MM-DD') }}
- Authentication: MarketMan API credential
Node 3: Fetch Yesterday's Purchases
- Type: HTTP Request
- Method: GET
- URL: https://api.marketman.com/v3/buyers/invoices
- Query Parameters: dateFrom: {{ $now.minus(1, 'day').format('YYYY-MM-DD') }}, dateTo: {{ $now.minus(1, 'day').format('YYYY-MM-DD') }}
Node 4: Calculate Theoretical vs Actual
- Type: Code
- Language: JavaScript
const salesData = $('Fetch Sales').first().json;
const purchaseData = $('Fetch Purchases').first().json;
// Calculate theoretical food cost from sales mix
const categories = {};
const menuItems = salesData.items || [];
menuItems.forEach(item => {
const category = item.category || 'Uncategorized';
if (!categories[category]) {
categories[category] = {
theoretical_cost: 0,
sales_total: 0,
items_sold: 0
};
}
// Theoretical cost = quantity sold × recipe cost per item
const theoreticalItemCost = (item.quantitySold || 0) * (item.recipeCost || 0);
categories[category].theoretical_cost += theoreticalItemCost;
categories[category].sales_total += (item.quantitySold || 0) * (item.menuPrice || 0);
categories[category].items_sold += item.quantitySold || 0;
});
// Calculate actual food cost from purchases
const invoices = purchaseData.invoices || [];
let totalActualCost = 0;
const supplierTotals = {};
invoices.forEach(inv => {
const total = parseFloat(inv.totalAmount) || 0;
totalActualCost += total;
const supplier = inv.supplierName || 'Unknown';
supplierTotals[supplier] = (supplierTotals[supplier] || 0) + total;
});
// Calculate totals
let totalTheoretical = 0;
let totalSales = 0;
Object.values(categories).forEach(cat => {
totalTheoretical += cat.theoretical_cost;
totalSales += cat.sales_total;
});
const varianceAmount = totalActualCost - totalTheoretical;
const variancePct = totalTheoretical > 0
? ((varianceAmount / totalTheoretical) * 100).toFixed(2)
: 0;
const theoreticalPct = totalSales > 0
? ((totalTheoretical / totalSales) * 100).toFixed(2)
: 0;
const actualPct = totalSales > 0
? ((totalActualCost / totalSales) * 100).toFixed(2)
: 0;
return [{
json: {
report_date: $now.minus(1, 'day').format('YYYY-MM-DD'),
categories,
supplier_totals: supplierTotals,
total_theoretical: Math.round(totalTheoretical * 100) / 100,
total_actual: Math.round(totalActualCost * 100) / 100,
total_sales: Math.round(totalSales * 100) / 100,
variance_amount: Math.round(varianceAmount * 100) / 100,
variance_pct: parseFloat(variancePct),
theoretical_food_cost_pct: parseFloat(theoreticalPct),
actual_food_cost_pct: parseFloat(actualPct),
invoice_count: invoices.length
}
}];
Node 5: Determine if Variance is Significant
- Type: IF
- Condition: {{ Math.abs($json.variance_pct) >= 2.0 }}
- True: Route to AI Analysis (Node 6)
- False: Route to Standard Summary (Node 8)
Node 6: GPT-4.1-mini Root Cause Analysis
- Type: OpenAI
- Model: gpt-4.1-mini
- Temperature: 0.3
- Max Tokens: 1500
Node 6: System Prompt — Food Cost Analyst
Node 6: User Message Template
Node 7: Save Variance to Database
- Type: Postgres — Insert into food_cost_variance table
Node 8: Compose Daily Slack Summary
- Type: Code + Slack
- Variance emoji logic: ✅ if within ±2%, ⚠️ if 2–5%, 🚨 if >5%
📊 *Daily Food Cost Report — {{ report_date }}*
💰 *Sales:* ${{ total_sales }}
📋 *Theoretical Cost:* ${{ total_theoretical }} ({{ theoretical_food_cost_pct }}%)
📦 *Actual Cost:* ${{ total_actual }} ({{ actual_food_cost_pct }}%)
{{ variance_emoji }} *Variance:* ${{ variance_amount }} ({{ variance_pct }}%)
{{ #if significant_variance }}
🔍 *AI Root Cause Analysis:*
{{ ai_analysis }}
{{ /if }}
*Top Suppliers Yesterday:*
{{ supplier_summary }}
_{{ invoice_count }} invoices processed | Next report tomorrow 6 AM_
Node 9: Log Agent Activity
- Type: Postgres — Insert into agent_activity_log
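The variance-emoji rule from Node 8 (✅ within ±2%, ⚠️ 2–5%, 🚨 above 5%) can be implemented in the Code node as a small helper, sketched here:

```javascript
// Sketch: map the absolute variance percentage to the summary
// emoji used in the daily Slack report.
function varianceEmoji(variancePct) {
  const v = Math.abs(variancePct);
  if (v <= 2) return '✅';  // within tolerance
  if (v <= 5) return '⚠️'; // worth a look
  return '🚨';              // significant variance
}
```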
Weekly Supplier Benchmarking Report
Type: workflow. Runs every Monday at 7 AM to generate a comprehensive weekly supplier benchmarking report. Compares pricing across all suppliers for common items, identifies opportunities to negotiate better pricing or switch suppliers, tracks 4-week and 13-week price trends, and highlights items where the restaurant is paying above the supplier's own historical average. Uses GPT-4.1 (full model) for deeper analysis given the larger data set and strategic nature of the report.
Implementation:
n8n Workflow: Weekly Supplier Benchmarking Report
Workflow Configuration
- Name: 04-weekly-supplier-benchmarking
- Trigger: Schedule — Every Monday at 7:00 AM
- Timeout: 300 seconds (5 min — larger dataset)
Node Specification
Node 1: Schedule Trigger
- Cron: 0 7 * * 1
Node 2: Pull 13-Week Price History
- Type: Postgres
SELECT
supplier_name,
item_name,
unit_of_measure,
DATE_TRUNC('week', invoice_date) as week,
AVG(unit_price) as avg_weekly_price,
COUNT(*) as purchase_count,
SUM(unit_price) as total_spend_proxy
FROM supplier_prices
WHERE invoice_date >= CURRENT_DATE - INTERVAL '91 days'
GROUP BY supplier_name, item_name, unit_of_measure, DATE_TRUNC('week', invoice_date)
ORDER BY item_name, supplier_name, week;
Node 3: Pull Cross-Supplier Comparison
- Type: Postgres
-- top 20 items with best price alternatives
WITH item_suppliers AS (
SELECT
item_name,
supplier_name,
AVG(unit_price) as avg_price,
unit_of_measure,
COUNT(*) as order_count
FROM supplier_prices
WHERE invoice_date >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY item_name, supplier_name, unit_of_measure
),
item_best_price AS (
SELECT
item_name,
MIN(avg_price) as best_price,
unit_of_measure
FROM item_suppliers
GROUP BY item_name, unit_of_measure
HAVING COUNT(DISTINCT supplier_name) > 1
)
SELECT
s.item_name,
s.supplier_name,
s.avg_price,
s.unit_of_measure,
s.order_count,
b.best_price,
ROUND(((s.avg_price - b.best_price) / b.best_price * 100)::numeric, 2) as pct_above_best,
ROUND((s.avg_price - b.best_price)::numeric, 4) as price_gap
FROM item_suppliers s
JOIN item_best_price b ON s.item_name = b.item_name AND s.unit_of_measure = b.unit_of_measure
WHERE s.avg_price > b.best_price
ORDER BY (s.avg_price - b.best_price) * s.order_count DESC
LIMIT 20;
Node 4: Pull Week-over-Week Anomaly Summary
- Type: Postgres
SELECT
severity,
COUNT(*) as count,
AVG(variance_pct) as avg_variance,
SUM(CASE WHEN action_taken = 'pending' THEN 1 ELSE 0 END) as unresolved
FROM anomaly_log
WHERE detected_at >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY severity;
Node 5: GPT-4.1 Strategic Analysis
- Type: OpenAI
- Model: gpt-4.1 (use the full model for strategic weekly analysis)
- Temperature: 0.3
- Max Tokens: 3000
- User Message: Include all data from Nodes 2, 3, and 4
Node 5: GPT-4.1 System Prompt — Weekly Supplier Benchmarking
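The user message for Node 5 can be assembled in a preceding Code node; a sketch, where the section titles are illustrative and the three arguments are the result sets from Nodes 2, 3, and 4:

```javascript
// Sketch: merge the three Postgres result sets into one prompt
// body for the GPT-4.1 call. Section labels are illustrative.
function buildWeeklyPrompt(history, comparison, anomalySummary) {
  const section = (title, rows) =>
    `## ${title}\n` + JSON.stringify(rows, null, 2);
  return [
    section('13-Week Price History', history),
    section('Cross-Supplier Comparison', comparison),
    section('Week-over-Week Anomaly Summary', anomalySummary)
  ].join('\n\n');
}
```

Labeled sections help the model attribute each finding to the right data set in its report.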
Node 6: Format and Send Report
- Type: Slack — Post in #food-cost-alerts as a long-form message
- Also: Send Email node to the restaurant owner's email with the full report
- Subject: Weekly Food Cost Intelligence Report — Week of {{ date }}
Node 7: Log Agent Activity
- Type: Postgres — Insert into agent_activity_log
- tokens_used: from the GPT-4.1 response
- cost_usd: calculated at $2.00/M input + $8.00/M output tokens
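The cost_usd figure can be computed directly from the usage object returned by the API; a sketch at the rates quoted above:

```javascript
// Sketch: convert an OpenAI usage object to USD at the GPT-4.1
// rates noted above ($2.00/M input, $8.00/M output tokens).
function costUsd(usage, inputRate = 2.00, outputRate = 8.00) {
  const input = (usage.prompt_tokens / 1e6) * inputRate;
  const output = (usage.completion_tokens / 1e6) * outputRate;
  return Math.round((input + output) * 10000) / 10000; // 4 decimals
}
```

Passing the rates as parameters keeps the same helper reusable for the gpt-4.1-mini workflows, which bill at different per-million rates.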
Baseline Recalculation Maintenance Agent
Type: workflow. A maintenance workflow that runs weekly (Sunday midnight) to recalculate rolling 90-day price baselines for all supplier items. Updates the price_baselines table used by the anomaly detection agent. Also performs data hygiene tasks: identifies items with insufficient data, flags potential unit-of-measure mismatches, and cleans stale records.
Implementation
n8n Workflow: Baseline Recalculation Agent
Configuration
- Name: 05-baseline-recalculation
- Trigger: Schedule — Every Sunday at 12:00 AM
- Timeout: 300 seconds
Nodes
Node 1: Schedule Trigger
- Cron: 0 0 * * 0
Node 2: Recalculate All Baselines
- Type: Postgres
-- Clear existing baselines
TRUNCATE TABLE price_baselines;
-- Recalculate from 90-day rolling window
INSERT INTO price_baselines
(supplier_name, item_name, avg_price, std_dev, min_price, max_price, sample_count, period_start, period_end)
SELECT
supplier_name,
item_name,
ROUND(AVG(unit_price)::numeric, 4),
ROUND(COALESCE(STDDEV(unit_price), 0)::numeric, 4),
MIN(unit_price),
MAX(unit_price),
COUNT(*),
MIN(invoice_date),
MAX(invoice_date)
FROM supplier_prices
WHERE invoice_date >= CURRENT_DATE - INTERVAL '90 days'
GROUP BY supplier_name, item_name
HAVING COUNT(*) >= 3;
Node 3: Data Quality Checks
- Type: Postgres
-- Find items with potential UOM mismatches (high std dev relative to mean)
SELECT supplier_name, item_name, avg_price, std_dev,
ROUND((std_dev / NULLIF(avg_price, 0) * 100)::numeric, 2) as cv_pct
FROM price_baselines
WHERE std_dev / NULLIF(avg_price, 0) > 0.5
ORDER BY cv_pct DESC;
Node 4: Alert on Data Quality Issues
- Type: IF — if quality issues found → Slack warning
- Slack message: 🔧 *Data Quality Alert:* {{ count }} items have high price variability (CV >50%) — possible unit-of-measure mismatch. Review: {{ item_list }}
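The {{ count }} and {{ item_list }} placeholders can be filled by a Code node between the query and the Slack step; a sketch, assuming the row fields returned by the Node 3 query:

```javascript
// Sketch: summarize the CV-check rows from Node 3 into the
// count and item_list fields used by the Slack alert template.
function summarizeQualityIssues(rows, maxListed = 5) {
  const itemList = rows
    .slice(0, maxListed)
    .map(r => `${r.supplier_name} / ${r.item_name} (CV ${r.cv_pct}%)`)
    .join(', ');
  // Keep the Slack message readable when many items are flagged.
  const suffix = rows.length > maxListed
    ? ` and ${rows.length - maxListed} more`
    : '';
  return { count: rows.length, item_list: itemList + suffix };
}
```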
Node 5: Cleanup Old Data
- Type: Postgres
-- Archive agent activity logs older than 90 days
DELETE FROM agent_activity_log WHERE timestamp < NOW() - INTERVAL '90 days';
-- Keep anomaly_log indefinitely for trend analysis
-- Keep supplier_prices indefinitely for historical comparison
Node 6: Log Maintenance Activity
- Type: Postgres — Log to agent_activity_log
Invoice Parsing Prompt Template
Type: prompt. A reusable prompt template for parsing unstructured or semi-structured invoice data that MarketMan's built-in OCR cannot handle (e.g., handwritten invoices, unusual formats, or credit memos). Used as a fallback when standard extraction fails, invoking GPT-4.1-mini with vision capabilities to extract line items from invoice images.
Implementation:
Invoice Parsing Fallback
Testing & Validation
- SCAN TEST: Scan 5 paper invoices using the ScanSnap iX1600 with the 'Supplier Invoice' profile. Verify each arrives in MarketMan's invoice queue within 15 minutes. Confirm OCR extraction correctly identifies supplier name, invoice date, invoice number, and at least 90% of line item prices and quantities.
- API CONNECTIVITY TEST: In n8n, create a test workflow with an HTTP Request node that calls GET https://api.marketman.com/v3/buyers/inventory/items with the configured API credential. Verify it returns a 200 response with valid JSON containing the client's inventory items.
- DATABASE CONNECTIVITY TEST: In n8n, execute a simple PostgreSQL query: SELECT NOW() as server_time, current_database() as db_name; Verify it returns 'foodcost_agent' and the correct timestamp.
- DATA INGESTION TEST: Manually trigger the Invoice Data Ingestion workflow (Workflow 01). Verify new invoice line items appear in the supplier_prices table: SELECT COUNT(*) FROM supplier_prices WHERE ingested_at >= NOW() - INTERVAL '1 hour'; Result should match the number of new line items processed.
- ANOMALY DETECTION — TRUE POSITIVE TEST: Insert a synthetic price record into supplier_prices with a price 3x the item's historical average. Manually trigger the Anomaly Detection workflow. Verify: (1) anomaly_log table contains a new 'critical' severity entry, (2) Slack #food-cost-alerts channel receives a formatted alert with correct price data, (3) SMS is received on the manager's phone.
- ANOMALY DETECTION — FALSE POSITIVE RATE TEST: Run the anomaly detection workflow against 1 week of normal historical data (no synthetic spikes). Count alerts generated. Target: fewer than 3 false positive warnings and 0 false positive critical alerts per week. If exceeded, widen z-score thresholds.
- FOOD COST VARIANCE CALCULATION TEST: After processing a full day with both sales data (via POS) and invoice data, trigger the Daily Variance workflow (Workflow 03). Cross-reference the theoretical cost calculation against a manual spreadsheet calculation for 5 menu items: (quantity sold × recipe cost per item). Values should match within 2%.
- AI NARRATIVE QUALITY TEST: Review the GPT-4.1-mini analysis text in 5 anomaly alerts and 3 daily variance reports. Verify: (1) analysis references specific dollar amounts and percentages, (2) root causes are plausible and specific, (3) recommended actions are actionable (not generic), (4) no hallucinated supplier names or item names.
- SLACK ALERT FORMATTING TEST: Verify all four alert types render correctly in Slack: critical price anomaly (with emoji, bold fields, all data populated), warning price anomaly, daily food cost summary (with variance emoji logic), and weekly benchmarking report (long-form with sections).
- SMS ESCALATION TEST: Trigger a critical anomaly and verify SMS arrives on the manager's phone within 60 seconds. Verify the message is under 160 characters and contains: supplier name, item name, variance percentage, and dollar impact.
- NETWORK SEGMENTATION TEST: From the back-office workstation (VLAN 20), attempt to ping a POS terminal IP on VLAN 10. Verify the ping FAILS (blocked by firewall). Then verify the workstation CAN reach the internet (ping google.com succeeds). Verify POS terminals can still process payments.
- DATABASE AUDIT TRAIL TEST: Query the agent_activity_log table after running all workflows for 24 hours: SELECT agent_name, action_type, success, timestamp FROM agent_activity_log ORDER BY timestamp DESC LIMIT 50; Verify all workflow executions are logged with accurate input/output summaries.
- END-TO-END LATENCY TEST: Time the entire pipeline from scanning an invoice to receiving a Slack alert for an anomalous price. Target: under 15 minutes for the complete cycle (scan → MarketMan processing → n8n ingestion → anomaly detection → alert delivery).
- RESILIENCE TEST: Temporarily disable the MarketMan API credential in n8n. Trigger the ingestion workflow. Verify: (1) workflow fails gracefully without crashing, (2) error is logged in agent_activity_log with success=false, (3) MSP receives error notification via the error handler workflow. Restore credential and verify next scheduled run succeeds.
- WEEKLY REPORT COMPLETENESS TEST: After the first Monday report generation, review the Weekly Supplier Benchmarking Report for: (1) all active suppliers are included, (2) cross-supplier price comparisons are accurate, (3) trend data covers the expected 13-week window, (4) supplier scorecard grades are assigned, (5) savings opportunities include specific dollar estimates.
Client Handoff
Client Handoff Checklist
Training Sessions (Schedule 2 sessions, 90 minutes each)
Session 1: Daily Operations Training (Kitchen Manager + Owner)
- How to scan invoices using the ScanSnap iX1600 (demo all scan profiles)
- How to review and approve invoices in MarketMan dashboard
- How to read and act on Slack alerts (walkthrough of each alert type)
- How to perform inventory counts using the iPad and MarketMan mobile app
- How to check real-time food cost percentage in MarketMan reports
- When to escalate to MSP vs. handle internally (decision tree)
Session 2: Strategic Review Training (Owner/GM)
- How to interpret the Weekly Supplier Benchmarking Report
- How to use supplier scorecards for negotiation leverage
- How to identify menu items that have become unprofitable
- How to mark anomalies as 'resolved' in Slack (reaction-based workflow)
- Dashboard overview: key metrics to check daily, weekly, monthly
- Review of first 30 days of data and initial insights
Documentation to Leave Behind
Success Criteria to Review Together
30-Day Post-Go-Live Review
Schedule a 60-minute review meeting at Day 30 to:
- Review anomaly detection accuracy (false positive/negative rate)
- Adjust thresholds based on first month's data
- Quantify identified savings opportunities
- Address any workflow friction or adoption issues
- Plan Phase 2 enhancements if applicable
Maintenance
Ongoing Maintenance Plan
Daily (Automated — MSP monitors via RMM/n8n)
- Workflow Execution Monitoring: Check n8n execution dashboard for failed workflows. All 4 core workflows should execute on schedule. Set up n8n's built-in error workflow to send alerts to MSP Slack/email on any failure.
- API Health Check: Verify MarketMan API, OpenAI API, and PostgreSQL are responding. n8n's error handling nodes will catch failures, but MSP should verify daily during initial 30 days.
Weekly (MSP Tier 1 — 15 minutes)
- Review Agent Activity Log: Query agent_activity_log for any success=false entries
- Check API Usage/Costs: Review OpenAI usage dashboard to ensure costs are within expected range ($15-75/month). Alert if >2x expected.
- Verify Baseline Recalculation: Confirm Sunday's maintenance workflow (Workflow 05) ran successfully and price_baselines table is populated.
- Scan n8n Execution Count: Ensure monthly execution count is tracking below the 10,000 limit on Pro plan. Scale up if approaching 80%.
SELECT * FROM agent_activity_log WHERE success = false AND timestamp >= NOW() - INTERVAL '7 days';
Monthly (MSP Tier 2 — 1 hour)
- Threshold Review and Tuning: Analyze false positive rate from anomaly_log. If >5 false positives per week, widen thresholds. If 0 alerts for 2+ weeks, tighten thresholds. Document changes.
- Software Updates: Check for n8n version updates (patch monthly, minor quarterly after testing). Update ScanSnap Home software. Apply Windows updates on workstation.
- MarketMan Sync Verification: Spot-check 5 recent invoices in MarketMan against source documents for data accuracy. Verify POS integration is still active.
- Database Maintenance: Check PostgreSQL storage usage. Verify automated backups are running. Review table sizes:
SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC;
Quarterly (MSP Tier 2-3 — 2 hours)
- Comprehensive Agent Performance Review: Analyze anomaly detection accuracy over 90 days. Calculate: true positive rate, false positive rate, mean time to detection, average savings identified. Present findings to client.
- Prompt/Model Optimization: Review GPT-4.1-mini outputs for quality drift. Test newer models if available (e.g., future GPT-4.1-mini updates). Adjust system prompts based on feedback patterns.
- Security Review: Rotate API keys (OpenAI, MarketMan). Review n8n credential access. Check PostgreSQL connection logs for unauthorized access attempts. Verify firewall rules haven't drifted.
- Client Business Review: Meet with restaurant owner to review food cost trends, ROI from the system, and identify new requirements or locations for expansion.
Annual (MSP Tier 3 — 4 hours)
- Full System Audit: Review all integrations, credentials, network configuration, and database schema. Ensure compliance with PCI DSS 4.0 network segmentation. Update documentation.
- Hardware Lifecycle Check: Assess workstation, scanner, tablet, and network equipment health. Plan replacements for devices approaching end-of-life.
- Contract and Licensing Review: Verify all software subscriptions, review pricing tier appropriateness based on actual usage, negotiate renewals.
- Disaster Recovery Test: Simulate a database restore from backup. Verify n8n workflows can be rebuilt from exported JSON. Test failover procedures.
Escalation Paths
- Tier 1 (MSP helpdesk): Scanner issues, login problems, Slack connectivity, basic workflow restarts
- Tier 2 (MSP systems engineer): API integration failures, database issues, threshold tuning, MarketMan configuration changes
- Tier 3 (MSP AI/development specialist): Prompt engineering, workflow logic modifications, new agent development, model upgrades
- Vendor Escalation: MarketMan support (support@marketman.com), OpenAI support (via platform dashboard), n8n support (Pro plan includes priority support), DigitalOcean support (via ticket system)
SLA Recommendations
- System Availability Target: 99.5% uptime for alert delivery (measured monthly)
- Alert Latency SLA: Critical anomalies detected and alerted within 4 hours of invoice processing
- Response Time: MSP acknowledges system failures within 2 business hours, resolves within 8 business hours
- Monthly Reporting: MSP delivers monthly system health report including: workflows executed, anomalies detected, false positive rate, API costs, and uptime metrics
Alternatives
All-in-One Platform Approach (Restaurant365 or CrunchTime)
Self-Hosted Open Source Stack (n8n + PostgreSQL on VPS)
Instead of using n8n Cloud ($50/mo) and DigitalOcean Managed PostgreSQL ($15/mo), self-host both on a single DigitalOcean Droplet ($24/mo for 2 vCPU, 4GB RAM, 80GB SSD) using Docker Compose. This reduces SaaS costs and gives the MSP full infrastructure control.
CrewAI Multi-Agent Framework Instead of n8n
Replace n8n with a Python-based CrewAI implementation where each function (invoice ingestion, anomaly detection, variance analysis, reporting) is a specialized AI agent within a crew. Agents communicate and coordinate autonomously using CrewAI's built-in orchestration, deployed as a Python service on a VPS or cloud function.
Toast + xtraCHEF Native Stack (for Toast POS clients)
If the client already uses Toast POS, leverage the tightly integrated xtraCHEF back-office platform for invoice processing and food cost management, then build the AI agent layer on top of Toast's APIs. This provides the tightest POS-to-cost-management integration since Toast and xtraCHEF share a single data platform.
Spreadsheet + Manual GPT Approach (Budget Option)
For very small restaurants with limited budgets, skip the food cost platform entirely. Use a structured Google Sheets template for invoice data entry, connect it to OpenAI's API via Google Apps Script or a simple Zapier automation, and have GPT analyze weekly cost data dumps for anomalies. Alerts sent via email.