
Implementation Guide: Monitor competitor pricing and recommend repricing actions within policy guardrails

Step-by-step implementation guide for deploying AI to monitor competitor pricing and recommend repricing actions within policy guardrails for Retail clients.

Hardware Procurement

Cloud Virtual Machine — Agent Orchestration Server

Amazon Web Services (AWS) — EC2 t3.xlarge (4 vCPU, 16 GB RAM, 100 GB gp3 EBS) — Qty: 1

$120/month MSP cost / $200/month suggested resale to client

Hosts the self-managed n8n workflow engine, the pricing agent logic, PostgreSQL database for price history, and Grafana dashboard. Sized for up to 5,000 SKUs with hourly competitor checks. The t3.xlarge provides burstable performance suitable for periodic batch processing of pricing data.

Cloud Virtual Machine — Agent Orchestration Server (Azure Alternative)

Microsoft Azure — Standard_D4s_v5 (4 vCPU, 16 GB RAM, 128 GB Premium SSD) — Qty: 1

$140/month MSP cost / $220/month suggested resale to client

Alternative to AWS for MSPs with existing Azure tenancies. Functionally identical role to the AWS EC2 instance above. Choose one cloud provider, not both.

Software Procurement

Prisync — Competitor Price Monitoring

$199/month (Premium plan, up to 1,000 products) or $399/month (Platinum plan, up to 5,000 products). API access adds 20% surcharge: $239/month or $479/month respectively.

Automated competitor price scraping and monitoring. Prisync crawls competitor websites on a configurable schedule, extracts pricing data, and exposes it via REST API for consumption by the autonomous agent. Includes MAP monitoring, price change alerts, and a built-in dashboard for manual review.

n8n — Workflow Automation & Agent Orchestration

n8n GmbH — Self-hosted open-source with optional Enterprise license — Qty: 1

$0/month (self-hosted Community Edition, sufficient for most deployments) or €299/month (self-hosted Enterprise for SSO, LDAP, and advanced permissions). Cloud hosting costs covered under VM procurement.

Visual workflow automation platform that orchestrates the entire pricing agent pipeline: scheduled data ingestion from Prisync API, LLM-powered pricing analysis, guardrail enforcement, recommendation generation, human-approval routing, and e-commerce platform price updates. Its low-code interface allows MSP technicians to modify workflows without deep Python expertise.

Anthropic Claude API — Haiku 4.5 Model

Anthropic — Haiku 4.5 — Qty: Usage-based API

$1.00 per million input tokens / $5.00 per million output tokens. Estimated $50–$150/month for 1,000-SKU catalog with daily analysis cycles.

LLM reasoning engine that powers the autonomous pricing agent. Analyzes competitor price positions, applies policy guardrails, generates natural-language repricing justifications, and identifies pricing anomalies. Haiku 4.5 offers the best cost-to-performance ratio for high-frequency pricing decisions, running 4–5x faster than Sonnet at a fraction of the cost.
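As a sanity check on the estimate above, here is a rough monthly cost model. The per-SKU token counts are assumptions for illustration; measure real usage before quoting clients.

```python
# Rough monthly cost model for the Haiku analysis calls described above.
# Per-SKU token counts are assumptions; measure real usage before quoting clients.
INPUT_PRICE_PER_M = 1.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 5.00   # USD per million output tokens

def monthly_llm_cost(skus, runs_per_day, in_tokens=1500, out_tokens=300, days=30):
    calls = skus * runs_per_day * days
    cost_in = calls * in_tokens / 1e6 * INPUT_PRICE_PER_M
    cost_out = calls * out_tokens / 1e6 * OUTPUT_PRICE_PER_M
    return round(cost_in + cost_out, 2)

# 1,000 SKUs analyzed once daily lands near the middle of the $50-$150 estimate
print(monthly_llm_cost(1000, 1))  # 90.0
```

Re-run the model with the client's actual catalog size and analysis frequency when building the quote.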

OpenAI GPT-5.4 mini (Fallback LLM)

OpenAI — GPT-5.4 mini — Qty: Usage-based API

$0.15 per million input tokens / $0.60 per million output tokens. Estimated $20–$80/month as fallback.

Secondary LLM for fallback if Anthropic API experiences downtime. Also used for simpler classification tasks like product category tagging and competitor URL validation where the cheaper model is sufficient.
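The failover logic can be sketched as a simple try/except wrapper. The two stub callables below stand in for the Anthropic and OpenAI client calls; the function names are illustrative, not part of either SDK.

```python
# Minimal sketch of the primary/fallback LLM routing described above.
# The stubs stand in for the real Anthropic and OpenAI client calls.
def analyze_with_fallback(prompt, primary, fallback):
    """Try the primary model; on any API error, route to the secondary."""
    try:
        return primary(prompt)
    except Exception:
        return fallback(prompt)

def flaky_primary(prompt):
    raise RuntimeError("simulated Anthropic outage")

def stub_fallback(prompt):
    return f"fallback-answer:{prompt}"

print(analyze_with_fallback("reprice SKU-1", flaky_primary, stub_fallback))
# prints "fallback-answer:reprice SKU-1"
```

In production, add bounded retries on the primary before failing over, and log every fallback event to the audit trail.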

PostgreSQL with TimescaleDB Extension

TimescaleDB / PostgreSQL Community — Open-source (Apache 2.0 / PostgreSQL License)

$0/month (self-hosted on the agent VM). Alternatively, use AWS RDS for PostgreSQL at ~$50–$100/month for a managed instance.

Time-series database for storing historical competitor prices, client pricing history, agent recommendations, approval decisions, and full audit logs. TimescaleDB extension provides efficient time-series queries for trend analysis and reporting.
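An example of the kind of trend query the hypertable makes cheap: daily average and lowest competitor price per product over the last 30 days. `time_bucket()` is TimescaleDB's standard bucketing function; executing this via psycopg2 against the `competitor_prices` table defined later is assumed.

```python
# Daily competitor price trend over 30 days, using TimescaleDB's time_bucket().
# Intended to run against the competitor_prices hypertable via psycopg2.
TREND_SQL = """
SELECT time_bucket('1 day', time) AS day,
       product_id,
       AVG(price) AS avg_competitor_price,
       MIN(price) AS lowest_competitor_price
FROM competitor_prices
WHERE time > NOW() - INTERVAL '30 days'
GROUP BY day, product_id
ORDER BY day DESC;
"""
```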

Grafana — Dashboard & Reporting

Grafana Labs — Open-source (AGPLv3), self-hosted

$0/month (self-hosted OSS). Grafana Cloud free tier available for up to 10,000 metrics.

Client-facing dashboard displaying competitive pricing landscape, price position heatmaps, recommendation history, approval queue status, margin impact projections, and compliance audit trail. Fully white-labelable with MSP or client branding.

Pricefy — Competitive Price Monitoring (SMB Alternative)

Pricefy — Business Plan — Qty: Up to 15,000 SKUs

$189/month (Business plan). Lower tiers: $49/month Starter (100 SKUs), $99/month Pro (2,000 SKUs).

SaaS competitive price monitoring platform. Includes autopilot repricing and MAP/MSRP monitoring.

Note

Alternative to Prisync for smaller clients or those needing built-in autopilot repricing. Offers AI auto-matching of competitor products and a more affordable entry point. Choose Pricefy OR Prisync, not both.

Shopify API Access (or WooCommerce REST API)

Shopify Inc. / WooCommerce (Automattic) — Included with existing Shopify/WooCommerce subscription

$0 additional (API access included in Shopify Basic+ plans and WooCommerce)

API endpoint for reading current client product prices and writing approved repricing updates back to the storefront. Required for the automated price update leg of the workflow.
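For Shopify, the price-update leg is a single PUT against the Admin REST API's variant endpoint. The sketch below builds the request; the API version and shop domain are placeholder assumptions, so confirm both against the client's store before use.

```python
import json

# Hedged sketch of a Shopify Admin REST API price update. Endpoint shape and the
# X-Shopify-Access-Token header are from Shopify's public docs; API_VERSION and
# the shop domain below are placeholder assumptions.
API_VERSION = "2024-01"  # assumption: pin to a current stable version

def build_price_update(shop, variant_id, new_price):
    url = f"https://{shop}/admin/api/{API_VERSION}/variants/{variant_id}.json"
    payload = {"variant": {"id": variant_id, "price": f"{new_price:.2f}"}}
    return url, payload

url, payload = build_price_update("client-store.myshopify.com", 123456789, 24.99)
# A real call would be:
#   requests.put(url, json=payload,
#                headers={"X-Shopify-Access-Token": ACCESS_TOKEN})
print(json.dumps(payload))
```

WooCommerce exposes an equivalent `PUT /wp-json/wc/v3/products/<id>` endpoint with a `regular_price` field; the same build-then-send split applies.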

Prerequisites

  • Active e-commerce platform (Shopify, WooCommerce, BigCommerce, or Magento) with admin-level API access credentials (API key and secret or OAuth app configured)
  • Complete and accurate product catalog with consistent SKU/UPC/EAN identifiers — each product must have a unique identifier that can be matched to competitor listings
  • Documented pricing policy from the client, including: minimum acceptable margin per category (e.g., 'never below 20% gross margin on electronics'), MAP prices for any MAP-enforced brands, maximum allowable price change per day (e.g., 'no more than 5% change in 24 hours'), competitive positioning strategy (e.g., 'match lowest competitor' or 'stay within 3% of market average')
  • List of at least 3–10 primary competitor websites/URLs to monitor, with specific product page URLs or category pages identified for each competitor
  • Minimum 50 Mbps business-grade internet connection at the client site (for dashboard access); the agent server itself runs in the cloud
  • Client cost data: COGS (Cost of Goods Sold) per SKU or per category, available via export from their ERP, POS, or accounting system (QuickBooks, Xero, NetSuite, etc.)
  • AWS or Azure account with billing configured (MSP-managed account recommended for infrastructure)
  • Anthropic API key (sign up at console.anthropic.com) with billing enabled — $5 minimum credit loaded
  • OpenAI API key (sign up at platform.openai.com) with billing enabled — serves as fallback LLM
  • A designated client stakeholder (pricing manager or store owner) who will serve as the human-in-the-loop approver during the initial deployment phase
  • DNS record or subdomain available for the Grafana dashboard (e.g., pricing.clientdomain.com) — optional but recommended for professional presentation
  • SSL certificate for the dashboard domain (Let's Encrypt free certificate is sufficient)
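The documented pricing policy above translates directly into a guardrail function: a candidate price is clamped so margin, MAP, and daily-change rules always hold. This is a sketch with example thresholds; the parameter names mirror the policy fields, and the precedence (hard floors beat the velocity band) is a design assumption to confirm with the client.

```python
# Sketch of the policy guardrails listed above, applied to a candidate price.
# Parameter names mirror the pricing policy fields; thresholds are examples.
def apply_guardrails(candidate, current, cost, map_price=None,
                     min_margin_pct=20.0, max_daily_change_pct=5.0):
    """Clamp a candidate price so margin, MAP, and velocity rules hold."""
    floor = cost / (1 - min_margin_pct / 100)        # lowest price meeting margin
    if map_price is not None:
        floor = max(floor, map_price)                # never price below MAP
    lo = current * (1 - max_daily_change_pct / 100)  # daily velocity band
    hi = current * (1 + max_daily_change_pct / 100)
    clamped = min(max(candidate, lo), hi)            # velocity limit first
    return round(max(clamped, floor), 2)             # hard floors always win

# A $10-cost item at $15 can drop at most 5% in a day, even if a competitor
# undercuts harder:
print(apply_guardrails(candidate=11.0, current=15.0, cost=10.0))  # 14.25
```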

Installation Steps

...

Step 1: Provision Cloud Infrastructure

Deploy the cloud VM that will host n8n, PostgreSQL/TimescaleDB, and Grafana. This single server handles all agent orchestration, data storage, and client-facing dashboards. Use the MSP's cloud account for centralized billing and management.

1. AWS CLI — Launch EC2 instance
2. Configure the security group to allow: SSH (port 22) from the MSP IP only, HTTPS (port 443) for the Grafana dashboard, and the n8n webhook port (5678) from Prisync webhook IPs
3. Allocate and associate an Elastic IP
Launch EC2 instance
bash
aws ec2 run-instances \
  --image-id ami-0c7217cdde317cfec \
  --instance-type t3.xlarge \
  --key-name msp-pricing-agent-key \
  --security-group-ids sg-xxxxxxxx \
  --subnet-id subnet-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"gp3"}}]' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=pricing-agent-CLIENT_NAME},{Key=Project,Value=competitive-pricing}]'
Configure security group ingress rules
bash
aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp --port 22 --cidr MSP_OFFICE_IP/32

aws ec2 authorize-security-group-ingress \
  --group-id sg-xxxxxxxx \
  --protocol tcp --port 443 --cidr 0.0.0.0/0
Allocate and associate Elastic IP
bash
aws ec2 allocate-address --domain vpc
aws ec2 associate-address --instance-id i-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
Note

For Azure alternative, use: az vm create --resource-group pricing-agent-rg --name pricing-agent-CLIENT --image Ubuntu2204 --size Standard_D4s_v5 --admin-username mspadmin --generate-ssh-keys. Ensure the VM is in a region close to the client's primary market for low-latency dashboard access. Set up daily automated snapshots for disaster recovery.

Step 2: Install Base Software Stack on VM

SSH into the provisioned VM and install Docker, Docker Compose, and all required services. We use Docker Compose to manage n8n, PostgreSQL with TimescaleDB, and Grafana as containerized services for easy deployment, updates, and portability.

1. SSH into the VM
2. Update the system and install Docker
3. Add the ubuntu user to the docker group
4. Create the project directory structure
SSH into the VM
bash
ssh -i msp-pricing-agent-key.pem ubuntu@ELASTIC_IP_ADDRESS
Update system and install Docker
bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg lsb-release
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin
Add ubuntu user to docker group
bash
sudo usermod -aG docker ubuntu
newgrp docker
Create project directory structure
bash
mkdir -p ~/pricing-agent/{n8n,postgres,grafana,scripts,config}
cd ~/pricing-agent
Note

Verify Docker installation with 'docker --version' and 'docker compose version'. Minimum Docker version 24.x and Docker Compose v2.20+ recommended. If the MSP uses Ansible or Terraform for infrastructure-as-code, templates for this stack are provided in the custom_ai_components section.

Step 3: Deploy Docker Compose Stack

Create and launch the Docker Compose configuration that runs all three core services: n8n (agent orchestrator), PostgreSQL with TimescaleDB (price data store), and Grafana (dashboard). This configuration includes persistent volumes, automatic restart policies, and proper networking.

1. Create the Docker Compose file
2. Create the environment file with secrets
3. Generate strong passwords
4. Secure the env file
5. Launch the stack
6. Verify all containers are running
Create Docker Compose file
bash
cat > ~/pricing-agent/docker-compose.yml << 'DOCKER_EOF'
version: '3.8'

services:
  postgres:
    image: timescale/timescaledb:latest-pg16
    container_name: pricing-postgres
    restart: always
    environment:
      POSTGRES_DB: pricing_agent
      POSTGRES_USER: pricing_admin
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./postgres/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - '127.0.0.1:5432:5432'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U pricing_admin -d pricing_agent']
      interval: 10s
      timeout: 5s
      retries: 5

  n8n:
    image: n8nio/n8n:latest
    container_name: pricing-n8n
    restart: always
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=${N8N_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_PASSWORD}
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://${N8N_HOST}/
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_USER=pricing_admin
      - DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - PRISYNC_API_KEY=${PRISYNC_API_KEY}
      - PRISYNC_API_TOKEN=${PRISYNC_API_TOKEN}
    volumes:
      - n8n_data:/home/node/.n8n
    ports:
      - '5678:5678'
    depends_on:
      postgres:
        condition: service_healthy

  grafana:
    image: grafana/grafana:latest
    container_name: pricing-grafana
    restart: always
    environment:
      - GF_SECURITY_ADMIN_USER=${GRAFANA_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD}
      - GF_SERVER_ROOT_URL=https://${GRAFANA_HOST}/
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning
    ports:
      - '3000:3000'
    depends_on:
      postgres:
        condition: service_healthy

volumes:
  postgres_data:
  n8n_data:
  grafana_data:
DOCKER_EOF
Create environment file with secrets
bash
cat > ~/pricing-agent/.env << 'ENV_EOF'
POSTGRES_PASSWORD=GENERATE_STRONG_PASSWORD_HERE
N8N_USER=admin
N8N_PASSWORD=GENERATE_STRONG_PASSWORD_HERE
N8N_HOST=n8n.clientdomain.com
N8N_ENCRYPTION_KEY=GENERATE_32_CHAR_RANDOM_STRING
ANTHROPIC_API_KEY=sk-ant-REPLACE_WITH_ACTUAL_KEY
OPENAI_API_KEY=sk-REPLACE_WITH_ACTUAL_KEY
PRISYNC_API_KEY=REPLACE_WITH_PRISYNC_API_KEY
PRISYNC_API_TOKEN=REPLACE_WITH_PRISYNC_TOKEN
GRAFANA_USER=admin
GRAFANA_PASSWORD=GENERATE_STRONG_PASSWORD_HERE
GRAFANA_HOST=pricing.clientdomain.com
ENV_EOF
Generate strong passwords
bash
# hex output avoids the '/' and '+' characters in base64 that would break the
# sed delimiters; the loop gives each of the three placeholders a unique password
for _ in 1 2 3; do
  sed -i "0,/GENERATE_STRONG_PASSWORD_HERE/s//$(openssl rand -hex 18)/" ~/pricing-agent/.env
done
sed -i "s/GENERATE_32_CHAR_RANDOM_STRING/$(openssl rand -hex 16)/" ~/pricing-agent/.env
Secure the env file
bash
chmod 600 ~/pricing-agent/.env
Launch the stack and verify all containers are running
bash
cd ~/pricing-agent
docker compose up -d
docker compose ps
Note

Store the generated passwords in the MSP's password manager (e.g., IT Glue, Hudu, or Bitwarden) immediately. The .env file contains all secrets and must never be committed to version control. For production, consider using AWS Secrets Manager or Azure Key Vault instead of a .env file.

Step 4: Initialize Database Schema

Create the PostgreSQL database schema that stores competitor prices, client product data, pricing policies, agent recommendations, approval decisions, and audit logs. The TimescaleDB hypertable provides efficient time-series queries for price history trending.

1. Create the database initialization SQL file
2. If the stack is already running, recreate the postgres container to apply init.sql
3. Verify the database schema
Create the database initialization SQL file
bash
cat > ~/pricing-agent/postgres/init.sql << 'SQL_EOF'
-- Create additional database for n8n
CREATE DATABASE n8n_db;

-- Switch to pricing_agent database
\c pricing_agent;

-- Enable TimescaleDB extension
CREATE EXTENSION IF NOT EXISTS timescaledb;

-- Products table: client's product catalog
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    sku VARCHAR(100) UNIQUE NOT NULL,
    name VARCHAR(500) NOT NULL,
    category VARCHAR(200),
    brand VARCHAR(200),
    current_price DECIMAL(10,2),
    cost DECIMAL(10,2),
    map_price DECIMAL(10,2),
    min_margin_pct DECIMAL(5,2) DEFAULT 15.00,
    max_daily_change_pct DECIMAL(5,2) DEFAULT 5.00,
    pricing_strategy VARCHAR(50) DEFAULT 'match_market',
    is_active BOOLEAN DEFAULT true,
    prisync_product_id VARCHAR(100),
    ecommerce_product_id VARCHAR(100),
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Competitors table
CREATE TABLE competitors (
    id SERIAL PRIMARY KEY,
    name VARCHAR(200) NOT NULL,
    website_url VARCHAR(500),
    priority INTEGER DEFAULT 5,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Competitor prices: time-series data from Prisync
CREATE TABLE competitor_prices (
    time TIMESTAMPTZ NOT NULL,
    product_id INTEGER REFERENCES products(id),
    competitor_id INTEGER REFERENCES competitors(id),
    price DECIMAL(10,2),
    currency VARCHAR(3) DEFAULT 'USD',
    url VARCHAR(1000),
    in_stock BOOLEAN DEFAULT true,
    source VARCHAR(50) DEFAULT 'prisync'
);

-- Convert to TimescaleDB hypertable for efficient time-series queries
SELECT create_hypertable('competitor_prices', 'time');

-- Pricing policies: configurable guardrails
CREATE TABLE pricing_policies (
    id SERIAL PRIMARY KEY,
    policy_name VARCHAR(200) NOT NULL,
    category VARCHAR(200),
    min_margin_pct DECIMAL(5,2) NOT NULL DEFAULT 15.00,
    max_margin_pct DECIMAL(5,2) DEFAULT 60.00,
    max_daily_change_pct DECIMAL(5,2) NOT NULL DEFAULT 5.00,
    max_weekly_change_pct DECIMAL(5,2) DEFAULT 15.00,
    enforce_map BOOLEAN DEFAULT true,
    competitive_position VARCHAR(50) DEFAULT 'match_lowest',
    position_offset_pct DECIMAL(5,2) DEFAULT 0.00,
    auto_approve_threshold_pct DECIMAL(5,2) DEFAULT 2.00,
    is_active BOOLEAN DEFAULT true,
    created_at TIMESTAMPTZ DEFAULT NOW(),
    updated_at TIMESTAMPTZ DEFAULT NOW()
);

-- Agent recommendations
CREATE TABLE recommendations (
    id SERIAL PRIMARY KEY,
    product_id INTEGER REFERENCES products(id),
    current_price DECIMAL(10,2),
    recommended_price DECIMAL(10,2),
    change_pct DECIMAL(5,2),
    projected_margin_pct DECIMAL(5,2),
    reason TEXT,
    competitor_context JSONB,
    policy_checks JSONB,
    confidence_score DECIMAL(3,2),
    status VARCHAR(20) DEFAULT 'pending',
    approved_by VARCHAR(100),
    approved_at TIMESTAMPTZ,
    applied_at TIMESTAMPTZ,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Audit log: complete trail of all pricing actions
CREATE TABLE audit_log (
    id SERIAL PRIMARY KEY,
    event_type VARCHAR(50) NOT NULL,
    product_id INTEGER REFERENCES products(id),
    old_price DECIMAL(10,2),
    new_price DECIMAL(10,2),
    change_reason TEXT,
    agent_reasoning TEXT,
    policy_applied VARCHAR(200),
    user_action VARCHAR(50),
    user_id VARCHAR(100),
    metadata JSONB,
    created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for common queries
CREATE INDEX idx_competitor_prices_product ON competitor_prices (product_id, time DESC);
CREATE INDEX idx_recommendations_status ON recommendations (status, created_at DESC);
CREATE INDEX idx_recommendations_product ON recommendations (product_id, created_at DESC);
CREATE INDEX idx_audit_log_product ON audit_log (product_id, created_at DESC);
CREATE INDEX idx_products_sku ON products (sku);
CREATE INDEX idx_products_category ON products (category);

-- Insert default pricing policies
INSERT INTO pricing_policies (policy_name, category, min_margin_pct, max_daily_change_pct, competitive_position, auto_approve_threshold_pct)
VALUES
  ('Default - Conservative', NULL, 20.00, 3.00, 'match_market_avg', 1.50),
  ('Electronics', 'Electronics', 12.00, 5.00, 'match_lowest', 2.00),
  ('Apparel', 'Apparel', 40.00, 5.00, 'premium_position', 1.00),
  ('Consumables', 'Consumables', 25.00, 3.00, 'match_lowest', 2.00);

SQL_EOF
Recreate the postgres container to apply init.sql
bash
cd ~/pricing-agent
docker compose down
docker volume rm pricing-agent_postgres_data 2>/dev/null
docker compose up -d
Verify database schema
bash
docker exec -it pricing-postgres psql -U pricing_admin -d pricing_agent -c '\dt'
Note

The init.sql file runs automatically on first PostgreSQL container startup. If you need to re-initialize, you must remove the postgres_data volume first (which destroys all data). The default pricing policies should be customized during the client discovery phase. The auto_approve_threshold_pct determines which price changes are auto-approved vs. requiring human approval — start conservative at 1.5–2%.
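The auto-approve decision described above reduces to a threshold check on the size of the price change. A minimal sketch (threshold matches the conservative 1.5% default seeded in the policies table):

```python
# Sketch of the approval routing implied by auto_approve_threshold_pct:
# small changes auto-approve; everything else queues for a human (Step 10).
def route_recommendation(current, recommended, auto_approve_threshold_pct=1.5):
    change_pct = abs(recommended - current) / current * 100
    if change_pct <= auto_approve_threshold_pct:
        return "auto_approved"
    return "pending"   # lands in the human approval queue

print(route_recommendation(100.0, 101.0))  # 1.0% change -> auto_approved
print(route_recommendation(100.0, 104.0))  # 4.0% change -> pending
```

Note that during Phase 1 even auto-approved recommendations should be recorded only, per the recommendation-only rollout described in Step 10.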

Step 5: Configure Nginx Reverse Proxy with SSL

Set up Nginx as a reverse proxy to serve n8n and Grafana over HTTPS with Let's Encrypt SSL certificates. This provides secure access to the dashboard and webhook endpoints.

1. Install Nginx and Certbot
2. Create the Nginx config for the Grafana dashboard and n8n
3. Enable the site
4. Obtain SSL certificates
5. Set up auto-renewal
Install Nginx and Certbot
bash
sudo apt install -y nginx certbot python3-certbot-nginx
Create Nginx reverse proxy config for Grafana and n8n
bash
sudo tee /etc/nginx/sites-available/pricing-dashboard > /dev/null << 'NGINX_EOF'
server {
    listen 80;
    server_name pricing.clientdomain.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name n8n.clientdomain.com;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
NGINX_EOF
Enable the site and reload Nginx
bash
sudo ln -s /etc/nginx/sites-available/pricing-dashboard /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl reload nginx
Obtain SSL certificates via Let's Encrypt
bash
sudo certbot --nginx -d pricing.clientdomain.com -d n8n.clientdomain.com --non-interactive --agree-tos -m msp-admin@mspcompany.com
Enable automatic SSL certificate renewal
bash
sudo systemctl enable certbot.timer
Note

Replace 'clientdomain.com' with the actual client domain. DNS A records must be pointed to the VM's Elastic IP before running certbot. If the client doesn't want to use their domain, use an MSP subdomain like 'clientname.pricing.mspcompany.com'. The n8n interface should be restricted to MSP IPs only in production — add 'allow MSP_IP; deny all;' to the n8n server block after initial setup.

Step 6: Set Up Prisync Account and Configure Competitor Monitoring

Create the Prisync account, configure API access, import the client's product catalog, and set up competitor URL monitoring. This is the data collection layer that feeds the autonomous agent.

1. Sign up for Prisync at https://prisync.com
2. Choose the Premium plan ($199/mo) or Platinum ($399/mo) based on SKU count
3. Enable the API access add-on (+20%: $239/mo or $479/mo)
4. Note the API Key and API Token from Settings > API Integration
Test Prisync API connectivity
bash
curl -X GET 'https://prisync.com/api/v2/list/product/id' \
  -H 'apikey: YOUR_PRISYNC_API_KEY' \
  -H 'apitoken: YOUR_PRISYNC_API_TOKEN' | python3 -m json.tool
1. Export the client product catalog from their e-commerce platform
2. For Shopify: go to Products > Export > All Products > CSV for Excel
3. Required columns: SKU, Title, Price, Cost per item, Barcode (UPC/EAN)
1. Prepare the Prisync import CSV in the required format: name, category, brand, product_code, extra_info, url_1, url_2, ...
2. Create a mapping script using the Python script below
Create Shopify-to-Prisync catalog conversion script
bash
cat > ~/pricing-agent/scripts/prepare_prisync_import.py << 'PYEOF'
import csv
import sys

def convert_shopify_to_prisync(input_file, output_file, competitor_urls_file):
    # Load competitor URL mappings (SKU -> [competitor URLs])
    url_map = {}
    with open(competitor_urls_file, 'r') as f:
        reader = csv.DictReader(f)
        for row in reader:
            sku = row['sku']
            urls = [row[f'competitor_url_{i}'] for i in range(1, 11) if row.get(f'competitor_url_{i}')]
            url_map[sku] = urls

    with open(input_file, 'r') as fin, open(output_file, 'w', newline='') as fout:
        reader = csv.DictReader(fin)
        writer = csv.writer(fout)
        header = ['name', 'category', 'brand', 'product_code', 'extra_info']
        header += [f'url_{i}' for i in range(1, 11)]
        writer.writerow(header)

        for row in reader:
            sku = row.get('Variant SKU', '')
            urls = url_map.get(sku, [])
            out_row = [
                row.get('Title', ''),
                row.get('Type', 'General'),
                row.get('Vendor', ''),
                sku,
                row.get('Variant Barcode', '')
            ]
            out_row += urls + [''] * (10 - len(urls))
            writer.writerow(out_row)

if __name__ == '__main__':
    convert_shopify_to_prisync(sys.argv[1], sys.argv[2], sys.argv[3])
PYEOF
1. Upload to Prisync via the dashboard: Products > Import Products > Upload CSV
2. Or use the API to add products individually (see the API call below)
Add a single product to Prisync via API
bash
curl -X POST 'https://prisync.com/api/v2/add/product/' \
  -H 'apikey: YOUR_PRISYNC_API_KEY' \
  -H 'apitoken: YOUR_PRISYNC_API_TOKEN' \
  -F 'name=Example Product' \
  -F 'brand=Example Brand' \
  -F 'category=Electronics' \
  -F 'productCode=SKU-12345' \
  -F 'additional_fields={"cost":"25.00","barcode":"012345678901"}'
Note

The competitor URL mapping is the most time-consuming manual step. During the client discovery workshop, dedicate 2–4 hours to identifying competitor URLs for top-selling products. Start with the top 50–100 SKUs by revenue for Phase 1 and expand later. Prisync's AI auto-matching feature can help, but manual verification is critical for accuracy. Allow 1–2 weeks for Prisync to stabilize scraping (some URLs may need adjustment if competitors change page structures).

Step 7: Import Client Product Catalog and Cost Data into Database

Load the client's product catalog, cost data, and pricing policies into the PostgreSQL database. This data forms the baseline against which the agent evaluates competitor prices and generates recommendations.

1. Create a product import script
2. Install Python dependencies on the VM
3. Run the import (set the password from .env)
4. Verify the import
Product import script
bash
cat > ~/pricing-agent/scripts/import_products.py << 'PYEOF'
import csv
import psycopg2
import os

DB_CONFIG = {
    'host': 'localhost',
    'port': 5432,
    'database': 'pricing_agent',
    'user': 'pricing_admin',
    'password': os.environ['POSTGRES_PASSWORD']
}

def import_products(csv_file):
    conn = psycopg2.connect(**DB_CONFIG)
    cur = conn.cursor()

    with open(csv_file, 'r') as f:
        reader = csv.DictReader(f)
        for row in reader:
            cur.execute("""
                INSERT INTO products (sku, name, category, brand, current_price, cost, map_price, ecommerce_product_id)
                VALUES (%s, %s, %s, %s, %s, %s, %s, %s)
                ON CONFLICT (sku) DO UPDATE SET
                    name = EXCLUDED.name,
                    current_price = EXCLUDED.current_price,
                    cost = EXCLUDED.cost,
                    updated_at = NOW()
            """, (
                row.get('Variant SKU'),
                row.get('Title'),
                row.get('Type', 'General'),
                row.get('Vendor', ''),
                float(row.get('Variant Price', 0)),
                float(row.get('Cost per item', 0)) if row.get('Cost per item') else None,
                float(row.get('MAP Price', 0)) if row.get('MAP Price') else None,
                row.get('Handle', '')
            ))

    conn.commit()
    cur.close()
    conn.close()
    print('Import complete.')

if __name__ == '__main__':
    import sys
    import_products(sys.argv[1])
PYEOF
Install Python dependencies on the VM
bash
sudo apt install -y python3-pip
pip3 install psycopg2-binary requests
Run the import (set password from .env)
bash
export POSTGRES_PASSWORD=$(grep POSTGRES_PASSWORD ~/pricing-agent/.env | cut -d= -f2)
python3 ~/pricing-agent/scripts/import_products.py /path/to/shopify_export.csv
Verify import
bash
docker exec -it pricing-postgres psql -U pricing_admin -d pricing_agent -c 'SELECT COUNT(*) FROM products;'
docker exec -it pricing-postgres psql -U pricing_admin -d pricing_agent -c 'SELECT sku, name, current_price, cost FROM products LIMIT 10;'
Note

The cost data (COGS) is critical for margin calculations. If the client's e-commerce platform doesn't track cost, you'll need a separate CSV from their accounting system or ERP. Without cost data, the agent cannot enforce margin guardrails. Also populate the competitors table with the identified competitors from the discovery phase.

Step 8: Build the Core n8n Pricing Agent Workflow

Create the primary n8n workflow that orchestrates the autonomous pricing agent. This workflow runs on a schedule (configurable, default every 6 hours), pulls competitor prices from Prisync, sends pricing data through the LLM reasoning engine with policy guardrails, generates recommendations, and routes them for human approval or auto-approval based on policy thresholds.

1. Access n8n at https://n8n.clientdomain.com and log in with the credentials from .env
2. The complete n8n workflow JSON is provided in the custom_ai_components section — import it via n8n UI > Settings > Import Workflow > paste JSON
3. After importing, configure the credential nodes: Prisync API (HTTP Request node with Header Auth), Anthropic Claude (Anthropic API credential), PostgreSQL (database connection), Shopify or WooCommerce (OAuth or API key credential), and Email/Slack (notification delivery)
4. Test the workflow manually first: click 'Execute Workflow' in n8n to run a single cycle and check each node's output for errors
5. Once validated, activate the schedule trigger — the workflow will then run automatically every 6 hours
Note

The n8n workflow is the heart of the system. Detailed workflow JSON and node-by-node configuration is provided in the custom_ai_components section. Start with a 12-hour schedule and increase frequency as the client gains confidence. Never enable auto-repricing in Phase 1 — all recommendations should go through the human approval queue first.
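To make the LLM analysis node concrete, here is an illustrative prompt builder. The structure (product, policy, and competitor context followed by a JSON-only reply instruction) is an assumption about how the node could be configured, not the shipped workflow definition from custom_ai_components.

```python
import json

# Illustrative prompt for the LLM analysis node in the pricing workflow.
# The prompt structure is an assumption, not the shipped n8n node config.
def build_pricing_prompt(product, policy, competitor_prices):
    return (
        "You are a retail pricing analyst. Recommend a price for this product.\n"
        f"Product: {json.dumps(product)}\n"
        f"Policy guardrails: {json.dumps(policy)}\n"
        f"Competitor prices: {json.dumps(competitor_prices)}\n"
        'Reply with JSON only: {"recommended_price": float, '
        '"confidence": float, "reason": str}'
    )

prompt = build_pricing_prompt(
    {"sku": "SKU-12345", "current_price": 49.99, "cost": 30.00},
    {"min_margin_pct": 20, "max_daily_change_pct": 5,
     "competitive_position": "match_lowest"},
    [{"competitor": "Acme", "price": 47.50, "in_stock": True}],
)
# In n8n this string becomes the message content of the Anthropic node; calling
# the API directly would use client.messages.create(...) from the anthropic SDK.
```

Requesting a strict JSON reply keeps the downstream guardrail and database nodes parseable; validate the model's output against the policy server-side regardless, since the LLM's answer is advisory, not authoritative.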

Step 9: Configure Grafana Dashboard

Set up the client-facing Grafana dashboard that displays the competitive pricing landscape, pending recommendations, approval history, margin impact analysis, and compliance audit trail. White-label the dashboard with the client's branding.

1. Access Grafana at https://pricing.clientdomain.com
2. Log in with the admin credentials from .env
3. Step 9a: Add the PostgreSQL data source. Go to Configuration > Data Sources > Add data source > PostgreSQL. Host: postgres:5432 (internal Docker network name), Database: pricing_agent, User: pricing_admin, Password: (from .env), SSL Mode: disable (internal network), TimescaleDB: enable the toggle
4. Step 9b: Import the dashboard JSON. Go to Dashboards > Import > paste JSON. The dashboard JSON is provided in the custom_ai_components section
5. Step 9c: White-label with client branding. Go to Administration > General > Preferences and set the home dashboard, org name (client name), and theme. Upload the client logo to /var/lib/grafana/public/img/custom_logo.svg
Copy client logo into the Grafana container
bash
docker cp /path/to/client_logo.svg pricing-grafana:/var/lib/grafana/public/img/custom_logo.svg
6. Step 9d: Create a read-only viewer account for the client. Go to Administration > Users > Invite, set the role to 'Viewer', and send credentials to the client stakeholder.
Note

The dashboard should be the primary interface for the client stakeholder. Include panels for: (1) Competitive Price Position Map showing client price vs. market for each product, (2) Pending Recommendations queue with approve/reject action links, (3) Price Change History timeline, (4) Margin Impact Summary showing estimated revenue/margin impact of applied changes, (5) Compliance Audit Log showing all agent decisions and human approvals. Dashboard JSON template is provided in the custom_ai_components section.

Step 10: Configure Human-in-the-Loop Approval Workflow

Set up the approval notification and action system. When the agent generates a repricing recommendation that exceeds the auto-approve threshold, it sends a notification (email or Slack) to the designated approver with a one-click approve/reject link. This is critical for building client trust and maintaining compliance.

Option A: Email-Based Approval

In n8n, configure the Email Send node with the following settings:

  • SMTP Host: smtp.gmail.com (or client's email provider)
  • Port: 587 (TLS)
  • User: notifications@clientdomain.com
  • Password: App-specific password

The email contains:

  • Product name and SKU
  • Current price vs. recommended price
  • Change percentage and projected margin
  • Agent's reasoning summary
  • Competitor price context
  • One-click Approve/Reject links (n8n webhook URLs)
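The fields above can be assembled into the notification body in an n8n Code node. A minimal sketch in Python (the helper name and field keys are illustrative; the webhook base URL is the example domain used throughout this guide):

```python
def build_approval_email(rec: dict, base_url: str = "https://n8n.clientdomain.com") -> str:
    """Compose the plain-text approval notification for one recommendation.

    `rec` combines product fields with the agent's output; the webhook
    paths mirror the approve/reject URLs defined in this step.
    """
    approve = f"{base_url}/webhook/pricing-approve?id={rec['recommendation_id']}&action=approve"
    reject = f"{base_url}/webhook/pricing-approve?id={rec['recommendation_id']}&action=reject"
    return (
        f"Price recommendation for {rec['name']} (SKU {rec['sku']})\n"
        f"Current: ${rec['current_price']:.2f} -> Recommended: ${rec['recommended_price']:.2f} "
        f"({rec['change_pct']:+.1f}%)\n"
        f"Projected margin: {rec['projected_margin_pct']:.1f}%\n"
        f"Reasoning: {rec['reasoning']}\n\n"
        f"Approve: {approve}\n"
        f"Reject:  {reject}\n"
    )
```

The same string works as the HTML-free fallback body for Slack or email.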
Option B: Slack-Based Approval

1. Create Slack App at https://api.slack.com/apps
2. Add Bot Token Scopes: chat:write, incoming-webhook
3. Install to client's Slack workspace
4. In n8n, add Slack credential with Bot Token
5. Configure Slack node to send interactive messages with Approve/Reject buttons

Webhook URLs for Approval Actions

n8n webhook URLs for approve and reject actions
text
Approve: https://n8n.clientdomain.com/webhook/pricing-approve?id={{recommendation_id}}&action=approve
Reject:  https://n8n.clientdomain.com/webhook/pricing-approve?id={{recommendation_id}}&action=reject

Testing the Approval Flow

1. Manually trigger the pricing workflow
2. Verify notification arrives via email/Slack
3. Click Approve and verify the recommendation status updates in the database
4. Verify the price does NOT update on the e-commerce platform yet (Phase 1 = recommendation only)
Note

Start all clients in 'recommendation-only' mode where approvals update the database but do NOT push price changes to the e-commerce platform. After 2–4 weeks of validated recommendations, enable the e-commerce price update step for approved items. Always maintain the audit trail regardless of auto-approval settings.

Step 11: Connect E-Commerce Platform for Automated Price Updates

Configure the API integration with the client's e-commerce platform (Shopify, WooCommerce, or BigCommerce) to enable pushing approved price changes. This step is initially configured but DISABLED — it's activated only after the client has reviewed and validated recommendations for 2–4 weeks.

Shopify Integration

1. Create a Custom App in Shopify Admin > Settings > Apps and sales channels > Develop apps
2. Configure API scopes: read_products, write_products, read_inventory
3. Install app and note the Admin API access token
Test Shopify API connectivity
bash
curl -X GET "https://CLIENT-STORE.myshopify.com/admin/api/2024-01/products.json?limit=5" \
  -H "X-Shopify-Access-Token: shpat_XXXXXXXXXXXXX" | python3 -m json.tool
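Once the validation period ends, approved prices are pushed with a PUT to the same Admin API. A hedged sketch using only the Python standard library — the store name, token, and variant ID are placeholders, and `dry_run` defaults to returning the would-be request without calling Shopify:

```python
import json
import urllib.request

def update_shopify_price(store, token, variant_id, new_price, dry_run=True):
    """PUT a variant price via the Shopify Admin API (version 2024-01,
    matching the curl test above). Keep dry_run=True until the Phase 1
    validation period is complete; it returns the request that would be
    sent instead of calling Shopify."""
    url = f"https://{store}.myshopify.com/admin/api/2024-01/variants/{variant_id}.json"
    payload = {"variant": {"id": variant_id, "price": f"{new_price:.2f}"}}
    if dry_run:
        return {"url": url, "payload": payload}
    request = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "X-Shopify-Access-Token": token,
            "Content-Type": "application/json",
        },
        method="PUT",
    )
    with urllib.request.urlopen(request, timeout=30) as response:
        return json.loads(response.read())
```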

WooCommerce Integration

1
Go to WooCommerce > Settings > Advanced > REST API
2
Add key: Description='Pricing Agent', User=admin, Permissions=Read/Write
3
Note Consumer Key and Consumer Secret
Test WooCommerce API connectivity
bash
curl "https://clientstore.com/wp-json/wc/v3/products?per_page=5" \
  -u "consumer_key:consumer_secret" | python3 -m json.tool

Enable Price Update Node in n8n (After Validation Period)

1. Open the pricing agent workflow in n8n
2. Find the 'Apply Price to Shopify' node (marked with ⚠️ DISABLED)
3. Right-click > Enable
4. Test with a single low-risk product first
5. Monitor the audit log for successful price application
Critical

Do NOT enable automated price pushes until the client has reviewed at least 2 weeks of recommendations and confirmed accuracy. The first automated price push should be a single, low-risk product with a small price change. Gradually expand to more products. Always maintain the ability to bulk-revert prices using the stored previous prices in the audit log.
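The bulk-revert capability mentioned above can be derived from the audit log: for each product, the price to restore is the old_price of the earliest change inside the rollback window. A minimal sketch (the function name is illustrative; applying the plan would reuse the platform price-update step):

```python
def build_revert_plan(audit_rows):
    """Given price_change audit rows for the rollback window, ordered
    oldest-first, return {product_id: price_to_restore}.

    The earliest old_price per product is the pre-window price, so it
    wins even when a product changed several times in the window.
    """
    plan = {}
    for row in audit_rows:
        product_id = row["product_id"]
        if product_id not in plan:  # first (earliest) change wins
            plan[product_id] = row["old_price"]
    return plan
```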

Step 12: Set Up Monitoring, Alerting, and Backup

Configure system health monitoring, alerting for anomalies, and automated database backups to ensure reliability and data protection.

1. Set up automated PostgreSQL backups
2. Schedule daily backup at 2 AM
3. Set up Docker container health monitoring
4. Schedule health check every 5 minutes
5. Create log directory
Create automated PostgreSQL backup script
bash
cat > ~/pricing-agent/scripts/backup.sh << 'BASH_EOF'
#!/bin/bash
BACKUP_DIR=~/pricing-agent/backups
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR

# Dump the database
docker exec pricing-postgres pg_dump -U pricing_admin pricing_agent | gzip > $BACKUP_DIR/pricing_agent_$DATE.sql.gz

# Upload to S3 (if configured)
if command -v aws &> /dev/null; then
    aws s3 cp $BACKUP_DIR/pricing_agent_$DATE.sql.gz s3://msp-backups/pricing-agent/CLIENT_NAME/
fi

# Remove local backups older than 30 days
find $BACKUP_DIR -name '*.sql.gz' -mtime +30 -delete

echo "Backup completed: pricing_agent_$DATE.sql.gz"
BASH_EOF
chmod +x ~/pricing-agent/scripts/backup.sh
Schedule daily backup at 2 AM via cron
bash
(crontab -l 2>/dev/null; echo '0 2 * * * ~/pricing-agent/scripts/backup.sh >> ~/pricing-agent/logs/backup.log 2>&1') | crontab -
Create Docker container health monitoring script
bash
cat > ~/pricing-agent/scripts/health_check.sh << 'BASH_EOF'
#!/bin/bash
SERVICES=("pricing-postgres" "pricing-n8n" "pricing-grafana")
ALERT_EMAIL="msp-alerts@mspcompany.com"

for service in "${SERVICES[@]}"; do
    STATUS=$(docker inspect -f '{{.State.Status}}' $service 2>/dev/null)
    if [ "$STATUS" != "running" ]; then
        echo "ALERT: $service is $STATUS on $(hostname) at $(date)" | \
          mail -s "Pricing Agent Alert: $service down" $ALERT_EMAIL
        docker compose -f ~/pricing-agent/docker-compose.yml restart $service
    fi
done
BASH_EOF
chmod +x ~/pricing-agent/scripts/health_check.sh
Schedule health check every 5 minutes via cron
bash
(crontab -l 2>/dev/null; echo '*/5 * * * * ~/pricing-agent/scripts/health_check.sh >> ~/pricing-agent/logs/health.log 2>&1') | crontab -
Create log directory
bash
mkdir -p ~/pricing-agent/logs
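As a complement to the backup cron job, a small freshness check can alert when the newest dump is older than the daily schedule allows. A sketch assuming the backup directory layout above (the helper name and the 26-hour threshold are illustrative):

```python
import pathlib
import time

def latest_backup_age_hours(backup_dir="~/pricing-agent/backups"):
    """Return hours since the newest *.sql.gz dump, or None if there
    are no dumps. With a daily 2 AM schedule, anything over ~26 hours
    indicates a missed backup and should trigger an alert."""
    directory = pathlib.Path(backup_dir).expanduser()
    dumps = sorted(directory.glob("*.sql.gz"), key=lambda p: p.stat().st_mtime)
    if not dumps:
        return None
    return (time.time() - dumps[-1].stat().st_mtime) / 3600.0
```

Wire it into the same cron cadence as the health check and route failures to the MSP alert address.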
Note

For MSPs managing multiple pricing agent instances, consider using a centralized monitoring solution like Datadog or your existing RMM tool. Set up additional alerts for: (1) n8n workflow execution failures, (2) Prisync API errors (data freshness), (3) LLM API budget approaching limits, (4) unusual recommendation patterns (e.g., agent suggesting >10% price drop). Configure n8n's built-in error workflow to send email/Slack notifications on any workflow failure.
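The "unusual recommendation patterns" alert in (4) can be implemented as a simple post-processing filter over the agent's output. A sketch with illustrative thresholds (a drop larger than 10%, or agent confidence below 0.5):

```python
def flag_unusual(recs, max_drop_pct=10.0, min_confidence=0.5):
    """Return alert records for recommendations matching the unusual
    patterns above: a price drop larger than max_drop_pct, or an agent
    confidence below min_confidence. Thresholds are illustrative and
    should be tuned per client."""
    flagged = []
    for rec in recs:
        reasons = []
        if rec.get("change_pct", 0.0) < -max_drop_pct:
            reasons.append(f"price drop exceeds {max_drop_pct}%")
        if rec.get("confidence", 1.0) < min_confidence:
            reasons.append(f"confidence below {min_confidence}")
        if reasons:
            flagged.append({"sku": rec.get("sku"), "reasons": reasons})
    return flagged
```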

Custom AI Components

Pricing Analysis Agent — System Prompt

Type: prompt

The core system prompt for the LLM-powered pricing reasoning engine. This prompt instructs the Claude Haiku 4.5 model to analyze competitor prices, apply policy guardrails, and generate structured repricing recommendations with justifications. It is the 'brain' of the autonomous agent.

Implementation:

text
You are a Retail Pricing Analyst Agent. Your role is to analyze competitive pricing data and recommend optimal price adjustments for a retail business, strictly within defined policy guardrails.

## YOUR RESPONSIBILITIES
1. Analyze competitor pricing positions relative to our products
2. Identify products that are over-priced or under-priced relative to the competitive landscape
3. Recommend specific price adjustments that comply with ALL policy guardrails
4. Provide clear reasoning for each recommendation
5. Flag anomalies (e.g., competitor prices that seem erroneous, sudden large changes)

## POLICY GUARDRAILS (MANDATORY — NEVER VIOLATE)
- **Minimum Margin**: Never recommend a price where (price - cost) / price < min_margin_pct
- **MAP Compliance**: Never recommend a price below the MAP (Minimum Advertised Price) for any product that has a MAP price set
- **Maximum Daily Change**: Never recommend a price change exceeding max_daily_change_pct in either direction from the current price
- **Maximum Weekly Change**: Total accumulated changes over 7 days must not exceed max_weekly_change_pct
- **No Price Below Cost**: Never recommend a price at or below the product's cost, even if competitors are priced lower

## PRICING STRATEGIES (apply based on product's pricing_strategy field)
- **match_lowest**: Match the lowest competitor price (within guardrails)
- **match_market_avg**: Match the average of all tracked competitor prices
- **premium_position**: Set price at position_offset_pct above the market average
- **undercut_lowest**: Set price at position_offset_pct below the lowest competitor
- **hold_current**: Do not recommend changes unless a guardrail is being violated

## INPUT FORMAT
You will receive a JSON object for each product containing:
{
  "product": {
    "sku": "string",
    "name": "string",
    "category": "string",
    "current_price": number,
    "cost": number,
    "map_price": number or null,
    "min_margin_pct": number,
    "max_daily_change_pct": number,
    "pricing_strategy": "string",
    "recent_price_changes": [{"date": "string", "old_price": number, "new_price": number}]
  },
  "competitors": [
    {
      "name": "string",
      "price": number,
      "in_stock": boolean,
      "last_updated": "string"
    }
  ],
  "policy": {
    "min_margin_pct": number,
    "max_daily_change_pct": number,
    "max_weekly_change_pct": number,
    "enforce_map": boolean,
    "competitive_position": "string",
    "position_offset_pct": number,
    "auto_approve_threshold_pct": number
  }
}

## OUTPUT FORMAT
Respond with a JSON object:
{
  "sku": "string",
  "action": "adjust" | "hold" | "flag",
  "recommended_price": number,
  "change_pct": number,
  "projected_margin_pct": number,
  "confidence": number (0.0 to 1.0),
  "requires_human_review": boolean,
  "reasoning": "string (2-3 sentences explaining the recommendation)",
  "guardrail_checks": {
    "margin_check": "pass" | "adjusted" | "blocked",
    "map_check": "pass" | "not_applicable" | "adjusted" | "blocked",
    "daily_change_check": "pass" | "adjusted" | "blocked",
    "weekly_change_check": "pass" | "adjusted" | "blocked"
  },
  "anomalies": ["string array of any detected anomalies"]
}

## RULES
1. Always output valid JSON. No markdown, no explanatory text outside the JSON.
2. If multiple guardrails conflict, prioritize in this order: MAP > Minimum Margin > Maximum Weekly Change > Maximum Daily Change > Competitive Position
3. If the optimal competitive price violates a guardrail, adjust to the nearest compliant price and set the relevant guardrail_check to "adjusted"
4. Set requires_human_review to true if: change_pct > auto_approve_threshold_pct, or any guardrail was "adjusted" or "blocked", or any anomalies were detected
5. Ignore competitor prices from out-of-stock competitors (in_stock: false) unless they are the only data points
6. Round all prices to 2 decimal places
7. If cost data is missing (null), set action to "flag" and note that cost data is required for margin calculation
8. Never recommend the same price as the current price — if no change is warranted, set action to "hold"
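Downstream code should verify the structure of this output before acting on it, since the model can return malformed or incomplete JSON. A minimal structural check (field names and types taken from the OUTPUT FORMAT above; the helper name is illustrative, and policy compliance is enforced separately by the deterministic guardrail validator):

```python
# Field name -> expected type, per the OUTPUT FORMAT specification.
REQUIRED_FIELDS = {
    "sku": str,
    "action": str,
    "recommended_price": (int, float),
    "change_pct": (int, float),
    "projected_margin_pct": (int, float),
    "confidence": (int, float),
    "requires_human_review": bool,
    "reasoning": str,
    "guardrail_checks": dict,
}

def validate_recommendation(rec: dict) -> list:
    """Return a list of schema problems; an empty list means the object
    is structurally valid. Policy compliance is checked elsewhere."""
    problems = [
        f"missing or mistyped field: {name}"
        for name, expected in REQUIRED_FIELDS.items()
        if not isinstance(rec.get(name), expected)
    ]
    if rec.get("action") not in ("adjust", "hold", "flag"):
        problems.append("action must be one of adjust|hold|flag")
    conf = rec.get("confidence")
    if isinstance(conf, (int, float)) and not 0.0 <= conf <= 1.0:
        problems.append("confidence out of range 0.0-1.0")
    return problems
```

Recommendations that fail this check should be routed to the error handler rather than stored.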

n8n Pricing Agent Workflow

Type: workflow

The complete n8n workflow that orchestrates the pricing agent pipeline. It runs on a configurable schedule, pulls competitor data from Prisync, queries the product database, sends batches to the LLM for analysis, stores recommendations, triggers approvals, and optionally pushes approved prices to the e-commerce platform. This workflow is imported directly into n8n.

Implementation

Note

Because the exported n8n workflow JSON is extremely verbose (~2,000+ lines), the following is a node-by-node specification for building the workflow in n8n's visual editor. Workflow Name: Competitive Pricing Agent v1.0

Node 1: Schedule Trigger

  • Type: Schedule Trigger
  • Config: Run every 6 hours (0 */6 * * *)
  • Output: Triggers the workflow

Node 2: Fetch Products from Database

  • Type: Postgres
  • Operation: Execute Query
Node 2 Query: Fetch active products with pricing policies
sql
SELECT p.id, p.sku, p.name, p.category, p.brand, p.current_price, p.cost, 
       p.map_price, p.min_margin_pct, p.max_daily_change_pct, p.pricing_strategy,
       p.prisync_product_id, p.ecommerce_product_id,
       pp.min_margin_pct as policy_min_margin, pp.max_margin_pct as policy_max_margin,
       pp.max_daily_change_pct as policy_max_daily, pp.max_weekly_change_pct as policy_max_weekly,
       pp.enforce_map, pp.competitive_position, pp.position_offset_pct, pp.auto_approve_threshold_pct
FROM products p
LEFT JOIN pricing_policies pp ON (pp.category = p.category AND pp.is_active = true)
WHERE p.is_active = true
ORDER BY p.id;

Node 3: Loop Over Products (SplitInBatches)

  • Type: SplitInBatches
  • Batch Size: 10 (process 10 products per LLM call for efficiency)

Node 4: Fetch Competitor Prices from Prisync

  • Type: HTTP Request
  • Method: GET
  • URL: https://prisync.com/api/v2/get/product/prices/id/{{ $json.prisync_product_id }}
  • Headers: apikey={{ $credentials.prisyncApiKey }}, apitoken={{ $credentials.prisyncApiToken }}
  • Retry on Fail: true, 3 retries with 5s wait

Node 5: Fetch Recent Price Changes

  • Type: Postgres
Node 5 Query: Fetch recent price changes for a product
sql
SELECT old_price, new_price, created_at::text as date
FROM audit_log
WHERE product_id = {{ $json.id }} AND event_type = 'price_change'
AND created_at > NOW() - INTERVAL '7 days'
ORDER BY created_at DESC LIMIT 10;

Node 6: Store Competitor Prices in Database

  • Type: Postgres
  • Operation: Execute Query
Node 6 Query: Upsert competitor prices
sql
INSERT INTO competitor_prices (time, product_id, competitor_id, price, in_stock, url)
VALUES (NOW(), {{ $json.product_id }}, {{ $json.competitor_id }}, {{ $json.price }}, {{ $json.in_stock }}, '{{ $json.url }}')
ON CONFLICT DO NOTHING;

Node 7: Prepare LLM Prompt

  • Type: Code (JavaScript)
Node 7: Build structured batch payload for LLM
javascript
const products = $input.all();
const batch = products.map(item => {
  return {
    product: {
      sku: item.json.sku,
      name: item.json.name,
      category: item.json.category,
      current_price: parseFloat(item.json.current_price),
      cost: item.json.cost ? parseFloat(item.json.cost) : null,
      map_price: item.json.map_price ? parseFloat(item.json.map_price) : null,
      min_margin_pct: parseFloat(item.json.min_margin_pct || item.json.policy_min_margin || 15),
      max_daily_change_pct: parseFloat(item.json.max_daily_change_pct || item.json.policy_max_daily || 5),
      pricing_strategy: item.json.pricing_strategy || item.json.competitive_position || 'match_market_avg',
      recent_price_changes: item.json.recent_changes || []
    },
    competitors: (item.json.competitor_prices || []).map(cp => ({
      name: cp.competitor_name,
      price: parseFloat(cp.price),
      in_stock: cp.in_stock !== false,
      last_updated: cp.last_updated || new Date().toISOString()
    })),
    policy: {
      min_margin_pct: parseFloat(item.json.policy_min_margin || 15),
      max_daily_change_pct: parseFloat(item.json.policy_max_daily || 5),
      max_weekly_change_pct: parseFloat(item.json.policy_max_weekly || 15),
      enforce_map: item.json.enforce_map !== false,
      competitive_position: item.json.competitive_position || 'match_market_avg',
      position_offset_pct: parseFloat(item.json.position_offset_pct || 0),
      auto_approve_threshold_pct: parseFloat(item.json.auto_approve_threshold_pct || 2)
    }
  };
});

return [{ json: { prompt_batch: JSON.stringify(batch, null, 2) } }];

Node 8: Call Claude Haiku 4.5

  • Type: HTTP Request (or Anthropic node if available)
  • Method: POST
  • URL: https://api.anthropic.com/v1/messages
  • Headers: x-api-key: {{ $credentials.anthropicApiKey }}, anthropic-version: 2023-06-01, content-type: application/json
Node 8: Anthropic API request body
json
{
  "model": "claude-haiku-4-5-20250501",
  "max_tokens": 4096,
  "system": "[INSERT FULL SYSTEM PROMPT FROM PRICING ANALYSIS AGENT COMPONENT]",
  "messages": [
    {
      "role": "user",
      "content": "Analyze the following batch of products and provide repricing recommendations for each. Return a JSON array of recommendation objects.\n\n{{ $json.prompt_batch }}"
    }
  ]
}

Node 9: Parse LLM Response

  • Type: Code (JavaScript)
Node 9: Parse and normalize LLM JSON response
javascript
const response = $input.first().json;
const content = response.content[0].text;
let recommendations;
try {
  recommendations = JSON.parse(content);
  if (!Array.isArray(recommendations)) {
    recommendations = [recommendations];
  }
} catch (e) {
  // Try to extract JSON from response
  const match = content.match(/\[.*\]/s) || content.match(/\{.*\}/s);
  if (match) {
    recommendations = JSON.parse(match[0]);
    if (!Array.isArray(recommendations)) recommendations = [recommendations];
  } else {
    throw new Error('Failed to parse LLM response: ' + content.substring(0, 200));
  }
}

return recommendations.map(rec => ({ json: rec }));

Node 10: Filter Actionable Recommendations

  • Type: IF
  • Condition: {{ $json.action }} !== 'hold'
  • True path: Continue to store and notify
  • False path: Skip (log only)

Node 11: Store Recommendation in Database

  • Type: Postgres
Node 11 Query: Insert recommendation record
sql
INSERT INTO recommendations 
  (product_id, current_price, recommended_price, change_pct, projected_margin_pct, reason, competitor_context, policy_checks, confidence_score, status)
SELECT p.id, p.current_price, {{ $json.recommended_price }}, {{ $json.change_pct }}, {{ $json.projected_margin_pct }},
  '{{ $json.reasoning }}', '{{ JSON.stringify($json.competitors || {}) }}'::jsonb, '{{ JSON.stringify($json.guardrail_checks) }}'::jsonb,
  {{ $json.confidence }}, 
  CASE WHEN {{ $json.requires_human_review }} THEN 'pending' ELSE 'auto_approved' END
FROM products p WHERE p.sku = '{{ $json.sku }}'
RETURNING id;

Node 12: Route by Approval Status

  • Type: IF
  • Condition: {{ $json.requires_human_review }} === true
  • True: Send to human approval (Node 13)
  • False: Send to auto-apply queue (Node 15)

Node 13: Send Approval Notification (Slack or Email)

  • Type: Slack (or Email Send)
  • Channel: #pricing-approvals
Node 13: Slack/email message template for human approval
text
🏷️ *Price Recommendation* for {{ $json.sku }}
*Product*: {{ $json.name }}
*Current Price*: ${{ $json.current_price }}
*Recommended*: ${{ $json.recommended_price }} ({{ $json.change_pct }}%)
*Projected Margin*: {{ $json.projected_margin_pct }}%
*Confidence*: {{ Math.round($json.confidence * 100) }}%
*Reason*: {{ $json.reasoning }}

[Approve](https://n8n.clientdomain.com/webhook/pricing-approve?id={{ $json.recommendation_id }}&action=approve) | [Reject](https://n8n.clientdomain.com/webhook/pricing-approve?id={{ $json.recommendation_id }}&action=reject)

Node 14: Approval Webhook Handler (separate workflow)

  • Create a second n8n workflow named 'Pricing Approval Handler'
  • Trigger: Webhook (GET /webhook/pricing-approve)
  • Parameters: id (recommendation ID), action (approve/reject)
1. Update the recommendation status. Map the action explicitly rather than appending 'd' (which would yield 'rejectd'): UPDATE recommendations SET status = CASE WHEN '{{ $query.action }}' = 'approve' THEN 'approved' ELSE 'rejected' END, approved_by = 'human', approved_at = NOW() WHERE id = {{ $query.id }}
2. If approved: fetch product details, push price to e-commerce platform
3. Insert audit log entry
4. Return HTML confirmation page
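The status update in step 1 is worth making explicit in code: looking the action up in a mapping and parameterizing the query avoids both the 'reject' + 'd' = 'rejectd' pitfall and SQL injection from query-string values. A sketch (helper name illustrative; the columns match the recommendations table used throughout):

```python
# Explicit mapping: 'approve' -> 'approved', 'reject' -> 'rejected'.
STATUS_BY_ACTION = {"approve": "approved", "reject": "rejected"}

def handle_approval(action: str, recommendation_id: int):
    """Return a parameterized UPDATE for the approval webhook.

    Concatenating action + 'd' would produce 'rejectd', and splicing
    query-string values directly into SQL invites injection, so the
    status is looked up and the values are passed as parameters.
    """
    status = STATUS_BY_ACTION.get(action)
    if status is None:
        raise ValueError(f"unknown action: {action!r}")
    sql = (
        "UPDATE recommendations "
        "SET status = %s, approved_by = 'human', approved_at = NOW() "
        "WHERE id = %s"
    )
    return sql, (status, recommendation_id)
```

The returned (sql, params) pair can be executed directly by a psycopg2 cursor or an n8n Postgres node with query parameters.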

Node 15: Auto-Apply Price (DISABLED by default)

Warning

This node is DISABLED during Phase 1. Enable only after validating recommendation quality.

  • Type: HTTP Request
  • Shopify example: Method: PUT, URL: https://CLIENT-STORE.myshopify.com/admin/api/2024-01/variants/{{ $json.variant_id }}.json
Node 15: Shopify variant price update body
json
{ "variant": { "price": "{{ $json.recommended_price }}" } }

Node 16: Write Audit Log

  • Type: Postgres
Node 16 Query: Write full audit log entry
sql
INSERT INTO audit_log (event_type, product_id, old_price, new_price, change_reason, agent_reasoning, policy_applied, user_action, metadata)
SELECT 
  'recommendation_generated',
  p.id,
  p.current_price,
  {{ $json.recommended_price }},
  '{{ $json.reasoning }}',
  '{{ JSON.stringify($json) }}',
  '{{ JSON.stringify($json.guardrail_checks) }}',
  CASE WHEN {{ $json.requires_human_review }} THEN 'pending_approval' ELSE 'auto_approved' END,
  '{{ JSON.stringify({confidence: $json.confidence, anomalies: $json.anomalies || []}) }}'::jsonb
FROM products p WHERE p.sku = '{{ $json.sku }}';

Node 17: Error Handler

  • Type: Error Trigger (global workflow error handler)
  • Action: Send Slack/Email alert to MSP ops team
  • Message: Include error details, failed node name, timestamp

Grafana Dashboard Configuration

Type: integration

Pre-configured Grafana dashboard with panels for competitive price positioning, recommendation queue, price change history, margin impact analysis, and compliance audit trail. Connects to the PostgreSQL/TimescaleDB database.

Implementation

Dashboard Name: Competitive Pricing Intelligence

  • Refresh Rate: 5 minutes
  • Time Range Default: Last 7 days

Row 1: Key Metrics (Stat Panels)

Panel 1.1: Pending Recommendations

Threshold: Green < 10, Yellow 10-25, Red > 25
sql
SELECT COUNT(*) as value FROM recommendations WHERE status = 'pending';

Panel 1.2: Recommendations Applied (7d)

sql
SELECT COUNT(*) as value FROM recommendations 
WHERE status IN ('approved', 'auto_approved') 
AND approved_at > NOW() - INTERVAL '7 days';

Panel 1.3: Avg Margin Impact

sql
SELECT ROUND(AVG(projected_margin_pct - 
  (SELECT (p.current_price - p.cost) / p.current_price * 100 
   FROM products p WHERE p.id = r.product_id)), 2) as value
FROM recommendations r
WHERE status IN ('approved', 'auto_approved') 
AND approved_at > NOW() - INTERVAL '7 days';

Panel 1.4: Products Below Market

sql
WITH latest_prices AS (
  SELECT DISTINCT ON (product_id, competitor_id) 
    product_id, price
  FROM competitor_prices
  WHERE time > NOW() - INTERVAL '24 hours'
  ORDER BY product_id, competitor_id, time DESC
),
below_market AS (
  SELECT p.id
  FROM products p
  JOIN latest_prices lp ON lp.product_id = p.id
  GROUP BY p.id, p.current_price
  HAVING p.current_price < AVG(lp.price) * 0.95
)
SELECT COUNT(*) as value FROM below_market;

Row 2: Competitive Position Heatmap (Table Panel)

Use value mapping for position column colors: Below Market=blue, Competitive=green, Above Average=yellow, Premium=red
sql
WITH latest AS (
  SELECT DISTINCT ON (cp.product_id, cp.competitor_id)
    cp.product_id, c.name as competitor, cp.price, cp.in_stock
  FROM competitor_prices cp
  JOIN competitors c ON c.id = cp.competitor_id
  WHERE cp.time > NOW() - INTERVAL '24 hours'
  ORDER BY cp.product_id, cp.competitor_id, cp.time DESC
)
SELECT p.sku, p.name, p.current_price as our_price,
  MIN(l.price) as lowest_competitor,
  AVG(l.price)::numeric(10,2) as avg_competitor,
  MAX(l.price) as highest_competitor,
  ROUND((p.current_price - MIN(l.price)) / MIN(l.price) * 100, 1) as pct_vs_lowest,
  CASE 
    WHEN p.current_price < MIN(l.price) THEN 'Below Market'
    WHEN p.current_price <= AVG(l.price) THEN 'Competitive'
    WHEN p.current_price <= MAX(l.price) THEN 'Above Average'
    ELSE 'Premium'
  END as position
FROM products p
JOIN latest l ON l.product_id = p.id
WHERE l.in_stock = true
GROUP BY p.id, p.sku, p.name, p.current_price
ORDER BY pct_vs_lowest DESC;

Row 3: Price Change History (Time Series)

sql
SELECT time_bucket('1 day', al.created_at) as time,
  COUNT(*) FILTER (WHERE al.user_action = 'approved') as approved,
  COUNT(*) FILTER (WHERE al.user_action = 'auto_approved') as auto_approved,
  COUNT(*) FILTER (WHERE al.user_action = 'rejected') as rejected,
  COUNT(*) FILTER (WHERE al.user_action = 'pending_approval') as pending
FROM audit_log al
WHERE al.event_type = 'recommendation_generated'
AND al.created_at > $__timeFrom()
GROUP BY time_bucket('1 day', al.created_at)
ORDER BY 1;
Row 4: Pending Recommendations Queue (Table Panel)

sql
SELECT r.id, p.sku, p.name, 
  r.current_price, r.recommended_price,
  r.change_pct || '%' as change,
  r.projected_margin_pct || '%' as margin,
  r.confidence_score,
  r.reason,
  r.created_at::text as recommended_at
FROM recommendations r
JOIN products p ON p.id = r.product_id
WHERE r.status = 'pending'
ORDER BY r.created_at DESC;

Row 5: Compliance Audit Log (Table Panel)

sql
SELECT al.created_at::text as timestamp,
  al.event_type, p.sku, p.name,
  al.old_price, al.new_price,
  al.change_reason,
  al.policy_applied,
  al.user_action, al.user_id
FROM audit_log al
JOIN products p ON p.id = al.product_id
WHERE al.created_at > $__timeFrom()
ORDER BY al.created_at DESC
LIMIT 100;

Grafana Data Source Config

  • Type: PostgreSQL
  • Host: postgres:5432 (Docker internal) or localhost:5432
  • Database: pricing_agent
  • User: pricing_admin
  • TimescaleDB: ON

Competitor Price Sync Script

Type: integration

Python script that runs as an n8n Code node or standalone cron job to sync competitor prices from Prisync API into the local TimescaleDB database. Handles pagination, error recovery, and data deduplication.

Implementation:

Competitor Price Sync — Prisync to TimescaleDB
python
#!/usr/bin/env python3
"""
Competitor Price Sync - Prisync to TimescaleDB
Runs via n8n Code node or as standalone cron job.
Syncs all product prices from Prisync and stores in local DB.
"""
import os
import json
import time
import requests
import psycopg2
from datetime import datetime, timezone

# Configuration
PRISYNC_API_KEY = os.environ.get('PRISYNC_API_KEY')
PRISYNC_API_TOKEN = os.environ.get('PRISYNC_API_TOKEN')
PRISYNC_BASE_URL = 'https://prisync.com/api/v2'

DB_CONFIG = {
    'host': os.environ.get('DB_HOST', 'localhost'),
    'port': int(os.environ.get('DB_PORT', 5432)),
    'database': os.environ.get('DB_NAME', 'pricing_agent'),
    'user': os.environ.get('DB_USER', 'pricing_admin'),
    'password': os.environ.get('DB_PASSWORD', '')
}

MAX_RETRIES = 3
RETRY_DELAY = 5  # seconds


def prisync_request(endpoint, params=None):
    """Make authenticated request to Prisync API with retry logic."""
    headers = {
        'apikey': PRISYNC_API_KEY,
        'apitoken': PRISYNC_API_TOKEN
    }
    for attempt in range(MAX_RETRIES):
        try:
            response = requests.get(
                f'{PRISYNC_BASE_URL}/{endpoint}',
                headers=headers,
                params=params,
                timeout=30
            )
            response.raise_for_status()
            return response.json()
        except requests.exceptions.RequestException as e:
            if attempt < MAX_RETRIES - 1:
                print(f'Prisync API error (attempt {attempt+1}): {e}. Retrying in {RETRY_DELAY}s...')
                time.sleep(RETRY_DELAY)
            else:
                raise


def get_all_products():
    """Fetch all products from Prisync."""
    result = prisync_request('list/product/id')
    if result.get('success'):
        return result.get('results', {})
    return {}


def get_product_prices(product_id):
    """Fetch competitor prices for a specific product."""
    result = prisync_request(f'get/product/prices/id/{product_id}')
    if result.get('success'):
        return result.get('results', [])
    return []


def sync_prices():
    """Main sync function: pull all competitor prices and store in DB."""
    conn = psycopg2.connect(**DB_CONFIG)
    cur = conn.cursor()
    now = datetime.now(timezone.utc)
    
    # Get product mapping: prisync_product_id -> our product_id
    cur.execute('SELECT id, prisync_product_id FROM products WHERE prisync_product_id IS NOT NULL AND is_active = true')
    product_map = {str(row[1]): row[0] for row in cur.fetchall()}  # keys as str to match str(prisync_id) lookups below
    
    # Get competitor mapping
    cur.execute('SELECT id, name FROM competitors WHERE is_active = true')
    competitor_map = {row[1].lower(): row[0] for row in cur.fetchall()}
    
    products = get_all_products()
    total_synced = 0
    errors = []
    
    for prisync_id, product_info in products.items():
        our_product_id = product_map.get(str(prisync_id))
        if not our_product_id:
            continue
        
        try:
            prices = get_product_prices(prisync_id)
            for price_entry in prices:
                comp_name = price_entry.get('competitor_name', '').lower()
                comp_id = competitor_map.get(comp_name)
                
                if not comp_id:
                    # Auto-create competitor entry
                    cur.execute(
                        'INSERT INTO competitors (name, website_url) VALUES (%s, %s) RETURNING id',
                        (price_entry.get('competitor_name', 'Unknown'), price_entry.get('url', ''))
                    )
                    comp_id = cur.fetchone()[0]
                    competitor_map[comp_name] = comp_id
                
                price_value = price_entry.get('price')
                if price_value and float(price_value) > 0:
                    cur.execute("""
                        INSERT INTO competitor_prices (time, product_id, competitor_id, price, in_stock, url)
                        VALUES (%s, %s, %s, %s, %s, %s)
                    """, (
                        now, our_product_id, comp_id,
                        float(price_value),
                        price_entry.get('in_stock', True),
                        price_entry.get('url', '')
                    ))
                    total_synced += 1
            
            # Rate limiting: Prisync API allows ~60 req/min
            time.sleep(1)
            
        except Exception as e:
            errors.append(f'Product {prisync_id}: {str(e)}')
            continue
    
    # Touch updated_at so dashboards reflect the latest sync time
    # (current_price itself changes only via approved recommendations)
    cur.execute("""
        UPDATE products p SET updated_at = NOW()
        WHERE p.is_active = true;
    """)
    
    conn.commit()
    cur.close()
    conn.close()
    
    return {
        'synced_prices': total_synced,
        'errors': errors,
        'timestamp': now.isoformat()
    }


if __name__ == '__main__':
    result = sync_prices()
    print(json.dumps(result, indent=2))

Pricing Policy Guardrail Validator

Type: skill

A deterministic validation layer that runs AFTER the LLM generates recommendations and BEFORE any recommendation is stored or acted upon. This is a critical safety net that ensures no recommendation can violate pricing policies, regardless of what the LLM outputs. It implements hard-coded guardrails that cannot be prompt-injected or hallucinated away.

Implementation:

python
#!/usr/bin/env python3
"""
Pricing Policy Guardrail Validator
Deterministic post-LLM validation layer.
Runs as n8n Code node or standalone module.
NEVER trust the LLM's guardrail self-assessment — always re-validate.
"""
import json
from decimal import Decimal, ROUND_HALF_UP
from datetime import datetime, timedelta


class GuardrailValidator:
    
    def __init__(self, product, policy, recent_changes=None):
        self.product = product
        self.policy = policy
        self.recent_changes = recent_changes or []
        self.violations = []
        self.adjustments = []
    
    def validate_and_adjust(self, recommended_price):
        """
        Validate a recommended price against all guardrails.
        Returns adjusted price and validation report.
        """
        rec_price = Decimal(str(recommended_price)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
        current_price = Decimal(str(self.product['current_price']))
        cost = Decimal(str(self.product['cost'])) if self.product.get('cost') else None
        map_price = Decimal(str(self.product['map_price'])) if self.product.get('map_price') else None
        
        adjusted_price = rec_price
        checks = {}
        
        # Check 1: Never below cost
        if cost and adjusted_price <= cost:
            min_price_above_cost = cost + Decimal('0.01')
            self.violations.append(f'Price ${adjusted_price} is at or below cost ${cost}')
            adjusted_price = max(adjusted_price, min_price_above_cost)
            checks['cost_check'] = 'blocked'
        else:
            checks['cost_check'] = 'pass'
        
        # Check 2: MAP compliance (highest priority)
        if map_price and self.policy.get('enforce_map', True):
            if adjusted_price < map_price:
                self.violations.append(f'Price ${adjusted_price} is below MAP ${map_price}')
                adjusted_price = map_price
                checks['map_check'] = 'adjusted'
            else:
                checks['map_check'] = 'pass'
        else:
            checks['map_check'] = 'not_applicable'
        
        # Check 3: Minimum margin
        if cost and cost > 0:
            min_margin_pct = Decimal(str(self.policy.get('min_margin_pct', 15)))
            margin_pct = ((adjusted_price - cost) / adjusted_price * 100).quantize(Decimal('0.01'))
            if margin_pct < min_margin_pct:
                # Calculate minimum price for required margin
                min_price_for_margin = (cost / (1 - min_margin_pct / 100)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
                self.adjustments.append(f'Price adjusted from ${adjusted_price} to ${min_price_for_margin} to meet {min_margin_pct}% margin requirement')
                adjusted_price = max(adjusted_price, min_price_for_margin)
                checks['margin_check'] = 'adjusted'
            else:
                checks['margin_check'] = 'pass'
        else:
            checks['margin_check'] = 'skipped_no_cost_data'
        
        # Check 4: Maximum daily change
        max_daily_pct = Decimal(str(self.policy.get('max_daily_change_pct', 5)))
        if current_price > 0:
            change_pct = abs((adjusted_price - current_price) / current_price * 100)
            if change_pct > max_daily_pct:
                if adjusted_price > current_price:
                    max_price = (current_price * (1 + max_daily_pct / 100)).quantize(Decimal('0.01'))
                    adjusted_price = min(adjusted_price, max_price)
                else:
                    min_price = (current_price * (1 - max_daily_pct / 100)).quantize(Decimal('0.01'))
                    adjusted_price = max(adjusted_price, min_price)
                self.adjustments.append(f'Price capped at ${adjusted_price} due to {max_daily_pct}% daily change limit')
                checks['daily_change_check'] = 'adjusted'
            else:
                checks['daily_change_check'] = 'pass'
        else:
            checks['daily_change_check'] = 'skipped_no_current_price'
        
        # Check 5: Maximum weekly change (accumulated)
        max_weekly_pct = Decimal(str(self.policy.get('max_weekly_change_pct', 15)))
        if self.recent_changes and current_price > 0:
            seven_days_ago = datetime.now() - timedelta(days=7)
            week_changes = [c for c in self.recent_changes if datetime.fromisoformat(c['date']) > seven_days_ago]
            if week_changes:
                # recent_changes is expected sorted newest-first, so the last entry is the oldest in the window
                oldest_price = Decimal(str(week_changes[-1].get('old_price', current_price)))
                total_change_pct = abs((adjusted_price - oldest_price) / oldest_price * 100)
                if total_change_pct > max_weekly_pct:
                    if adjusted_price > oldest_price:
                        max_weekly_price = (oldest_price * (1 + max_weekly_pct / 100)).quantize(Decimal('0.01'))
                        adjusted_price = min(adjusted_price, max_weekly_price)
                    else:
                        min_weekly_price = (oldest_price * (1 - max_weekly_pct / 100)).quantize(Decimal('0.01'))
                        adjusted_price = max(adjusted_price, min_weekly_price)
                    self.adjustments.append(f'Price further capped at ${adjusted_price} due to {max_weekly_pct}% weekly change limit')
                    checks['weekly_change_check'] = 'adjusted'
                else:
                    checks['weekly_change_check'] = 'pass'
            else:
                checks['weekly_change_check'] = 'pass'
        else:
            checks['weekly_change_check'] = 'pass'
        
        # Re-verify MAP after all adjustments (MAP has highest priority)
        if map_price and self.policy.get('enforce_map', True) and adjusted_price < map_price:
            adjusted_price = map_price
            checks['map_check'] = 'enforced_final'
        
        # Calculate final metrics
        final_change_pct = float(((adjusted_price - current_price) / current_price * 100).quantize(Decimal('0.01'))) if current_price > 0 else 0
        final_margin_pct = float(((adjusted_price - cost) / adjusted_price * 100).quantize(Decimal('0.01'))) if cost and adjusted_price > 0 else None
        
        # Determine if human review is needed
        auto_threshold = float(self.policy.get('auto_approve_threshold_pct', 2))
        needs_review = (
            abs(final_change_pct) > auto_threshold or
            any(v != 'pass' and v != 'not_applicable' for v in checks.values()) or
            len(self.violations) > 0
        )
        
        was_adjusted = float(adjusted_price) != float(rec_price)
        
        return {
            'original_recommendation': float(rec_price),
            'validated_price': float(adjusted_price),
            'was_adjusted': was_adjusted,
            'change_pct': final_change_pct,
            'projected_margin_pct': final_margin_pct,
            'requires_human_review': needs_review,
            'guardrail_checks': checks,
            'violations': self.violations,
            'adjustments': self.adjustments
        }


# n8n Code Node integration
def validate_recommendations(recommendations, products_db, policies_db):
    """Process a batch of LLM recommendations through guardrail validation."""
    validated = []
    for rec in recommendations:
        sku = rec.get('sku')
        product = products_db.get(sku)
        if not product:
            continue
        
        policy = policies_db.get(product.get('category'), policies_db.get('default', {}))
        
        validator = GuardrailValidator(
            product=product,
            policy=policy,
            recent_changes=product.get('recent_price_changes', [])
        )
        
        result = validator.validate_and_adjust(rec.get('recommended_price', product['current_price']))
        result['sku'] = sku
        result['name'] = product.get('name', '')
        result['current_price'] = product['current_price']
        result['llm_reasoning'] = rec.get('reasoning', '')
        result['llm_confidence'] = rec.get('confidence', 0)
        result['anomalies'] = rec.get('anomalies', [])
        
        validated.append(result)
    
    return validated
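The ordering of the checks matters: the daily-change cap can push a price back below MAP, which is why the validator re-verifies MAP last. A minimal, self-contained sketch of that ordering (the helper name and dollar figures are illustrative, not part of the validator above):

```python
from decimal import Decimal

def cap_daily_then_enforce_map(current, recommended, map_price, max_daily_pct=Decimal('5')):
    """Illustrative helper: apply the daily-change floor, then let MAP override it."""
    daily_floor = current * (1 - max_daily_pct / 100)   # largest allowed one-day decrease
    price = max(recommended, daily_floor)               # daily cap first
    return max(price, map_price)                        # MAP outranks every other rule

# current $100, recommendation $80: the daily cap lifts it to $95, then MAP $97 lifts it further
print(cap_daily_then_enforce_map(Decimal('100'), Decimal('80'), Decimal('97')))
```

If MAP were enforced before the daily cap instead, a large capped decrease could land below MAP and ship a non-compliant price, which is exactly what the final re-check in `validate_and_adjust` prevents.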

E-Commerce Price Updater

Type: integration

A modular integration for pushing approved price changes to Shopify, WooCommerce, or BigCommerce. It includes rollback capability and confirmation logging, and runs as the final step in the workflow after human or auto-approval.

Implementation:

E-Commerce Price Updater
python
#!/usr/bin/env python3
"""
E-Commerce Price Updater
Pushes approved price changes to the client's e-commerce platform.
Supports Shopify, WooCommerce, BigCommerce.
Includes rollback capability.
"""
import os
import json
import requests
from datetime import datetime, timezone


class ShopifyUpdater:
    def __init__(self, store_url, access_token):
        self.base_url = f'https://{store_url}/admin/api/2024-01'
        self.headers = {
            'X-Shopify-Access-Token': access_token,
            'Content-Type': 'application/json'
        }
    
    def get_product_variant(self, product_handle):
        """Get variant ID and current price by product handle."""
        resp = requests.get(
            f'{self.base_url}/products.json?handle={product_handle}',
            headers=self.headers
        )
        resp.raise_for_status()
        products = resp.json().get('products', [])
        if products:
            variants = products[0].get('variants', [])
            if variants:
                return {
                    'variant_id': variants[0]['id'],
                    'product_id': products[0]['id'],
                    'current_price': variants[0]['price'],
                    'compare_at_price': variants[0].get('compare_at_price')
                }
        return None
    
    def update_price(self, variant_id, new_price, compare_at_price=None):
        """Update variant price on Shopify."""
        payload = {'variant': {'id': variant_id, 'price': str(new_price)}}
        if compare_at_price:
            payload['variant']['compare_at_price'] = str(compare_at_price)
        
        resp = requests.put(
            f'{self.base_url}/variants/{variant_id}.json',
            headers=self.headers,
            json=payload
        )
        resp.raise_for_status()
        return resp.json()
    
    def rollback_price(self, variant_id, original_price):
        """Rollback to previous price."""
        return self.update_price(variant_id, original_price)


class WooCommerceUpdater:
    def __init__(self, store_url, consumer_key, consumer_secret):
        self.base_url = f'{store_url}/wp-json/wc/v3'
        self.auth = (consumer_key, consumer_secret)
    
    def get_product_by_sku(self, sku):
        """Find product by SKU."""
        resp = requests.get(
            f'{self.base_url}/products',
            auth=self.auth,
            params={'sku': sku}
        )
        resp.raise_for_status()
        products = resp.json()
        if products:
            return {
                'product_id': products[0]['id'],
                'current_price': products[0]['price'],
                'regular_price': products[0]['regular_price'],
                'sale_price': products[0].get('sale_price', '')
            }
        return None
    
    def update_price(self, product_id, new_price):
        """Update product regular price on WooCommerce."""
        resp = requests.put(
            f'{self.base_url}/products/{product_id}',
            auth=self.auth,
            json={'regular_price': str(new_price)}
        )
        resp.raise_for_status()
        return resp.json()
    
    def rollback_price(self, product_id, original_price):
        return self.update_price(product_id, original_price)


def apply_approved_recommendation(recommendation, platform_config):
    """
    Apply a single approved recommendation to the e-commerce platform.
    Returns success status and details for audit logging.
    """
    platform = platform_config.get('platform', 'shopify')
    result = {
        'sku': recommendation['sku'],
        'old_price': recommendation['current_price'],
        'new_price': recommendation['validated_price'],
        'platform': platform,
        'timestamp': datetime.now(timezone.utc).isoformat(),
        'success': False,
        'error': None
    }
    
    try:
        if platform == 'shopify':
            updater = ShopifyUpdater(
                platform_config['store_url'],
                platform_config['access_token']
            )
            variant_info = updater.get_product_variant(recommendation.get('product_handle', recommendation['sku']))
            if variant_info:
                result['previous_price_confirmed'] = variant_info['current_price']
                updater.update_price(variant_info['variant_id'], recommendation['validated_price'])
                result['success'] = True
                result['variant_id'] = variant_info['variant_id']
            else:
                result['error'] = f"Product not found on Shopify: {recommendation['sku']}"
        
        elif platform == 'woocommerce':
            updater = WooCommerceUpdater(
                platform_config['store_url'],
                platform_config['consumer_key'],
                platform_config['consumer_secret']
            )
            product_info = updater.get_product_by_sku(recommendation['sku'])
            if product_info:
                result['previous_price_confirmed'] = product_info['current_price']
                updater.update_price(product_info['product_id'], recommendation['validated_price'])
                result['success'] = True
                result['product_id'] = product_info['product_id']
            else:
                result['error'] = f"Product not found on WooCommerce: {recommendation['sku']}"
    
    except Exception as e:
        result['error'] = str(e)
        result['success'] = False
    
    return result


def bulk_rollback(recommendations, platform_config):
    """Emergency rollback: revert all recent price changes."""
    results = []
    for rec in recommendations:
        try:
            if platform_config['platform'] == 'shopify':
                updater = ShopifyUpdater(platform_config['store_url'], platform_config['access_token'])
                variant_info = updater.get_product_variant(rec.get('product_handle', rec['sku']))
                if variant_info:
                    updater.rollback_price(variant_info['variant_id'], rec['old_price'])
                    results.append({'sku': rec['sku'], 'rolled_back': True})
            elif platform_config['platform'] == 'woocommerce':
                updater = WooCommerceUpdater(platform_config['store_url'], platform_config['consumer_key'], platform_config['consumer_secret'])
                product_info = updater.get_product_by_sku(rec['sku'])
                if product_info:
                    updater.rollback_price(product_info['product_id'], rec['old_price'])
                    results.append({'sku': rec['sku'], 'rolled_back': True})
        except Exception as e:
            results.append({'sku': rec['sku'], 'rolled_back': False, 'error': str(e)})
    return results
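A typical wiring from an n8n Code node might look like the sketch below. The config keys match the functions above, but the store URL, token, and SKU values are placeholders, and `notify_msp` is a hypothetical alerting hook:

```python
# Placeholder credentials -- substitute the client's real values from n8n credential storage.
platform_config = {
    'platform': 'shopify',
    'store_url': 'example-store.myshopify.com',
    'access_token': 'shpat_REPLACE_ME',
}

# Shape of an approved recommendation as produced by the guardrail validator.
approved = {
    'sku': 'WIDGET-001',
    'product_handle': 'blue-widget',
    'current_price': 24.99,
    'validated_price': 23.49,
}

# result = apply_approved_recommendation(approved, platform_config)
# if not result['success']:
#     notify_msp(result['error'])   # hypothetical alerting hook
```

Keep the returned `result` dict in the audit log even on failure; the `previous_price_confirmed` field is what makes a later rollback trustworthy.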

Testing & Validation

Prisync API Connectivity Test

Execute the following command and verify a 200 response with valid JSON. Confirm at least one product appears in the response.

Prisync API connectivity test
bash
curl -X GET 'https://prisync.com/api/v2/list/product/id' -H 'apikey: YOUR_KEY' -H 'apitoken: YOUR_TOKEN'

Database Schema Verification

Connect to PostgreSQL and run the following query — verify all 6 tables exist: products, competitors, competitor_prices, pricing_policies, recommendations, audit_log.

Verify all 6 required tables exist
sql
SELECT table_name FROM information_schema.tables
WHERE table_schema = 'public'
  AND table_name IN ('products', 'competitors', 'competitor_prices',
                     'pricing_policies', 'recommendations', 'audit_log');

Product Import Validation

Run the following query and confirm the count matches the number of products in the client's catalog with cost data. Products without cost data cannot have margin guardrails enforced.

Count products with valid cost data
sql
SELECT COUNT(*) FROM products WHERE cost IS NOT NULL AND cost > 0;

Competitor Price Sync Test

Trigger the price sync script manually and verify all of the following:

1. competitor_prices table has new rows with timestamps within the last hour
2. At least 3 competitors appear per top-selling product
3. Prices are non-zero and reasonable for the product category

LLM Reasoning Engine Test

Send a single product with known competitor prices to the Claude API using the system prompt. Verify the response is valid JSON matching the expected output schema, the recommended_price is a positive number, and guardrail_checks are populated.
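That check can be automated with a small schema guard. A sketch, assuming the response fields used elsewhere in this guide (`sku`, `recommended_price`, `reasoning`, `confidence`, `guardrail_checks`); adjust `REQUIRED_FIELDS` to your actual output schema:

```python
import json

REQUIRED_FIELDS = {'sku', 'recommended_price', 'reasoning', 'confidence', 'guardrail_checks'}

def validate_llm_response(raw_text):
    """Parse the LLM response and fail loudly if the schema is off."""
    rec = json.loads(raw_text)             # raises ValueError on non-JSON output
    missing = REQUIRED_FIELDS - rec.keys()
    if missing:
        raise ValueError(f'missing fields: {sorted(missing)}')
    price = rec['recommended_price']
    if not isinstance(price, (int, float)) or isinstance(price, bool) or price <= 0:
        raise ValueError(f'recommended_price must be a positive number, got {price!r}')
    return rec

sample = '{"sku": "A1", "recommended_price": 19.99, "reasoning": "...", "confidence": 0.8, "guardrail_checks": {}}'
print(validate_llm_response(sample)['recommended_price'])
```

Running this guard before the deterministic guardrail validator means malformed LLM output is rejected at the boundary instead of propagating as a silent default.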

Guardrail Validator — MAP Violation Test

Create a test product with MAP price of $50.00, competitor lowest at $45.00, and strategy 'match_lowest'. Run through the guardrail validator and confirm the final validated_price is $50.00 (not $45.00) and map_check shows 'adjusted'.

Guardrail Validator — Margin Floor Test

Create a test product with cost $80.00, min_margin_pct 20%, and an LLM recommendation of $90.00 (margin = 11.1%). Confirm the validator adjusts to $100.00 (the minimum price yielding 20% margin) and margin_check shows 'adjusted'.

Guardrail Validator — Daily Change Limit Test

Create a test product currently priced at $100.00 with max_daily_change_pct of 5%. Submit a recommendation of $80.00 (20% decrease). Confirm the validator caps it at $95.00 and daily_change_check shows 'adjusted'.
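The expected values in the three guardrail tests above can be derived independently of the validator, which makes them good fixtures. A quick arithmetic check using the same Decimal rounding as the validator:

```python
from decimal import Decimal, ROUND_HALF_UP

# MAP test: 'match_lowest' would pick $45.00, but MAP $50.00 must win
assert max(Decimal('45.00'), Decimal('50.00')) == Decimal('50.00')

# Margin floor test: minimum price yielding a 20% margin on an $80.00 cost
min_price = (Decimal('80.00') / (1 - Decimal('20') / 100)).quantize(Decimal('0.01'), rounding=ROUND_HALF_UP)
assert min_price == Decimal('100.00')

# Daily change test: a 5% cap on a decrease from $100.00 floors the price at $95.00
daily_floor = (Decimal('100.00') * (1 - Decimal('5') / 100)).quantize(Decimal('0.01'))
assert max(Decimal('80.00'), daily_floor) == Decimal('95.00')

print('guardrail fixtures verified')
```

If the validator ever disagrees with these hand-derived values, trust the arithmetic and debug the validator.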

End-to-End Workflow Dry Run

Execute the full n8n workflow manually with the schedule trigger. Verify all of the following:

1. Products are fetched from the database
2. Prisync prices are retrieved
3. LLM generates recommendations
4. Guardrail validator processes them
5. Recommendations appear in the database with status 'pending'
6. Notification is sent via Slack/email

Human Approval Flow Test

Click the approval link in a test notification. Verify all of the following:

1. The recommendation status changes from 'pending' to 'approved' in the database
2. An audit_log entry is created with user_action = 'approved'
3. No price change is pushed to the e-commerce platform (auto-apply should be disabled in Phase 1)

Grafana Dashboard Validation

Load the Grafana dashboard and verify all panels render without errors. Confirm the Pending Recommendations panel shows the test recommendations, the Competitive Position table shows price comparisons, and the Audit Log panel shows recent entries.

E-Commerce API Price Update Test (Phase 2 only)

Select one low-value, low-risk product. Manually trigger the price update function with a $0.01 change. Verify all of the following, then rollback the price immediately:

1. The price updates on the storefront within 60 seconds
2. The audit_log records the change
3. The product's current_price in the local DB is updated

Rollback Capability Test

After applying a test price change, execute the bulk_rollback function for that product. Verify the e-commerce platform price reverts to the original value and the rollback is logged in the audit trail.

Antitrust Compliance Audit Test

Review the audit_log table and confirm every recommendation (whether applied, approved, rejected, or auto-approved) has a complete entry with all required fields. This audit trail must be complete for legal defensibility.

Performance and Load Test

Simulate a run with the client's full catalog (all active SKUs). Measure total execution time and LLM API costs. For a 1,000-SKU catalog with batches of 10, the full cycle should complete within 30 minutes and cost less than $2 in API fees.
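The budget can be sanity-checked with back-of-envelope arithmetic. The token counts and per-token rates below are assumptions for illustration; substitute the model's current published pricing before relying on the figure:

```python
# All figures below are assumptions -- replace with actual token usage and rates.
skus, batch_size = 1000, 10
calls = skus // batch_size                           # 100 LLM calls per full cycle
in_tokens, out_tokens = 2000, 500                    # assumed tokens per batched call
rate_in, rate_out = 3 / 1_000_000, 15 / 1_000_000    # assumed $/token ($3 and $15 per 1M)
cost = calls * (in_tokens * rate_in + out_tokens * rate_out)
print(f'{calls} calls, ~${cost:.2f} per cycle')
```

At these assumed rates the cycle lands comfortably under the $2 target; if measured token usage runs higher, increase the batch size or trim the per-product context sent to the LLM.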

Client Handoff

Client Handoff Agenda (2-Hour Session)

1. Solution Overview (15 min)

  • Walk through the system architecture at a high level: Prisync monitors competitors → Agent analyzes and recommends → Client approves → Prices update
  • Explain the human-in-the-loop safety net and why it's critical
  • Show the data flow diagram and where their data lives

2. Dashboard Training (30 min)

  • Live walkthrough of the Grafana dashboard: how to read the competitive position heatmap, what each metric means
  • How to navigate the pending recommendations queue
  • How to drill into specific products for price history and competitor details
  • How to export reports for management review

3. Approval Workflow Training (20 min)

  • Demonstrate receiving a Slack/email notification
  • Walk through the recommendation details: what each field means, how to interpret confidence scores
  • Practice approving and rejecting recommendations
  • Explain what happens after approval (Phase 1: logged only; Phase 2: pushes to store)

4. Pricing Policy Review (20 min)

  • Review all configured pricing policies together: margin floors, MAP prices, daily/weekly change limits, competitive positioning strategies
  • Explain how to request policy changes (contact MSP, not self-service in Phase 1)
  • Discuss category-specific strategies and any products that need special handling

5. Escalation and Support (10 min)

  • MSP support contact information and response SLAs
  • What constitutes an emergency (e.g., incorrect prices on storefront)
  • Emergency rollback procedure: who to call and expected response time

6. Documentation Handoff

Leave behind the following documents:

  • Pricing Policy Configuration Sheet: Spreadsheet listing all products, their categories, assigned policies, margin floors, MAP prices, and competitive strategies
  • Dashboard User Guide: PDF with annotated screenshots of every dashboard panel
  • Approval Workflow Quick Reference: One-page card explaining how to approve/reject and what each field means
  • Emergency Contact Card: MSP phone, email, after-hours support number, and escalation matrix
  • System Architecture Diagram: For the client's IT team or records

7. Success Criteria Review (15 min)

Review together and get sign-off on success metrics:

Maintenance

Ongoing Maintenance Responsibilities

Daily (Automated + Spot Check)

  • Automated: n8n workflow health monitoring (via health_check.sh cron job) — alerts MSP if any container is down
  • Automated: Database backups at 2 AM (backup.sh cron job)
  • Spot Check (5 min): Verify the latest workflow run completed successfully in n8n execution log. Check for any error notifications.

Weekly (30 min per client)

  • Review Prisync data quality: Check for broken competitor URLs (Prisync marks these as 'not found'). Re-map or replace URLs as competitor websites change.
  • Review recommendation accuracy: Spot-check 5–10 recommendations against actual competitor prices. Look for pattern of incorrect matches.
  • Check LLM API usage and costs: Review Anthropic dashboard for token consumption. Alert client if approaching budget limits.
  • Review approval queue: Check for stale pending recommendations that the client hasn't acted on (>48 hours old). Send reminder.

Monthly (2 hours per client)

  • Policy Tuning: Review guardrail effectiveness with client. Analyze if margin floors or change limits need adjustment based on competitive dynamics.
  • Competitor URL Maintenance: Full audit of all competitor URLs. Remove defunct competitors, add new ones identified by the client.
  • Product Catalog Sync: Reconcile the agent's product database with the client's current catalog. Add new products, deactivate discontinued ones.
  • Performance Report: Generate monthly summary for client: number of recommendations generated, approved, rejected, auto-approved; estimated margin impact; competitive position trends.
  • Software Updates: Update Docker images for n8n, Grafana, and PostgreSQL (test in staging first). Apply security patches to the VM OS.
  • LLM Model Evaluation: Check if Anthropic has released improved models. Test new models on a sample batch before switching.

Quarterly (4 hours per client)

  • Comprehensive Compliance Audit: Review full audit log for antitrust defensibility. Ensure all pricing decisions have documented reasoning and human oversight.
  • Strategy Review with Client: Present quarterly pricing intelligence insights. Discuss competitive landscape changes. Adjust strategies per category.
  • Infrastructure Review: Evaluate VM sizing, storage usage (TimescaleDB can grow large), and API cost trends. Right-size if needed.
  • Disaster Recovery Test: Restore from backup to a test instance. Verify data integrity and workflow functionality.

Triggered Maintenance

  • Prisync URL Failure Alert: When >10% of competitor URLs fail, investigate immediately (competitor website redesign, anti-scraping measures)
  • LLM API Error Spike: If >5 consecutive API failures, switch to fallback model (OpenAI GPT-5.4 mini) and investigate
  • Anomalous Recommendation Pattern: If the agent recommends >20% price drops across multiple products simultaneously, pause the workflow and investigate (possible data quality issue or competitor pricing error)
  • Client Reports Incorrect Price on Storefront: Emergency response — verify recommendation, check approval trail, rollback if necessary within 30 minutes SLA

SLA Recommendations

  • Response Time: 4-hour response for non-critical issues, 30-minute response for pricing emergencies
  • Uptime Target: 99.5% for the agent pipeline (allows ~3.6 hours downtime/month for maintenance)
  • Data Freshness: Competitor prices refreshed at least every 6 hours; recommendations generated within 1 hour of fresh data
  • Rollback SLA: Any incorrect price change reverted within 30 minutes of report

Escalation Path

1. Tier 1 (MSP helpdesk): Dashboard access issues, notification delivery problems, basic questions
2. Tier 2 (MSP engineer): Workflow failures, API errors, policy configuration changes, new product onboarding
3. Tier 3 (MSP AI specialist / vendor support): LLM quality degradation, guardrail logic changes, compliance concerns, major architecture changes

Alternatives

SaaS-Only Approach (Prisync + Pricefy Built-in Repricing)

Instead of building a custom autonomous agent with n8n and LLM, use Pricefy's built-in autopilot repricing feature ($189/month Business tier) or Prisync's dynamic pricing rules. The client configures repricing rules directly in the SaaS platform — no custom infrastructure, no LLM costs, no agent orchestration layer.

Enterprise Platform Approach (Competera or Intelligence Node)

Deploy an enterprise-grade pricing optimization platform like Competera ($5K+/month) or Intelligence Node ($5K+/month) that includes built-in ML demand elasticity modeling, automated product matching, and full repricing automation. These platforms handle the entire pipeline from data collection through price optimization.

Custom Python Agent with CrewAI/LangGraph (No n8n)

Build the entire agent orchestration layer in Python using CrewAI or LangGraph frameworks instead of n8n. This provides maximum flexibility and control but requires stronger development capabilities.

Marketplace-Focused Approach (BQool / StreetPricer)

For clients who sell primarily on Amazon and/or Walmart marketplaces rather than their own e-commerce store, use dedicated marketplace repricing tools like BQool ($50+/month) or StreetPricer that integrate directly with seller accounts.

Hybrid Approach: Prisync Monitoring + Weekly MSP-Curated Reports

Deploy Prisync for automated competitor monitoring but skip the autonomous agent entirely. Instead, the MSP analyst reviews Prisync data weekly and produces a human-curated repricing recommendation report for the client. This is a services-heavy approach with minimal technology investment.
