Deterministic automation · 57 min read

Implementation Guide: Route incoming claims to the right carrier portal and initiate first notice of loss

Step-by-step implementation guide for deploying AI to route incoming claims to the right carrier portal and initiate first notice of loss (FNOL) for insurance agency clients.

Hardware Procurement

ScanSnap iX2500 Document Scanner

PFU/Ricoh · PA03839-B005 · Qty: 1

$460 per unit (MSP cost) / $575–$650 suggested resale

High-speed document scanner (45 ppm, 100-sheet ADF) for digitizing paper claim forms, handwritten FNOL reports, and supporting documentation that arrives via walk-in or mail. Scanned documents are OCR-processed and fed into the claims routing engine. Only needed if the agency receives paper-based FNOL submissions.

ScanSnap iX1600 Document Scanner (Budget Alternative)

PFU/Ricoh · PA03770-B615 · Qty: 1

$350 per unit (MSP cost) / $440–$500 suggested resale

Lower-cost alternative to the iX2500 for agencies with lighter paper intake volume. Slightly slower 40 ppm scanning speed, with Wi-Fi connectivity. Suitable for agencies processing fewer than 20 paper claims per week.

Ricoh fi-8170 Production Scanner

Ricoh · PA03810-B055 · Qty: 1

$800–$1,000 per unit (MSP cost) / $1,000–$1,100 suggested resale

Production-grade departmental scanner for high-volume agencies processing 100+ paper claims per day. Only recommended for large agencies or agencies that act as claims processing hubs for multiple locations.

Software Procurement

Microsoft 365 Business Premium

Microsoft · per-seat SaaS

$22/user/month

Provides Exchange Online shared mailbox for claims intake, Microsoft Power Automate for email monitoring triggers, Azure AD for SSO/MFA, and SharePoint for document storage. Most agencies already have this; verify the license tier includes Power Automate.

Power Automate Premium

Microsoft · per-seat SaaS · Qty: 1–2 automation service accounts

$15/user/month

Enables premium connectors and desktop flow (RPA) capabilities for Power Automate. Required for custom HTTP connectors to AMS APIs and for desktop-based carrier portal automation as a fallback. Only the service accounts running automations need this license.

n8n Self-Hosted (Community Edition)

n8n GmbH · Open-source (fair-code, self-hosted)

$0 software cost + $20–$50/month VPS hosting (Hetzner, DigitalOcean, or Linode)

Primary workflow orchestration engine for complex multi-carrier routing logic. Self-hosted on a small Linux VM gives the MSP full control, no per-execution fees, and the ability to build sophisticated branching workflows with 200+ built-in integrations. Recommended over Power Automate for agencies with 10+ carriers.

n8n Cloud (Managed Alternative)

n8n GmbH · SaaS, usage-based

€20/month (Starter, 2,500 executions) or €50/month (Pro, 10,000 executions)

Managed cloud alternative to self-hosting n8n. Eliminates VM management overhead but adds per-execution costs. Recommended only for MSPs without Linux VM management capability or for proof-of-concept phases.

AWS Account (Lambda + S3 + DynamoDB + SES)

Amazon Web Services · usage-based (pay-per-invocation)

$30–$50/month for typical agency claim volumes (500–2,000 claims/month)

Serverless compute (Lambda) for claims parsing and routing logic, S3 for document archival, DynamoDB for claims tracking and audit trail, SES for notification emails. Serverless architecture keeps costs near zero when no claims are being processed; storage is the only fixed cost.

Applied Epic SDK / API Access

Applied Systems · REST API

Included with Applied Epic subscription; API access may require separate agreement

REST API integration for reading policy data (carrier lookup by policy number), writing claims records, and syncing claim status. This is the critical AMS integration point for agencies running Applied Epic.

Vertafore AMS360 API Access

Vertafore · OData API

Included with AMS360 subscription; integration setup may require Vertafore professional services engagement

OData-based API for reading policy data, carrier assignments, and writing claims records for agencies running AMS360. Required as the AMS integration point for Vertafore shops.

HawkSoft API Access

HawkSoft · API access

Included with HawkSoft subscription

API integration for reading policy and carrier data and writing claims records for agencies running HawkSoft CMS. Growing partner ecosystem with documented API.

IVANS Claims Download

Applied Systems (IVANS) · Per-agency license

Free if agency already has IVANS Policy Download license; otherwise ~$50–$100/month

Industry-standard data exchange network that delivers claims status updates from carriers directly into the AMS. Ensures bidirectional claim data flow. Critical for keeping the AMS in sync after automated FNOL submission.

GloveBox Client Portal

GloveBox · SaaS (flat platform fee + per-user)

$299/month platform + $34/month per user

Policyholder self-service portal enabling clients to initiate claims directly, which then feed into the automated routing engine. Eliminates phone/email intake for tech-savvy policyholders and provides a mobile app experience. Quick win that demonstrates immediate value.

Playwright (Browser Automation)

Microsoft (Open Source) · Open-source (Apache 2.0)

$0 (runs on existing Lambda/VM infrastructure)

Headless browser automation framework for submitting FNOL to carrier portals that lack modern APIs. Supports Chromium, Firefox, and WebKit. More reliable than Selenium for modern web applications with dynamic content.

PostgreSQL (or AWS DynamoDB)

PostgreSQL Global Development Group / AWS · Open-source / usage-based

$0 for PostgreSQL on existing VM; or $5–$15/month for DynamoDB on-demand

Claims tracking database storing every routing decision, FNOL submission status, audit trail entries, and error logs. PostgreSQL if self-hosting; DynamoDB if fully serverless on AWS.

Prerequisites

  • Active Agency Management System (Applied Epic, AMS360, or HawkSoft) with API access credentials obtained from the vendor. Contact Applied Developer Portal, Vertafore Partner team, or HawkSoft support to request API keys.
  • Microsoft 365 tenant with Exchange Online and at least one shared mailbox designated for claims intake (e.g., claims@agencyname.com). Shared mailbox must have IMAP/Graph API access enabled.
  • IVANS Download subscription active with Claims Download license enabled. Verify in IVANS Exchange portal that claims download is active for all target carriers.
  • AWS account created and configured with IAM admin user, billing alerts set, and region selected (us-east-1 recommended for lowest latency to most carrier APIs). Alternatively, Azure subscription if the agency is an Azure-first shop.
  • Complete carrier roster document: list of all carriers the agency writes business with, including carrier name, NAIC code, carrier portal URL, portal login credentials (service account), and whether the carrier offers an API for FNOL submission.
  • Current claims intake workflow documentation: how claims currently arrive (email percentage, phone percentage, walk-in percentage, portal percentage), who handles them, what data they enter, and into which systems.
  • Written authorization from agency principal to access carrier portals programmatically and to store carrier portal credentials in an encrypted secrets manager (AWS Secrets Manager or Azure Key Vault).
  • Network firewall rules allowing outbound HTTPS (443) to AWS endpoints, carrier portal domains, and AMS API endpoints. No inbound ports required for serverless architecture.
  • MFA enabled on all user accounts that access the AMS, carrier portals, and AWS console per NAIC Model Law #668 requirements.
  • Agency's written information security program (WISP) updated or created to cover the automation system, including data flow diagrams, risk assessment, and incident response procedures covering automated claims processing.
  • Python 3.11+ development environment on the MSP technician's workstation with pip, AWS CLI v2, and Playwright installed for development and testing.
  • SSL/TLS certificates for any custom domains used (e.g., claims-dashboard.agencyname.com). Let's Encrypt certificates are acceptable.

Installation Steps

...

Step 1: Provision AWS Infrastructure

Create the core AWS infrastructure for the claims routing engine using a serverless architecture. This includes Lambda functions for claims processing, S3 buckets for document storage, DynamoDB table for the audit trail, and Secrets Manager for carrier portal credentials. Use AWS CloudFormation or Terraform for reproducible deployments.

bash
aws configure --profile agency-claims
aws s3 mb s3://agency-claims-documents-${ACCOUNT_ID} --region us-east-1
aws s3 mb s3://agency-claims-audit-${ACCOUNT_ID} --region us-east-1
aws dynamodb create-table --table-name ClaimsAuditTrail --attribute-definitions AttributeName=claimId,AttributeType=S AttributeName=timestamp,AttributeType=S --key-schema AttributeName=claimId,KeyType=HASH AttributeName=timestamp,KeyType=RANGE --billing-mode PAY_PER_REQUEST --region us-east-1
aws dynamodb create-table --table-name CarrierRoutingRules --attribute-definitions AttributeName=carrierId,AttributeType=S --key-schema AttributeName=carrierId,KeyType=HASH --billing-mode PAY_PER_REQUEST --region us-east-1
aws secretsmanager create-secret --name agency-claims/ams-api-key --secret-string '{"api_key":"REPLACE_WITH_AMS_API_KEY","api_secret":"REPLACE_WITH_AMS_SECRET"}'
aws secretsmanager create-secret --name agency-claims/carrier-portals --secret-string '{"travelers":{"username":"svc_account","password":"REPLACE"},"hartford":{"username":"svc_account","password":"REPLACE"}}'
aws iam create-role --role-name claims-routing-lambda-role --assume-role-policy-document file://lambda-trust-policy.json
aws iam attach-role-policy --role-name claims-routing-lambda-role --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
Note

All S3 buckets are created with default encryption (SSE-S3). For NAIC #668 compliance, enable S3 bucket versioning and lifecycle policies to retain audit documents for 7 years. The DynamoDB tables use on-demand billing to minimize costs during low-volume periods. Store all carrier portal credentials in Secrets Manager—never in code or environment variables.

Step 2: Configure Shared Claims Intake Mailbox

Set up or verify the Microsoft 365 shared mailbox that will serve as the single point of entry for all email-based claim notifications. Configure Microsoft Graph API access so the automation can poll for new emails. Create an Azure AD app registration for OAuth2 authentication.

1. Navigate to Azure Active Directory > App registrations > New registration
2. Name: 'Claims Routing Automation'
3. Supported account types: 'Accounts in this organizational directory only'
4. Redirect URI: Leave blank (daemon app)
5. After creation, note the Application (client) ID and Directory (tenant) ID
6. Add API Permissions: Microsoft Graph > Application permissions: Mail.Read (Read mail in all mailboxes), Mail.ReadWrite (Read and write mail in all mailboxes)
7. Grant admin consent
8. Create client secret: Certificates & secrets > New client secret > Description: 'Claims Routing' > Expires: 24 months
9. Store credentials in AWS Secrets Manager (see command below)
10. Restrict mailbox access using ApplicationAccessPolicy (security best practice)
11. Connect to Exchange Online PowerShell (see command below)
Store Microsoft Graph API credentials in AWS Secrets Manager
bash
aws secretsmanager create-secret --name agency-claims/m365-graph-api --secret-string '{"tenant_id":"REPLACE","client_id":"REPLACE","client_secret":"REPLACE","mailbox":"claims@agencyname.com"}'
Restrict the app registration to the claims mailbox via Exchange Online PowerShell. Note that -PolicyScopeGroupId must reference a mail-enabled security group, so first create one (e.g., claims-scope@agencyname.com) containing only the claims shared mailbox:
powershell
Connect-ExchangeOnline
New-ApplicationAccessPolicy -AppId <APP_CLIENT_ID> -PolicyScopeGroupId claims-scope@agencyname.com -AccessRight RestrictAccess -Description 'Restrict claims automation to claims mailbox only'
Note

The ApplicationAccessPolicy is critical for security—without it, the app registration has access to ALL mailboxes in the tenant. Always scope access to only the claims shared mailbox. The client secret expires in 24 months; add a calendar reminder to rotate it. For higher security, use a certificate instead of a client secret.
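As a minimal sketch of how the automation will later authenticate and poll this mailbox (standard Microsoft Graph v1.0 endpoints and the OAuth2 client-credentials grant; stdlib only, so no extra packages are assumed):

```python
import json
import urllib.parse
import urllib.request

GRAPH = 'https://graph.microsoft.com/v1.0'
LOGIN = 'https://login.microsoftonline.com'

def unread_messages_url(mailbox):
    """Build the Graph query for unread messages in the claims inbox."""
    query = urllib.parse.urlencode({'$filter': 'isRead eq false', '$top': '25'})
    return f'{GRAPH}/users/{mailbox}/mailFolders/inbox/messages?{query}'

def graph_token(tenant_id, client_id, client_secret):
    """Acquire an app-only access token via the client-credentials grant."""
    body = urllib.parse.urlencode({
        'grant_type': 'client_credentials',
        'client_id': client_id,
        'client_secret': client_secret,
        'scope': 'https://graph.microsoft.com/.default',
    }).encode()
    req = urllib.request.Request(f'{LOGIN}/{tenant_id}/oauth2/v2.0/token', data=body)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)['access_token']

def poll_mailbox(mailbox, token):
    """Return unread messages from the shared claims mailbox."""
    req = urllib.request.Request(
        unread_messages_url(mailbox),
        headers={'Authorization': f'Bearer {token}'})
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp).get('value', [])
```

The tenant ID, client ID, and client secret are the values stored in the agency-claims/m365-graph-api secret; production code should pull them from Secrets Manager rather than passing them directly.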

Step 3: Deploy n8n Workflow Engine (Self-Hosted)

Deploy n8n as the primary workflow orchestration engine on a small Linux VPS. n8n provides visual workflow building, error handling, retry logic, and 200+ integrations out of the box. Self-hosting gives the MSP full control and eliminates per-execution costs.

1. Provision a VPS: Ubuntu 22.04 LTS, 2 vCPU, 4GB RAM, 40GB SSD (recommended: DigitalOcean Droplet, $24/mo, or Hetzner Cloud CX21, €5.39/mo)
2. SSH into the server
3. Update system
4. Install Docker and Docker Compose
5. Create n8n directory structure
6. Create docker-compose.yml
7. Create Caddyfile for automatic HTTPS
8. Start the stack
9. Verify all containers are running
SSH into the server
bash
ssh root@your-vps-ip
Update system
bash
sudo apt update && sudo apt upgrade -y
Install Docker and Docker Compose
bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo apt install docker-compose-plugin -y
Create n8n directory structure
bash
mkdir -p /opt/n8n/data
cd /opt/n8n
Create docker-compose.yml
bash
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - '5678:5678'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_ENCRYPTION_KEY=GENERATE_A_32_CHAR_RANDOM_STRING
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=CHANGE_THIS_DB_PASSWORD
      - GENERIC_TIMEZONE=America/New_York
    volumes:
      - /opt/n8n/data:/home/node/.n8n
    depends_on:
      - postgres
  postgres:
    image: postgres:15
    restart: always
    environment:
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=CHANGE_THIS_DB_PASSWORD
      - POSTGRES_DB=n8n
    volumes:
      - /opt/n8n/postgres-data:/var/lib/postgresql/data
  caddy:
    image: caddy:2
    restart: always
    ports:
      - '80:80'
      - '443:443'
    volumes:
      - /opt/n8n/Caddyfile:/etc/caddy/Caddyfile
      - /opt/n8n/caddy-data:/data
EOF
Create Caddyfile for automatic HTTPS
bash
cat > Caddyfile << 'EOF'
n8n.yourdomain.com {
  reverse_proxy n8n:5678
}
EOF
Start the stack and verify all containers are running
bash
docker compose up -d
docker compose ps
Note

Replace all placeholder passwords with strong random strings (use 'openssl rand -hex 16'). The Caddy reverse proxy automatically obtains and renews Let's Encrypt SSL certificates. Ensure DNS A record for n8n.yourdomain.com points to the VPS IP before starting Caddy. For production, configure automated backups of /opt/n8n/postgres-data using a cron job to S3. The N8N_ENCRYPTION_KEY is used to encrypt credentials stored in n8n—back it up securely; if lost, all stored credentials become unrecoverable.

Step 4: Build AMS Integration Layer

Create the API integration with the agency's AMS (Applied Epic, AMS360, or HawkSoft) to enable policy lookups by policy number. This is the core data source for carrier identification—given a policy number from an incoming claim, the system queries the AMS to determine which carrier underwrites that policy.

1. Create a new directory for the Lambda function
2. Create requirements.txt
3. Create the AMS integration module (see custom_ai_components for full code); the module provides a unified interface regardless of which AMS the agency uses
4. Package the Lambda function
5. Deploy to AWS Lambda
Create project directory
bash
mkdir -p ~/claims-routing/ams-integration
cd ~/claims-routing/ams-integration
Create requirements.txt
bash
cat > requirements.txt << 'EOF'
requests==2.31.0
boto3==1.34.0
python-dateutil==2.8.2
EOF
Package the Lambda function
bash
pip install -r requirements.txt -t ./package
cd package && zip -r ../ams-integration.zip . && cd ..
zip ams-integration.zip ams_lookup.py
Deploy to AWS Lambda
bash
aws lambda create-function \
  --function-name claims-ams-lookup \
  --runtime python3.11 \
  --handler ams_lookup.lambda_handler \
  --role arn:aws:iam::ACCOUNT_ID:role/claims-routing-lambda-role \
  --zip-file fileb://ams-integration.zip \
  --timeout 30 \
  --memory-size 256 \
  --environment Variables='{AMS_TYPE=applied_epic,SECRETS_NAME=agency-claims/ams-api-key}'
Note

The AMS_TYPE environment variable controls which AMS adapter is used (applied_epic, ams360, or hawksoft). Each AMS has different API authentication flows: Applied Epic uses OAuth2 client credentials, AMS360 uses OData with basic auth or OAuth, and HawkSoft uses API key auth. Test the Lambda function with a known policy number before proceeding. Applied Epic API rate limits vary by tenant—implement exponential backoff retry logic.
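The full ams_lookup.py module lives in custom_ai_components, but its adapter-dispatch shape can be sketched as follows (the adapter function names and bodies here are illustrative stubs, not the shipped implementation):

```python
import json
import os

def lookup_applied_epic(policy_number, creds):
    # OAuth2 client-credentials call to the Applied Epic API goes here
    raise NotImplementedError

def lookup_ams360(policy_number, creds):
    # OData query against AMS360 goes here
    raise NotImplementedError

def lookup_hawksoft(policy_number, creds):
    # API-key-authenticated HawkSoft call goes here
    raise NotImplementedError

ADAPTERS = {
    'applied_epic': lookup_applied_epic,
    'ams360': lookup_ams360,
    'hawksoft': lookup_hawksoft,
}

def select_adapter(ams_type):
    """Pick the adapter for the configured AMS, failing loudly on typos."""
    try:
        return ADAPTERS[ams_type]
    except KeyError:
        raise ValueError(f'Unsupported AMS_TYPE: {ams_type!r}')

def lambda_handler(event, context):
    adapter = select_adapter(os.environ['AMS_TYPE'])
    # Production code fetches API credentials from Secrets Manager
    # (SECRETS_NAME env var) via boto3 before calling the adapter
    policy = adapter(event['policyNumber'], creds=None)
    return {'statusCode': 200, 'body': json.dumps(policy)}
```

Dispatching through a dict keyed by AMS_TYPE keeps the rest of the pipeline identical across Applied Epic, AMS360, and HawkSoft deployments; only the adapter body changes per AMS.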

Step 5: Build Email Parser and Claim Extractor

Create the Lambda function that monitors the shared claims mailbox via Microsoft Graph API, extracts claim-relevant data from incoming emails (policy number, claimant name, date of loss, loss description, line of business), and structures it into a standardized claim intake record. This uses deterministic regex pattern matching and keyword extraction—no ML models required.

1. Create the email parser directory and navigate into it
2. Create the requirements.txt file
3. Create the email parser module (see custom_ai_components for full code)
4. Package and deploy the Lambda function
5. Create EventBridge rule to trigger every 2 minutes
6. Add Lambda permission for EventBridge
7. Register the Lambda function as the EventBridge rule target
Create and enter the email parser directory
bash
mkdir -p ~/claims-routing/email-parser
cd ~/claims-routing/email-parser
Write requirements.txt
bash
cat > requirements.txt << 'EOF'
requests==2.31.0
boto3==1.34.0
msal==1.28.0
beautifulsoup4==4.12.3
python-dateutil==2.8.2
EOF
Install dependencies and package the Lambda zip
bash
pip install -r requirements.txt -t ./package
cd package && zip -r ../email-parser.zip . && cd ..
zip email-parser.zip email_parser.py
Deploy the email parser Lambda function
bash
aws lambda create-function \
  --function-name claims-email-parser \
  --runtime python3.11 \
  --handler email_parser.lambda_handler \
  --role arn:aws:iam::ACCOUNT_ID:role/claims-routing-lambda-role \
  --zip-file fileb://email-parser.zip \
  --timeout 60 \
  --memory-size 512 \
  --environment Variables='{SECRETS_NAME=agency-claims/m365-graph-api}'
Create EventBridge rule to trigger every 2 minutes
bash
aws events put-rule \
  --name claims-email-check \
  --schedule-expression 'rate(2 minutes)' \
  --state ENABLED
Grant EventBridge permission to invoke the Lambda function
bash
aws lambda add-permission \
  --function-name claims-email-parser \
  --statement-id eventbridge-trigger \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:ACCOUNT_ID:rule/claims-email-check
Register the Lambda function as the EventBridge rule target
bash
aws events put-targets \
  --rule claims-email-check \
  --targets Id=1,Arn=arn:aws:lambda:us-east-1:ACCOUNT_ID:function:claims-email-parser
Note

The 2-minute polling interval balances responsiveness with API call costs. Microsoft Graph API has a throttling limit of 10,000 requests per 10 minutes per app per tenant—the 2-minute polling is well within limits. The parser uses regex patterns to extract policy numbers (typically alphanumeric, 6–15 characters) and dates. After successfully processing an email, the function moves it to a 'Processed' subfolder to avoid re-processing. Emails that fail parsing are moved to a 'Needs Review' folder for manual handling.
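The deterministic extraction described above can be sketched like this (the exact patterns are assumptions to be tuned per agency; the policy-number regex below is illustrative, not a universal carrier format):

```python
import re
from datetime import datetime

# Illustrative patterns: a prefixed policy format (e.g., TRV-2024-001234)
# or a plain 6-15 character alphanumeric token; US-style dates and phones.
POLICY_RE = re.compile(r'\b([A-Z]{2,4}-\d{4}-\d{4,8}|[A-Z0-9]{6,15})\b')
DATE_RE = re.compile(r'\b(\d{1,2}/\d{1,2}/\d{4})\b')
PHONE_RE = re.compile(r'\b(\d{3}[-.]\d{3}[-.]\d{4})\b')

def extract_claim(subject, body):
    """Pull structured FNOL fields out of a claim notification email."""
    text = f'{subject}\n{body}'
    policy = POLICY_RE.search(text)
    date = DATE_RE.search(text)
    phone = PHONE_RE.search(text)
    return {
        'policyNumber': policy.group(1) if policy else None,
        'dateOfLoss': (datetime.strptime(date.group(1), '%m/%d/%Y')
                       .date().isoformat() if date else None),
        'claimantPhone': phone.group(1) if phone else None,
        # Claims missing a policy number or loss date go to 'Needs Review'
        'needsReview': policy is None or date is None,
    }
```

Running this against the sample claim email from Step 8 yields policyNumber 'TRV-2024-001234', dateOfLoss '2025-01-15', and claimantPhone '555-123-4567', with needsReview False.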

Step 6: Build Carrier Routing Rules Engine

Populate the CarrierRoutingRules DynamoDB table with the carrier-specific routing configuration for each carrier the agency works with. Each rule specifies: carrier identification criteria, FNOL submission method (API vs. browser automation vs. IVANS), carrier portal URL, required FNOL fields, and ACORD form mappings.

1. Create and run the routing rules loader script
Create and run the carrier routing rules loader script
bash
cat > load_routing_rules.py << 'PYEOF'
import boto3
import json

dynamodb = boto3.resource('dynamodb', region_name='us-east-1')
table = dynamodb.Table('CarrierRoutingRules')

# Example carrier routing rules - customize per agency
carriers = [
    {
        'carrierId': 'travelers-01',
        'carrierName': 'Travelers',
        'naicCode': '25658',
        'submissionMethod': 'api',
        'apiEndpoint': 'https://api.travelers.com/claims/v1/fnol',
        'authType': 'oauth2',
        'secretsKey': 'agency-claims/carrier-portals',
        'secretsSubKey': 'travelers',
        'requiredFields': ['policyNumber', 'dateOfLoss', 'lossDescription', 'claimantName', 'claimantPhone', 'lossLocation', 'lineOfBusiness'],
        'lobMapping': {'auto': 'AUTO', 'home': 'PROP', 'commercial': 'COMM', 'workers_comp': 'WC'},
        'portalUrl': 'https://www.myTravelers.com',
        'fnolFormId': 'ACORD-1',
        'isActive': True
    },
    {
        'carrierId': 'hartford-01',
        'carrierName': 'The Hartford',
        'naicCode': '29459',
        'submissionMethod': 'browser_automation',
        'portalUrl': 'https://agent.thehartford.com/claims/report',
        'authType': 'form_login',
        'secretsKey': 'agency-claims/carrier-portals',
        'secretsSubKey': 'hartford',
        'requiredFields': ['policyNumber', 'dateOfLoss', 'lossDescription', 'claimantName', 'lineOfBusiness'],
        'lobMapping': {'auto': 'Automobile', 'home': 'Property', 'commercial': 'Commercial Lines'},
        'playwrightScript': 'hartford_fnol.py',
        'fnolFormId': 'ACORD-1',
        'isActive': True
    },
    {
        'carrierId': 'erie-01',
        'carrierName': 'Erie Insurance',
        'naicCode': '26263',
        'submissionMethod': 'browser_automation',
        'portalUrl': 'https://agents.erieinsurance.com',
        'authType': 'form_login',
        'secretsKey': 'agency-claims/carrier-portals',
        'secretsSubKey': 'erie',
        'requiredFields': ['policyNumber', 'dateOfLoss', 'lossDescription', 'claimantName'],
        'lobMapping': {'auto': 'Auto', 'home': 'Homeowners', 'commercial': 'Business'},
        'playwrightScript': 'erie_fnol.py',
        'fnolFormId': 'ACORD-1',
        'isActive': True
    }
]

for carrier in carriers:
    table.put_item(Item=carrier)
    print(f"Loaded routing rule for {carrier['carrierName']}")

print(f"\nLoaded {len(carriers)} carrier routing rules.")
PYEOF
python3 load_routing_rules.py
Note

This is the most agency-specific step. During the discovery phase, work with the agency principal and CSR staff to document every carrier and their FNOL submission process. Prioritize the top 5 carriers by premium volume—these will handle 80% of claims. The submissionMethod field is critical: 'api' for carriers with REST APIs (fastest, most reliable), 'browser_automation' for carriers requiring portal login (fragile, requires maintenance), 'email' for carriers that accept FNOL via email, and 'phone_only' for carriers requiring manual phone reporting (automation generates a pre-filled call script for staff). Add new carriers incrementally after go-live.
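The decision logic that consumes these rules can be sketched as a small pure function (a sketch under the rule schema above; field names match the DynamoDB items, while the action names are assumptions for the downstream workflow):

```python
def route_claim(claim, rule):
    """Choose a submission path for a claim given its carrier's routing rule.

    Anything that cannot be submitted automatically falls back to the
    manual-review queue with a reason attached.
    """
    missing = [f for f in rule['requiredFields'] if not claim.get(f)]
    if missing:
        return {'action': 'manual_review', 'reason': f'missing fields: {missing}'}
    if not rule.get('isActive', False):
        return {'action': 'manual_review', 'reason': 'carrier rule inactive'}
    method = rule['submissionMethod']
    if method == 'api':
        return {'action': 'submit_api', 'endpoint': rule['apiEndpoint']}
    if method == 'browser_automation':
        return {'action': 'submit_portal', 'script': rule['playwrightScript']}
    if method == 'email':
        return {'action': 'submit_email'}
    # 'phone_only' and anything unrecognized goes to staff
    return {'action': 'manual_review', 'reason': f'manual method: {method}'}
```

Keeping this function pure (no AWS calls) makes the routing table trivially unit-testable before any carrier credentials exist.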

Step 7: Build Carrier Portal Browser Automation Scripts

Create Playwright-based browser automation scripts for each carrier that requires portal-based FNOL submission. Each script logs into the carrier portal, navigates to the claims/FNOL section, populates the form fields, and submits. These run as containerized Lambda functions with Playwright bundled.

1. Navigate to the carrier-automation directory
2. Install Playwright and Chromium
3. Create base automation class (see custom_ai_components for full code)
4. Build Docker image for Lambda with Playwright
5. Build and push to ECR
6. Create Lambda function from container image
Create and enter the carrier-automation directory
bash
mkdir -p ~/claims-routing/carrier-automation
cd ~/claims-routing/carrier-automation
Install Playwright and Chromium
bash
pip install playwright
playwright install chromium
Create Dockerfile for Lambda container image with Playwright
bash
cat > Dockerfile << 'EOF'
FROM mcr.microsoft.com/playwright/python:v1.41.0-jammy

WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY *.py .

CMD ["python", "-m", "awslambdaric", "carrier_automation.lambda_handler"]
EOF
Create requirements.txt with pinned dependencies
bash
cat > requirements.txt << 'EOF'
playwright==1.41.0
boto3==1.34.0
awslambdaric==2.0.10
requests==2.31.0
EOF
Build Docker image and push to ECR
bash
aws ecr create-repository --repository-name claims-carrier-automation
docker build -t claims-carrier-automation .
docker tag claims-carrier-automation:latest ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/claims-carrier-automation:latest
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com
docker push ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/claims-carrier-automation:latest
Create Lambda function from container image
bash
aws lambda create-function \
  --function-name claims-carrier-automation \
  --package-type Image \
  --code ImageUri=ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/claims-carrier-automation:latest \
  --role arn:aws:iam::ACCOUNT_ID:role/claims-routing-lambda-role \
  --timeout 120 \
  --memory-size 1024
Note

Browser automation is the most fragile component of the system. Carrier portals change their UI without notice, breaking scripts. Implement comprehensive error handling with screenshots on failure (saved to S3) so the MSP can quickly identify what changed. Set the Lambda timeout to 120 seconds—portal interactions are slow. Test each carrier script thoroughly in a staging environment. Some carrier portals have bot detection (CAPTCHA, behavioral analysis)—if encountered, that carrier must be flagged for manual FNOL submission. Keep Playwright version pinned and test before updating.
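A per-carrier script (the hartford_fnol.py referenced in the routing rules) might look like the following sketch, assuming the Playwright sync API; the CSS selectors are placeholders that must be captured from the live portal and will need ongoing maintenance:

```python
def map_fields(claim, rule):
    """Translate the normalized claim record into portal form values."""
    lob = rule['lobMapping'].get(claim['lineOfBusiness'])
    if lob is None:
        raise ValueError(f"No LOB mapping for {claim['lineOfBusiness']!r}")
    # Selector-to-value mapping; selectors below are illustrative only
    return {
        '#policyNumber': claim['policyNumber'],
        '#lossDate': claim['dateOfLoss'],
        '#lossDescription': claim['lossDescription'],
        '#lineOfBusiness': lob,
    }

def submit_fnol(claim, rule, creds, screenshot_path='/tmp/fnol-failure.png'):
    """Log in, fill the FNOL form, submit, and return the confirmation number."""
    from playwright.sync_api import sync_playwright  # lazy: heavy dependency
    fields = map_fields(claim, rule)
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        try:
            page.goto(rule['portalUrl'])
            page.fill('#username', creds['username'])
            page.fill('#password', creds['password'])
            page.click('#login')
            for selector, value in fields.items():
                page.fill(selector, value)
            page.click('#submitFnol')
            page.wait_for_selector('.confirmation-number')
            return page.text_content('.confirmation-number')
        except Exception:
            # Screenshot on failure so the MSP can see what the portal changed;
            # the caller uploads this to S3 per the note above
            page.screenshot(path=screenshot_path)
            raise
        finally:
            browser.close()
```

Separating map_fields from the browser interaction means the LOB mapping and required-field translation can be tested without launching Chromium.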

Step 8: Build the Master Orchestration Workflow in n8n

Create the master n8n workflow that ties all components together: email monitoring trigger → claim parsing → AMS policy lookup → carrier identification → FNOL routing → audit logging → notification. This is the central nervous system of the automation.

1. Access n8n at https://n8n.yourdomain.com and log in with the admin credentials set in docker-compose.yml
2. Create a new workflow named 'Claims Routing - Master Orchestration'
3. Import the workflow JSON directly into n8n (see custom_ai_components for the complete workflow specification)
4. After importing, configure the following credentials in n8n: AWS credentials (IAM user with Lambda invoke + DynamoDB + S3 + Secrets Manager access), Microsoft Graph API credentials (from Step 2), and SMTP credentials for notification emails
5. Activate the workflow by toggling the 'Active' switch in the n8n UI

Test the workflow with a sample claim email:

Sample claim email for workflow testing
text
Subject: New Claim - Policy TRV-2024-001234
Body: 'Insured John Smith reports a water damage loss at 123 Main St, Anytown, ST 12345 on 01/15/2025. Policy number TRV-2024-001234. Contact: 555-123-4567.'
Note

The n8n workflow should be designed with robust error handling at every step. Use the 'Error Trigger' node to catch failures and route them to a notification workflow that alerts the MSP and agency staff via email and optionally Microsoft Teams/Slack. Implement a 'human-in-the-loop' path for claims that cannot be automatically routed (missing policy number, unknown carrier, ambiguous LOB). These get routed to a 'manual review' queue with all extracted data pre-filled so staff only need to verify and click submit.
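If the workflow also exposes a test webhook alongside the email trigger (a common n8n pattern, but an assumption here, as is the webhook path below), the sample email can be fired at it programmatically:

```python
import json
import urllib.request

# Hypothetical test-webhook URL: copy the real one from the n8n trigger node
N8N_WEBHOOK = 'https://n8n.yourdomain.com/webhook/claims-intake'

def sample_claim_payload():
    """The sample claim email from this step, shaped as a webhook body."""
    return {
        'subject': 'New Claim - Policy TRV-2024-001234',
        'body': ('Insured John Smith reports a water damage loss at '
                 '123 Main St, Anytown, ST 12345 on 01/15/2025. '
                 'Policy number TRV-2024-001234. Contact: 555-123-4567.'),
    }

def send_test_claim(url=N8N_WEBHOOK):
    """POST the sample claim and return the workflow's JSON response."""
    req = urllib.request.Request(
        url,
        data=json.dumps(sample_claim_payload()).encode(),
        headers={'Content-Type': 'application/json'},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```

After sending, verify the claim appears in the ClaimsAuditTrail table and that the Travelers API path was selected.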

Step 9: Configure Audit Trail and Compliance Logging

Set up comprehensive audit logging that records every routing decision, FNOL submission attempt, and data access event. This is mandatory under NAIC Model Law #668 and state insurance regulations. Every claim must have an immutable trail showing: when it was received, what data was extracted, which carrier it was routed to, whether FNOL submission succeeded, and who (or what system) performed each action.

1. Create the audit logging Lambda function
Create audit_logger.py
bash
mkdir -p ~/claims-routing/audit-logger
cd ~/claims-routing/audit-logger
cat > audit_logger.py << 'PYEOF'
import boto3
import json
from datetime import datetime, timezone
import hashlib
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('ClaimsAuditTrail')

def log_event(claim_id, event_type, details, actor='system'):
    timestamp = datetime.now(timezone.utc).isoformat()

    # Chain each event to the previous one so that modifying or removing
    # any record invalidates every later hash for that claim
    prev = table.query(
        KeyConditionExpression=Key('claimId').eq(claim_id),
        ScanIndexForward=False,
        Limit=1
    )
    prev_hash = prev['Items'][0]['eventHash'] if prev['Items'] else 'GENESIS'

    event_data = json.dumps({
        'claimId': claim_id,
        'timestamp': timestamp,
        'eventType': event_type,
        'details': details,
        'actor': actor,
        'prevHash': prev_hash
    }, sort_keys=True)
    event_hash = hashlib.sha256(event_data.encode()).hexdigest()

    table.put_item(Item={
        'claimId': claim_id,
        'timestamp': timestamp,
        'eventType': event_type,
        'details': details,
        'actor': actor,
        'prevHash': prev_hash,
        'eventHash': event_hash
    })

    return {'claimId': claim_id, 'eventHash': event_hash}

def lambda_handler(event, context):
    return log_event(
        claim_id=event['claimId'],
        event_type=event['eventType'],
        details=event.get('details', {}),
        actor=event.get('actor', 'system')
    )
PYEOF
2. Deploy the Lambda package
Package and deploy Lambda function
bash
pip install boto3 -t ./package
cd package && zip -r ../audit-logger.zip . && cd ..
zip audit-logger.zip audit_logger.py
aws lambda create-function \
  --function-name claims-audit-logger \
  --runtime python3.11 \
  --handler audit_logger.lambda_handler \
  --role arn:aws:iam::ACCOUNT_ID:role/claims-routing-lambda-role \
  --zip-file fileb://audit-logger.zip \
  --timeout 10 \
  --memory-size 128
3. Enable DynamoDB Point-in-Time Recovery for compliance
Enable DynamoDB PITR on ClaimsAuditTrail table
bash
aws dynamodb update-continuous-backups \
  --table-name ClaimsAuditTrail \
  --point-in-time-recovery-specification PointInTimeRecoveryEnabled=true
4. Enable S3 versioning on document bucket
Enable versioning on claims documents bucket
bash
aws s3api put-bucket-versioning \
  --bucket agency-claims-documents-${ACCOUNT_ID} \
  --versioning-configuration Status=Enabled
5. Set S3 lifecycle rule for 7-year retention
Apply 7-year lifecycle retention policy to audit bucket
bash
aws s3api put-bucket-lifecycle-configuration \
  --bucket agency-claims-audit-${ACCOUNT_ID} \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "7YearRetention",
      "Status": "Enabled",
      "Transitions": [{
        "Days": 90,
        "StorageClass": "GLACIER"
      }],
      "Expiration": {
        "Days": 2555
      }
    }]
  }'
Note

The audit trail is non-negotiable for insurance compliance. NAIC Model Law #668 requires documented audit trails for all automated processes handling policyholder data. The SHA-256 hash chain provides tamper evidence—if any record is modified, the hash won't match. DynamoDB Point-in-Time Recovery allows restoring the table to any second in the last 35 days. The 7-year S3 lifecycle rule covers most state retention requirements. Review specific state requirements as some mandate longer retention. Generate monthly audit reports for the agency's compliance files.
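Chain integrity should be checked when generating the monthly audit reports. A minimal offline verifier, assuming each record carries a prevHash field linking it to the prior event (with 'GENESIS' on the first event) plus its own eventHash:

```python
import hashlib
import json

def verify_chain(events):
    """Verify a claim's audit events form an unbroken, untampered hash chain.

    `events` must be sorted by timestamp ascending. Returns False if any
    record was removed, reordered, or modified after logging.
    """
    expected_prev = 'GENESIS'
    for ev in events:
        if ev['prevHash'] != expected_prev:
            return False  # chain broken: a record is missing or out of order
        payload = json.dumps({
            'claimId': ev['claimId'],
            'timestamp': ev['timestamp'],
            'eventType': ev['eventType'],
            'details': ev['details'],
            'actor': ev['actor'],
            'prevHash': ev['prevHash'],
        }, sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != ev['eventHash']:
            return False  # record contents were modified after logging
        expected_prev = ev['eventHash']
    return True
```

Run this over a DynamoDB query for each claimId (the same key/sort schema as ClaimsAuditTrail) and flag any False result in the compliance report.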

Step 10: Deploy Monitoring Dashboard and Alerting

Set up CloudWatch dashboards and SNS alerting to monitor the health of the claims routing system in real-time. The MSP needs visibility into claim volumes, routing success rates, carrier portal failures, and processing latencies.

1. Create SNS topic for alerts
2. Create CloudWatch alarms
3. Create CloudWatch dashboard
Create SNS topic and subscribe alert recipients
bash
aws sns create-topic --name claims-routing-alerts
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:ACCOUNT_ID:claims-routing-alerts \
  --protocol email \
  --notification-endpoint msp-alerts@yourmsp.com
aws sns subscribe \
  --topic-arn arn:aws:sns:us-east-1:ACCOUNT_ID:claims-routing-alerts \
  --protocol email \
  --notification-endpoint claims-manager@agencyname.com
Create CloudWatch alarms for Lambda error thresholds
bash
aws cloudwatch put-metric-alarm \
  --alarm-name claims-parser-errors \
  --alarm-description 'Claims email parser Lambda errors exceeded threshold' \
  --metric-name Errors \
  --namespace AWS/Lambda \
  --dimensions Name=FunctionName,Value=claims-email-parser \
  --statistic Sum \
  --period 300 \
  --threshold 3 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:ACCOUNT_ID:claims-routing-alerts
aws cloudwatch put-metric-alarm \
  --alarm-name claims-carrier-automation-errors \
  --alarm-description 'Carrier portal automation failures exceeded threshold' \
  --metric-name Errors \
  --namespace AWS/Lambda \
  --dimensions Name=FunctionName,Value=claims-carrier-automation \
  --statistic Sum \
  --period 300 \
  --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:ACCOUNT_ID:claims-routing-alerts
Create CloudWatch dashboard with claims volume, error, and latency widgets
bash
aws cloudwatch put-dashboard --dashboard-name ClaimsRoutingDashboard --dashboard-body '{
  "widgets": [
    {
      "type": "metric",
      "properties": {
        "title": "Claims Processed (24h)",
        "metrics": [["AWS/Lambda","Invocations","FunctionName","claims-email-parser"]],
        "period": 86400,
        "stat": "Sum"
      }
    },
    {
      "type": "metric",
      "properties": {
        "title": "Routing Errors",
        "metrics": [["AWS/Lambda","Errors","FunctionName","claims-carrier-automation"]],
        "period": 3600,
        "stat": "Sum"
      }
    },
    {
      "type": "metric",
      "properties": {
        "title": "Processing Duration (ms)",
        "metrics": [["AWS/Lambda","Duration","FunctionName","claims-email-parser"],["AWS/Lambda","Duration","FunctionName","claims-carrier-automation"]],
        "period": 300,
        "stat": "Average"
      }
    }
  ]
}'
Note

Set up two notification tiers: (1) MSP-only alerts for infrastructure issues (Lambda errors, high latency, memory limits), and (2) Agency + MSP alerts for business-level issues (claim routed to manual review, FNOL submission failed after retries). The dashboard URL can be shared with the agency principal via a CloudWatch dashboard link. For MSPs managing multiple agencies, create a master dashboard aggregating all clients.
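Lambda's built-in Errors metric only covers the infrastructure tier; business-level events such as a claim landing in manual review need a custom metric that the Agency + MSP alarm tier can watch. A minimal sketch, where the `ClaimsRouting` namespace and metric names are assumptions rather than part of the build above:

```python
from datetime import datetime, timezone

NAMESPACE = 'ClaimsRouting'  # assumed custom CloudWatch namespace

def build_metric_datum(metric_name: str, value: float, agency: str) -> dict:
    """Build a CloudWatch metric datum tagged with the agency name, so an MSP
    managing multiple clients can alarm per agency in one account."""
    return {
        'MetricName': metric_name,
        'Dimensions': [{'Name': 'Agency', 'Value': agency}],
        'Timestamp': datetime.now(timezone.utc),
        'Value': value,
        'Unit': 'Count',
    }

def emit_business_metric(metric_name: str, agency: str, value: float = 1.0):
    """Publish the datum; boto3 is imported lazily so the builder stays
    testable without AWS credentials."""
    import boto3
    boto3.client('cloudwatch').put_metric_data(
        Namespace=NAMESPACE,
        MetricData=[build_metric_datum(metric_name, value, agency)],
    )
```

The parser Lambda would call `emit_business_metric('RoutedToManualReview', 'agencyname')` alongside the audit-log invoke, and an alarm on that metric would publish to the shared SNS topic.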

Step 11: Configure GloveBox Client Portal (Optional but Recommended)

Deploy GloveBox as a policyholder self-service portal that feeds claims directly into the automated routing system. This eliminates email and phone-based intake for tech-savvy policyholders and provides a branded mobile app experience.

1. Sign up at https://www.gloveboxapp.com/
2. Complete agency onboarding: upload agency logo, configure branding
3. Connect GloveBox to the agency's AMS (Applied Epic, AMS360, or HawkSoft) — GloveBox has native integrations with all major AMS platforms
4. Enable the Claims module in GloveBox settings
5. Configure GloveBox webhook to POST to the n8n webhook endpoint: In GloveBox Admin > Integrations > Webhooks, set Event: 'claim.created', URL: https://n8n.yourdomain.com/webhook/glovebox-claim, Secret: [generate and store in AWS Secrets Manager]
6. Add a webhook receiver node in the n8n master workflow that accepts GloveBox claim payloads and feeds them into the same routing pipeline as email-sourced claims
Note

GloveBox at $299/month + $34/user/month is a quick win that demonstrates immediate value to the agency. The mobile app gives policyholders a branded experience to report claims 24/7. Claims submitted through GloveBox are already structured, so they bypass the email parsing step entirely—higher accuracy, faster processing. This is also a strong talking point for the agency's client retention strategy.
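Before the webhook receiver feeds GloveBox payloads into the routing pipeline, it should verify the shared secret configured in step 5. GloveBox's actual signature scheme is not documented here, so the HMAC-SHA256-over-raw-body approach below is an assumption; adjust it to whatever the vendor actually sends:

```python
import hashlib
import hmac

def verify_webhook_signature(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 over the raw request body and compare against the
    signature header in constant time; reject the payload on mismatch."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

In n8n this check can live in a Function node immediately after the webhook trigger, with the secret fetched from AWS Secrets Manager rather than hard-coded in the workflow.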

Step 12: Install and Configure Document Scanner

Install the document scanner for agencies that still receive paper-based FNOL reports or supporting claim documentation via walk-in or mail. Configure the scanner to save directly to a monitored S3 bucket or SharePoint folder that triggers the claims routing pipeline.

1. Unbox and connect the ScanSnap iX2500 via USB or Wi-Fi
2. Install ScanSnap Home software from https://www.pfu.ricoh.com/global/scanners/scansnap/dl/
3. Configure scan profile: Profile name: 'Claims Intake' | File format: PDF (Searchable) - this enables built-in OCR | Image quality: Automatic (300 dpi minimum) | Color mode: Color | Scanning side: Duplex | File naming: 'CLAIM_[Date]_[Time]_[Counter]'
4. Configure save destination: Option A: Direct to SharePoint/OneDrive folder synced to 'Claims Intake' document library | Option B: Save locally, then use PowerShell script to upload to S3 (see scripts below)
PowerShell S3 upload watcher script
powershell
# Upload-Claims-To-S3.ps1 - watches the ScanSnap output folder and uploads new
# PDFs to S3. Save as C:\Scripts\Upload-Claims-To-S3.ps1 and run as a Windows
# Scheduled Task every 5 minutes under a service account.
# Requires the AWS Tools for PowerShell (Install-Module AWS.Tools.S3).
Import-Module AWS.Tools.S3

$watchPath = 'C:\Users\Public\ScanSnap\Claims'
$s3Bucket = 'agency-claims-documents-ACCOUNT_ID'
$s3Prefix = 'scanned-intake/'

# Ensure the archive folder exists before moving uploaded files into it
New-Item -ItemType Directory -Path "$watchPath\Uploaded" -Force | Out-Null

Get-ChildItem -Path $watchPath -Filter '*.pdf' | ForEach-Object {
    Write-S3Object -BucketName $s3Bucket -Key "$s3Prefix$($_.Name)" -File $_.FullName
    Move-Item $_.FullName -Destination "$watchPath\Uploaded\$($_.Name)"
    Write-Host "Uploaded: $($_.Name)"
}
Create S3 event trigger to invoke the claims-email-parser Lambda on every new scanned document upload
bash
aws s3api put-bucket-notification-configuration \
  --bucket agency-claims-documents-${ACCOUNT_ID} \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:ACCOUNT_ID:function:claims-email-parser",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{
            "Name": "prefix",
            "Value": "scanned-intake/"
          }]
        }
      }
    }]
  }'
Note

The ScanSnap iX2500's built-in OCR creates searchable PDFs, which makes text extraction trivial for the claims parser. Note that S3 can only invoke the Lambda if the function's resource policy allows it: run aws lambda add-permission with --principal s3.amazonaws.com and the bucket ARN as --source-arn before applying the notification configuration, or the put-bucket-notification-configuration call will fail validation. Train the agency's front desk staff to use the 'Claims Intake' scan profile (one-button scan). If the agency has no paper intake at all, skip this step entirely—most modern agencies receive claims exclusively via email, phone, and digital portals. The PowerShell script should run as a Windows Scheduled Task every 5 minutes under a service account.

Step 13: End-to-End Testing and Parallel Run

Execute comprehensive end-to-end testing with real claim scenarios across all integrated carriers, then run in parallel with the existing manual process for 2 weeks to validate accuracy and build staff confidence.

1. Create test claim scenarios for each carrier and save as test_claims.json
2. Run test suite in dry-run mode first against carrier sandboxes
3. Run live test against carrier sandboxes
4. Verify audit trail entries for each test in DynamoDB
test_claims.json — Test claim scenarios for each carrier
json
[
  {
    "testId": "TC-001",
    "description": "Auto claim - Travelers - Email intake",
    "emailSubject": "Claim Report - Policy TRV-AUTO-2024-5678",
    "emailBody": "Insured Jane Doe reports a rear-end collision on 01/20/2025 at the intersection of Oak St and Main Ave, Springfield, IL. Policy number TRV-AUTO-2024-5678. Vehicle: 2022 Honda Accord. No injuries reported. Contact: 555-234-5678.",
    "expectedCarrier": "Travelers",
    "expectedLOB": "Auto",
    "expectedRoute": "api"
  },
  {
    "testId": "TC-002",
    "description": "Property claim - Hartford - Email intake",
    "emailSubject": "Water Damage Claim - HIG-HO-2024-9012",
    "emailBody": "Policyholder Robert Johnson reports burst pipe causing water damage to finished basement on 01/18/2025 at 456 Elm Street, Hartford, CT 06103. Policy HIG-HO-2024-9012. Estimated damage $15,000-$25,000. Contact: 555-345-6789.",
    "expectedCarrier": "The Hartford",
    "expectedLOB": "Property",
    "expectedRoute": "browser_automation"
  },
  {
    "testId": "TC-003",
    "description": "Missing policy number - Should route to manual review",
    "emailSubject": "Claim Report - Customer Smith",
    "emailBody": "Customer called about a fender bender yesterday. Please file a claim. Their number is 555-456-7890.",
    "expectedCarrier": "UNKNOWN",
    "expectedLOB": "UNKNOWN",
    "expectedRoute": "manual_review"
  },
  {
    "testId": "TC-004",
    "description": "Commercial claim - Erie - GloveBox intake",
    "note": "Submit test claim through GloveBox portal",
    "expectedCarrier": "Erie Insurance",
    "expectedLOB": "Commercial",
    "expectedRoute": "browser_automation"
  }
]
Run test suite and verify audit trail in DynamoDB
bash
# Dry run first
python3 run_tests.py --test-file test_claims.json --dry-run

# Live test against carrier sandboxes
python3 run_tests.py --test-file test_claims.json --live

# Verify audit trail entries for each test
aws dynamodb scan --table-name ClaimsAuditTrail --filter-expression 'begins_with(claimId, :prefix)' --expression-attribute-values '{":prefix":{"S":"TC-"}}'
Critical

Run all carrier portal tests against sandbox/test environments first. Most major carriers offer agent test environments—request access during discovery. For the 2-week parallel run, the automation processes every claim BUT the agency staff also processes it manually. Compare results daily: did the automation route to the correct carrier? Was the FNOL data accurate? Were there any false positives or missed claims? Document all discrepancies and adjust routing rules before going fully live. The parallel run is essential for building staff trust in the system.
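The run_tests.py harness referenced above is not shown in this guide; the sketch below assumes the pipeline can be wrapped in a callable that returns the chosen carrier, LOB, and route for a test case. The real harness would invoke the deployed Lambdas instead of a local function:

```python
def run_suite(cases: list, route_fn, dry_run: bool = True) -> dict:
    """Run each test case through route_fn and compare against the expected
    carrier/LOB/route fields from test_claims.json; return a summary."""
    results = []
    for case in cases:
        actual = route_fn(case, dry_run=dry_run)
        passed = all(
            actual.get(k) == case[e]
            for k, e in [('carrier', 'expectedCarrier'),
                         ('lob', 'expectedLOB'),
                         ('route', 'expectedRoute')]
        )
        results.append({'testId': case['testId'], 'passed': passed, 'actual': actual})
    return {
        'total': len(results),
        'passed': sum(1 for r in results if r['passed']),
        'failures': [r for r in results if not r['passed']],
    }
```

During the parallel run, the same summary structure works for the daily manual-vs-automation comparison: feed it the staff's recorded routing decisions as `route_fn` and diff the failure list against the automation's audit trail.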

Custom AI Components

Email Claim Parser

Type: skill

A deterministic parsing engine that extracts structured claim data from unstructured email text. Uses regex pattern matching, keyword extraction, and heuristic rules to identify policy numbers, dates of loss, claimant names, loss descriptions, lines of business, and contact information. No ML models are used—this is pure rules-based extraction for reliability and auditability.

Implementation:

email_parser.py - Claims Email Parser Lambda Function
python
# email_parser.py - Claims Email Parser Lambda Function
# Non-stdlib dependencies (msal, requests, beautifulsoup4, python-dateutil)
# must be packaged in the deployment zip or supplied via a Lambda layer.
import re
import json
import boto3
import msal
import requests
from datetime import datetime, timezone
from bs4 import BeautifulSoup
from dateutil import parser as dateparser

# Initialize AWS clients
secretsmanager = boto3.client('secretsmanager')
lambda_client = boto3.client('lambda')

# Policy number patterns for common carriers
POLICY_PATTERNS = [
    # Travelers: TRV-AUTO-YYYY-NNNN or alphanumeric 10-15 chars
    (r'\b(TRV[- ]?\w{2,6}[- ]?\d{4}[- ]?\d{4,6})\b', 'travelers'),
    # Hartford: HIG-XX-YYYY-NNNN
    (r'\b(HIG[- ]?\w{2}[- ]?\d{4}[- ]?\d{4,6})\b', 'hartford'),
    # Erie: Q\d{2} \d{7}
    (r'\b(Q\d{2}\s?\d{7})\b', 'erie'),
    # Generic alphanumeric policy numbers (6-15 chars)
    (r'(?:policy\s*(?:number|no\.?|#)?\s*[:;]?\s*)(\w{6,15})', None),
    (r'(?:pol\s*(?:no\.?|#)?\s*[:;]?\s*)(\w{6,15})', None),
]

# Date of loss patterns
DATE_PATTERNS = [
    r'(?:date\s*of\s*loss|DOL|loss\s*date|incident\s*date|occurred\s*on|happened\s*on)\s*[:;]?\s*(\d{1,2}[/-]\d{1,2}[/-]\d{2,4})',
    r'(?:on|dated?)\s+(\d{1,2}[/-]\d{1,2}[/-]\d{2,4})',
    r'(\d{1,2}[/-]\d{1,2}[/-]\d{4})',
    r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s+\d{1,2},?\s+\d{4}',
]

# Line of business keywords
LOB_KEYWORDS = {
    'auto': ['auto', 'vehicle', 'car', 'truck', 'collision', 'accident', 'fender', 'rear-end', 'hit and run', 'windshield', 'driving', 'motor vehicle', 'MVA'],
    'home': ['home', 'house', 'property', 'water damage', 'fire', 'roof', 'pipe', 'burst', 'flood', 'wind', 'hail', 'theft', 'burglary', 'homeowner', 'dwelling', 'basement'],
    'commercial': ['commercial', 'business', 'liability', 'general liability', 'GL', 'BOP', 'commercial property', 'slip and fall', 'premises'],
    'workers_comp': ['workers comp', 'work comp', 'WC', 'workplace injury', 'on the job', 'employee injury', 'occupational'],
}

# Phone number pattern
PHONE_PATTERN = r'(?:phone|tel|call|contact|cell|mobile)?\s*[:;]?\s*((?:\+?1[- ]?)?(?:\(?\d{3}\)?[- ]?)?\d{3}[- ]?\d{4})'

# Name extraction patterns
NAME_PATTERNS = [
    r'(?:insured|claimant|policyholder|customer|client)\s*[:;]?\s*([A-Z][a-z]+\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)',
    r'(?:name)\s*[:;]?\s*([A-Z][a-z]+\s+[A-Z][a-z]+(?:\s+[A-Z][a-z]+)?)',
]

# Address pattern
ADDRESS_PATTERN = r'(\d+\s+[A-Z][a-z]+(?:\s+[A-Z]?[a-z]+)*\s*(?:St(?:reet)?|Ave(?:nue)?|Rd|Road|Blvd|Dr(?:ive)?|Ln|Lane|Ct|Court|Way|Pl(?:ace)?|Cir(?:cle)?)(?:\.?,?\s*(?:Apt|Suite|Unit|#)\s*\w+)?(?:\.?,?\s*[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)?(?:\.?,?\s*[A-Z]{2})?(?:\s+\d{5}(?:-\d{4})?)?)'

def get_graph_token(secrets):
    """Get Microsoft Graph API access token using MSAL."""
    app = msal.ConfidentialClientApplication(
        secrets['client_id'],
        authority=f"https://login.microsoftonline.com/{secrets['tenant_id']}",
        client_credential=secrets['client_secret']
    )
    result = app.acquire_token_for_client(scopes=['https://graph.microsoft.com/.default'])
    if 'access_token' in result:
        return result['access_token']
    raise Exception(f"Failed to get Graph token: {result.get('error_description', 'Unknown error')}")

def get_unread_emails(token, mailbox):
    """Fetch unread emails from the claims shared mailbox."""
    headers = {'Authorization': f'Bearer {token}'}
    url = f'https://graph.microsoft.com/v1.0/users/{mailbox}/mailFolders/inbox/messages'
    params = {
        '$filter': 'isRead eq false',
        '$select': 'id,subject,body,from,receivedDateTime,hasAttachments',
        '$top': 50,
        '$orderby': 'receivedDateTime asc'
    }
    response = requests.get(url, headers=headers, params=params)
    response.raise_for_status()
    return response.json().get('value', [])

def mark_email_processed(token, mailbox, message_id, folder_name='Processed'):
    """Move a processed email to the named folder, creating it if needed."""
    headers = {'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}
    # First, ensure the destination folder exists
    folders_url = f'https://graph.microsoft.com/v1.0/users/{mailbox}/mailFolders'
    folders_resp = requests.get(folders_url, headers=headers)
    folders_resp.raise_for_status()
    folder_id = None
    for folder in folders_resp.json().get('value', []):
        if folder['displayName'] == folder_name:
            folder_id = folder['id']
            break
    if not folder_id:
        create_resp = requests.post(folders_url, headers=headers, json={'displayName': folder_name})
        create_resp.raise_for_status()
        folder_id = create_resp.json()['id']
    # Move the message into the folder
    move_url = f'https://graph.microsoft.com/v1.0/users/{mailbox}/messages/{message_id}/move'
    requests.post(move_url, headers=headers, json={'destinationId': folder_id}).raise_for_status()

def strip_html(html_content):
    """Convert HTML email body to plain text."""
    soup = BeautifulSoup(html_content, 'html.parser')
    return soup.get_text(separator=' ', strip=True)

def extract_policy_number(text):
    """Extract policy number using carrier-specific and generic patterns."""
    for pattern, carrier_hint in POLICY_PATTERNS:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            policy_num = match.group(1).strip()
            # Clean up: remove extra spaces, normalize dashes
            policy_num = re.sub(r'\s+', '-', policy_num)
            return {'policyNumber': policy_num, 'carrierHint': carrier_hint}
    return {'policyNumber': None, 'carrierHint': None}

def extract_date_of_loss(text):
    """Extract date of loss from claim text."""
    for pattern in DATE_PATTERNS:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            try:
                date_str = match.group(1) if match.lastindex else match.group(0)
                parsed_date = dateparser.parse(date_str, fuzzy=True)
                if parsed_date:
                    return parsed_date.strftime('%Y-%m-%d')
            except (ValueError, TypeError):
                continue
    return None

def classify_lob(text):
    """Classify the line of business based on keyword matching."""
    text_lower = text.lower()
    scores = {}
    for lob, keywords in LOB_KEYWORDS.items():
        score = sum(1 for kw in keywords if kw.lower() in text_lower)
        if score > 0:
            scores[lob] = score
    if scores:
        return max(scores, key=scores.get)
    return 'unknown'

def extract_claimant_name(text):
    """Extract claimant/insured name."""
    for pattern in NAME_PATTERNS:
        match = re.search(pattern, text)
        if match:
            return match.group(1).strip()
    return None

def extract_phone(text):
    """Extract phone number."""
    match = re.search(PHONE_PATTERN, text, re.IGNORECASE)
    if match:
        phone = re.sub(r'[^\d+]', '', match.group(1))
        return phone
    return None

def extract_address(text):
    """Extract loss location address."""
    match = re.search(ADDRESS_PATTERN, text)
    if match:
        return match.group(1).strip()
    return None

def parse_claim_email(subject, body_html):
    """Main parsing function: extract all claim fields from email."""
    body_text = strip_html(body_html) if '<' in body_html else body_html
    combined_text = f"{subject} {body_text}"
    
    policy_info = extract_policy_number(combined_text)
    
    return {
        'policyNumber': policy_info['policyNumber'],
        'carrierHint': policy_info['carrierHint'],
        'dateOfLoss': extract_date_of_loss(combined_text),
        'lineOfBusiness': classify_lob(combined_text),
        'claimantName': extract_claimant_name(combined_text),
        'claimantPhone': extract_phone(combined_text),
        'lossLocation': extract_address(combined_text),
        'lossDescription': body_text[:1000],  # First 1000 chars as description
        'parseConfidence': 'high' if policy_info['policyNumber'] else 'low',
        'rawSubject': subject,
        'parsedAt': datetime.now(timezone.utc).isoformat()
    }

def lambda_handler(event, context):
    """Main Lambda handler - polls mailbox and processes new claim emails."""
    # Get secrets
    secret_name = 'agency-claims/m365-graph-api'
    secret_response = secretsmanager.get_secret_value(SecretId=secret_name)
    secrets = json.loads(secret_response['SecretString'])
    
    # Get Graph API token
    token = get_graph_token(secrets)
    mailbox = secrets['mailbox']
    
    # Fetch unread emails
    emails = get_unread_emails(token, mailbox)
    results = []
    
    for email in emails:
        claim_id = f"CLM-{datetime.now(timezone.utc).strftime('%Y%m%d%H%M%S')}-{email['id'][:8]}"
        
        try:
            # Parse the email
            parsed = parse_claim_email(
                email.get('subject', ''),
                email.get('body', {}).get('content', '')
            )
            parsed['claimId'] = claim_id
            parsed['emailId'] = email['id']
            parsed['receivedAt'] = email.get('receivedDateTime')
            parsed['fromAddress'] = email.get('from', {}).get('emailAddress', {}).get('address')
            
            # Log to audit trail
            lambda_client.invoke(
                FunctionName='claims-audit-logger',
                InvocationType='Event',
                Payload=json.dumps({
                    'claimId': claim_id,
                    'eventType': 'CLAIM_RECEIVED',
                    'details': {
                        'source': 'email',
                        'from': parsed['fromAddress'],
                        'subject': parsed['rawSubject'],
                        'parseConfidence': parsed['parseConfidence']
                    }
                })
            )
            
            # If policy number found, invoke AMS lookup
            if parsed['policyNumber']:
                ams_response = lambda_client.invoke(
                    FunctionName='claims-ams-lookup',
                    InvocationType='RequestResponse',
                    Payload=json.dumps({
                        'policyNumber': parsed['policyNumber'],
                        'claimId': claim_id
                    })
                )
                ams_data = json.loads(ams_response['Payload'].read())
                parsed['carrierFromAMS'] = ams_data.get('carrierName')
                parsed['carrierId'] = ams_data.get('carrierId')
                parsed['insuredNameFromAMS'] = ams_data.get('insuredName')
            
            # Route to carrier or manual review
            if parsed.get('carrierId'):
                # Invoke carrier automation
                lambda_client.invoke(
                    FunctionName='claims-carrier-automation',
                    InvocationType='Event',
                    Payload=json.dumps(parsed)
                )
                mark_email_processed(token, mailbox, email['id'], 'Processed')
            else:
                # Route to manual review
                mark_email_processed(token, mailbox, email['id'], 'Needs Review')
                # Send notification to agency staff
                lambda_client.invoke(
                    FunctionName='claims-audit-logger',
                    InvocationType='Event',
                    Payload=json.dumps({
                        'claimId': claim_id,
                        'eventType': 'ROUTED_TO_MANUAL_REVIEW',
                        'details': {'reason': 'Could not identify carrier from policy number or email content'}
                    })
                )
            
            results.append({'claimId': claim_id, 'status': 'processed', 'carrier': parsed.get('carrierId', 'manual_review')})
            
        except Exception as e:
            results.append({'claimId': claim_id, 'status': 'error', 'error': str(e)})
            # Move to error folder
            mark_email_processed(token, mailbox, email['id'], 'Parse Errors')
    
    return {'processedCount': len(results), 'results': results}

AMS Policy Lookup Integration

Type: integration

Unified API integration layer that queries the agency's AMS (Applied Epic, AMS360, or HawkSoft) to look up policy details by policy number. Returns the carrier name, carrier ID, insured name, line of business, policy status, and effective dates. Implements an adapter pattern so the same interface works regardless of which AMS the agency uses.

Implementation:

ams_lookup.py - AMS Policy Lookup Lambda Function
python
# ams_lookup.py - AMS Policy Lookup Lambda Function
import json
import os
import boto3
import requests
from datetime import datetime, timezone

secretsmanager = boto3.client('secretsmanager')

class AMSAdapter:
    """Base class for AMS integrations."""
    def lookup_policy(self, policy_number):
        raise NotImplementedError

class AppliedEpicAdapter(AMSAdapter):
    """Applied Epic REST API adapter."""
    def __init__(self, credentials):
        self.base_url = credentials.get('api_url', 'https://api.appliedsystems.com/epic')
        self.client_id = credentials['client_id']
        self.client_secret = credentials['client_secret']
        self.token = None
    
    def _authenticate(self):
        """OAuth2 client credentials flow."""
        auth_url = f"{self.base_url}/oauth/token"
        response = requests.post(auth_url, data={
            'grant_type': 'client_credentials',
            'client_id': self.client_id,
            'client_secret': self.client_secret,
            'scope': 'policies.read'
        })
        response.raise_for_status()
        self.token = response.json()['access_token']
    
    def lookup_policy(self, policy_number):
        if not self.token:
            self._authenticate()
        
        headers = {'Authorization': f'Bearer {self.token}', 'Accept': 'application/json'}
        # Search by policy number
        url = f"{self.base_url}/api/v1/policies"
        params = {'policyNumber': policy_number, '$select': 'policyId,policyNumber,carrier,insuredName,lineOfBusiness,status,effectiveDate,expirationDate'}
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        
        policies = response.json().get('value', [])
        if not policies:
            return None
        
        policy = policies[0]
        return {
            'policyNumber': policy.get('policyNumber'),
            'carrierName': policy.get('carrier', {}).get('name'),
            'carrierId': policy.get('carrier', {}).get('code', '').lower().replace(' ', '-'),
            'carrierNAIC': policy.get('carrier', {}).get('naicCode'),
            'insuredName': policy.get('insuredName'),
            'lineOfBusiness': policy.get('lineOfBusiness'),
            'policyStatus': policy.get('status'),
            'effectiveDate': policy.get('effectiveDate'),
            'expirationDate': policy.get('expirationDate'),
            'amsType': 'applied_epic'
        }

class AMS360Adapter(AMSAdapter):
    """Vertafore AMS360 OData API adapter."""
    def __init__(self, credentials):
        self.base_url = credentials.get('api_url', 'https://api.vertafore.com/ams360')
        self.api_key = credentials['api_key']
        self.agency_code = credentials['agency_code']
    
    def lookup_policy(self, policy_number):
        headers = {
            'Authorization': f'Bearer {self.api_key}',
            'Accept': 'application/json',
            'X-Agency-Code': self.agency_code
        }
        url = f"{self.base_url}/odata/Policies"
        # Escape single quotes per OData string-literal rules so an odd policy
        # number can't break (or inject into) the filter expression
        safe_policy = policy_number.replace("'", "''")
        params = {'$filter': f"PolicyNumber eq '{safe_policy}'", '$expand': 'Carrier,Customer'}
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        
        policies = response.json().get('value', [])
        if not policies:
            return None
        
        policy = policies[0]
        carrier = policy.get('Carrier', {})
        customer = policy.get('Customer', {})
        return {
            'policyNumber': policy.get('PolicyNumber'),
            'carrierName': carrier.get('CompanyName'),
            'carrierId': carrier.get('CompanyCode', '').lower().replace(' ', '-'),
            'carrierNAIC': carrier.get('NAICCode'),
            'insuredName': f"{customer.get('FirstName', '')} {customer.get('LastName', '')}".strip(),
            'lineOfBusiness': policy.get('LineOfBusiness'),
            'policyStatus': policy.get('PolicyStatus'),
            'effectiveDate': policy.get('EffectiveDate'),
            'expirationDate': policy.get('ExpirationDate'),
            'amsType': 'ams360'
        }

class HawkSoftAdapter(AMSAdapter):
    """HawkSoft CMS API adapter."""
    def __init__(self, credentials):
        self.base_url = credentials.get('api_url', 'https://api.hawksoft.com/v1')
        self.api_key = credentials['api_key']
    
    def lookup_policy(self, policy_number):
        headers = {'Authorization': f'ApiKey {self.api_key}', 'Accept': 'application/json'}
        url = f"{self.base_url}/policies/search"
        params = {'policyNumber': policy_number}
        response = requests.get(url, headers=headers, params=params)
        response.raise_for_status()
        
        results = response.json().get('results', [])
        if not results:
            return None
        
        policy = results[0]
        return {
            'policyNumber': policy.get('policyNumber'),
            'carrierName': policy.get('companyName'),
            'carrierId': policy.get('companyCode', '').lower().replace(' ', '-'),
            'carrierNAIC': policy.get('naicCode'),
            'insuredName': policy.get('namedInsured'),
            'lineOfBusiness': policy.get('lineOfBusiness'),
            'policyStatus': policy.get('status'),
            'effectiveDate': policy.get('effectiveDate'),
            'expirationDate': policy.get('expirationDate'),
            'amsType': 'hawksoft'
        }

def get_ams_adapter():
    """Factory function to create the correct AMS adapter."""
    ams_type = os.environ.get('AMS_TYPE', 'applied_epic')
    secret_name = os.environ.get('SECRETS_NAME', 'agency-claims/ams-api-key')
    
    secret_response = secretsmanager.get_secret_value(SecretId=secret_name)
    credentials = json.loads(secret_response['SecretString'])
    
    adapters = {
        'applied_epic': AppliedEpicAdapter,
        'ams360': AMS360Adapter,
        'hawksoft': HawkSoftAdapter
    }
    
    adapter_class = adapters.get(ams_type)
    if not adapter_class:
        raise ValueError(f"Unsupported AMS type: {ams_type}")
    
    return adapter_class(credentials)

def lambda_handler(event, context):
    """Lambda handler for AMS policy lookups."""
    policy_number = event.get('policyNumber')
    claim_id = event.get('claimId', 'unknown')
    
    if not policy_number:
        return {'error': 'policyNumber is required', 'claimId': claim_id}
    
    adapter = get_ams_adapter()
    result = adapter.lookup_policy(policy_number)
    
    if result:
        result['claimId'] = claim_id
        result['lookupTimestamp'] = datetime.now(timezone.utc).isoformat()
        return result
    else:
        return {
            'claimId': claim_id,
            'policyNumber': policy_number,
            'carrierName': None,
            'carrierId': None,
            'error': 'Policy not found in AMS',
            'lookupTimestamp': datetime.now(timezone.utc).isoformat()
        }
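The adapter pattern above makes supporting a new AMS a matter of one subclass plus a registry entry. A self-contained illustration using a hypothetical in-memory adapter (not a real AMS API) to show the shape of an extension:

```python
class AMSAdapter:
    """Base interface each AMS integration implements (mirrors the Lambda above)."""
    def lookup_policy(self, policy_number):
        raise NotImplementedError

class InMemoryAdapter(AMSAdapter):
    """Hypothetical adapter backed by a dict; stands in for a new AMS API."""
    def __init__(self, policies):
        self.policies = policies

    def lookup_policy(self, policy_number):
        policy = self.policies.get(policy_number)
        if policy is None:
            return None
        # Normalize to the same shape the real adapters return
        return {**policy, 'amsType': 'in_memory'}

# Registry mirrors get_ams_adapter(): register the new type alongside the real ones
ADAPTERS = {'in_memory': InMemoryAdapter}

def make_adapter(ams_type, config):
    adapter_class = ADAPTERS.get(ams_type)
    if not adapter_class:
        raise ValueError(f"Unsupported AMS type: {ams_type}")
    return adapter_class(config)
```

Because every adapter returns the same normalized dict (carrierName, carrierId, insuredName, and so on), nothing downstream of the lookup Lambda changes when an agency switches AMS platforms.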

Carrier Routing Rules Engine

Type: workflow

The deterministic routing engine that takes parsed claim data (with carrier ID from AMS lookup) and determines the optimal FNOL submission method for that carrier. It queries the CarrierRoutingRules DynamoDB table, validates that all required fields are present, maps line-of-business codes to carrier-specific values, and dispatches to the appropriate submission handler (API, browser automation, email, or manual queue).

Implementation:

routing_engine.py - Carrier Routing Rules Engine
python
# routing_engine.py - Carrier Routing Rules Engine
import json
import boto3
from datetime import datetime, timezone

dynamodb = boto3.resource('dynamodb')
lambda_client = boto3.client('lambda')
sns = boto3.client('sns')

rules_table = dynamodb.Table('CarrierRoutingRules')
audit_table = dynamodb.Table('ClaimsAuditTrail')

class RoutingDecision:
    def __init__(self, claim_data, carrier_rule):
        self.claim = claim_data
        self.rule = carrier_rule
        self.decision = None
        self.missing_fields = []
        self.mapped_data = {}
    
    def validate_required_fields(self):
        """Check that all carrier-required fields are present in the claim data."""
        required = self.rule.get('requiredFields', [])
        for field in required:
            if not self.claim.get(field):
                self.missing_fields.append(field)
        return len(self.missing_fields) == 0
    
    def map_lob(self):
        """Map the generic LOB to carrier-specific LOB code."""
        lob_mapping = self.rule.get('lobMapping', {})
        generic_lob = self.claim.get('lineOfBusiness', 'unknown')
        self.mapped_data['carrierLOB'] = lob_mapping.get(generic_lob, generic_lob)
        return self.mapped_data['carrierLOB']
    
    def build_fnol_payload(self):
        """Build the carrier-specific FNOL submission payload."""
        self.map_lob()
        return {
            'carrierId': self.rule['carrierId'],
            'carrierName': self.rule['carrierName'],
            'submissionMethod': self.rule['submissionMethod'],
            'portalUrl': self.rule.get('portalUrl'),
            'apiEndpoint': self.rule.get('apiEndpoint'),
            'authType': self.rule.get('authType'),
            'secretsKey': self.rule.get('secretsKey'),
            'secretsSubKey': self.rule.get('secretsSubKey'),
            'playwrightScript': self.rule.get('playwrightScript'),
            'fnolData': {
                'policyNumber': self.claim.get('policyNumber'),
                'dateOfLoss': self.claim.get('dateOfLoss'),
                'lineOfBusiness': self.mapped_data['carrierLOB'],
                'claimantName': self.claim.get('claimantName') or self.claim.get('insuredNameFromAMS'),
                'claimantPhone': self.claim.get('claimantPhone'),
                'lossLocation': self.claim.get('lossLocation'),
                'lossDescription': self.claim.get('lossDescription', '')[:500],
                'reportedBy': 'Automated FNOL System',
                'reportedDate': datetime.now(timezone.utc).strftime('%Y-%m-%d'),
                'claimId': self.claim.get('claimId')
            }
        }

def get_carrier_rule(carrier_id):
    """Fetch carrier routing rule from DynamoDB."""
    # Try exact match first
    response = rules_table.get_item(Key={'carrierId': carrier_id})
    if 'Item' in response:
        return response['Item']
    
    # Fall back to fuzzy matching (substring on carrier ID or name). A
    # single-page scan is acceptable here because the rules table holds at
    # most a few hundred carrier entries.
    scan_response = rules_table.scan()
    for item in scan_response.get('Items', []):
        if carrier_id.lower() in item['carrierId'].lower() or carrier_id.lower() in item['carrierName'].lower():
            return item
    
    return None

def route_claim(claim_data):
    """Main routing function."""
    claim_id = claim_data.get('claimId', 'unknown')
    carrier_id = claim_data.get('carrierId')
    
    # Log routing attempt
    log_audit(claim_id, 'ROUTING_STARTED', {'carrierId': carrier_id})
    
    if not carrier_id:
        log_audit(claim_id, 'ROUTING_FAILED', {'reason': 'No carrier ID available'})
        return route_to_manual(claim_data, 'No carrier identified')
    
    # Get carrier routing rule
    rule = get_carrier_rule(carrier_id)
    if not rule:
        log_audit(claim_id, 'ROUTING_FAILED', {'reason': f'No routing rule for carrier: {carrier_id}'})
        return route_to_manual(claim_data, f'No routing rule configured for carrier: {carrier_id}')
    
    if not rule.get('isActive', False):
        log_audit(claim_id, 'ROUTING_FAILED', {'reason': f'Carrier route is inactive: {carrier_id}'})
        return route_to_manual(claim_data, f'Carrier routing is disabled: {rule.get("carrierName")}')
    
    # Create routing decision
    decision = RoutingDecision(claim_data, rule)
    
    # Validate required fields
    if not decision.validate_required_fields():
        log_audit(claim_id, 'ROUTING_INCOMPLETE', {
            'reason': 'Missing required fields',
            'missingFields': decision.missing_fields,
            'carrier': rule['carrierName']
        })
        return route_to_manual(claim_data, f'Missing fields for {rule["carrierName"]}: {decision.missing_fields}')
    
    # Build FNOL payload
    payload = decision.build_fnol_payload()
    
    # Route based on submission method
    method = rule['submissionMethod']
    
    if method == 'api':
        return submit_via_api(payload, claim_id)
    elif method == 'browser_automation':
        return submit_via_browser(payload, claim_id)
    elif method == 'email':
        return submit_via_email(payload, claim_id)
    else:
        return route_to_manual(claim_data, f'Unknown submission method: {method}')

def submit_via_api(payload, claim_id):
    """Submit FNOL via carrier REST API."""
    log_audit(claim_id, 'FNOL_SUBMISSION_API', {'carrier': payload['carrierName'], 'endpoint': payload['apiEndpoint']})
    
    response = lambda_client.invoke(
        FunctionName='claims-carrier-automation',
        InvocationType='Event',
        Payload=json.dumps({'method': 'api', 'payload': payload})
    )
    return {'status': 'submitted', 'method': 'api', 'carrier': payload['carrierName']}

def submit_via_browser(payload, claim_id):
    """Submit FNOL via Playwright browser automation."""
    log_audit(claim_id, 'FNOL_SUBMISSION_BROWSER', {'carrier': payload['carrierName'], 'portal': payload['portalUrl']})
    
    response = lambda_client.invoke(
        FunctionName='claims-carrier-automation',
        InvocationType='Event',
        Payload=json.dumps({'method': 'browser', 'payload': payload})
    )
    return {'status': 'submitted', 'method': 'browser_automation', 'carrier': payload['carrierName']}

def submit_via_email(payload, claim_id):
    """Submit FNOL via formatted email to carrier."""
    log_audit(claim_id, 'FNOL_SUBMISSION_EMAIL', {'carrier': payload['carrierName']})
    # Build ACORD-formatted email and send via SES
    ses = boto3.client('ses')
    fnol = payload['fnolData']
    body = f"""FIRST NOTICE OF LOSS

Policy Number: {fnol['policyNumber']}
Date of Loss: {fnol['dateOfLoss']}
Line of Business: {fnol['lineOfBusiness']}
Insured/Claimant: {fnol['claimantName']}
Phone: {fnol['claimantPhone']}
Loss Location: {fnol['lossLocation']}

Description of Loss:
{fnol['lossDescription']}

Reported by: {fnol['reportedBy']}
Date Reported: {fnol['reportedDate']}
Reference: {fnol['claimId']}"""
    # Destination address is assumed to be carried on the routing rule and
    # passed through in the payload (e.g., a 'fnolEmail' field); adjust to
    # match your CarrierRoutingRules schema.
    ses.send_email(
        Source='claims@agencyname.com',
        Destination={'ToAddresses': [payload.get('fnolEmail', 'claims-team@agencyname.com')]},
        Message={
            'Subject': {'Data': f"FNOL - Policy {fnol['policyNumber']} - DOL {fnol['dateOfLoss']}"},
            'Body': {'Text': {'Data': body}}
        }
    )
    return {'status': 'submitted', 'method': 'email', 'carrier': payload['carrierName']}

def route_to_manual(claim_data, reason):
    """Route claim to manual review queue and notify staff."""
    claim_id = claim_data.get('claimId', 'unknown')
    log_audit(claim_id, 'ROUTED_TO_MANUAL_REVIEW', {'reason': reason})
    
    # Send SNS notification
    sns.publish(
        TopicArn=f'arn:aws:sns:us-east-1:{boto3.client("sts").get_caller_identity()["Account"]}:claims-routing-alerts',
        Subject=f'Manual Review Required: {claim_id}',
        Message=json.dumps({
            'claimId': claim_id,
            'reason': reason,
            'claimData': {
                'policyNumber': claim_data.get('policyNumber'),
                'claimantName': claim_data.get('claimantName'),
                'dateOfLoss': claim_data.get('dateOfLoss'),
                'lossDescription': claim_data.get('lossDescription', '')[:200]
            }
        }, indent=2)
    )
    return {'status': 'manual_review', 'reason': reason}

def log_audit(claim_id, event_type, details):
    """Write to audit trail."""
    lambda_client.invoke(
        FunctionName='claims-audit-logger',
        InvocationType='Event',
        Payload=json.dumps({
            'claimId': claim_id,
            'eventType': event_type,
            'details': details
        })
    )

def lambda_handler(event, context):
    """Lambda handler - receives parsed claim data and routes it."""
    return route_claim(event)
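The routing engine is driven entirely by items in the CarrierRoutingRules table. A seed item might look like the following sketch — the field names match what routing_engine.py reads, while the concrete values (carrier, secrets path, script name) are illustrative; the small helper mirrors RoutingDecision.validate_required_fields so the rule can be sanity-checked locally before it is loaded:

```python
# Illustrative CarrierRoutingRules item. Field names are the ones
# routing_engine.py reads; all values below are examples only.
SAMPLE_RULE = {
    'carrierId': 'hartford',
    'carrierName': 'The Hartford',
    'isActive': True,
    'submissionMethod': 'browser_automation',  # 'api' | 'browser_automation' | 'email'
    'portalUrl': 'https://portal.example-carrier.com/login',
    'apiEndpoint': None,
    'authType': None,
    'secretsKey': 'agency/carrier-portal-credentials',
    'secretsSubKey': 'hartford',
    'playwrightScript': 'hartford_fnol.py',
    'requiredFields': ['policyNumber', 'dateOfLoss', 'lossDescription'],
    'lobMapping': {'auto': 'Automobile', 'home': 'Homeowners'},
}

def missing_required_fields(rule, claim):
    """Mirror of RoutingDecision.validate_required_fields: return the
    carrier-required fields that are absent from the parsed claim."""
    return [f for f in rule.get('requiredFields', []) if not claim.get(f)]
```

Seed the item with `rules_table.put_item(Item=SAMPLE_RULE)` once the values reflect the real carrier configuration.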

Carrier Portal Browser Automation Base

Type: integration

Base Playwright automation class and carrier-specific scripts for submitting FNOL through carrier web portals. Each carrier has its own script that inherits from the base class and implements the specific form navigation and field mapping for that carrier's portal. Includes screenshot-on-failure for debugging; retries of failed submissions rely on Lambda's built-in retry behavior for asynchronous invocations.

Implementation

carrier_base.py - Base class for carrier portal automation
python
# carrier_base.py - Base class for carrier portal automation
import json
import boto3
from playwright.sync_api import sync_playwright
from datetime import datetime, timezone
import traceback

s3 = boto3.client('s3')
secretsmanager = boto3.client('secretsmanager')
lambda_client = boto3.client('lambda')

AUDIT_BUCKET = 'agency-claims-audit-ACCOUNT_ID'  # Replace during deployment

class CarrierPortalAutomation:
    """Base class for carrier portal FNOL submission."""
    
    def __init__(self, carrier_config, fnol_data):
        self.config = carrier_config
        self.fnol = fnol_data
        self.claim_id = fnol_data.get('claimId', 'unknown')
        self.browser = None
        self.page = None
        self.credentials = None
    
    def load_credentials(self):
        """Load carrier portal credentials from Secrets Manager."""
        secret_response = secretsmanager.get_secret_value(SecretId=self.config['secretsKey'])
        all_secrets = json.loads(secret_response['SecretString'])
        self.credentials = all_secrets.get(self.config['secretsSubKey'], {})
    
    def take_screenshot(self, name):
        """Save screenshot to S3 for audit trail."""
        if self.page:
            screenshot = self.page.screenshot(full_page=True)
            key = f"screenshots/{self.claim_id}/{name}_{datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')}.png"
            s3.put_object(Bucket=AUDIT_BUCKET, Key=key, Body=screenshot, ContentType='image/png')
            return key
        return None
    
    def login(self):
        """Override in carrier-specific class."""
        raise NotImplementedError
    
    def navigate_to_fnol(self):
        """Override in carrier-specific class."""
        raise NotImplementedError
    
    def fill_fnol_form(self):
        """Override in carrier-specific class."""
        raise NotImplementedError
    
    def submit_fnol(self):
        """Override in carrier-specific class."""
        raise NotImplementedError
    
    def get_confirmation(self):
        """Override: extract confirmation/reference number after submission."""
        raise NotImplementedError
    
    def execute(self):
        """Main execution flow with error handling."""
        self.load_credentials()
        result = {'claimId': self.claim_id, 'carrier': self.config.get('carrierName')}
        
        with sync_playwright() as p:
            self.browser = p.chromium.launch(headless=True)
            context = self.browser.new_context(
                viewport={'width': 1920, 'height': 1080},
                user_agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
            )
            self.page = context.new_page()
            
            try:
                # Step 1: Login
                self.page.goto(self.config['portalUrl'], wait_until='networkidle', timeout=30000)
                self.take_screenshot('01_login_page')
                self.login()
                self.page.wait_for_load_state('networkidle')
                self.take_screenshot('02_after_login')
                
                # Step 2: Navigate to FNOL
                self.navigate_to_fnol()
                self.page.wait_for_load_state('networkidle')
                self.take_screenshot('03_fnol_form')
                
                # Step 3: Fill form
                self.fill_fnol_form()
                self.take_screenshot('04_form_filled')
                
                # Step 4: Submit
                self.submit_fnol()
                self.page.wait_for_load_state('networkidle')
                self.take_screenshot('05_after_submit')
                
                # Step 5: Get confirmation
                confirmation = self.get_confirmation()
                self.take_screenshot('06_confirmation')
                
                result['status'] = 'success'
                result['confirmationNumber'] = confirmation
                
                # Log success
                lambda_client.invoke(
                    FunctionName='claims-audit-logger',
                    InvocationType='Event',
                    Payload=json.dumps({
                        'claimId': self.claim_id,
                        'eventType': 'FNOL_SUBMITTED_SUCCESS',
                        'details': {'carrier': self.config['carrierName'], 'confirmation': confirmation, 'method': 'browser_automation'}
                    })
                )
                
            except Exception as e:
                self.take_screenshot('ERROR_screenshot')
                result['status'] = 'error'
                result['error'] = str(e)
                result['traceback'] = traceback.format_exc()
                
                lambda_client.invoke(
                    FunctionName='claims-audit-logger',
                    InvocationType='Event',
                    Payload=json.dumps({
                        'claimId': self.claim_id,
                        'eventType': 'FNOL_SUBMISSION_FAILED',
                        'details': {'carrier': self.config['carrierName'], 'error': str(e), 'method': 'browser_automation'}
                    })
                )
            finally:
                if self.browser:
                    self.browser.close()
        
        return result


# Example carrier-specific implementation: The Hartford
class HartfordFNOL(CarrierPortalAutomation):
    def login(self):
        self.page.fill('input[name="username"], #username', self.credentials['username'])
        self.page.fill('input[name="password"], #password', self.credentials['password'])
        self.page.click('button[type="submit"], input[type="submit"]')
    
    def navigate_to_fnol(self):
        self.page.click('text=Report a Claim')
        self.page.wait_for_selector('form')
    
    def fill_fnol_form(self):
        self.page.fill('#policyNumber', self.fnol['policyNumber'])
        self.page.fill('#dateOfLoss', self.fnol['dateOfLoss'])
        self.page.select_option('#lineOfBusiness', label=self.fnol['lineOfBusiness'])
        self.page.fill('#insuredName', self.fnol['claimantName'])
        self.page.fill('#contactPhone', self.fnol.get('claimantPhone', ''))
        self.page.fill('#lossDescription', self.fnol['lossDescription'])
        if self.fnol.get('lossLocation'):
            self.page.fill('#lossAddress', self.fnol['lossLocation'])
    
    def submit_fnol(self):
        self.page.click('button:text("Submit"), input[value="Submit"]')
        self.page.wait_for_selector('.confirmation, .success, .claim-number', timeout=15000)
    
    def get_confirmation(self):
        el = self.page.query_selector('.confirmation-number, .claim-number, [data-testid="claimNumber"]')
        return el.inner_text() if el else 'SUBMITTED_NO_CONFIRMATION_CAPTURED'


# Dispatcher
CARRIER_HANDLERS = {
    'hartford_fnol.py': HartfordFNOL,
    # Add more carrier handlers as implemented:
    # 'erie_fnol.py': ErieFNOL,
    # 'nationwide_fnol.py': NationwideFNOL,
}

def lambda_handler(event, context):
    """Main Lambda handler for carrier automation."""
    method = event.get('method', 'browser')
    payload = event.get('payload', event)
    
    if method == 'api':
        # Handle API-based submission
        return handle_api_submission(payload)
    
    # Browser automation
    script_name = payload.get('playwrightScript', '')
    handler_class = CARRIER_HANDLERS.get(script_name)
    
    if not handler_class:
        return {'status': 'error', 'error': f'No handler for script: {script_name}'}
    
    handler = handler_class(payload, payload.get('fnolData', {}))
    return handler.execute()

def handle_api_submission(payload):
    """Handle carrier API-based FNOL submission."""
    # 'requests' is not part of the Lambda Python base runtime; bundle it in
    # the deployment package or attach it via a Lambda layer.
    import requests
    
    secrets_response = secretsmanager.get_secret_value(SecretId=payload['secretsKey'])
    secrets = json.loads(secrets_response['SecretString'])
    creds = secrets.get(payload['secretsSubKey'], {})
    
    # Build API request based on carrier auth type
    headers = {'Content-Type': 'application/json'}
    if payload.get('authType') == 'oauth2':
        # OAuth2 flow - carrier specific
        headers['Authorization'] = f"Bearer {creds.get('access_token', '')}"
    elif payload.get('authType') == 'api_key':
        headers['X-API-Key'] = creds.get('api_key', '')
    
    response = requests.post(
        payload['apiEndpoint'],
        headers=headers,
        json=payload.get('fnolData', {}),
        timeout=30
    )
    
    claim_id = payload.get('fnolData', {}).get('claimId', 'unknown')
    
    if response.status_code in (200, 201, 202):
        result = response.json()
        lambda_client.invoke(
            FunctionName='claims-audit-logger',
            InvocationType='Event',
            Payload=json.dumps({
                'claimId': claim_id,
                'eventType': 'FNOL_SUBMITTED_SUCCESS',
                'details': {'carrier': payload['carrierName'], 'method': 'api', 'response': str(result)[:500]}
            })
        )
        return {'status': 'success', 'carrier': payload['carrierName'], 'apiResponse': result}
    else:
        lambda_client.invoke(
            FunctionName='claims-audit-logger',
            InvocationType='Event',
            Payload=json.dumps({
                'claimId': claim_id,
                'eventType': 'FNOL_SUBMISSION_FAILED',
                'details': {'carrier': payload['carrierName'], 'method': 'api', 'statusCode': response.status_code, 'response': response.text[:500]}
            })
        )
        return {'status': 'error', 'carrier': payload['carrierName'], 'statusCode': response.status_code}

n8n Master Orchestration Workflow

Type: workflow

The master n8n workflow that serves as the central orchestrator for the entire claims routing pipeline. It connects email monitoring, claim parsing, AMS lookup, carrier routing, FNOL submission, audit logging, and staff notifications into a single visual workflow with error handling and human-in-the-loop fallbacks.

n8n Master Orchestration Workflow Specification

Workflow Name: Claims Routing - Master Orchestration

Nodes Configuration

Node 1: Schedule Trigger

  • Type: Schedule Trigger
  • Interval: Every 2 minutes
  • Purpose: Polls the claims email inbox on a schedule

Node 2: HTTP Request - Invoke Email Parser Lambda

  • Type: HTTP Request
  • Method: POST
  • Authentication: AWS Signature (configure AWS credentials in n8n)
  • Purpose: Triggers the email parser Lambda which returns parsed claim data
Node 2 Lambda endpoint URL
text
URL: https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/claims-email-parser/invocations

Node 3: IF - Claims Found?

  • Type: IF
  • True path → Node 4 (Split results)
  • False path → End (no claims to process)
Node 3 IF condition
text
Condition: {{$json.processedCount}} > 0

Node 4: Split In Batches

  • Type: Split In Batches
  • Batch Size: 1
  • Purpose: Process each claim individually through the routing pipeline

Node 5: IF - Auto-Routed or Manual?

  • Type: IF
  • True path → Node 6 (Success notification)
  • False path → Node 7 (Manual review notification)
Node 5 IF condition
text
Condition: {{$json.status}} != 'manual_review'

Node 6: Send Email - Success Notification

  • Type: Email Send (SMTP)
  • To: claims-manager@agencyname.com
  • Body: Summary of routing action with link to audit trail
Node 6 email subject template
text
Subject: ✅ Claim {{$json.claimId}} auto-routed to {{$json.carrier}}

Node 7: Send Email - Manual Review Required

  • Type: Email Send (SMTP)
  • To: claims-team@agencyname.com
  • Body: Parsed claim data with pre-filled information and instructions for manual submission
Node 7 email subject template
text
Subject: ⚠️ Manual Review Required: Claim {{$json.claimId}}

Node 8: Error Trigger (Global)

  • Type: Error Trigger
  • Purpose: Catches any unhandled errors in the workflow
  • Routes to Node 9

Node 9: Send Email - Error Alert

  • Type: Email Send (SMTP)
  • To: msp-alerts@yourmsp.com, claims-manager@agencyname.com
  • Body: Full error details including stack trace
Node 9 error alert subject template
text
Subject: 🔴 Claims Routing Error: {{$json.error.message}}

Node 10: Webhook - GloveBox Claims (Optional)

  • Type: Webhook
  • HTTP Method: POST
  • Path: /webhook/glovebox-claim
  • Authentication: Header Auth (X-Webhook-Secret)
  • Purpose: Receives claims submitted through GloveBox client portal
  • Routes to: Node 2 variant (invokes routing engine directly since data is already structured)
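Wherever the X-Webhook-Secret header is validated (in n8n's Header Auth or in a downstream Lambda), the comparison should be constant-time to avoid timing side channels. A minimal sketch, assuming the shared secret is provided out of band:

```python
import hmac

def webhook_authorized(received_secret: str, expected_secret: str) -> bool:
    """Constant-time comparison of the X-Webhook-Secret header value
    against the shared secret configured for the GloveBox webhook."""
    if not received_secret or not expected_secret:
        return False
    return hmac.compare_digest(received_secret, expected_secret)
```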

Credential Configuration Required

1
AWS Credentials: Access Key ID + Secret with permissions for Lambda:InvokeFunction, DynamoDB, S3, Secrets Manager
2
SMTP Credentials: For sending notification emails (use M365 SMTP or Amazon SES)
3
GloveBox Webhook Secret: Shared secret for authenticating incoming webhooks

Error Handling Strategy

  • Each node has individual error handling configured to 'Continue on Fail'
  • The Error Trigger node catches unhandled failures
  • Failed claims are never silently dropped—they always result in either a manual review notification or an error alert
  • The workflow maintains idempotency by checking the email message ID against DynamoDB before processing (prevents duplicate FNOL submissions if the workflow runs while a previous execution is still processing)
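The idempotency check above can be implemented as a DynamoDB conditional write keyed on the email message ID. A sketch of the approach — the ProcessedMessages table name and the TTL attribute are assumptions, not part of the schema deployed earlier:

```python
import time

def build_dedupe_put(message_id: str, table_name: str = 'ProcessedMessages'):
    """Build put_item kwargs for an idempotency record. The condition
    expression makes the write fail if the message ID was already recorded,
    so an overlapping execution cannot submit a duplicate FNOL."""
    return {
        'TableName': table_name,
        'Item': {
            'messageId': {'S': message_id},
            # Optional TTL attribute so the table does not grow unbounded (90 days).
            'expiresAt': {'N': str(int(time.time()) + 90 * 24 * 3600)},
        },
        'ConditionExpression': 'attribute_not_exists(messageId)',
    }

# Usage with the low-level client (ConditionalCheckFailedException means
# the message was already processed and should be skipped):
# client = boto3.client('dynamodb')
# try:
#     client.put_item(**build_dedupe_put(msg_id))
# except client.exceptions.ConditionalCheckFailedException:
#     pass  # duplicate - skip this email
```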

Deployment

1
Import this workflow via n8n UI: Settings > Import Workflow
2
Configure all credentials in n8n Credentials section
3
Test with a single sample email
4
Activate the workflow
5
Monitor executions in n8n's Executions panel

Monthly Compliance Audit Report Generator

Type: skill

A scheduled Lambda function that runs on the 1st of each month, queries the DynamoDB audit trail for the previous month, and generates a compliance report (the implementation below writes plain text to S3; PDF rendering can be layered on if required). The report includes: total claims processed, routing accuracy metrics, carrier breakdown, manual review rate, error rate, and a complete activity log suitable for regulatory review under NAIC Model Law #668.

Implementation

compliance_report.py - Monthly Compliance Report Generator
python
# compliance_report.py - Monthly Compliance Report Generator
import boto3
import json
from datetime import datetime, timezone, timedelta
from collections import Counter, defaultdict

dynamodb = boto3.resource('dynamodb')
s3 = boto3.client('s3')
ses = boto3.client('ses')

table = dynamodb.Table('ClaimsAuditTrail')
REPORT_BUCKET = 'agency-claims-audit-ACCOUNT_ID'

def generate_monthly_report(year, month):
    """Generate compliance report for the specified month."""
    start_date = datetime(year, month, 1, tzinfo=timezone.utc)
    if month == 12:
        end_date = datetime(year + 1, 1, 1, tzinfo=timezone.utc)
    else:
        end_date = datetime(year, month + 1, 1, tzinfo=timezone.utc)
    
    # Scan audit trail for the month
    all_events = []
    scan_kwargs = {
        'FilterExpression': '#ts BETWEEN :start AND :end',
        'ExpressionAttributeNames': {'#ts': 'timestamp'},
        'ExpressionAttributeValues': {
            ':start': start_date.isoformat(),
            ':end': end_date.isoformat()
        }
    }
    while True:
        response = table.scan(**scan_kwargs)
        all_events.extend(response.get('Items', []))
        if 'LastEvaluatedKey' not in response:
            break
        scan_kwargs['ExclusiveStartKey'] = response['LastEvaluatedKey']
    
    # Aggregate metrics
    claims_received = [e for e in all_events if e['eventType'] == 'CLAIM_RECEIVED']
    fnol_success = [e for e in all_events if e['eventType'] == 'FNOL_SUBMITTED_SUCCESS']
    fnol_failed = [e for e in all_events if e['eventType'] == 'FNOL_SUBMISSION_FAILED']
    manual_review = [e for e in all_events if e['eventType'] == 'ROUTED_TO_MANUAL_REVIEW']
    
    total_claims = len(claims_received)
    auto_routed = len(fnol_success)
    failed = len(fnol_failed)
    manual = len(manual_review)
    
    # Carrier breakdown
    carrier_counts = Counter()
    for e in fnol_success:
        carrier = e.get('details', {}).get('carrier', 'Unknown')
        carrier_counts[carrier] += 1
    
    # Source breakdown
    source_counts = Counter()
    for e in claims_received:
        source = e.get('details', {}).get('source', 'unknown')
        source_counts[source] += 1
    
    # Build report
    report_date = start_date.strftime('%B %Y')
    report = f"""CLAIMS ROUTING AUTOMATION - MONTHLY COMPLIANCE REPORT
{'='*60}
Report Period: {report_date}
Generated: {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')}
{'='*60}

EXECUTIVE SUMMARY
{'-'*40}
Total Claims Received:        {total_claims}
Auto-Routed & Submitted:      {auto_routed} ({(auto_routed/total_claims*100) if total_claims > 0 else 0:.1f}%)
Routed to Manual Review:      {manual} ({(manual/total_claims*100) if total_claims > 0 else 0:.1f}%)
Submission Failures:          {failed} ({(failed/total_claims*100) if total_claims > 0 else 0:.1f}%)
Automation Success Rate:      {((auto_routed/(total_claims-manual))*100) if (total_claims-manual) > 0 else 0:.1f}%

CARRIER BREAKDOWN
{'-'*40}
"""
    for carrier, count in carrier_counts.most_common():
        report += f"  {carrier:30s} {count:5d} claims\n"
    
    report += f"\nINTAKE SOURCE BREAKDOWN\n{'-'*40}\n"
    for source, count in source_counts.most_common():
        report += f"  {source:30s} {count:5d} claims\n"
    
    report += f"\nSECURITY & COMPLIANCE NOTES\n{'-'*40}\n"
    report += "- All claims data encrypted at rest (AES-256) and in transit (TLS 1.2+)\n"
    report += "- MFA enforced on all system access\n"
    report += "- Audit trail integrity verified (SHA-256 hash chain)\n"
    report += f"- Total audit events recorded: {len(all_events)}\n"
    report += f"- DynamoDB Point-in-Time Recovery: ENABLED\n"
    report += f"- S3 versioning on document bucket: ENABLED\n"
    
    report += f"\nDETAILED ACTIVITY LOG\n{'-'*40}\n"
    for e in sorted(all_events, key=lambda x: x.get('timestamp', '')):
        report += f"  {e.get('timestamp', 'N/A'):30s} | {e.get('claimId', 'N/A'):20s} | {e.get('eventType', 'N/A')}\n"
    
    # Save report to S3
    report_key = f"compliance-reports/{year}/{month:02d}/claims_routing_report_{year}_{month:02d}.txt"
    s3.put_object(
        Bucket=REPORT_BUCKET,
        Key=report_key,
        Body=report.encode('utf-8'),
        ContentType='text/plain',
        ServerSideEncryption='AES256'
    )
    
    # Email report to agency and MSP
    ses.send_email(
        Source='automation@agencyname.com',
        Destination={
            'ToAddresses': ['agency-principal@agencyname.com', 'msp-account-manager@yourmsp.com']
        },
        Message={
            'Subject': {'Data': f'Claims Routing Compliance Report - {report_date}'},
            'Body': {'Text': {'Data': report}}
        }
    )
    
    return {'reportKey': report_key, 'totalClaims': total_claims, 'successRate': f'{((auto_routed/(total_claims-manual))*100) if (total_claims-manual) > 0 else 0:.1f}%'}

def lambda_handler(event, context):
    """Run on 1st of each month via EventBridge schedule."""
    now = datetime.now(timezone.utc)
    # Generate report for previous month
    if now.month == 1:
        report_year = now.year - 1
        report_month = 12
    else:
        report_year = now.year
        report_month = now.month - 1
    
    return generate_monthly_report(report_year, report_month)
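The handler expects an EventBridge schedule that fires on the 1st of each month. A sketch of the wiring — the cron expression below runs at 06:00 UTC on the 1st, and the rule/target names are illustrative, not values the rest of the system depends on:

```python
SCHEDULE_EXPRESSION = 'cron(0 6 1 * ? *)'  # 06:00 UTC on the 1st of every month

def build_schedule_rule(lambda_arn: str):
    """Kwargs for events.put_rule / events.put_targets to wire the monthly
    report. Rule and target names are examples; match your own convention."""
    return {
        'rule': {
            'Name': 'claims-compliance-report-monthly',
            'ScheduleExpression': SCHEDULE_EXPRESSION,
            'State': 'ENABLED',
        },
        'target': {'Id': 'compliance-report-lambda', 'Arn': lambda_arn},
    }
```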

Testing & Validation

  • TC-001: Send a test email to claims@agencyname.com with subject 'New Claim - Policy TRV-AUTO-2024-5678' and body containing claimant name, date of loss, loss description, and phone number. Verify within 5 minutes that: (a) email is moved to 'Processed' folder, (b) DynamoDB ClaimsAuditTrail contains CLAIM_RECEIVED and ROUTING_STARTED events for this claim, (c) the correct carrier (Travelers) was identified via AMS lookup.
  • TC-002: Send a test email with no policy number (e.g., 'Customer called about a fender bender'). Verify within 5 minutes that: (a) email is moved to 'Needs Review' folder, (b) DynamoDB contains ROUTED_TO_MANUAL_REVIEW event, (c) notification email is sent to claims-team@agencyname.com with pre-filled claim data.
  • TC-003: Send test emails for each of the top 5 carriers integrated. For each, verify the FNOL submission reaches the carrier (check carrier portal for submitted claim or verify API response logged in audit trail). Confirm the carrier-specific LOB mapping is correct (e.g., 'auto' maps to 'AUTO' for Travelers, 'Automobile' for Hartford).
  • TC-004: For each carrier using browser automation, verify the Playwright script: (a) successfully logs into the carrier portal, (b) navigates to the FNOL form, (c) populates all fields correctly, (d) submits the form, (e) captures the confirmation number, and (f) saves screenshots at each step to S3.
  • TC-005: Verify audit trail completeness by processing 10 test claims and querying DynamoDB. Each claim must have at minimum: CLAIM_RECEIVED, ROUTING_STARTED, and either FNOL_SUBMITTED_SUCCESS or ROUTED_TO_MANUAL_REVIEW events. Verify SHA-256 hash is present on every event.
  • TC-006: Test error handling by temporarily breaking a carrier portal URL in the routing rules. Submit a claim for that carrier and verify: (a) error is caught, (b) screenshot is saved to S3, (c) FNOL_SUBMISSION_FAILED event is logged, (d) SNS alert is sent to MSP and agency, (e) claim is re-routed to manual review queue.
  • TC-007: Verify encryption compliance: (a) confirm S3 buckets have default encryption enabled (aws s3api get-bucket-encryption), (b) confirm DynamoDB tables have encryption at rest (aws dynamodb describe-table --table-name ClaimsAuditTrail), (c) verify all API calls use HTTPS by checking CloudWatch logs for the Lambda functions.
  • TC-008: Test the GloveBox webhook integration by submitting a test claim through the GloveBox client portal. Verify the webhook fires to the n8n endpoint and the claim flows through the same routing pipeline as email-sourced claims, with proper audit trail entries showing source='glovebox'.
  • TC-009: Run the monthly compliance report generator manually (invoke Lambda with test date parameters). Verify the report is: (a) saved to S3 in the correct path, (b) contains accurate counts matching DynamoDB records, (c) emailed to both agency principal and MSP account manager.
  • TC-010: Perform a parallel run test: process 20 real claims through both the automation system and manually. Compare results side-by-side: verify carrier identification matches 100%, FNOL data accuracy matches 95%+, and no claims are dropped or duplicated by the automation.
  • TC-011: Test n8n workflow resilience by stopping the n8n container ('docker compose stop n8n') for 10 minutes, then restarting. Verify that: (a) emails received during downtime are processed on next poll cycle, (b) no duplicate processing occurs, (c) the n8n execution history shows the gap and recovery.
  • TC-012: Verify MFA is enforced on all access points: (a) AWS console login requires MFA, (b) n8n admin login uses HTTPS with strong password, (c) AMS API credentials are stored in Secrets Manager (not in code), (d) carrier portal credentials are stored in Secrets Manager.
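TC-001's intake email can be generated programmatically so the test is repeatable. A sketch using only the standard library — the subject and recipient come from the test case, while the claimant details are illustrative test data; send the message through whatever authenticated SMTP relay the agency uses (M365 or SES):

```python
from email.message import EmailMessage

def build_tc001_email(sender: str = 'tester@agencyname.com') -> EmailMessage:
    """Construct the TC-001 test claim email. Hand the result to
    smtplib.SMTP.send_message() against the agency's SMTP relay."""
    msg = EmailMessage()
    msg['From'] = sender
    msg['To'] = 'claims@agencyname.com'
    msg['Subject'] = 'New Claim - Policy TRV-AUTO-2024-5678'
    msg.set_content(
        'Claimant: Jane Doe\n'           # illustrative test data
        'Date of Loss: 2024-06-15\n'
        'Phone: 555-0100\n'
        'Description: Rear-ended at a stop light; rear bumper damage.\n'
    )
    return msg
```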

Client Handoff

Client Handoff Checklist

Training Sessions (Half-Day, On-Site or Video)

Session 1: Claims Staff Training (2 hours)

  • How claims now flow through the automated system (visual diagram walkthrough)
  • What happens when they send/receive a claim email to the claims@agencyname.com inbox
  • How to recognize when a claim was auto-routed vs. sent to manual review
  • How to handle 'Needs Review' emails (the pre-filled data they receive and what to do with it)
  • How to check FNOL submission status in the AMS
  • What NOT to do: don't delete emails from the claims mailbox before they're processed; don't modify the Processed/Needs Review folder structure

Session 2: Claims Manager / Agency Principal Training (1 hour)

  • Monthly compliance report walkthrough: what each metric means and what to watch for
  • CloudWatch dashboard overview: how to check system health
  • When to contact the MSP: carrier portal errors, new carrier onboarding requests, routing rule changes
  • How automation success rate trends relate to operational efficiency

Documentation Package to Leave Behind

1. System Architecture Diagram - Visual showing all components, data flows, and integration points
2. Carrier Routing Rules Reference - Table of all configured carriers, submission methods, and required fields
3. Runbook: Common Issues - Troubleshooting guide for email parsing failures, carrier portal errors, and manual review procedures
4. Runbook: Adding a New Carrier - Step-by-step for requesting MSP to onboard a new carrier integration
5. Compliance Documentation - Written information security program addendum covering the automation system, data flow diagram, risk assessment, and incident response plan section
6. Emergency Contact Card - MSP escalation path: who to call, SLA response times, after-hours procedures
7. Credentials & Access Inventory - Sealed envelope (or password manager entry) with all service account credentials, API keys, and admin access details stored with the agency principal

Success Criteria Review (with Agency Principal)

Sign-Off

  • Agency principal signs off on system acceptance
  • 30-day warranty period begins (MSP fixes any issues at no additional cost)
  • Transition to ongoing managed services agreement

Maintenance

Ongoing Maintenance Responsibilities

...

Weekly (MSP Responsibility - 30 min/week)

  • Review CloudWatch dashboard for error trends and anomalies
  • Check n8n execution history for failed workflows
  • Verify all carrier portal automations executed successfully (review screenshots in S3 for any failures)
  • Clear any stuck items in the 'Parse Errors' email folder
  • Verify DynamoDB table metrics (read/write capacity, throttled requests)
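The "check n8n execution history for failed workflows" step can be scripted against n8n's public REST API. A minimal sketch, assuming the JSON shape of the `GET /api/v1/executions?status=error` response (field names like `status`, `workflowId`, and `workflowName` are assumptions and may differ across n8n versions):

```python
from collections import Counter

def failed_workflow_summary(executions: list) -> Counter:
    """Count failed executions per workflow so the weekly review can spot
    which workflow (claims intake, FNOL submit, etc.) is misbehaving.

    `executions` is assumed to be the JSON list returned by n8n's
    GET /api/v1/executions endpoint filtered to status=error."""
    return Counter(
        e.get("workflowName", e.get("workflowId", "unknown"))
        for e in executions
        if e.get("status") == "error"
    )
```

Feeding a week of executions through this turns a manual click-through of the n8n UI into a one-line summary that can be pasted into the weekly review notes.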

Monthly (MSP Responsibility - 2 hours/month)

  • Review and deliver the monthly compliance report to agency principal
  • Analyze routing accuracy metrics: if auto-route rate drops below 80%, investigate root causes
  • Test each carrier portal login to verify credentials are still valid (carrier password resets can break automation)
  • Review AWS billing to ensure costs remain within expected range ($30-$50/month)
  • Rotate any expiring API credentials or client secrets (check Azure AD app registration expiry dates)
  • Back up n8n workflows and PostgreSQL database to S3
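The credential-rotation check above is easy to miss by hand. A minimal sketch of an expiry sweep, assuming the MSP keeps a small inventory of secret end dates (copied from each Azure AD app registration's client-secret expiry); `expiring_soon` is a hypothetical helper name:

```python
from datetime import date, timedelta

def expiring_soon(expiry_dates: dict, today: date, window_days: int = 30) -> list:
    """Return credential names whose expiry falls within `window_days` of `today`,
    sorted alphabetically, so they can be rotated before automation breaks."""
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, exp in expiry_dates.items() if exp <= cutoff)
```

Run it as part of the monthly checklist: any name it returns gets a new client secret and an updated entry in Secrets Manager before the old one lapses.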

Update n8n to the latest patch version if security updates are available:

```bash
docker compose pull && docker compose up -d
```

Quarterly (MSP + Agency - QBR Meeting)

Annual (MSP Responsibility)

  • Conduct annual risk assessment covering the automation system (required by the NAIC Insurance Data Security Model Law, Model #668)
  • Review and renew all software licenses and API agreements
  • Performance review: is the serverless architecture still cost-effective at current volumes?
  • Major version upgrades: n8n, Playwright, Python runtime, AWS Lambda runtime
  • Penetration test or vulnerability scan of the automation infrastructure
  • Update compliance documentation for any regulatory changes

SLA Considerations

  • P1 (System Down): All claims routing to manual queue. Response: 1 hour. Resolution: 4 hours.
  • P2 (Single Carrier Broken): One carrier's portal automation failing. Response: 4 hours. Resolution: 1 business day.
  • P3 (Degraded Performance): Slow processing, intermittent errors. Response: 1 business day. Resolution: 3 business days.
  • P4 (Enhancement Request): New carrier onboarding, rule changes. Response: 2 business days. Resolution: 2 weeks.
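As a rough illustration of how these targets might be tracked in a ticketing integration, a sketch that converts the table above into concrete deadlines. The hour values mirror the SLA lines, with a business day approximated as 8 working hours for simplicity (a real implementation would skip weekends and after-hours); the names are hypothetical:

```python
from datetime import datetime, timedelta

# Response/resolution targets from the SLA table, in hours.
# 1 business day ~ 8 working hours, 2 weeks ~ 80 working hours (approximation).
SLA_HOURS = {
    "P1": (1, 4),    # System down
    "P2": (4, 8),    # Single carrier broken
    "P3": (8, 24),   # Degraded performance
    "P4": (16, 80),  # Enhancement request
}

def sla_deadlines(priority: str, opened: datetime):
    """Return (response_deadline, resolution_deadline) for a ticket."""
    respond_h, resolve_h = SLA_HOURS[priority]
    return opened + timedelta(hours=respond_h), opened + timedelta(hours=resolve_h)
```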

Escalation Path

1. Agency CSR notices issue → emails msp-support@yourmsp.com
2. MSP L1 tech checks CloudWatch, n8n logs, and error screenshots
3. If carrier portal issue → MSP L2 updates Playwright scripts
4. If AMS API issue → MSP L2 contacts Applied/Vertafore support
5. If infrastructure issue → MSP L2/L3 reviews AWS logs and scales resources
6. If compliance concern → MSP escalates to compliance officer and agency principal

Carrier Portal Fragility Mitigation

Carrier portal browser automation is the highest-maintenance component. Mitigation strategies:

  • Monitor carrier portals weekly for UI changes
  • Subscribe to carrier agent newsletters for portal update announcements
  • Maintain a 'fallback to manual' path that is always functional
  • Budget 2-4 hours/month specifically for carrier script maintenance
  • Consider joining Applied/Vertafore partner programs for early notice of industry changes
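The "fallback to manual" path can be enforced structurally: wrap each carrier's portal automation (e.g. a Playwright script) so that any failure routes the claim to the manual-review queue instead of dropping it. A minimal sketch; `submit` and `queue_for_manual_review` are hypothetical callables supplied by the caller:

```python
from typing import Callable, Optional

def submit_with_fallback(claim: dict,
                         submit: Callable,
                         queue_for_manual_review: Callable) -> Optional[str]:
    """Try the portal automation; on any failure, queue the claim for a human.

    Returns the carrier confirmation number on success, or None after the
    claim has been routed to manual review."""
    try:
        return submit(claim)        # e.g. runs the Playwright portal script
    except Exception as exc:        # portal UI change, login failure, timeout...
        queue_for_manual_review(claim, f"portal automation failed: {exc}")
        return None
```

Because every exception funnels into the manual queue, a broken Playwright selector degrades the auto-route rate but never loses a claim, which is exactly the property TC-010 verifies.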

Alternatives

Microsoft Power Automate Only (No Custom Code)

Use Microsoft Power Automate as the sole automation platform, leveraging its built-in email triggers, AI Builder for document processing, and desktop flows (RPA) for carrier portal automation. No n8n, no Lambda functions, no custom Python code. Everything runs within the Microsoft ecosystem using low-code/no-code tools.

Strada Voice AI + GloveBox Portal (Fully Managed SaaS)

Instead of building custom automation, deploy Strada's purpose-built insurance voice AI for phone-based FNOL intake and GloveBox for digital self-service claims. Strada handles phone calls with AI agents that collect FNOL data conversationally and submit to carriers. GloveBox handles digital intake. The MSP's role shifts from builder to integrator and managed service provider.

Zapier + Airtable (Lightweight MVP)

Use Zapier for workflow orchestration and Airtable as a lightweight claims tracking database. Zapier monitors the email inbox, extracts data using built-in parsers, logs to Airtable, and sends formatted emails to carrier claims departments (email-based FNOL only—no portal automation). This is a minimal viable product to demonstrate value before investing in full automation.

Syntora Custom Build (White-Glove Implementation)

Engage Syntora, a specialist firm that builds custom AI automation for small insurance agencies, to handle the entire implementation. Syntora delivers a turnkey system using Claude API for document parsing, integrated with the agency's AMS, with source code ownership transferred to the MSP. The MSP then manages the system ongoing.

IVANS-Centric Approach (Carrier Download Only)

Rather than automating outbound FNOL submission, focus on maximizing inbound claims data from carriers via IVANS Claims Download. Ensure IVANS Claims Download is active for all carriers, configure the AMS to automatically receive and process claims status updates, and add a simple email notification workflow that alerts staff when new claims arrive via IVANS. The staff still manually submits FNOL but with a streamlined process.
