Content generation

Implementation Guide: Draft renewal letters, declination notices, and client risk management reports

Step-by-step implementation guide for deploying AI to draft renewal letters, declination notices, and client risk management reports for insurance agency clients.

Hardware Procurement

Dell PowerEdge T360 Tower Server

Dell Technologies | PowerEdge T360 (Intel Xeon E-2434, 32GB DDR5, 2TB SSD RAID1) | Qty: 0

$3,200–$5,000 MSP cost / $4,500–$7,000 suggested resale — ONLY procure if client requires on-premises AI processing for data sovereignty; most deployments use cloud APIs and need no new hardware

On-premises server for running self-hosted LLMs (Ollama + Llama 3.1) when the agency cannot send any client data to cloud APIs. Houses the local inference engine, document generation pipeline, and audit log database. Not required for the recommended cloud-API deployment path.

NVIDIA Tesla T4 16GB PCIe GPU

NVIDIA | T4 16GB (900-2G183-0000-000) | Qty: 0

$1,500–$2,200 MSP cost / $2,200–$3,000 suggested resale — ONLY procure for on-premises deployments

GPU accelerator for local LLM inference on the PowerEdge T360. Enables running quantized 7B–13B parameter models at 30–50 tokens/second. Low 70W TDP makes it suitable for tower server deployment without additional cooling. Not required for cloud-API deployments.

Standard Business Workstation

Dell / Lenovo / HP | Dell OptiPlex 7020 or Lenovo ThinkCentre M80q (i5-13500, 16GB RAM, 512GB SSD) | Qty: 0

$800–$1,200 per unit — typically already present in the agency; only procure replacements if existing machines are under-spec

End-user workstations for accessing the document generation interface (web portal or Microsoft Teams app) and performing human review of AI-generated documents. Minimum 16GB RAM recommended for smooth M365 Copilot experience if that path is chosen.

Software Procurement

OpenAI API (GPT-4.1 Mini)

OpenAI | GPT-4.1 Mini | Qty: ~300 documents/month (20-person agency)

$0.40 per 1M input tokens / $1.60 per 1M output tokens; estimated $20–$50/month

Primary LLM engine for generating renewal letters and declination notices. GPT-4.1 Mini offers the best cost-to-quality ratio for structured insurance correspondence. Each letter consumes approximately 2,000 input tokens (prompt + client data) and 800 output tokens.
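As a sanity check on these figures, the raw token cost per letter can be worked out directly from the published rates above; the monthly estimate leaves headroom for retries, longer documents, and prompt iteration:

```python
# Per-document token cost for GPT-4.1 Mini at the rates quoted above.
INPUT_RATE = 0.40 / 1_000_000   # $ per input token
OUTPUT_RATE = 1.60 / 1_000_000  # $ per output token

def letter_cost(input_tokens: int = 2_000, output_tokens: int = 800) -> float:
    """Raw API cost in dollars for one generated letter."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

per_letter = letter_cost()   # roughly $0.002 per letter
monthly = 300 * per_letter   # raw token cost at 300 documents/month
print(f"${per_letter:.4f} per letter, ${monthly:.2f}/month raw")
```

The raw token spend is a small fraction of the budgeted range, so per-document cost is not the constraint; review time is.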

Anthropic Claude API (Sonnet 4)

Anthropic | Claude Sonnet 4 | Qty: ~50 risk management reports/month

$3.00 per 1M input tokens / $15.00 per 1M output tokens; estimated $15–$40/month

Secondary LLM engine for generating complex client risk management reports that require nuanced analysis and longer-form writing. Claude Sonnet 4 excels at following complex instructions and producing compliance-sensitive content with fewer hallucinations on analytical tasks.

n8n Cloud (Pro Plan)

n8n GmbH | Pro Plan

$50/month (10,000 executions) MSP cost / $100–$150/month suggested resale to client

Workflow orchestration platform that connects the AMS, LLM APIs, document templates, human approval queue, and email delivery into an automated pipeline. Handles scheduling (renewal date triggers), data transformation, error handling, and audit logging.

Microsoft 365 Business Standard or Premium

$12.50–$22.00/user/month (assumed already in place at most agencies)

Foundation platform providing Outlook (email delivery of generated letters), Word (document formatting and human review), SharePoint (document storage and audit trail), and Azure AD/Entra ID (authentication and SSO). Required prerequisite — not a new procurement for most agencies.

Microsoft 365 Copilot (Optional Add-on)

Microsoft | per-seat SaaS add-on

$30/user/month list; ~$18–$22/user/month via CSP for up to 300 users / $25–$35/user/month suggested resale

Optional enhancement that enables AI-assisted drafting directly within Word and Outlook. Useful for agencies that want AI assistance beyond the three core document types (e.g., ad hoc client emails, proposals). Not required for the core implementation but adds value as an upsell.

Applied Epic API Access License

Applied Systems | SDK-API license add-on

Contact Applied Systems — typically $100–$300/month add-on to existing Epic license

Provides REST API access to Applied Epic for extracting client records, policy details, renewal dates, coverage limits, claims history, and producer assignments. This is the critical data source integration. Only needed if client uses Applied Epic as their AMS.

DocuSign eSignature (Standard Plan)

DocuSign | Standard Plan | Qty: up to 5 users

$25/user/month (Standard); $40/user/month (Business plan)

Optional integration for client acknowledgment workflows on declination notices and risk management reports. Provides legally binding e-signatures with audit trail. License type: per-seat SaaS. Often already in place at agencies.

Ollama (Self-Hosted LLM Runtime)

Free — only applicable for on-premises deployments

Local LLM runtime for agencies requiring all data processing on-premises. Runs quantized Meta Llama 3.1 8B or Mistral 7B models on the PowerEdge T360 with the T4 GPU. Quality is lower than cloud GPT-4.1 Mini, but no client data ever leaves the premises.

Prerequisites

  • Active Agency Management System (AMS) with API access enabled — Applied Epic (with SDK-API license), Vertafore AMS360 (with API subscription), HawkSoft (with API access), or EZLynx. Confirm API credentials are available and test connectivity before project kickoff.
  • Microsoft 365 Business Standard or Premium licenses for all users who will review/approve AI-generated documents. Azure AD/Entra ID tenant configured with MFA enabled.
  • Reliable internet connectivity (25+ Mbps) at all agency locations where the system will be used. Cloud API calls are lightweight text (typically <10KB per request) but must be consistently available.
  • Inventory of current document templates: collect 10–20 examples each of (a) renewal letters, (b) declination notices, and (c) risk management reports the agency currently sends. These become the training corpus for prompt engineering.
  • List of all states where the agency writes business, with specific regulatory requirements for declination notice language, timing, and format in each state. Engage agency's E&O counsel to review and approve state-specific compliance templates before AI implementation.
  • Designated 'AI Document Reviewer' role assigned to at least 2 licensed agents/CSRs who will be responsible for reviewing and approving all AI-generated documents before they are sent to clients.
  • Administrative access to the agency's email system (Exchange Online / M365 Admin Center) for configuring the outbound email workflow and shared mailbox for document delivery.
  • SharePoint Online site or document library designated for storing generated documents and audit logs. Recommended: create a dedicated 'AI Documents' site collection with retention policies aligned to state record-keeping requirements (typically 5–7 years).
  • Python 3.11+ runtime environment available on the n8n server or a dedicated integration server if using custom code nodes. Required packages: openai, anthropic, python-docx, jinja2.
  • Signed data processing agreement (DPA) with OpenAI and/or Anthropic covering the handling of insurance client NPI (nonpublic personal information) under GLBA requirements. OpenAI Enterprise API and Anthropic API both offer DPAs — ensure these are executed before sending any real client data.

Installation Steps

Step 1: Provision n8n Orchestration Environment

Set up the n8n workflow automation platform that will serve as the central orchestration engine. n8n connects the AMS data source, LLM APIs, document templates, approval workflows, and email delivery into a single automated pipeline. We recommend n8n Cloud Pro for most deployments (managed hosting, automatic updates, built-in monitoring), but self-hosted n8n Community Edition on the client's server is an option for maximum control.

bash
# Option A: n8n Cloud (Recommended)
# 1. Sign up at https://app.n8n.cloud/register
# 2. Select Pro plan ($50/month, 10,000 executions)
# 3. Note your instance URL: https://<agency-name>.app.n8n.cloud

# Option B: Self-hosted n8n on Docker (on PowerEdge T360 or agency server)
sudo apt update && sudo apt install -y docker.io docker-compose
mkdir -p /opt/n8n && cd /opt/n8n
cat > docker-compose.yml << 'EOF'
version: '3.8'
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n:latest
    restart: always
    ports:
      - '5678:5678'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=<STRONG_PASSWORD_HERE>
      - N8N_HOST=n8n.agency-domain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.agency-domain.com/
      - N8N_ENCRYPTION_KEY=<GENERATE_32_CHAR_KEY>
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_DATABASE=n8n
      - DB_POSTGRESDB_USER=n8n
      - DB_POSTGRESDB_PASSWORD=<DB_PASSWORD>
    volumes:
      - n8n_data:/home/node/.n8n
    depends_on:
      - postgres
  postgres:
    image: postgres:16
    restart: always
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=<DB_PASSWORD>
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  n8n_data:
  postgres_data:
EOF
docker-compose up -d
Note

For self-hosted: place n8n behind a reverse proxy (nginx/Caddy) with a valid SSL certificate. Use Let's Encrypt via Caddy for simplest setup. Ensure the n8n instance is only accessible from the agency's network or via VPN. For n8n Cloud: all SSL and hosting is managed automatically.
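The `<GENERATE_32_CHAR_KEY>` placeholder in the compose file above can be produced with a short stdlib one-liner (shown in Python for portability):

```python
# Generate a random 32-character hex key for N8N_ENCRYPTION_KEY.
import secrets

key = secrets.token_hex(16)  # 16 random bytes -> 32 hex characters
print(key)
```

Store the generated key somewhere safe (password manager or secrets vault); n8n cannot decrypt stored credentials if the key is lost.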

Step 2: Configure LLM API Credentials

Set up API accounts with OpenAI (primary, for GPT-4.1 Mini) and Anthropic (secondary, for Claude Sonnet 4 on complex risk reports). Create organization-level API keys with usage limits to prevent runaway costs. Configure these credentials in n8n for use in workflows.

1. OpenAI API Setup: Go to https://platform.openai.com/account/api-keys → Create a new project: 'Insurance-AI-Documents' → Generate a project-scoped API key → Set monthly usage limit: $100 (adjust based on agency size) → Note the key: sk-proj-xxxxxxxxxxxx
2. Anthropic API Setup: Go to https://console.anthropic.com/settings/keys → Create a new API key for the workspace → Set monthly spend limit: $75 → Note the key: sk-ant-xxxxxxxxxxxx
3. Configure in n8n: Navigate to n8n > Settings > Credentials → Add new credential: 'OpenAI API' → paste API key → Add new credential: 'Anthropic API' → paste API key
Test connectivity to OpenAI API
Test connectivity to OpenAI API
bash
pip install openai anthropic
python3 -c "
from openai import OpenAI
client = OpenAI(api_key='sk-proj-xxxxxxxxxxxx')
response = client.chat.completions.create(
    model='gpt-4.1-mini',
    messages=[{'role': 'user', 'content': 'Say hello'}],
    max_tokens=10
)

print(response.choices[0].message.content)
"
Critical

Never commit API keys to version control. Store them only in n8n's encrypted credential store or a secrets manager (Azure Key Vault, HashiCorp Vault). Set up billing alerts at 50% and 80% of monthly limits in both OpenAI and Anthropic dashboards. For agencies subject to strict data residency, use Azure OpenAI Service instead of direct OpenAI API — configure the Azure OpenAI endpoint and key in n8n's 'OpenAI' credential type with the custom base URL.
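For the Azure OpenAI option, the main difference is that Azure routes requests by deployment name in the URL rather than by the `model` field. A stdlib-only sketch of how the two request shapes differ (the resource name and api-version below are placeholders, not values from this guide):

```python
# Sketch: same chat-completions payload, two endpoint styles.
# "<resource>" and the api-version are placeholders -- substitute your own.
import json

def build_request(provider: str, deployment: str = "gpt-4.1-mini"):
    body = {
        "model": deployment,
        "messages": [{"role": "user", "content": "Say hello"}],
        "temperature": 0.3,
        "max_tokens": 500,
    }
    if provider == "openai":
        url = "https://api.openai.com/v1/chat/completions"
    elif provider == "azure":
        # Azure selects the model via the deployment segment of the URL.
        url = ("https://<resource>.openai.azure.com/openai/deployments/"
               f"{deployment}/chat/completions?api-version=2024-06-01")
    else:
        raise ValueError(f"unknown provider: {provider}")
    return url, json.dumps(body)

url, payload = build_request("openai")
```

In n8n this maps to setting a custom base URL on the OpenAI credential, as the note above describes.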

Step 3: Establish AMS API Integration

Connect to the agency's AMS to extract client data, policy information, renewal dates, and coverage details. This is the most variable step depending on which AMS the agency uses. Below are instructions for Applied Epic (most common) and AMS360, with notes for HawkSoft.

  • Applied Epic API Integration: Log into Applied Developer Center: https://developer.appliedsystems.com
  • Register your application and obtain OAuth2 credentials
  • Request the following API scopes: clients.read (client demographics), policies.read (policy details, coverages, limits), activities.read (renewal dates, expiration dates), producers.read (assigned producer information)
  • AMS360 API Integration: Contact Vertafore support to enable API access for your AMS360 instance
  • Obtain API endpoint URL and authentication credentials
  • AMS360 uses SOAP/REST hybrid — refer to Vertafore API documentation
  • HawkSoft Integration (Limited API): HawkSoft API access may be limited — check with HawkSoft support
  • Fallback: Configure scheduled CSV export from HawkSoft — export client list with policy details daily, place CSV in a monitored folder that n8n watches via File Trigger node
  • In n8n, create an HTTP Request node or custom code node for AMS API calls — configure with OAuth2 credentials stored in n8n credential store
Applied Epic API
bash
# OAuth2 token request and test client fetch

# Test API connectivity:
curl -X POST https://api.appliedsystems.com/oauth/token \
  -H 'Content-Type: application/x-www-form-urlencoded' \
  -d 'grant_type=client_credentials&client_id=<CLIENT_ID>&client_secret=<CLIENT_SECRET>&scope=clients.read policies.read'

# Fetch a test client record:
curl -X GET 'https://api.appliedsystems.com/v1/clients?limit=1' \
  -H 'Authorization: Bearer <ACCESS_TOKEN>'
Note

Applied Epic API access requires a separate SDK-API license — confirm this is active before starting. The Applied API uses OAuth 2.0 client credentials flow. Rate limits apply (typically 100 requests/minute) — implement exponential backoff in n8n workflows. For agencies using older AMS platforms without APIs, use the CSV export/import fallback approach with n8n's file trigger node monitoring a shared network folder. Document all API field mappings in a spreadsheet: AMS field name → prompt template variable name.
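The exponential backoff the note calls for can be sketched as a small retry wrapper (a hypothetical helper, not Applied-specific; in n8n this logic lives in a Code node around the HTTP call):

```python
# Minimal exponential backoff with jitter for rate-limited API calls.
import random
import time

def with_backoff(call, max_retries: int = 5, base_delay: float = 1.0):
    """Retry `call` on failure, doubling the wait each attempt (plus jitter)."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the workflow
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.5)
            time.sleep(delay)

# Example: a flaky call that succeeds on the third attempt.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("HTTP 429: rate limited")
    return "ok"

result = with_backoff(flaky, base_delay=0.01)
```

A production version should retry only on 429/5xx responses and respect any `Retry-After` header the API returns.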

Step 4: Build Document Template Library

Create Jinja2-based document templates for each of the three document types (renewal letter, declination notice, risk management report) with state-specific variant sections. These templates define the structure, required compliance language, and variable insertion points. The AI generates the personalized content sections; the template ensures regulatory boilerplate is always correct.

bash
# Create template directory structure
mkdir -p /opt/insurance-ai/templates/{renewal,declination,risk_report}
mkdir -p /opt/insurance-ai/templates/compliance/{state_notices,disclosures}

# Example: Renewal Letter Base Template (Jinja2 + Markdown)
cat > /opt/insurance-ai/templates/renewal/base_renewal.md.j2 << 'TEMPLATE'
{{ agency_letterhead }}

{{ current_date }}

{{ client_name }}
{{ client_address_line1 }}
{{ client_address_line2 }}
{{ client_city }}, {{ client_state }} {{ client_zip }}

RE: Policy Renewal — {{ policy_type }} Policy #{{ policy_number }}
    Current Expiration: {{ expiration_date }}

Dear {{ client_salutation }} {{ client_last_name }},

{{ ai_generated_body }}

**Renewal Summary:**
- Policy Type: {{ policy_type }}
- Current Premium: ${{ current_premium }}
- Renewal Premium: ${{ renewal_premium }}
- Premium Change: {{ premium_change_pct }}%
- Effective Date: {{ renewal_effective_date }}
- Coverage Highlights: {{ coverage_summary }}

{{ state_specific_notice }}

{{ ai_generated_closing }}

{{ ai_disclosure_notice }}

Sincerely,

{{ producer_name }}
{{ producer_title }}
{{ agency_name }}
{{ agency_phone }} | {{ agency_email }}
TEMPLATE

# Example: Declination Notice Base Template
cat > /opt/insurance-ai/templates/declination/base_declination.md.j2 << 'TEMPLATE'
{{ agency_letterhead }}

{{ current_date }}

SENT VIA: {{ delivery_method }}

{{ client_name }}
{{ client_address_line1 }}
{{ client_address_line2 }}
{{ client_city }}, {{ client_state }} {{ client_zip }}

RE: Notice of {{ notice_type }} — {{ policy_type }} Policy #{{ policy_number }}

Dear {{ client_salutation }} {{ client_last_name }},

{{ ai_generated_body }}

{{ state_required_declination_language }}

{% if client_state == 'TX' %}
TEXAS NOTICE: In accordance with Texas Insurance Code §551.104, this notice is being provided not later than the 10th day before the cancellation takes effect.
{% endif %}
{% if client_state == 'NY' %}
NEW YORK NOTICE: You have the right to request a review of this decision. Contact the New York State Department of Financial Services at 1-800-342-3736.
{% endif %}
{% if client_state == 'CA' %}
CALIFORNIA NOTICE: Per California Insurance Code §677, you are entitled to the specific reasons for this action in writing.
{% endif %}

{{ ai_generated_next_steps }}

{{ ai_disclosure_notice }}

Sincerely,

{{ producer_name }}
{{ producer_title }}
{{ agency_name }}
TEMPLATE

# AI Disclosure Notice (used in all templates)
cat > /opt/insurance-ai/templates/compliance/disclosures/ai_disclosure.txt << 'EOF'
This document was prepared with AI assistance and has been reviewed and approved by {{ reviewer_name }}, a licensed insurance professional (License #{{ reviewer_license }}), on {{ review_date }}.
EOF
Note

CRITICAL COMPLIANCE STEP: All state-specific notice language must be reviewed and approved by the agency's E&O (Errors & Omissions) attorney before deployment. The templates shown above are examples — actual state-required language varies significantly. Maintain a version-controlled repository of all templates with change history. Template changes should require compliance sign-off. Create a state-requirements matrix spreadsheet documenting: (1) required notice periods, (2) mandatory language, (3) delivery method requirements, (4) consumer rights disclosures for each state where the agency operates.
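In production these templates are rendered with Jinja2 (listed in the prerequisites). For illustration, a stdlib-only stand-in shows the variable-insertion mechanics and why a hard failure on missing variables matters:

```python
# Stdlib stand-in for simple {{ var }} substitution. Production uses Jinja2,
# which also handles the {% if %} state blocks shown in the templates above.
import re

def render(template: str, context: dict) -> str:
    def sub(match):
        name = match.group(1)
        if name not in context:
            raise KeyError(f"Unfilled template variable: {name}")
        return str(context[name])
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", sub, template)

snippet = "RE: Policy Renewal — {{ policy_type }} Policy #{{ policy_number }}"
print(render(snippet, {"policy_type": "BOP", "policy_number": "BP-1042"}))
```

Raising on a missing variable is safer than Jinja2's default of rendering it blank; a letter mailed with an empty premium field is a compliance incident.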

Step 5: Develop Core Prompt Library

Create the system prompts and user prompt templates that instruct the LLM to generate insurance-appropriate content. Each document type gets a dedicated system prompt that establishes the AI's role, constraints, tone, and compliance requirements. User prompts are dynamically assembled from AMS data and the template structure. These prompts are the intellectual core of the solution — they determine output quality.

Create prompt library directory structure and write system/user prompt files
bash
# Create prompt library directory
mkdir -p /opt/insurance-ai/prompts/{system,user}

# Store prompts as text files for version control
cat > /opt/insurance-ai/prompts/system/renewal_letter.txt << 'SYSTEMPROMPT'
You are an expert insurance correspondence writer working for an independent insurance agency. Your task is to draft the personalized body paragraphs of a policy renewal letter.

RULES:
1. Write in a warm, professional tone appropriate for client-facing insurance communications.
2. Address the client by name and reference their specific policy type and coverage.
3. If the premium is increasing, acknowledge the increase with empathy, briefly explain market factors (do NOT fabricate specific reasons not provided in the data), and emphasize the value of continued coverage.
4. If the premium is decreasing or flat, frame it positively as a reflection of the client's good risk profile.
5. NEVER include specific claim details, loss history specifics, or underwriting rationale unless explicitly provided in the input data.
6. NEVER fabricate coverage details, limits, deductibles, or any numerical data — only reference what is provided.
7. NEVER provide legal advice or make coverage guarantees.
8. NEVER reference competitors or suggest the client shop their coverage.
9. Include a clear call-to-action: schedule a renewal review meeting with their producer.
10. Keep the body to 2-3 concise paragraphs (150-250 words total).
11. Do NOT include any greeting or sign-off — those are handled by the template.

OUTPUT FORMAT: Return ONLY the letter body paragraphs. No JSON, no metadata, no commentary.
SYSTEMPROMPT

cat > /opt/insurance-ai/prompts/system/declination_notice.txt << 'SYSTEMPROMPT'
You are an expert insurance correspondence writer drafting a declination or non-renewal notice for an independent insurance agency. This is a sensitive communication that must be handled with professionalism and empathy.

RULES:
1. Write in a professional, empathetic but clear tone. The client is losing coverage — be direct but compassionate.
2. State the action being taken (non-renewal, cancellation, or declination) clearly in the first sentence.
3. Reference ONLY the reasons provided in the input data. NEVER fabricate or assume reasons for the action.
4. NEVER use language that could be interpreted as discriminatory based on race, ethnicity, gender, religion, national origin, disability, or any protected class.
5. NEVER reference credit scores, neighborhood demographics, or any external data source unless explicitly provided and approved.
6. Provide constructive next steps: suggest the client contact the agency to discuss alternative coverage options, contact their state insurance department, or seek coverage through the state FAIR plan or assigned risk pool if applicable.
7. NEVER include specific underwriting guidelines, carrier proprietary information, or internal risk scores.
8. Keep the body to 2-3 paragraphs (150-200 words). Clarity and brevity are essential.
9. Do NOT include greeting, sign-off, or state-specific legal notices — those are handled by the template.

OUTPUT FORMAT: Return two sections separated by '---':
1. Main body paragraphs
2. Next steps paragraph
SYSTEMPROMPT

cat > /opt/insurance-ai/prompts/system/risk_report.txt << 'SYSTEMPROMPT'
You are an expert insurance risk management consultant preparing a client risk management report for an independent insurance agency. This report helps the client understand their risk profile and the agency's recommendations.

RULES:
1. Write in a professional, consultative tone. You are advising a business owner or individual on their risk management posture.
2. Structure the report with clear sections: Executive Summary, Current Coverage Overview, Identified Risk Gaps, Recommendations, and Next Steps.
3. Reference ONLY the policy data, claims history, and risk factors provided in the input. NEVER fabricate statistics, industry benchmarks, or risk data.
4. When identifying coverage gaps, be specific about what coverage types might address them (e.g., umbrella liability, cyber liability, EPLI) but NEVER guarantee that such coverage will be available or quote premiums.
5. Prioritize recommendations as High/Medium/Low priority based on potential severity and likelihood.
6. Include a disclaimer that the report is for informational purposes and does not constitute a binding coverage analysis.
7. Target 500-1000 words depending on the complexity of the client's portfolio.
8. Use clear headings and bullet points for readability.

OUTPUT FORMAT: Return the full report body in Markdown format with ## headings, bullet points, and bold text for emphasis.
SYSTEMPROMPT

# User prompt template (dynamically filled by n8n workflow)
cat > /opt/insurance-ai/prompts/user/renewal_user_prompt.txt << 'USERPROMPT'
Draft renewal letter body paragraphs for the following client and policy:

CLIENT INFORMATION:
- Name: {{ client_name }}
- Relationship Length: {{ years_as_client }} years
- Total Policies with Agency: {{ total_policies }}

POLICY BEING RENEWED:
- Type: {{ policy_type }}
- Policy Number: {{ policy_number }}
- Current Premium: ${{ current_premium }}
- Renewal Premium: ${{ renewal_premium }}
- Premium Change: {{ premium_change_pct }}% {{ 'increase' if premium_change_pct > 0 else 'decrease' if premium_change_pct < 0 else '(no change)' }}
- Key Coverages: {{ coverage_summary }}
- Any Coverage Changes at Renewal: {{ coverage_changes | default('None') }}

PRODUCER: {{ producer_name }}

SPECIAL NOTES FROM PRODUCER: {{ producer_notes | default('None') }}
USERPROMPT

Note

Prompt engineering is iterative — expect to refine these prompts over 2-3 weeks during Phase 2 based on output quality review with agency staff. Store all prompts in a Git repository for version control. Never hard-code client data in prompts — always use template variables filled at runtime. Test each prompt with at least 20 different client scenarios before moving to pilot. The system prompts above are production-quality starting points but should be customized to match each agency's voice and style.
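One cheap guard worth adding to the pipeline: before any assembled prompt is sent to the API, verify that every template variable was actually filled (the example strings below are illustrative):

```python
# Reject prompts that still contain unfilled {{ ... }} placeholders.
import re

PLACEHOLDER = re.compile(r"\{\{[^}]*\}\}")

def assert_fully_rendered(prompt: str) -> str:
    """Return the prompt unchanged, or raise if any placeholder survived."""
    leftovers = PLACEHOLDER.findall(prompt)
    if leftovers:
        raise ValueError(f"Unfilled placeholders: {leftovers}")
    return prompt

good = "Draft renewal letter body paragraphs for Jane Doe, policy BP-1042."
bad = "Current Premium: ${{ current_premium }}"

assert_fully_rendered(good)  # passes through unchanged
# assert_fully_rendered(bad) would raise ValueError
```

This catches AMS field-mapping gaps before they reach the model, where the LLM might otherwise invent a value for the missing field.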

Step 6: Build n8n Workflow: Renewal Letter Pipeline

Create the primary n8n workflow that automates the renewal letter generation process. This workflow triggers on a schedule (or webhook), queries the AMS for upcoming renewals, generates personalized letters via the LLM API, creates formatted Word documents, and routes them to the human approval queue. This is the most complex workflow and serves as the pattern for the declination and risk report workflows.

1. Navigate to n8n > Workflows > Import from File — save the workflow JSON as renewal_letter_workflow.json
2. Node 1: Schedule Trigger — Type: Schedule Trigger. Config: Run daily at 7:00 AM agency local time. Purpose: Check for policies renewing in the next 60 days.
3. Node 2: HTTP Request — Query AMS API — Type: HTTP Request. Method: GET. URL: https://api.appliedsystems.com/v1/policies. Query Params: expiration_date_from={{ $today }}&expiration_date_to={{ $today.plus(60, 'days') }}&status=active. Authentication: OAuth2 (Applied Epic credentials). Headers: Accept: application/json.
4. Node 3: Filter — Exclude Already Processed — Type: IF node. Config: Check if policy_number NOT IN processed_renewals SharePoint list.
5. Node 4: Loop Over Each Renewal — Type: Split In Batches (batch size: 5 to respect API rate limits).
6. Node 5: Code Node — Assemble Prompt — Type: Code (Python). See custom_ai_components for full implementation.
7. Node 6: HTTP Request — Call OpenAI API — Type: HTTP Request (or OpenAI node). Method: POST. URL: https://api.openai.com/v1/chat/completions. Body: { model: 'gpt-4.1-mini', messages: [...], temperature: 0.3, max_tokens: 500 }. Authentication: OpenAI API credential.
8. Node 7: Code Node — Generate Word Document — Type: Code (Python) with python-docx. See custom_ai_components for full implementation.
9. Node 8: SharePoint — Upload Document — Type: Microsoft SharePoint node. Config: Upload .docx to 'AI Documents / Pending Review / Renewals' folder.
10. Node 9: Microsoft Teams — Notify Reviewer — Type: Microsoft Teams node. Config: Send message to 'AI Document Review' channel. Message: '📋 New renewal letter ready for review: [Client Name] - [Policy Type]'. Include: Link to SharePoint document.
11. Node 10: Audit Log — Record Generation — Type: Code node → append to SharePoint list or PostgreSQL. Fields: timestamp, client_id, policy_number, model_used, prompt_hash, tokens_used, document_url, status='pending_review'.
Note

Import the workflow skeleton first, then configure each node with the agency's specific AMS API endpoints and credential references. The batch size of 5 in the loop prevents hitting API rate limits. Set the OpenAI temperature to 0.3 for consistent, professional output — higher values introduce too much variability for compliance-sensitive documents. Add error handling nodes after every HTTP request to catch API failures and alert the MSP monitoring system.
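The audit-log record in Node 10 can be assembled in the Code node roughly as follows (field names follow the node list above; the sample values are illustrative). Hashing the prompt rather than storing it keeps NPI out of the log table while still allowing prompt-version forensics:

```python
# Build one audit-log record for a generated document (Node 10 pattern).
import hashlib
import json
from datetime import datetime, timezone

def audit_record(client_id, policy_number, model, prompt, tokens, doc_url):
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_id": client_id,
        "policy_number": policy_number,
        "model_used": model,
        # Hash, not the prompt itself: avoids persisting client NPI in logs.
        "prompt_hash": hashlib.sha256(prompt.encode()).hexdigest(),
        "tokens_used": tokens,
        "document_url": doc_url,
        "status": "pending_review",
    }

rec = audit_record("C-1001", "BP-1042", "gpt-4.1-mini",
                   "Draft renewal letter body paragraphs for ...", 2800,
                   "https://agency.sharepoint.com/sites/ai-docs/BP-1042.docx")
print(json.dumps(rec, indent=2))
```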

Step 7: Build n8n Workflow: Declination Notice Pipeline

Create the declination notice workflow. This is triggered manually by a producer or CSR through a webhook/form (not on a schedule, since declinations are event-driven). The workflow collects the declination reason, client data, and state-specific requirements, generates the notice, and routes it for compliance review.

Node 1: Webhook Trigger — Type: Webhook | Method: POST | Path: /declination-request | Authentication: Header Auth (shared secret) | Expected payload: client_id, policy_number, notice_type (non-renewal | cancellation | declination), reason_codes, reason_narrative, effective_date, requested_by (producer_email)
Alternative: n8n Form Trigger — provides a web form UI | Type: Form Trigger | Create form with fields matching above payload | URL shared with agency staff as bookmark
Node 2: HTTP Request — Fetch Client & Policy Data from AMS | Same pattern as renewal workflow Node 2
Node 3: Code Node — Determine State Requirements | Type: Code (JavaScript) | Logic: Look up client_state in state_requirements JSON config to determine required notice language, timing, delivery method
Node 4: Code Node — Assemble Declination Prompt | Combines system prompt + client data + reason codes + state requirements
Node 5: HTTP Request — Call OpenAI API | Model: gpt-4.1-mini | Temperature: 0.2 (lower for declinations — maximize consistency)
Node 6: Code Node — Generate Word Document with state-specific notices
Node 7: SharePoint Upload — 'Pending Review / Declinations' folder
Node 8: Teams Notification — flag as HIGH PRIORITY | Include state-specific deadline reminder in notification
Node 9: Audit Log — record with reason_codes for bias monitoring
Note

Declination notices are the highest-risk document type. ALWAYS route these for review by a licensed principal or compliance officer, not just a CSR. The workflow should enforce a mandatory minimum review period (e.g., cannot be approved within 15 minutes of generation to prevent rubber-stamping). Include the state-specific notice deadline in the Teams notification so reviewers know the urgency. Some states (e.g., Texas) require notice 10+ days before effective date — the workflow should refuse to generate if the effective date is too soon for compliance.
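The effective-date guard described above reduces to a simple lead-time check. A sketch follows; the per-state minimums are illustrative placeholders only (confirm each value against the state's current insurance code before deployment):

```python
from datetime import date
from typing import Optional

# ILLUSTRATIVE minimum advance-notice days — verify against each state's statutes
STATE_MIN_NOTICE_DAYS = {'TX': 10, 'NY': 15, 'CA': 20, 'FL': 14}
DEFAULT_MIN_NOTICE_DAYS = 30  # conservative fallback for unlisted states

def notice_is_compliant(state: str, effective_date: date,
                        today: Optional[date] = None) -> bool:
    """Return False when the workflow should refuse to generate the notice."""
    today = today or date.today()
    required = STATE_MIN_NOTICE_DAYS.get(state, DEFAULT_MIN_NOTICE_DAYS)
    return (effective_date - today).days >= required
```

Wire this check into the workflow immediately after the state-requirements lookup so a non-compliant request fails before any document is drafted.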

Step 8: Build n8n Workflow: Risk Management Report Pipeline

Create the risk management report workflow. This uses Claude Sonnet 4 for higher-quality analytical writing. Triggered on-demand by producers preparing for client review meetings, or on a quarterly schedule for key accounts.

Node 1: Dual Trigger — Option A: Webhook/Form (on-demand by producer) | Option B: Schedule (quarterly for clients in 'Key Accounts' list)
Node 2: HTTP Request — Fetch Full Client Portfolio from AMS. Pull ALL policies for the client, not just one. Include: claims history (last 5 years), coverage limits, deductibles
Node 3: Code Node — Assemble Comprehensive Risk Profile. Aggregate data across all policies. Calculate: total insured values, coverage gaps, claims frequency
Node 4: HTTP Request — Call Anthropic Claude API
Node 5: Code Node — Convert Markdown to Formatted Word Document. Use python-docx to create professional report with agency branding. Include: table of contents, coverage summary table, risk matrix
Node 6: SharePoint Upload — 'Pending Review / Risk Reports'
Node 7: Email Notification to Producer — 'Your risk management report for [Client] is ready for review'
Node 8: Audit Log
Node 4: HTTP Request — Anthropic Claude API call. Method: POST. URL: https://api.anthropic.com/v1/messages. Headers: x-api-key: <ANTHROPIC_KEY>, anthropic-version: 2023-06-01, content-type: application/json. Request body:
json
{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 2000,
  "temperature": 0.4,
  "system": "<risk_report_system_prompt>",
  "messages": [{"role": "user", "content": "<assembled_risk_data>"}]
}
Note

Risk reports use Claude Sonnet 4 because they require more nuanced analytical writing than routine letters. The higher per-token cost ($3/$15 per 1M tokens vs. $0.40/$1.60 for GPT-4.1 Mini) is justified by the higher value of these documents — they're used in client retention meetings. Expect 1,500-3,000 tokens per report. Include a prominent disclaimer in every report: 'This report is for informational purposes only and does not constitute a binding coverage analysis or guarantee of insurability.'
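At those rates the per-report API cost is easy to sanity-check; the token counts below are the estimates from this note, not measured values:

```python
# Claude Sonnet 4 pricing: $3 per 1M input tokens, $15 per 1M output tokens
def report_cost_usd(input_tokens: int, output_tokens: int) -> float:
    return input_tokens / 1_000_000 * 3.00 + output_tokens / 1_000_000 * 15.00

# e.g. ~2,000 input + ~1,800 output tokens ≈ $0.033 per report
cost = report_cost_usd(2_000, 1_800)
```

Even at the high end of the 1,500-3,000 token range, a report costs a few cents, which is negligible next to its retention value.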

Step 9: Implement Human Review & Approval Queue

Build the mandatory human-in-the-loop review system. This is non-negotiable for insurance compliance. All AI-generated documents must be reviewed and explicitly approved by a licensed professional before delivery. The system uses SharePoint lists and Microsoft Teams Adaptive Cards for the approval workflow.

1. Create SharePoint List: 'AI Document Review Queue' with columns: DocumentID (Auto-number), ClientName (Single line text), PolicyNumber (Single line text), DocumentType (Choice: Renewal | Declination | Risk Report), GeneratedDate (Date/Time), DocumentURL (Hyperlink — points to .docx in SharePoint), Status (Choice: Pending Review | Approved | Rejected | Revision Requested), AssignedReviewer (Person), ReviewDate (Date/Time), ReviewerComments (Multi-line text), AIModel (Single line text), TokensUsed (Number), PromptVersion (Single line text), ComplianceDeadline (Date/Time)
2. Create Power Automate Flow for Approval (or build in n8n) — Trigger: When item is created in 'AI Document Review Queue'. Action 1: Send Adaptive Card to Teams channel 'AI Document Review' (card includes: Client name, document type, preview link, Approve/Reject buttons). Action 2: Wait for response. Action 3: Update SharePoint list item with reviewer decision. Action 4 (if Approved): Move document to 'Approved' folder, trigger email delivery. Action 5 (if Rejected): Notify requesting producer with reviewer comments.
3. Create Teams Adaptive Card JSON template (see code block below)
Write Teams Adaptive Card JSON template to disk
bash
cat > /opt/insurance-ai/templates/approval_card.json << 'EOF'
{
  "type": "AdaptiveCard",
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "version": "1.5",
  "body": [
    {
      "type": "TextBlock",
      "text": "📋 AI Document Review Required",
      "weight": "Bolder",
      "size": "Large"
    },
    {
      "type": "FactSet",
      "facts": [
        { "title": "Client:", "value": "${clientName}" },
        { "title": "Document Type:", "value": "${documentType}" },
        { "title": "Policy #:", "value": "${policyNumber}" },
        { "title": "Generated:", "value": "${generatedDate}" },
        { "title": "Compliance Deadline:", "value": "${deadline}" }
      ]
    },
    {
      "type": "ActionSet",
      "actions": [
        {
          "type": "Action.OpenUrl",
          "title": "📄 Review Document",
          "url": "${documentUrl}"
        },
        {
          "type": "Action.Submit",
          "title": "✅ Approve",
          "data": { "action": "approve", "documentId": "${documentId}" }
        },
        {
          "type": "Action.Submit",
          "title": "❌ Reject",
          "data": { "action": "reject", "documentId": "${documentId}" }
        }
      ]
    }
  ]
}
EOF
Note

The approval queue is the most important compliance control in the entire system. Under no circumstances should the workflow be configured to auto-approve or auto-send documents. The SharePoint list serves as the permanent audit trail required by the NAIC Model Bulletin. Set up a daily report (Power BI or simple Power Automate email) showing documents pending review for more than 24 hours to prevent bottlenecks. For declination notices, enforce a secondary approval requirement (two reviewers) given the heightened regulatory risk.
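The "pending more than 24 hours" report described above is a simple filter over the queue items. A sketch, assuming items have already been fetched from SharePoint into dicts whose keys mirror the list columns:

```python
from datetime import datetime, timedelta
from typing import Dict, List, Optional

def overdue_reviews(queue: List[Dict], max_age_hours: int = 24,
                    now: Optional[datetime] = None) -> List[Dict]:
    """Items still awaiting review past the age threshold."""
    now = now or datetime.now()
    cutoff = now - timedelta(hours=max_age_hours)
    return [item for item in queue
            if item['Status'] == 'Pending Review'
            and item['GeneratedDate'] < cutoff]

sample = [
    {'DocumentID': 1, 'Status': 'Pending Review',
     'GeneratedDate': datetime(2025, 6, 1, 8, 0)},
    {'DocumentID': 2, 'Status': 'Approved',
     'GeneratedDate': datetime(2025, 6, 1, 8, 0)},
    {'DocumentID': 3, 'Status': 'Pending Review',
     'GeneratedDate': datetime(2025, 6, 2, 9, 0)},
]
stale = overdue_reviews(sample, now=datetime(2025, 6, 2, 10, 0))
```

Run this in a daily n8n workflow and feed the result into the alert email so reviewers see exactly which documents are blocking delivery.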

Step 10: Configure Data Sanitization & PII Protection

Implement a data sanitization layer that strips unnecessary personally identifiable information (PII) before sending data to cloud LLM APIs. Insurance client records contain sensitive NPI (nonpublic personal information) protected under GLBA. Only the minimum data needed for letter generation should reach the API.

Create and test the PII sanitization module
bash
# Create PII sanitization module
cat > /opt/insurance-ai/modules/sanitize.py << 'PYEOF'
import re
from datetime import datetime, date  # module-level: log_sanitization uses datetime
from typing import Dict, Any

# Fields that should NEVER be sent to the LLM API
STRIP_FIELDS = [
    'ssn', 'social_security', 'tax_id', 'ein', 'fein',
    'bank_account', 'routing_number', 'credit_card',
    'drivers_license', 'dl_number',
    'date_of_birth', 'dob',  # send age range instead
    'medical_info', 'health_conditions',
    'financial_statements', 'revenue', 'income',
]

# Regex patterns for PII that might appear in free-text fields
PII_PATTERNS = {
    'ssn': r'\b\d{3}-?\d{2}-?\d{4}\b',
    'phone': r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b',
    'email_in_text': r'[\w.-]+@[\w.-]+\.\w+',
    'credit_card': r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b',
}

def sanitize_for_llm(client_data: Dict[str, Any]) -> Dict[str, Any]:
    """Remove sensitive PII from client data before sending to LLM API."""
    sanitized = {}
    for key, value in client_data.items():
        # Skip fields that should never be sent
        if key.lower() in STRIP_FIELDS:
            continue
        # Redact PII patterns in string values
        if isinstance(value, str):
            for pattern_name, pattern in PII_PATTERNS.items():
                value = re.sub(pattern, f'[REDACTED-{pattern_name.upper()}]', value)
        sanitized[key] = value
    
    # Convert DOB to age range if needed
    if 'date_of_birth' in client_data:
        from datetime import datetime, date
        try:
            dob = datetime.strptime(client_data['date_of_birth'], '%Y-%m-%d').date()
            age = (date.today() - dob).days // 365
            if age < 30: sanitized['age_range'] = 'Under 30'
            elif age < 45: sanitized['age_range'] = '30-44'
            elif age < 60: sanitized['age_range'] = '45-59'
            else: sanitized['age_range'] = '60+'
        except (ValueError, KeyError):
            pass
    
    return sanitized


def log_sanitization(original_fields: list, sanitized_fields: list) -> Dict:
    """Create audit record of what was sanitized."""
    removed = set(original_fields) - set(sanitized_fields)
    return {
        'original_field_count': len(original_fields),
        'sanitized_field_count': len(sanitized_fields),
        'fields_removed': list(removed),
        'sanitization_timestamp': datetime.now().isoformat()
    }
PYEOF

# Test the sanitization module (run from /opt/insurance-ai so 'modules' resolves)
cd /opt/insurance-ai && python3 -c "
from modules.sanitize import sanitize_for_llm
test_data = {
    'client_name': 'John Smith',
    'ssn': '123-45-6789',
    'policy_number': 'HO-2024-001',
    'date_of_birth': '1975-06-15',
    'premium': '2450.00',
    'notes': 'Client email john@example.com, phone 555-123-4567'
}
result = sanitize_for_llm(test_data)
print(result)
# Expected: SSN removed, DOB converted to age range, PII in notes redacted
"
Note

This sanitization layer is MANDATORY for GLBA compliance. Place it between the AMS data extraction and the LLM API call in every workflow. The sanitization module runs in n8n's Code node (Python mode). Log every sanitization action to the audit trail. Periodically review the redaction patterns to ensure new PII field types from AMS updates are captured. For agencies using Azure OpenAI with a signed DPA, some PII fields may be acceptable to send — confirm with the agency's compliance officer and document the decision.
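The periodic pattern review is easiest to enforce with a small regression check. A minimal sketch, with a subset of the patterns reproduced inline so it runs standalone:

```python
import re

# Subset of the PII_PATTERNS defined in sanitize.py
PII_PATTERNS = {
    'ssn': r'\b\d{3}-?\d{2}-?\d{4}\b',
    'phone': r'\b\d{3}[-.]?\d{3}[-.]?\d{4}\b',
    'email_in_text': r'[\w.-]+@[\w.-]+\.\w+',
}

def redact(text: str) -> str:
    """Apply each PII pattern in order, replacing matches with labeled tags."""
    for name, pattern in PII_PATTERNS.items():
        text = re.sub(pattern, f'[REDACTED-{name.upper()}]', text)
    return text
```

Keep a list of known-PII sample strings in version control and run checks like these whenever the patterns change or the AMS adds new free-text fields.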

Step 11: Configure Email Delivery for Approved Documents

Set up the automated email delivery system that sends approved documents to clients. After a reviewer approves a document in the review queue, this workflow converts the Word document to PDF, attaches it to a professionally formatted email, and sends it via the agency's email system with proper tracking.

1. Trigger — SharePoint list item updated (Status = 'Approved')
2. HTTP Request — Download approved .docx from SharePoint
3. Code Node — Convert .docx to PDF (Option A: LibreOffice CLI in Docker; Option B: Microsoft Graph API)
4. Code Node — Compose Email: template the email body with client name, document type, and agency contact info
5. Microsoft Outlook Node — Send Email. From: renewals@agencyname.com (shared mailbox) | To: {{ client_email }} | CC: {{ producer_email }} | Subject: '{{ document_type }} — Policy #{{ policy_number }}' | Body: professional HTML email template | Attachment: generated PDF
6. Update SharePoint — Status = 'Sent', add sent_date and email_message_id
7. Update AMS — Log activity/note in client record via API
Option A: Convert .docx to PDF using LibreOffice CLI in Docker
shell
docker exec n8n-libreoffice libreoffice --headless --convert-to pdf /tmp/document.docx
Option B: Convert .docx to PDF using Microsoft Graph API
http
POST https://graph.microsoft.com/v1.0/drives/{drive-id}/items/{item-id}/content?format=pdf
Node 7: POST to Applied Epic Activities API to log activity in AMS client record
json
{
  "client_id": "<client_id>",
  "activity_type": "Correspondence",
  "description": "AI-generated {{ document_type }} sent to client",
  "date": "<today>",
  "user": "<reviewer_name>"
}
Note

Use a shared mailbox (e.g., renewals@agency.com) rather than individual producer mailboxes for better tracking and compliance. Ensure the shared mailbox is set up in Exchange Online with send-as permissions for the n8n service account. The AMS activity logging (Node 7) is critical — it ensures the agency's official record of client communication includes AI-generated documents. Test email deliverability (SPF, DKIM, DMARC alignment) to prevent client-facing emails from landing in spam.

Step 12: Deploy Monitoring, Alerting & Analytics Dashboard

Set up monitoring for the AI document generation system including API usage tracking, error alerting, quality metrics, and cost management. This gives both the MSP and the agency visibility into system health and ROI.

1. Create SharePoint List: 'AI System Metrics' — Columns: Date, DocumentsGenerated, TokensConsumed, APICost, ApprovalRate, AvgReviewTime, Errors, ModelUsed
2. n8n monitoring workflow (runs hourly): queries the OpenAI usage API (GET https://api.openai.com/v1/usage), queries Anthropic usage (GET https://api.anthropic.com/v1/usage), and logs results to the SharePoint metrics list
3. Set up n8n error workflow (global error handler): in n8n Settings > Workflow > Error Workflow, configure a Teams/email alert to the MSP on any workflow failure. Include: workflow name, error message, timestamp, affected client
4. Power BI Dashboard (optional but recommended for client-facing reporting): connect to the 'AI Document Review Queue' and 'AI System Metrics' SharePoint lists. Key visuals: documents generated per week by type (bar chart), approval rate trend (line chart — target >95%), average review time (gauge — target <4 hours), monthly API cost (card), estimated time saved (docs × avg_manual_time)
5. Cost alerting: in the OpenAI dashboard, set a billing alert at $75/month; in the Anthropic console, set a spending limit at $60/month; in n8n, add a cost-check node that pauses workflows if daily spend exceeds $10
6. Weekly summary email (n8n scheduled workflow): every Monday at 8 AM, send an email to the agency principal and MSP account manager. Include: documents generated, approval rate, estimated hours saved, API costs
Note

The analytics dashboard is a key value demonstration for the MSP's managed service. Share a monthly report with the agency principal showing ROI: documents generated × average manual drafting time (15-30 min) = hours saved. At a CSR/producer billing rate of $35-75/hour, this quickly justifies the monthly service fee. Set up PagerDuty or Opsgenie integration for critical alerts (workflow failures, API outages) during business hours.
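The ROI arithmetic from this note as a reusable helper; the defaults are the mid-range illustrative figures above, not measured values:

```python
def estimated_monthly_savings(docs_per_month: int = 300,
                              minutes_saved_per_doc: float = 20,
                              hourly_rate: float = 50.0,
                              monthly_api_cost: float = 40.0) -> float:
    """Net dollar savings: drafting time avoided minus API spend."""
    hours_saved = docs_per_month * minutes_saved_per_doc / 60
    return hours_saved * hourly_rate - monthly_api_cost

# 300 docs × 20 min = 100 hours; × $50/hr = $5,000; − $40 API spend = $4,960/month
net = estimated_monthly_savings()
```

Swap in the agency's actual document counts and billing rates when producing the monthly report so the figure is defensible.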

Custom AI Components

Prompt Assembly Engine

Type: skill A Python module that dynamically assembles the complete LLM prompt from the system prompt template, client data extracted from the AMS, and document-type-specific instructions. This module handles data formatting, sanitization passthrough, and Jinja2 template rendering to produce a ready-to-send prompt for the LLM API. It is used as a Code Node in every n8n document generation workflow.

Implementation
python
# prompt_assembly.py — Core prompt assembly engine for insurance AI document generation
# Deploy as: n8n Code Node (Python) or standalone module

import json
import os
from datetime import datetime, date
from typing import Dict, Any, Optional, Tuple
from jinja2 import Environment, FileSystemLoader, BaseLoader

# --- Configuration ---
PROMPT_DIR = '/opt/insurance-ai/prompts'
TEMPLATE_DIR = '/opt/insurance-ai/templates'

# System prompts loaded at module initialization
SYSTEM_PROMPTS = {}
for doc_type in ['renewal_letter', 'declination_notice', 'risk_report']:
    prompt_path = os.path.join(PROMPT_DIR, 'system', f'{doc_type}.txt')
    if os.path.exists(prompt_path):
        with open(prompt_path, 'r') as f:
            SYSTEM_PROMPTS[doc_type] = f.read()

# User prompt templates
USER_PROMPT_TEMPLATES = {}
for doc_type in ['renewal', 'declination', 'risk_report']:
    template_path = os.path.join(PROMPT_DIR, 'user', f'{doc_type}_user_prompt.txt')
    if os.path.exists(template_path):
        with open(template_path, 'r') as f:
            USER_PROMPT_TEMPLATES[doc_type] = f.read()


def calculate_premium_change(current: float, renewal: float) -> Tuple[float, str]:
    """Calculate premium change percentage and direction."""
    if current == 0:
        return (0.0, 'new policy')
    change_pct = round(((renewal - current) / current) * 100, 1)
    if change_pct > 0:
        direction = 'increase'
    elif change_pct < 0:
        direction = 'decrease'
    else:
        direction = 'no change'
    return (change_pct, direction)


def format_currency(amount: Any) -> str:
    """Safely format a value as currency."""
    try:
        return f"{float(amount):,.2f}"
    except (ValueError, TypeError):
        return str(amount)


def assemble_renewal_prompt(client_data: Dict[str, Any], policy_data: Dict[str, Any], 
                            producer_notes: str = '') -> Dict[str, Any]:
    """
    Assemble a complete renewal letter prompt.
    
    Returns dict with 'system', 'user', 'model', 'temperature', 'max_tokens' keys
    ready for LLM API call.
    """
    current_premium = float(policy_data.get('current_premium', 0))
    renewal_premium = float(policy_data.get('renewal_premium', 0))
    change_pct, change_dir = calculate_premium_change(current_premium, renewal_premium)
    
    # Calculate relationship length
    years_as_client = 'N/A'
    if client_data.get('client_since'):
        try:
            since = datetime.strptime(client_data['client_since'], '%Y-%m-%d').date()
            years_as_client = (date.today() - since).days // 365
        except ValueError:
            pass
    
    # Render user prompt template with Jinja2
    env = Environment(loader=BaseLoader())
    template = env.from_string(USER_PROMPT_TEMPLATES.get('renewal', ''))
    
    user_prompt = template.render(
        client_name=client_data.get('display_name', 'Valued Client'),
        years_as_client=years_as_client,
        total_policies=client_data.get('total_policies', 1),
        policy_type=policy_data.get('policy_type', 'Insurance'),
        policy_number=policy_data.get('policy_number', ''),
        current_premium=format_currency(current_premium),
        renewal_premium=format_currency(renewal_premium),
        premium_change_pct=change_pct,
        coverage_summary=policy_data.get('coverage_summary', 'Standard coverage'),
        coverage_changes=policy_data.get('coverage_changes', 'None'),
        producer_name=policy_data.get('producer_name', ''),
        producer_notes=producer_notes or 'None'
    )
    
    return {
        'system': SYSTEM_PROMPTS.get('renewal_letter', ''),
        'user': user_prompt,
        'model': 'gpt-4.1-mini',
        'temperature': 0.3,
        'max_tokens': 500,
        'metadata': {
            'document_type': 'renewal_letter',
            'client_id': client_data.get('client_id'),
            'policy_number': policy_data.get('policy_number'),
            'prompt_version': 'v1.0',
            'assembled_at': datetime.now().isoformat()
        }
    }


def assemble_declination_prompt(client_data: Dict[str, Any], policy_data: Dict[str, Any],
                                 declination_data: Dict[str, Any]) -> Dict[str, Any]:
    """
    Assemble a complete declination notice prompt.
    
    declination_data should include:
      - notice_type: 'non-renewal' | 'cancellation' | 'declination'
      - reason_codes: list of standardized reason codes
      - reason_narrative: human-written explanation from underwriter/producer
      - effective_date: date the action takes effect
    """
    # Map reason codes to plain-language descriptions
    REASON_MAP = {
        'CLAIMS_FREQ': 'frequency of claims activity',
        'CLAIMS_SEV': 'severity of recent claims',
        'RISK_CHANGE': 'material change in risk characteristics',
        'CARRIER_EXIT': 'carrier withdrawal from this line of business or territory',
        'NONPAYMENT': 'non-payment of premium',
        'MATERIAL_MISREP': 'material misrepresentation on application',
        'UNDERWRITING': 'underwriting guidelines',
        'INSPECTION': 'results of property inspection',
        'OTHER': 'reasons detailed below'
    }
    
    reason_descriptions = [REASON_MAP.get(code, code) 
                          for code in declination_data.get('reason_codes', ['OTHER'])]
    
    user_prompt = f"""Draft a {declination_data.get('notice_type', 'non-renewal')} notice for the following:

CLIENT: {client_data.get('display_name', 'Client')}
STATE: {client_data.get('state', 'Unknown')}

POLICY:
- Type: {policy_data.get('policy_type', 'Insurance')}
- Policy Number: {policy_data.get('policy_number', '')}
- Effective Date of Action: {declination_data.get('effective_date', 'TBD')}

REASON(S) FOR {declination_data.get('notice_type', 'NON-RENEWAL').upper()}:
{chr(10).join('- ' + r for r in reason_descriptions)}

ADDITIONAL CONTEXT FROM PRODUCER/UNDERWRITER:
{declination_data.get('reason_narrative', 'No additional context provided.')}

IMPORTANT: Generate ONLY the body paragraphs and next-steps section. State-specific legal notices and disclosures are handled separately by the template system."""
    
    return {
        'system': SYSTEM_PROMPTS.get('declination_notice', ''),
        'user': user_prompt,
        'model': 'gpt-4.1-mini',
        'temperature': 0.2,  # Lower temperature for declinations
        'max_tokens': 400,
        'metadata': {
            'document_type': 'declination_notice',
            'notice_type': declination_data.get('notice_type'),
            'client_id': client_data.get('client_id'),
            'policy_number': policy_data.get('policy_number'),
            'reason_codes': declination_data.get('reason_codes', []),
            'prompt_version': 'v1.0',
            'assembled_at': datetime.now().isoformat()
        }
    }


def assemble_risk_report_prompt(client_data: Dict[str, Any], 
                                 policies: list, 
                                 claims_history: list) -> Dict[str, Any]:
    """
    Assemble a comprehensive risk management report prompt.
    Uses Claude Sonnet 4 for higher-quality analytical output.
    
    policies: list of all client policy records
    claims_history: list of claims in last 5 years
    """
    # Build policy summary
    policy_lines = []
    total_premium = 0
    for p in policies:
        premium = float(p.get('premium', 0))
        total_premium += premium
        policy_lines.append(
            f"- {p.get('policy_type', 'Unknown')}: Policy #{p.get('policy_number', 'N/A')}, "
            f"Premium ${format_currency(premium)}, "
            f"Limits: {p.get('limits_summary', 'See policy')}, "
            f"Deductible: {p.get('deductible', 'N/A')}, "
            f"Carrier: {p.get('carrier_name', 'N/A')}"
        )
    
    # Build claims summary
    claims_lines = []
    total_claims_paid = 0
    for c in claims_history:
        paid = float(c.get('amount_paid', 0))
        total_claims_paid += paid
        claims_lines.append(
            f"- {c.get('date', 'N/A')}: {c.get('type', 'Claim')} — "
            f"{c.get('description', 'No description')}, "
            f"Paid: ${format_currency(paid)}, Status: {c.get('status', 'Unknown')}"
        )
    
    if not claims_lines:
        claims_lines = ['- No claims in the past 5 years']
    
    user_prompt = f"""Prepare a comprehensive risk management report for the following client:

CLIENT PROFILE:
- Name: {client_data.get('display_name', 'Client')}
- Type: {client_data.get('client_type', 'Individual/Commercial')}
- Industry: {client_data.get('industry', 'N/A')}
- State: {client_data.get('state', 'Unknown')}
- Client Since: {client_data.get('client_since', 'N/A')}
- Number of Active Policies: {len(policies)}
- Total Annual Premium: ${format_currency(total_premium)}

CURRENT COVERAGE PORTFOLIO:
{chr(10).join(policy_lines)}

CLAIMS HISTORY (Last 5 Years):
Total Claims: {len(claims_history)}
Total Paid: ${format_currency(total_claims_paid)}
{chr(10).join(claims_lines)}

PRODUCER NOTES:
{client_data.get('producer_notes', 'No specific notes provided.')}

Generate the full risk management report with: Executive Summary, Current Coverage Overview, Identified Risk Gaps (with priority ratings), Recommendations, and Next Steps. Use Markdown formatting."""
    
    return {
        'system': SYSTEM_PROMPTS.get('risk_report', ''),
        'user': user_prompt,
        'model': 'claude-sonnet-4-20250514',  # Use Claude for risk reports
        'temperature': 0.4,
        'max_tokens': 2000,
        'metadata': {
            'document_type': 'risk_management_report',
            'client_id': client_data.get('client_id'),
            'policy_count': len(policies),
            'claims_count': len(claims_history),
            'prompt_version': 'v1.0',
            'assembled_at': datetime.now().isoformat()
        }
    }


# --- n8n Code Node Entry Point ---
# When used as an n8n Code Node, the incoming items contain
# client_data, policy_data, and document_type from previous nodes.

def main(items):
    """n8n Code Node entry point."""
    results = []
    for item in items:
        data = item.get('json', {})
        doc_type = data.get('document_type', 'renewal')
        
        if doc_type == 'renewal':
            prompt = assemble_renewal_prompt(
                client_data=data.get('client_data', {}),
                policy_data=data.get('policy_data', {}),
                producer_notes=data.get('producer_notes', '')
            )
        elif doc_type == 'declination':
            prompt = assemble_declination_prompt(
                client_data=data.get('client_data', {}),
                policy_data=data.get('policy_data', {}),
                declination_data=data.get('declination_data', {})
            )
        elif doc_type == 'risk_report':
            prompt = assemble_risk_report_prompt(
                client_data=data.get('client_data', {}),
                policies=data.get('policies', []),
                claims_history=data.get('claims_history', [])
            )
        else:
            prompt = {'error': f'Unknown document type: {doc_type}'}
        
        results.append({'json': prompt})
    
    return results
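A standalone sanity check of the premium-change math used by `assemble_renewal_prompt` (the helper is reproduced inline so the check runs without the module installed):

```python
from typing import Tuple

def calculate_premium_change(current: float, renewal: float) -> Tuple[float, str]:
    """Mirror of the module's helper: percent change plus direction label."""
    if current == 0:
        return (0.0, 'new policy')
    change_pct = round(((renewal - current) / current) * 100, 1)
    if change_pct > 0:
        return (change_pct, 'increase')
    if change_pct < 0:
        return (change_pct, 'decrease')
    return (change_pct, 'no change')
```

Checks like these belong in the MSP's deployment test suite, since a wrong percentage in a renewal letter is a client-facing error.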

Document Generator

Type: skill A Python module that takes the LLM-generated content and merges it with the Jinja2 document template to produce a formatted Microsoft Word (.docx) document. Handles letterhead insertion, compliance notices, state-specific language blocks, and professional formatting with the agency's branding. Used as a Code Node in n8n after the LLM API response is received.

Implementation:

document_generator.py
python
# document_generator.py — Generates formatted Word documents from AI output + templates
# Dependencies: python-docx, jinja2
# Install: pip install python-docx jinja2

import os
import re
from datetime import datetime, date
from typing import Dict, Any, Optional
from docx import Document
from docx.shared import Inches, Pt, Cm, RGBColor
from docx.enum.text import WD_ALIGN_PARAGRAPH
from docx.enum.style import WD_STYLE_TYPE
from jinja2 import Environment, FileSystemLoader
import markdown
import html

TEMPLATE_DIR = '/opt/insurance-ai/templates'
OUTPUT_DIR = '/opt/insurance-ai/output'

# Agency branding configuration (customize per client)
AGENCY_CONFIG = {
    'name': 'ABC Insurance Agency',
    'address_line1': '123 Main Street, Suite 200',
    'address_line2': 'Anytown, ST 12345',
    'phone': '(555) 123-4567',
    'email': 'info@abcinsurance.com',
    'website': 'www.abcinsurance.com',
    'logo_path': '/opt/insurance-ai/assets/agency_logo.png',
    'primary_color': RGBColor(0x1B, 0x3A, 0x5C),  # Dark blue
    'secondary_color': RGBColor(0x4A, 0x90, 0xD9),  # Light blue
}

AI_DISCLOSURE = (
    'This document was prepared with AI assistance and has been reviewed '
    'and approved by {reviewer_name}, a licensed insurance professional '
    '(License #{reviewer_license}), on {review_date}.'
)

# State-specific compliance notices for declination
STATE_DECLINATION_NOTICES = {
    'TX': (
        'TEXAS NOTICE: In accordance with Texas Insurance Code §551.104, '
        'this notice is being provided not later than the 10th day before '
        'the cancellation takes effect. You may contact the Texas Department '
        'of Insurance at 1-800-252-3439 or www.tdi.texas.gov.'
    ),
    'NY': (
        'NEW YORK NOTICE: You have the right to request a review of this '
        'decision. Contact the New York State Department of Financial '
        'Services at 1-800-342-3736 or www.dfs.ny.gov.'
    ),
    'CA': (
        'CALIFORNIA NOTICE: Per California Insurance Code §677 and §678, '
        'you are entitled to the specific reasons for this action in writing. '
        'Contact the California Department of Insurance at 1-800-927-4357.'
    ),
    'FL': (
        'FLORIDA NOTICE: Per Florida Statute §627.4133, you may request '
        'reconsideration of this action. Contact the Florida Office of '
        'Insurance Regulation at 1-877-693-5236.'
    ),
    # Add more states as needed during implementation
}


def create_letterhead(doc: Document) -> None:
    """Add agency letterhead to the document."""
    # Add logo if available
    if os.path.exists(AGENCY_CONFIG['logo_path']):
        header = doc.sections[0].header
        header_para = header.paragraphs[0]
        header_para.alignment = WD_ALIGN_PARAGRAPH.CENTER
        run = header_para.add_run()
        run.add_picture(AGENCY_CONFIG['logo_path'], width=Inches(2.0))
    
    # Agency name
    p = doc.add_paragraph()
    p.alignment = WD_ALIGN_PARAGRAPH.LEFT
    run = p.add_run(AGENCY_CONFIG['name'])
    run.bold = True
    run.font.size = Pt(14)
    run.font.color.rgb = AGENCY_CONFIG['primary_color']
    
    # Agency address
    p = doc.add_paragraph()
    p.alignment = WD_ALIGN_PARAGRAPH.LEFT
    run = p.add_run(f"{AGENCY_CONFIG['address_line1']}\n{AGENCY_CONFIG['address_line2']}")
    run.font.size = Pt(9)
    run.font.color.rgb = RGBColor(0x66, 0x66, 0x66)
    
    # Contact info
    run = p.add_run(f"\n{AGENCY_CONFIG['phone']} | {AGENCY_CONFIG['email']}")
    run.font.size = Pt(9)
    run.font.color.rgb = RGBColor(0x66, 0x66, 0x66)
    
    # Divider line
    p = doc.add_paragraph()
    p.paragraph_format.space_after = Pt(6)
    # Add a horizontal rule via bottom border
    from docx.oxml.ns import qn
    pPr = p._p.get_or_add_pPr()
    pBdr = pPr.makeelement(qn('w:pBdr'), {})
    bottom = pBdr.makeelement(qn('w:bottom'), {
        qn('w:val'): 'single',
        qn('w:sz'): '6',
        qn('w:space'): '1',
        qn('w:color'): '1B3A5C'
    })
    pBdr.append(bottom)
    pPr.append(pBdr)


def generate_renewal_letter(ai_content: str, client_data: Dict, 
                            policy_data: Dict, producer_data: Dict) -> str:
    """Generate a formatted renewal letter Word document."""
    doc = Document()
    
    # Set default font
    style = doc.styles['Normal']
    font = style.font
    font.name = 'Calibri'
    font.size = Pt(11)
    
    create_letterhead(doc)
    
    # Date
    doc.add_paragraph(datetime.now().strftime('%B %d, %Y'))
    
    # Client address block
    addr = doc.add_paragraph()
    addr.add_run(client_data.get('display_name', '') + '\n')
    addr.add_run(client_data.get('address_line1', '') + '\n')
    if client_data.get('address_line2'):
        addr.add_run(client_data['address_line2'] + '\n')
    addr.add_run(
        f"{client_data.get('city', '')}, {client_data.get('state', '')} "
        f"{client_data.get('zip', '')}"
    )
    
    # Subject line
    subj = doc.add_paragraph()
    run = subj.add_run(
        f"RE: Policy Renewal — {policy_data.get('policy_type', 'Insurance')} "
        f"Policy #{policy_data.get('policy_number', '')}"
    )
    run.bold = True
    
    # Salutation
    doc.add_paragraph(
        f"Dear {client_data.get('salutation', 'Valued')} "
        f"{client_data.get('last_name', 'Client')},"
    )
    
    # AI-generated body
    for para_text in ai_content.strip().split('\n\n'):
        if para_text.strip():
            doc.add_paragraph(para_text.strip())
    
    # Renewal summary table
    doc.add_paragraph()  # spacer
    summary_heading = doc.add_paragraph()
    run = summary_heading.add_run('Renewal Summary')
    run.bold = True
    run.font.size = Pt(12)
    
    table = doc.add_table(rows=6, cols=2)
    table.style = 'Light Shading Accent 1'
    
    summary_data = [
        ('Policy Type', policy_data.get('policy_type', '')),
        ('Current Premium', f"${policy_data.get('current_premium', 'N/A')}"),
        ('Renewal Premium', f"${policy_data.get('renewal_premium', 'N/A')}"),
        ('Premium Change', f"{policy_data.get('premium_change_pct', 0)}%"),
        ('Effective Date', policy_data.get('renewal_effective_date', '')),
        ('Key Coverages', policy_data.get('coverage_summary', '')),
    ]
    
    for i, (label, value) in enumerate(summary_data):
        table.rows[i].cells[0].text = label
        table.rows[i].cells[1].text = str(value)
        # Bold the labels
        for paragraph in table.rows[i].cells[0].paragraphs:
            for run in paragraph.runs:
                run.bold = True
    
    # Closing
    doc.add_paragraph()  # spacer
    doc.add_paragraph('Sincerely,')
    doc.add_paragraph()  # signature space
    
    sig = doc.add_paragraph()
    run = sig.add_run(producer_data.get('name', ''))
    run.bold = True
    sig.add_run(f"\n{producer_data.get('title', '')}")
    sig.add_run(f"\n{AGENCY_CONFIG['name']}")
    sig.add_run(f"\n{producer_data.get('phone', '')} | {producer_data.get('email', '')}")
    
    # AI Disclosure (small footer)
    doc.add_paragraph()  # spacer
    disclosure = doc.add_paragraph()
    run = disclosure.add_run(AI_DISCLOSURE.format(
        reviewer_name='[PENDING REVIEW]',
        reviewer_license='[PENDING]',
        review_date='[PENDING]'
    ))
    run.font.size = Pt(8)
    run.font.color.rgb = RGBColor(0x99, 0x99, 0x99)
    run.italic = True
    
    # Save document
    filename = (
        f"renewal_{policy_data.get('policy_number', 'unknown')}_"
        f"{datetime.now().strftime('%Y%m%d_%H%M%S')}.docx"
    )
    filepath = os.path.join(OUTPUT_DIR, filename)
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    doc.save(filepath)
    
    return filepath


def generate_declination_notice(ai_content: str, client_data: Dict,
                                 policy_data: Dict, declination_data: Dict,
                                 producer_data: Dict) -> str:
    """Generate a formatted declination notice Word document."""
    doc = Document()
    
    style = doc.styles['Normal']
    style.font.name = 'Calibri'
    style.font.size = Pt(11)
    
    create_letterhead(doc)
    
    # Date
    doc.add_paragraph(datetime.now().strftime('%B %d, %Y'))
    
    # Delivery method
    delivery = doc.add_paragraph()
    run = delivery.add_run(
        f"SENT VIA: {declination_data.get('delivery_method', 'First Class Mail')}"
    )
    run.bold = True
    run.font.size = Pt(10)
    
    # Client address
    addr = doc.add_paragraph()
    addr.add_run(client_data.get('display_name', '') + '\n')
    addr.add_run(client_data.get('address_line1', '') + '\n')
    if client_data.get('address_line2'):
        addr.add_run(client_data['address_line2'] + '\n')
    addr.add_run(
        f"{client_data.get('city', '')}, {client_data.get('state', '')} "
        f"{client_data.get('zip', '')}"
    )
    
    # Subject line
    notice_type = declination_data.get('notice_type', 'Non-Renewal').replace('_', ' ').title()
    subj = doc.add_paragraph()
    run = subj.add_run(
        f"RE: Notice of {notice_type} — "
        f"{policy_data.get('policy_type', 'Insurance')} "
        f"Policy #{policy_data.get('policy_number', '')}"
    )
    run.bold = True
    
    # Salutation
    doc.add_paragraph(
        f"Dear {client_data.get('salutation', 'Valued')} "
        f"{client_data.get('last_name', 'Client')},"
    )
    
    # AI-generated body (split on --- separator for body vs next-steps)
    sections = ai_content.split('---')
    body = sections[0].strip()  # split() always yields at least one element
    next_steps = sections[1].strip() if len(sections) > 1 else ''
    
    for para_text in body.split('\n\n'):
        if para_text.strip():
            doc.add_paragraph(para_text.strip())
    
    # State-specific notice (mandatory compliance language)
    client_state = client_data.get('state', '').upper()
    if client_state in STATE_DECLINATION_NOTICES:
        doc.add_paragraph()  # spacer
        state_notice = doc.add_paragraph()
        run = state_notice.add_run(STATE_DECLINATION_NOTICES[client_state])
        run.bold = True
        run.font.size = Pt(10)
    
    # Next steps
    if next_steps:
        doc.add_paragraph()  # spacer
        for para_text in next_steps.split('\n\n'):
            if para_text.strip():
                doc.add_paragraph(para_text.strip())
    
    # Closing & signature
    doc.add_paragraph()  # spacer
    doc.add_paragraph('Sincerely,')
    doc.add_paragraph()  # signature space
    sig = doc.add_paragraph()
    run = sig.add_run(producer_data.get('name', ''))
    run.bold = True
    sig.add_run(f"\n{producer_data.get('title', '')}")
    sig.add_run(f"\n{AGENCY_CONFIG['name']}")
    
    # AI Disclosure
    doc.add_paragraph()
    disclosure = doc.add_paragraph()
    run = disclosure.add_run(AI_DISCLOSURE.format(
        reviewer_name='[PENDING REVIEW]',
        reviewer_license='[PENDING]',
        review_date='[PENDING]'
    ))
    run.font.size = Pt(8)
    run.font.color.rgb = RGBColor(0x99, 0x99, 0x99)
    run.italic = True
    
    filename = (
        f"declination_{policy_data.get('policy_number', 'unknown')}_"
        f"{datetime.now().strftime('%Y%m%d_%H%M%S')}.docx"
    )
    filepath = os.path.join(OUTPUT_DIR, filename)
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    doc.save(filepath)
    
    return filepath


def generate_risk_report(ai_content: str, client_data: Dict,
                          producer_data: Dict) -> str:
    """Generate a formatted risk management report from Markdown AI output."""
    doc = Document()
    
    style = doc.styles['Normal']
    style.font.name = 'Calibri'
    style.font.size = Pt(11)
    
    create_letterhead(doc)
    
    # Report title
    title = doc.add_paragraph()
    title.alignment = WD_ALIGN_PARAGRAPH.CENTER
    run = title.add_run('Client Risk Management Report')
    run.bold = True
    run.font.size = Pt(18)
    run.font.color.rgb = AGENCY_CONFIG['primary_color']
    
    # Subtitle
    subtitle = doc.add_paragraph()
    subtitle.alignment = WD_ALIGN_PARAGRAPH.CENTER
    run = subtitle.add_run(
        f"Prepared for: {client_data.get('display_name', 'Client')}\n"
        f"Date: {datetime.now().strftime('%B %d, %Y')}\n"
        f"Prepared by: {producer_data.get('name', 'Agency')}"
    )
    run.font.size = Pt(11)
    run.font.color.rgb = RGBColor(0x66, 0x66, 0x66)
    
    doc.add_paragraph()  # spacer
    
    # Parse Markdown content into Word document
    for line in ai_content.split('\n'):
        line = line.strip()
        if not line:
            continue
        
        if line.startswith('## '):
            heading = doc.add_paragraph()
            run = heading.add_run(line[3:])
            run.bold = True
            run.font.size = Pt(14)
            run.font.color.rgb = AGENCY_CONFIG['primary_color']
        elif line.startswith('### '):
            heading = doc.add_paragraph()
            run = heading.add_run(line[4:])
            run.bold = True
            run.font.size = Pt(12)
        elif line.startswith('- **High'):
            p = doc.add_paragraph(style='List Bullet')
            run = p.add_run('🔴 ' + line[2:])
            run.font.size = Pt(11)
        elif line.startswith('- **Medium'):
            p = doc.add_paragraph(style='List Bullet')
            run = p.add_run('🟡 ' + line[2:])
            run.font.size = Pt(11)
        elif line.startswith('- **Low'):
            p = doc.add_paragraph(style='List Bullet')
            run = p.add_run('🟢 ' + line[2:])
            run.font.size = Pt(11)
        elif line.startswith('- '):
            doc.add_paragraph(line[2:], style='List Bullet')
        elif line.startswith('**') and line.endswith('**'):
            p = doc.add_paragraph()
            run = p.add_run(line.strip('*'))
            run.bold = True
        else:
            # Handle inline bold
            p = doc.add_paragraph()
            parts = re.split(r'(\*\*.*?\*\*)', line)
            for part in parts:
                if part.startswith('**') and part.endswith('**'):
                    run = p.add_run(part.strip('*'))
                    run.bold = True
                else:
                    p.add_run(part)
    
    # Disclaimer
    doc.add_paragraph()  # spacer
    disclaimer = doc.add_paragraph()
    run = disclaimer.add_run(
        'DISCLAIMER: This report is for informational purposes only and does not '
        'constitute a binding coverage analysis, legal advice, or guarantee of '
        'insurability. Coverage availability, terms, and pricing are subject to '
        'underwriting review by the applicable insurance carrier(s). Please '
        'consult with your insurance professional for specific coverage advice.'
    )
    run.font.size = Pt(9)
    run.italic = True
    run.font.color.rgb = RGBColor(0x99, 0x99, 0x99)
    
    # AI Disclosure
    disclosure = doc.add_paragraph()
    run = disclosure.add_run(AI_DISCLOSURE.format(
        reviewer_name='[PENDING REVIEW]',
        reviewer_license='[PENDING]',
        review_date='[PENDING]'
    ))
    run.font.size = Pt(8)
    run.font.color.rgb = RGBColor(0x99, 0x99, 0x99)
    run.italic = True
    
    filename = (
        f"risk_report_{client_data.get('client_id', 'unknown')}_"
        f"{datetime.now().strftime('%Y%m%d_%H%M%S')}.docx"
    )
    filepath = os.path.join(OUTPUT_DIR, filename)
    os.makedirs(OUTPUT_DIR, exist_ok=True)
    doc.save(filepath)
    
    return filepath


# --- n8n Code Node Entry Point ---
def main(items):
    results = []
    for item in items:
        data = item.get('json', {})
        doc_type = data.get('document_type', 'renewal')
        ai_content = data.get('ai_generated_content', '')
        
        if doc_type == 'renewal':
            filepath = generate_renewal_letter(
                ai_content=ai_content,
                client_data=data.get('client_data', {}),
                policy_data=data.get('policy_data', {}),
                producer_data=data.get('producer_data', {})
            )
        elif doc_type == 'declination':
            filepath = generate_declination_notice(
                ai_content=ai_content,
                client_data=data.get('client_data', {}),
                policy_data=data.get('policy_data', {}),
                declination_data=data.get('declination_data', {}),
                producer_data=data.get('producer_data', {})
            )
        elif doc_type == 'risk_report':
            filepath = generate_risk_report(
                ai_content=ai_content,
                client_data=data.get('client_data', {}),
                producer_data=data.get('producer_data', {})
            )
        else:
            filepath = None
        
        results.append({'json': {'filepath': filepath, 'document_type': doc_type}})
    
    return results
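For reference, a minimal example of the item shape `main()` expects from the upstream n8n nodes — the key names follow the dispatch code above; every value is illustrative:

```python
# Illustrative n8n item for the 'renewal' branch of main(); only keys the
# dispatch and renewal-letter code actually read are shown.
sample_item = {
    'json': {
        'document_type': 'renewal',  # 'renewal' | 'declination' | 'risk_report'
        'ai_generated_content': 'Paragraph one.\n\nParagraph two.',
        'client_data': {
            'display_name': 'Acme Manufacturing LLC',
            'address_line1': '100 Main St',
            'city': 'Sacramento', 'state': 'CA', 'zip': '95814',
            'salutation': 'Mr.', 'last_name': 'Rivera',
        },
        'policy_data': {
            'policy_number': 'BOP-123456',
            'policy_type': 'Business Owners Policy',
            'current_premium': '4,200', 'renewal_premium': '4,550',
            'premium_change_pct': 8.3,
            'renewal_effective_date': '2025-07-01',
            'coverage_summary': 'Property, GL, Business Income',
        },
        'producer_data': {
            'name': 'Jordan Lee', 'title': 'Commercial Lines Producer',
            'phone': '(555) 555-0100', 'email': 'jlee@example.com',
        },
    }
}
```

Declination items additionally carry a `declination_data` object (`notice_type`, `delivery_method`), and risk-report items omit `policy_data`.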

AMS Data Connector

Type: integration

A reusable integration module that abstracts the AMS API connection, providing a unified interface for extracting client, policy, claims, and producer data regardless of which AMS platform the agency uses (Applied Epic, AMS360, or HawkSoft). This lets the prompt-assembly and document-generation components stay AMS-agnostic.

Implementation:

ams_connector.py
python
# Unified AMS data extraction layer. Shown here: Applied Epic API and the
# HawkSoft CSV fallback; a Vertafore AMS360 connector can be added as another
# subclass of the same abstract interface.

import os
import json
import csv
import requests
from datetime import datetime, timedelta
from typing import Dict, Any, List, Optional
from abc import ABC, abstractmethod


class AMSConnector(ABC):
    """Abstract base class for AMS integrations."""
    
    @abstractmethod
    def get_client(self, client_id: str) -> Dict[str, Any]:
        pass
    
    @abstractmethod
    def get_policies(self, client_id: str) -> List[Dict[str, Any]]:
        pass
    
    @abstractmethod
    def get_upcoming_renewals(self, days_ahead: int = 60) -> List[Dict[str, Any]]:
        pass
    
    @abstractmethod
    def get_claims_history(self, client_id: str, years_back: int = 5) -> List[Dict[str, Any]]:
        pass
    
    @abstractmethod
    def get_producer(self, producer_id: str) -> Dict[str, Any]:
        pass
    
    @abstractmethod
    def log_activity(self, client_id: str, activity_type: str, description: str) -> bool:
        pass


class AppliedEpicConnector(AMSConnector):
    """Connector for Applied Epic REST API."""
    
    def __init__(self, base_url: str, client_id: str, client_secret: str):
        self.base_url = base_url.rstrip('/')
        self.client_id = client_id
        self.client_secret = client_secret
        self.access_token = None
        self.token_expiry = None
    
    def _get_token(self) -> str:
        """Obtain or refresh OAuth2 access token."""
        if self.access_token and self.token_expiry and datetime.now() < self.token_expiry:
            return self.access_token
        
        response = requests.post(
            f"{self.base_url}/oauth/token",
            data={
                'grant_type': 'client_credentials',
                'client_id': self.client_id,
                'client_secret': self.client_secret,
                'scope': 'clients.read policies.read activities.read activities.write producers.read claims.read'
            },
            headers={'Content-Type': 'application/x-www-form-urlencoded'}
        )
        response.raise_for_status()
        token_data = response.json()
        self.access_token = token_data['access_token']
        self.token_expiry = datetime.now() + timedelta(seconds=token_data.get('expires_in', 3600) - 60)
        return self.access_token
    
    def _api_get(self, endpoint: str, params: Optional[dict] = None) -> dict:
        """Make authenticated GET request to Applied Epic API."""
        token = self._get_token()
        response = requests.get(
            f"{self.base_url}{endpoint}",
            params=params,
            headers={
                'Authorization': f'Bearer {token}',
                'Accept': 'application/json'
            },
            timeout=30
        )
        response.raise_for_status()
        return response.json()
    
    def _api_post(self, endpoint: str, data: dict) -> dict:
        token = self._get_token()
        response = requests.post(
            f"{self.base_url}{endpoint}",
            json=data,
            headers={
                'Authorization': f'Bearer {token}',
                'Content-Type': 'application/json'
            },
            timeout=30
        )
        response.raise_for_status()
        return response.json()
    
    def get_client(self, client_id: str) -> Dict[str, Any]:
        raw = self._api_get(f'/v1/clients/{client_id}')
        return {
            'client_id': raw.get('clientId'),
            'display_name': raw.get('displayName', ''),
            'first_name': raw.get('firstName', ''),
            'last_name': raw.get('lastName', ''),
            'salutation': raw.get('salutation', 'Mr./Ms.'),
            'client_type': raw.get('clientType', ''),
            'industry': raw.get('industry', ''),
            'address_line1': raw.get('address', {}).get('line1', ''),
            'address_line2': raw.get('address', {}).get('line2', ''),
            'city': raw.get('address', {}).get('city', ''),
            'state': raw.get('address', {}).get('state', ''),
            'zip': raw.get('address', {}).get('zip', ''),
            'email': raw.get('email', ''),
            'phone': raw.get('phone', ''),
            'client_since': raw.get('clientSince', ''),
            'producer_id': raw.get('producerId', ''),
            'total_policies': raw.get('activePolicyCount', 0),
        }
    
    def get_policies(self, client_id: str) -> List[Dict[str, Any]]:
        raw = self._api_get(f'/v1/clients/{client_id}/policies', {'status': 'active'})
        policies = []
        for p in raw.get('items', []):
            policies.append({
                'policy_number': p.get('policyNumber', ''),
                'policy_type': p.get('lineOfBusiness', ''),
                'carrier_name': p.get('carrierName', ''),
                'premium': p.get('annualPremium', 0),
                'current_premium': p.get('annualPremium', 0),
                'renewal_premium': p.get('renewalPremium', p.get('annualPremium', 0)),
                'effective_date': p.get('effectiveDate', ''),
                'expiration_date': p.get('expirationDate', ''),
                'limits_summary': p.get('limitsSummary', ''),
                'deductible': p.get('deductible', ''),
                'coverage_summary': p.get('coverageSummary', ''),
                'producer_id': p.get('producerId', ''),
                'producer_name': p.get('producerName', ''),
                'status': p.get('status', ''),
            })
        return policies
    
    def get_upcoming_renewals(self, days_ahead: int = 60) -> List[Dict[str, Any]]:
        today = datetime.now().strftime('%Y-%m-%d')
        future = (datetime.now() + timedelta(days=days_ahead)).strftime('%Y-%m-%d')
        raw = self._api_get('/v1/policies', {
            'expirationDateFrom': today,
            'expirationDateTo': future,
            'status': 'active'
        })
        renewals = []
        for p in raw.get('items', []):
            renewals.append({
                'client_id': p.get('clientId'),
                'client_name': p.get('clientName', ''),
                'policy_number': p.get('policyNumber', ''),
                'policy_type': p.get('lineOfBusiness', ''),
                'expiration_date': p.get('expirationDate', ''),
                'current_premium': p.get('annualPremium', 0),
                'renewal_premium': p.get('renewalPremium', p.get('annualPremium', 0)),
                'producer_id': p.get('producerId', ''),
            })
        return renewals
    
    def get_claims_history(self, client_id: str, years_back: int = 5) -> List[Dict[str, Any]]:
        since = (datetime.now() - timedelta(days=365 * years_back)).strftime('%Y-%m-%d')
        raw = self._api_get(f'/v1/clients/{client_id}/claims', {'dateFrom': since})
        claims = []
        for c in raw.get('items', []):
            claims.append({
                'claim_number': c.get('claimNumber', ''),
                'date': c.get('lossDate', ''),
                'type': c.get('claimType', ''),
                'description': c.get('description', ''),
                'amount_paid': c.get('amountPaid', 0),
                'amount_reserved': c.get('amountReserved', 0),
                'status': c.get('status', ''),
                'policy_number': c.get('policyNumber', ''),
            })
        return claims
    
    def get_producer(self, producer_id: str) -> Dict[str, Any]:
        raw = self._api_get(f'/v1/producers/{producer_id}')
        return {
            'producer_id': raw.get('producerId', ''),
            'name': raw.get('displayName', ''),
            'title': raw.get('title', 'Insurance Professional'),
            'email': raw.get('email', ''),
            'phone': raw.get('phone', ''),
            'license_number': raw.get('licenseNumber', ''),
        }
    
    def log_activity(self, client_id: str, activity_type: str, description: str) -> bool:
        try:
            self._api_post(f'/v1/clients/{client_id}/activities', {
                'activityType': activity_type,
                'description': description,
                'date': datetime.now().strftime('%Y-%m-%dT%H:%M:%S'),
                'user': 'AI Document System'
            })
            return True
        except Exception as e:
            print(f'Failed to log AMS activity: {e}')
            return False


class HawkSoftCSVConnector(AMSConnector):
    """Fallback connector for HawkSoft using CSV exports."""
    
    def __init__(self, export_dir: str):
        self.export_dir = export_dir
        self._clients_cache = None
        self._policies_cache = None
    
    def _load_csv(self, filename: str) -> List[Dict]:
        filepath = os.path.join(self.export_dir, filename)
        if not os.path.exists(filepath):
            return []
        with open(filepath, 'r', encoding='utf-8-sig') as f:
            return list(csv.DictReader(f))
    
    def get_client(self, client_id: str) -> Dict[str, Any]:
        clients = self._load_csv('clients_export.csv')
        for c in clients:
            if c.get('ClientID') == client_id:
                return {
                    'client_id': c.get('ClientID'),
                    'display_name': f"{c.get('FirstName', '')} {c.get('LastName', '')}".strip(),
                    'first_name': c.get('FirstName', ''),
                    'last_name': c.get('LastName', ''),
                    'salutation': 'Mr./Ms.',
                    'address_line1': c.get('Address1', ''),
                    'address_line2': c.get('Address2', ''),
                    'city': c.get('City', ''),
                    'state': c.get('State', ''),
                    'zip': c.get('Zip', ''),
                    'email': c.get('Email', ''),
                    'phone': c.get('Phone', ''),
                    'client_since': c.get('ClientSince', ''),
                }
        return {}
    
    def get_policies(self, client_id: str) -> List[Dict[str, Any]]:
        policies = self._load_csv('policies_export.csv')
        return [{
            'policy_number': p.get('PolicyNumber', ''),
            'policy_type': p.get('LineOfBusiness', ''),
            # "or 0" guards against empty CSV cells, which read back as ''
            'premium': float(p.get('Premium') or 0),
            'current_premium': float(p.get('Premium') or 0),
            'expiration_date': p.get('ExpirationDate', ''),
            'carrier_name': p.get('Carrier', ''),
        } for p in policies if p.get('ClientID') == client_id]
    
    def get_upcoming_renewals(self, days_ahead: int = 60) -> List[Dict[str, Any]]:
        policies = self._load_csv('policies_export.csv')
        today = datetime.now().date()
        future = today + timedelta(days=days_ahead)
        renewals = []
        for p in policies:
            try:
                exp = datetime.strptime(p.get('ExpirationDate', ''), '%m/%d/%Y').date()
                if today <= exp <= future:
                    renewals.append({
                        'client_id': p.get('ClientID'),
                        'client_name': p.get('ClientName', ''),
                        'policy_number': p.get('PolicyNumber', ''),
                        'policy_type': p.get('LineOfBusiness', ''),
                        'expiration_date': p.get('ExpirationDate', ''),
                        'current_premium': float(p.get('Premium') or 0),
                    })
            except ValueError:
                continue
        return renewals
    
    def get_claims_history(self, client_id: str, years_back: int = 5) -> List[Dict]:
        claims = self._load_csv('claims_export.csv')
        return [c for c in claims if c.get('ClientID') == client_id]
    
    def get_producer(self, producer_id: str) -> Dict[str, Any]:
        # CSV exports carry no producer detail; return a generic placeholder
        return {
            'producer_id': producer_id,
            'name': 'Agency Producer',
            'title': 'Insurance Professional',
            'email': '',
            'phone': '',
        }
    
    def log_activity(self, client_id: str, activity_type: str, description: str) -> bool:
        # HawkSoft CSV fallback: write to a log file for manual import
        log_path = os.path.join(self.export_dir, 'ai_activities_log.csv')
        file_exists = os.path.exists(log_path)
        with open(log_path, 'a', newline='') as f:
            writer = csv.writer(f)
            if not file_exists:
                writer.writerow(['ClientID', 'ActivityType', 'Description', 'Date'])
            writer.writerow([client_id, activity_type, description, datetime.now().isoformat()])
        return True


def get_connector(ams_type: str, config: Dict) -> AMSConnector:
    """Factory function to create the appropriate AMS connector."""
    if ams_type.lower() in ('applied_epic', 'epic'):
        return AppliedEpicConnector(
            base_url=config['api_base_url'],
            client_id=config['client_id'],
            client_secret=config['client_secret']
        )
    elif ams_type.lower() in ('hawksoft', 'csv'):
        return HawkSoftCSVConnector(
            export_dir=config['export_dir']
        )
    else:
        raise ValueError(f'Unsupported AMS type: {ams_type}. Supported: applied_epic, hawksoft')
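In an n8n deployment, the factory's `ams_type` and `config` arguments typically come from environment variables or workflow credentials. A hedged sketch of that selection logic — the variable names here are illustrative, not part of any connector API:

```python
import os
from typing import Optional, Tuple

# AMS type aliases accepted by the factory sketch below
_SUPPORTED = {'applied_epic', 'epic', 'hawksoft', 'csv'}

def resolve_ams_config(env: Optional[dict] = None) -> Tuple[str, dict]:
    """Return an (ams_type, config) pair suitable for get_connector(),
    reading from the given mapping (defaults to os.environ)."""
    env = os.environ if env is None else env
    ams_type = env.get('AMS_TYPE', 'hawksoft').lower()
    if ams_type not in _SUPPORTED:
        raise ValueError(f'Unsupported AMS type: {ams_type}')
    if ams_type in ('applied_epic', 'epic'):
        config = {
            'api_base_url': env.get('EPIC_API_URL', ''),
            'client_id': env.get('EPIC_CLIENT_ID', ''),
            'client_secret': env.get('EPIC_CLIENT_SECRET', ''),
        }
    else:
        # Directory the agency's scheduled HawkSoft CSV exports land in
        config = {'export_dir': env.get('HAWKSOFT_EXPORT_DIR', '/data/ams_exports')}
    return ams_type, config
```

Keeping the resolution in one place means the n8n Code node can simply do `connector = get_connector(*resolve_ams_config())` and remain unchanged if the agency later migrates AMS platforms.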

Compliance Audit Logger

Type: workflow

A dedicated n8n sub-workflow that records every AI document generation event with the full audit-trail data required by the NAIC Model Bulletin. It logs the timestamp, client identifier, document type, model used, prompt version, input data hash (not the actual PII), tokens consumed, generation cost, reviewer identity, review decision, and final disposition. Data is written both to a SharePoint list (for agency access) and to a PostgreSQL table (for MSP reporting).
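The "input data hash (not the actual PII)" approach rests on deterministic serialization: hashing a canonical JSON form means the audit row can later be matched against a re-serialized copy of the inputs without ever storing the PII itself. A standalone illustration (the field names are made up):

```python
import hashlib
import json

def hash_input(data: dict) -> str:
    """SHA-256 over a canonical JSON serialization; sort_keys=True makes the
    digest independent of dict insertion order, and default=str handles
    dates and Decimals that json cannot serialize natively."""
    serialized = json.dumps(data, sort_keys=True, default=str)
    return hashlib.sha256(serialized.encode()).hexdigest()

a = hash_input({'client_id': 'C-1001', 'premium': 4200})
b = hash_input({'premium': 4200, 'client_id': 'C-1001'})  # same fields, other order
assert a == b and len(a) == 64  # 64 hex chars = 256 bits
```

The trade-off: a hash proves the inputs were (or were not) what a later reviewer claims, but it cannot reconstruct them, so the source records in the AMS remain the system of record.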

Implementation:

audit_logger.py
python
# Compliance audit logging for insurance AI document generation. Implements
# NAIC Model Bulletin requirements for AI governance and record retention.
# Includes dual-write to SharePoint and PostgreSQL, SHA-256 PII hashing,
# cost estimation, and a monthly bias monitoring SQL report.

import hashlib
import json
from datetime import datetime
from typing import Dict, Any, Optional
import requests


class AuditLogger:
    """
    Dual-write audit logger: SharePoint List + PostgreSQL.
    
    NAIC Model Bulletin requires:
    1. Oversight and approval documentation
    2. Data practices and accountability (lineage, quality)
    3. Validation, testing, and retesting records
    4. Privacy of non-public information
    5. Data and record retention (recommend 7 years for insurance)
    """
    
    def __init__(self, sharepoint_config: Dict, postgres_config: Optional[Dict] = None):
        self.sp_site_url = sharepoint_config['site_url']
        self.sp_list_name = sharepoint_config.get('list_name', 'AI Document Audit Log')
        self.sp_access_token = sharepoint_config.get('access_token', '')  # MS Graph token
        self.pg_config = postgres_config
        
        if postgres_config:
            # Lazy import: SharePoint-only deployments need not install psycopg2
            import psycopg2
            self.pg_conn = psycopg2.connect(**postgres_config)
            self._ensure_pg_table()
    
    def _ensure_pg_table(self):
        """Create audit table if it doesn't exist."""
        with self.pg_conn.cursor() as cur:
            cur.execute("""
                CREATE TABLE IF NOT EXISTS ai_document_audit (
                    id SERIAL PRIMARY KEY,
                    event_id VARCHAR(64) UNIQUE NOT NULL,
                    timestamp TIMESTAMPTZ NOT NULL DEFAULT NOW(),
                    agency_id VARCHAR(50),
                    client_id VARCHAR(50),
                    policy_number VARCHAR(50),
                    document_type VARCHAR(30) NOT NULL,
                    notice_type VARCHAR(30),
                    ai_model VARCHAR(50) NOT NULL,
                    prompt_version VARCHAR(20) NOT NULL,
                    input_data_hash VARCHAR(64) NOT NULL,
                    output_token_count INTEGER,
                    input_token_count INTEGER,
                    estimated_cost_usd DECIMAL(8,4),
                    generation_time_ms INTEGER,
                    document_url TEXT,
                    review_status VARCHAR(20) DEFAULT 'pending_review',
                    reviewer_name VARCHAR(100),
                    reviewer_license VARCHAR(50),
                    review_timestamp TIMESTAMPTZ,
                    review_comments TEXT,
                    final_disposition VARCHAR(20),
                    sent_timestamp TIMESTAMPTZ,
                    sent_to_email VARCHAR(200),
                    pii_fields_sanitized TEXT,
                    state_compliance_checks TEXT,
                    error_details TEXT,
                    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
                );
                
                CREATE INDEX IF NOT EXISTS idx_audit_client 
                    ON ai_document_audit(client_id, timestamp);
                CREATE INDEX IF NOT EXISTS idx_audit_type 
                    ON ai_document_audit(document_type, timestamp);
                CREATE INDEX IF NOT EXISTS idx_audit_review 
                    ON ai_document_audit(review_status);
            """)
            self.pg_conn.commit()
    
    def _hash_input_data(self, data: Dict) -> str:
        """Create SHA-256 hash of input data for audit without storing PII."""
        serialized = json.dumps(data, sort_keys=True, default=str)
        return hashlib.sha256(serialized.encode()).hexdigest()
    
    def _generate_event_id(self) -> str:
        """Generate a 16-hex-character event ID (64 random bits; collision
        risk is negligible at agency document volumes)."""
        import uuid
        return uuid.uuid4().hex[:16].upper()
    
    def log_generation(self, 
                       document_type: str,
                       client_id: str,
                       policy_number: str,
                       ai_model: str,
                       prompt_version: str,
                       input_data: Dict,
                       output_tokens: int,
                       input_tokens: int,
                       generation_time_ms: int,
                       document_url: str,
                       sanitized_fields: Optional[list] = None,
                       notice_type: Optional[str] = None,
                       agency_id: Optional[str] = None) -> str:
        """
        Log an AI document generation event.
        Returns the event_id for tracking through the review process.
        """
        event_id = self._generate_event_id()
        input_hash = self._hash_input_data(input_data)
        
        # Calculate estimated cost
        cost = self._estimate_cost(ai_model, input_tokens, output_tokens)
        
        record = {
            'event_id': event_id,
            'timestamp': datetime.now().isoformat(),
            'agency_id': agency_id or 'default',
            'client_id': client_id,
            'policy_number': policy_number,
            'document_type': document_type,
            'notice_type': notice_type,
            'ai_model': ai_model,
            'prompt_version': prompt_version,
            'input_data_hash': input_hash,
            'output_token_count': output_tokens,
            'input_token_count': input_tokens,
            'estimated_cost_usd': cost,
            'generation_time_ms': generation_time_ms,
            'document_url': document_url,
            'review_status': 'pending_review',
            'pii_fields_sanitized': json.dumps(sanitized_fields or []),
        }
        
        # Write to SharePoint
        self._write_sharepoint(record)
        
        # Write to PostgreSQL
        if self.pg_config:
            self._write_postgres(record)
        
        return event_id
    
    def log_review(self, event_id: str, reviewer_name: str, 
                   reviewer_license: str, decision: str, comments: str = ''):
        """Log the human review decision for an AI-generated document."""
        update = {
            'review_status': decision,  # 'approved', 'rejected', 'revision_requested'
            'reviewer_name': reviewer_name,
            'reviewer_license': reviewer_license,
            'review_timestamp': datetime.now().isoformat(),
            'review_comments': comments,
            'final_disposition': decision,
        }
        
        self._update_sharepoint(event_id, update)
        if self.pg_config:
            self._update_postgres(event_id, update)
    
    def log_delivery(self, event_id: str, sent_to_email: str):
        """Log the final delivery of an approved document."""
        update = {
            'sent_timestamp': datetime.now().isoformat(),
            'sent_to_email': sent_to_email,
            'final_disposition': 'delivered',
        }
        self._update_sharepoint(event_id, update)
        if self.pg_config:
            self._update_postgres(event_id, update)
    
    def _estimate_cost(self, model: str, input_tokens: int, output_tokens: int) -> float:
        """Estimate API cost based on model pricing."""
        pricing = {
            'gpt-4.1-mini': {'input': 0.40, 'output': 1.60},  # per 1M tokens
            'gpt-4.1': {'input': 2.00, 'output': 8.00},
            'gpt-4.1-nano': {'input': 0.02, 'output': 0.08},
            'claude-sonnet-4-20250514': {'input': 3.00, 'output': 15.00},
            'claude-haiku-4.5': {'input': 1.00, 'output': 5.00},
        }
        rates = pricing.get(model, {'input': 2.00, 'output': 8.00})
        cost = (input_tokens / 1_000_000 * rates['input']) + \
               (output_tokens / 1_000_000 * rates['output'])
        return round(cost, 4)
    
    def _write_sharepoint(self, record: Dict):
        """Write audit record to a SharePoint list via the SharePoint REST API."""
        try:
            # SharePoint REST API: create a list item
            url = f"{self.sp_site_url}/_api/lists/getbytitle('{self.sp_list_name}')/items"
            headers = {
                'Authorization': f'Bearer {self.sp_access_token}',
                'Content-Type': 'application/json',
                'Accept': 'application/json'
            }
            # Map record to SharePoint column names
            sp_data = {'fields': {
                'Title': record['event_id'],
                'ClientID': record.get('client_id', ''),
                'PolicyNumber': record.get('policy_number', ''),
                'DocumentType': record.get('document_type', ''),
                'AIModel': record.get('ai_model', ''),
                'PromptVersion': record.get('prompt_version', ''),
                'TokensUsed': record.get('output_token_count', 0),
                'EstimatedCost': str(record.get('estimated_cost_usd', 0)),
                'DocumentURL': record.get('document_url', ''),
                'ReviewStatus': record.get('review_status', 'pending_review'),
                'GeneratedDate': record.get('timestamp', ''),
            }}
            resp = requests.post(url, json=sp_data, headers=headers, timeout=15)
            resp.raise_for_status()  # surface HTTP errors instead of failing silently
        except Exception as e:
            print(f'SharePoint audit write failed: {e}')
    
    def _update_sharepoint(self, event_id: str, updates: Dict):
        """Update existing SharePoint list item."""
        # Implementation: query the list item by Title (event_id), then update it
        # via the SharePoint REST API (POST with 'IF-MATCH: *' and
        # 'X-HTTP-Method: MERGE' headers)
        pass
    
    def _write_postgres(self, record: Dict):
        """Write audit record to PostgreSQL."""
        try:
            with self.pg_conn.cursor() as cur:
                columns = ', '.join(record.keys())
                placeholders = ', '.join(['%s'] * len(record))
                cur.execute(
                    f'INSERT INTO ai_document_audit ({columns}) VALUES ({placeholders})',
                    list(record.values())
                )
                self.pg_conn.commit()
        except Exception as e:
            print(f'PostgreSQL audit write failed: {e}')
            self.pg_conn.rollback()
    
    def _update_postgres(self, event_id: str, updates: Dict):
        """Update existing PostgreSQL audit record."""
        try:
            with self.pg_conn.cursor() as cur:
                set_clause = ', '.join([f"{k} = %s" for k in updates.keys()])
                cur.execute(
                    f'UPDATE ai_document_audit SET {set_clause} WHERE event_id = %s',
                    list(updates.values()) + [event_id]
                )
                self.pg_conn.commit()
        except Exception as e:
            print(f'PostgreSQL audit update failed: {e}')
            self.pg_conn.rollback()


# Bias monitoring query (run monthly)
BIAS_MONITORING_SQL = """
-- Monthly declination bias report
-- Flags potential disparate impact patterns
SELECT 
    DATE_TRUNC('month', timestamp) as month,
    document_type,
    notice_type,
    COUNT(*) as total_notices,
    COUNT(CASE WHEN final_disposition = 'delivered' THEN 1 END) as delivered,
    COUNT(CASE WHEN final_disposition = 'rejected' THEN 1 END) as rejected_by_reviewer
FROM ai_document_audit
WHERE document_type = 'declination_notice'
  AND timestamp > NOW() - INTERVAL '6 months'
GROUP BY 1, 2, 3
ORDER BY 1 DESC, 3;
"""

State Compliance Rules Engine

Type: skill. A configuration-driven rules engine that enforces state-specific regulatory requirements for insurance correspondence. It validates that declination notices include the required language, meet timing requirements, and list the correct state regulatory contact information, and it returns validation results and injects the required compliance blocks into document templates.

Implementation:

state_compliance.py
python
# state_compliance.py — State-specific insurance compliance rules engine
# Validates and enforces regulatory requirements for AI-generated correspondence

from datetime import datetime, date
from typing import Dict, List, Tuple

# State compliance rules database
# Expand this as the agency operates in more states
STATE_RULES = {
    'TX': {
        'name': 'Texas',
        'regulator': 'Texas Department of Insurance',
        'regulator_phone': '1-800-252-3439',
        'regulator_url': 'www.tdi.texas.gov',
        'declination': {
            'min_notice_days': 10,  # TIC §551.104
            'required_delivery': 'mail',
            'reason_required': True,
            'appeal_rights_required': True,
            'statutory_reference': 'Texas Insurance Code §551.104',
            'required_language': (
                'In accordance with Texas Insurance Code §551.104, this notice is being '
                'provided not later than the 10th day before the action takes effect. '
                'You may contact the Texas Department of Insurance at 1-800-252-3439 '
                'or visit www.tdi.texas.gov for assistance.'
            ),
        },
        'renewal': {
            'advance_notice_days': 30,
            'premium_change_disclosure': True,
        }
    },
    'NY': {
        'name': 'New York',
        'regulator': 'New York State Department of Financial Services',
        'regulator_phone': '1-800-342-3736',
        'regulator_url': 'www.dfs.ny.gov',
        'declination': {
            'min_notice_days': 15,
            'required_delivery': 'mail',
            'reason_required': True,
            'appeal_rights_required': True,
            'statutory_reference': 'NY Insurance Law §3425',
            'required_language': (
                'You have the right to request a review of this decision and to seek '
                'alternative coverage. Contact the New York State Department of Financial '
                'Services at 1-800-342-3736 or visit www.dfs.ny.gov. You may also be '
                'eligible for coverage through the New York Property Insurance Underwriting '
                'Association (NYPIUA) or other residual market mechanisms.'
            ),
        },
        'renewal': {
            'advance_notice_days': 45,
            'premium_change_disclosure': True,
        },
        'ai_requirements': {
            'bias_testing_required': True,  # DFS Circular Letter 2024-7
            'disparate_impact_review': True,
            'ai_disclosure_required': True,
        }
    },
    'CA': {
        'name': 'California',
        'regulator': 'California Department of Insurance',
        'regulator_phone': '1-800-927-4357',
        'regulator_url': 'www.insurance.ca.gov',
        'declination': {
            'min_notice_days': 20,
            'required_delivery': 'mail',
            'reason_required': True,
            'appeal_rights_required': True,
            'statutory_reference': 'California Insurance Code §677-678',
            'required_language': (
                'Per California Insurance Code §677, you are entitled to the specific '
                'reasons for this action in writing. You may contact the California '
                'Department of Insurance at 1-800-927-4357 or visit '
                'www.insurance.ca.gov for assistance. You may also be eligible for '
                'coverage through the California FAIR Plan.'
            ),
        },
        'renewal': {
            'advance_notice_days': 45,
            'premium_change_disclosure': True,
        }
    },
    'FL': {
        'name': 'Florida',
        'regulator': 'Florida Office of Insurance Regulation',
        'regulator_phone': '1-877-693-5236',
        'regulator_url': 'www.floir.com',
        'declination': {
            'min_notice_days': 10,
            'required_delivery': 'mail',
            'reason_required': True,
            'appeal_rights_required': True,
            'statutory_reference': 'Florida Statute §627.4133',
            'required_language': (
                'Per Florida Statute §627.4133, you may request reconsideration '
                'of this action. Contact the Florida Office of Insurance Regulation '
                'at 1-877-693-5236 or visit www.floir.com.'
            ),
        },
        'renewal': {
            'advance_notice_days': 45,
            'premium_change_disclosure': True,
        }
    },
    'CO': {
        'name': 'Colorado',
        'regulator': 'Colorado Division of Insurance',
        'regulator_phone': '1-800-930-3745',
        'regulator_url': 'doi.colorado.gov',
        'declination': {
            'min_notice_days': 30,
            'required_delivery': 'mail',
            'reason_required': True,
            'appeal_rights_required': True,
            'statutory_reference': 'C.R.S. §10-4-109.7',
            'required_language': (
                'You may contact the Colorado Division of Insurance at 1-800-930-3745 '
                'or visit doi.colorado.gov for assistance with this matter.'
            ),
        },
        'ai_requirements': {
            'bias_testing_required': True,  # SB 21-169
            'external_data_audit': True,  # C.R.S. §10-3-1104.9
            'ai_disclosure_required': True,
        }
    },
    # DEFAULT — used for states not explicitly configured
    'DEFAULT': {
        'name': 'Default',
        'regulator': 'State Department of Insurance',
        'regulator_phone': 'Contact your state DOI',
        'regulator_url': '',
        'declination': {
            'min_notice_days': 30,
            'required_delivery': 'mail',
            'reason_required': True,
            'appeal_rights_required': True,
            'required_language': (
                'You may contact your state Department of Insurance '
                'for information about your rights and alternative coverage options.'
            ),
        },
        'renewal': {
            'advance_notice_days': 30,
            'premium_change_disclosure': True,
        }
    }
}


def get_state_rules(state_code: str) -> Dict:
    """Get compliance rules for a state, falling back to defaults."""
    return STATE_RULES.get(state_code.upper(), STATE_RULES['DEFAULT'])


def validate_declination_timing(state_code: str, effective_date_str: str) -> Tuple[bool, str]:
    """
    Validate that there is sufficient time to send a declination notice
    before the effective date, per state requirements.
    
    Returns: (is_valid, message)
    """
    rules = get_state_rules(state_code)
    min_days = rules['declination']['min_notice_days']
    
    try:
        effective_date = datetime.strptime(effective_date_str, '%Y-%m-%d').date()
    except ValueError:
        return (False, f'Invalid date format: {effective_date_str}. Expected YYYY-MM-DD.')
    
    today = date.today()
    days_until = (effective_date - today).days
    
    if days_until < min_days:
        return (
            False,
            f'COMPLIANCE VIOLATION: {rules["name"]} requires at least {min_days} days '
            f'notice before effective date. Only {days_until} days remain. '
            f'Effective date: {effective_date_str}. '
            f'Reference: {rules["declination"].get("statutory_reference", "State regulations")}. '
            f'ACTION REQUIRED: Consult with agency compliance officer before proceeding.'
        )
    
    if days_until < min_days + 5:  # Warning zone
        return (
            True,
            f'WARNING: Only {days_until} days until effective date. '
            f'{rules["name"]} requires {min_days} days minimum. '
            f'Expedite review and mailing immediately.'
        )
    
    return (True, f'Timing OK: {days_until} days until effective date (minimum: {min_days}).')


def get_required_declination_language(state_code: str) -> str:
    """Get the mandatory regulatory language for a declination notice."""
    rules = get_state_rules(state_code)
    return rules['declination'].get('required_language', '')


def validate_declination_content(state_code: str, ai_content: str) -> List[Dict[str, str]]:
    """
    Validate AI-generated declination content against compliance rules.
    Returns list of issues found.
    """
    issues = []
    rules = get_state_rules(state_code)
    content_lower = ai_content.lower()
    
    # Check for potentially discriminatory language
    PROHIBITED_TERMS = [
        'neighborhood', 'area demographics', 'credit score', 'credit history',
        'marital status', 'gender', 'race', 'ethnicity', 'religion',
        'national origin', 'disability', 'age-related',
        'genetic', 'sexual orientation'
    ]
    
    for term in PROHIBITED_TERMS:
        if term in content_lower:
            issues.append({
                'severity': 'HIGH',
                'type': 'prohibited_language',
                'detail': f'Potentially discriminatory term found: "{term}". '
                          f'Review under NAIC Model Bulletin and state unfair trade practices laws.'
            })
    
    # Check for fabricated specifics
    FABRICATION_INDICATORS = [
        'studies show', 'research indicates', 'statistics demonstrate',
        'according to data', 'industry average', 'benchmarks suggest'
    ]
    
    for indicator in FABRICATION_INDICATORS:
        if indicator in content_lower:
            issues.append({
                'severity': 'MEDIUM',
                'type': 'potential_fabrication',
                'detail': f'AI may have fabricated a claim: "{indicator}" found. '
                          f'Verify all factual claims are sourced from provided data.'
            })
    
    # Check for coverage guarantees
    GUARANTEE_TERMS = [
        'we guarantee', 'we promise', 'you will definitely',
        'coverage is assured', 'we can certainly'
    ]
    
    for term in GUARANTEE_TERMS:
        if term in content_lower:
            issues.append({
                'severity': 'HIGH',
                'type': 'improper_guarantee',
                'detail': f'AI generated a potential coverage guarantee: "{term}". '
                          f'Remove — agents cannot guarantee coverage availability.'
            })
    
    # State-specific AI requirements
    if rules.get('ai_requirements', {}).get('ai_disclosure_required'):
        issues.append({
            'severity': 'INFO',
            'type': 'state_ai_disclosure',
            'detail': f'{rules["name"]} requires AI disclosure on generated documents. '
                      f'Ensure AI disclosure notice is included in the template.'
        })
    
    return issues


def get_compliance_checklist(state_code: str, document_type: str) -> List[str]:
    """Generate a compliance checklist for the reviewer."""
    rules = get_state_rules(state_code)
    checklist = []
    
    if document_type == 'declination':
        checklist.extend([
            f'☐ Notice complies with {rules["name"]} {rules["declination"].get("min_notice_days", 30)}-day minimum notice requirement',
            f'☐ Delivery method: {rules["declination"].get("required_delivery", "mail")}',
            '☐ Reason for action is clearly stated and matches underwriting file',
            '☐ No discriminatory language or protected-class references',
            '☐ State regulatory contact information is included',
            '☐ Alternative coverage options mentioned (FAIR plan, assigned risk)',
            '☐ AI disclosure notice is present',
            '☐ Reviewed by licensed agent/principal (not just CSR)',
        ])
        if rules.get('ai_requirements', {}).get('bias_testing_required'):
            checklist.append(
                f'☐ {rules["name"]} AI bias review: Confirm this declination '
                f'does not create disparate impact on protected classes'
            )
    elif document_type == 'renewal':
        checklist.extend([
            '☐ Premium amounts match AMS/carrier renewal offer',
            '☐ Coverage summary is accurate — no fabricated details',
            '☐ No coverage guarantees or binding representations',
            '☐ Tone is appropriate for premium change direction',
            '☐ AI disclosure notice is present',
            '☐ Call-to-action directs client to schedule review with producer',
        ])
    elif document_type == 'risk_report':
        checklist.extend([
            '☐ All policy data matches current AMS records',
            '☐ Claims history is accurate and complete',
            '☐ Coverage gap analysis is reasonable — no fabricated gaps',
            '☐ No specific premium quotes or coverage guarantees',
            '☐ Informational disclaimer is present',
            '☐ AI disclosure notice is present',
            '☐ Recommendations are actionable and appropriate for client',
        ])
    
    return checklist


# n8n Code Node entry point
def main(items):
    results = []
    for item in items:
        data = item.get('json', {})
        state = data.get('client_state', 'DEFAULT')
        doc_type = data.get('document_type', 'renewal')
        
        result = {
            'state_rules': get_state_rules(state),
            'checklist': get_compliance_checklist(state, doc_type),
        }
        
        if doc_type == 'declination':
            timing_valid, timing_msg = validate_declination_timing(
                state, data.get('effective_date', '')
            )
            content_issues = validate_declination_content(
                state, data.get('ai_content', '')
            )
            result['timing_valid'] = timing_valid
            result['timing_message'] = timing_msg
            result['content_issues'] = content_issues
            result['required_state_language'] = get_required_declination_language(state)
            
            # Block generation if timing is invalid
            if not timing_valid:
                result['block_generation'] = True
                result['block_reason'] = timing_msg
        
        results.append({'json': result})
    
    return results
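The timing check at the heart of `validate_declination_timing` reduces to simple date arithmetic. The sketch below isolates that logic with a pinned reference date so the outcomes are reproducible (the real function uses `date.today()`); `timing_ok` and the trimmed `MIN_NOTICE_DAYS` table are illustrative, not part of the module above:

```python
# Standalone sketch of the declination timing check: compare the days
# remaining before the effective date against the state minimum, with a
# fixed "today" for reproducibility.
from datetime import date

MIN_NOTICE_DAYS = {'TX': 10, 'NY': 15, 'CA': 20, 'FL': 10, 'CO': 30}

def timing_ok(state: str, effective: date, today: date) -> bool:
    min_days = MIN_NOTICE_DAYS.get(state.upper(), 30)  # DEFAULT fallback
    return (effective - today).days >= min_days

today = date(2025, 6, 1)
print(timing_ok('TX', date(2025, 6, 5), today))   # 4 days < 10  -> False
print(timing_ok('TX', date(2025, 7, 1), today))   # 30 days >= 10 -> True
print(timing_ok('CO', date(2025, 6, 20), today))  # 19 days < 30 -> False
```

Note that the unknown-state fallback of 30 days is the most conservative value in the table, which is the safe default when a state has not yet been configured.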

Declination Request Web Form

Type: agent. A lightweight web-based form that agency staff use to initiate declination notice generation. Built as an n8n Form Trigger, it collects the required information (client ID, policy number, reason codes, narrative, effective date) and submits it to the declination workflow, providing a user-friendly interface without requiring custom web development.

Implementation

Form Configuration (n8n Form Trigger Node)

n8n Form Trigger node configuration payload
json
{
  "formTitle": "AI Declination Notice Request",
  "formDescription": "Submit a request to generate an AI-drafted declination, non-renewal, or cancellation notice. All generated notices require licensed agent review before delivery.",
  "formFields": [
    {
      "fieldLabel": "Your Email",
      "fieldType": "email",
      "requiredField": true,
      "placeholder": "producer@agency.com"
    },
    {
      "fieldLabel": "Client ID (from AMS)",
      "fieldType": "text",
      "requiredField": true,
      "placeholder": "e.g., CLI-2024-00123"
    },
    {
      "fieldLabel": "Policy Number",
      "fieldType": "text",
      "requiredField": true,
      "placeholder": "e.g., HO-2024-001"
    },
    {
      "fieldLabel": "Notice Type",
      "fieldType": "dropdown",
      "requiredField": true,
      "fieldOptions": {
        "values": [
          {"option": "Non-Renewal (policy will not be renewed at expiration)"},
          {"option": "Cancellation (policy terminated before expiration)"},
          {"option": "Declination (new application declined)"}
        ]
      }
    },
    {
      "fieldLabel": "Reason(s) for Action (select all that apply)",
      "fieldType": "text",
      "requiredField": true,
      "placeholder": "e.g., Claims frequency, Carrier exit from market, Underwriting guidelines"
    },
    {
      "fieldLabel": "Detailed Explanation",
      "fieldType": "textarea",
      "requiredField": true,
      "placeholder": "Provide the specific reason(s) from the underwriter or carrier. Be factual — the AI will use this text to draft the notice."
    },
    {
      "fieldLabel": "Effective Date of Action",
      "fieldType": "date",
      "requiredField": true
    },
    {
      "fieldLabel": "Priority",
      "fieldType": "dropdown",
      "requiredField": true,
      "fieldOptions": {
        "values": [
          {"option": "Standard (review within 48 hours)"},
          {"option": "Urgent (review within 24 hours)"},
          {"option": "Emergency (review ASAP — compliance deadline imminent)"}
        ]
      }
    }
  ],
  "respondMode": "lastNode",
  "formSubmittedText": "✅ Your declination notice request has been submitted. The AI will draft the notice and route it for licensed agent review. You will receive a Teams notification when the draft is ready. Document ID: {{$json.documentId}}"
}

Deployment

1. After saving the workflow, n8n generates a form URL: https://<n8n-instance>/form/<workflow-id>
2. Share this URL with agency staff as a browser bookmark or Teams tab
3. Optionally embed the form in a SharePoint page for a more branded experience
4. The form URL can also be added as a button in the agency's AMS dashboard (if customizable)

Post-Submission Processing

1. Validates the effective date against state timing requirements (via the State Compliance Rules Engine)
2. If timing is invalid, immediately returns an error to the submitter
3. If valid, fetches client/policy data from the AMS, assembles the prompt, calls the LLM, and generates the document
4. Routes the draft to the review queue and notifies the submitter and designated reviewer via Teams
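Before any of those steps run, the workflow has to normalize the form's dropdown text into a machine-readable notice-type code. A small illustrative sketch of that first parsing step; the `parse_notice_type` helper and the code values are assumptions, not part of the shipped workflow:

```python
# Illustrative sketch: normalize the form's dropdown option text into a
# notice-type code before compliance validation. Option strings match the
# form config above; the mapped codes are hypothetical.
NOTICE_TYPE_MAP = {
    'Non-Renewal': 'non_renewal',
    'Cancellation': 'cancellation',
    'Declination': 'declination',
}

def parse_notice_type(option_text: str) -> str:
    # Options read like 'Non-Renewal (policy will not be renewed at expiration)';
    # strip the parenthetical explanation, then look up the code.
    label = option_text.split('(')[0].strip()
    return NOTICE_TYPE_MAP.get(label, 'unknown')

print(parse_notice_type('Non-Renewal (policy will not be renewed at expiration)'))
# -> non_renewal
```

Mapping to a fixed code here (rather than passing free text downstream) keeps the audit log's `notice_type` column consistent and makes the bias-monitoring SQL groupings reliable.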

Testing & Validation

  • TEST 1 — AMS API Connectivity: Execute a manual API call to the AMS (Applied Epic/AMS360) to retrieve 5 client records and 5 policy records. Verify all required fields are present and correctly mapped: client name, address, state, policy number, policy type, premium amounts, expiration dates, producer ID. Document any missing fields and create fallback values.
  • TEST 2 — LLM API Connectivity & Response Quality: Send 3 test prompts to both OpenAI GPT-4.1 Mini and Anthropic Claude Sonnet 4 using the system prompts developed in Step 5. Verify: (a) API returns 200 status, (b) response completes within 10 seconds, (c) output follows the format instructions (no extraneous JSON, no greetings/sign-offs for renewal/declination, Markdown for risk reports), (d) output word count is within expected range.
  • TEST 3 — PII Sanitization: Create a test client record containing SSN (123-45-6789), date of birth, phone numbers in notes fields, and email addresses in free text. Pass through the sanitize_for_llm() function. Verify: SSN is completely removed, DOB is converted to age range, phone numbers and emails in text fields are redacted with [REDACTED-*] markers. Confirm the sanitized output is what gets sent to the LLM API by inspecting the n8n execution log.
  • TEST 4 — Renewal Letter End-to-End: Trigger the renewal letter workflow manually with a known test client (create a test record in AMS if needed). Verify the complete pipeline: AMS data extraction → PII sanitization → prompt assembly → LLM API call → Word document generation → SharePoint upload → Teams notification. Open the generated .docx and verify: agency letterhead present, client address correct, policy details accurate, AI-generated body is professional and appropriate for the premium change direction, renewal summary table populated, AI disclosure present.
  • TEST 5 — Declination Notice Compliance Validation: Submit a declination request via the web form for a client in Texas with an effective date only 5 days away (should trigger compliance violation). Verify the workflow blocks generation and returns an error about the 10-day minimum notice requirement. Then submit with an effective date 30 days away and verify the notice generates correctly with Texas-specific regulatory language included.
  • TEST 6 — Declination Content Safety: Generate 10 declination notices with varied reason codes and verify NONE contain: (a) discriminatory language referencing protected classes, (b) fabricated statistics or unsourced claims, (c) coverage guarantees, (d) internal underwriting scores or carrier proprietary information. Run each through the validate_declination_content() function and verify it catches any issues.
  • TEST 7 — Risk Management Report Quality: Generate risk reports for 3 test clients with varying portfolio complexity (1 policy, 3 policies, 6+ policies with claims history). Verify: (a) Claude Sonnet 4 is used (not GPT-4.1 Mini), (b) report includes all required sections (Executive Summary, Coverage Overview, Risk Gaps, Recommendations, Next Steps), (c) risk priorities are assigned (High/Medium/Low), (d) disclaimer is present, (e) no fabricated coverage types or premium estimates appear.
  • TEST 8 — Human Review Queue: Generate 5 documents of mixed types and verify they all appear in the SharePoint 'AI Document Review Queue' list with status 'Pending Review'. Test the Teams Adaptive Card approval flow: approve 2 documents, reject 1, request revision on 1. Verify SharePoint list updates correctly with reviewer name, decision, timestamp, and comments. Verify approved documents move to the 'Approved' folder.
  • TEST 9 — Email Delivery: After approving a renewal letter in the review queue, verify the automated email delivery: (a) PDF attachment generated from .docx, (b) email sent from correct shared mailbox, (c) client email address is correct, (d) producer is CC'd, (e) email subject line includes policy number, (f) AMS activity record is created. Check email deliverability (not in spam).
  • TEST 10 — Audit Trail Completeness: After processing 10 test documents through the full pipeline, query the audit log (SharePoint list and/or PostgreSQL). Verify every record contains: event_id, timestamp, client_id (no PII), policy_number, document_type, ai_model, prompt_version, input_data_hash (NOT actual PII), token counts, estimated cost, review_status, reviewer_name, and final_disposition. Verify records for delivered documents include sent_timestamp and sent_to_email.
  • TEST 11 — Cost Tracking: After generating 20+ test documents, check the OpenAI and Anthropic usage dashboards. Verify actual token consumption aligns with estimates (renewal letters ~2,000 input + ~500 output tokens, risk reports ~3,000 input + ~1,500 output tokens). Verify the audit log cost estimates are within 20% of actual API charges. Confirm billing alerts are configured at 50% and 80% of monthly limits.
  • TEST 12 — Error Handling & Recovery: Deliberately trigger errors and verify graceful handling: (a) Temporarily use an invalid API key — verify the workflow logs the error, sends an alert to the MSP, and does NOT send a blank/broken document. (b) Submit a declination request with an invalid client_id — verify the workflow returns a clear error message, not a crash. (c) Simulate a SharePoint upload failure — verify the document is saved locally as backup and an alert is sent.
  • TEST 13 — State-Specific Template Accuracy: Generate declination notices for clients in at least 4 different states (TX, NY, CA, FL or CO). Verify each notice contains the correct state-specific regulatory language, correct regulator contact information, and correct statutory references. Have the agency's compliance officer or E&O counsel review and sign off on the state-specific language for all states where the agency operates.

Client Handoff

Client Handoff Checklist

Training Sessions (Recommend 3 sessions, 60-90 minutes each)

Session 1: System Overview & Renewal Letters (All Staff)

  • How the AI document generation system works (high-level architecture, no technical jargon)
  • Demonstration of the renewal letter workflow: daily automatic generation, where to find drafts, how to review and approve
  • Navigating the SharePoint review queue and understanding document statuses
  • Using the Teams Adaptive Card approval flow (Approve/Reject buttons)
  • What to look for when reviewing AI-generated renewal letters: verify client details, check premium accuracy, ensure tone matches relationship
  • Live hands-on exercise: review and approve 3 practice renewal letters

Session 2: Declination Notices & Compliance (Producers + Compliance Officer)

  • Using the declination request web form (walk through each field)
  • Understanding state-specific compliance requirements and timing constraints
  • The compliance checklist and what each item means
  • Reviewing the state-specific regulatory language blocks (why they must not be edited)
  • What to do when the system flags a compliance violation (timing too short, prohibited language detected)
  • Escalation process for edge cases
  • Live exercise: submit 2 declination requests, review the generated notices using the compliance checklist

Session 3: Risk Management Reports & Advanced Usage (Producers)

  • Requesting on-demand risk reports through the web form or workflow trigger
  • Understanding the report structure and how to customize the AI output with producer notes
  • Reviewing risk gap analysis for accuracy — what the AI can and cannot know
  • Editing AI-generated reports in Word before final delivery
  • Quarterly key-account report scheduling

Documentation to Leave Behind

1. Quick Reference Guide (1-page, laminated): Step-by-step for each document type: where to trigger, where to review, how to approve, what to check
2. Compliance Checklist Cards: Printed checklists for each document type (renewal, declination, risk report) that reviewers can reference during review
3. State Requirements Matrix: Spreadsheet showing all states where the agency operates with specific notice periods, required language, and delivery methods
4. FAQ Document: Common questions and answers (e.g., 'What if the AI generates incorrect premium amounts?' → 'The AI uses data from your AMS. If premiums are wrong, correct them in the AMS first, then re-generate.')
5. Escalation Contact Sheet: MSP support contact (email, phone, Teams channel), hours of availability, SLA response times, and what constitutes a critical vs. routine issue
6. AI Disclosure Policy: Template policy document for the agency's records explaining their use of AI in client communications, signed by the agency principal
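
The notice-period rules captured in the State Requirements Matrix can be enforced mechanically. Below is a minimal sketch of that timing check; the day counts shown are placeholders, not real statutory values, and must be populated from the agency's own matrix after E&O counsel review.

```python
from datetime import date, timedelta

# Placeholder minimum notice periods in days -- NOT real statutory values.
# Populate from the agency's State Requirements Matrix.
MIN_NOTICE_DAYS = {"TX": 30, "NY": 60}

def notice_is_timely(state: str, mailing_date: date, effective_date: date) -> bool:
    """True if the declination notice meets the state's minimum lead time."""
    required = MIN_NOTICE_DAYS[state]
    return (effective_date - mailing_date) >= timedelta(days=required)
```

A check like this is what drives the "timing too short" compliance flag described in Session 2.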

Success Criteria to Review Together

Maintenance

Ongoing Maintenance Plan

Weekly Tasks (MSP — 30 minutes/week)

  • Review n8n workflow execution logs for errors or anomalies
  • Check API usage and cost against monthly budget (OpenAI dashboard, Anthropic console, n8n audit log)
  • Check for documents stuck in 'Pending Review' status for more than 48 hours and alert the agency if a backlog is forming
  • Review the error log for any failed API calls, AMS connectivity issues, or document generation failures
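
The stale pending-review check can be scripted against the audit database. The `documents` table layout below is an assumption for illustration; match the column names to whatever schema the n8n audit log actually uses.

```python
import sqlite3
from datetime import datetime, timedelta

def stale_pending_docs(conn: sqlite3.Connection, max_age_hours: int = 48):
    """Return documents stuck in 'Pending Review' longer than max_age_hours.

    Assumes a documents(id, doc_type, status, created_at) table with
    ISO-8601 UTC timestamps, which sort correctly as strings.
    """
    cutoff = (datetime.utcnow() - timedelta(hours=max_age_hours)).isoformat()
    return conn.execute(
        "SELECT id, doc_type, created_at FROM documents "
        "WHERE status = 'Pending Review' AND created_at < ?",
        (cutoff,),
    ).fetchall()
```

Wiring the result into a Teams alert (or simply into the weekly report) closes the loop on the backlog check.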

Monthly Tasks (MSP — 2-3 hours/month)

  • Run the bias monitoring SQL query against the audit database; review declination patterns for any indicators of disparate impact; document findings
  • Review and optimize prompts based on reviewer feedback — if reviewers are consistently editing a particular section, adjust the prompt to produce better first drafts
  • Update API client libraries and n8n to latest stable versions (test in staging first)
  • Generate and deliver the monthly ROI report to the agency principal: documents generated, estimated hours saved, API costs, approval rate
  • Review OpenAI and Anthropic model deprecation notices; plan migration if current models are being sunset
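
The bias-monitoring query can take a shape like the following. The segment column (a 3-digit ZIP prefix) and the `decisions` table are illustrative assumptions; adapt both to the actual audit-database schema, and treat elevated rates as a prompt for human review, not a conclusion.

```python
import sqlite3

# Decline rate by geographic segment, skipping segments too small to read.
BIAS_QUERY = """
SELECT substr(client_zip, 1, 3) AS zip3,
       COUNT(*) AS total,
       SUM(CASE WHEN outcome = 'declined' THEN 1 ELSE 0 END) AS declined,
       ROUND(1.0 * SUM(CASE WHEN outcome = 'declined' THEN 1 ELSE 0 END)
             / COUNT(*), 3) AS decline_rate
FROM decisions
GROUP BY zip3
HAVING COUNT(*) >= 20
ORDER BY decline_rate DESC;
"""

def decline_rates(conn: sqlite3.Connection):
    """Return (zip3, total, declined, decline_rate) rows, worst first."""
    return conn.execute(BIAS_QUERY).fetchall()
```

Segments whose decline rate sits well above the book-wide average warrant a closer look in the documented findings.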

Quarterly Tasks (MSP — 4-6 hours/quarter)

  • Comprehensive compliance audit: review 10% sample of all AI-generated documents for compliance accuracy
  • State regulatory update check: monitor NAIC bulletins and state DOI communications for new AI regulations; update state compliance rules engine as needed
  • Prompt library version review: compare output quality against original benchmarks; update system prompts if model behavior has drifted
  • Template language review with agency E&O counsel: confirm all state-specific language remains current
  • Performance and cost optimization: evaluate if a model tier change (e.g., GPT-4.1 Nano for simple renewals) could reduce costs without quality loss
  • Client satisfaction survey: brief check-in with agency producers on document quality and workflow usability
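
The model-tier cost evaluation is straightforward arithmetic. The GPT-4.1 Mini rates below come from the procurement section of this guide; the cheaper-tier figures and per-document token counts are placeholders, so substitute current vendor pricing and the agency's measured token usage.

```python
# USD per 1M tokens: (input, output). Mini rates from the procurement
# section; "cheaper-tier" is a hypothetical placeholder.
PRICING = {
    "gpt-4.1-mini": (0.40, 1.60),
    "cheaper-tier": (0.10, 0.40),
}

def monthly_cost(model: str, docs_per_month: int = 300,
                 in_tok: int = 3000, out_tok: int = 800) -> float:
    """Estimate monthly spend given average tokens per document."""
    p_in, p_out = PRICING[model]
    per_doc = in_tok / 1e6 * p_in + out_tok / 1e6 * p_out
    return round(docs_per_month * per_doc, 2)
```

Comparing the two figures against a quality spot-check on sample renewals tells you whether a tier change is worth the migration effort. Real spend will run higher than this raw-token estimate once retries, system prompts, and retrieved context are included.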

Annual Tasks (MSP — 1-2 days/year)

  • Full system architecture review and technology refresh assessment
  • Annual compliance certification: produce a summary document certifying the AI system's compliance posture for the agency's E&O file
  • LLM model migration if needed (e.g., new GPT or Claude version with better quality/cost)
  • Renegotiate API pricing tiers based on actual usage volume
  • Review and update the agency's AI use policy document

Trigger-Based Maintenance

  • New state added to agency operations: Add state-specific compliance rules, declination language, and notice timing to the rules engine; have E&O counsel review
  • AMS upgrade or migration: Test all API integrations against the new AMS version; update field mappings if schema changed
  • Model deprecation notice: Migrate to successor model within 60 days; test all prompts against new model before cutover
  • Regulatory change: Update state compliance rules engine within 30 days of effective date; notify agency and retrain if needed
  • Quality issue reported: Investigate within 4 business hours; adjust prompts or escalate to compliance review as appropriate

SLA Recommendations

  • P1 — System Down (no documents generating): 4-hour response, 8-hour resolution during business hours
  • P2 — Compliance Issue (incorrect state language, PII leak): 2-hour response, 24-hour resolution
  • P3 — Quality Issue (poor letter quality, wrong tone): 1-business-day response, 5-business-day resolution
  • P4 — Enhancement Request (new document type, template change): Acknowledge within 2 business days, schedule in next monthly maintenance window

Escalation Path

1. Agency user reports issue → MSP service desk (email/Teams/phone)
2. MSP L1 tech reviews n8n logs, API status, and recent changes → resolves or escalates
3. MSP L2/solutions architect investigates prompt, integration, or compliance issues
4. If compliance-related → loop in agency E&O counsel and MSP compliance advisor
5. If vendor issue (OpenAI outage, AMS API change) → open vendor support ticket and implement temporary workaround

...

Insurance-Specific Platform (Zywave / Applied AI)

Deploy an insurance-vertical-specific AI platform like Zywave's AI-powered Broker Briefcase content library with its forthcoming agentic AI outreach tools, or leverage Applied Systems' native AI features being embedded into Applied Epic (including AI-powered renewals benchmarking and Epic Bridge). These platforms offer pre-built insurance content, compliance-aware templates, and native AMS integration without custom development.

Self-Hosted On-Premises LLM (Maximum Data Control)

Deploy Ollama with Meta Llama 3.1 8B (or Mistral 7B) on a Dell PowerEdge T360 server with an NVIDIA T4 GPU at the agency's office. (Llama 3.3 ships only as a 70B model, which exceeds the T4's 16GB of VRAM; the 8B-class models quantize comfortably onto it.) All AI inference runs locally with zero client data leaving the agency's network. Use n8n self-hosted (also on-premises) for workflow orchestration. The complete system operates in an air-gapped or internet-optional configuration.
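
A minimal sketch of calling the local model through Ollama's REST API (its default endpoint is `http://localhost:11434/api/generate`); the model tag shown is an assumption and should match whatever model the agency actually pulls.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    """Assemble a non-streaming generation request for the local Ollama API."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str, url: str = OLLAMA_URL) -> str:
    """Send the prompt to the on-prem model; no client data leaves the network."""
    req = urllib.request.Request(
        url,
        data=json.dumps(build_payload(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

In the n8n workflow this call replaces the cloud-API HTTP node; everything downstream (review queue, audit log) stays identical.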

Hybrid Approach (Cloud API + Local Fallback)

Use OpenAI GPT-4.1 Mini (cloud) as the primary engine for routine renewal letters (which contain minimal sensitive data), but route declination notices and risk management reports through a self-hosted Ollama instance for maximum data control on the most sensitive document types. n8n orchestration routes documents to the appropriate engine based on document type and data sensitivity classification.
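
The routing rule at the heart of the hybrid approach is small enough to show in full. The document-type names and sensitivity classification below are assumptions; in production this logic would live in the n8n workflow's routing node rather than standalone code.

```python
# Sensitive document types that must never leave the agency's network.
# Classification is an assumption -- confirm it with the agency.
LOCAL_ONLY = {"declination_notice", "risk_report"}

def route(doc_type: str) -> str:
    """Return which inference engine handles this document type."""
    return "local-ollama" if doc_type in LOCAL_ONLY else "cloud-openai"
```

Keeping the rule this explicit makes the data-handling posture easy to audit: one set literally enumerates what stays on-premises.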
