
Implementation Guide: Generate certificate of insurance documents and coverage comparison reports
Step-by-step implementation guide for deploying AI to generate certificate of insurance documents and coverage comparison reports for Insurance Agencies clients.
Hardware Procurement
Document Scanner
$420 per unit (MSP cost) / $625 suggested resale
High-speed duplex scanner for digitizing legacy paper policies, dec pages, and endorsements that need to be ingested into the AI pipeline. Supports batch scanning with intelligent auto-classification. Required for agencies with a backlog of paper-only policies not yet in the AMS.
Dual Monitor (Primary Workstation)
$220 per unit (MSP cost) / $330 suggested resale
Provides side-by-side display for CSRs and producers to review AI-generated COIs alongside source policy data in the AMS, and to view coverage comparison reports next to the original quotes. Critical for the human-in-the-loop review workflow required by NAIC compliance.
Workstation Upgrade (RAM)
$35 per unit (MSP cost) / $65 suggested resale
RAM upgrade for existing agency workstations to ensure smooth performance when running the AMS, browser-based AI dashboard, and PDF viewer simultaneously. Only needed if current workstations have less than 16GB RAM.
Software Procurement
Azure OpenAI Service (GPT-5.4 + GPT-5.4 mini)
$50–$150/month (GPT-5.4 mini at $0.15/$0.60 per M input/output tokens for COI population; GPT-5.4 at $2.50/$10.00 per M input/output tokens for comparison narratives)
Core LLM engine for generating coverage comparison narratives, extracting and structuring policy data, and populating COI form fields from unstructured policy documents. Azure OpenAI chosen over consumer OpenAI API for GLBA-compliant data residency, enterprise SLA, and content filtering controls.
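The per-token prices above can be turned into a rough monthly budget. A minimal sketch, assuming illustrative token counts per document (roughly 6K in / 1.5K out per COI, 12K in / 3K out per comparison report; adjust to measured usage):

```python
# Back-of-the-envelope LLM cost model using the per-million-token prices above.
# Token counts per document are assumptions, not measured values.
MINI_IN, MINI_OUT = 0.15, 0.60   # $ per 1M tokens, GPT-5.4 mini (COI extraction)
FULL_IN, FULL_OUT = 2.50, 10.00  # $ per 1M tokens, GPT-5.4 (comparison narratives)

def monthly_llm_cost(cois: int, reports: int,
                     coi_tokens=(6000, 1500),
                     report_tokens=(12000, 3000)) -> float:
    """Estimate monthly spend: COIs run on mini, comparison reports on full."""
    coi_cost = cois * (coi_tokens[0] * MINI_IN + coi_tokens[1] * MINI_OUT) / 1e6
    rpt_cost = reports * (report_tokens[0] * FULL_IN + report_tokens[1] * FULL_OUT) / 1e6
    return round(coi_cost + rpt_cost, 2)

print(monthly_llm_cost(500, 100))  # → 6.9
```

Even at 500 COIs and 100 reports per month this lands well under the quoted $50–$150 range, which leaves headroom for retries, validation passes, and email parsing.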
Patra AI (Policy Checking + Quote Compare)
$99–$300/month (scales by policy volume); MSP resale at $149–$450/month
Insurance-specific AI platform providing 900+ point policy checking checklists and automated quote comparison. Validates AI-generated COI accuracy against source policy data and identifies coverage gaps in comparison reports. Vendor reports an 85% time reduction in policy checking workflows.
DocSpring - PDF Form API
$49/month (50 live PDFs) to $249/month (5,000 PDFs); MSP resale at $99–$399/month
Programmatic ACORD form filling via REST API. Hosts ACORD 25, 26, 125, and 126 certificate templates with mapped form fields. Receives structured JSON from the AI pipeline and renders completed, print-ready PDF certificates.
Certificial - Smart COI Network
Free for agents/brokers with up to 5 insureds; custom pricing for larger deployments
Real-time Smart COI distribution platform that automatically updates certificates when policy changes occur (cancellations, modifications, renewals). Provides a certificate holder portal eliminating repetitive COI reissuance requests. Exclusive integration with Applied Epic announced June 2025.
Microsoft 365 Business Standard
$12.50/user/month (if not already licensed)
Baseline productivity suite for document delivery (Outlook), report formatting (Word/Excel), and cloud storage (SharePoint/OneDrive) for generated documents. Required for email-based COI delivery workflow and Teams notifications.
Microsoft 365 Copilot (Optional)
$30/user/month add-on; MSP resale at $42–$50/user/month
Optional enhancement allowing producers to interactively query coverage data, draft client-facing comparison summaries in Word, and generate Excel-based premium comparison spreadsheets using natural language. Recommended for 2–3 key producer seats only.
Azure Blob Storage
$5–$20/month for 20–100GB storage + minimal transaction costs
Secure cloud storage for generated COI PDFs, comparison reports, audit logs, and archived policy documents. Integrated with Azure OpenAI for seamless data pipeline. Configured with immutable storage for compliance record retention.
Azure API Management (Consumption or Basic tier)
$0.042/10,000 API calls (Consumption tier) or ~$150/month Basic tier
API gateway managing authentication, rate limiting, and logging for all API calls between the AMS, Azure OpenAI, DocSpring, and the orchestration middleware. Provides centralized monitoring and security controls.
Prerequisites
- Active Agency Management System (AMS) subscription with API access enabled — Applied Epic (Applied Dev Center registration required), Vertafore AMS360 (TransactNOW/IVANS access), or HawkSoft (REST API key provisioned)
- IVANS policy download service active with all major carriers the agency writes with — required to ensure policy data flows automatically into the AMS for AI consumption
- Microsoft 365 Business Standard or higher licensed for all staff who will use the COI generation and comparison workflows
- Internet connectivity minimum 50 Mbps symmetrical at the agency office — required for reliable API calls to Azure OpenAI and cloud platform access
- Azure subscription (Pay-As-You-Go or Enterprise Agreement) provisioned under MSP's Azure CSP tenant or client's direct tenant — needed for Azure OpenAI, Blob Storage, and API Management
- Python 3.10+ runtime environment available on the MSP's development/staging server for building and deploying the orchestration middleware — can be an Azure App Service, Azure Functions, or a containerized environment
- Admin-level access to the client's AMS for API configuration, custom field mapping, and activity code setup
- List of all ACORD form types the agency currently uses for COIs (typically ACORD 25 for liability, ACORD 26 for auto, ACORD 125/126 for commercial) — needed for template configuration in DocSpring
- Inventory of current COI request intake methods (email, phone, portal) to design the automated intake workflow
- Written authorization from agency principal to implement AI-assisted document generation, acknowledging the human-in-the-loop review requirement — needed for NAIC compliance documentation
- Current E&O (Errors and Omissions) insurance policy reviewed to confirm coverage for AI-assisted document generation processes
- DNS and firewall rules allowing outbound HTTPS (443) to: api.openai.com, *.openai.azure.com, api.docspring.com, api.certificial.com, and the AMS cloud endpoints
Installation Steps
Step 1: Provision Azure OpenAI Service Instance
Create an Azure OpenAI resource in the client's or MSP's Azure subscription. Deploy both GPT-5.4 and GPT-5.4 mini models. GPT-5.4 mini will handle high-volume COI field extraction and population (cost-efficient at $0.15/M input tokens). GPT-5.4 will handle coverage comparison narrative generation requiring higher reasoning capability. Configure content filtering to allow insurance terminology while blocking PII leakage.
az login
az group create --name rg-insurance-ai --location eastus2
az cognitiveservices account create --name oai-insurance-coi --resource-group rg-insurance-ai --kind OpenAI --sku S0 --location eastus2
az cognitiveservices account deployment create --name oai-insurance-coi --resource-group rg-insurance-ai --deployment-name gpt-5.4-mini --model-name gpt-5.4-mini --model-version 2024-07-18 --model-format OpenAI --sku-capacity 30 --sku-name Standard
az cognitiveservices account deployment create --name oai-insurance-coi --resource-group rg-insurance-ai --deployment-name gpt-5.4 --model-name gpt-5.4 --model-version 2024-08-06 --model-format OpenAI --sku-capacity 20 --sku-name Standard
az cognitiveservices account keys list --name oai-insurance-coi --resource-group rg-insurance-ai
Choose the Azure region closest to the agency for lowest latency; eastus2 is recommended for most US agencies. Store the API key and endpoint URL securely in Azure Key Vault. The Standard SKU provides 30K tokens per minute by default; request quota increases if the agency exceeds 500 COIs/month. Content filtering is enabled by default — do NOT disable it for insurance compliance.
Step 2: Provision Azure Blob Storage and Key Vault
Create a storage account for generated documents, audit logs, and template files. Create a Key Vault instance to securely store all API keys (Azure OpenAI, DocSpring, AMS API credentials). Enable immutable storage on the audit log container for regulatory compliance record retention.
az storage account create --name stinsuranceaidocs --resource-group rg-insurance-ai --location eastus2 --sku Standard_LRS --kind StorageV2
az storage container create --name generated-cois --account-name stinsuranceaidocs
az storage container create --name comparison-reports --account-name stinsuranceaidocs
az storage container create --name audit-logs --account-name stinsuranceaidocs
az storage container immutability-policy create --account-name stinsuranceaidocs --container-name audit-logs --period 365
az keyvault create --name kv-insurance-ai --resource-group rg-insurance-ai --location eastus2
az keyvault secret set --vault-name kv-insurance-ai --name azure-openai-key --value <YOUR_OPENAI_KEY>
az keyvault secret set --vault-name kv-insurance-ai --name docspring-api-key --value <YOUR_DOCSPRING_KEY>
az keyvault secret set --vault-name kv-insurance-ai --name ams-api-key --value <YOUR_AMS_API_KEY>
The 365-day immutable storage policy on the audit-logs container supports NAIC record retention requirements. Use Azure RBAC to restrict access to the Key Vault — only the orchestration middleware's managed identity should have read access to secrets.
Step 3: Configure AMS API Integration
Register as a developer with the agency's AMS platform and obtain API credentials. For Applied Epic: register at the Applied Dev Center and complete the partner onboarding process. For HawkSoft: request a REST API key from HawkSoft support. For AMS360: configure TransactNOW/IVANS integration credentials. Map the AMS data fields that correspond to ACORD certificate form fields.
# For Applied Epic - example API authentication test
curl -X POST https://api.appliedsystems.com/oauth/token -d 'grant_type=client_credentials&client_id=YOUR_CLIENT_ID&client_secret=YOUR_CLIENT_SECRET'
# For HawkSoft - example API connectivity test
curl -X GET https://api.hawksoft.com/v2/policies -H 'Authorization: Bearer YOUR_API_KEY' -H 'Content-Type: application/json'
# Store AMS credentials in Key Vault
az keyvault secret set --vault-name kv-insurance-ai --name ams-client-id --value <AMS_CLIENT_ID>
az keyvault secret set --vault-name kv-insurance-ai --name ams-client-secret --value <AMS_CLIENT_SECRET>
Applied Epic API access requires partner agreement execution — allow 1–2 weeks for approval. HawkSoft API keys are typically provisioned within 48 hours. Document every AMS field mapping in a spreadsheet: AMS Field Name → ACORD Form Field → DocSpring Template Field. Critical fields include: Named Insured, Policy Number, Effective Date, Expiration Date, Coverage Types, Limits, Additional Insureds, and Certificate Holder information.
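The mapping spreadsheet can be mirrored in the middleware as a simple translation table. A minimal sketch — every field name below is a hypothetical placeholder, to be replaced with the actual identifiers recorded during field mapping:

```python
# Illustrative AMS -> DocSpring field mapping. All names here are placeholders;
# substitute the real identifiers from the agency's mapping spreadsheet.
FIELD_MAP = {
    # AMS field name      : DocSpring template field (ACORD 25)
    "NamedInsured":         "insured_name",
    "PolicyNumber":         "policy_number",
    "EffectiveDate":        "policy_effective_date",
    "ExpirationDate":       "policy_expiration_date",
    "GLEachOccurrence":     "gl_each_occurrence_limit",
    "CertHolderName":       "certificate_holder_name",
}

def map_ams_to_docspring(ams_record: dict) -> dict:
    """Translate an AMS policy record into DocSpring submission data,
    skipping fields the AMS did not return (DocSpring leaves them blank)."""
    return {ds: ams_record[ams] for ams, ds in FIELD_MAP.items() if ams in ams_record}
```

Keeping the mapping in one table makes it auditable against the spreadsheet and lets you support multiple AMS platforms by swapping the table rather than the pipeline code.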
Step 4: Set Up DocSpring ACORD Form Templates
Create a DocSpring account and configure ACORD certificate form templates. Upload or recreate ACORD 25 (Certificate of Liability Insurance), ACORD 26 (Certificate of Property Insurance), and optionally ACORD 125/126 (Commercial Insurance Application) forms. Map each fillable field to a JSON key that the orchestration middleware will populate.
# Install DocSpring CLI tool
npm install -g @docspring/cli
# Authenticate with DocSpring
docspring auth --api-token YOUR_API_TOKEN
# Upload ACORD 25 template (prepare the PDF with mapped fields first)
curl -X POST https://api.docspring.com/api/v1/templates -H 'Authorization: Basic YOUR_BASE64_CREDENTIALS' -F 'template[document]=@acord25_template.pdf' -F 'template[name]=ACORD 25 - Certificate of Liability Insurance'
# List templates to confirm upload
curl -X GET https://api.docspring.com/api/v1/templates -H 'Authorization: Basic YOUR_BASE64_CREDENTIALS'
DocSpring provides a visual field mapper in the web UI — use this for initial template setup rather than the CLI for ACORD forms. Map every field, including the standard ACORD 25 sections: Producer info, Insured info, General Liability limits, Auto Liability limits, Umbrella/Excess limits, Workers Comp limits, Description of Operations, Certificate Holder, and Cancellation provisions. Save each template ID — you'll need these in the orchestration middleware configuration.
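Once a template is mapped, the middleware can render a certificate by posting the mapped field data to that template's submissions endpoint. A hedged sketch — the endpoint shape follows DocSpring's REST documentation, but the template ID and field names are placeholders:

```python
# Sketch of rendering a certificate from a mapped DocSpring template.
# Template ID and field names are placeholders; verify the request shape
# against DocSpring's current API docs before deploying.
import requests

def build_submission(coi_fields: dict, test_mode: bool = True) -> dict:
    """DocSpring submission payload: field data plus a test-mode flag so
    staging runs produce watermarked, non-billable PDFs."""
    return {"data": coi_fields, "test": test_mode}

def render_coi(template_id: str, coi_fields: dict,
               api_token_id: str, api_token_secret: str) -> dict:
    resp = requests.post(
        f"https://api.docspring.com/api/v1/templates/{template_id}/submissions",
        json=build_submission(coi_fields),
        auth=(api_token_id, api_token_secret),
        timeout=30,
    )
    resp.raise_for_status()
    # The response includes a submission ID; poll it for the final PDF URL.
    return resp.json()
```

Keep `test_mode=True` throughout staging so no billable certificates are generated before go-live.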
Step 5: Subscribe to Patra AI Platform
Sign up for Patra AI's Quote Compare and Policy Checking modules. Configure the platform to connect to the agency's carrier download feed (via IVANS). Set up the 900+ point policy checking checklist customized to the agency's most common commercial lines (GL, Auto, WC, Umbrella, Property). This serves as the AI validation layer ensuring generated COIs accurately reflect policy data.
Patra AI offers a 14-day free trial — use this for the integration testing phase. The $99/month starter plan includes unlimited user seats but is volume-limited. For agencies processing 200+ policies/month, expect $200–$300/month tier. Patra's Quote Compare module is particularly valuable for the coverage comparison report feature — it identifies coverage differences across up to 5 carrier quotes simultaneously.
Step 6: Set Up Certificial Smart COI (Optional but Recommended for Applied Epic Agencies)
If the agency uses Applied Epic, configure Certificial's native integration for real-time Smart COI distribution. This eliminates the need for manual COI reissuance when policies change. Certificate holders receive a live link instead of a static PDF, and updates propagate automatically when IVANS downloads new policy data.
Certificial is free for agencies with up to 5 insureds. For larger deployments, negotiate pricing through the MSP partner program. The Applied Epic integration announced in June 2025 provides the deepest native connectivity. For AMS360 or HawkSoft agencies, Certificial integration requires the orchestration middleware to push updates via API — this adds 1–2 weeks to the build.
Step 7: Deploy Orchestration Middleware on Azure
Deploy the Python-based orchestration middleware as an Azure Function App or Azure App Service. This middleware coordinates all API calls between the AMS, Azure OpenAI, DocSpring, Patra AI, and Certificial. It handles: (1) receiving COI requests, (2) pulling policy data from the AMS, (3) sending data to GPT-5.4 mini for field extraction, (4) populating DocSpring templates, (5) triggering Patra AI validation, (6) generating comparison narratives via GPT-5.4, and (7) storing generated documents in Azure Blob Storage with full audit logging.
# Clone the orchestration middleware repository
git clone https://github.com/your-msp/insurance-coi-ai-middleware.git
cd insurance-coi-ai-middleware
# Create Python virtual environment
python -m venv .venv
source .venv/bin/activate # Linux/Mac
# .venv\Scripts\activate # Windows
# Install dependencies
pip install -r requirements.txt
# requirements.txt includes:
# openai>=1.35.0
# azure-identity>=1.17.0
# azure-keyvault-secrets>=4.8.0
# azure-storage-blob>=12.20.0
# requests>=2.32.0
# fastapi>=0.111.0
# uvicorn>=0.30.0
# pydantic>=2.7.0
# python-docx>=1.1.0
# openpyxl>=3.1.0
# Create Azure Function App
az functionapp create --name func-insurance-coi --resource-group rg-insurance-ai --consumption-plan-location eastus2 --runtime python --runtime-version 3.11 --functions-version 4 --storage-account stinsuranceaidocs --os-type Linux
# Configure application settings
az functionapp config appsettings set --name func-insurance-coi --resource-group rg-insurance-ai --settings AZURE_OPENAI_ENDPOINT=https://oai-insurance-coi.openai.azure.com/ AZURE_OPENAI_KEY=@Microsoft.KeyVault(SecretUri=https://kv-insurance-ai.vault.azure.net/secrets/azure-openai-key/) DOCSPRING_API_KEY=@Microsoft.KeyVault(SecretUri=https://kv-insurance-ai.vault.azure.net/secrets/docspring-api-key/) AMS_API_KEY=@Microsoft.KeyVault(SecretUri=https://kv-insurance-ai.vault.azure.net/secrets/ams-api-key/)
# Enable managed identity for Key Vault access
az functionapp identity assign --name func-insurance-coi --resource-group rg-insurance-ai
# Grant Key Vault access to managed identity
az keyvault set-policy --name kv-insurance-ai --object-id <MANAGED_IDENTITY_OBJECT_ID> --secret-permissions get list
# Deploy the function app
func azure functionapp publish func-insurance-coi
Use the Azure Functions Consumption plan for cost efficiency during initial rollout (pay only for execution time). If the agency processes >1,000 COIs/month, consider upgrading to a Premium plan for always-warm instances and VNET integration. Set up Application Insights to monitor all API calls, error rates, and latency. The managed identity approach eliminates the need to store credentials in code or environment variables.
Step 8: Configure Email-Based COI Request Intake
Set up an automated intake pipeline for COI requests received via email. Use a Microsoft 365 shared mailbox (e.g., coi@agencyname.com) with a Power Automate flow or Azure Logic App that monitors the inbox, extracts certificate holder information and policy references using GPT-5.4 mini, and triggers the COI generation pipeline.
- Create shared mailbox in Microsoft 365 Admin Center
- Power Automate flow configuration (JSON export): Trigger: 'When a new email arrives in a shared mailbox (V2)', Mailbox: coi@agencyname.com
- Action 1: Send email body + subject to Azure Function 'parse-coi-request'
- Action 2: Azure Function returns structured JSON with: certificate_holder_name, certificate_holder_address, insured_name, policy_number (if referenced), coverage_types_requested, special_requirements (additional insured, waiver of subrogation, etc.)
- Action 3: Trigger COI generation pipeline with parsed data
- Action 4: Post notification to Teams channel 'COI-Requests' for human review
az logic workflow create --name la-coi-intake --resource-group rg-insurance-ai --location eastus2 --definition @logic-app-definition.json
The Power Automate approach is simpler for agencies already on M365; Azure Logic Apps provide more control and better logging. Either way, the key is extracting structured data from free-form email requests — this is where GPT-5.4 mini excels. Include a fallback for requests that can't be parsed (route to a human agent via Teams notification). Test with 50+ real historical COI request emails during Phase 4.
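The parsed-but-unusable fallback in Action 4 benefits from a deterministic guard layer: never let the model's JSON trigger generation unless the required fields are present. A minimal sketch, with key names matching the structured fields listed above:

```python
# Guard layer for the parse-coi-request output: validate the model's JSON
# before it can trigger COI generation. Anything incomplete is flagged for
# the human fallback queue (Teams notification) instead.
REQUIRED = ("certificate_holder_name", "certificate_holder_address", "insured_name")

def normalize_parsed_request(parsed: dict) -> dict:
    """Return the parsed request plus routing metadata: which required
    fields are missing and whether a human must handle it."""
    missing = [k for k in REQUIRED if not parsed.get(k)]
    return {
        **parsed,
        "missing_fields": missing,
        "needs_human_review": bool(missing),
    }
```

This keeps the "can the pipeline proceed?" decision out of the LLM entirely, which makes the intake behavior testable against the 50+ historical emails.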
Step 9: Build Human Review Dashboard
Deploy a lightweight web-based review dashboard (React or simple HTML/JS hosted on Azure Static Web Apps) where CSRs can review AI-generated COIs before final issuance. The dashboard shows the generated PDF side-by-side with source policy data, highlights any Patra AI validation warnings, and provides approve/reject/edit buttons. This is the mandatory human-in-the-loop control required for NAIC compliance.
# Create Azure Static Web App for the review dashboard
az staticwebapp create --name swa-coi-review --resource-group rg-insurance-ai --location eastus2 --source https://github.com/your-msp/coi-review-dashboard --branch main --app-location /src --output-location /build
# The dashboard connects to the orchestration middleware API:
# GET /api/pending-reviews - List COIs awaiting review
# GET /api/coi/{id}/preview - Render PDF preview
# GET /api/coi/{id}/source-data - Show AMS source data
# GET /api/coi/{id}/validation - Show Patra AI check results
# POST /api/coi/{id}/approve - Approve and finalize
# POST /api/coi/{id}/reject - Reject with reason
# POST /api/coi/{id}/edit - Modify fields and regenerate
For the Phase 1 MVP, this can be as simple as a Teams Adaptive Card notification with approve/reject buttons rather than a full web dashboard; the full dashboard should be built in Phase 2. Ensure Azure AD / Entra ID authentication is enforced — only licensed agency staff should approve COIs. Log the reviewer's identity and timestamp for every approval in the audit trail.
Step 10: Configure Audit Logging and Compliance Documentation
Implement comprehensive audit logging that records every AI-generated document with: timestamp, model used (GPT-5.4 vs GPT-5.4 mini), input data hash (not raw PII), output document hash, Patra AI validation score, reviewer identity, approval timestamp, and delivery method. Generate the agency's NAIC AI Governance Program documentation.
# Audit log schema (stored as JSON in Azure Blob Storage audit-logs container):
# {
# 'event_id': 'uuid',
# 'timestamp': 'ISO-8601',
# 'event_type': 'coi_generated | comparison_generated | coi_approved | coi_rejected',
# 'model_used': 'gpt-5.4-mini | gpt-5.4',
# 'input_data_hash': 'SHA-256 hash of input payload',
# 'output_document_hash': 'SHA-256 hash of generated PDF',
# 'policy_number': 'masked (last 4 digits only)',
# 'insured_name_hash': 'SHA-256 hash',
# 'patra_validation_score': 0.0-1.0,
# 'patra_warnings': ['list of flagged items'],
# 'reviewer_email': 'user@agency.com',
# 'review_timestamp': 'ISO-8601',
# 'review_decision': 'approved | rejected | edited',
# 'delivery_method': 'email | certificial | portal',
# 'recipient_email': 'holder@company.com'
# }
# Enable Azure Monitor diagnostic logging
az monitor diagnostic-settings create --name diag-coi-functions --resource /subscriptions/<SUB_ID>/resourceGroups/rg-insurance-ai/providers/Microsoft.Web/sites/func-insurance-coi --logs '[{"category":"FunctionAppLogs","enabled":true}]' --storage-account stinsuranceaidocs
CRITICAL: Never log raw PII (full names, full policy numbers, SSNs) in audit logs. Use hashing for traceability while maintaining GLBA compliance. The NAIC AI Governance Program documentation template should be prepared as a Word document and stored in the agency's compliance folder. Include: AI system inventory, risk assessment, data flow diagram, human oversight procedures, testing methodology, and incident response plan. This document is required in the 24+ states that have adopted the NAIC Model Bulletin.
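The hash-not-log rule can be enforced by building every audit entry through one constructor. A sketch implementing the schema above (SHA-256 hashes for payloads and names, policy number masked to its last four characters):

```python
# Audit entry constructor matching the schema above: hashes in place of raw
# PII, policy number masked to its last four characters before storage.
import hashlib
import json
import uuid
from datetime import datetime, timezone

def sha256_of(payload) -> str:
    """Stable hash of any JSON-serializable payload for traceability."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def build_audit_entry(event_type: str, model_used: str, input_payload: dict,
                      policy_number: str, insured_name: str) -> dict:
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "model_used": model_used,
        "input_data_hash": sha256_of(input_payload),
        "policy_number": f"***{policy_number[-4:]}",   # masked, last 4 only
        "insured_name_hash": hashlib.sha256(insured_name.encode()).hexdigest(),
    }
```

Because the raw insured name and full policy number never enter the entry, the audit-logs container can be retained long-term without creating a secondary PII store.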
Step 11: Deploy and Test End-to-End COI Generation Pipeline
Run the complete pipeline in a staging environment using test policy data. Verify each step: AMS data extraction → GPT-5.4 mini field population → DocSpring PDF rendering → Patra AI validation → human review dashboard notification → approval → delivery to certificate holder → audit log entry → AMS activity log update.
# Run integration test suite
cd insurance-coi-ai-middleware
python -m pytest tests/integration/ -v --tb=short
# Test individual components:
python tests/test_ams_connection.py # Verify AMS API connectivity
python tests/test_openai_extraction.py # Verify field extraction accuracy
python tests/test_docspring_rendering.py # Verify PDF generation
python tests/test_patra_validation.py # Verify policy checking
python tests/test_audit_logging.py # Verify compliance logging
# Generate a test COI
curl -X POST https://func-insurance-coi.azurewebsites.net/api/generate-coi -H 'Content-Type: application/json' -d '{"policy_number": "TEST-GL-001", "certificate_holder": {"name": "Test Holder LLC", "address": "123 Test St, Anytown, ST 12345"}, "additional_insured": true, "waiver_of_subrogation": true}'
Use test/sandbox policy data — never use real policyholder data during testing. Create 10 test policies in the AMS staging environment covering: General Liability, Commercial Auto, Workers Compensation, Umbrella, and Property. Verify every ACORD form field is populated correctly by comparing AI-generated COIs against manually-created reference COIs for the same test policies.
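The generated-vs-reference comparison can be automated with a small diff helper in the test suite. A sketch (the helper name is an illustration, not part of the shipped test files):

```python
# Test-suite helper: diff an AI-generated field set against the manually
# prepared reference COI for the same staging policy, so a failing test
# reports exactly which ACORD fields the pipeline got wrong.
def diff_coi_fields(generated: dict, reference: dict) -> dict:
    """Return {field: (generated_value, expected_value)} for every mismatch."""
    keys = set(generated) | set(reference)
    return {
        k: (generated.get(k), reference.get(k))
        for k in keys
        if generated.get(k) != reference.get(k)
    }
```

A test then asserts `diff_coi_fields(ai_coi, reference_coi) == {}` per test policy, and the diff output doubles as the accuracy report shown to staff during training.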
Step 12: Train Agency Staff and Execute Phased Go-Live
Conduct a half-day training session for all CSRs and producers covering: (1) how to submit COI requests through the new email intake, (2) how to use the review dashboard, (3) how to handle Patra AI validation warnings, (4) how to approve/reject/edit generated COIs, (5) how to trigger coverage comparison reports, and (6) compliance obligations (human review requirement). Roll out in phases: Week 1 — 2 pilot users; Week 2 — all CSRs; Week 3 — producers.
- Training materials to prepare:
- User Guide PDF (20-30 pages with screenshots)
- Quick Reference Card (laminated 1-pager for desks)
- Video walkthrough (15-minute screen recording)
- FAQ document addressing common concerns
- Compliance acknowledgment form (signed by each user)
- Post-training monitoring:
- Enable enhanced logging for first 30 days
- Schedule daily review of rejection rates and edit frequency
- Hold weekly 15-minute check-in calls for first month
Expect 1–2 weeks of 'parallel running' where staff generate COIs both manually and via AI to build confidence. The most common resistance point is trust in AI accuracy — address this by showing Patra AI's 900+ point validation checklist results. Emphasize that AI assists but the human always approves. Track adoption metrics: COIs generated via AI vs. manual, average review time, rejection rate, and edit frequency.
Custom AI Components
COI Field Extraction Prompt
Type: prompt
System prompt for GPT-5.4 mini that extracts structured COI field data from raw policy text pulled from the AMS. Takes unstructured policy data (dec page text, endorsement summaries) and returns a structured JSON object matching ACORD 25/26 form fields. Optimized for cost-efficiency on high-volume COI processing.
Implementation: see the system_prompt string in the extract_coi_fields_llm activity below.
Coverage Comparison Report Generator
Type: prompt
System prompt for GPT-5.4 that generates professional, plain-language coverage comparison reports from structured policy data for multiple carrier quotes. Produces a formatted narrative suitable for client presentation, highlighting coverage differences, gaps, and recommendations.
Implementation:
COI Generation Orchestration Workflow
Type: workflow The core orchestration workflow that coordinates the entire COI generation pipeline from request intake to delivery. Implemented as an Azure Function with durable orchestration pattern (Azure Durable Functions) for reliable execution with automatic retry and checkpointing.
Implementation:
# coi_orchestrator.py - Azure Durable Functions Orchestrator
import azure.functions as func
import azure.durable_functions as df
import json
import hashlib
import logging
from datetime import datetime, timedelta, timezone
app = df.DFApp(http_auth_level=func.AuthLevel.FUNCTION)
# Orchestrator function
@app.orchestration_trigger(context_name='context')
def coi_generation_orchestrator(context: df.DurableOrchestrationContext):
"""Main orchestration workflow for COI generation."""
request_data = context.get_input()
# Step 1: Extract policy data from AMS
policy_data = yield context.call_activity(
'fetch_policy_from_ams',
{
'policy_number': request_data['policy_number'],
'insured_name': request_data.get('insured_name'),
'ams_type': request_data.get('ams_type', 'applied_epic')
}
)
if not policy_data or policy_data.get('error'):
yield context.call_activity('log_audit_event', {
'event_type': 'coi_generation_failed',
'reason': f"AMS data fetch failed: {policy_data.get('error', 'unknown')}",
'request_data_hash': hashlib.sha256(json.dumps(request_data, sort_keys=True).encode()).hexdigest()
})
return {'status': 'error', 'message': 'Could not retrieve policy data from AMS'}
# Step 2: Extract COI fields using GPT-5.4 mini
coi_fields = yield context.call_activity(
'extract_coi_fields_llm',
{
'policy_data': policy_data,
'certificate_holder': request_data['certificate_holder'],
'special_requirements': request_data.get('special_requirements', {})
}
)
# Step 3: Validate with Patra AI
validation_result = yield context.call_activity(
'validate_with_patra',
{
'coi_fields': coi_fields,
'policy_data': policy_data
}
)
# Step 4: Generate PDF via DocSpring
pdf_result = yield context.call_activity(
'generate_coi_pdf',
{
'coi_fields': coi_fields,
'form_type': request_data.get('form_type', 'acord_25'),
'validation_warnings': validation_result.get('warnings', [])
}
)
# Step 5: Store in Azure Blob and create review request
storage_result = yield context.call_activity(
'store_and_create_review',
{
'pdf_url': pdf_result['pdf_url'],
'coi_fields': coi_fields,
'validation_result': validation_result,
'request_data': request_data
}
)
# Step 6: Wait for human approval with a 24-hour timeout. In Python Durable
# Functions, wait_for_external_event takes no timeout argument; race the
# event against a durable timer instead.
approval_task = context.wait_for_external_event('coi_review_decision')
timeout_task = context.create_timer(context.current_utc_datetime + timedelta(hours=24))
winner = yield context.task_any([approval_task, timeout_task])
if winner == approval_task:
    timeout_task.cancel()
    approval_event = approval_task.result
else:
    approval_event = None
if approval_event and approval_event.get('decision') == 'approved':
# Step 7: Deliver COI
delivery_result = yield context.call_activity(
'deliver_coi',
{
'pdf_url': pdf_result['pdf_url'],
'certificate_holder_email': request_data['certificate_holder'].get('email'),
'delivery_method': request_data.get('delivery_method', 'email'),
'certificial_enabled': request_data.get('use_certificial', False)
}
)
# Step 8: Update AMS activity log
yield context.call_activity(
'update_ams_activity',
{
'policy_number': request_data['policy_number'],
'activity_type': 'COI Issued',
'certificate_holder': request_data['certificate_holder']['name'],
'document_url': pdf_result['pdf_url']
}
)
# Step 9: Log audit event
yield context.call_activity('log_audit_event', {
'event_type': 'coi_approved_and_delivered',
'model_used': 'gpt-5.4-mini',
'input_data_hash': hashlib.sha256(json.dumps(policy_data, sort_keys=True).encode()).hexdigest(),
'output_document_hash': pdf_result.get('document_hash'),
'patra_validation_score': validation_result.get('score'),
'reviewer_email': approval_event.get('reviewer_email'),
'delivery_method': request_data.get('delivery_method', 'email')
})
return {'status': 'delivered', 'pdf_url': pdf_result['pdf_url']}
elif approval_event and approval_event.get('decision') == 'rejected':
yield context.call_activity('log_audit_event', {
'event_type': 'coi_rejected',
'reason': approval_event.get('reason', 'No reason provided'),
'reviewer_email': approval_event.get('reviewer_email')
})
return {'status': 'rejected', 'reason': approval_event.get('reason')}
else:
# Timeout - escalate
yield context.call_activity('send_escalation_notification', {
'message': f"COI review timed out for policy {request_data['policy_number']}",
'request_data': request_data
})
return {'status': 'timeout', 'message': 'Review timed out - escalated to manager'}
# Activity: Fetch policy data from AMS
@app.activity_trigger(input_name='input')
def fetch_policy_from_ams(input: dict) -> dict:
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
credential = DefaultAzureCredential()
kv_client = SecretClient(vault_url='https://kv-insurance-ai.vault.azure.net/', credential=credential)
ams_type = input.get('ams_type', 'applied_epic')
if ams_type == 'applied_epic':
client_id = kv_client.get_secret('ams-client-id').value
client_secret = kv_client.get_secret('ams-client-secret').value
# Get OAuth token
token_resp = requests.post(
'https://api.appliedsystems.com/oauth/token',
data={'grant_type': 'client_credentials', 'client_id': client_id, 'client_secret': client_secret}
)
token = token_resp.json()['access_token']
# Fetch policy
policy_resp = requests.get(
f"https://api.appliedsystems.com/v1/policies/{input['policy_number']}",
headers={'Authorization': f'Bearer {token}', 'Content-Type': 'application/json'}
)
return policy_resp.json()
elif ams_type == 'hawksoft':
api_key = kv_client.get_secret('ams-api-key').value
policy_resp = requests.get(
f"https://api.hawksoft.com/v2/policies/{input['policy_number']}",
headers={'Authorization': f'Bearer {api_key}', 'Content-Type': 'application/json'}
)
return policy_resp.json()
return {'error': f'Unsupported AMS type: {ams_type}'}
# Activity: Extract COI fields using LLM
@app.activity_trigger(input_name='input')
def extract_coi_fields_llm(input: dict) -> dict:
    import json
    from openai import AzureOpenAI
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    credential = DefaultAzureCredential()
    kv_client = SecretClient(vault_url='https://kv-insurance-ai.vault.azure.net/', credential=credential)
    api_key = kv_client.get_secret('azure-openai-key').value
    client = AzureOpenAI(
        api_key=api_key,
        api_version='2024-06-01',
        azure_endpoint='https://oai-insurance-coi.openai.azure.com/'
    )
    policy_text = json.dumps(input['policy_data'], indent=2)
    cert_holder = input['certificate_holder']
    special_reqs = input.get('special_requirements', {})
    # Use the COI Field Extraction Prompt (defined as separate component)
    system_prompt = """You are an expert insurance document processor specializing in Certificate of Insurance (COI) generation. Extract structured data from insurance policy information and return it in precise JSON format matching ACORD certificate form fields. Extract ONLY data explicitly stated. Never infer or fabricate. Set null for unavailable fields. Dates in MM/DD/YYYY. Monetary amounts as numbers only."""
    user_prompt = f"""Extract ACORD Certificate of Insurance fields from the following policy data.
Certificate Holder: {cert_holder.get('name')}, {cert_holder.get('address')}
Special Requirements: Additional Insured: {special_reqs.get('additional_insured', False)}, Waiver of Subrogation: {special_reqs.get('waiver_of_subrogation', False)}, Primary & Non-Contributory: {special_reqs.get('primary_noncontributory', False)}
Policy Data:
---
{policy_text}
---"""
    response = client.chat.completions.create(
        model='gpt-5.4-mini',
        messages=[
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_prompt}
        ],
        temperature=0.1,
        max_tokens=4000,
        response_format={'type': 'json_object'}
    )
    return json.loads(response.choices[0].message.content)
# Activity: Generate PDF via DocSpring
@app.activity_trigger(input_name='input')
def generate_coi_pdf(input: dict) -> dict:
    import base64
    import hashlib
    import requests
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    credential = DefaultAzureCredential()
    kv_client = SecretClient(vault_url='https://kv-insurance-ai.vault.azure.net/', credential=credential)
    api_key = kv_client.get_secret('docspring-api-key').value
    # Map form type to DocSpring template ID
    template_map = {
        'acord_25': 'tpl_XXXXXXXXXX',  # Replace with actual template IDs
        'acord_26': 'tpl_YYYYYYYYYY',
        'acord_125': 'tpl_ZZZZZZZZZZ'
    }
    template_id = template_map.get(input.get('form_type', 'acord_25'))
    coi_fields = input['coi_fields']
    # Map extracted fields to DocSpring template fields
    submission_data = {
        'producer_name': coi_fields.get('producer', {}).get('name'),
        'producer_address': coi_fields.get('producer', {}).get('address'),
        'insured_name': coi_fields.get('insured', {}).get('name'),
        'insured_address': coi_fields.get('insured', {}).get('address'),
        # ... map all fields per DocSpring template configuration
    }
    auth = base64.b64encode(f'{api_key}:'.encode()).decode()
    response = requests.post(
        f'https://api.docspring.com/api/v1/templates/{template_id}/submissions',
        headers={'Authorization': f'Basic {auth}', 'Content-Type': 'application/json'},
        json={'data': submission_data, 'wait': True},
        timeout=60
    )
    result = response.json()
    pdf_url = result.get('submission', {}).get('download_url')
    # Calculate document hash for audit
    if pdf_url:
        pdf_content = requests.get(pdf_url, timeout=60).content
        doc_hash = hashlib.sha256(pdf_content).hexdigest()
    else:
        doc_hash = None
    return {'pdf_url': pdf_url, 'document_hash': doc_hash, 'submission_id': result.get('submission', {}).get('id')}
# HTTP trigger to start orchestration
@app.route(route='generate-coi', methods=['POST'])
@app.durable_client_input(client_name='client')
async def http_start_coi(req: func.HttpRequest, client: df.DurableOrchestrationClient):
    request_body = req.get_json()
    # Validate required fields
    required = ['policy_number', 'certificate_holder']
    for field in required:
        if field not in request_body:
            return func.HttpResponse(json.dumps({'error': f'Missing required field: {field}'}), status_code=400)
    instance_id = await client.start_new('coi_generation_orchestrator', None, request_body)
    return client.create_check_status_response(req, instance_id)
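The orchestrator these triggers start and signal (`coi_generation_orchestrator`) is not shown above. A minimal sketch of its control flow follows, written as a plain generator so the logic can be reasoned about and tested without the Functions runtime; in the deployed app it would carry `@app.orchestration_trigger(context_name='context')` and receive a `DurableOrchestrationContext`. The activity payload shapes are assumptions inferred from the activities above, not a definitive contract:

```python
def coi_generation_orchestrator(context):
    # context is a DurableOrchestrationContext in production; any object
    # exposing the same three methods works for local reasoning.
    req = context.get_input()
    # 1. Pull the policy from the AMS
    policy = yield context.call_activity('fetch_policy_from_ams', req)
    # 2. Extract ACORD COI fields with the LLM
    coi_fields = yield context.call_activity('extract_coi_fields_llm', {
        'policy_data': policy,
        'certificate_holder': req['certificate_holder'],
        'special_requirements': req.get('special_requirements', {}),
    })
    # 3. Render the PDF via DocSpring
    pdf = yield context.call_activity('generate_coi_pdf', {
        'form_type': req.get('form_type', 'acord_25'),
        'coi_fields': coi_fields,
    })
    # 4. Block until a human reviewer posts to /coi/{instance_id}/review
    decision = yield context.wait_for_external_event('coi_review_decision')
    return {'pdf': pdf, 'review': decision}
```

Because Durable Functions orchestrators are generators, this sketch can be driven step-by-step with a fake context object, which is also how the review-gate behavior (step 4) can be verified before deployment.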
# HTTP trigger for review decisions
@app.route(route='coi/{instance_id}/review', methods=['POST'])
@app.durable_client_input(client_name='client')
async def submit_review(req: func.HttpRequest, client: df.DurableOrchestrationClient):
    instance_id = req.route_params['instance_id']
    decision = req.get_json()
    await client.raise_event(instance_id, 'coi_review_decision', decision)
    return func.HttpResponse(json.dumps({'status': 'review submitted'}), status_code=200)

AMS Data Field Mapper
Type: integration. A configuration-driven mapping layer that translates field names between the different AMS platforms (Applied Epic, Vertafore AMS360, HawkSoft) and the standardized ACORD form field schema used by the COI extraction prompt and the DocSpring templates, allowing the same orchestration logic to work across multiple AMS platforms.
Implementation:
# ams_field_mapper.py
from typing import Any

# Field mapping configurations for each AMS platform
AMS_FIELD_MAPPINGS = {
    'applied_epic': {
        'insured_name': 'policy.namedInsured.fullName',
        'insured_address': 'policy.namedInsured.address.streetAddress',
        'insured_city': 'policy.namedInsured.address.city',
        'insured_state': 'policy.namedInsured.address.state',
        'insured_zip': 'policy.namedInsured.address.zipCode',
        'policy_number': 'policy.policyNumber',
        'effective_date': 'policy.effectiveDate',
        'expiration_date': 'policy.expirationDate',
        'carrier_name': 'policy.carrier.name',
        'carrier_naic': 'policy.carrier.naicCode',
        'line_of_business': 'policy.lineOfBusiness',
        'gl_each_occurrence': 'policy.coverages[?type=GL].limits.eachOccurrence',
        'gl_general_aggregate': 'policy.coverages[?type=GL].limits.generalAggregate',
        'gl_products_comp_op': 'policy.coverages[?type=GL].limits.productsCompletedOps',
        'gl_personal_adv_injury': 'policy.coverages[?type=GL].limits.personalAdvertisingInjury',
        'gl_damage_rented_premises': 'policy.coverages[?type=GL].limits.damageRentedPremises',
        'gl_medical_expense': 'policy.coverages[?type=GL].limits.medicalExpense',
        'auto_combined_single_limit': 'policy.coverages[?type=AUTO].limits.combinedSingleLimit',
        'auto_bodily_injury_person': 'policy.coverages[?type=AUTO].limits.bodilyInjuryPerPerson',
        'auto_bodily_injury_accident': 'policy.coverages[?type=AUTO].limits.bodilyInjuryPerAccident',
        'auto_property_damage': 'policy.coverages[?type=AUTO].limits.propertyDamage',
        'wc_statutory': 'policy.coverages[?type=WC].limits.statutory',
        'wc_el_each_accident': 'policy.coverages[?type=WC].limits.elEachAccident',
        'wc_el_disease_employee': 'policy.coverages[?type=WC].limits.elDiseaseEachEmployee',
        'wc_el_disease_policy': 'policy.coverages[?type=WC].limits.elDiseasePolicyLimit',
        'umbrella_each_occurrence': 'policy.coverages[?type=UMBRELLA].limits.eachOccurrence',
        'umbrella_aggregate': 'policy.coverages[?type=UMBRELLA].limits.aggregate',
        'endorsements': 'policy.endorsements[*]',
        'additional_insureds': 'policy.additionalInsureds[*]',
        'description_of_operations': 'policy.descriptionOfOperations'
    },
    'hawksoft': {
        'insured_name': 'PolicyInfo.InsuredName',
        'insured_address': 'PolicyInfo.InsuredAddress.Street',
        'insured_city': 'PolicyInfo.InsuredAddress.City',
        'insured_state': 'PolicyInfo.InsuredAddress.State',
        'insured_zip': 'PolicyInfo.InsuredAddress.Zip',
        'policy_number': 'PolicyInfo.PolicyNumber',
        'effective_date': 'PolicyInfo.EffDate',
        'expiration_date': 'PolicyInfo.ExpDate',
        'carrier_name': 'PolicyInfo.CompanyName',
        'carrier_naic': 'PolicyInfo.NAICCode',
        'line_of_business': 'PolicyInfo.LOB',
        'gl_each_occurrence': 'Coverages.GL.EachOccurrence',
        'gl_general_aggregate': 'Coverages.GL.GenAggregate',
        'gl_products_comp_op': 'Coverages.GL.ProdCompOps',
        'gl_personal_adv_injury': 'Coverages.GL.PersAdvInjury',
        'gl_damage_rented_premises': 'Coverages.GL.DmgRentedPrem',
        'gl_medical_expense': 'Coverages.GL.MedExpense',
        'auto_combined_single_limit': 'Coverages.Auto.CSL',
        'wc_el_each_accident': 'Coverages.WC.ELEachAccident',
        'wc_el_disease_employee': 'Coverages.WC.ELDiseaseEmployee',
        'wc_el_disease_policy': 'Coverages.WC.ELDiseasePolicyLimit',
        'umbrella_each_occurrence': 'Coverages.Umbrella.EachOccurrence',
        'umbrella_aggregate': 'Coverages.Umbrella.Aggregate',
        'endorsements': 'Endorsements[*]',
        'additional_insureds': 'AdditionalInsureds[*]',
        'description_of_operations': 'PolicyInfo.DescOfOps'
    },
    'ams360': {
        'insured_name': 'Policy.Customer.Name',
        'insured_address': 'Policy.Customer.Address1',
        'insured_city': 'Policy.Customer.City',
        'insured_state': 'Policy.Customer.State',
        'insured_zip': 'Policy.Customer.ZipCode',
        'policy_number': 'Policy.PolicyNumber',
        'effective_date': 'Policy.EffectiveDate',
        'expiration_date': 'Policy.ExpirationDate',
        'carrier_name': 'Policy.Company.CompanyName',
        'carrier_naic': 'Policy.Company.NAICNumber',
        'line_of_business': 'Policy.PolicyType',
        'gl_each_occurrence': 'Policy.Limits.GLEachOccurrence',
        'gl_general_aggregate': 'Policy.Limits.GLGeneralAggregate',
        'endorsements': 'Policy.Endorsements[*]',
        'additional_insureds': 'Policy.AdditionalInsureds[*]',
        'description_of_operations': 'Policy.DescriptionOfOperations'
    }
}

def resolve_nested_path(data: dict, path: str) -> Any:
    """Resolve a dot-notation path with array support against a nested dict."""
    parts = path.split('.')
    current = data
    for part in parts:
        if current is None:
            return None
        if '[*]' in part:
            key = part.replace('[*]', '')
            current = current.get(key, [])
        elif '[?' in part:
            # Simple filter: field[?type=VALUE].rest
            key = part.split('[?')[0]
            filter_expr = part.split('[?')[1].rstrip(']')
            filter_key, filter_val = filter_expr.split('=')
            items = current.get(key, [])
            current = next((item for item in items if item.get(filter_key) == filter_val), None)
        else:
            current = current.get(part) if isinstance(current, dict) else None
    return current

def map_ams_to_standard(ams_data: dict, ams_type: str) -> dict:
    """Map AMS-specific field structure to standardized schema."""
    mapping = AMS_FIELD_MAPPINGS.get(ams_type)
    if not mapping:
        raise ValueError(f'Unsupported AMS type: {ams_type}')
    result = {}
    for standard_field, ams_path in mapping.items():
        result[standard_field] = resolve_nested_path(ams_data, ams_path)
    return result

def get_supported_ams_types() -> list:
    return list(AMS_FIELD_MAPPINGS.keys())
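Before wiring up a new AMS, it helps to sanity-check the path syntax against a toy payload. Below is a self-contained copy of the resolver so the snippet runs standalone; the payload shape is illustrative only, not actual Applied Epic output:

```python
from typing import Any

def resolve_nested_path(data: dict, path: str) -> Any:
    """Same rules as the mapper: dot paths, [*] arrays, [?field=value] filters."""
    current = data
    for part in path.split('.'):
        if current is None:
            return None
        if '[*]' in part:
            current = current.get(part.replace('[*]', ''), [])
        elif '[?' in part:
            key, filter_expr = part.split('[?')
            filter_key, filter_val = filter_expr.rstrip(']').split('=')
            items = current.get(key, [])
            current = next((i for i in items if i.get(filter_key) == filter_val), None)
        else:
            current = current.get(part) if isinstance(current, dict) else None
    return current

# Illustrative payload; real AMS responses will differ
sample = {
    'policy': {
        'policyNumber': 'GL-0012345',
        'coverages': [
            {'type': 'GL', 'limits': {'eachOccurrence': 1000000}},
            {'type': 'AUTO', 'limits': {'combinedSingleLimit': 500000}},
        ],
    }
}
print(resolve_nested_path(sample, 'policy.policyNumber'))  # GL-0012345
print(resolve_nested_path(sample, 'policy.coverages[?type=GL].limits.eachOccurrence'))  # 1000000
```

Note that a filter with no matching item (e.g. `[?type=WC]` here) resolves to `None` rather than raising, which is what lets the mapper emit `null` for coverages a policy simply does not carry.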
# Usage:
#   standardized = map_ams_to_standard(raw_ams_payload, 'applied_epic')  # or 'hawksoft', 'ams360'

Email COI Request Parser
Type: agent. An AI agent that monitors the agency's COI request email inbox, parses unstructured email requests into structured COI generation parameters, and triggers the orchestration pipeline. It handles a variety of request formats, including free-text emails, forwarded certificate holder requests, and renewal/update requests.
Implementation:
# Email intake agent with Azure OpenAI integration and validation logic
# coi_email_parser.py - Email intake agent
import json
from typing import Dict

from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

EMAIL_PARSER_SYSTEM_PROMPT = """You are an insurance agency's COI request intake specialist. Parse incoming emails requesting Certificates of Insurance and extract structured data.
Extract the following from the email:
1. certificate_holder_name: The entity requesting the COI (the party that needs proof of insurance)
2. certificate_holder_address: Full mailing address of the certificate holder
3. certificate_holder_email: Email to send the completed COI to
4. insured_name: The agency's client who needs the COI (the insured party)
5. policy_number: If referenced (may not be in the email)
6. coverage_types_requested: List of coverage types needed (GL, Auto, WC, Umbrella, Property)
7. special_requirements: Object with boolean flags:
- additional_insured: Is the certificate holder requesting to be added as additional insured?
- waiver_of_subrogation: Is waiver of subrogation requested?
- primary_noncontributory: Is primary & non-contributory status requested?
- specific_limits: Any minimum limit requirements mentioned
8. project_description: Description of the project/contract the COI is for (for Description of Operations field)
9. urgency: 'standard' (24hr), 'rush' (4hr), or 'immediate' (1hr) based on language used
10. request_type: 'new_coi', 'update_existing', 'renewal', or 'additional_insured_add'
RULES:
- If a field is not mentioned in the email, set it to null
- For coverage types, infer from context (e.g., 'general liability and auto' = ['GL', 'Auto'])
- Look for contractual language indicating special requirements (e.g., 'must be named as additional insured')
- Flag urgency based on words like 'ASAP', 'urgent', 'needed today', 'rush'
- If the email is unclear or you cannot determine the insured, set confidence to 'low'
Return ONLY valid JSON."""

class COIEmailParser:
    def __init__(self):
        credential = DefaultAzureCredential()
        kv_client = SecretClient(
            vault_url='https://kv-insurance-ai.vault.azure.net/',
            credential=credential
        )
        api_key = kv_client.get_secret('azure-openai-key').value
        self.llm_client = AzureOpenAI(
            api_key=api_key,
            api_version='2024-06-01',
            azure_endpoint='https://oai-insurance-coi.openai.azure.com/'
        )

    def parse_email(self, subject: str, body: str, sender: str) -> Dict:
        """Parse an incoming COI request email into structured data."""
        user_prompt = f"""Parse this COI request email:
From: {sender}
Subject: {subject}
Body:
{body}"""
        response = self.llm_client.chat.completions.create(
            model='gpt-5.4-mini',
            messages=[
                {'role': 'system', 'content': EMAIL_PARSER_SYSTEM_PROMPT},
                {'role': 'user', 'content': user_prompt}
            ],
            temperature=0.1,
            max_tokens=2000,
            response_format={'type': 'json_object'}
        )
        parsed = json.loads(response.choices[0].message.content)
        parsed['original_sender'] = sender
        parsed['original_subject'] = subject
        parsed['parse_model'] = 'gpt-5.4-mini'
        return parsed

    def validate_parsed_request(self, parsed: Dict) -> Dict:
        """Validate parsed data and determine if human intervention is needed."""
        issues = []
        if not parsed.get('insured_name'):
            issues.append('Could not determine insured name from email')
        if not parsed.get('certificate_holder_name'):
            issues.append('Could not determine certificate holder')
        if not parsed.get('coverage_types_requested'):
            issues.append('No coverage types identified')
        if parsed.get('confidence') == 'low':
            issues.append('Low confidence parse - manual review recommended')
        return {
            'is_valid': len(issues) == 0,
            'issues': issues,
            'can_auto_process': len(issues) == 0 and parsed.get('urgency') != 'immediate',
            'parsed_data': parsed
        }
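Because the gating rules in `validate_parsed_request` are pure logic, they can be unit-tested without Key Vault or Azure OpenAI. A standalone restatement of the same rules (the function name and its decoupling from the class are ours, for illustration):

```python
def validate_coi_request(parsed: dict) -> dict:
    """Same rules as COIEmailParser.validate_parsed_request, with no Azure dependencies."""
    issues = []
    if not parsed.get('insured_name'):
        issues.append('Could not determine insured name from email')
    if not parsed.get('certificate_holder_name'):
        issues.append('Could not determine certificate holder')
    if not parsed.get('coverage_types_requested'):
        issues.append('No coverage types identified')
    if parsed.get('confidence') == 'low':
        issues.append('Low confidence parse - manual review recommended')
    return {
        'is_valid': not issues,
        'issues': issues,
        # 'immediate' requests always get a human in the loop, even on a clean parse
        'can_auto_process': not issues and parsed.get('urgency') != 'immediate',
        'parsed_data': parsed,
    }
```

The `urgency != 'immediate'` clause is worth noting: a perfectly parsed but time-critical request is still routed to a person, which matches the human-in-the-loop posture elsewhere in this guide.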
# Integration with Power Automate / Azure Logic App:
# This class is called by an Azure Function HTTP trigger
# that receives webhook calls from the M365 email flow.
# POST /api/parse-coi-email
# Body: { 'subject': '...', 'body': '...', 'sender': '...' }
# Returns: validated parsed result JSON

Coverage Comparison Report Workflow
Type: workflow. An orchestration workflow for generating coverage comparison reports from multiple carrier quotes: it pulls quote data from the AMS (or accepts manual input), uses GPT-5.4 for narrative generation, and produces both a formatted Word document and a summary Excel spreadsheet.
Implementation:
# comparison_report_orchestrator.py
import hashlib
import json
from datetime import datetime
from io import BytesIO

import openpyxl
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient
from azure.storage.blob import BlobServiceClient
from docx import Document
from docx.shared import Pt
from openai import AzureOpenAI

class CoverageComparisonGenerator:
    def __init__(self):
        credential = DefaultAzureCredential()
        kv_client = SecretClient(
            vault_url='https://kv-insurance-ai.vault.azure.net/',
            credential=credential
        )
        self.openai_key = kv_client.get_secret('azure-openai-key').value
        self.llm_client = AzureOpenAI(
            api_key=self.openai_key,
            api_version='2024-06-01',
            azure_endpoint='https://oai-insurance-coi.openai.azure.com/'
        )
        self.blob_client = BlobServiceClient(
            account_url='https://stinsuranceaidocs.blob.core.windows.net/',
            credential=credential
        )

    def generate_report(self, client_name: str, agency_name: str, agent_name: str,
                        agent_license: str, quotes: list, contractual_requirements: dict = None) -> dict:
        """Generate a complete coverage comparison report."""
        # Step 1: Generate narrative via GPT-5.4
        narrative = self._generate_narrative(
            client_name, agency_name, agent_name, agent_license, quotes, contractual_requirements
        )
        # Step 2: Generate Word document
        docx_buffer = self._create_word_document(narrative, agency_name)
        # Step 3: Generate Excel summary
        xlsx_buffer = self._create_excel_summary(client_name, quotes)
        # Step 4: Upload to Azure Blob Storage
        timestamp = datetime.utcnow().strftime('%Y%m%d_%H%M%S')
        safe_client = client_name.replace(' ', '_').lower()
        docx_blob_name = f'{safe_client}/comparison_{timestamp}.docx'
        xlsx_blob_name = f'{safe_client}/comparison_{timestamp}.xlsx'
        container = self.blob_client.get_container_client('comparison-reports')
        container.upload_blob(docx_blob_name, docx_buffer.getvalue())
        container.upload_blob(xlsx_blob_name, xlsx_buffer.getvalue())
        # Step 5: Audit log
        doc_hash = hashlib.sha256(docx_buffer.getvalue()).hexdigest()
        return {
            'status': 'generated',
            'narrative_markdown': narrative,
            'docx_blob': docx_blob_name,
            'xlsx_blob': xlsx_blob_name,
            'document_hash': doc_hash,
            'carriers_compared': [q.get('carrier_name') for q in quotes],
            'generated_at': datetime.utcnow().isoformat()
        }

    def _generate_narrative(self, client_name, agency_name, agent_name,
                            agent_license, quotes, contractual_requirements) -> str:
        quotes_text = ''
        for i, quote in enumerate(quotes, 1):
            quotes_text += f"\n--- Carrier {i}: {quote.get('carrier_name', 'Unknown')} ---\n"
            quotes_text += json.dumps(quote, indent=2)
            quotes_text += '\n'
        reqs_text = json.dumps(contractual_requirements, indent=2) if contractual_requirements else 'None specified'
        # Uses the Coverage Comparison Report Generator prompt (defined separately)
        system_prompt = """You are an expert insurance advisor generating a professional Coverage Comparison Report. Compare quotes from multiple carriers. Be accurate, thorough, and write in clear business language. Include: Executive Summary, Premium Comparison table, Coverage Comparison by Line (with limits, key differences, gaps, recommendations), Endorsement Analysis, Financial Strength notes, Overall Recommendation, and Disclaimers. NEVER fabricate details. Format as clean Markdown."""
        user_prompt = f"""Generate a Coverage Comparison Report:
Client: {client_name}
Agency: {agency_name}
Agent: {agent_name} (License: {agent_license})
Date: {datetime.utcnow().strftime('%B %d, %Y')}
Contractual Requirements: {reqs_text}
Carrier Quotes:
{quotes_text}"""
        response = self.llm_client.chat.completions.create(
            model='gpt-5.4',
            messages=[
                {'role': 'system', 'content': system_prompt},
                {'role': 'user', 'content': user_prompt}
            ],
            temperature=0.3,
            max_tokens=8000
        )
        return response.choices[0].message.content

    def _create_word_document(self, narrative: str, agency_name: str) -> BytesIO:
        doc = Document()
        # Set default font
        style = doc.styles['Normal']
        style.font.name = 'Calibri'
        style.font.size = Pt(11)
        # Parse the Markdown narrative into Word formatting
        for line in narrative.split('\n'):
            line = line.strip()
            if line.startswith('# '):
                doc.add_heading(line[2:], level=1)
            elif line.startswith('## '):
                doc.add_heading(line[3:], level=2)
            elif line.startswith('### '):
                doc.add_heading(line[4:], level=3)
            elif line.startswith('- '):
                doc.add_paragraph(line[2:], style='List Bullet')
            elif line.startswith('|'):
                # Markdown table rows are kept as plain text; converting them to
                # native Word tables is left to the human review/edit step
                doc.add_paragraph(line, style='Normal')
            elif line:
                doc.add_paragraph(line)
        buffer = BytesIO()
        doc.save(buffer)
        buffer.seek(0)
        return buffer

    def _create_excel_summary(self, client_name: str, quotes: list) -> BytesIO:
        wb = openpyxl.Workbook()
        ws = wb.active
        ws.title = 'Premium Comparison'
        # Headers
        headers = ['Coverage Line'] + [q.get('carrier_name', f'Carrier {i+1}') for i, q in enumerate(quotes)]
        ws.append(headers)
        # Coverage lines
        lines = ['General Liability', 'Commercial Auto', 'Workers Compensation', 'Umbrella/Excess', 'Property']
        premium_keys = ['gl_premium', 'auto_premium', 'wc_premium', 'umbrella_premium', 'property_premium']
        for line_name, key in zip(lines, premium_keys):
            row = [line_name]
            for quote in quotes:
                row.append(quote.get(key, 'N/A'))
            ws.append(row)
        # Total row (non-numeric premiums are skipped)
        total_row = ['TOTAL']
        for quote in quotes:
            total = sum(quote.get(k, 0) or 0 for k in premium_keys if isinstance(quote.get(k), (int, float)))
            total_row.append(total)
        ws.append(total_row)
        buffer = BytesIO()
        wb.save(buffer)
        buffer.seek(0)
        return buffer
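The TOTAL row in `_create_excel_summary` silently tolerates missing or non-numeric premiums ('N/A', null), so a quote with a partially filled premium breakdown never breaks the spreadsheet. That behavior is easy to verify in isolation:

```python
PREMIUM_KEYS = ['gl_premium', 'auto_premium', 'wc_premium', 'umbrella_premium', 'property_premium']

def quote_total(quote: dict) -> float:
    """Sum only the premium fields that are actually numeric; 'N/A' and None contribute nothing."""
    return sum(quote.get(k, 0) or 0 for k in PREMIUM_KEYS
               if isinstance(quote.get(k), (int, float)))

print(quote_total({'gl_premium': 5200, 'auto_premium': 'N/A',
                   'wc_premium': None, 'umbrella_premium': 1800}))  # 7000
```

A side effect worth flagging during the human review step: a carrier whose quote is missing a line will show a lower TOTAL than one that priced every line, so totals are only comparable when all carriers quoted the same lines.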
# Azure Function HTTP trigger:
# POST /api/generate-comparison
# Body: { 'client_name': '...', 'agency_name': '...', 'agent_name': '...',
#         'agent_license': '...', 'quotes': [...], 'contractual_requirements': {...} }

Coverage Comparison Report — System Prompt
Coverage Comparison Report — User Prompt
Testing & Validation
- AMS API Connectivity Test: Execute a GET request to the AMS API for a known test policy number and verify the response contains all expected fields (insured name, policy number, effective/expiration dates, coverage limits). Expected result: HTTP 200 with complete JSON payload. Run for each AMS type the agency uses.
- GPT-5.4 mini Field Extraction Accuracy Test: Submit 10 diverse test policies (GL, Auto, WC, Umbrella, Property) through the COI field extraction prompt and compare output JSON against manually-verified reference data. Acceptance criteria: 98%+ field accuracy — every limit amount, date, and policy number must match exactly. Document any null fields and verify they are genuinely absent from source data.
- DocSpring ACORD 25 PDF Rendering Test: Generate 5 test ACORD 25 certificates via the DocSpring API with known data and visually verify every field is populated in the correct form location. Check: Producer section, Insured section, all coverage limit cells, policy numbers, dates, Description of Operations, Certificate Holder block, and Cancellation provisions. Print one certificate and compare to a manually-created reference.
- DocSpring ACORD 26 PDF Rendering Test: Repeat the ACORD 25 test for ACORD 26 (Certificate of Property Insurance) with property coverage data. Verify building limits, BPP limits, deductibles, and special property coverage fields are correctly placed.
- Patra AI Validation Integration Test: Submit a deliberately incorrect COI (e.g., wrong limits, mismatched dates) to Patra AI's policy checking module and verify it flags the discrepancies. Expected: validation score below threshold and specific warnings identifying the errors.
- Email Parser Accuracy Test: Forward 20 real historical COI request emails (with PII redacted) to the intake mailbox and verify the parser correctly extracts: certificate holder name (95%+), insured name (90%+), coverage types (90%+), and special requirements (85%+). Document parse failures for prompt refinement.
- End-to-End COI Generation Test: Submit a COI request via the email intake for a test policy, verify it flows through: email parse → AMS data fetch → LLM extraction → DocSpring rendering → Patra validation → review dashboard notification → approval → email delivery → AMS activity log update. Total elapsed time should be under 5 minutes for automated steps.
- Coverage Comparison Report Test: Submit 3 carrier quotes for the same insured through the comparison report generator. Verify the output includes: accurate premium comparison table, correct limits for each carrier, identified coverage differences, endorsement analysis, and the mandatory AI disclosure disclaimer. Have a licensed agent review for accuracy.
- Audit Log Completeness Test: Generate 10 COIs through the full pipeline, then query Azure Blob Storage audit-logs container. Verify each COI has a corresponding audit entry with: event_id, timestamp, model_used, input_data_hash, output_document_hash, reviewer_email, review_decision, and delivery_method. Verify no raw PII (full names, full policy numbers) appears in logs.
- Human Review Dashboard Functionality Test: Verify that the review dashboard correctly displays: pending COI count, PDF preview, source policy data comparison, Patra AI validation warnings, and approve/reject/edit buttons. Test that approval triggers delivery and rejection logs the reason. Test that the 24-hour timeout escalation notification fires correctly.
- Concurrent Load Test: Submit 20 simultaneous COI requests and verify the Azure Functions scale correctly without timeouts or data corruption. Monitor Application Insights for error rates and P95 latency. Acceptance criteria: all 20 complete successfully within 10 minutes.
- Security and Compliance Test: Verify Azure Key Vault access is restricted to the Function App managed identity only. Verify no API keys are exposed in application logs or error messages. Verify the review dashboard requires Entra ID authentication. Verify immutable storage policy is active on audit-logs container.
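The 98% field-accuracy criterion in the extraction test can be scored mechanically rather than eyeballed. A minimal scorer, assuming extracted and reference data have first been flattened to simple field dicts (the flattening helper is not shown here):

```python
def field_accuracy(extracted: dict, reference: dict) -> float:
    """Fraction of reference fields the extraction reproduced exactly (null counts as a value)."""
    if not reference:
        return 1.0
    matches = sum(1 for k, v in reference.items() if extracted.get(k) == v)
    return matches / len(reference)

ref = {'policy_number': 'GL-0012345', 'effective_date': '01/01/2025', 'gl_each_occurrence': 1000000}
out = {'policy_number': 'GL-0012345', 'effective_date': '01/01/2025', 'gl_each_occurrence': 2000000}
print(round(field_accuracy(out, ref), 2))  # 0.67
```

Exact equality is deliberate: the acceptance criterion says limits, dates, and policy numbers must match exactly, so no fuzzy matching or normalization is applied at scoring time.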
Client Handoff
The client handoff meeting should be a 2-hour session with the agency principal, office manager, and all CSRs/producers who will use the system. Cover the following topics:
1. System Overview — demonstrate the end-to-end COI generation workflow from email request to delivered certificate, showing the time savings versus the manual process.
2. Email Intake Training — how to forward or submit COI requests to the coi@ mailbox, including what information to include for fastest processing.
3. Review Dashboard Training — hands-on walkthrough of reviewing, approving, editing, and rejecting AI-generated COIs, with emphasis on the human-in-the-loop compliance requirement.
4. Coverage Comparison Reports — how to trigger comparison reports, interpret the output, and customize before sending to clients.
5. Error Handling — what to do when the AI parser fails, when Patra AI flags warnings, and when manual intervention is required.
6. Compliance Obligations — review the NAIC AI governance documentation, ensure all staff sign the compliance acknowledgment form, and explain the audit trail.
7. Escalation Procedures — how to contact the MSP for support issues, expected response times, and the Teams support channel.
Documentation to leave behind: (a) User Guide PDF with screenshots (20-30 pages), (b) Laminated Quick Reference Card for each desk, (c) Video walkthrough recording (hosted on SharePoint), (d) FAQ document, (e) NAIC AI Governance Program documentation (Word document in compliance folder), (f) Compliance acknowledgment forms (signed copies retained), (g) MSP support contact card with escalation matrix.
Success criteria to review together: (a) AI-generated COIs match manual baseline accuracy on 10 test cases, (b) Average COI generation time reduced from 15-20 minutes to under 3 minutes (including review), (c) All staff can independently submit, review, and approve a COI, (d) Audit logs are being generated correctly, (e) Patra AI validation is catching intentionally-introduced errors in test cases. Schedule a 30-day review meeting to assess adoption metrics and address any issues.
Maintenance
Ongoing MSP maintenance responsibilities for this implementation:
Weekly (30 minutes)
- Review Azure Application Insights dashboard for error rates, API latency trends, and failed COI generations. Alert threshold: >5% error rate or P95 latency >30 seconds.
- Check Azure OpenAI token consumption vs. budget. Alert if trending >20% above projected monthly spend.
- Review Patra AI validation warning trends — if rejection rates exceed 10%, investigate prompt accuracy or AMS data quality issues.
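The 20%-over-projection token-spend check reduces to a prorated comparison. A sketch with hypothetical numbers; in practice the month-to-date figure would come from Azure Cost Management or the Azure OpenAI usage metrics:

```python
def spend_over_budget(mtd_spend: float, projected_monthly: float,
                      day_of_month: int, days_in_month: int,
                      threshold: float = 1.20) -> bool:
    """True when month-to-date spend exceeds the prorated projection by more than the threshold."""
    expected_to_date = projected_monthly * day_of_month / days_in_month
    return mtd_spend > expected_to_date * threshold

print(spend_over_budget(55.0, 100.0, 15, 30))  # False (prorated allowance is $60)
print(spend_over_budget(75.0, 100.0, 15, 30))  # True
```

Prorating matters here: comparing raw month-to-date spend against the full monthly projection would never alert until month-end, after the overage has already accrued.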
Monthly (2 hours)
- Review and rotate API keys if required by security policy (Azure Key Vault makes this seamless).
- Analyze COI generation volume trends and adjust Azure Functions scaling plan if needed.
- Review audit logs for completeness and compliance — spot-check 5 random COIs against their audit entries.
- Check for Azure OpenAI model updates/deprecations — plan migration when new model versions are released (typically 6-12 month deprecation windows).
- Generate monthly usage report for client: COIs generated, comparison reports produced, average processing time, error rate, and cost.
Quarterly (4 hours)
- Update NAIC AI Governance Program documentation if state regulations change (monitor NAIC bulletins and state DOI announcements).
- Review and update LLM prompts based on accumulated feedback — if specific coverage types or ACORD form fields are consistently problematic, refine extraction prompts.
- Test AMS API connectivity and field mappings — AMS vendors occasionally update API schemas.
- Review DocSpring template accuracy — ACORD periodically updates form versions.
- Conduct security review: verify Key Vault access policies, check for any exposed credentials in logs, confirm immutable storage policy on audit container.
- Meet with agency principal to review ROI metrics and discuss expansion opportunities (additional form types, new workflows).
Annual
- Full system audit: end-to-end testing of all workflows, compliance documentation update, staff re-training if needed.
- Azure OpenAI model migration if current models are being deprecated.
- Patra AI subscription review and renewal.
- DocSpring template refresh for any new ACORD form versions.
- E&O insurance review to ensure continued coverage for AI-assisted document generation.
SLA Targets
- System availability: 99.5% (aligned with Azure Functions SLA)
- COI generation pipeline: <5 minutes from request to review-ready
- Critical issue response: 1 hour during business hours, 4 hours after hours
- Non-critical issue response: 4 hours during business hours, next business day after hours
Escalation Path
- L1 (MSP helpdesk): Basic troubleshooting, user access issues, email intake problems
- L2 (MSP technician): API connectivity issues, DocSpring template fixes, Azure monitoring alerts
- L3 (MSP developer/architect): Prompt engineering updates, AMS API integration changes, new workflow development
- Vendor escalation: Azure Support (Premier), DocSpring support, Patra AI support, Applied/HawkSoft/Vertafore support
Alternatives
Turnkey Platform Approach (Patra AI + Certificial Only)
Instead of building a custom orchestration middleware with Azure OpenAI, rely entirely on Patra AI for policy checking and quote comparison, and Certificial for COI issuance and distribution. No custom code is written — the MSP configures pre-built integrations between these platforms and the agency's AMS. Staff interact with Patra and Certificial web UIs directly rather than a custom dashboard.
Microsoft Copilot-Centric Approach
Use Microsoft 365 Copilot ($30/user/month) as the primary AI interface. Staff use Copilot in Word to generate comparison reports from pasted quote data, Copilot in Excel to create premium comparison spreadsheets, and Copilot in Outlook to draft COI delivery emails. DocSpring handles ACORD form rendering via a simple Power Automate flow triggered from a SharePoint list where staff enter COI details.
Open-Source LLM On-Premises Approach
Insurance-Specific IDP Platform (ACORD Transcriber + Inaza)
Use ACORD Solutions Group's official Transcriber platform for COI generation and Inaza for inbound ACORD form processing. This approach leverages the ACORD organization's own technology stack, which has native understanding of all ACORD form types and direct access to the complete forms library. No general-purpose LLM is used — all document intelligence comes from insurance-trained IDP (Intelligent Document Processing) models.