
Implementation Guide: Transcribe coverage review calls and draft policy change recommendation summaries

Step-by-step implementation guide for deploying AI to transcribe coverage review calls and draft policy change recommendation summaries for Insurance Agencies clients.

Hardware Procurement

Jabra Speak2 75 USB/Bluetooth Speakerphone

Jabra (GN Audio) | SKU 2775-419 (UC variant, USB-A/USB-C) | Qty: 10

$250/unit MSP cost (CDW/Ingram) / $340 suggested resale

Primary desk-based call capture device for each agent station. Features 4 beamforming noise-cancelling microphones optimized for voice capture, super wide-band full duplex audio, and up to 32 hours wireless talk time. Provides high-quality audio input essential for accurate transcription in open-office insurance agency environments.

Poly Sync 20 USB/Bluetooth Speakerphone

Poly (HP) | SKU 772C8AA (USB-C variant) | Qty: 2

$110/unit MSP cost / $155 suggested resale

Budget alternative speakerphones for overflow desks or part-time agent stations. Dual-mic array suitable for 1:1 calls at a lower price point. Deploy at stations with lower call volume or as spares.

Poly Sync 60 Conference Speakerphone

Poly (HP) | SKU 7S2J6AA | Qty: 1

$350/unit MSP cost / $475 suggested resale

Conference room speakerphone for group coverage reviews and team policy discussions. Six-microphone array fills small-to-medium conference rooms, capturing all participants clearly for accurate speaker diarization in multi-party calls.

Jabra Evolve2 75 Wireless ANC Headset

Jabra (GN Audio) | SKU 27599-989-999 (UC variant, USB-C) | Qty: 3

$200/unit MSP cost / $290 suggested resale

Active noise-cancelling headsets with 8-microphone array for agents who prefer headsets or work in noisy environments. Deploy for producers who take calls while mobile within the office or from home offices. Superior noise isolation improves transcription accuracy in challenging acoustic environments.

Software Procurement

Fireflies.ai Business Plan

Fireflies.ai | Business Plan | Qty: per seat

$19/user/month (annual billing) or $29/user/month (monthly) — MSP cost; resell at $32–$40/user/month

Phase 1 transcription and meeting AI platform. Auto-joins Zoom, Teams, Google Meet, and RingCentral calls to record, transcribe, and generate initial AI summaries. Provides immediate value with zero custom development. Includes speaker diarization, topic detection, action item extraction, and search across all transcripts. Serves as the quick-win deployment while the custom pipeline is built in Phase 2.

Deepgram Nova-3 API

Deepgram | Nova-3 | Qty: usage-based API (pay-as-you-go)

$0.0043/minute pre-recorded, $0.0077/minute streaming — estimated $5–$15/agent/month for typical insurance agency call volumes (20–40 calls/agent/month averaging 15 min each)

Phase 2 primary speech-to-text engine. Nova-3 delivers best-in-class accuracy at the lowest per-minute cost with per-second billing (no rounding waste). Features include speaker diarization, punctuation, smart formatting, topic detection, and PII redaction. $200 free credits included on sign-up for development and testing.
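To make the per-second billing point concrete, here is a small sketch using the listed pre-recorded rate; the whole-minute-rounding comparison is a hypothetical baseline for illustration, not a claim about any vendor's billing:

```python
# Compare per-second billing against hypothetical whole-minute rounding,
# using the listed Nova-3 pre-recorded rate.
import math

RATE_PER_MIN = 0.0043  # USD per minute (rate from the listing above)

def cost_per_second_billing(duration_sec: float) -> float:
    """Bill the exact duration at the per-minute rate (no rounding waste)."""
    return duration_sec / 60 * RATE_PER_MIN

def cost_minute_rounded(duration_sec: float) -> float:
    """Hypothetical baseline: round every call up to a whole minute."""
    return math.ceil(duration_sec / 60) * RATE_PER_MIN

# A month of 300 calls averaging 15 min 20 s each:
calls, dur = 300, 15 * 60 + 20
print(round(cost_per_second_billing(dur) * calls, 2))  # exact-seconds monthly total
print(round(cost_minute_rounded(dur) * calls, 2))      # rounded-up monthly total
```

The difference is small per call but compounds across hundreds of calls per month.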

OpenAI API (GPT-4.1-mini and GPT-5.4)

OpenAI | GPT-4.1-mini / GPT-5.4

GPT-4.1-mini: $0.40/million output tokens (~$0.02–$0.05 per summary); GPT-5.4: $10.00/million output tokens (~$0.15–$0.30 per summary for complex commercial lines). Estimated $10–$40/agency/month.

Phase 2 LLM summarization engine. Processes raw transcripts through insurance-specific prompt templates to generate structured policy change recommendation summaries. GPT-4.1-mini handles standard personal lines reviews cost-effectively; GPT-5.4 is reserved for complex commercial lines or high-value accounts requiring deeper analysis.
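As a sketch of how the summarization request body might be built inside the pipeline (model names and the prompt placeholder are taken from this guide; the payload shape follows the OpenAI chat-completions API and should be verified against current API documentation):

```python
# Build a chat-completions request body that routes standard personal lines
# reviews to the cheaper model and complex commercial lines to the larger one.
import json

SYSTEM_PROMPT = "...insurance coverage review summarization prompt..."  # see Custom AI Components

def build_summary_request(transcript: str, complex_commercial: bool) -> dict:
    # Model names as listed in this guide's software procurement section.
    model = "gpt-5.4" if complex_commercial else "gpt-4.1-mini"
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
        # Request JSON output so the downstream parsing step stays simple.
        "response_format": {"type": "json_object"},
        "temperature": 0.2,
    }

body = build_summary_request("Agent: Thanks for calling about your renewal...", complex_commercial=False)
print(json.dumps(body)[:40])
```

The routing flag would be set upstream, e.g., from the call's line of business in the AMS.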

Microsoft 365 Business Standard or Premium

Microsoft | SaaS, per-seat monthly

$12.50–$22.00/user/month (assumed pre-existing in most agencies)

Baseline productivity platform. Required for Outlook email delivery of summaries, Teams integration for video-based coverage reviews, SharePoint/OneDrive for transcript archive storage, and Azure AD for single sign-on to the solution dashboard. Most insurance agencies already have M365; verify plan tier during prerequisites.

n8n Workflow Automation Platform

n8n | Cloud (Starter/Pro) or self-hosted

Self-hosted: $0 software + hosting on existing server; Cloud: $24/month (Starter) or $60/month (Pro). Recommend Cloud Pro for MSP management.

Workflow automation middleware that orchestrates the entire pipeline: receives VoIP webhook triggers or scheduled recording fetches, calls Deepgram API for transcription, passes transcript to OpenAI for summarization, formats output, pushes to AMS API, and sends email notification to agent. No-code/low-code alternative to custom application development, dramatically reducing implementation time and maintenance burden.

Azure Blob Storage

Microsoft Azure

~$0.018/GB/month for Hot tier; estimated $2–$5/month for typical agency storing 6 months of compressed audio recordings

Secure, encrypted cloud storage for call audio recordings and raw transcripts. Provides immutable audit trail for E&O compliance and regulatory requirements. Configured with lifecycle policies to automatically tier recordings to Cool storage after 90 days and Archive after 1 year to minimize cost.

RingCentral Call Recording API

RingCentral | API access on MVP plans

$0 additional (API access included in RingCentral MVP plans); RingCentral subscription assumed pre-existing at $20–$35/user/month

Programmatic access to call recordings for agencies using RingCentral as their VoIP provider. The API enables automatic retrieval of completed call recordings via webhook notification, feeding them into the transcription pipeline without agent intervention.

Prerequisites

  • Active business VoIP phone system with call recording capability and API access (RingCentral MVP, Zoom Phone, Microsoft Teams Phone, Nextiva, or 8x8). Call recording must be enabled at the admin level for all agent extensions involved in coverage reviews.
  • Microsoft 365 Business Standard or higher subscription for all participating agents, with Exchange Online mailboxes active and Azure AD accounts provisioned.
  • Agency Management System (AMS) with API access enabled — Applied Epic (with Applied Dev Center credentials), HawkSoft (with Partner API enabled by agency admin), EZLynx (with admin portal access), or Vertafore AMS360 (with WSAPI enabled). Obtain API credentials and endpoint documentation before beginning Phase 3.
  • Minimum 25 Mbps symmetrical internet connection at the agency office, with QoS configured to prioritize VoIP traffic. Verify firewall allows outbound HTTPS (TCP 443) and WebSocket (WSS) connections to: api.deepgram.com, api.openai.com, *.fireflies.ai, and the relevant AMS API endpoints.
  • Written call recording consent policy approved by the agency principal and reviewed by the agency's legal counsel or E&O carrier. Must include: (1) automated consent disclosure script for the phone system IVR/greeting, (2) documentation of all-party consent requirements for states where the agency operates, (3) data retention and destruction policy for audio recordings and transcripts.
  • Designated agency 'AI Champion' — a senior CSR or account manager who will serve as the primary tester, provide feedback on summary quality, and train other agents. This person should be comfortable with technology and have deep knowledge of the agency's book of business and carrier appetite.
  • Admin credentials for the VoIP phone system, M365 tenant (Global Admin or at minimum Application Admin), and the AMS platform. These are needed for API key generation, app registration, and webhook configuration.
  • Budget approval for: (1) hardware procurement of $2,800–$4,500 one-time, (2) Phase 1 SaaS subscription of $19–$29/user/month, (3) Phase 2 API costs of $15–$55/month, (4) Phase 3 development/integration of $2,000–$8,000 one-time, (5) ongoing managed service fee of $700–$1,800/month.
  • SSL/TLS certificates and DNS control for any custom subdomain (e.g., ai.agencyname.com) if deploying a self-hosted n8n instance or custom dashboard. Otherwise, n8n Cloud handles this automatically.
  • Test call recordings — gather 10–15 sample coverage review call recordings (personal auto, homeowners, commercial general liability, and umbrella/excess) to use during prompt engineering and quality validation. Ensure recordings include both agent and client audio clearly.

Installation Steps

...

Step 1: Audit Current Environment and Document Baseline

Before any deployment, perform a thorough audit of the agency's existing phone system, AMS, network infrastructure, and call recording practices. Document the current workflow for coverage review calls: how agents schedule them, what they document today, where notes are stored, and how long post-call documentation takes. This baseline is critical for measuring ROI after deployment.

Network and endpoint connectivity checks from agent workstations
powershell
# Network and endpoint connectivity checks from a representative agent workstation.
# Note: Invoke-WebRequest throws on non-2xx responses; a thrown 401/403 error
# still confirms the endpoint is reachable through the firewall.
Invoke-WebRequest -Uri 'https://api.deepgram.com' -Method HEAD -TimeoutSec 10
# Expected: HTTP 200, or an error showing 403 (confirms connectivity to Deepgram)

# Test OpenAI endpoint connectivity:
Invoke-WebRequest -Uri 'https://api.openai.com/v1/models' -Method GET -Headers @{'Authorization'='Bearer sk-test'} -TimeoutSec 10
# Expected: an error showing HTTP 401 (connectivity confirmed; auth is meant to fail with the test key)

# Verify M365 tenant and user licenses. The legacy MSOnline module
# (Connect-MsolService) is deprecated; use the Microsoft Graph PowerShell SDK:
Connect-MgGraph -Scopes 'User.Read.All'
Get-MgUser -All -Property DisplayName,AssignedLicenses |
  Select-Object DisplayName, AssignedLicenses | Format-Table
Note

Document the exact AMS version, VoIP system model and firmware version, number of agent extensions, average calls per agent per day, and typical call duration. Take screenshots of existing call recording settings. This information drives configuration decisions in later steps. If the agency has no call recording enabled today, work with the VoIP vendor to enable it before proceeding — this is a hard prerequisite.

Step 2: Implement Recording Consent Disclosure

Implement automated consent disclosure on the agency's VoIP system. This is legally required before any recording/transcription begins, especially if the agency operates in or receives calls from two-party consent states (CA, CT, DE, FL, IL, MD, MA, MI, MT, NV, NH, PA, WA). Configure the IVR or auto-attendant to play a consent message before connecting to an agent, and add a manual disclosure script for outbound calls.

  • Example RingCentral IVR consent message (upload as .wav or use TTS): 'Thank you for calling [Agency Name]. This call may be recorded and transcribed for quality assurance and documentation purposes. By continuing this call, you consent to recording. If you do not wish to be recorded, please inform your agent and recording will be disabled for this call.'
  • RingCentral Admin Portal path: Phone System > Auto-Receptionist > IVR Menus > Add Greeting > Upload Audio
  • For Microsoft Teams: Teams Admin Center > Voice > Call Policies > Set 'Recording Consent' to Enabled | Teams Admin Center > Voice > Auto Attendants > Edit greeting to include disclosure
  • For Zoom Phone: Admin Portal > Phone System Management > Auto Receptionist > Business Hours > Greeting
Critical

Have the agency principal sign off on the consent disclosure language before deployment. Recommend the agency's E&O insurance carrier review the language as well. For outbound calls, agents must verbally disclose recording before the coverage review begins — create a laminated desk card with the script. For states like California and Illinois, implied consent (staying on the line) may not be sufficient; consult the agency's legal counsel about explicit opt-in requirements.

Step 3: Deploy Audio Capture Hardware

Unbox, configure, and deploy speakerphones and headsets at each agent workstation. Test audio quality with sample recordings to ensure clear capture of both agent and client (speakerphone) or agent-only (headset with VoIP providing client audio separately via call recording).

1. Jabra Speak2 75 setup: connect the USB-C cable to the agent workstation.
2. Install Jabra Direct for firmware updates and device management: download from https://www.jabra.com/software-and-services/jabra-direct
3. Open Jabra Direct > Devices > Speak2 75 > Update Firmware.
4. Set as the default audio device in Windows: Settings > System > Sound > Output: Jabra Speak2 75 / Input: Jabra Speak2 75.
5. Verify in the RingCentral app: Settings > Audio > Microphone: Jabra Speak2 75 / Speaker: Jabra Speak2 75.
6. Verify in Zoom: Settings > Audio > Microphone: Jabra Speak2 75.
7. Verify in Teams: Settings > Devices > Audio devices: Jabra Speak2 75.
8. Poly Sync 60 conference room setup: place in the center of the conference table.
9. Connect via USB-C to the room PC, or via Bluetooth to the meeting room system.
10. Install Poly Lens for management: https://www.poly.com/us/en/products/services/poly-lens
11. Update firmware via Poly Lens.
12. Set as the default audio device on the room PC.
Note

Position speakerphones on agent desks at least 12 inches from the monitor and keyboard to reduce noise pickup. For open-plan offices, consider the Jabra Evolve2 75 headset instead of a speakerphone to prevent cross-talk between adjacent agents. Test each device by making a sample call and playing back the recording — both agent and caller audio should be clearly distinguishable. If using softphone (RingCentral app, Teams), the VoIP call recording captures both sides digitally, so speakerphone audio quality primarily matters for in-person meetings, not phone calls.

Step 4: Deploy Phase 1: Fireflies.ai Business Transcription Platform

Sign up for Fireflies.ai Business plan for all participating agents. Configure calendar integration, VoIP integration, and custom vocabulary for insurance terminology. This provides immediate transcription capability while the custom pipeline is built in Phase 2.

1. Sign up at https://app.fireflies.ai/signup with the agency's admin email.
2. Select the Business plan ($19/user/month annual or $29/user/month monthly).
3. Add all agent users via Settings > Team > Invite Members.
4. Connect calendar integration: Settings > Integrations > Google Calendar or Microsoft Outlook (authorize OAuth for each agent's M365 account).
5. Connect the VoIP/meeting platform via Settings > Integrations: Zoom (authorize the Fireflies bot to join meetings); Microsoft Teams (install the Fireflies Teams app from the Teams App Store); RingCentral (enable via Settings > Integrations > RingCentral); Google Meet (authorize via Google Workspace).
6. Configure auto-join settings: Settings > Auto-join > enable 'Join all meetings on my calendar'. Set join behavior: join when the host joins, or join 1 minute after start time.
7. Add custom vocabulary for insurance terms via Settings > Intelligence > Custom Vocabulary: umbrella policy, excess liability, endorsement, rider, declarations page, binder, E&O, BOP, commercial general liability, workers comp, professional liability, D&O, EPLI, cyber liability, inland marine, dwelling fire, HO-3, HO-5, DP-3, ISO, ACORD, loss run, claims-made, occurrence, aggregate, deductible, coinsurance, replacement cost, actual cash value, ACV, RCV.
8. Configure transcript privacy: Settings > Privacy > enable 'Only meeting participants can view transcript' and 'Require admin approval for external sharing'.
Note

Fireflies.ai works best with scheduled meetings (calendar-based auto-join). For ad-hoc phone calls without a calendar event, agents will need to manually invite the Fireflies bot by adding fred@fireflies.ai to the call or using the Fireflies browser extension. Train agents on both methods. Fireflies retains transcripts on their servers — review their SOC 2 compliance documentation and execute a data processing agreement (DPA) before deployment. If the agency discusses PHI (health insurance), execute a BAA with Fireflies.ai.

Step 5: Configure Phase 1 Compliance Controls and Data Retention

Set up data retention policies, access controls, and audit logging for Fireflies.ai to meet insurance regulatory requirements (GLBA, NAIC Model Law #668). Configure PII handling and establish the transcript review workflow.

  • Settings > Security > Enable SSO with Microsoft Azure AD (if available on plan)
  • Settings > Security > Enable Two-Factor Authentication for all users
  • Settings > Privacy > Data Retention: Set to 7 years (insurance standard)
  • Settings > Team > Roles: Set producers as 'Member', agency principal as 'Admin'
  • Azure Portal > Azure AD > Security > Conditional Access > New Policy: Name: 'Fireflies Access - Require MFA', Users: All Fireflies users group, Cloud Apps: Fireflies.ai (add as Enterprise App), Conditions: All locations, Grant: Require MFA
  • In Fireflies: Settings > Workflows > Create Workflow — Trigger: New transcript completed; Action: Send email to agent with transcript link; Action: Add task in Fireflies: 'Review transcript and copy summary to AMS'; Deadline: 24 hours after call
Note

Insurance agencies must retain call documentation for the duration of the policy plus the statute of limitations (typically 3–7 years depending on state). Verify the Fireflies.ai Business plan supports the required retention period. If not, configure automated export of transcripts to Azure Blob Storage as a backup archive. Document the data flow in the agency's Written Information Security Program (WISP) as required by NAIC Model Law.
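The retention window described in the note can be computed per recording; a minimal helper, assuming a 7-year default (adjust the limitation period per state, and handle leap-day policy end dates before production use):

```python
# Compute the earliest destruction date for a recording/transcript:
# policy expiration plus the state's statute of limitations.
from datetime import date

def destroy_after(policy_end: date, limitation_years: int = 7) -> date:
    """Earliest date the artifact may be destroyed (naive year arithmetic)."""
    return policy_end.replace(year=policy_end.year + limitation_years)

print(destroy_after(date(2025, 6, 30)))  # 2032-06-30
```

A scheduled workflow could compare this date against blob metadata before any automated deletion.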

Step 6: Validate Phase 1 with Pilot Group and Gather Feedback

Run a 2-week pilot with the designated AI Champion and 2–3 additional agents. Conduct 20–30 coverage review calls through Fireflies.ai, evaluate transcription accuracy, summary usefulness, and identify insurance-specific terminology issues. Use this feedback to refine custom vocabulary and prepare prompt templates for Phase 2.

  • Create a tracking spreadsheet (SharePoint/Excel) with columns: Date, Agent, Call Type (Personal/Commercial), Duration, Transcription Accuracy (1-5), Summary Usefulness (1-5), Missed Terms, False Positives, Time Saved (minutes), Notes/Feedback
  • Share with pilot group via Teams: https://agencyname.sharepoint.com/sites/AIProject/Shared Documents/
  • Calculate baseline metrics after pilot: Average transcription accuracy score, Average post-call documentation time (before vs. after), List of commonly missed insurance terms (add to custom vocabulary), List of summary format preferences from agents
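The baseline-metric rollup from the tracking spreadsheet can be scripted; a sketch with made-up sample rows (column names follow the spreadsheet described above):

```python
# Aggregate pilot tracking rows into baseline metrics.
from collections import Counter

rows = [  # sample data for illustration, not real pilot results
    {"accuracy": 4, "usefulness": 5, "time_saved_min": 12, "missed_terms": ["BOP"]},
    {"accuracy": 3, "usefulness": 4, "time_saved_min": 8,  "missed_terms": ["EPLI", "BOP"]},
    {"accuracy": 5, "usefulness": 4, "time_saved_min": 10, "missed_terms": []},
]

avg_accuracy = sum(r["accuracy"] for r in rows) / len(rows)
avg_time_saved = sum(r["time_saved_min"] for r in rows) / len(rows)
# Terms missed on multiple calls are the first candidates for custom vocabulary.
missed = Counter(t for r in rows for t in r["missed_terms"])

print(round(avg_accuracy, 2), avg_time_saved, missed.most_common(1))
```

Exporting the SharePoint sheet to CSV and feeding it through a script like this keeps the Phase 2 vocabulary updates data-driven.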
Note

This pilot phase is essential — do not skip it. Agent feedback directly shapes the custom prompt templates in Phase 2. Common issues to watch for: insurance jargon transcription errors (e.g., 'BOP' transcribed as 'bop' or 'pop'), speaker diarization confusion when agent and client have similar voices, and summaries that are too generic to be actionable. Collect at least 10 sample transcripts that the AI Champion rates as 'good quality' — these become few-shot examples for Phase 2 prompt engineering.

Step 7: Set Up Phase 2 Infrastructure: API Accounts and n8n Workflow Engine

Create accounts for Deepgram, OpenAI, and n8n Cloud. Configure API keys, set usage limits, and deploy the n8n workflow engine that will orchestrate the custom transcription and summarization pipeline.

1. Deepgram account setup:
  • Sign up at https://console.deepgram.com/signup
  • Create a new project: 'Insurance-Agency-Transcription'
  • Generate an API key: Settings > API Keys > Create Key; scope 'Member' (read/write for transcription); label 'n8n-production'
  • Store the API key securely in the MSP password manager
  • Set a usage alert: Settings > Billing > Alerts > $50/month warning
2. OpenAI account setup:
  • Sign up at https://platform.openai.com/signup
  • Create an organization: '[Agency Name] AI Pipeline'
  • Generate an API key: API Keys > Create new secret key; name 'n8n-production-summarization'
  • Set usage limits: Settings > Limits > monthly budget $100
  • Enable GPT-4.1-mini and GPT-5.4 model access
3. n8n Cloud setup:
  • Sign up at https://app.n8n.cloud/register
  • Select the Pro plan ($60/month) for production workflows
  • Note the instance URL: https://[instance-name].app.n8n.cloud
  • Add credentials under n8n > Settings > Credentials: Deepgram API (Header Auth, Authorization: Token [API_KEY]); OpenAI (OpenAI API credential type); Azure Blob (Azure Storage Account credentials); RingCentral (OAuth2, Client ID + Secret from the RC Developer Portal); Email (SMTP via the agency's M365, or the Microsoft Graph API)
Azure Blob Storage Setup
bash
az login
az group create --name rg-insurance-ai --location eastus
az storage account create --name stinsuranceai --resource-group rg-insurance-ai --location eastus --sku Standard_LRS --encryption-services blob --min-tls-version TLS1_2
az storage container create --name call-recordings --account-name stinsuranceai --auth-mode login
az storage container create --name transcripts --account-name stinsuranceai --auth-mode login
az storage container create --name summaries --account-name stinsuranceai --auth-mode login
az storage account management-policy create --account-name stinsuranceai --resource-group rg-insurance-ai --policy @lifecycle-policy.json
lifecycle-policy.json
json
{
  "rules": [{
    "name": "archive-old-recordings",
    "type": "Lifecycle",
    "definition": {
      "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["call-recordings/"]},
      "actions": {
        "baseBlob": {
          "tierToCool": {"daysAfterModificationGreaterThan": 90},
          "tierToArchive": {"daysAfterModificationGreaterThan": 365}
        }
      }
    }
  }]
}
Note

Store all API keys in the MSP's secure password manager (e.g., IT Glue, Hudu, Keeper) and in n8n's encrypted credential store. Never hardcode keys in workflow configurations. Set up billing alerts on both Deepgram and OpenAI to prevent unexpected charges. The $200 Deepgram free credits will cover approximately 46,500 minutes of pre-recorded transcription — enough for several months of pilot usage. For n8n, the Cloud Pro plan supports up to 10,000 workflow executions per month, which is sufficient for an agency processing 200–400 calls per month.
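The free-credit coverage figure in the note can be sanity-checked from the listed rate:

```python
# $200 in Deepgram sign-up credits at the Nova-3 pre-recorded rate.
credits = 200.00
rate_per_min = 0.0043  # USD per minute (from the software procurement section)
minutes = credits / rate_per_min
print(int(minutes))  # roughly 46,500 minutes, matching the note
```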

Step 8: Build the Core Transcription-Summarization Pipeline in n8n

Create the main n8n workflow that processes call recordings through Deepgram for transcription and OpenAI for insurance-specific summarization. This is the heart of the Phase 2 custom pipeline, replacing the generic Fireflies.ai summaries with tailored insurance policy change recommendations.

1. Webhook Trigger: receives notification of a new call recording
2. HTTP Request: download the recording from the VoIP API
3. HTTP Request: upload the recording to Azure Blob Storage
4. HTTP Request: send the recording to the Deepgram Nova-3 API
5. Code Node: parse and format the transcript
6. HTTP Request: send the transcript to OpenAI for summarization
7. Code Node: parse and format the summary into structured output
8. HTTP Request: push the summary to the AMS API (Phase 3)
9. Send Email: deliver the summary to the agent
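The node sequence can be prototyped outside n8n to validate the data flow before wiring real credentials; a minimal sketch with stub functions standing in for the HTTP Request and Code nodes (all function names here are hypothetical):

```python
# Stub pipeline mirroring the n8n node sequence; each stub would be
# replaced by a real API call in the production workflow.
def download_recording(event: dict) -> bytes:
    return b"fake-audio"                      # stands in for the VoIP API download

def archive(audio: bytes) -> str:
    return "call-recordings/2025/call.mp3"    # stands in for the Azure Blob upload

def transcribe(audio: bytes) -> str:
    return "Agent: ... Client: ..."           # stands in for Deepgram Nova-3

def summarize(transcript: str) -> dict:
    return {"summary": "stub", "action_items": []}  # stands in for OpenAI

def process_call(event: dict) -> dict:
    audio = download_recording(event)
    blob = archive(audio)
    transcript = transcribe(audio)
    summary = summarize(transcript)
    return {"blob": blob, "summary": summary}  # then: push to AMS, email agent

result = process_call({"recordingId": "123"})
print(result["summary"]["summary"])
```

Swapping one stub at a time for a live call mirrors the node-by-node testing approach recommended in the note below.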
Note

Build and test each node individually before connecting the full pipeline. Start with a hardcoded test audio file URL (from your Phase 1 pilot recordings) to validate the Deepgram and OpenAI nodes independently. The n8n workflow should include error handling at each step: retry logic for API failures (3 retries with exponential backoff), error notification emails to the MSP monitoring address, and a dead-letter queue in Azure Blob Storage for failed processing attempts. Workflow name: 'Insurance Call Intelligence Pipeline'. Import into n8n via: Workflows > Import from File. See custom_ai_components section for complete workflow JSON and prompt templates.
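The retry behavior described above (3 retries with exponential backoff) can be sketched as follows; the delay values and the wrapped call are illustrative:

```python
# Retry a flaky API call up to 3 times with exponential backoff.
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise  # surface to the Error Trigger / dead-letter handling
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API failure")
    return "ok"

print(with_retries(flaky, base_delay=0.1))  # succeeds on the third attempt
```

In n8n itself this maps to the HTTP Request node's built-in retry settings plus an Error Trigger workflow for the final failure.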

Step 9: Configure VoIP Call Recording Webhook Integration

Set up the VoIP system to automatically notify the n8n pipeline when a new call recording is available. This creates the hands-free automation where coverage review calls are automatically captured and processed without agent intervention.

1. Log into the RingCentral Developer Portal: https://developers.ringcentral.com
2. Create a new app via Apps > Create App: App Name 'Insurance AI Transcription'; App Type 'Private (for internal use only)'; Platform Type 'Server/Web'; Permissions 'Read Call Recording' and 'Read Call Log'.
3. Note the Client ID and Client Secret.
4. Create the subscription via API (see command below).
5. For Microsoft Teams: Teams Admin Center > Voice > Call Recording Policy: enable 'Compliance recording' with your n8n webhook as the recording sink, or use Microsoft Graph API subscriptions (see command below).
6. For Zoom Phone: Zoom Marketplace > Build App > Webhook Only; Event subscriptions > add 'recording.completed'; notification endpoint URL: https://[n8n-instance].app.n8n.cloud/webhook/zoom-recording
RingCentral: Create webhook subscription via API
bash
curl -X POST 'https://platform.ringcentral.com/restapi/v1.0/subscription' \
  -H 'Authorization: Bearer {access_token}' \
  -H 'Content-Type: application/json' \
  -d '{
    "eventFilters": [
      "/restapi/v1.0/account/~/extension/~/call-log?type=Voice&recordingType=Automatic"
    ],
    "deliveryMode": {
      "transportType": "WebHook",
      "address": "https://[n8n-instance].app.n8n.cloud/webhook/ringcentral-recording"
    },
    "expiresIn": 630720000
  }'
Microsoft Teams: Create Graph API subscription for call recordings
http
POST https://graph.microsoft.com/v1.0/subscriptions
{
  "changeType": "created",
  "notificationUrl": "https://[n8n-instance].app.n8n.cloud/webhook/teams-recording",
  "resource": "communications/callRecords",
  "expirationDateTime": "2026-01-01T00:00:00Z",
  "clientState": "[random-secret-string]"
}
Note

Webhook URLs must be HTTPS. n8n Cloud provides HTTPS by default. If self-hosting n8n, ensure a valid SSL certificate is configured. Test the webhook by making a 2-minute test call and verifying the n8n workflow receives the trigger notification. For RingCentral, note that call recordings are not immediately available after the call ends — there may be a 1–5 minute delay before the recording URL is accessible. Build a retry mechanism (wait 60 seconds, then retry) into the n8n workflow for this scenario. If the agency uses multiple phone systems or has agents on cell phones for field visits, consider a hybrid approach where field calls are manually uploaded via a simple web form.
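For the Graph subscription shown above, the clientState value echoed in each notification should be verified before the payload is trusted; a sketch (the secret value is a placeholder), using a constant-time comparison:

```python
# Verify the clientState secret on incoming Graph change notifications
# before processing, per the Microsoft Graph webhooks model.
import hmac

# Same value sent in the subscription request (placeholder here).
EXPECTED_CLIENT_STATE = "random-secret-string"

def is_authentic(notification: dict) -> bool:
    received = notification.get("clientState", "")
    # compare_digest avoids leaking the secret via timing differences.
    return hmac.compare_digest(received, EXPECTED_CLIENT_STATE)

good = {"clientState": "random-secret-string", "resource": "communications/callRecords"}
bad = {"clientState": "spoofed"}
print(is_authentic(good), is_authentic(bad))
```

In n8n this check belongs in a Code Node immediately after the webhook trigger, rejecting unauthenticated notifications before any download step runs.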

Step 10: Deploy Phase 2 Pipeline to Production and Transition from Phase 1

After testing the custom pipeline with 20+ sample recordings, deploy it as the primary transcription and summarization engine. Run it in parallel with Fireflies.ai for 2 weeks to compare quality, then transition agents to the new system. Fireflies.ai can be retained as a backup or cancelled.

1. Enable the n8n production workflow: n8n > Workflows > 'Insurance Call Intelligence Pipeline' > Toggle Active: ON.
2. Create agent-facing summary delivery: configure the n8n email node to send formatted HTML summaries, including the summary, action items, coverage gaps, and an 'Approve & Push to AMS' link.
3. Run a parallel comparison for 2 weeks: both Fireflies.ai and the custom pipeline process the same calls; the agent rates both summaries in the tracking spreadsheet; compare accuracy, insurance relevance, and actionability.
4. After validation, disable Fireflies.ai auto-join: Fireflies > Settings > Auto-join > Disable. (Keep the account active for 30 days as a fallback before cancelling.)
5. Update agent workflow documentation: remove Fireflies.ai from agent onboarding docs, update the SOP with the new email-based summary workflow, and conduct a 30-minute training session for all agents.
Note

Do NOT cancel Fireflies.ai immediately. Run in parallel for at least 2 weeks to validate the custom pipeline handles edge cases: very long calls (60+ minutes), poor audio quality, multiple speakers, heavy insurance jargon, calls with hold music interruptions, and calls transferred between departments. If the custom pipeline quality is not meeting standards, extend Fireflies.ai while refining prompts. Some agencies may prefer to keep Fireflies.ai for its search/playback features and use the custom pipeline solely for the insurance-specific summarization layer.

Step 11: Build Phase 3: AMS Integration for Automated Activity Note Creation

Connect the summarization pipeline to the agency's AMS to automatically create activity notes with policy change recommendations. This eliminates the last manual step — agents review and approve the AI-generated summary, then it's pushed directly into the client's account in the AMS.

AMS API integration examples for Applied Epic, HawkSoft, EZLynx, and middleware fallback via Zapier/Make.com
http
# Applied Epic REST API Integration:
# 1. Register at Applied Dev Center: https://developer.appliedsystems.com
# 2. Obtain Enterprise ID and Epic database name from agency IT admin
# 3. Create OAuth2 application for API access
# 4. In n8n, add HTTP Request node after summarization:

# POST https://api.appliedepic.com/v1/activities
# Headers:
#   Authorization: Bearer {oauth_token}
#   Content-Type: application/json
# Body:
# {
#   "clientId": "{matched_client_id}",
#   "policyId": "{matched_policy_id}",
#   "activityType": "Coverage Review Note",
#   "description": "{ai_generated_summary}",
#   "followUpDate": "{recommended_followup_date}",
#   "assignedTo": "{agent_id}",
#   "priority": "Normal",
#   "tags": ["AI-Generated", "Coverage Review", "Policy Change Recommendation"]
# }

# HawkSoft Partner API Integration:
# 1. Agency admin enables Partner API in HawkSoft settings
# 2. Register as API Partner at https://partners.hawksoft.com
# 3. Obtain API key and configure OAuth flow
# 4. POST activity/note to client record via HawkSoft API

# EZLynx API Integration:
# 1. Contact EZLynx partner team for API access
# 2. Use EZLynx REST API to post activity notes
# 3. Leverage EVA integration for enhanced AI features

# For AMS platforms without robust APIs (e.g., QQ Catalyst):
# Use Zapier or Make.com as middleware:
# Trigger: Webhook from n8n with summary data
# Action: Create activity in AMS via Zapier's AMS connector or screen-scraping RPA
Note

AMS integration is the most variable and complex part of this project. Applied Epic has the most mature API, but requires a developer relationship with Applied Systems. HawkSoft's Partner API is newer but well-documented. For agencies using older AMS platforms without APIs, consider a 'semi-automated' approach: the n8n workflow generates the formatted activity note and opens a pre-filled web form where the agent clicks 'Copy to Clipboard' and pastes into the AMS. This reduces documentation time by 80% even without full API integration. Budget 4–8 weeks for AMS API integration development and testing.
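The commented Applied Epic body above can be assembled in the n8n Code Node; a sketch whose field names mirror that commented example (they are illustrative and must be validated against Applied's actual API documentation):

```python
# Build the activity-note payload from the structured summary JSON
# produced by the summarization step. Field names mirror the commented
# example above; validate against the real Applied Epic API before use.
def build_activity_payload(summary: dict, client_id: str, policy_id: str, agent_id: str) -> dict:
    return {
        "clientId": client_id,
        "policyId": policy_id,
        "activityType": "Coverage Review Note",
        "description": summary["summary_text"],
        "followUpDate": summary.get("followup_date"),
        "assignedTo": agent_id,
        "priority": "Normal",
        "tags": ["AI-Generated", "Coverage Review", "Policy Change Recommendation"],
    }

payload = build_activity_payload(
    {"summary_text": "Recommend adding water backup endorsement.", "followup_date": "2025-07-15"},
    client_id="C-1001", policy_id="P-2002", agent_id="A-7",
)
print(payload["activityType"])
```

Keeping the payload builder separate from the HTTP call makes the semi-automated (copy-to-clipboard) fallback trivial: the same dict is simply rendered into the pre-filled web form instead of POSTed.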

Step 12: Configure Monitoring, Alerting, and MSP Management Dashboard

Set up monitoring for the entire pipeline: API health, processing success rates, cost tracking, and quality metrics. Configure alerts so the MSP is notified of any failures before the client notices.

1. n8n workflow error handling: add an 'Error Trigger' workflow in n8n that fires on any workflow execution failure, sends email to msp-alerts@mspcompany.com, logs error details to the Azure Blob Storage 'error-logs' container, and creates a ticket in the MSP PSA (ConnectWise/Autotask) via API.
2. API usage monitoring: Deepgram Console: Settings > Billing > set an alert at 80% of budget. OpenAI dashboard: Settings > Limits > set an email alert at $80/month.
3. MSP monitoring dashboard (simple approach): an n8n scheduled workflow (daily at 7 AM) queries the Deepgram usage API, queries the OpenAI usage API, counts successful vs. failed n8n executions in the past 24 hours, and sends a daily digest email to the MSP team.
4. Uptime monitoring: add an n8n webhook health check to the MSP's existing monitoring platform (Datto RMM, NinjaRMM, or standalone such as UptimeRobot). URL: https://[n8n-instance].app.n8n.cloud/webhook/health-check; interval: 5 minutes; alert: email + SMS to the MSP on-call technician.
5. Monthly reporting workflow: an n8n scheduled workflow (1st of each month) compiles total calls processed, total transcription minutes, API cost breakdown (Deepgram + OpenAI), average processing time per call, error rate percentage, and estimated time saved (calls × avg_duration × documentation_factor), then sends a formatted PDF report to the agency principal and MSP account manager.
Note

Proactive monitoring is what differentiates an MSP managed AI service from a one-time project. The monthly report is also a powerful retention tool — it demonstrates ongoing value and justifies the managed service fee. Include a 'Time Saved' metric calculated as: (number of calls processed) × (average documentation time before AI, from baseline) × (80% automation factor). For a 10-agent agency processing 300 calls/month with a 12.5-minute documentation baseline, this equals 300 × 12.5 × 0.8 = 3,000 minutes, or 50 hours/month — a compelling ROI narrative.
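The 'Time Saved' arithmetic above can be sketched as a small helper for the reporting workflow (the function name and defaults here are illustrative, not part of the shipped workflow):

```javascript
// Estimate monthly hours saved for the ROI report.
// callsProcessed:   calls that went through the pipeline this month
// minutesPerCall:   baseline documentation time per call, from the
//                   pre-deployment baseline measurement
// automationFactor: fraction of documentation work the AI removes
//                   (the guide assumes 0.8)
function estimateHoursSaved(callsProcessed, minutesPerCall, automationFactor = 0.8) {
  return (callsProcessed * minutesPerCall * automationFactor) / 60;
}

// Example from the note: 300 calls/month, 12.5-minute baseline.
console.log(estimateHoursSaved(300, 12.5)); // → 50
```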

Custom AI Components

Insurance Coverage Review Summarization Prompt

Type: prompt

The core GPT-4.1-mini prompt template that transforms raw call transcripts into structured policy change recommendation summaries. This prompt is the primary value-add of the solution, incorporating deep insurance domain knowledge to identify coverage gaps, endorsement opportunities, and actionable follow-up tasks. It is designed to be used as the system prompt in the OpenAI API call within the n8n workflow.

Implementation:

Insurance Coverage Review Summarization Prompt

SYSTEM PROMPT:

You are an expert insurance coverage analyst assistant working for an independent insurance agency. Your job is to analyze transcripts of coverage review calls between insurance agents and their clients (policyholders), then produce a structured Policy Change Recommendation Summary.

You must:
1. Identify all policies discussed (auto, home, umbrella, commercial, life, etc.)
2. Extract any life changes mentioned (new home, new vehicle, new business, marriage, children, retirement, renovation, home-based business, etc.)
3. Identify COVERAGE GAPS — exposures mentioned or implied that are NOT currently covered
4. Recommend specific ENDORSEMENTS or POLICY CHANGES with carrier-standard endorsement names where possible
5. Flag any COMPLIANCE CONCERNS (e.g., coverage below state minimums, lapsed policies mentioned, E&O exposure)
6. Generate specific FOLLOW-UP TASKS with deadlines
7. Estimate urgency: URGENT (bind today), HIGH (within 1 week), MEDIUM (within 30 days), LOW (next renewal)

IMPORTANT RULES:
- Never fabricate policy details not mentioned in the transcript
- If something is unclear, flag it as 'NEEDS CLARIFICATION' rather than guessing
- Always recommend umbrella/excess liability review if not discussed
- Flag any mention of home-based business, rental property, or recreational vehicles as potential coverage gaps
- Use standard insurance terminology (ISO form numbers where applicable)
- Be specific about dollar amounts when discussed in the call
- Note if the client expressed price sensitivity — this affects recommendation priority

OUTPUT FORMAT (strict JSON):
{
  "call_metadata": {
    "date": "YYYY-MM-DD",
    "agent_name": "extracted from transcript",
    "client_name": "extracted from transcript",
    "call_duration_minutes": number,
    "call_type": "Annual Review | Mid-Term Review | New Business | Claim Follow-up"
  },
  "policies_discussed": [
    {
      "line_of_business": "Personal Auto | Homeowners | Commercial Package | etc.",
      "carrier": "if mentioned",
      "policy_number": "if mentioned",
      "current_status": "Active | Expiring | Lapsed | Unknown",
      "renewal_date": "if mentioned",
      "key_coverages_mentioned": ["liability limits", "deductibles", "endorsements discussed"]
    }
  ],
  "life_changes_identified": [
    {
      "change": "description of life change",
      "insurance_impact": "how this affects coverage needs",
      "urgency": "URGENT | HIGH | MEDIUM | LOW"
    }
  ],
  "coverage_gaps": [
    {
      "gap_description": "specific coverage gap identified",
      "risk_exposure": "what could go wrong without this coverage",
      "recommended_solution": "specific endorsement, policy, or limit change",
      "estimated_premium_impact": "if discussed or estimable",
      "urgency": "URGENT | HIGH | MEDIUM | LOW"
    }
  ],
  "endorsement_recommendations": [
    {
      "endorsement_name": "standard endorsement name (e.g., HO-461 Scheduled Personal Property)",
      "policy_to_endorse": "which policy",
      "reason": "why this is recommended based on the call",
      "urgency": "URGENT | HIGH | MEDIUM | LOW"
    }
  ],
  "compliance_flags": [
    {
      "issue": "description of compliance concern",
      "severity": "CRITICAL | WARNING | INFORMATIONAL",
      "recommended_action": "what to do"
    }
  ],
  "follow_up_tasks": [
    {
      "task": "specific action item",
      "assigned_to": "Agent | CSR | Client",
      "deadline": "specific date or relative timeframe",
      "urgency": "URGENT | HIGH | MEDIUM | LOW"
    }
  ],
  "client_sentiment": {
    "overall": "Satisfied | Neutral | Concerned | Frustrated",
    "price_sensitivity": "High | Medium | Low",
    "retention_risk": "High | Medium | Low",
    "notes": "brief sentiment summary"
  },
  "executive_summary": "2-3 sentence plain-English summary of the call and top recommendations for the agent's quick review"
}

USER PROMPT TEMPLATE:

Please analyze the following coverage review call transcript and generate a Policy Change Recommendation Summary.

CALL DATE: {call_date}
CALL DURATION: {call_duration} minutes

TRANSCRIPT:
---
{transcript_text}
---

Generate the structured summary following the exact JSON format specified.
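Because the prompt demands strict JSON, it is worth validating the model's output before downstream nodes consume it. A minimal sketch, with the key list taken from the OUTPUT FORMAT above (the function name is illustrative):

```javascript
// Check that a parsed summary object contains every top-level key the
// prompt's OUTPUT FORMAT requires; returns the list of missing keys.
const REQUIRED_KEYS = [
  'call_metadata', 'policies_discussed', 'life_changes_identified',
  'coverage_gaps', 'endorsement_recommendations', 'compliance_flags',
  'follow_up_tasks', 'client_sentiment', 'executive_summary'
];

function missingSummaryKeys(summary) {
  return REQUIRED_KEYS.filter(k => !(k in summary));
}
```

In the n8n workflow this would run right after parsing the OpenAI response; a non-empty result should route to the error handler rather than proceeding to email delivery.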

n8n Workflow: Insurance Call Intelligence Pipeline

Type: workflow

The complete n8n workflow that orchestrates the end-to-end pipeline: receives call recording notifications via webhook, downloads recordings, sends them to Deepgram for transcription with speaker diarization, passes the transcript to OpenAI for insurance-specific summarization, stores all artifacts in Azure Blob Storage, and delivers the formatted summary to the agent via email. Includes error handling, retry logic, and audit logging.

N8N Workflow Specification — import as JSON into n8n
n8n-workflow

Node 1: Webhook Trigger
- Type: Webhook
- HTTP Method: POST
- Path: /call-recording-ready
- Authentication: Header Auth (X-Webhook-Secret: [configured_secret])
- Response Mode: Immediately respond with 200
- Expected payload: { recording_url, call_id, caller_number, agent_extension, call_duration, call_direction, timestamp }
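A hypothetical example of that payload, plus a quick shape check to reject malformed notifications before any paid API work happens (field names come from the expected payload above; the values and function name are made up for illustration):

```javascript
// Illustrative webhook payload matching the fields Node 1 expects.
const samplePayload = {
  recording_url: 'https://media.example-voip.com/recording/abc123', // hypothetical URL
  call_id: 'CALL-001',
  caller_number: '+15555550100',
  agent_extension: '101',
  call_duration: 912,              // seconds
  call_direction: 'Inbound',
  timestamp: '2025-01-15T14:32:00Z'
};

// Reject payloads missing any required field.
function isValidPayload(p) {
  const required = ['recording_url', 'call_id', 'caller_number',
                    'agent_extension', 'call_duration', 'call_direction', 'timestamp'];
  return required.every(k => p[k] !== undefined && p[k] !== '');
}
```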

Node 2: Download Call Recording
- Type: HTTP Request
- Method: GET
- URL: ={{$json.recording_url}}
- Authentication: Predefined (RingCentral OAuth2 credential)
- Response Format: Binary (audio file)
- Retry on Fail: 3 times, 30 second interval
- Timeout: 120 seconds

Node 3: Upload to Azure Blob Storage
- Type: HTTP Request
- Method: PUT
- URL: https://stinsuranceai.blob.core.windows.net/call-recordings/{{$json.call_id}}_{{$json.timestamp}}.wav
- Headers: x-ms-blob-type: BlockBlob, Content-Type: audio/wav
- Authentication: Azure Storage Account credentials
- Body: Binary data from Node 2

Node 4: Transcribe with Deepgram Nova-3
- Type: HTTP Request
- Method: POST
- URL: https://api.deepgram.com/v1/listen?model=nova-3&smart_format=true&diarize=true&punctuate=true&paragraphs=true&utterances=true&redact=pci&redact=ssn&language=en-US
- Headers: Authorization: Token {{$credentials.deepgram_api_key}}, Content-Type: audio/wav
- Body: Binary audio data from Node 2
- Retry on Fail: 3 times, 60 second interval
- Timeout: 300 seconds (5 min for long recordings)

Node 5: Parse Deepgram Response
- Type: Code (JavaScript)
- Code:
const response = $input.first().json;
const results = response.results;

// Extract diarized transcript
let transcript = '';
let currentSpeaker = null;

if (results.utterances) {
  for (const utterance of results.utterances) {
    const speaker = `Speaker ${utterance.speaker}`;
    if (speaker !== currentSpeaker) {
      currentSpeaker = speaker;
      transcript += `\n\n${speaker}:\n`;
    }
    transcript += utterance.transcript + ' ';
  }
} else {
  // Fallback to channel-based transcript
  transcript = results.channels[0].alternatives[0].paragraphs?.transcript 
    || results.channels[0].alternatives[0].transcript;
}

const metadata = {
  call_id: $('Webhook Trigger').first().json.call_id,
  agent_extension: $('Webhook Trigger').first().json.agent_extension,
  caller_number: $('Webhook Trigger').first().json.caller_number,
  call_duration: $('Webhook Trigger').first().json.call_duration,
  call_date: $('Webhook Trigger').first().json.timestamp,
  transcription_confidence: results.channels[0].alternatives[0].confidence,
  word_count: transcript.split(/\s+/).length
};

return [{ json: { transcript, metadata } }];

Node 6: Store Transcript in Azure Blob
- Type: HTTP Request
- Method: PUT
- URL: https://stinsuranceai.blob.core.windows.net/transcripts/{{$json.metadata.call_id}}_transcript.json
- Body: JSON with transcript and metadata

Node 7: Summarize with OpenAI GPT-4.1-mini
- Type: HTTP Request
- Method: POST
- URL: https://api.openai.com/v1/chat/completions
- Headers: Authorization: Bearer {{$credentials.openai_api_key}}, Content-Type: application/json
- Body:
{
  "model": "gpt-4.1-mini",
  "response_format": { "type": "json_object" },
  "temperature": 0.3,
  "max_tokens": 4000,
  "messages": [
    {
      "role": "system",
      "content": "[INSERT FULL SYSTEM PROMPT FROM 'Insurance Coverage Review Summarization Prompt' COMPONENT]"
    },
    {
      "role": "user",
      "content": "Please analyze the following coverage review call transcript and generate a Policy Change Recommendation Summary.\n\nCALL DATE: {{$json.metadata.call_date}}\nCALL DURATION: {{$json.metadata.call_duration}} minutes\n\nTRANSCRIPT:\n---\n{{$json.transcript}}\n---\n\nGenerate the structured summary following the exact JSON format specified."
    }
  ]
}
- Retry on Fail: 2 times, 30 second interval
- Timeout: 120 seconds

Node 8: Parse Summary and Format Email
- Type: Code (JavaScript)
- Code:
const summary = JSON.parse($input.first().json.choices[0].message.content);
const metadata = $('Parse Deepgram Response').first().json.metadata;

// Format HTML email
let html = `<h2>📋 Coverage Review Summary</h2>`;
html += `<p><strong>Client:</strong> ${summary.call_metadata.client_name || 'Unknown'}</p>`;
html += `<p><strong>Agent:</strong> ${summary.call_metadata.agent_name || 'Unknown'}</p>`;
html += `<p><strong>Date:</strong> ${summary.call_metadata.date}</p>`;
html += `<p><strong>Duration:</strong> ${summary.call_metadata.call_duration_minutes} min</p>`;
html += `<hr>`;

html += `<h3>Executive Summary</h3><p>${summary.executive_summary}</p>`;

if (summary.coverage_gaps && summary.coverage_gaps.length > 0) {
  html += `<h3>⚠️ Coverage Gaps Identified</h3><ul>`;
  for (const gap of summary.coverage_gaps) {
    html += `<li><strong>[${gap.urgency}]</strong> ${gap.gap_description}<br>`;
    html += `<em>Risk:</em> ${gap.risk_exposure}<br>`;
    html += `<em>Recommendation:</em> ${gap.recommended_solution}</li>`;
  }
  html += `</ul>`;
}

if (summary.endorsement_recommendations && summary.endorsement_recommendations.length > 0) {
  html += `<h3>📝 Endorsement Recommendations</h3><ul>`;
  for (const e of summary.endorsement_recommendations) {
    html += `<li><strong>[${e.urgency}]</strong> ${e.endorsement_name} on ${e.policy_to_endorse}<br>`;
    html += `<em>Reason:</em> ${e.reason}</li>`;
  }
  html += `</ul>`;
}

if (summary.follow_up_tasks && summary.follow_up_tasks.length > 0) {
  html += `<h3>✅ Follow-Up Tasks</h3><ol>`;
  for (const task of summary.follow_up_tasks) {
    html += `<li><strong>[${task.urgency}]</strong> ${task.task} — <em>${task.assigned_to}</em> by ${task.deadline}</li>`;
  }
  html += `</ol>`;
}

if (summary.compliance_flags && summary.compliance_flags.length > 0) {
  html += `<h3>🚨 Compliance Flags</h3><ul>`;
  for (const flag of summary.compliance_flags) {
    html += `<li><strong>[${flag.severity}]</strong> ${flag.issue}<br>${flag.recommended_action}</li>`;
  }
  html += `</ul>`;
}

html += `<h3>Client Sentiment</h3>`;
html += `<p>Overall: ${summary.client_sentiment?.overall || 'N/A'} | Price Sensitivity: ${summary.client_sentiment?.price_sensitivity || 'N/A'} | Retention Risk: ${summary.client_sentiment?.retention_risk || 'N/A'}</p>`;

html += `<hr><p style="color:gray;font-size:11px;">AI-generated summary. Review for accuracy before acting. Call ID: ${metadata.call_id}</p>`;

return [{ json: { summary, metadata, html_email: html, subject: `Coverage Review Summary: ${summary.call_metadata.client_name || 'Client'} — ${summary.call_metadata.date}` } }];

Node 9: Store Summary in Azure Blob
- Type: HTTP Request
- Method: PUT
- URL: https://stinsuranceai.blob.core.windows.net/summaries/{{$json.metadata.call_id}}_summary.json
- Body: Full summary JSON

Node 10: Send Email to Agent
- Type: Microsoft Outlook (or SMTP)
- To: Agent email (lookup from agent_extension mapping)
- Subject: ={{$json.subject}}
- HTML Body: ={{$json.html_email}}
- Attachments: None (link to full transcript in body)

Node 11: Error Handler (connected to all nodes via Error Output)
- Type: Send Email
- To: msp-alerts@mspcompany.com
- Subject: '[ALERT] Insurance AI Pipeline Error — {{$json.call_id}}'
- Body: Error details, node name, timestamp

AGENT EXTENSION TO EMAIL MAPPING:
Create a static lookup table in n8n as a Code node or use a Google Sheet / SharePoint list:
{
  "101": {"name": "Jane Smith", "email": "jane@agencyname.com"},
  "102": {"name": "Bob Johnson", "email": "bob@agencyname.com"},
  "103": {"name": "Maria Garcia", "email": "maria@agencyname.com"}
}
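The static-table version of this lookup can be sketched as a plain function (in the workflow it would live inside an n8n Code node; the fallback address is an assumption for illustration):

```javascript
// Resolve an agent extension to routing info, with a safe fallback so the
// pipeline never fails just because an extension is unmapped.
const EXTENSION_MAP = {
  '101': { name: 'Jane Smith',   email: 'jane@agencyname.com' },
  '102': { name: 'Bob Johnson',  email: 'bob@agencyname.com' },
  '103': { name: 'Maria Garcia', email: 'maria@agencyname.com' }
};

function lookupAgent(extension, fallbackEmail = 'office@agencyname.com') {
  return EXTENSION_MAP[extension]
    || { name: 'Unknown Agent', email: fallbackEmail };
}
```

Routing unknown extensions to a monitored office mailbox (rather than dropping the summary) keeps calls from disappearing silently when a new hire has not been added to the map yet.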

Agent Extension Lookup Service

Type: integration

A lightweight mapping service that resolves VoIP phone extensions and caller IDs to agent names, email addresses, and optionally AMS client records. This enables the pipeline to route summaries to the correct agent and pre-populate client information in the AMS activity note. Implemented as an n8n sub-workflow or a SharePoint list lookup.

Implementation:

1. Create a SharePoint List named 'Agent Extension Map' with columns: Extension (Single line of text, Primary), AgentName (Single line of text), AgentEmail (Single line of text), AMSAgentId (Single line of text, optional), Department (Choice: Personal Lines, Commercial Lines, Life & Health)
2. Populate with all agent extensions.
3. In n8n, create a sub-workflow 'Lookup Agent by Extension': Input: agent_extension (string) | Node 1: Microsoft SharePoint > Get List Items — Site: agency SharePoint site, List: 'Agent Extension Map', Filter: Extension eq '{{$json.agent_extension}}' | Node 2: IF node — if results found, return agent info; else return default/unknown | Output: { agent_name, agent_email, ams_agent_id, department }
4. Call this sub-workflow from the main pipeline between the Webhook Trigger and the email send nodes.

CLIENT LOOKUP (Phase 3 enhancement):
  • Use the caller_number from the webhook payload
  • Query the AMS API to find a matching client by phone number
  • If a match is found, include client_id, client_name, and active policy numbers in the summarization prompt context
  • This dramatically improves summary quality by grounding the LLM with known policy data
http
# AMS API queries to find matching client by phone number

Applied Epic: GET /v1/clients?phone={caller_number}
HawkSoft: GET /api/clients?search={caller_number}
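Caller IDs arrive in varying formats (+1 prefix, dashes, parentheses), so the number should be normalized before querying the AMS. A sketch assuming US 10-digit matching (the function name is illustrative; adjust for international numbering if the agency serves those markets):

```javascript
// Strip formatting and a leading US country code so '+1 (555) 555-0100',
// '555-555-0100', and '5555550100' all produce the same AMS search key.
function normalizePhone(raw) {
  const digits = String(raw).replace(/\D/g, '');
  return digits.length === 11 && digits.startsWith('1')
    ? digits.slice(1)
    : digits;
}
```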

Call Type Classifier Prompt

Type: prompt

A lightweight GPT-4.1-mini prompt that classifies incoming calls as coverage reviews vs. other call types (claims, billing, certificates, general inquiries). This acts as a filter at the beginning of the pipeline to avoid wasting transcription and summarization costs on non-coverage-review calls. Only calls classified as coverage reviews proceed through the full summarization pipeline.

Call Type Classifier — System Prompt

SYSTEM PROMPT:

You are a call type classifier for an insurance agency. Given the first 2 minutes of a call transcript, classify the call into one of the following categories:

1. COVERAGE_REVIEW — Annual or mid-term coverage review discussion, policy changes, endorsement requests, coverage questions
2. CLAIMS — Reporting a new claim, claim status inquiry, claim dispute
3. BILLING — Payment questions, billing disputes, payment plan changes
4. CERTIFICATES — Certificate of insurance requests, additional insured requests
5. NEW_BUSINESS — New prospect quoting, application intake
6. GENERAL — General inquiry, appointment scheduling, other

Respond with ONLY a JSON object:
{
  "classification": "COVERAGE_REVIEW",
  "confidence": 0.92,
  "reason": "Client discussing current auto coverage limits and asking about umbrella policy options"
}
1. After Deepgram transcription, extract the first 500 words of the transcript.
2. Send to GPT-4.1-mini with this classifier prompt (cost: ~$0.001 per classification).
3. If classification is COVERAGE_REVIEW or NEW_BUSINESS with confidence > 0.7, proceed to full summarization.
4. If classification is anything else, skip summarization and log the call as processed-but-not-summarized.
5. This saves approximately 40-60% on OpenAI summarization costs by filtering out non-relevant calls.
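The first-500-words extraction in step 1 can be sketched as (function name illustrative):

```javascript
// Take the first n whitespace-separated words of a transcript for the
// cheap classification pass, leaving the full text for summarization.
function firstWords(text, n = 500) {
  return text.trim().split(/\s+/).slice(0, n).join(' ');
}
```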
N8N IF node condition
n8n
// Route calls based on classifier output.
// Add an IF node after the classifier API call:
//   True branch:  continue to full summarization (Node 7)
//   False branch: log to Azure Blob and skip to end (no email sent)
// Condition expression:

$json.classification == 'COVERAGE_REVIEW' || $json.classification == 'NEW_BUSINESS'

PII Redaction Post-Processor

Type: skill

A post-processing component that ensures all Personally Identifiable Information (PII), Protected Health Information (PHI), and Payment Card Industry (PCI) data is redacted from stored transcripts and summaries before they are written to long-term storage or the AMS. While Deepgram offers built-in PCI and SSN redaction, this component provides a secondary defense layer for additional PII types specific to insurance (driver's license numbers, VINs, dates of birth, policy numbers in certain contexts).

Implementation:

  • IMPLEMENTATION: n8n Code Node (JavaScript)
  • Place this node between the Deepgram transcript parse and the Azure Blob storage node, and again between the OpenAI summary parse and the summary storage node.
javascript
// PII Redaction Post-Processor for Insurance Transcripts
function redactPII(text) {
  if (!text) return text;
  
  // SSN patterns (XXX-XX-XXXX or XXXXXXXXX)
  text = text.replace(/\b\d{3}[-\s]?\d{2}[-\s]?\d{4}\b/g, '[SSN REDACTED]');
  
  // Credit card numbers (13-19 digits with optional spaces/dashes)
  text = text.replace(/\b(?:\d{4}[-\s]?){3,4}\d{1,4}\b/g, '[CARD REDACTED]');
  
  // Driver's license (varies by state, catch common patterns)
  text = text.replace(/\b[A-Z]\d{7,14}\b/gi, function(match) {
    // Only redact if it looks like a DL (starts with letter, followed by 7+ digits)
    if (/^[A-Z]\d{7,}/i.test(match)) return '[DL# REDACTED]';
    return match;
  });
  
  // Date of birth patterns (spoken: 'born on March 15 1985', 'date of birth is 03/15/1985')
  text = text.replace(/(?:born|birth|DOB|d\.o\.b\.)\s*(?:is|was|on)?\s*:?\s*\d{1,2}[\/\-]\d{1,2}[\/\-]\d{2,4}/gi, '[DOB REDACTED]');
  
  // VIN numbers (17 alphanumeric characters)
  text = text.replace(/\b[A-HJ-NPR-Z0-9]{17}\b/g, '[VIN REDACTED]');
  
  // Bank routing and account numbers (9 digit routing, variable account)
  text = text.replace(/\b(?:routing|account)\s*(?:number|#|num)?\s*:?\s*\d{6,17}\b/gi, '[BANK# REDACTED]');
  
  // Email addresses (keep for agent emails, redact for clients)
  // Only redact emails that don't match the agency domain
  const agencyDomain = 'agencyname.com'; // Configure per deployment
  text = text.replace(/\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b/g, function(match) {
    if (match.toLowerCase().includes(agencyDomain)) return match;
    return '[EMAIL REDACTED]';
  });
  
  return text;
}

const input = $input.first().json;

// Redact transcript
if (input.transcript) {
  input.transcript = redactPII(input.transcript);
}

// Redact summary fields if present
if (input.summary) {
  input.summary = JSON.parse(redactPII(JSON.stringify(input.summary)));
}

return [{ json: input }];
  • Update agencyDomain variable per client deployment
  • Test with sample transcripts containing known PII patterns
  • This is a defense-in-depth measure — Deepgram's built-in redaction handles the first pass
  • Log redaction counts for compliance reporting
  • Review and update regex patterns quarterly as new PII patterns emerge
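The redaction-count logging mentioned above can be handled by counting markers in the redacted text. A minimal sketch (the marker format mirrors the redactPII function; the function name is illustrative):

```javascript
// Count each [X REDACTED] marker in a redacted transcript so monthly
// compliance reports can show how often each PII type was caught.
function countRedactions(text) {
  const counts = {};
  for (const m of text.matchAll(/\[([A-Z#. ]+?) REDACTED\]/g)) {
    counts[m[1]] = (counts[m[1]] || 0) + 1;
  }
  return counts;
}
```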

AMS Activity Note Formatter

Type: integration

Transforms the structured JSON summary from the OpenAI output into the specific format required by the target AMS platform's API. Each AMS has different field names, data types, and formatting requirements. This component handles the translation layer so the core pipeline remains AMS-agnostic.

Implementation:

n8n Code Node with AMS-specific formatters — Supports: Applied Epic, HawkSoft, EZLynx, Generic
javascript
// AMS Activity Note Formatter
// Supports: Applied Epic, HawkSoft, EZLynx, Generic

const summary = $input.first().json.summary;
const metadata = $input.first().json.metadata;
const AMS_TYPE = 'applied_epic'; // Configure per deployment: 'applied_epic' | 'hawksoft' | 'ezlynx' | 'generic'

function formatForAppliedEpic(summary, metadata) {
  // Applied Epic Activity format
  const urgentGaps = (summary.coverage_gaps || []).filter(g => g.urgency === 'URGENT' || g.urgency === 'HIGH');
  
  let description = `AI COVERAGE REVIEW SUMMARY — ${summary.call_metadata.date}\n`;
  description += `Duration: ${summary.call_metadata.call_duration_minutes} min\n`;
  description += `─────────────────────────────\n`;
  description += `${summary.executive_summary}\n\n`;
  
  if (urgentGaps.length > 0) {
    description += `⚠ URGENT COVERAGE GAPS:\n`;
    urgentGaps.forEach(g => {
      description += `• ${g.gap_description} — ${g.recommended_solution}\n`;
    });
    description += `\n`;
  }
  
  if (summary.endorsement_recommendations?.length > 0) {
    description += `ENDORSEMENT RECOMMENDATIONS:\n`;
    summary.endorsement_recommendations.forEach(e => {
      description += `• [${e.urgency}] ${e.endorsement_name}: ${e.reason}\n`;
    });
    description += `\n`;
  }
  
  if (summary.follow_up_tasks?.length > 0) {
    description += `FOLLOW-UP TASKS:\n`;
    summary.follow_up_tasks.forEach((t, i) => {
      description += `${i+1}. [${t.urgency}] ${t.task} (${t.assigned_to}) by ${t.deadline}\n`;
    });
  }
  
  description += `\n─────────────────────────────\n`;
  description += `Sentiment: ${summary.client_sentiment?.overall || 'N/A'} | Retention Risk: ${summary.client_sentiment?.retention_risk || 'N/A'}\n`;
  description += `[AI-Generated — Verify before acting | Call ID: ${metadata.call_id}]`;
  
  return {
    endpoint: 'https://api.appliedepic.com/v1/activities',
    method: 'POST',
    body: {
      activityTypeCode: 'NOTE',
      categoryCode: 'COVERAGE_REVIEW',
      description: description,
      subject: `Coverage Review: ${summary.call_metadata.client_name} — ${summary.call_metadata.date}`,
      followUpDate: summary.follow_up_tasks?.[0]?.deadline || null,
      priorityCode: urgentGaps.length > 0 ? 'HIGH' : 'NORMAL',
      tags: ['AI-Generated', 'Coverage Review'],
      assignedToCode: metadata.ams_agent_id || null
    }
  };
}

function formatForHawkSoft(summary, metadata) {
  let noteText = `[AI SUMMARY] ${summary.executive_summary}\n\n`;
  
  if (summary.coverage_gaps?.length > 0) {
    noteText += `GAPS: `;
    summary.coverage_gaps.forEach(g => {
      noteText += `${g.gap_description} (${g.urgency}); `;
    });
    noteText += `\n`;
  }
  
  if (summary.follow_up_tasks?.length > 0) {
    noteText += `TASKS: `;
    summary.follow_up_tasks.forEach(t => {
      noteText += `${t.task} by ${t.deadline}; `;
    });
  }
  
  return {
    endpoint: 'https://api.hawksoft.com/v1/clients/{client_id}/notes',
    method: 'POST',
    body: {
      noteType: 'Coverage Review',
      noteText: noteText,
      importance: summary.coverage_gaps?.some(g => g.urgency === 'URGENT') ? 'High' : 'Normal',
      createdBy: metadata.agent_name || 'AI System'
    }
  };
}

function formatGeneric(summary, metadata) {
  // Plain text format for clipboard/manual paste into any AMS
  let text = `COVERAGE REVIEW SUMMARY\n`;
  text += `Client: ${summary.call_metadata.client_name}\n`;
  text += `Date: ${summary.call_metadata.date}\n`;
  text += `Agent: ${summary.call_metadata.agent_name}\n\n`;
  text += `${summary.executive_summary}\n\n`;
  
  if (summary.coverage_gaps?.length > 0) {
    text += `COVERAGE GAPS:\n`;
    summary.coverage_gaps.forEach(g => text += `- [${g.urgency}] ${g.gap_description}: ${g.recommended_solution}\n`);
    text += `\n`;
  }
  
  if (summary.follow_up_tasks?.length > 0) {
    text += `FOLLOW-UP:\n`;
    summary.follow_up_tasks.forEach(t => text += `- ${t.task} (${t.assigned_to}, ${t.deadline})\n`);
  }
  
  return { endpoint: null, method: null, body: { text: text } };
}

let result;
switch(AMS_TYPE) {
  case 'applied_epic': result = formatForAppliedEpic(summary, metadata); break;
  case 'hawksoft': result = formatForHawkSoft(summary, metadata); break;
  case 'ezlynx': // no EZLynx-specific formatter yet; falls through to generic
  case 'generic':
  default: result = formatGeneric(summary, metadata); break;
}

return [{ json: { ...result, ams_type: AMS_TYPE, summary, metadata } }];
  • Set AMS_TYPE constant per client deployment
  • For Applied Epic, the client_id and policy_id must be resolved via the Client Lookup integration (see Agent Extension Lookup Service)
  • For AMS platforms without APIs, use the 'generic' formatter and include the plain text in the email for manual paste
  • Character limits: Applied Epic notes support up to 10,000 characters; HawkSoft notes up to 8,000; trim if needed
  • Always append the '[AI-Generated]' disclaimer to satisfy E&O requirements
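The character-limit bullet can be handled with a trim that preserves the AI disclaimer at the end of the note, since dropping it would undercut the E&O requirement. A sketch (the function name and default disclaimer text are illustrative):

```javascript
// Trim an activity note to an AMS character limit while keeping the
// trailing AI disclaimer intact.
function trimNote(note, limit, disclaimer = '\n[AI-Generated — Verify before acting]') {
  if (note.length <= limit) return note;
  const keep = limit - disclaimer.length - 1; // leave room for the ellipsis
  return note.slice(0, keep) + '…' + disclaimer;
}
```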

Testing & Validation

  • AUDIO QUALITY TEST: Place a Jabra Speak2 75 on each agent's desk. Make a 5-minute test call between two agents discussing a mock coverage review scenario. Play back the recording and verify: (1) both speakers are clearly audible, (2) no background noise overwhelms speech, (3) speaker diarization correctly separates the two voices. Score each station pass/fail before proceeding.
  • NETWORK CONNECTIVITY TEST: From each agent workstation, run: curl -s -o /dev/null -w '%{http_code} %{time_total}s' https://api.deepgram.com — confirm HTTP 200 or 403 response in under 2 seconds. Repeat for api.openai.com. If latency exceeds 2 seconds or connection fails, investigate firewall rules and DNS resolution.
  • DEEPGRAM TRANSCRIPTION ACCURACY TEST: Upload 5 sample insurance coverage review recordings (varying quality and complexity) to Deepgram via the API. Compare output to manual transcription. Target: >90% word accuracy for clear recordings, >85% for moderate quality. Verify insurance terms (umbrella, endorsement, declarations, BOP, etc.) are transcribed correctly. If accuracy is below threshold, add terms to Deepgram's keyword boosting parameter.
  • SPEAKER DIARIZATION ACCURACY TEST: Using a recording with two clearly different speakers (agent and client), verify Deepgram assigns the correct speaker label to each utterance at least 90% of the time. Test with both phone calls (mono audio, harder) and speakerphone recordings (potentially stereo, easier). If diarization is poor on mono phone recordings, consider using Deepgram's multichannel feature if the VoIP system provides separate audio channels.
  • OPENAI SUMMARIZATION QUALITY TEST: Process 10 real coverage review transcripts through the Insurance Coverage Review Summarization Prompt. Have the agency's AI Champion review each summary for: (1) factual accuracy — no hallucinated policy details, (2) completeness — all coverage gaps mentioned in the call are identified, (3) actionability — follow-up tasks are specific and realistic, (4) appropriate urgency ratings. Score each summary 1-5 on each dimension. Target: average 4.0+ across all dimensions.
  • CALL TYPE CLASSIFIER ACCURACY TEST: Process 20 sample calls of mixed types (10 coverage reviews, 5 claims calls, 3 billing calls, 2 certificate requests) through the Call Type Classifier Prompt. Verify: (1) all coverage review calls are correctly classified (zero false negatives), (2) non-coverage-review calls are filtered out with >80% accuracy (some false positives are acceptable — better to over-process than miss a coverage review).
  • PII REDACTION TEST: Create a test transcript containing known PII: a fake SSN (123-45-6789), a fake credit card number (4111-1111-1111-1111), a fake date of birth, and a fake VIN. Process through the PII Redaction Post-Processor. Verify all PII instances are replaced with [REDACTED] markers. Then verify that legitimate insurance terms containing numbers (policy numbers, coverage limits like '$500,000') are NOT incorrectly redacted.
  • END-TO-END PIPELINE TEST: Make a real 15-minute coverage review call discussing: auto insurance with a new teen driver, homeowners with a recent renovation, and an umbrella policy question. Verify the complete pipeline: (1) VoIP webhook fires within 5 minutes of call end, (2) recording downloads successfully, (3) Deepgram transcription completes within 2 minutes, (4) OpenAI summary is generated within 30 seconds, (5) formatted email arrives in agent's inbox within 10 minutes of call end, (6) summary correctly identifies the teen driver as a coverage change, renovation as a dwelling limit increase need, and umbrella as a recommendation.
  • AMS INTEGRATION TEST (Phase 3): After configuring the AMS API connection, process a test summary and verify: (1) activity note appears in the correct client record in the AMS, (2) all fields are populated correctly (subject, description, follow-up date, priority, assigned agent), (3) the note is visible to all appropriate users, (4) no duplicate notes are created if the pipeline is triggered twice for the same call (implement idempotency check via call_id).
  • CONSENT COMPLIANCE TEST: Call the agency's main number from an external phone. Verify: (1) the consent disclosure plays before connecting to an agent, (2) the disclosure is clearly audible and complete, (3) the recording begins only after the disclosure. For outbound calls, shadow an agent making a coverage review call and verify they read the consent script before beginning the review discussion. Document this test with date, tester name, and result for the compliance file.
  • FAILOVER AND ERROR HANDLING TEST: Deliberately trigger failure conditions: (1) temporarily invalidate the Deepgram API key and verify the error handler sends an alert email to the MSP within 5 minutes, (2) upload a corrupted audio file and verify graceful failure with appropriate logging, (3) simulate an OpenAI rate limit by sending 20 concurrent requests and verify the retry logic handles throttling correctly, (4) disconnect the n8n instance from the internet briefly and verify queued webhooks are processed when connectivity resumes.
  • MONTHLY COST VALIDATION TEST: After 30 days of production use, compare actual API costs against projections. Calculate: (1) Deepgram cost per call and per agent, (2) OpenAI cost per summary and per agent, (3) total pipeline cost per call. Verify costs are within the $15-$55/month agency estimate. If costs exceed projections, investigate: are non-coverage-review calls being fully processed (classifier issue), are transcripts unusually long (prompt optimization needed), or is the call volume higher than estimated.
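The idempotency check called for in the AMS integration test can be sketched as a call_id guard. In production this would be backed by an Azure Blob existence check (e.g., does summaries/{call_id}_summary.json already exist) or a small table rather than in-memory state; the in-memory Set here just stands in for that check:

```javascript
// Skip duplicate pipeline runs for the same call so retriggered webhooks
// never create duplicate AMS activity notes.
const processedCalls = new Set();

function shouldProcess(callId) {
  if (processedCalls.has(callId)) return false;
  processedCalls.add(callId);
  return true;
}
```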

Client Handoff

Conduct a 90-minute client handoff session with the agency principal, all participating agents, and the designated AI Champion. Cover the following topics:

1. SYSTEM OVERVIEW (15 min): Walk through the end-to-end flow — from the moment a coverage review call ends to the summary arriving in their inbox and AMS. Show the architecture diagram. Explain what happens in the cloud vs. what happens locally.
2. AGENT TRAINING (30 min): Demonstrate the daily workflow: (a) making/receiving calls normally with the Jabra speakerphone, (b) consent disclosure best practices for outbound calls, (c) reviewing the AI-generated email summary, (d) how to approve, edit, or reject a summary, (e) how the summary appears in the AMS activity note, (f) what to do if a summary is inaccurate (flag for prompt refinement). Emphasize that AI summaries must ALWAYS be reviewed by a licensed agent before any action is taken — the AI is a documentation assistant, not a coverage advisor.
3. AI CHAMPION ADVANCED TRAINING (15 min): Train the AI Champion on: (a) how to report summary quality issues to the MSP, (b) how to request new prompt adjustments (e.g., 'please add flood insurance gap detection'), (c) how to add new agents to the system, (d) how to access the transcript archive, (e) monthly reporting metrics and what they mean.
4. COMPLIANCE REVIEW (15 min): Review with the agency principal: (a) the call recording consent mechanism and how it works, (b) data retention policy — where recordings and transcripts are stored, for how long, and how to request deletion, (c) PII redaction capabilities and limitations, (d) the agency's responsibility to review AI outputs before acting, (e) documentation for the agency's WISP (Written Information Security Program) update.
5. SUPPORT AND ESCALATION (15 min): Provide: (a) MSP support contact information and SLA (response times for different severity levels), (b) how to report issues (email, phone, ticket portal), (c) what constitutes an emergency (system down, data breach) vs. a standard request (prompt adjustment, new user), (d) scheduled monthly check-in dates for the first 3 months.

Documentation to Leave Behind

  • Quick Reference Card (laminated, desk-sized): Call consent script, summary review workflow, support contact info
  • Agent User Guide (PDF, 5 pages): Step-by-step with screenshots for daily workflow
  • Admin Guide (PDF, 10 pages): For AI Champion — adding users, reporting issues, understanding metrics
  • Compliance Documentation Packet: Consent disclosure text, data flow diagram, retention policy, vendor DPA/BAA copies, WISP amendment template
  • Monthly ROI Report Template: Pre-configured to show calls processed, time saved, coverage gaps identified

Success Criteria to Review at 30-Day Check-In

Maintenance

ONGOING MSP MAINTENANCE RESPONSIBILITIES:

WEEKLY TASKS (15 minutes/week)

  • Review n8n workflow execution logs for errors or failures
  • Check Deepgram and OpenAI API usage dashboards for anomalies
  • Verify webhook connectivity (automated via UptimeRobot monitoring)
  • Review any flagged summaries from agents (quality feedback queue)

MONTHLY TASKS (2 hours/month)

  • Generate and deliver the monthly ROI report to the agency principal
  • Review and optimize API costs — check if call type classifier is filtering effectively
  • Update agent extension mapping if staff changes occurred
  • Review transcription accuracy on 5 random calls — check for degradation
  • Apply n8n platform updates (Cloud: automatic; Self-hosted: manual update)
  • Review Deepgram and OpenAI API changelogs for model updates or deprecations
  • Back up n8n workflow configurations to the MSP's documentation system
  • Check Azure Blob Storage usage and verify lifecycle policies are archiving correctly
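
The monthly n8n configuration backup can be scripted against the n8n public REST API (authenticated with the `X-N8N-API-KEY` header). This is a sketch: the base URL, API key, and backup root are placeholders for the MSP's own values, and the network call is isolated in `main()`:

```python
# Monthly n8n workflow backup sketch. Assumes the n8n public API is
# enabled on the instance; URL, key, and paths below are placeholders.
import json
import urllib.request
from datetime import date
from pathlib import Path

def backup_path(root, instance_name, on=None):
    """Build a dated backup file path, e.g. backups/agency-n8n/2025-01-31.json."""
    on = on or date.today()
    return Path(root) / instance_name / f"{on.isoformat()}.json"

def fetch_workflows(base_url, api_key):
    """Download all workflow definitions via the n8n public API."""
    req = urllib.request.Request(
        f"{base_url}/api/v1/workflows",
        headers={"X-N8N-API-KEY": api_key},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def main():
    dest = backup_path("backups", "agency-n8n")
    dest.parent.mkdir(parents=True, exist_ok=True)
    data = fetch_workflows("https://n8n.example-msp.com", "REDACTED_KEY")
    dest.write_text(json.dumps(data, indent=2))

if __name__ == "__main__":
    main()
```

Committing the dated JSON exports to the MSP's documentation system gives a restore point for the annual disaster recovery test.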

QUARTERLY TASKS (4 hours/quarter)

  • Prompt engineering review: analyze agent feedback trends and refine the summarization prompt
  • Insurance terminology update: add new terms to Deepgram custom vocabulary and OpenAI prompt (new carriers, new product lines, regulatory changes)
  • Security review: rotate API keys, review access logs, verify encryption settings
  • Compliance audit: verify consent mechanisms still functioning, review data retention compliance
  • Hardware check: verify speakerphone firmware is current via Jabra Direct / Poly Lens
  • Vendor relationship review: check for better pricing tiers based on actual usage volumes
  • Performance benchmarking: measure end-to-end processing time and compare to initial baseline

ANNUAL TASKS (8 hours/year)

  • Full system architecture review — evaluate if the current stack is still optimal
  • Major model migration if applicable (e.g., a new Deepgram Nova version or a new OpenAI GPT generation)
  • Renegotiate API pricing based on annual volume
  • Client business review: has the agency grown? New locations? New lines of business? Adjust system accordingly.
  • Disaster recovery test: simulate complete pipeline failure and verify restoration from backups
  • Update all documentation (agent guides, admin guides, compliance docs)

Model Retraining/Update Triggers

  • Deepgram releases a new Nova model version → test on 10 sample recordings, compare accuracy, migrate if improved
  • OpenAI releases new GPT model → test summarization quality on 10 transcripts, compare output, migrate if cost/quality improves
  • Agent feedback scores drop below 3.5 average → immediate prompt engineering session
  • New insurance product line added to agency (e.g., adding commercial lines, cyber liability) → update prompt templates with new terminology and coverage gap patterns
  • Regulatory change in call recording consent laws → update IVR disclosure and compliance documentation
  • VoIP system upgrade or migration → reconfigure webhook integration and test end-to-end
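
The "test on 10 sample recordings, compare accuracy" trigger can be made objective with a word error rate (WER) comparison against hand-corrected reference transcripts. A minimal stdlib sketch, assuming you have (reference, hypothesis) transcript pairs for both the current and candidate models:

```python
# Word-error-rate A/B check for model migration decisions. Pure stdlib;
# feed it real transcript pairs from the current and candidate models.

def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming row of edit distances between word prefixes
    prev = list(range(len(hyp) + 1))
    for i, rw in enumerate(ref, 1):
        curr = [i]
        for j, hw in enumerate(hyp, 1):
            cost = 0 if rw == hw else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1] / max(len(ref), 1)

def should_migrate(pairs_current, pairs_candidate):
    """Migrate only if the candidate model's mean WER is strictly lower."""
    mean = lambda ps: sum(wer(r, h) for r, h in ps) / len(ps)
    return mean(pairs_candidate) < mean(pairs_current)
```

Insurance terms ("umbrella", "endorsement", carrier names) are the words most likely to be mis-transcribed, so weight the sample recordings toward terminology-heavy calls.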

SLA Considerations

  • Pipeline availability target: 99.5% uptime during business hours (Mon-Fri 8AM-6PM agency time zone)
  • Summary delivery SLA: within 15 minutes of call completion for 95% of calls
  • Error response SLA: MSP acknowledges pipeline failures within 2 hours during business hours
  • Prompt refinement requests: addressed within 5 business days
  • New agent onboarding: completed within 2 business days of request
  • Emergency (data breach, compliance incident): MSP responds within 1 hour, 24/7
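
The 15-minute / 95% summary-delivery SLA can be checked each month from pipeline logs. A sketch, assuming you can extract (call_ended, summary_sent) timestamp pairs from the n8n execution history:

```python
# Summary-delivery SLA check: share of summaries delivered within 15
# minutes of call completion, against the 95% target. Timestamp pairs
# would come from n8n execution logs (illustrative assumption).
from datetime import timedelta

SLA_WINDOW = timedelta(minutes=15)
SLA_TARGET = 0.95

def sla_compliance(deliveries):
    """deliveries: iterable of (call_ended, summary_sent) datetime pairs.
    Returns (fraction_within_window, target_met)."""
    pairs = list(deliveries)
    within = sum(1 for ended, sent in pairs if sent - ended <= SLA_WINDOW)
    frac = within / len(pairs) if pairs else 1.0
    return frac, frac >= SLA_TARGET
```

Including this figure in the monthly ROI report makes the SLA verifiable rather than asserted.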

Escalation Paths

  • Tier 1 (Agent-level issues): AI Champion handles — restarting speakerphone, re-sending failed summary, basic troubleshooting
  • Tier 2 (MSP Technician): Workflow failures, API errors, new user provisioning, hardware replacement
  • Tier 3 (MSP Solutions Architect): Prompt engineering, AMS API issues, architecture changes, vendor escalations
  • Tier 4 (Vendor Support): Deepgram support (support@deepgram.com), OpenAI support (platform support), AMS vendor API support

Monthly Managed Service Billing

  • Base platform management: $200–$400/month
  • Per-agent fee (includes all API costs, monitoring, support): $25–$50/agent/month
  • Quarterly prompt optimization: included in base fee
  • AMS integration maintenance: $150–$300/month
  • Total for a typical 10-agent agency: $600–$1,200/month

...

Option B: Microsoft Copilot for M365 + Teams Phone

For agencies already on Microsoft 365 Business Premium and Microsoft Teams Phone, deploy Microsoft Copilot ($30/user/month add-on) to leverage native Teams meeting transcription and Copilot summarization. All coverage review calls are conducted or routed through Teams. Copilot generates meeting summaries, action items, and follow-ups natively within the Teams/Outlook ecosystem. No third-party transcription APIs needed.

Option C: Zoom AI Companion + Zoom Phone

For agencies using Zoom Phone as their VoIP platform, leverage the included Zoom AI Companion feature for meeting/call transcription and summarization. AI Companion is included at no extra cost with Zoom Workplace Pro ($13.33/user/month, billed annually) and above. Provides automatic meeting summaries, action items, and smart recording highlights. Can be supplemented with a custom GPT wrapper for insurance-specific re-summarization.

Option D: Insurance-Specific AI Platform (Strada or Similar)

Deploy an insurance-vertical AI platform like Strada that is purpose-built for insurance agency call intelligence. These platforms offer pre-built integrations with common AMS platforms (Applied Epic, HawkSoft, EZLynx), insurance-specific AI models, and compliance features designed for the insurance industry. The MSP acts as implementation partner rather than custom developer.

Note

Check Strada's current availability and pricing as this is an emerging market segment with rapidly evolving vendors.

Option E: On-Premises Whisper + Local LLM (Air-Gapped)

For agencies with extreme data sensitivity requirements (e.g., handling sensitive health insurance information or operating under strict state privacy regulations), deploy an entirely on-premises solution using OpenAI Whisper (open-source) for transcription and a local LLM (e.g., Llama 3 8B or Mistral 7B) for summarization. All processing occurs on a local server with no data leaving the agency's network.
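
A minimal sketch of the air-gapped pipeline, assuming the open-source `whisper` Python package for transcription and a local Ollama server running a Llama 3 model for summarization — the model names, endpoint, and prompt wording below are illustrative choices, and nothing leaves the local network:

```python
# On-premises transcription + summarization sketch. Assumes
# `pip install openai-whisper` and an Ollama server on its default
# localhost port; model names and prompt text are placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def build_summary_prompt(transcript):
    """Insurance-specific summarization prompt (illustrative wording)."""
    return (
        "You are documenting an insurance coverage review call. "
        "Summarize the call below: list coverages discussed, identified "
        "gaps, and recommended policy changes. Do not give coverage advice.\n\n"
        f"TRANSCRIPT:\n{transcript}"
    )

def summarize_locally(transcript, model="llama3"):
    """Send the prompt to the local LLM; no external network egress."""
    body = json.dumps({"model": model,
                       "prompt": build_summary_prompt(transcript),
                       "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=300) as resp:
        return json.load(resp)["response"]

def main(audio_path):
    import whisper  # loads and runs entirely offline
    model = whisper.load_model("base")
    transcript = model.transcribe(audio_path)["text"]
    print(summarize_locally(transcript))
```

Expect meaningfully lower summary quality from a 7–8B local model than from the cloud pipeline; budget extra prompt-engineering time and a GPU-equipped on-prem server when proposing this option.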
