Ambient Capture

Implementation Guide: Capture After-Action Reviews & EOC Briefings — Generate SITREPs and Lessons-Learned Records

Step-by-step implementation guide for deploying AI to capture after-action reviews and EOC briefings, generating SITREPs and lessons-learned records for Government & Defense clients.

Software Procurement

WebEOC — Emergency Management Platform

Juvare — WebEOC — Qty: SaaS, per-agency annual

$15,000–$40,000/year depending on jurisdiction size; contact vendor for government pricing

WebEOC is the most widely deployed emergency management information system in the US, used by the majority of state EOCs, FEMA regions, and many county/municipal emergency management agencies. It manages incident tracking, resource requests, situation reports, and inter-agency coordination. The AI integration for this use case adds AI-generated SITREP drafts and AAR summaries that populate directly into WebEOC incident records. Juvare's FedRAMP authorization is listed as in-process; confirm the current authorization status before deploying in federal environments.

VEOCI — Cloud Emergency Operations Platform

VEOCI — Cloud Emergency Operations Platform — Qty: SaaS, per-agency annual

$10,000–$30,000/year; contact vendor for government pricing

Cloud-native emergency operations platform used by airports, universities, healthcare systems, and local governments. More modern UI than WebEOC with built-in workflow automation. Better suited for organizations without an existing WebEOC investment. Includes API access for integration with AI-generated SITREP content. FedRAMP Moderate authorized.

Microsoft Azure AI Speech (Azure Government)

Microsoft — Azure AI Speech (Azure Government) — Qty: Consumption-based

~$1.00–$1.50/audio hour

FedRAMP High-authorized speech-to-text for EOC briefing audio capture. See UC-01 for detailed provisioning. Critical for EOC environments where sensitive incident information may be discussed — commercial transcription services are not appropriate for CIKR or public health incident data.

Microsoft Azure OpenAI (Azure Government)

Microsoft — Azure OpenAI (Azure Government), GPT-5.4 — Qty: Consumption-based

~$0.005/1K input tokens

FedRAMP High-authorized LLM for generating SITREPs, AAR summaries, and Improvement Plans from raw transcripts. See UC-01 for provisioning details.

Otter.ai Enterprise (Unclassified / Non-CIKR EOC use)

Otter.ai — Qty: SaaS, per-seat

Enterprise pricing: contact vendor; approx. $30–$50/user/month at enterprise tier

Appropriate for EOC briefings involving non-sensitive public emergency information (weather events, public communications coordination) and for defense contractor AAR sessions involving unclassified training data. Not appropriate for CIKR incidents, public health emergencies with PHI, or cybersecurity incidents.

Vanta (CMMC / NIST 800-171 Compliance)

Vanta — Qty: SaaS, annual

$15,000–$25,000/year

See UC-01. Required for defense contractor environments. For state/local government EOC deployments, substitute with the state's existing compliance framework (StateRAMP authorization is increasingly required for state government SaaS tools).

Hardware Procurement

Shure MXA920 Ceiling Array Microphone

Shure — MXA920 — Qty: 2–4 per EOC main floor

$1,800–$2,200 per unit (MSP cost) / $2,500–$3,000 suggested resale

EOC main floors are large, noisy environments with simultaneous workstation conversations, radio traffic, and briefing speakers. Ceiling-mounted array microphones with IntelliMix DSP provide the directional capture and noise suppression needed to isolate the briefing speaker from ambient EOC noise. Critical for accurate transcription in operational environments.

Shure ANIUSB-MATRIX USB Audio Network Interface

Shure — ANIUSB-MATRIX — Qty: 1 per EOC recording workstation

$600–$750 per unit (MSP cost) / $850–$1,000 suggested resale

Dante-to-USB audio interface that routes the Shure MXA920 Dante audio output to a standard USB connection on the recording workstation. Enables the Azure AI Speech SDK or Otter.ai desktop client to receive high-quality audio from the ceiling mic array without requiring a dedicated audio workstation.

Dell Precision 3680 Workstation (TAA Compliant)

Dell Technologies — Precision 3680 Tower — Qty: 1 per EOC recording station

$1,800–$2,400 per unit

TAA-compliant dedicated recording and processing workstation for EOC environments. Runs the Azure AI Speech SDK for real-time transcription during briefings. Must be on the EOC's secure network segment, not the public or visitor network. Configure with BitLocker encryption, FIPS 140-2 validated cryptography, and STIG hardening per DISA STIG for Windows 11 (or applicable OS).

Jabra Evolve2 85 Wireless Headset

Jabra — Evolve2 85 — Qty: per field team member conducting AARs

$380–$420 per unit (MSP cost) / $499–$549 suggested resale

For field-based AARs conducted at training ranges, exercise sites, or deployed locations without fixed AV infrastructure. Provides high-quality audio capture via USB-A/C to a laptop running Granola or Otter.ai. Active noise cancellation critical for outdoor or high-noise field environments.

Prerequisites

  • EOC network assessment: EOC networks are often segmented between operational networks (connected to HSIN, WebEOC, and state networks) and administrative networks. Confirm which network segment the AI recording workstation will connect to, and verify it has the required outbound connectivity to Azure Government endpoints. Do not place recording equipment on the same segment as CIKR system controls or critical infrastructure networks.
  • HSIN (Homeland Security Information Network) considerations: If the EOC participates in HSIN, confirm that AI-generated SITREPs intended for HSIN distribution meet the classification and handling requirements of that network. AI-generated documents must be reviewed and approved by a human official before HSIN distribution — AI output is never distributed directly to external networks.
  • DHS NIMS compliance: EOC briefings conducted under the National Incident Management System (NIMS) / Incident Command System (ICS) framework use specific terminology and document formats (ICS-209, ICS-214, etc.). The AI prompt templates must be configured to produce output in ICS-compliant formats where required.
  • StateRAMP authorization (state/local government): Many states now require SaaS tools used by state agencies to hold StateRAMP authorization. Verify the chosen platform's StateRAMP status for the client's state before procurement. As of 2025, most states with active StateRAMP programs maintain a public authorized product list.
  • AAR security classification: For defense contractor training AARs, determine the classification level of the exercise and its data. Unclassified exercises → Otter.ai or Azure Government. Exercises involving classified scenario data → on-premises only; out of scope for this guide.
  • Exercise control staff involvement: For military simulation AARs, the exercise control (EXCON) staff typically owns the AAR process. The MSP must work with the EXCON officer to understand their existing AAR methodology (typically based on Army Training Circular 25-20 or equivalent) before configuring AI output templates.
  • IT admin access: Admin access to WebEOC/VEOCI API, Azure Government subscription, EOC network infrastructure, and recording workstation.
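The outbound-connectivity prerequisite above can be verified with a short preflight script before installation day. This is a minimal sketch; the hostnames shown are illustrative Azure Government endpoint patterns, so substitute the agency's actual resource endpoints from the Azure Government portal.

```python
# connectivity_preflight.py — verify the recording workstation can reach the
# Azure Government endpoints this deployment depends on.
import socket

# Illustrative endpoints — replace with the agency's provisioned resources.
REQUIRED_ENDPOINTS = [
    ("usgovvirginia.stt.speech.azure.us", 443),  # Azure AI Speech (Gov)
    ("example-resource.openai.azure.us", 443),   # Azure OpenAI (Gov), placeholder name
]

def check_endpoint(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def run_preflight(endpoints=REQUIRED_ENDPOINTS) -> dict:
    """Check each endpoint and print a pass/fail line for the installer."""
    results = {f"{host}:{port}": check_endpoint(host, port) for host, port in endpoints}
    for target, ok in results.items():
        print(f"{'OK  ' if ok else 'FAIL'} {target}")
    return results
```

Run this from the exact network segment the workstation will occupy; a pass from the administrative VLAN proves nothing about the operational segment.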

Installation Steps

Step 1: Configure EOC Audio Capture Infrastructure

Install Shure MXA920 ceiling microphones in the EOC briefing area and connect via Dante network to the ANIUSB-MATRIX interface on the recording workstation. Test audio quality under simulated operational conditions (background radio traffic, multiple simultaneous conversations at workstations).

1. Install Dante Controller on the recording workstation (download from https://www.audinate.com/products/software/dante-controller)
2. Open Dante Controller — the MXA920 and ANIUSB-MATRIX should appear on the Dante network
3. In the Routing view, create an audio route — Transmitter: MXA920 (channels 1–2, main briefing mic pattern); Receiver: ANIUSB-MATRIX (channels 1–2)
4. Verify the green connection indicator in the routing grid

Test audio input on Windows
powershell
# verify the ANIUSB-MATRIX is detected as an audio endpoint

Get-PnpDevice -Class AudioEndpoint | Where-Object {$_.FriendlyName -like '*ANIUSB*'}

1. Verify the signal level: Windows Sound Settings > Input > ANIUSB-MATRIX > Test (speak into the room and verify meter activity)
2. Open the MXA920 configuration in a browser via the device IP: http://<MXA920-IP-ADDRESS>
3. Settings > Coverage Mode: select "Directional", aimed at the briefing position
4. Settings > IntelliMix > Noise Reduction: High
5. Settings > IntelliMix > Echo Cancellation: On
6. Settings > Automix: Enable (for multi-speaker environments)
Note

In EOC environments, radio scanner audio and simultaneous workstation conversations create significant background noise. Configure the MXA920's directional beam pattern to focus on the briefing podium position and apply maximum noise reduction. Test specifically with simulated radio traffic playing in the background — this is the most common cause of transcription accuracy degradation in EOC environments. If accuracy remains below 85%, consider adding a directional podium microphone (Shure MX418) as the primary capture source instead of the ceiling array.

Step 2: Deploy Azure AI Speech for Real-Time EOC Transcription

Configure the recording workstation to stream EOC briefing audio to Azure AI Speech in real time, producing a live transcript displayed on a secondary monitor for the EOC Documentation Unit to review during the briefing.

eoc_transcription_service.py
python
# eoc_transcription_service.py
# Runs on the EOC recording workstation — streams microphone audio to the
# Azure Government Speech endpoint with real-time file output

import azure.cognitiveservices.speech as speechsdk
import datetime
import os

SPEECH_KEY = os.environ.get("AZURE_SPEECH_KEY")
SPEECH_REGION = "usgovvirginia"  # Azure Government region

def create_transcript_file():
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    return f"eoc_briefing_{timestamp}.txt"

def transcribe_eoc_briefing():
    speech_config = speechsdk.SpeechConfig(
        subscription=SPEECH_KEY,
        region=SPEECH_REGION
    )
    speech_config.speech_recognition_language = "en-US"
    speech_config.set_property(
        speechsdk.PropertyId.SpeechServiceConnection_EndSilenceTimeoutMs, "3000"
    )

    # Use default microphone input (ANIUSB-MATRIX set as Windows default)
    audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)

    # Conversation transcriber supports speaker identification
    transcriber = speechsdk.transcription.ConversationTranscriber(
        speech_config=speech_config,
        audio_config=audio_config
    )

    transcript_file = create_transcript_file()
    all_text = []

    def handle_transcribed(evt):
        if evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
            timestamp = datetime.datetime.now().strftime("%H:%M:%S")
            speaker = evt.result.speaker_id or "Unknown"
            line = f"[{timestamp}] [{speaker}]: {evt.result.text}"
            print(line)
            all_text.append(line)
            # Append to file in real time for resilience
            with open(transcript_file, 'a', encoding='utf-8') as f:
                f.write(line + "\n")

    def handle_canceled(evt):
        print(f"Transcription canceled: {evt.reason}")
        if evt.reason == speechsdk.CancellationReason.Error:
            print(f"Error: {evt.error_details}")

    transcriber.transcribed.connect(handle_transcribed)
    transcriber.canceled.connect(handle_canceled)

    print(f"EOC Briefing Transcription Active — output: {transcript_file}")
    print("Press Enter to stop transcription and generate SITREP...")

    transcriber.start_transcribing_async().get()  # .get() blocks until started
    input()  # Wait for operator to press Enter at briefing end
    transcriber.stop_transcribing_async().get()

    print(f"\nTranscription complete. {len(all_text)} segments captured.")
    return transcript_file, "\n".join(all_text)

if __name__ == "__main__":
    transcript_file, full_transcript = transcribe_eoc_briefing()
    print(f"Transcript saved to: {transcript_file}")
    # Pass to SITREP generation function
Note

Run this script as a Windows service (using NSSM or Task Scheduler) so it starts automatically when the recording workstation boots. Configure the service to auto-restart on failure. In the EOC environment, the Documentation Unit Leader should have a keyboard shortcut or large button on screen to start/stop transcription — don't require the tech to be present for every briefing. Store transcript files in an encrypted folder (BitLocker or VeraCrypt) on the workstation, synced to the authorized cloud storage after each briefing.
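If NSSM is the chosen wrapper, registration might look like the following sketch. The service name, Python path, install directory, and log locations are placeholders; adjust them to the workstation's actual layout.

```shell
# Register the transcription script as a Windows service so it starts at boot
# and restarts on failure. All paths below are illustrative.
nssm install EOCTranscription "C:\Python312\python.exe" "C:\eoc\eoc_transcription_service.py"
nssm set EOCTranscription AppDirectory "C:\eoc"
nssm set EOCTranscription AppStdout "C:\eoc\logs\transcription.log"
nssm set EOCTranscription AppStderr "C:\eoc\logs\transcription-error.log"
nssm set EOCTranscription AppExit Default Restart
nssm start EOCTranscription
```

Confirm the service runs under an account with access to the ANIUSB-MATRIX audio endpoint and the encrypted transcript folder.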

Step 3: Configure SITREP Generation Pipeline

Build the Azure OpenAI integration that converts a completed EOC briefing transcript into a draft Situation Report (SITREP) in the standard ICS or agency-specific format, ready for Documentation Unit review and distribution.

sitrep_generator.py
python
# sitrep_generator.py
# Generates an ICS-compliant SITREP draft from an EOC briefing transcript

from openai import AzureOpenAI
import os
import datetime

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # *.openai.azure.us
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview"
)

SITREP_PROMPT = """
You are an emergency management documentation specialist trained in FEMA NIMS/ICS protocols.
Generate a Situation Report (SITREP) from the following EOC briefing transcript.

Format the SITREP using the following standard structure:

---
SITUATION REPORT
Incident Name: [Extract from transcript or 'TBD']
SITREP Number: [Sequence — use 1 if not determinable]
Reporting Period: [Date/Time from transcript or today's date]
Prepared by: [Documentation Unit — to be completed by reviewer]
Approved by: [Incident Commander — LEAVE BLANK for human signature]
Classification: UNCLASSIFIED [flag if any sensitive content detected]

1. SITUATION OVERVIEW
   [2-3 sentence summary of current incident status]

2. INCIDENT OBJECTIVES (from briefing)
   [List objectives discussed — numbered]

3. CURRENT STATUS BY SECTION
   Operations: [Summary of operational status]
   Logistics: [Summary of logistics/resource status]
   Planning: [Summary of planning activities]
   Finance/Admin: [Summary if discussed]
   Public Information: [Summary if discussed]

4. RESOURCE SUMMARY
   Resources Deployed: [List if mentioned]
   Resources Requested/Pending: [List if mentioned]
   Resource Shortfalls: [List if mentioned]

5. PRIORITIES FOR NEXT OPERATIONAL PERIOD
   [Numbered list of priorities discussed]

6. SIGNIFICANT EVENTS (past 12-24 hours)
   [Bulleted list of significant events mentioned]

7. ANTICIPATED ACTIONS (next 12-24 hours)
   [Bulleted list of anticipated activities]

8. ISSUES AND CONCERNS
   [List unresolved issues or concerns raised in briefing]

9. WEATHER / ENVIRONMENTAL CONDITIONS
   [If discussed in briefing]

10. PUBLIC INFORMATION SUMMARY
    [Key public messaging points if discussed]

REVIEWER INSTRUCTIONS:
- Items marked [VERIFY] require confirmation before distribution
- Items marked [SENSITIVE] should be reviewed for appropriate handling before external distribution
- All AI-generated content must be reviewed and approved by Documentation Unit Leader before distribution
- Do not distribute this SITREP until approved by Incident Commander or designee

---

BRIEFING TRANSCRIPT:
{transcript}
"""

def generate_sitrep(transcript_text: str, incident_name: str = "Unknown") -> str:
    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {
                "role": "system",
                "content": "You are an emergency management documentation specialist. Generate accurate, concise, and actionable emergency management documents. Flag any content that appears sensitive or requires human verification."
            },
            {
                "role": "user",
                "content": SITREP_PROMPT.format(transcript=transcript_text)
            }
        ],
        temperature=0.1,
        max_tokens=3000
    )

    sitrep = response.choices[0].message.content
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"SITREP_DRAFT_{incident_name}_{timestamp}.md"

    with open(filename, 'w', encoding='utf-8') as f:
        f.write(f"[DRAFT — AI GENERATED — REQUIRES HUMAN REVIEW AND APPROVAL]\n\n")
        f.write(sitrep)

    print(f"Draft SITREP saved: {filename}")
    return filename

if __name__ == "__main__":
    # Read transcript from the file passed on the command line
    import sys
    if len(sys.argv) < 2:
        sys.exit("Usage: python sitrep_generator.py <transcript_file> [incident_name]")
    with open(sys.argv[1], 'r', encoding='utf-8') as f:
        transcript = f.read()
    generate_sitrep(transcript, incident_name=sys.argv[2] if len(sys.argv) > 2 else "Incident")
Note

Every AI-generated SITREP must include the watermark [DRAFT — AI GENERATED — REQUIRES HUMAN REVIEW AND APPROVAL] on every page. This is non-negotiable for emergency management environments — SITREPs drive resource deployment, public communications, and mutual aid decisions. A hallucinated resource count or incorrect situation description can have real operational consequences. The Documentation Unit Leader must read and approve every SITREP before it leaves the EOC.
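The watermark requirement can be enforced in code as well as policy. A minimal guard, assuming the marker string written by the generation scripts above, that any release or upload step calls before a draft leaves the workstation:

```python
# watermark_guard.py — refuse to release any document missing the mandatory
# AI-draft watermark. The marker string must match what the generators write.
WATERMARK = "[DRAFT — AI GENERATED — REQUIRES HUMAN REVIEW AND APPROVAL]"

def is_watermarked(document_text: str) -> bool:
    """Return True if the mandatory draft watermark is present."""
    return WATERMARK in document_text

def assert_watermarked(document_text: str) -> None:
    """Raise before any export or upload step if the watermark is missing."""
    if not is_watermarked(document_text):
        raise ValueError(
            "Refusing to release document: mandatory AI-draft watermark is missing."
        )
```

Wire `assert_watermarked` into the WebEOC push and any file-export path so a stripped watermark fails loudly rather than silently.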

Step 4: Configure AAR Generation for Defense/Training Environments

Configure the Azure OpenAI pipeline to generate structured After-Action Reports from exercise or training event transcripts, following the standard Army/DoD AAR format (Observations, Discussions, Decisions).

aar_generator.py
python
# aar_generator.py
# Generates a structured After-Action Report draft from a training/exercise
# transcript using Azure OpenAI

AAR_PROMPT = """
You are a military training and readiness specialist. Generate a structured 
After-Action Report (AAR) from the following session transcript.

Follow the standard AAR format per Army Training Circular 25-20:

---
AFTER-ACTION REPORT
Unit/Organization: [Extract or 'TBD']
Exercise/Event Name: [Extract or 'TBD']
Date of Event: [Extract or today's date]
Location: [Extract or 'TBD']
Prepared by: [AAR Facilitator — to be completed by reviewer]
Classification: UNCLASSIFIED [Verify and adjust if needed]

1. EXECUTIVE SUMMARY
   [2-3 sentence overview of the event and key findings]

2. TASK(S) EVALUATED
   [List the tasks, missions, or objectives that were the focus of the event]

3. WHAT WAS SUPPOSED TO HAPPEN (Standard/Plan)
   [Expected outcomes, planned actions, and standards against which performance is measured]

4. WHAT ACTUALLY HAPPENED (Observations)
   [Factual account of what occurred — no judgment, just facts from the discussion]

5. WHY IT HAPPENED (Discussion)
   For each significant observation:
   - Observation: [What was observed]
   - Root Cause: [Why it happened — contributing factors]
   - Impact: [Effect on mission/training objective]

6. STRENGTHS (Sustain)
   [Numbered list of things that went well and should be repeated/reinforced]

7. AREAS FOR IMPROVEMENT (Improve)
   [Numbered list of things that need to be improved]

8. LESSONS LEARNED
   [Transferable lessons applicable beyond this specific event]

9. RECOMMENDED ACTIONS
   For each area for improvement:
   - Action: [Specific corrective action]
   - Who: [Responsible party]
   - By When: [Timeline]
   - How Measured: [How completion/success is determined]

10. CONCLUSION
    [Brief summary of overall assessment and path forward]

REVIEWER INSTRUCTIONS:
- Mark with [CLASSIFICATION REVIEW NEEDED] if any discussion touched on sensitive topics
- Verify all recommended actions are specific and achievable
- Have the senior leader present sign/approve before distribution

---
TRANSCRIPT:
{transcript}
"""

def generate_aar(transcript_text: str, event_name: str = "Training Event") -> str:
    from openai import AzureOpenAI
    import os, datetime

    client = AzureOpenAI(
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        api_key=os.environ["AZURE_OPENAI_KEY"],
        api_version="2024-08-01-preview"
    )

    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {"role": "system", "content": "You are a military training documentation specialist. Generate factual, structured after-action reports. Do not embellish or add observations not supported by the transcript."},
            {"role": "user", "content": AAR_PROMPT.format(transcript=transcript_text)}
        ],
        temperature=0.1,
        max_tokens=4000
    )

    aar = response.choices[0].message.content
    timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    filename = f"AAR_DRAFT_{event_name.replace(' ','_')}_{timestamp}.md"

    with open(filename, 'w', encoding='utf-8') as f:
        f.write(f"[DRAFT — AI GENERATED — REQUIRES FACILITATOR REVIEW AND COMMANDER APPROVAL]\n\n")
        f.write(aar)

    return filename
Note

Army TC 25-20 specifies that AARs should be conducted as close to the event as possible while details are fresh. The AI-assisted AAR workflow enables the facilitator to focus on running a quality discussion rather than taking notes — the AI handles transcription and initial structuring, while the facilitator focuses on drawing out observations and ensuring psychological safety in the discussion. The AI must never be presented to participants as the author of the AAR — the AAR is a human document that the AI helped draft.
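One small hardening note on the generator above: `event_name.replace(' ','_')` leaves characters like `/` or `:` in the output filename. A hypothetical slug helper (name and length limit are illustrative) keeps draft filenames filesystem-safe on Windows workstations:

```python
import re

def safe_event_slug(event_name: str, max_len: int = 60) -> str:
    """Reduce a free-text event name to a filesystem-safe slug.

    Runs of any character outside [A-Za-z0-9_-] collapse to a single
    underscore; an empty result falls back to 'event'.
    """
    slug = re.sub(r"[^A-Za-z0-9_-]+", "_", event_name).strip("_")
    return slug[:max_len] or "event"
```

Substitute `safe_event_slug(event_name)` for the bare `replace` call when building `filename` in `generate_aar`.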

Step 5: Integrate with WebEOC (SITREP Auto-Population)

Configure the WebEOC API integration to automatically create a draft SITREP record in WebEOC from the AI-generated content, ready for Documentation Unit review and approval within the EOC platform.

webeoc_integration.py
python
# webeoc_integration.py
# Pushes an AI-generated draft SITREP to WebEOC via REST API

import requests
import os
import json

WEBEOC_BASE_URL = os.environ["WEBEOC_URL"]  # e.g., https://webeoc.agency.gov
WEBEOC_USERNAME = os.environ["WEBEOC_SERVICE_ACCOUNT"]
WEBEOC_PASSWORD = os.environ["WEBEOC_SERVICE_PASSWORD"]
WEBEOC_INCIDENT = os.environ["WEBEOC_INCIDENT_ID"]

def get_webeoc_token():
    """Authenticate to WebEOC and retrieve session token."""
    auth_url = f"{WEBEOC_BASE_URL}/api/authenticate"
    resp = requests.post(auth_url, json={
        "username": WEBEOC_USERNAME,
        "password": WEBEOC_PASSWORD
    }, verify=True)  # Always verify SSL in government environments
    resp.raise_for_status()
    return resp.json().get("token")

def create_sitrep_in_webeoc(sitrep_content: str, incident_id: str):
    """Create a draft SITREP record in WebEOC."""
    token = get_webeoc_token()
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json"
    }

    # WebEOC board/record structure varies by agency configuration
    # Adjust field names to match the agency's WebEOC SITREP board schema
    sitrep_record = {
        "incidentId": incident_id,
        "boardName": "Situation Reports",
        "status": "DRAFT",
        "preparedBy": "AI Documentation Assistant — REQUIRES HUMAN REVIEW",
        "content": sitrep_content,
        "distribution": "INTERNAL ONLY — NOT FOR EXTERNAL DISTRIBUTION UNTIL APPROVED",
        "autoGenerated": True,
        "requiresApproval": True
    }

    create_url = f"{WEBEOC_BASE_URL}/api/incidents/{incident_id}/boards/sitreps"
    resp = requests.post(create_url, json=sitrep_record, headers=headers, verify=True)
    resp.raise_for_status()

    record_id = resp.json().get("id")
    print(f"Draft SITREP created in WebEOC. Record ID: {record_id}")
    print(f"URL: {WEBEOC_BASE_URL}/incidents/{incident_id}/sitreps/{record_id}")
    return record_id
Note

WebEOC API structures vary significantly between agency configurations and WebEOC versions. Work with the agency's WebEOC administrator to map field names correctly to their specific board configuration. Test the integration in WebEOC's training/exercise environment before connecting to the live operational system. Never push AI-generated content directly to boards that distribute to external agencies (mutual aid partners, state EOC) — configure the draft to require Documentation Unit Leader approval before any distribution flag is set.
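WebEOC API calls can also fail transiently during incident load spikes. A small stdlib-only retry wrapper (illustrative; in production narrow the caught exception to `requests.RequestException`) around `create_sitrep_in_webeoc` adds resilience without masking persistent failures:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on exception.

    Re-raises the last exception once all attempts are exhausted so the
    operator still sees a hard failure.
    """
    last_exc = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:  # narrow this in production code
            last_exc = exc
            if attempt < attempts - 1:
                time.sleep(base_delay * (2 ** attempt))
    raise last_exc
```

Usage would look like `with_retries(lambda: create_sitrep_in_webeoc(content, incident_id))`, keeping the integration function itself unchanged.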

Custom AI Components

Improvement Plan Generator

Type: Prompt

Extends AAR output to generate a structured Improvement Plan (IP) with specific corrective actions, owners, timelines, and measurable outcomes — the standard deliverable following formal exercises under HSEEP (Homeland Security Exercise and Evaluation Program).

Implementation

Improvement Plan Generator – System Prompt
text
SYSTEM PROMPT:
You are an exercise evaluation specialist certified in FEMA's Homeland Security 
Exercise and Evaluation Program (HSEEP). Based on the provided After-Action Report,
generate a structured Improvement Plan (IP) in HSEEP format.

For each Area for Improvement (AFI) identified in the AAR, produce:

IMPROVEMENT PLAN
Exercise Name: [From AAR]
Date: [From AAR]
Prepared by: [To be completed by reviewer]

IMPROVEMENT MATRIX:
| # | Capability | AFI Description | Recommendation | Responsible Agency | Point of Contact | Due Date | Status |
|---|---|---|---|---|---|---|---|
[Populate from AAR findings]

IMPLEMENTATION NOTES:
- Each recommendation must be SMART (Specific, Measurable, Achievable, Relevant, Time-bound)
- Due dates should be realistic — default to 90 days unless context suggests otherwise
- Flag recommendations requiring interagency coordination with [MULTI-AGENCY]
- Flag recommendations requiring funding with [FUNDING REQUIRED]
- This IP should be reviewed at the post-exercise conference and updated quarterly

INPUT AAR:
{aar_content}
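Because the prompt requests a fixed 8-column markdown matrix, the output can be machine-checked before human review. A hypothetical post-processing sketch (function name and row shape are assumptions, not part of any library) that turns the matrix into trackable rows:

```python
def parse_improvement_matrix(ip_markdown: str) -> list[dict]:
    """Extract markdown-table rows from an HSEEP Improvement Plan draft.

    Returns one dict per corrective action, keyed by the table headers,
    so each row can be loaded into a tracking tool for quarterly review.
    """
    rows = []
    headers = None
    for line in ip_markdown.splitlines():
        line = line.strip()
        if not (line.startswith("|") and line.endswith("|")):
            continue  # not a table row
        cells = [c.strip() for c in line.strip("|").split("|")]
        if headers is None:
            headers = cells  # first table row is the header
            continue
        if cells[0] and set(cells[0]) <= {"-", ":", " "}:
            continue  # markdown separator row (---, :---, etc.)
        rows.append(dict(zip(headers, cells)))
    return rows
```

An empty result on a non-empty draft is a useful failure signal: the model dropped the matrix format and the IP needs regeneration.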

SITREP Quality Checklist Prompt

Type: Prompt

A secondary AI review pass that checks a generated SITREP for completeness, consistency, and potential sensitive content before the Documentation Unit Leader reviews it.

Implementation

text
SYSTEM PROMPT:
You are an emergency management quality assurance reviewer.
Review the following draft SITREP and evaluate it against these criteria:

COMPLETENESS CHECK:
- [ ] Incident name and number present
- [ ] Reporting period clearly stated
- [ ] All standard ICS sections addressed (or noted as N/A)
- [ ] Resource summary present
- [ ] Next operational period priorities identified
- [ ] No blank or 'TBD' fields that should have been populated from the transcript

ACCURACY FLAGS:
- Flag any numeric claims (resource counts, casualty figures, dates) with [VERIFY-NUMBER]
- Flag any named individuals with [VERIFY-NAME]
- Flag any claims not clearly supported by the transcript with [UNVERIFIED]

SENSITIVE CONTENT FLAGS:
- [CIKR] — any mention of critical infrastructure specifics
- [VICTIM-PHI] — any individually identifiable victim information
- [LAW-ENFORCEMENT-SENSITIVE] — any tactical law enforcement information
- [OPSEC] — any information that could compromise ongoing response operations

DISTRIBUTION READINESS:
Rate each distribution channel:
- Internal EOC only: [READY / NOT READY — reason]
- State EOC: [READY / NOT READY — reason]
- Public (PIO): [READY / NOT READY — reason]
- HSIN: [READY / NOT READY — reason]

DRAFT SITREP:
{sitrep_content}
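The flags this prompt emits can be tallied automatically so the Documentation Unit Leader sees open items at a glance before sign-off. A sketch, assuming the flag vocabulary defined above; the helper names are illustrative:

```python
import re

# Flag vocabulary from the QA prompt: verification and sensitivity markers.
FLAG_PATTERN = re.compile(
    r"\[(VERIFY-NUMBER|VERIFY-NAME|UNVERIFIED|CIKR|VICTIM-PHI|"
    r"LAW-ENFORCEMENT-SENSITIVE|OPSEC)\]"
)

def summarize_flags(reviewed_sitrep: str) -> dict:
    """Count each flag type in a QA-reviewed SITREP draft."""
    counts: dict = {}
    for match in FLAG_PATTERN.finditer(reviewed_sitrep):
        counts[match.group(1)] = counts.get(match.group(1), 0) + 1
    return counts

def distribution_blocked(reviewed_sitrep: str) -> bool:
    """Hold external distribution while any flag remains unresolved."""
    return bool(summarize_flags(reviewed_sitrep))
```

This is a gate for the automated pipeline only; a clean scan does not replace the human read-through.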

Testing & Validation

  • EOC ambient noise test: Play recordings of realistic EOC background noise (radio traffic, multiple conversations) at the expected volume levels while conducting a simulated 10-minute briefing with 2–3 speakers. Measure transcription word error rate — target <15% WER in the EOC environment (higher tolerance than a quiet conference room due to environmental complexity).
  • SITREP format compliance test: Generate SITREPs from three sample transcripts representing different incident types (weather emergency, infrastructure failure, public health event). Have an experienced emergency manager evaluate each SITREP against agency SITREP standards. Target: ≥80% of sections correctly populated without hallucinated content.
  • AAR format compliance test: Generate AARs from two sample exercise transcripts. Have the exercise EXCON officer evaluate against TC 25-20 standards. Verify no observations are fabricated — AI should only report what was discussed in the transcript.
  • WebEOC integration test: Create a test incident in WebEOC training environment. Run the integration to push a draft SITREP. Verify the record appears correctly in the SITREP board with DRAFT status, approval-required flag, and correct incident association.
  • Sensitive content detection test: Include mock sensitive terms (critical infrastructure location, individual victim name, law enforcement tactical detail) in a test transcript. Verify the SITREP quality checklist prompt correctly flags these items with the appropriate sensitivity markers.
  • Watermark persistence test: Verify the [DRAFT — AI GENERATED] watermark appears on every page of generated documents and cannot be inadvertently removed by WebEOC's display formatting.
  • Network isolation test: During a test transcription session, verify via network capture that audio data routes exclusively to Azure Government endpoints (*.azure.us), not to commercial endpoints.
  • Failover test: Simulate an internet connectivity failure during an active transcription session. Verify the transcription service fails gracefully, saves a local backup of the transcript captured so far, and alerts the operator. Confirm no data loss for the portion captured before connectivity failed.
  • Approval workflow test: Verify that a draft SITREP in WebEOC cannot be distributed to external channels without the Documentation Unit Leader approval action. Attempt to distribute without approval and confirm the system blocks the action.
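For the ambient noise test above, word error rate can be scored with a short word-level edit-distance function. This is the standard WER formulation (substitutions, insertions, and deletions over reference length), shown here as an illustrative helper:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # d[i][j] = edit distance between first i reference and j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution/match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

Score the transcript against the scripted briefing text; a result above 0.15 fails the <15% WER target and warrants revisiting the MXA920 beam and noise-reduction settings.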

Client Handoff

Client Handoff Meeting Agenda (75 minutes — Emergency Management Director + IT Lead + Documentation Unit Lead)

1. Architecture and Data Flow Review (15 min)

  • Review: microphone → Azure Government → SITREP generation → WebEOC draft → human approval → distribution
  • Confirm no sensitive EOC data transits commercial cloud endpoints
  • Present network diagram and Azure resource inventory

2. Operational Workflow Training (20 min)

  • Live demonstration: start briefing transcription → stop at briefing end → review SITREP draft in WebEOC → approve and set distribution
  • Demonstrate AAR workflow for training events
  • Demonstrate sensitive content flagging — show a SITREP with [VERIFY] and [SENSITIVE] flags
  • Role-play a Documentation Unit Leader reviewing and correcting an AI draft

3. Compliance and Policy Documentation (15 min)

  • Present StateRAMP / FedRAMP authorization documentation for all platforms
  • Review data handling policy for EOC-generated content
  • Confirm retention schedule for transcripts, SITREPs, and AARs
  • Review the mandatory human approval requirement (document as agency policy)

4. Roles, Responsibilities, and Governance (15 min)

  • Designate: Documentation Unit Leader (primary reviewer), Alternate (backup)
  • Confirm: who manages Azure Government subscription, WebEOC admin, recording workstation
  • Review: what happens if AI system is unavailable during an actual incident (manual fallback documented)

5. Documentation Handoff

Maintenance

Monthly Tasks

  • Review Azure AI Speech and OpenAI consumption costs in Azure Government Cost Management. Alert if costs exceed budget.
  • Test recording workstation connectivity to Azure Government endpoints. Verify transcription service auto-starts correctly.
  • Review WebEOC integration — check service account password hasn't expired. Verify API connectivity.

Quarterly Tasks

  • Review SITREP and AAR quality with the Documentation Unit Lead. Adjust prompt templates if output quality has shifted (common after Azure OpenAI model updates).
  • Update ICS/SITREP templates if the agency has revised its SITREP format or adopted new NIMS guidance.
  • Verify StateRAMP/FedRAMP authorization status of all platforms remains current.
  • Test the manual fallback procedure (can the Documentation Unit produce an acceptable SITREP without AI assistance if the system fails during an incident?).

Annual Tasks

  • Full compliance review: update SSP/security documentation for the AI system. Renew StateRAMP authorization evidence.
  • Records disposition review: confirm transcripts, SITREPs, and AARs are being retained and disposed per the agency's records schedule.
  • Exercise the full system during a scheduled tabletop or functional exercise to confirm it performs under realistic conditions before it's needed in an actual incident.

Alternatives

...

Palantir AIP for Government (Enterprise AI Operations)

Palantir's Artificial Intelligence Platform (AIP) for Government offers FedRAMP High and IL4/IL5-authorized AI capabilities including meeting transcription, document generation, and operational AI workflows. Purpose-built for defense and intelligence community workflows.

AWS GovCloud — Amazon Transcribe + Bedrock

Direct equivalent to the Azure Government approach using Amazon Transcribe (FedRAMP High) for speech-to-text and Amazon Bedrock with Claude (GovCloud) for document generation. Best for: Agencies standardized on AWS GovCloud. Tradeoffs: Less native integration with Microsoft 365 GCC High; requires separate integration work for SharePoint/Teams environments.

ESRI ArcGIS Emergency Management (Geospatial SITREP)

For incidents with a strong geospatial component (wildfires, flood events, hazmat), ESRI's ArcGIS Emergency Management platform provides map-integrated SITREPs and situation awareness tools. Can be combined with the AI transcription pipeline — AI generates the text SITREP, geospatial data is overlaid in ArcGIS. Best for: Agencies already using ArcGIS for incident mapping. Complementary to, not a replacement for, the transcription/AI pipeline described here.

Want early access to the full toolkit?