
Implementation Guide: Continuous Earned Value Management (EVM) Analysis — Detect Schedule/Cost Variances & Draft Corrective Action Plans
A step-by-step implementation guide for deploying AI for continuous earned value management (EVM) analysis: detecting schedule/cost variances and drafting corrective action plans for Government & Defense clients.
Software Procurement
Deltek Costpoint (EVM Data Source)
Client-owned
Primary source for EVM performance data (BCWS, BCWP, ACWP, BAC, EAC, ETC by WBS element). Costpoint's Earned Value module tracks performance data at the control account level. The monitoring pipeline queries Costpoint daily via REST API or SQL reporting views.
Microsoft Azure OpenAI Service (Azure Government)
GPT-5.4: ~$0.005/1K input, ~$0.015/1K output. Monthly CAP narrative: ~$5–$15. Variance analysis narrative: ~$3–$8.
Generates CAP narratives, variance analysis prose, and EVM portions of CPR submissions from structured EVM data. Operates within Azure Government FedRAMP boundary — EVM data is CUI//PROCURE.
Microsoft Power BI (GCC)
$20/user/month
EVM dashboard providing real-time program health visualization: CPI/SPI trends, cumulative and current period performance, variance waterfall charts, TCPI comparison, and EAC projection cones. Connects to Azure SQL (Government) populated by the EVM data pipeline.
Microsoft Azure SQL (Azure Government)
~$185/month (General Purpose, 2 vCores)
Stores the historical EVM data time series (monthly snapshots of all EVM metrics by contract, WBS element, and control account). Enables trend analysis and threshold monitoring across the full program lifecycle.
Microsoft Power Automate (GCC High)
Included
Orchestrates the autonomous monitoring loop: daily data pull from Costpoint → threshold analysis → alert generation → CAP draft routing → CPR preparation trigger.
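The loop that Power Automate orchestrates can be sketched in plain Python. This is an illustrative sketch only: the step functions are placeholders standing in for the real Power Automate connectors, not part of the shipped toolchain.

```python
# Illustrative sketch of the daily monitoring loop Power Automate orchestrates.
# Each callable is a placeholder for the corresponding connector/flow step.

def run_daily_monitoring_cycle(extract, analyze, alert, draft_cap, trigger_cpr):
    """Run one daily cycle: pull data, flag breaches, route drafts."""
    records = extract()                       # 1. daily data pull from Costpoint
    flagged = analyze(records)                # 2. threshold analysis
    for ca in flagged:                        # 3. alert generation per breach
        alert(ca)
        if ca["overall_flag"] == "CRITICAL":  # 4. CAP draft routing for critical CAs
            draft_cap(ca)
    trigger_cpr(flagged)                      # 5. CPR preparation trigger
    return flagged
```

The real flow wires these steps to the Costpoint extract, the threshold checker, Teams alerts, and the CAP generator described in the installation steps below.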
Prerequisites
- EVMS validation or baseline: The contractor's EVMS must be validated (for programs ≥$100M) or established per ANSI/EIA-748 guidelines before this monitoring pipeline can be deployed. The pipeline monitors an existing EVMS — it does not establish one. If the client does not have a functioning EVMS, EVMS implementation is a prerequisite engagement.
- Control account structure in Costpoint: The Costpoint EVM module must be configured with a proper WBS/OBS (Work Breakdown Structure / Organizational Breakdown Structure) and control account structure. Each control account must have: baseline (BCWS), actuals (ACWP), and earned value (BCWP) tracking enabled. Verify with the program controller before deployment.
- Threshold definitions: Work with the Program Manager and the contract requirements to define variance thresholds for automated alerting. DCMA's standard surveillance thresholds (CPI < 0.85, SPI < 0.85, |VAC| > 10% of BAC) are the baseline; the program team may define tighter internal thresholds. Document the approved thresholds in writing before configuring alerts.
- CPR/IPMR reporting requirements: Identify the specific contract reporting requirements: which format (CPR Formats 1–5, IPMR Format 7), what level of WBS detail, and the submission schedule. The AI-generated EVM narratives must align with the required reporting format.
- IT admin access: Azure Government subscription, Costpoint API credentials (read-only), Azure SQL, Power BI GCC, Power Automate GCC High.
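Once approved, the thresholds can be captured as a small configuration consumed by the monitoring pipeline. A minimal sketch, assuming this guide's key-name convention; the values shown mirror the DCMA-style defaults above, so confirm them with the program team before deployment:

```python
# thresholds.py — example threshold configuration (values per program agreement)
EVM_THRESHOLDS = {
    "cpi_warning": 0.90,       # internal early-warning level
    "cpi_critical": 0.85,      # DCMA standard surveillance threshold
    "spi_warning": 0.90,
    "spi_critical": 0.85,
    "vac_pct_warning": 0.05,   # |VAC| / BAC above 5% -> warning
    "vac_pct_critical": 0.10,  # |VAC| / BAC above 10% -> critical
}

def classify(value, warning, critical, higher_is_better=True):
    """Classify a metric against its warning/critical limits."""
    if value is None:
        return "OK"  # no basis to flag without data
    breached_critical = value < critical if higher_is_better else value > critical
    breached_warning = value < warning if higher_is_better else value > warning
    if breached_critical:
        return "CRITICAL"
    if breached_warning:
        return "WARNING"
    return "OK"
```

CPI and SPI are "higher is better" metrics; the VAC percentage is checked with `higher_is_better=False` since a larger overrun fraction is worse.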
Installation Steps
...
Step 1: Build the EVM Data Extraction and Storage Pipeline
Extract EVM metrics from Costpoint daily and store in Azure SQL for trend analysis and threshold monitoring.
# evm_data_pipeline.py
# Daily extraction of EVM metrics from Deltek Costpoint
import os
from datetime import datetime, timezone
from typing import Optional

import pandas as pd
import requests
import sqlalchemy

COSTPOINT_BASE = os.environ["COSTPOINT_URL"]
COSTPOINT_AUTH = (os.environ["COSTPOINT_USER"], os.environ["COSTPOINT_PASS"])
SQL_ENGINE = sqlalchemy.create_engine(os.environ["AZURE_SQL_CONNECTION"])

def extract_evm_data(contract_ids: list, period: Optional[str] = None) -> pd.DataFrame:
    """Extract EVM performance data from Costpoint for all active contracts."""
    period = period or datetime.today().strftime("%Y-%m")
    all_records = []
    for contract_id in contract_ids:
        # Get control account level EVM data
        resp = requests.get(
            f"{COSTPOINT_BASE}/api/v1/contracts/{contract_id}/evm/control-accounts",
            auth=COSTPOINT_AUTH,
            params={"period": period},
            headers={"Accept": "application/json"},
            timeout=60,
        )
        if resp.status_code != 200:
            print(f"Warning: could not retrieve EVM data for contract {contract_id}")
            continue
        for ca in resp.json().get("controlAccounts", []):
            bcws = ca.get("budgetedCostWorkScheduled", 0) or 0
            bcwp = ca.get("budgetedCostWorkPerformed", 0) or 0
            acwp = ca.get("actualCostWorkPerformed", 0) or 0
            bac = ca.get("budgetAtCompletion", 0) or 0
            eac = ca.get("estimateAtCompletion", 0) or 0
            etc = ca.get("estimateToComplete", 0) or 0
            # Calculate derived EVM metrics (None where the denominator is zero)
            cv = bcwp - acwp  # Cost Variance
            sv = bcwp - bcws  # Schedule Variance
            cpi = bcwp / acwp if acwp else None  # Cost Performance Index
            spi = bcwp / bcws if bcws else None  # Schedule Performance Index
            vac = bac - eac  # Variance At Completion
            tcpi_bac = (bac - bcwp) / (bac - acwp) if (bac - acwp) else None  # TCPI to BAC
            tcpi_eac = (bac - bcwp) / (eac - acwp) if (eac - acwp) else None  # TCPI to EAC
            record = {
                "extract_date": datetime.now(timezone.utc).date(),
                "period": period,
                "contract_id": contract_id,
                "contract_number": ca.get("contractNumber", ""),
                "control_account": ca.get("controlAccountId", ""),
                "wbs_element": ca.get("wbsElement", ""),
                "responsible_manager": ca.get("responsibleManager", ""),
                "bcws": bcws,
                "bcwp": bcwp,
                "acwp": acwp,
                "bac": bac,
                "eac": eac,
                "etc": etc,
                "cv": cv,
                "sv": sv,
                "cpi": round(cpi, 4) if cpi is not None else None,
                "spi": round(spi, 4) if spi is not None else None,
                "vac": vac,
                "tcpi_bac": round(tcpi_bac, 4) if tcpi_bac is not None else None,
                "tcpi_eac": round(tcpi_eac, 4) if tcpi_eac is not None else None,
                "percent_complete": round(bcwp / bac * 100, 1) if bac else 0,
            }
            all_records.append(record)
    df = pd.DataFrame(all_records)
    return df

def load_evm_to_sql(df: pd.DataFrame):
    """Load EVM data to Azure SQL for trend analysis."""
    if df.empty:
        return
    df.to_sql("evm_performance_history", SQL_ENGINE,
              if_exists="append", index=False, chunksize=500)
    print(f"Loaded {len(df)} EVM records to Azure SQL.")

def check_thresholds(df: pd.DataFrame, thresholds: dict) -> pd.DataFrame:
    """Flag control accounts exceeding variance thresholds."""
    df = df.copy()
    df["cpi_flag"] = df["cpi"].apply(
        lambda x: "CRITICAL" if x is not None and x < thresholds.get("cpi_critical", 0.80)
        else ("WARNING" if x is not None and x < thresholds.get("cpi_warning", 0.90) else "OK")
    )
    df["spi_flag"] = df["spi"].apply(
        lambda x: "CRITICAL" if x is not None and x < thresholds.get("spi_critical", 0.80)
        else ("WARNING" if x is not None and x < thresholds.get("spi_warning", 0.90) else "OK")
    )
    df["vac_flag"] = df.apply(
        lambda row: "CRITICAL" if row["bac"] and abs(row["vac"] / row["bac"]) > thresholds.get("vac_pct_critical", 0.10)
        else ("WARNING" if row["bac"] and abs(row["vac"] / row["bac"]) > thresholds.get("vac_pct_warning", 0.05) else "OK"),
        axis=1
    )
    df["overall_flag"] = df[["cpi_flag", "spi_flag", "vac_flag"]].apply(
        lambda row: "CRITICAL" if "CRITICAL" in row.values
        else ("WARNING" if "WARNING" in row.values else "OK"),
        axis=1
    )
    return df

Step 2: Build the AI Variance Analysis and CAP Generator
Generate automated variance analysis narratives and Corrective Action Plan drafts when thresholds are breached.
# evm_analysis_generator.py
# Generates variance-analysis narratives and CAP drafts from structured EVM data
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview",
)

VARIANCE_ANALYSIS_PROMPT = """You are a defense program management EVM specialist.
Generate a variance analysis narrative for the following EVM performance data.
This narrative will be used in the program's Cost Performance Report (CPR)
and reviewed by DCMA surveillance personnel.
CONTRACT: {contract_number}
CONTROL ACCOUNT: {control_account}
PERIOD: {period}
RESPONSIBLE MANAGER: {responsible_manager}
EVM PERFORMANCE DATA:
BAC: ${bac:,.0f}
BCWS: ${bcws:,.0f} (Planned Value)
BCWP: ${bcwp:,.0f} (Earned Value)
ACWP: ${acwp:,.0f} (Actual Cost)
EAC: ${eac:,.0f} (Estimate at Completion)
ETC: ${etc:,.0f} (Estimate to Complete)
CALCULATED METRICS:
CPI: {cpi:.3f} ({cpi_status})
SPI: {spi:.3f} ({spi_status})
CV: ${cv:,.0f} ({cv_dir})
SV: ${sv:,.0f} ({sv_dir})
VAC: ${vac:,.0f} ({vac_dir})
% Complete: {pct_complete:.1f}%
PROGRAM CONTEXT (if provided): {program_context}
Generate the following CPR narrative sections:
## COST VARIANCE ANALYSIS
[Explain the CV in plain language: what caused it, whether it is isolated or systemic,
and trend direction. Reference specific work activities where possible.
If context not provided, indicate [PM INPUT NEEDED: specific cause of cost overrun/underrun]]
## SCHEDULE VARIANCE ANALYSIS
[Explain the SV: what schedule events drove performance, which milestones are affected,
and impact on downstream work.
If context not provided, indicate [PM INPUT NEEDED: specific cause of schedule variance]]
## EAC ASSESSMENT
[Assess whether the EAC is realistic: does TCPI-to-EAC = {tcpi_eac:.3f} represent
an achievable performance improvement? If TCPI > 1.10, note that the EAC may be
optimistic and an EAC revision may be warranted.]
## OUTLOOK AND RISK
[Assess forward-looking risk based on current performance trends.
Flag if CPI or SPI trends indicate further deterioration is likely.]
[DRAFT — REQUIRES PROGRAM MANAGER REVIEW AND APPROVAL BEFORE CPR SUBMISSION]
[All [PM INPUT NEEDED] items must be filled before submission to DCMA]"""

CAP_PROMPT = """You are a defense program management specialist drafting a Corrective Action Plan
for a control account with adverse EVM performance. This CAP will be reviewed by DCMA.
CONTROL ACCOUNT: {control_account}
CONTRACT: {contract_number}
PM: {responsible_manager}
PERFORMANCE ISSUE: CPI={cpi:.3f}, SPI={spi:.3f}
VAC: ${vac:,.0f}
Generate a structured Corrective Action Plan (CAP):
## CORRECTIVE ACTION PLAN
**Control Account:** {control_account}
**Contract:** {contract_number}
**CAP Prepared By:** [PM NAME — to be filled]
**Date:** [TODAY]
**CAP Number:** CAP-[SEQUENCE]
### 1. ROOT CAUSE OF VARIANCE
[Primary cause (technical / scope / estimation / resource): [PM INPUT REQUIRED]
Secondary contributing factors: [PM INPUT REQUIRED]]
### 2. CORRECTIVE ACTIONS
| Action | Description | Owner | Due Date | Expected Impact |
|--------|-------------|-------|----------|-----------------|
| CA-01 | [Specific action — PM input] | [Owner] | [Date] | [CPI/SPI impact] |
| CA-02 | [Specific action — PM input] | [Owner] | [Date] | [CPI/SPI impact] |
| CA-03 | [Specific action — PM input] | [Owner] | [Date] | [CPI/SPI impact] |
### 3. REVISED EAC (IF APPLICABLE)
Current EAC: ${eac:,.0f}
Revised EAC: [PM INPUT — if EAC revision is warranted]
EAC Rationale: [PM INPUT]
### 4. PERFORMANCE RECOVERY PLAN
Projected CPI recovery timeline: [PM INPUT]
Projected SPI recovery timeline: [PM INPUT]
Metrics to track recovery: [CPI/SPI monthly, milestone completion rate]
### 5. LESSONS LEARNED
[What process improvement will prevent recurrence: PM INPUT]
[DRAFT — AI GENERATED — PROGRAM MANAGER MUST COMPLETE ALL [PM INPUT] SECTIONS]
[DCMA CAP SUBMISSION REQUIRES PM AND CONTRACT MANAGER SIGN-OFF]"""

def generate_variance_analysis(ca_data: dict, program_context: str = "") -> str:
    """Draft the CPR variance-analysis narrative for one flagged control account.

    Assumes cpi, spi, and tcpi_eac are populated; flagged accounts always have
    nonzero ACWP/BCWS, so the Step 1 pipeline computes them.
    """
    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {"role": "system",
             "content": "You are a DCMA-experienced EVM analyst. Generate technically accurate variance analysis narratives. Flag any assumptions clearly."},
            {"role": "user",
             "content": VARIANCE_ANALYSIS_PROMPT.format(
                 **ca_data,
                 # the pipeline emits percent_complete; the prompt expects pct_complete
                 pct_complete=ca_data.get("percent_complete", 0),
                 cv_dir="unfavorable" if ca_data.get("cv", 0) < 0 else "favorable",
                 sv_dir="unfavorable" if ca_data.get("sv", 0) < 0 else "favorable",
                 vac_dir="unfavorable" if ca_data.get("vac", 0) < 0 else "favorable",
                 cpi_status="below threshold" if ca_data.get("cpi", 1) < 0.90 else "acceptable",
                 spi_status="below threshold" if ca_data.get("spi", 1) < 0.90 else "acceptable",
                 program_context=program_context or "[No context provided — PM should review all narrative sections]",
             )},
        ],
        temperature=0.1,
        max_tokens=2000,
    )
    return response.choices[0].message.content

def generate_cap(ca_data: dict) -> str:
    """Draft a structured Corrective Action Plan for an adverse control account."""
    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {"role": "system",
             "content": "You are a defense program management specialist. Generate structured CAP frameworks that guide program managers to document specific, accountable corrective actions."},
            {"role": "user", "content": CAP_PROMPT.format(**ca_data)},
        ],
        temperature=0.1,
        max_tokens=2000,
    )
    return response.choices[0].message.content

Step 3: Configure the EVM Dashboard in Power BI
Build the EVM program health dashboard with real-time performance visualization.
## Power BI GCC Dashboard: EVM Program Health Monitor
DATA SOURCE: Azure SQL (Government) — evm_performance_history table
DASHBOARD PAGES:
PAGE 1: PROGRAM HEALTH OVERVIEW
- KPI Cards (traffic light — green/amber/red):
* CPI (cumulative): [value] — Green(≥0.95) / Amber(0.85-0.95) / Red(<0.85)
* SPI (cumulative): [value]
* VAC ($): [value] — Green(favorable) / Red(unfavorable > 5% of BAC)
* % Complete: [value]
* EAC vs BAC: [$ variance]
- Waterfall chart: BAC → BCWP → ACWP → EAC (shows cost variance visually)
- Line chart: CPI and SPI trend (last 12 months) with threshold reference lines at 0.85 and 1.0
- Gauge: % Complete vs Time Elapsed (should track together on healthy programs)
PAGE 2: CONTROL ACCOUNT DRILL-DOWN
- Table: All control accounts with CPI, SPI, CV, SV, % Complete, Flag status
Color-coded rows: Green / Amber / Red per overall_flag
Sortable by: CPI ascending (worst performers first)
- Bar chart: CV by control account (horizontal, sorted worst to best)
- Filter: By WBS level, by responsible manager, by flag status
PAGE 3: THRESHOLD ALERTS
- Table: All control accounts in WARNING or CRITICAL status
Columns: CA, Manager, CPI, SPI, VAC, Flag, Days Since First Flag, CAP Status
- KPI: Count of CAs in CRITICAL status
- DCMA Surveillance Risk: CAs with CPI below 0.85 that will appear in the next CPR
PAGE 4: EAC PROJECTION AND TCPI
- Scatter: EAC vs BAC by control account (diagonal line = EAC=BAC)
Points above the line = over-budget projections
- TCPI analysis: CAs with TCPI > 1.10 (effectively impossible recovery pace) highlighted
- EAC confidence: Range showing optimistic/realistic/pessimistic EAC scenarios
REFRESH: Daily at 07:00 ET (after overnight Costpoint data extraction)
ALERT INTEGRATION: Power BI data-driven alerts push to Teams when CPI/SPI cross thresholds
Custom AI Components
CPR Narrative Package Generator
Type: Prompt
Generates the complete CPR Format 5 (Problem Analysis Report) narrative from the AI variance analyses for all flagged control accounts, ready for PM review and DCMA submission.
Implementation:
SYSTEM PROMPT:
You are a defense program controls specialist assembling the CPR Format 5
(Problem Analysis Report) for DCMA submission.
CONTRACT: {contract_number}
REPORTING PERIOD: {period}
PROGRAM MANAGER: {pm_name}
Compile the following individual control account variance analyses into a
cohesive CPR Format 5 narrative that:
1. Provides a program-level summary of performance and EAC confidence
2. Presents significant variances by WBS element (only those meeting threshold criteria)
3. Describes corrective actions in place or planned
4. Assesses the risk to the program's EAC
5. Meets DCMA's CPR Format 5 content requirements per ANSI/EIA-748
INDIVIDUAL VARIANCE ANALYSES:
{variance_analyses}
PROGRAM CONTEXT:
{program_context}
[DRAFT — REQUIRES PM AND PROGRAM CONTROLS MANAGER REVIEW AND SIGN-OFF BEFORE DCMA SUBMISSION]
Testing & Validation
- EVM metric accuracy test: Compare pipeline-calculated CPI, SPI, CV, SV, VAC, and TCPI against values manually computed from the same Costpoint data. All metrics must match to 4 decimal places.
- Threshold detection test: Insert test records with known CPI/SPI values bracketing the warning and critical thresholds. Verify the flagging logic correctly classifies each as OK/WARNING/CRITICAL.
- Trend detection test: Load 12 months of historical EVM data for a test contract with known performance trends (declining CPI). Verify the Power BI trend chart correctly shows the deterioration and the dashboard alerts fire at the correct threshold crossing point.
- CAP generation quality test: Generate a CAP for a control account with actual past performance data. Have the PM who managed that control account evaluate the CAP framework — is it structured correctly, does it capture the right categories of corrective action, are all PM input sections clearly marked?
- CPR narrative quality test: Generate a CPR Format 5 narrative and have the program controls manager evaluate it against DCMA's CPR Format 5 requirements. Verify all required elements are present.
- Power BI DCMA simulation test: Set 3 control accounts below 0.85 CPI. Have the Program Manager simulate a DCMA surveillance visit using the Power BI dashboard. Verify the dashboard surfaces exactly the right information for the surveillance conversation.
- Alert delivery test: Trigger a threshold crossing in the test data and verify: Teams alert to Program Manager and Program Controls Manager delivered within 1 hour, variance analysis draft generated and saved to SharePoint within 2 hours.
- Data latency test: Verify the daily extract from Costpoint completes before 07:00 ET Power BI refresh, so the dashboard always reflects same-day data.
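The EVM metric accuracy test can be automated by recomputing each metric from the standard EVM formulas and comparing against hand-worked values. A sketch: the `derive_metrics` helper is illustrative, and the column names follow the Step 1 pipeline.

```python
# test_evm_metrics.py — recompute EVM metrics from first principles and
# compare against hand-worked values (standard EVM definitions).

def derive_metrics(bcws, bcwp, acwp, bac, eac):
    """Derive the standard EVM metrics from raw performance values."""
    return {
        "cv": bcwp - acwp,                                  # Cost Variance
        "sv": bcwp - bcws,                                  # Schedule Variance
        "cpi": round(bcwp / acwp, 4),                       # Cost Performance Index
        "spi": round(bcwp / bcws, 4),                       # Schedule Performance Index
        "vac": bac - eac,                                   # Variance At Completion
        "tcpi_eac": round((bac - bcwp) / (eac - acwp), 4),  # TCPI to EAC
    }

def test_known_control_account():
    # Hand-worked example: $1.0M BAC, 40% earned, overspent and behind plan
    m = derive_metrics(bcws=500_000, bcwp=400_000, acwp=480_000,
                       bac=1_000_000, eac=1_150_000)
    assert m["cv"] == -80_000      # unfavorable cost variance
    assert m["sv"] == -100_000     # unfavorable schedule variance
    assert m["cpi"] == 0.8333      # 400k / 480k
    assert m["spi"] == 0.8         # 400k / 500k
    assert m["vac"] == -150_000    # projected overrun
    assert m["tcpi_eac"] == 0.8955 # (1.0M - 400k) / (1.15M - 480k)
```

Run the same assertions against the pipeline's output columns for the same raw inputs to confirm the two implementations agree to 4 decimal places.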
Client Handoff
Handoff Meeting Agenda (75 minutes — Program Manager + Program Controls Manager + Contracts Manager + IT Lead)
1. EVM dashboard walkthrough (20 min)
   - Walk through all 4 dashboard pages with the program controls manager
   - Verify all metrics align with the program controls manager's manual calculations
   - Review the DCMA surveillance threshold configuration
2. Alert and escalation workflow (15 min)
   - Demonstrate a threshold breach alert and the resulting CAP generation workflow
   - Confirm the alert recipients are correct
   - Review the 30/15/5-day CPR preparation workflow
3. Variance analysis and ...
Maintenance
Daily Tasks (Automated)
- EVM data extraction from Costpoint runs at 04:00 ET
- Power BI dashboard refreshes at 07:00 ET
- Threshold monitoring runs after data extraction — alerts fire automatically
Monthly Tasks (CPR Cycle)
- 15 days before CPR due: trigger CPR narrative preparation workflow
- PM receives draft CPR Format 5 narrative for all flagged control accounts
- Program controls manager reviews and signs off before submission
Quarterly Tasks
- Review threshold configuration with PM — adjust if program risk profile has changed
- Review AI-generated narrative quality against DCMA feedback from surveillance visits
- Azure SQL cost review and right-sizing
Annual Tasks
- Full EVM data archive — move closed contracts' EVM history to cold storage
- Review ANSI/EIA-748 guideline updates and update prompts if surveillance criteria have changed
- DCMA surveillance preparation — compile 12 months of EVM trend data and CAP history
Alternatives
Empower (Deltek EVM Product)
wInsight Analytics (EVM Analytics Platform)
Manual EVM Monitoring + AI Report Assistance (Conservative)
For programs with low variance and infrequent threshold breaches, deploy the AI narrative generation capability (variance analysis, CAP drafts, CPR Format 5) without the continuous autonomous monitoring. Program controls staff run the analysis on demand each month rather than autonomously. Reduces infrastructure cost while still providing AI assistance for the most labor-intensive reporting tasks. Best for: Small programs with stable EVM performance where autonomous monitoring adds limited value.