
Implementation Guide: Draft Proposal Volumes — Technical, Management & Past Performance
Step-by-step implementation guide for deploying AI to draft proposal volumes — technical, management & past performance for Government & Defense clients.
Software Procurement
Microsoft Azure OpenAI Service (Azure Government)
GPT-5.4: ~$0.005/1K input tokens, ~$0.015/1K output tokens. A full proposal draft (3 volumes, ~100 pages) typically costs $15–$40 in API consumption.
Primary LLM for proposal drafting in CUI/DFARS environments. FedRAMP High authorized, operating within Azure Government boundary. Required for proposals involving DoD CUI, ITAR-adjacent technical content, or any solicitation under DFARS 252.204-7012. See UC-01 for provisioning details.
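As a sanity check on that consumption estimate, here is a back-of-the-envelope cost model. All figures (tokens per word, regeneration count, context multiplier) are illustrative assumptions, not Azure list prices for any particular region:

```python
# Back-of-the-envelope API cost for a 3-volume, ~100-page draft.
# All figures below are assumptions for illustration only.
IN_RATE, OUT_RATE = 0.005, 0.015   # $ per 1K tokens (input / output)
pages, words_per_page = 100, 500
tokens_per_word = 1.3              # rough heuristic for English prose
regenerations = 8                  # sections are typically redrafted several times

output_tokens = pages * words_per_page * tokens_per_word * regenerations
input_tokens = output_tokens * 4   # context (solicitation + library excerpts) dominates
cost = input_tokens / 1000 * IN_RATE + output_tokens / 1000 * OUT_RATE
print(f"~${cost:.2f}")             # lands in the $15-$40 range
```

Fewer regeneration passes or tighter retrieval context pushes the cost toward the low end of the range.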
Anthropic Claude API (Commercial — Non-CUI Only)
~$0.003/1K input tokens, ~$0.015/1K output tokens
Consumption-based API
Claude via the commercial Anthropic API is appropriate for civilian agency proposals, GSA schedule bids, state and local government RFP responses, and any proposal where the solicitation and response content does not contain CUI or export-controlled data. Produces high-quality, coherent long-form prose well-suited to proposal writing. Do not use for DoD CUI or ITAR-adjacent content.
Microsoft SharePoint GCC High
Included
Serves as the proposal content library — storing past proposal sections, boilerplate, corporate capability descriptions, resume library, past performance write-ups, and discriminator statements. The AI pipeline retrieves relevant content from SharePoint as context when generating new proposal sections. Sensitivity labels enforce CUI handling on all proposal documents.
Anthropic Claude API via AWS GovCloud (IL4 — Future)
Contact AWS for GovCloud pricing
Anthropic's Claude models are available via Amazon Bedrock on AWS GovCloud, providing FedRAMP High-authorized access to Claude for CUI environments. As of mid-2025, this is the preferred path for contractors who want Claude's writing quality within a FedRAMP High boundary. Monitor availability and IL4 authorization status — this capability is expanding rapidly.
Vanta (CMMC Compliance)
$15,000–$25,000/year
SaaS annual license for CMMC compliance tracking.
Proposal development systems handling CUI must be within the contractor's CMMC boundary. Vanta tracks the required controls and evidence. The proposal AI system must be added to the SSP as a new information system component. See UC-01.
Adobe Acrobat Pro (Government Edition)
$358/user/year (VIP government pricing)
Required for final proposal formatting, PDF/A compliance (many agencies require PDF/A-1b for proposal submissions), page count verification, and assembling final volumes from AI-drafted sections. Most federal proposal submissions are PDF-only.
Microsoft Word (M365 GCC High)
Included
Primary authoring tool. AI-generated proposal sections are delivered as Markdown or plain text, then pasted into Word proposal templates. The MSP configures standard proposal templates (with correct fonts, margins, headers/footers, and section numbering per solicitation requirements) in the SharePoint GCC High template library.
Prerequisites
- Proposal content library: The engagement begins with a content library build. The MSP must work with the contractor's proposal manager to collect and organize existing proposal content: past technical volumes, management volumes, past performance citations, corporate capability statements, key personnel resumes, and discriminator statements. These are loaded into SharePoint GCC High as the AI's retrieval context. Without this library, AI output will be generic and not representative of the company's actual capabilities.
- Solicitation access: The AI pipeline requires the full solicitation package: RFP (Section L — Instructions, Section M — Evaluation Criteria), Performance Work Statement (PWS) or Statement of Work (SOW), and any applicable attachments. Verify the solicitation's distribution limitations — some solicitations are restricted to cleared personnel or require registration on specific portals (SAM.gov, PIEE, SEAPORT-NXG).
- CMMC boundary confirmation: Confirm that the workstations and SharePoint environment used for proposal development are within the contractor's CMMC boundary. If the contractor does not yet have a CMMC Level 2 or 3 certification, they cannot use government-connected AI tools for CUI proposal content — this is a prerequisite, not something the MSP can remediate within this engagement.
- Win theme and capture strategy inputs: AI drafting without strategic direction produces generic, uncompetitive proposals. Before running any AI generation, obtain the capture manager's win themes (3–5 discriminating reasons why this contractor should win), solution approach summary, teaming partner roles, and price-to-win estimate. These inputs become part of the AI prompt context.
- Section L/M compliance matrix: The proposal manager must provide a completed compliance matrix mapping each Section L instruction to the corresponding proposal section. The AI pipeline uses this matrix to ensure every required element is addressed. The MSP configures this matrix as a structured input to the generation pipeline.
- Key personnel resumes: Many proposals require tailored resumes for key personnel (Program Manager, Technical Lead, etc.) in a specific government format. Collect current resumes and any existing government-formatted versions for the content library.
- IT admin access: Admin access to Azure Government subscription, SharePoint GCC High tenant, and proposal workstations.
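The compliance matrix can be as simple as a list of records. A minimal sketch, with hypothetical field names (the actual schema is whatever the proposal manager and pipeline agree on):

```python
# Minimal compliance-matrix shape; field names are illustrative, not prescribed.
compliance_matrix = [
    {
        "l_ref": "L.4.2.1",
        "instruction": "Describe the offeror's staffing approach.",
        "volume": "II - Management",
        "proposal_section": "2.3",
    },
    {
        "l_ref": "L.4.3.2",
        "instruction": "Describe the quality control plan.",
        "volume": "II - Management",
        "proposal_section": "2.5",
    },
]

# Each Section L reference should map to exactly one proposal section.
sections = [row["proposal_section"] for row in compliance_matrix]
assert len(sections) == len(set(sections)), "duplicate section mapping"
```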
Installation Steps
...
Step 1: Build the Proposal Content Library in SharePoint GCC High
Create and populate the SharePoint proposal content library with the contractor's reusable proposal content, organized for efficient AI retrieval.
# Connect to SharePoint GCC High via PnP PowerShell
Install-Module PnP.PowerShell -Force
Connect-PnPOnline -Url "https://[tenant].sharepoint.us/sites/ProposalLibrary" `
    -Interactive  # interactive auth; -UseWebLogin is deprecated in PnP.PowerShell
# Create document library structure
New-PnPList -Title "ProposalContentLibrary" -Template DocumentLibrary
Add-PnPField -List "ProposalContentLibrary" -DisplayName "ContentType" `
-InternalName "ContentType" -Type Choice `
-Choices "Technical Approach","Management Approach","Past Performance","Resume","Boilerplate","Discriminator","Lessons Learned"
Add-PnPField -List "ProposalContentLibrary" -DisplayName "ContractVehicle" `
-InternalName "ContractVehicle" -Type Text
Add-PnPField -List "ProposalContentLibrary" -DisplayName "AgencyCustomer" `
-InternalName "AgencyCustomer" -Type Text
Add-PnPField -List "ProposalContentLibrary" -DisplayName "ContractValue" `
-InternalName "ContractValue" -Type Currency
Add-PnPField -List "ProposalContentLibrary" -DisplayName "WinLoss" `
-InternalName "WinLoss" -Type Choice `
-Choices "Win","Loss","Pending","No Bid"
Add-PnPField -List "ProposalContentLibrary" -DisplayName "Relevance" `
-InternalName "Relevance" -Type MultiChoice `
-Choices "C4ISR","Logistics","Cybersecurity","Training","Systems Engineering","Program Management","Intelligence","Sustainment"
# Apply CUI sensitivity label to library
# (Via Purview auto-labeling policy — see UC-01 Step 6)
# Label: CUI//PROCURE
# Create folder structure
$folders = @("Technical", "Management", "PastPerformance", "Resumes", "Boilerplate", "Discriminators")
foreach ($folder in $folders) {
    Add-PnPFolder -Name $folder -Folder "ProposalContentLibrary"
}
Write-Host "Proposal content library created. Begin uploading content to SharePoint."

Content library quality is the single biggest determinant of AI proposal output quality. A contractor with 50 well-organized, tagged past proposal sections will get dramatically better AI output than one with a disorganized file share. Budget 2–3 days of content library organization work at project start. Assign a proposal manager (not the MSP) to review and tag all uploaded content — the MSP builds the structure, the contractor populates it with their institutional knowledge.
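To illustrate why tagging matters, here is a small sketch of tag-overlap retrieval scoring. The items and scoring weights are hypothetical; the production pipeline in Step 2 queries SharePoint metadata instead:

```python
# Rank library items by overlap with the solicitation's domain tags,
# with a small bonus for content from winning proposals (illustrative logic).
def score_item(item_tags: set, solicitation_tags: set, won: bool) -> float:
    return len(item_tags & solicitation_tags) + (0.5 if won else 0.0)

library = [  # hypothetical tagged items
    {"title": "C4ISR technical approach", "tags": {"C4ISR", "Systems Engineering"}, "win": True},
    {"title": "Logistics management plan", "tags": {"Logistics"}, "win": False},
]
need = {"C4ISR", "Cybersecurity"}
ranked = sorted(library, key=lambda i: score_item(i["tags"], need, i["win"]), reverse=True)
print(ranked[0]["title"])  # "C4ISR technical approach"
```

Untagged content scores zero against every solicitation, which is exactly how a disorganized file share degrades AI output.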
Step 2: Configure the Proposal Generation Pipeline
Build the Python pipeline that reads solicitation requirements, retrieves relevant content from the SharePoint library, and generates draft proposal sections via Azure OpenAI.
# proposal_generator.py
# Core AI proposal drafting pipeline
from openai import AzureOpenAI
from office365.sharepoint.client_context import ClientContext
from office365.runtime.auth.client_credential import ClientCredential
import os, json
# Azure OpenAI (Government)
aoai_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # *.openai.azure.us
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview",
)
# SharePoint content retrieval
SP_URL = os.environ["SHAREPOINT_URL"]
SP_CLIENT_ID = os.environ["SP_CLIENT_ID"]
SP_CLIENT_SECRET = os.environ["SP_CLIENT_SECRET"]
def retrieve_relevant_content(content_type: str, relevance_tags: list, top_n: int = 5) -> list:
    """Retrieve relevant past proposal content from the SharePoint library."""
    ctx = ClientContext(SP_URL).with_credentials(
        ClientCredential(SP_CLIENT_ID, SP_CLIENT_SECRET)
    )
    library = ctx.web.lists.get_by_title("ProposalContentLibrary")
    query = f"ContentType eq '{content_type}'"
    items = library.items.filter(query).top(top_n).get().execute_query()
    content_chunks = []
    for item in items:
        # Download the file body; truncate to keep the prompt context manageable
        file_bytes = item.file.get_content().execute_query().value
        content = file_bytes.decode("utf-8", errors="ignore")[:3000]
        content_chunks.append({
            "title": item.properties.get("FileLeafRef"),
            "type": item.properties.get("ContentType"),
            "customer": item.properties.get("AgencyCustomer"),
            "win_loss": item.properties.get("WinLoss"),
            "content": content,
        })
    return content_chunks
def generate_proposal_section(
    section_name: str,
    l_instructions: str,
    m_criteria: str,
    win_themes: list,
    solution_summary: str,
    past_content: list,
    page_limit: int = 10,
) -> str:
    """Generate a draft proposal section."""
    past_content_context = "\n\n".join([
        f"--- PAST CONTENT: {c['title']} ({c['customer']}, {c['win_loss']}) ---\n{c['content']}"
        for c in past_content
    ])
    system_prompt = """You are an expert federal proposal writer with 20 years of experience
writing winning proposals for defense contractors. You follow the Shipley proposal methodology.
Your writing is clear, compliant, compelling, and customer-focused.
RULES:
- Address every Section L instruction explicitly
- Use discriminating language that sets this company apart from competitors
- Write in active voice, present tense where appropriate
- Use callout boxes, graphics placeholders [GRAPHIC: description], and action captions
- Never fabricate specific metrics, contract numbers, or technical claims not provided
- Flag sections needing SME input with [SME INPUT NEEDED: topic]
- Mark all output as DRAFT requiring technical and compliance review"""
    user_prompt = f"""Draft the {section_name} section of our proposal.
SECTION L INSTRUCTIONS (what we must address):
{l_instructions}
SECTION M EVALUATION CRITERIA (what we will be scored on):
{m_criteria}
OUR WIN THEMES (weave these throughout):
{chr(10).join(f"- {t}" for t in win_themes)}
SOLUTION SUMMARY:
{solution_summary}
RELEVANT PAST PROPOSAL CONTENT (adapt and improve — do not copy verbatim):
{past_content_context}
CONSTRAINTS:
- Page limit: {page_limit} pages (approximately {page_limit * 500} words)
- Use Level 1 (##) and Level 2 (###) headers matching Section L structure
- Include at least 3 graphics placeholders with descriptive captions
- End each major section with a brief discriminating summary statement
Generate the complete draft section now."""
    response = aoai_client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0.3,  # Slightly higher than factual tasks — proposals need engaging prose
        max_tokens=4000,
    )
    return (
        "[DRAFT — AI GENERATED — REQUIRES SME REVIEW, TECHNICAL ACCURACY CHECK, "
        f"AND PROPOSAL MANAGER APPROVAL]\n\n{response.choices[0].message.content}"
    )

The temperature=0.3 setting balances creative, engaging prose with factual accuracy. Do not set temperature above 0.4 for proposals — hallucinated contract values, fabricated past performance metrics, or invented technical claims can disqualify a proposal or, worse, constitute misrepresentation to the government. Every AI-generated claim must be verified by a human SME before submission.
Step 3: Build the Compliance Matrix Checker
Implement an automated compliance check that verifies every Section L requirement is addressed in the generated proposal before human review.
# compliance_checker.py
# Verifies proposal sections address all Section L requirements
def check_compliance(
    section_l_requirements: list,
    generated_proposal_text: str,
) -> dict:
    """Check that all Section L requirements are addressed in the draft."""
    requirements_formatted = "\n".join([
        f"{i+1}. {req}" for i, req in enumerate(section_l_requirements)
    ])
    compliance_prompt = f"""You are a federal proposal compliance reviewer.
Review the following draft proposal section against the Section L requirements.
For each requirement, determine:
- ADDRESSED: The requirement is clearly addressed in the proposal
- PARTIAL: The requirement is mentioned but not fully addressed
- MISSING: The requirement is not addressed at all
Provide output as JSON:
{{
  "compliance_summary": {{
    "total_requirements": N,
    "addressed": N,
    "partial": N,
    "missing": N,
    "compliance_score_pct": N
  }},
  "requirements": [
    {{
      "requirement_number": 1,
      "requirement_text": "...",
      "status": "ADDRESSED|PARTIAL|MISSING",
      "location_in_proposal": "Section X.X paragraph Y (or 'Not found')",
      "notes": "..."
    }}
  ],
  "critical_gaps": ["List of MISSING requirements that must be addressed before submission"]
}}
SECTION L REQUIREMENTS:
{requirements_formatted}
DRAFT PROPOSAL:
{generated_proposal_text[:8000]}
"""
    response = aoai_client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[{"role": "user", "content": compliance_prompt}],
        temperature=0.0,
        max_tokens=3000,
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
def generate_compliance_report(compliance_result: dict, output_file: str):
    """Generate a human-readable compliance report."""
    score = compliance_result["compliance_summary"]["compliance_score_pct"]
    missing = compliance_result["compliance_summary"]["missing"]
    report = "# Proposal Compliance Report\n\n"
    report += f"**Compliance Score: {score}%**\n"
    report += f"- Addressed: {compliance_result['compliance_summary']['addressed']}\n"
    report += f"- Partial: {compliance_result['compliance_summary']['partial']}\n"
    report += f"- Missing: {missing}\n\n"
    if missing > 0:
        report += "## ⚠️ CRITICAL GAPS — Must Address Before Submission\n"
        for gap in compliance_result.get("critical_gaps", []):
            report += f"- {gap}\n"
        report += "\n"
    report += "## Requirement-by-Requirement Review\n\n"
    for req in compliance_result["requirements"]:
        status_icon = {"ADDRESSED": "✅", "PARTIAL": "⚠️", "MISSING": "❌"}.get(req["status"], "?")
        report += f"{status_icon} **Req {req['requirement_number']}:** {req['requirement_text']}\n"
        report += f"  - Status: {req['status']}\n"
        report += f"  - Location: {req['location_in_proposal']}\n"
        if req.get("notes"):
            report += f"  - Notes: {req['notes']}\n"
        report += "\n"
    with open(output_file, "w") as f:
        f.write(report)
    print(f"Compliance report saved: {output_file}")

Step 4: Past Performance Citation Generator
Build the past performance citation generator that produces standardized CPARS-aligned citations from structured inputs, formatted for common government past performance questionnaires (PPQs) and CPARS narratives.
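The generator expects verified contract facts as structured input. A sketch of the expected shape — every field name and value below is illustrative (the keys must match the prompt template's placeholders, and every value must come from actual contract records, never from AI output):

```python
# Illustrative contract_data record for citation generation.
# All values are placeholders; populate from verified contract records only.
contract_data = {
    "contract_number": "XXXX-XX-X-XXXX",           # placeholder, not a real contract
    "customer": "Example Agency Program Office",   # hypothetical
    "period_of_performance": "2021-2024",
    "contract_value": "$12.4M",                    # hypothetical
    "scope_summary": "Systems engineering and sustainment support for ...",
    "cpars_ratings": "Exceptional (Quality), Very Good (Schedule)",
}
```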
# PP_CITATION_PROMPT: prompt template for CPARS-aligned citations
# (maintained in the SharePoint prompt template library, not hardcoded)

def generate_pp_citation(contract_data: dict, target_word_count: int = 400) -> str:
    response = aoai_client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[{
            "role": "user",
            "content": PP_CITATION_PROMPT.format(
                word_count=target_word_count,
                **contract_data
            ),
        }],
        temperature=0.2,
        max_tokens=1000,
    )
    return response.choices[0].message.content

Step 5: Configure the Proposal Workspace in SharePoint GCC High
Create a dedicated proposal workspace for each active bid, with automated folder structure, version control, and document lifecycle management.
# Create proposal workspace via PnP PowerShell
function New-ProposalWorkspace {
    param(
        [string]$Solicitation,
        [string]$Agency,
        [string]$DueDate,
        [string]$ProposalManager
    )
    $siteName = "Proposal-$($Solicitation -replace '[^a-zA-Z0-9]','')"
    $siteUrl = "https://[tenant].sharepoint.us/sites/$siteName"

    # Create proposal site
    New-PnPSite -Type TeamSiteWithoutMicrosoft365Group `
        -Title "Proposal: $Solicitation" `
        -Url $siteUrl `
        -Owner $ProposalManager
    Connect-PnPOnline -Url $siteUrl -Interactive

    # Create standard folder structure
    $folders = @(
        "00_Solicitation", "01_Capture_Strategy",
        "02_Volume_I_Technical", "03_Volume_II_Management",
        "04_Volume_III_Past_Performance", "05_Cost_Volume",
        "06_AI_Drafts", "07_Review_Comments", "08_Final_Submission"
    )
    foreach ($folder in $folders) {
        Add-PnPFolder -Name $folder -Folder "Documents"
    }

    # Set metadata
    Set-PnPWeb -Title "Proposal: $Solicitation" `
        -Description "Agency: $Agency | Due: $DueDate | PM: $ProposalManager"

    # Apply CUI label to site
    # Via Purview sensitivity label policy (CUI//PROCURE)

    # Add proposal team members (proposal manager + capture manager + volume leads)
    # Add-PnPSiteCollectionAdmin -Owners $ProposalManager

    Write-Host "Proposal workspace created: $siteUrl"
}

New-ProposalWorkspace `
    -Solicitation "FA8621-25-R-0042" `
    -Agency "Air Force Life Cycle Management Center" `
    -DueDate "2025-03-15" `
    -ProposalManager "jsmith@contractor.com"

Custom AI Components
Win Theme Weaver Prompt
Type: Prompt. Analyzes a draft proposal section and scores how effectively each win theme is expressed, then suggests specific edits to strengthen theme reinforcement throughout the section.
Implementation:
Win Theme Weaver
Section M Score Predictor
Type: Prompt. Scores a draft proposal section against the Section M evaluation criteria as a mock evaluator, identifying weaknesses before the proposal goes to color review.
Implementation:
Section M Score Predictor
Testing & Validation
- Content library retrieval test: Submit a test query for each content type (Technical, Management, Past Performance) and verify the pipeline retrieves the 5 most relevant items. Confirm no content from other clients' proposals is accessible (library must be client-specific, not shared across contractors).
- Section L compliance test: Generate a draft technical section for a sample solicitation with 15 Section L requirements. Run the compliance checker and verify it correctly identifies addressed, partial, and missing requirements. Have the proposal manager manually verify the compliance report accuracy.
- Hallucination detection test: Review 3 AI-generated past performance citations for fabricated metrics. Compare against actual contract records provided by the client. Any fabricated numbers (contract values, percentages, dates) must be caught in this test — if the pipeline produces hallucinated data, adjust temperature down and add explicit instructions not to invent metrics.
- CUI boundary test: Verify all proposal content (solicitation documents, AI drafts, past performance data) is stored exclusively in SharePoint GCC High with CUI//PROCURE labels applied. Confirm no proposal content is stored in commercial SharePoint, personal OneDrive, or any non-GCC High location.
- ITAR screening test: For any solicitation involving defense articles or export-controlled technology, verify the client has screened all foreign national personnel before granting access to the proposal workspace. This is the contractor's responsibility, not the MSP's — but the MSP must confirm the access control is in place before go-live.
- Version control test: Make competing edits to a draft section from two different user accounts simultaneously. Verify SharePoint version control correctly tracks both edits without data loss.
- Compliance report formatting test: Verify the compliance report renders correctly in both Word and PDF. The report must be legible and distributable to the proposal manager and color team reviewers without reformatting.
- Page count accuracy test: After generating a draft volume, confirm the word count aligns with the page limit assumption (500 words/page for standard government proposal formatting: 12pt Times New Roman, 1-inch margins, single-spaced).
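The page count check can be scripted with a small helper. This assumes the 500 words/page convention above; adjust for the solicitation's actual format rules:

```python
# Estimate page count from a draft's word count (500 words/page assumption).
def estimated_pages(draft: str, words_per_page: int = 500) -> float:
    return len(draft.split()) / words_per_page

def within_limit(draft: str, page_limit: int, margin: float = 0.9) -> bool:
    # Target ~90% of the limit to leave room for graphics and formatting
    return estimated_pages(draft) <= page_limit * margin

draft = "word " * 4400  # a ~4,400-word draft
print(estimated_pages(draft))   # 8.8
print(within_limit(draft, 10))  # True
```

The 90% margin is a judgment call: graphics placeholders and callout boxes consume space the word count does not capture.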
Client Handoff
Client Handoff Meeting Agenda (90 minutes — Proposal Manager + Capture Manager + IT Lead)
1. Content Library Review (20 min)
- Walk through the SharePoint proposal content library structure
- Demonstrate search and retrieval — show the proposal manager how to find and tag content
- Establish content library maintenance responsibility (proposal manager owns content currency)

- Review the CUI//PROCURE labeling applied to all content
2. Pipeline Demonstration (25 min)
- Live demo: input a real Section L from a past bid → generate a technical section → run compliance check → review output
- Show the win theme weaver and Section M score predictor in action
- Demonstrate past performance citation generation
- Show proposal workspace creation for a new bid
3. Human Review Requirements (15 min)
- Emphasize: AI output is a first draft only — never submit without SME technical review, proposal manager approval, and legal/compliance review
- Review the mandatory review gates (Pink Team, Red Team, Gold Team) — AI accelerates Pink Team drafting, not final submission
- Document the review and approval workflow in the contractor's proposal process
4. ITAR/CUI Compliance (15 min)
- Review which solicitations require Azure Government vs. commercial API
- Confirm the contractor's CMMC boundary includes the proposal system
- Review foreign national access controls for ITAR-adjacent proposals
- Confirm CUI//PROCURE labels are applied to all active proposal workspaces
5. Documentation Handoff
- SharePoint library structure guide and tagging taxonomy
- Pipeline configuration documentation (Azure resource names, environment variables)
- Prompt template library (stored in SharePoint, not hardcoded)
- Proposal workspace creation runbook
- ITAR/CUI decision tree (which platform to use for which solicitation type)
- MSP support contact card
Maintenance
Monthly Tasks
- Review Azure OpenAI consumption costs. A spike may indicate the pipeline is being used inefficiently (excessively large context, repeated regeneration) — review usage logs and optimize.
- Verify SharePoint GCC High content library is being maintained — check for stale content (proposals older than 3 years should be archived, not deleted, per records requirements).
- Monitor for Azure OpenAI model version updates that may affect output quality or prompt behavior.
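The stale-content check in the monthly review can be scripted. A sketch using the 3-year guideline above (the cutoff and dates are illustrative):

```python
from datetime import datetime, timedelta

# Flag items older than 3 years for archival (archive, never delete,
# per records-retention requirements).
def needs_archive(last_modified: datetime, now: datetime, years: int = 3) -> bool:
    return (now - last_modified) > timedelta(days=365 * years)

now = datetime(2025, 6, 1)
print(needs_archive(datetime(2021, 5, 1), now))  # True
print(needs_archive(datetime(2024, 1, 1), now))  # False
```

In practice the `last_modified` values would come from the SharePoint library's Modified column.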
Quarterly Tasks
- Content library review with proposal manager: identify gaps (new capabilities not represented, recent wins not uploaded), tag new content, archive outdated content.
- Prompt template review: update system prompts to reflect current Shipley methodology updates, recent evaluation feedback from debriefs, or new agency-specific requirements.
- Review win rates on proposals using AI assistance vs. prior baseline. This is the primary ROI metric — present to the client's business development leadership.
Annual Tasks
- CMMC boundary review: confirm the proposal system remains correctly scoped in the SSP. If new proposal types, new teaming partners, or new contract vehicles have been added, update the boundary documentation.
- Prompt template competitive refresh: review recent published Source Selection Decision Documents (SSDDs) and debrief feedback to update prompt templates with current evaluator preferences and adjectival rating patterns.
Alternatives
...
Anthropic Claude via AWS Bedrock GovCloud (CUI-Authorized)
As noted above, Claude models on Amazon Bedrock GovCloud provide FedRAMP High-authorized LLM capability with Claude's strong long-form writing quality. Best for: Contractors already on AWS GovCloud, or those who prefer Claude's prose style over GPT-5.4 for proposal writing. Tradeoffs: Requires AWS GovCloud account; less native Microsoft 365 integration than Azure OpenAI.
Loopio / RFPIO (RFP Response Platforms)
Commercial proposal response platforms (Loopio, RFPIO/Responsive) provide AI-assisted RFP response with built-in content libraries, workflow management, and SME review routing. Best for: Contractors with high volume of commercial RFPs and task order responses (IDIQ task orders, GSA schedule responses). Tradeoffs: Not FedRAMP authorized — cannot be used for CUI proposal content. Appropriate for unclassified civilian agency work only. No custom AI pipeline required — these are SaaS platforms the contractor uses directly.
Govly (Government Contract Intelligence)
Govly provides AI-powered government contracting intelligence including solicitation monitoring, opportunity matching, and teaming partner discovery. Complements the proposal drafting pipeline by automating the front-end of the capture process. Best for: Contractors who want AI assistance across the full business development lifecycle (pipeline development, capture, proposal). Tradeoffs: Separate SaaS subscription ($15,000–$30,000/year); not a replacement for the drafting pipeline, but a valuable complement.
On-Premises LLM (Air-Gapped — Classified-Adjacent)
For contractors working on classified or ITAR-restricted programs where cloud connectivity is prohibited, deploy an on-premises LLM (Llama 3 70B or similar) on internal GPU servers. Best for: Prime contractors with dedicated AI infrastructure teams working on Top Secret/SCI-adjacent programs where no commercial cloud is acceptable. Tradeoffs: $100,000–$500,000+ upfront hardware investment; requires ML engineering staff; output quality lower than GPT-5.4 or Claude. Only justified for contractors with sustained high-value classified proposal volume.