
Implementation Guide: Analyze RFP Requirements Against Incumbent Capabilities — Bid/No-Bid & Gap Analysis

Step-by-step implementation guide for deploying AI to analyze RFP requirements against incumbent capabilities, with bid/no-bid and gap analysis, for Government & Defense clients.

Software Procurement

Microsoft Azure OpenAI Service (Azure Government)


Microsoft Azure Government

GPT-5.4: ~$0.005/1K input, ~$0.015/1K output. Full bid/no-bid analysis (100-page RFP): ~$10–$25 per analysis.

Required for solicitations involving CUI//PROCURE data or DFARS-covered programs. See UC-01 for provisioning.

SAM.gov API (Beta)

$0

Provides programmatic access to federal contract opportunities, awards, and entity registrations. Used to automatically retrieve solicitation packages when published, monitor for amendments, and track award outcomes.

GovWin IQ (Market Intelligence — Optional)


Deltek GovWin IQ, SaaS annual subscription

$15,000–$40,000/year depending on tier

Government market intelligence platform providing solicitation tracking, incumbent identification, competitor analysis, agency spending data, and industry relationship mapping. Complements the AI analysis pipeline — GovWin provides the market context (who has the incumbent contract, what did they win it for, what is the agency's budget history), while the AI pipeline analyzes the specific RFP requirements. Not required but significantly enhances analysis quality.

Microsoft Power BI (GCC)


Microsoft Power BI Pro, Qty: per BD/capture team member

$20/user/month (GCC)

Business development pipeline dashboard showing active opportunities, bid/no-bid decisions, win probability scores, and analysis status. Pulls data from the SharePoint opportunity tracking list. Provides leadership with real-time visibility into the opportunity pipeline without requiring them to dig through SharePoint.

Microsoft SharePoint GCC High


Microsoft, included in M365 GCC High

Included

Stores the corporate capability library (past performance, key personnel, technical capabilities, certifications), opportunity tracking list, and AI-generated bid/no-bid analyses. CUI//PROCURE sensitivity labels applied to all solicitation-related content.

Prerequisites

  • Corporate capability library: The bid/no-bid analysis quality is directly proportional to the richness of the capability library the AI can draw on. Before deployment, work with the BD and capture team to document: (a) technical capability areas (with specifics — not just "cybersecurity" but specific frameworks, tool experience, clearance levels held); (b) past performance contracts with NAICS codes, award values, agency customers, and scope summaries; (c) key personnel with relevant experience and clearances; (d) current certifications (CMMC, ISO, CMMI, FedRAMP ATOs, etc.); (e) current contract vehicles (GWACs, MACs, BPAs). This library is the AI's knowledge of the company.
  • SAM.gov API key and entity registration: The contractor must have an active SAM.gov registration (required for federal contracting). The API key is registered under the same entity. Confirm the UEI (Unique Entity Identifier) is current and the registration is not expired.
  • Opportunity qualification criteria baseline: Before configuring the AI analysis, work with the BD director to define the company's baseline opportunity qualification criteria: minimum contract value threshold, preferred NAICS codes, acceptable agencies, geographic constraints, clearance level requirements, and incumbent preference (prefer incumbent/recompete, avoid cold-start, etc.). These criteria become part of the automated screening prompt.
  • DD Form 254 clearance confirmation: For any solicitation requiring a facility clearance (FCL) or requiring personnel with specific clearances, confirm the contractor holds the required FCL level before the opportunity enters the analysis pipeline. The AI cannot assess whether classified program requirements are met without cleared personnel reviewing the classified portions of the solicitation.
  • IT admin access: Azure Government subscription, SAM.gov API key, SharePoint GCC High, Power BI GCC.
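To make the capability-library prerequisite concrete, the analysis engine works best when the library is stored as structured JSON. The sketch below shows one plausible shape; all field names, contract numbers, and values are illustrative, not a required schema.

```python
# capability_library.py
# Illustrative structure for the corporate capability library.
# Every field name and value here is an example only -- adapt to
# your own SharePoint schema and real past-performance records.

CAPABILITY_LIBRARY = {
    "summary": "Mid-size IT services contractor; cybersecurity and cloud focus",
    "technical_capabilities": [
        {"area": "Cybersecurity",
         "specifics": ["NIST 800-171 assessments", "Splunk", "Zero Trust architecture"],
         "clearance_level": "Top Secret"},
    ],
    "past_performance": [
        # Hypothetical contract record for illustration
        {"contract": "W52P1J-XX-C-0000", "agency": "US Army", "naics": "541512",
         "value": 12_000_000, "scope": "Enterprise IT modernization"},
    ],
    "key_personnel": [
        {"role": "Program Manager", "clearance": "TS/SCI", "years_experience": 15},
    ],
    "certifications": ["CMMC Level 2", "ISO 9001", "CMMI-DEV ML3"],
    "contract_vehicles": ["GSA MAS", "SEWP V"],
}

REQUIRED_SECTIONS = ["technical_capabilities", "past_performance",
                     "key_personnel", "certifications", "contract_vehicles"]

def validate_library(library: dict) -> list:
    """Return the top-level sections that are missing or empty."""
    return [s for s in REQUIRED_SECTIONS if not library.get(s)]
```

Running `validate_library` during onboarding catches an empty or partially built library before it silently degrades analysis quality.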

Installation Steps

Step 1: Configure the SAM.gov Solicitation Monitor

Build the automated pipeline that monitors SAM.gov for new solicitations matching the contractor's profile and triggers analysis when a qualifying opportunity is published.

sam_monitor.py
python
# sam_monitor.py
# Monitors SAM.gov API for new solicitations matching contractor
# qualification criteria

import requests
import os
import json
import datetime
import time

SAM_API_KEY = os.environ["SAM_GOV_API_KEY"]
SAM_BASE_URL = "https://api.sam.gov/opportunities/v2"

# Contractor opportunity qualification criteria
QUALIFICATION_CRITERIA = {
    "naics_codes": ["541511", "541512", "541513", "541519", "541330", "541690"],
    "min_value": 1_000_000,  # $1M minimum
    "max_value": 500_000_000,  # $500M maximum
    "set_aside_preferences": ["SBA", "SDVOSB", "WOSB", "EDWOSB", "8A", "NONE"],
    "excluded_agencies": [],  # Add agencies to exclude from monitoring
    "required_keywords": [],  # Optional: keywords that must appear in description
    "excluded_keywords": ["construction", "janitorial", "food service"],
    "notice_types": ["Solicitation", "Pre-Solicitation", "Sources Sought", "Special Notice"],
    "posted_within_days": 1  # Check for opportunities posted in the last day
}

def search_new_opportunities(criteria: dict) -> list:
    """Search SAM.gov for new opportunities matching criteria."""
    headers = {"X-Api-Key": SAM_API_KEY}

    cutoff_date = (datetime.datetime.now() - datetime.timedelta(
        days=criteria["posted_within_days"]
    )).strftime("%m/%d/%Y")

    opportunities = []

    for naics in criteria["naics_codes"]:
        params = {
            "naics": naics,
            "postedFrom": cutoff_date,
            "postedTo": datetime.datetime.now().strftime("%m/%d/%Y"),
            "ptype": ",".join(["o", "p", "k", "r"]),  # Solicitation, Pre-Sol, Sources Sought
            "limit": 100,
            "offset": 0
        }

        resp = requests.get(f"{SAM_BASE_URL}/search",
                            headers=headers, params=params, timeout=30)
        if resp.status_code == 200:
            data = resp.json()
            opps = data.get("opportunitiesData", [])

            for opp in opps:
                # Basic filtering
                title_lower = opp.get("title", "").lower()
                desc_lower = opp.get("description", "").lower()

                # Exclude if excluded keywords present
                if any(kw in title_lower or kw in desc_lower
                       for kw in criteria["excluded_keywords"]):
                    continue

                # Check required keywords if specified
                if criteria["required_keywords"]:
                    if not any(kw in title_lower or kw in desc_lower
                               for kw in criteria["required_keywords"]):
                        continue

                opportunities.append({
                    "notice_id": opp.get("noticeId"),
                    "title": opp.get("title"),
                    "agency": opp.get("department"),
                    "sub_agency": opp.get("subtier"),
                    "naics": naics,
                    "type": opp.get("type"),
                    "posted_date": opp.get("postedDate"),
                    "response_date": opp.get("responseDeadLine"),
                    "set_aside": opp.get("typeOfSetAsideDescription", "None"),
                    "place_of_performance": opp.get("placeOfPerformance", {}).get("city", {}).get("name", ""),
                    "sol_number": opp.get("solicitationNumber"),
                    "description": opp.get("description", "")[:2000],
                    "sam_url": f"https://sam.gov/opp/{opp.get('noticeId')}/view"
                })

        time.sleep(0.5)  # Rate limit compliance

    # Deduplicate by notice_id
    seen = set()
    unique_opps = []
    for opp in opportunities:
        if opp["notice_id"] not in seen:
            seen.add(opp["notice_id"])
            unique_opps.append(opp)

    return unique_opps


def download_solicitation_documents(notice_id: str) -> list:
    """Download attachments for a specific opportunity."""
    headers = {"X-Api-Key": SAM_API_KEY}
    resp = requests.get(
        f"{SAM_BASE_URL}/{notice_id}",
        headers=headers, timeout=30
    )
    if resp.status_code != 200:
        return []

    opp_data = resp.json()
    attachments = []

    for resource in opp_data.get("resourceLinks", []):
        url = resource.get("url", "")
        name = resource.get("name", "")
        if url:
            attachments.append({"name": name, "url": url})

    return attachments
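The monitor's results also feed the daily digest email referenced under Maintenance. A minimal plain-text formatter, assuming the opportunity dicts produced by `search_new_opportunities()` above (the layout is a sketch; adapt it to your mail pipeline):

```python
# daily_digest.py
# Formats the daily digest email body from search_new_opportunities() results.

def build_daily_digest(opportunities: list) -> str:
    """Render new qualifying opportunities as a plain-text digest."""
    if not opportunities:
        return "No new qualifying opportunities today."
    lines = [f"{len(opportunities)} new qualifying opportunities:\n"]
    # Soonest response deadline first
    for opp in sorted(opportunities, key=lambda o: o.get("response_date") or ""):
        lines.append(
            f"- [{opp.get('type', 'N/A')}] {opp.get('title', 'Untitled')}\n"
            f"  Agency: {opp.get('agency', 'N/A')} | NAICS: {opp.get('naics', 'N/A')}\n"
            f"  Response due: {opp.get('response_date', 'TBD')} | "
            f"Set-aside: {opp.get('set_aside', 'None')}\n"
            f"  {opp.get('sam_url', '')}"
        )
    return "\n".join(lines)
```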

Step 2: Build the Bid/No-Bid Analysis Engine

Generate a structured bid/no-bid analysis from the solicitation document and capability library.

bid_no_bid_analyzer.py
python
# bid_no_bid_analyzer.py

from openai import AzureOpenAI
import os, json

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview"
)

BID_NO_BID_SYSTEM_PROMPT = """You are a defense contractor capture management specialist
with expertise in federal procurement and BD strategy. Analyze government solicitations
to produce rigorous, data-driven bid/no-bid recommendations.

RULES:
- Base all assessments strictly on the solicitation document and capability library provided
- Do not fabricate competitor intelligence or past performance data not in the inputs
- Be direct — if the company lacks a critical capability, say so clearly
- Win probability estimates must be accompanied by specific rationale
- Flag any assumptions with [ASSUMPTION]
- Flag any data gaps that would improve the analysis with [DATA NEEDED]"""

def generate_bid_no_bid_analysis(
    opportunity: dict,
    solicitation_text: str,
    capability_library: dict
) -> str:

    capability_summary = json.dumps(capability_library, indent=2)
    opp_summary = json.dumps(opportunity, indent=2)

    analysis_prompt = f"""Analyze this federal opportunity and produce a complete bid/no-bid assessment.

OPPORTUNITY METADATA:
{opp_summary}

SOLICITATION DOCUMENT (key sections):
{solicitation_text[:6000]}

COMPANY CAPABILITY LIBRARY:
{capability_summary}

Generate a complete Bid/No-Bid Analysis Report with the following sections:

## EXECUTIVE SUMMARY
- Recommended Decision: BID / NO BID / CONDITIONAL BID
- Win Probability Estimate: X% (with confidence: High/Medium/Low)
- One-sentence rationale for recommendation

## OPPORTUNITY PROFILE
- Estimated contract value and type (FFP/CPFF/T&M/IDIQ)
- Period of performance
- Key requirements summary (5 bullets)
- Set-aside status and implications
- Required clearances
- Place of performance
- Key evaluation factors (from Section M) ranked by weight

## COMPANY FIT ASSESSMENT
Score each dimension 1-5 (1=Poor, 5=Excellent):
- Technical capability match: [score] — [rationale]
- Past performance relevance: [score] — [rationale with specific past contracts]
- Key personnel availability: [score] — [rationale]
- Security clearance alignment: [score] — [rationale]
- Contract vehicle alignment: [score] — [rationale]
- Geographic/place of performance match: [score] — [rationale]
- Incumbent advantage/disadvantage: [score] — [rationale]

Overall Fit Score: [average]/5

## CAPABILITY GAPS
For each gap identified:
- Gap description
- Impact on win probability (High/Medium/Low)
- Mitigation options: [teaming partner, hire, partnership, etc.]

## COMPETITIVE LANDSCAPE
- Likely incumbent (if identifiable from solicitation language or FPDS)
- Likely competitors based on requirement profile
- Our competitive differentiators
- Competitor advantages we must overcome

## WIN STRATEGY ELEMENTS (if BID recommended)
- Top 3 discriminating win themes
- Teaming recommendations (capabilities to acquire via teaming)
- Pricing strategy guidance (FFP competitiveness, T&M rate considerations)
- Key risks and mitigations

## CAPTURE ACTIONS REQUIRED
If BID: List specific next actions with owners and due dates
If NO BID: Brief rationale and flag for future pipeline if requirements evolve

## QUESTIONS FOR GOVERNMENT
List the top 5 questions to submit during any industry day or Q&A period

[DRAFT — REQUIRES CAPTURE MANAGER REVIEW AND BD DIRECTOR APPROVAL BEFORE GATE REVIEW]"""

    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[
            {"role": "system", "content": BID_NO_BID_SYSTEM_PROMPT},
            {"role": "user", "content": analysis_prompt}
        ],
        temperature=0.2,
        max_tokens=4000
    )

    return response.choices[0].message.content


def score_opportunity_fit(opportunity: dict, capability_library: dict) -> dict:
    """Quick-score an opportunity for pipeline triage before full analysis."""

    score_prompt = f"""Score this opportunity for a quick pipeline triage (1 minute analysis).

OPPORTUNITY: {json.dumps(opportunity)}
CAPABILITIES: {json.dumps(capability_library.get('summary', capability_library))[:2000]}

Return JSON only:
{{
  "quick_score": [1-10],
  "go_no_go": "GO | NO-GO | INVESTIGATE",
  "top_reasons_go": ["reason 1", "reason 2"],
  "top_reasons_no_go": ["reason 1", "reason 2"],
  "recommended_next_action": "Full analysis | Pass | Watch | Industry Day only"
}}"""

    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[{"role": "user", "content": score_prompt}],
        temperature=0.0,
        max_tokens=400,
        response_format={"type": "json_object"}
    )

    return json.loads(response.choices[0].message.content)
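The quick-score JSON from `score_opportunity_fit()` can then be routed to a pipeline action with simple threshold logic. The thresholds below are illustrative assumptions; tune them with the BD director:

```python
# triage_router.py
# Routes a quick-score result from score_opportunity_fit() to a
# pipeline action. Thresholds are illustrative, not prescriptive.

def route_triage_result(quick: dict, days_to_deadline: int) -> str:
    """Map a quick-score payload plus deadline runway to a next action."""
    decision = quick.get("go_no_go", "INVESTIGATE")
    score = quick.get("quick_score", 0)
    if decision == "NO-GO":
        return "archive"
    if days_to_deadline < 10:
        # Too little runway for a credible proposal, regardless of fit
        return "watch"
    if decision == "GO" and score >= 7:
        return "full_analysis"
    return "assign_analyst"
```

Routing keeps the expensive full bid/no-bid analysis reserved for opportunities that survive triage.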

Step 3: Build the Section L/M Compliance Matrix Generator

Automatically generate a compliance matrix from the solicitation's Section L instructions and Section M evaluation criteria — a critical proposal planning tool.

compliance_matrix_generator.py
python
# compliance_matrix_generator.py

import os
import json
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_KEY"],
    api_version="2024-08-01-preview"
)

def generate_lm_compliance_matrix(
    section_l_text: str,
    section_m_text: str,
    sow_pws_text: str
) -> dict:
    """Generate a compliance matrix from RFP Sections L and M."""

    matrix_prompt = f"""Extract and structure all proposal requirements from the following
RFP sections into a compliance matrix.

SECTION L — INSTRUCTIONS TO OFFERORS:
{section_l_text[:3000]}

SECTION M — EVALUATION CRITERIA:
{section_m_text[:2000]}

SOW/PWS — TECHNICAL REQUIREMENTS:
{sow_pws_text[:2000]}

Generate a JSON compliance matrix with this structure:
{{
  "volumes": [
    {{
      "volume_name": "Technical",
      "page_limit": "X pages or 'Not specified'",
      "requirements": [
        {{
          "req_id": "L-1.1",
          "source": "Section L paragraph reference",
          "requirement_text": "exact or paraphrased requirement",
          "proposal_section": "suggested proposal section heading",
          "evaluation_criterion": "matching Section M criterion if applicable",
          "evaluation_weight": "percentage or 'Not specified'",
          "compliance_method": "how this will be addressed (narrative/table/chart/attachment)",
          "page_estimate": "estimated pages",
          "owner": "TBD — assign to volume lead",
          "status": "Not Started"
        }}
      ]
    }}
  ],
  "key_attachments_required": ["list of required attachments, forms, certifications"],
  "submission_requirements": {{
    "format": "electronic/paper/both",
    "delivery_method": "SAM.gov/email/hand delivery",
    "copies_required": "N",
    "due_date": "extract from Section L",
    "page_limits_summary": "summary of all volume page limits"
  }},
  "evaluation_criteria_summary": [
    {{
      "criterion": "criterion name",
      "weight_or_order": "percentage or order of importance",
      "description": "how the government will evaluate"
    }}
  ]
}}"""

    response = client.chat.completions.create(
        model=os.environ["AZURE_OPENAI_DEPLOYMENT"],
        messages=[{"role": "user", "content": matrix_prompt}],
        temperature=0.0,
        max_tokens=4000,
        response_format={"type": "json_object"}
    )

    return json.loads(response.choices[0].message.content)


def export_matrix_to_markdown(matrix: dict) -> str:
    """Export compliance matrix as a Markdown table for SharePoint storage."""
    md = "# Proposal Compliance Matrix\n\n"
    md += "[DRAFT — REQUIRES PROPOSAL MANAGER REVIEW]\n\n"

    for volume in matrix.get("volumes", []):
        md += f"## {volume['volume_name']} Volume\n"
        md += f"**Page Limit:** {volume.get('page_limit', 'TBD')}\n\n"
        md += "| Req ID | Requirement | Eval Criterion | Method | Pages | Owner | Status |\n"
        md += "|--------|------------|----------------|--------|-------|-------|--------|\n"

        for req in volume.get("requirements", []):
            md += f"| {req.get('req_id', 'TBD')} | {req.get('requirement_text', '')[:60]}... | "
            md += f"{req.get('evaluation_criterion', 'N/A')} | "
            md += f"{req.get('compliance_method', 'TBD')} | "
            md += f"{req.get('page_estimate', 'TBD')} | "
            md += f"{req.get('owner', 'TBD')} | "
            md += f"{req.get('status', 'Not Started')} |\n"

        md += "\n"

    return md
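`generate_lm_compliance_matrix()` expects Section L, Section M, and SOW/PWS text as separate inputs. A naive splitter for UCF-style solicitation headings is sketched below; real RFP formatting varies widely, so treat the regex as a starting point, not a parser:

```python
# rfp_section_splitter.py
# Splits a full RFP document on UCF-style headings ("SECTION L",
# "SECTION M", ...) before calling generate_lm_compliance_matrix().
# Assumes one heading per line; real solicitations vary.

import re

SECTION_PATTERN = re.compile(
    r"^\s*SECTION\s+([A-M])\b[^\n]*$", re.MULTILINE | re.IGNORECASE
)

def split_rfp_sections(full_text: str) -> dict:
    """Return {section letter: section text} for each UCF heading found."""
    matches = list(SECTION_PATTERN.finditer(full_text))
    sections = {}
    for i, m in enumerate(matches):
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(full_text)
        sections[m.group(1).upper()] = full_text[start:end].strip()
    return sections
```

Usage: `sections = split_rfp_sections(rfp_text)` then pass `sections.get("L", "")` and `sections.get("M", "")` to the matrix generator.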

Step 4: Configure the Opportunity Pipeline Dashboard in Power BI

Set up the Power BI GCC dashboard that gives BD and capture leadership real-time visibility into the opportunity pipeline.

Power BI GCC Dashboard Configuration

Data Source

  • DATA SOURCE: SharePoint GCC High — Opportunity Tracker List
  • Fields: NoticeID, Title, Agency, NAICS, PostedDate, ResponseDate, SetAside, EstValue, QuickScore, BidNoBid, WinProbability, Status, CaptureManager, ProposalManager, AnalysisComplete, LastUpdated
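Before the dashboard can show anything, each opportunity must land in the SharePoint tracker list. A sketch of the mapping from a SAM.gov monitor record to the list fields above (the internal field names are assumptions; match them to the actual list schema):

```python
# sharepoint_item_mapper.py
# Maps a SAM.gov opportunity record to the SharePoint Opportunity
# Tracker list fields consumed by the Power BI dashboard.
# Field internal names are assumed to match the list above.

def to_tracker_item(opp: dict, quick_score: int,
                    status: str = "INVESTIGATE") -> dict:
    """Build the field payload for a new tracker list item."""
    return {
        "NoticeID": opp.get("notice_id"),
        "Title": opp.get("title"),
        "Agency": opp.get("agency"),
        "NAICS": opp.get("naics"),
        "PostedDate": opp.get("posted_date"),
        "ResponseDate": opp.get("response_date"),
        "SetAside": opp.get("set_aside", "None"),
        "QuickScore": quick_score,
        "Status": status,
        "AnalysisComplete": False,
    }
```

The payload can then be written to the list via whatever SharePoint integration the client already uses (Power Automate, Graph API, etc.).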

Page 1: Pipeline Overview

  • KPI Card: Active opportunities (BID decision made, in pursuit)
  • KPI Card: Opportunities in analysis (INVESTIGATE status)
  • KPI Card: Total pipeline value ($)
  • KPI Card: Weighted pipeline value ($ × Win Probability)
  • KPI Card: Proposals submitted (last 90 days)
  • KPI Card: Win rate (last 12 months)
  • Funnel chart: Identified → Analyzed → BID → Proposal Submitted → Awarded
  • Bar chart: Pipeline value by agency (top 10)
  • Scatter chart: Win probability vs. contract value (bubble = days to response deadline)
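The weighted pipeline value KPI above is simply estimated value multiplied by win probability, summed over opportunities with a BID decision. A sketch using the tracker field names listed under Data Source:

```python
# pipeline_metrics.py
# Computes the weighted pipeline value KPI from tracker records.
# Assumes WinProbability is stored as a fraction (0.0-1.0).

def weighted_pipeline_value(opportunities: list) -> float:
    """Sum of EstValue x WinProbability over opportunities marked BID."""
    return sum(
        o.get("EstValue", 0) * o.get("WinProbability", 0)
        for o in opportunities
        if o.get("BidNoBid") == "BID"
    )
```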

Page 2: Opportunity Analysis Status

  • Table: All opportunities in INVESTIGATE status — Columns: Title, Agency, Response Date, Quick Score, Days Remaining, Analyst
  • Color coding: Green (>30 days), Yellow (15–30 days), Red (<15 days)
  • Filters: By NAICS, by agency, by analyst

Page 3: Bid/No-Bid Decision Record

  • Table: All BID/NO-BID decisions (last 12 months) — Columns: Title, Agency, Decision, Win Probability, Estimated Value, Decision Date
  • Win/Loss tracker (for awarded opportunities)
  • NO-BID reason analysis (pie chart by reason category)

Page 4: Agency Analysis

  • Agency spending history (linked to USASpending.gov data)
  • Past performance with each agency
  • Incumbent contractor analysis

Refresh Schedule

  • SharePoint data: Every 4 hours (Power BI scheduled refresh)
  • Manual refresh: Available via Power BI Service (GCC)

Access Control

  • BD team: Full read access to all pages
  • Capture managers: Read access + write access to their opportunity records (via Power Apps form)
  • Leadership: Read access to Page 1 (Pipeline Overview) only
  • All access controlled via Azure AD GCC High security groups

Custom AI Components

Incumbent Analysis Prompt

Type: Prompt

Analyzes solicitation language to identify signals of the likely incumbent contractor and assess the strength of the incumbent's advantage.

Implementation:

text
SYSTEM PROMPT:
You are a federal procurement intelligence analyst. Analyze the following solicitation
for signals of incumbent contractor advantage.

Look for these indicators:
1. REQUIREMENTS SPECIFICITY — Are requirements written around very specific existing systems, tools, or processes that suggest an incumbent designed them?
2. TRANSITION REQUIREMENTS — Is a long transition period required (suggests complex existing work)?
3. KEY PERSONNEL — Do key personnel requirements match a specific person's resume profile?
4. PAST PERFORMANCE — Does the past performance requirement description match only one contractor's specific experience?
5. AWARD HISTORY — Does the solicitation reference a prior contract number (check USASpending.gov)?
6. LANGUAGE PATTERNS — Are there unusual technical terms, abbreviations, or trademarked tool names that suggest an incumbent's terminology?

For each signal found:
- Describe the signal
- Rate incumbent advantage strength: Strong / Moderate / Weak / None
- Identify the likely incumbent if determinable
- Estimate impact on non-incumbent win probability

Conclude with an Incumbent Advantage Score (1-10, where 10 = impossible for non-incumbent)
and specific strategies to overcome the incumbent's advantage.

SOLICITATION TEXT:
{solicitation_text}
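A thin wrapper can assemble this prompt into the chat payload used by the Step 2 client. The system prompt constant below is abbreviated, and the truncation limit is an assumption to keep within context limits:

```python
# incumbent_analysis.py
# Builds the chat messages payload for the incumbent-analysis prompt.
# INCUMBENT_SYSTEM_PROMPT is abbreviated here; it should hold the full
# prompt text from this section.

INCUMBENT_SYSTEM_PROMPT = (
    "You are a federal procurement intelligence analyst. Analyze the following "
    "solicitation for signals of incumbent contractor advantage."
    # ... remaining indicator list and scoring instructions from the prompt above
)

def build_incumbent_messages(solicitation_text: str,
                             max_chars: int = 12_000) -> list:
    """Truncate the solicitation and assemble the chat messages payload."""
    return [
        {"role": "system", "content": INCUMBENT_SYSTEM_PROMPT},
        {"role": "user",
         "content": f"SOLICITATION TEXT:\n{solicitation_text[:max_chars]}"},
    ]
```

The returned list can be passed directly as `messages` to the same `client.chat.completions.create()` call used in Step 2.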

Testing & Validation

Client Handoff

Handoff Meeting Agenda (60 minutes — BD Director + Capture Managers + IT Lead)

1. SAM.gov monitor demonstration (10 min)
2. Bid/no-bid analysis demonstration (20 min)
3. Pipeline dashboard review (15 min)
4. Process integration and governance (15 min)
5. Documentation handoff

1. SAM.gov monitor demonstration (10 min)

  • Show live opportunity feed for the client's NAICS codes
  • Review the qualification criteria configured and confirm they match BD strategy
  • Demonstrate the daily digest email format

2. Bid/no-bid analysis demonstration (20 min)

  • Run a live analysis on a current active opportunity
  • Walk through each section of the output with the capture manager
  • Show the Section L/M compliance matrix generation

3. Pipeline dashboard review (15 min)

  • Walk through all four Power BI dashboard pages
  • Confirm data is flowing correctly from SharePoint
  • Demonstrate the opportunity entry/update process (via Power Apps form or directly in SharePoint list)

4. Process integration and governance (15 min)

  • Confirm where AI analysis fits in the BD gate review process (before or at Gate 1 — Opportunity Identification)
  • Review the CUI//PROCURE handling requirements for solicitation documents
  • Establish the BD director's approval requirement before any NO BID decision is finalized

5. Documentation handoff

Maintenance

Daily Tasks (Automated)

  • SAM.gov opportunity monitor runs automatically at 07:00 ET
  • Daily digest email sent to BD team with new qualifying opportunities

Weekly Tasks

  • BD director reviews daily digest hits and assigns analyst for INVESTIGATE opportunities
  • Pipeline dashboard reviewed in weekly BD pipeline review meeting

Monthly Tasks

  • Review SAM.gov API key status and rate limit usage
  • Purge opportunities older than 180 days past response deadline from active pipeline (archive to SharePoint)
  • Azure OpenAI consumption review

Quarterly Tasks

  • Review and update qualification criteria with BD director (NAICS codes, value thresholds, agency preferences)
  • Update capability library with new past performance contracts, new personnel, new certifications
  • Win rate analysis: compute win rate for AI-recommended BID decisions vs. prior baseline

Annual Tasks

  • Full capability library refresh — work with BD and technical leadership to update capability descriptions, remove stale content, add new service lines
  • Prompt template competitive refresh — update bid/no-bid and incumbent analysis prompts based on lessons learned from debrief feedback and win/loss analysis

Alternatives

Govly (Integrated Opportunity Intelligence Platform)


Govly provides AI-powered opportunity monitoring, teaming partner discovery, and pipeline management as an integrated SaaS platform. Less customizable than the Azure OpenAI pipeline but easier to deploy with no infrastructure required. Best for: Smaller contractors (under 200 employees) who want a managed SaaS solution rather than a custom pipeline. Tradeoffs: Not FedRAMP authorized — appropriate for unclassified civilian solicitations only. Limited customization of analysis depth vs. the custom pipeline.

Deltek GovWin + Costpoint Integration (Full BD Platform)

GovWin provides the richest government market intelligence available, and integrating it with a custom analysis pipeline enables much deeper incumbent analysis (actual award history, spending trends, teaming relationships). Best for: Mid-to-large prime contractors with dedicated capture management staff. Tradeoffs: GovWin subscription ($15,000–$40,000/year) on top of the Azure pipeline cost. Requires GovWin API access (additional cost) for automated data retrieval.

Bloomberg Government (Federal Market Intelligence)
