
Implementation Guide: Identify major gift prospects based on giving history and engagement signals
A step-by-step guide for deploying AI to identify major gift prospects based on giving history and engagement signals, written for MSPs serving nonprofit organization clients.
Hardware Procurement
Staff Workstation - Development Officer / Prospect Researcher
$950 per unit MSP cost via TechSoup or Dell nonprofit channel / $1,200 suggested resale
Primary workstations for gift officers and prospect researchers to access AI dashboards, CRM, and prospect reports simultaneously. 16GB RAM recommended for running browser-based CRM, AI platform, and Power BI dashboards concurrently.
External Monitor - Dual Display Setup
Dell P2425H 24-inch FHD IPS Monitor
$180 per unit MSP cost / $250 suggested resale
Dual-monitor setup for each workstation allows gift officers to view AI prospect scores and donor profiles on one screen while working in the CRM on the other. Two monitors per workstation, two workstations.
Monitor Stand / Dual Arm Mount
Dual Monitor Stand
$30 per unit MSP cost / $50 suggested resale
Ergonomic dual-monitor mounting for each workstation to maximize desk space and viewing comfort for prospect researchers working extended sessions.
Software Procurement
DonorSearch AI (DSAi) - Enhanced CORE
$1,695/year (Tier 1 annual contract); $1,000/year for INN member organizations
Primary AI prospect identification platform. Analyzes 800+ data points per donor against the world's largest philanthropic database to generate major gift propensity scores with 81% accuracy. Enhanced CORE is the entry-level AI tier ideal for small-to-mid-size nonprofits new to predictive analytics.
DonorSearch AI (DSAi) - Full Custom Predictive Modeling
$3,000–$8,000/year (custom quote based on database size and features)
Full-featured AI platform for mid-to-large nonprofits. Custom predictive models trained on the organization's specific data patterns. Includes advanced customization, deeper scoring granularity, and enhanced reporting. Use this tier for clients with 5,000+ donor records.
Dataro ProspectAI
$250/month (Pro Plan); $499/month (Core platform); Enterprise custom-quoted
Alternative or complementary AI prospect identification platform. Generates propensity scores for major giving, single giving, regular giving, and lapse prediction. Best for organizations with 10,000+ records and 5+ years of giving history. Predictions refresh weekly. Lower entry cost than DonorSearch.
Blackbaud Raiser's Edge NXT + Prospect Insights
$4,000–$10,000+/year for mid-sized nonprofit (CRM + Prospect Insights add-on); pricing is custom-quoted
If client already uses Raiser's Edge NXT, Prospect Insights is the native AI add-on that delivers AI-driven major gift prospect recommendations and prescriptive prioritizations directly within the CRM. Eliminates integration complexity. Prospect Insights Pro (launched Nov 2023) adds planned giving identification.
Salesforce Nonprofit Cloud - Enterprise Edition
10 free Enterprise licenses via Power of Us Program; additional licenses at $60/user/month (80% discount from standard pricing)
Alternative CRM backbone for nonprofits preferring Salesforce ecosystem. Einstein AI features provide donor engagement summaries and predictive scoring. Integrates natively with DonorSearch and Dataro via pre-built connectors. Most customizable long-term option.
DonorPerfect
$89–$299/month depending on plan and record count
Budget-friendly CRM for small nonprofits. Includes native DonorSearch integration for AI screening. Suitable for organizations under 2,000 donors.
Bloomerang
$125/month (scales by record count)
Mid-tier CRM with built-in donor retention analytics and native DonorSearch integration. Good fit for nonprofits prioritizing donor retention alongside major gift prospecting.
Microsoft 365 Business Basic (Nonprofit)
Free for up to 300 licenses for eligible nonprofits; 75% discount on premium plans
Collaboration suite (Teams, SharePoint, Exchange Online) for nonprofit staff. SharePoint used for storing prospect research reports and sharing AI-generated prospect lists across the development team.
Microsoft Azure Credits (Nonprofit Grant)
$2,000/year in Azure credits for eligible nonprofits (free)
Covers Azure hosting for Power BI Service, Azure Functions for data pipeline automation, and any supplementary cloud compute for custom data processing workflows.
Microsoft Power BI Pro
$10/user/month ($120/year); nonprofit discounts available via Microsoft 365 E5 bundle
Dashboard visualization platform for creating executive-level fundraising dashboards showing AI prospect scores, pipeline value, gift officer assignments, and portfolio performance. Desktop version is free; Pro license needed for sharing dashboards across the organization.
Mailchimp Standard (Nonprofit Discount)
~$13–$20/month for Standard plan (15% nonprofit discount available)
Email marketing platform for targeted outreach to AI-identified prospect segments. Syncs donor segments from CRM to enable personalized cultivation campaigns for major gift prospects.
Prerequisites
Installation Steps
Step 1: Conduct Data Quality Audit & Gap Analysis
Before any platform deployment, perform a comprehensive audit of the client's existing CRM data. Export a full donor dataset and analyze it for completeness, accuracy, and consistency. This step determines data readiness and identifies cleanup work required before the AI platform can generate reliable propensity scores. Poor data quality is the #1 cause of failed AI prospect identification projects.
Install-Module -Name SKYAPIPowerShell -Scope CurrentUser
Connect-SKYAPI -ClientId 'YOUR_CLIENT_ID' -ClientSecret 'YOUR_CLIENT_SECRET' -RedirectUri 'https://localhost'
$donors = Get-SKYAPIConstituent -Limit 10000
$donors | Export-Csv -Path 'C:\DataAudit\donor_export.csv' -NoTypeInformation
java -cp dataloader.jar -Dsalesforce.config.dir=conf process csvExtractDonors
pip install pandas numpy
python data_quality_audit.py --input donor_export.csv --output audit_report.html
Document all findings in a Data Quality Audit Report. Typical issues found: 20-40% of records missing email addresses, 10-15% duplicate records, inconsistent gift type coding, missing engagement data. This report becomes the basis for Phase 2 cleanup work and sets client expectations. Allocate 8-16 hours for this step depending on database size.
Step 2: Execute Data Cleanup & Standardization
Based on the audit findings, systematically clean and standardize the donor database. This includes de-duplication, address standardization via NCOA (National Change of Address), email validation, deceased record suppression, and standardization of gift type and fund designation codes. This is typically the longest and most labor-intensive phase.
pip install requests pandas
python validate_emails.py --input donor_export.csv --api_key 'YOUR_NEVERBOUNCE_KEY' --output validated_emails.csv
Budget 20-60 hours for data cleanup depending on database size and quality. For databases over 10,000 records, consider using a specialized nonprofit data hygiene service like TrueGivers ($0.02-0.05/record) or Blackbaud's Target Analytics data enrichment. Do NOT skip this step: AI models trained on dirty data produce unreliable prospect scores. Document all changes in a Data Cleanup Log for client records.
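The `validate_emails.py` helper invoked above is not reproduced elsewhere in this guide; the sketch below shows one way to structure it. The NeverBounce endpoint URL and `result` response field are assumptions based on their public v4 API, and the `email` column name assumes the standardized export from Step 1 — confirm both before spending verification credits.

```python
#!/usr/bin/env python3
"""Sketch of validate_emails.py: cheap local format check first,
then paid API verification only for plausible addresses."""
import argparse
import re
import sys

EMAIL_RE = re.compile(r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$')

def is_plausible_email(value):
    """Local format check; filters junk before any API spend."""
    return isinstance(value, str) and bool(EMAIL_RE.match(value.strip()))

def check_email(email, api_key):
    """Single-address verification (assumed NeverBounce v4 request shape)."""
    import requests  # imported here so the format check works without the dependency
    resp = requests.get('https://api.neverbounce.com/v4/single/check',
                        params={'key': api_key, 'email': email}, timeout=30)
    return resp.json().get('result', 'unknown')  # e.g. valid | invalid | unknown

def main():
    import pandas as pd
    parser = argparse.ArgumentParser()
    parser.add_argument('--input', required=True)
    parser.add_argument('--api_key', required=True)
    parser.add_argument('--output', default='validated_emails.csv')
    args = parser.parse_args()
    df = pd.read_csv(args.input)
    df['email_format_ok'] = df['email'].map(is_plausible_email)
    ok = df['email_format_ok']
    # Only plausible addresses are sent to the paid verification API.
    df.loc[ok, 'email_result'] = df.loc[ok, 'email'].map(
        lambda e: check_email(e, args.api_key))
    df.to_csv(args.output, index=False)

if __name__ == '__main__' and len(sys.argv) > 1:
    main()
```

The two-stage design (free regex filter, then paid verification) typically cuts API cost 20-40% on uncleaned nonprofit databases.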
Step 3: Register for Nonprofit Technology Discounts
Before procuring software, ensure the client is registered with TechSoup, Microsoft Nonprofit, and Salesforce Power of Us (if applicable) to access significant discounts. TechSoup members save an average of $17,000 over the course of their membership. Microsoft provides up to 300 free M365 licenses and $2,000/year in Azure credits.
TechSoup validation is a prerequisite for Microsoft and many other vendor nonprofit programs. Start this process in Week 1 as it can take up to 2 weeks for full validation. If the client already has TechSoup membership, verify it is current and that the organization profile is up to date. Keep copies of all registration confirmations in the project documentation folder.
Step 4: Provision AI Prospect Identification Platform
Set up the primary AI platform account. For most engagements, DonorSearch AI Enhanced CORE is the recommended starting point. For clients with 10,000+ records and mature data, Dataro is a strong alternative. Contact the vendor's sales team to initiate the subscription, negotiate nonprofit pricing, and obtain API credentials.
DonorSearch AI Setup
Dataro Setup (Alternative)
DonorSearch typically requires a signed annual contract. Request a pilot period or proof-of-concept scoring on a subset of records if available. Dataro offers month-to-month billing which reduces client commitment risk. For Blackbaud clients already using Raiser's Edge NXT, inquire about bundled Prospect Insights pricing—it may be more cost-effective than a separate AI platform. Keep all API keys in the MSP's secure credential vault (e.g., IT Glue, Hudu, or Passportal).
Step 5: Configure CRM-to-AI Platform Integration
Establish the data connection between the client's CRM and the AI platform. This is the critical technical step that enables donor data to flow from the CRM into the AI scoring engine and prospect scores to flow back. The integration method depends on the CRM platform.
Raiser's Edge NXT + DonorSearch Integration
Salesforce NPSP + DonorSearch Integration
Salesforce NPSP + Dataro Integration
Dataro provides a free pre-built Salesforce connector.
DonorPerfect + DonorSearch Integration
The initial data sync can take 2-24 hours depending on database size. Do NOT run the initial sync during business hours as it may impact CRM performance. Schedule for overnight or weekend. Verify the sync by comparing record counts: CRM active donors vs. AI platform ingested records. A discrepancy of more than 2% indicates a mapping or filtering issue that must be resolved before proceeding. Document the field mapping in the project runbook.
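The 2% record-count check described above can be scripted so it is applied the same way on every engagement. A minimal sketch (function names are ours, not from any vendor SDK):

```python
def sync_discrepancy_pct(crm_count: int, ai_count: int) -> float:
    """Percent difference between CRM and AI-platform record counts,
    relative to the CRM count (treated as the source of truth)."""
    if crm_count <= 0:
        raise ValueError('CRM record count must be positive')
    return abs(crm_count - ai_count) / crm_count * 100

def sync_is_healthy(crm_count: int, ai_count: int,
                    tolerance_pct: float = 2.0) -> bool:
    """True when the post-sync discrepancy is within tolerance (2% per the
    runbook guidance above)."""
    return sync_discrepancy_pct(crm_count, ai_count) <= tolerance_pct
```

For example, 9,850 ingested records against 10,000 CRM records is a 1.5% discrepancy and passes; 9,700 is 3.0% and should block go-live until the mapping or filter issue is found.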
Step 6: Configure AI Model Parameters & Scoring Thresholds
Configure the AI platform's scoring model to align with the client's specific fundraising program. Define what constitutes a 'major gift' for this organization, set scoring thresholds for prospect tiers, and configure which engagement signals should influence the propensity model. This step requires close collaboration with the client's Director of Development.
DonorSearch AI Dashboard
Dataro
The major gift threshold varies dramatically by organization size. A small community nonprofit may define major gifts at $500, while a university foundation sets it at $25,000+. ALWAYS confirm this with the client's development leadership before configuring. DonorSearch's model is largely pre-built and uses their philanthropic database for scoring; Dataro trains a custom model on the client's own data. For Dataro, the initial model training requires at least 50 historical major gifts to have sufficient training data for reliable predictions.
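Tier assignment is normally configured inside the AI platform itself, but a small reference implementation makes the threshold conversation with development leadership concrete. The cutoffs below are illustrative placeholders only — as noted above, always set them with the client, never unilaterally:

```python
def assign_tier(propensity_score: float, tiers=None) -> str:
    """Map a 0-100 propensity score to a prospect tier.
    Default cutoffs are illustrative, not recommendations; agree on real
    values with the client's Director of Development in Step 6."""
    tiers = tiers or [(90, 'Tier 1'), (75, 'Tier 2'), (50, 'Tier 3')]
    for cutoff, label in sorted(tiers, reverse=True):
        if propensity_score >= cutoff:
            return label
    return 'Tier 4'  # everyone below the lowest cutoff
```

Keeping the cutoffs in one data structure means the same function can score a $500-threshold community nonprofit and a $25,000-threshold university foundation with different configurations.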
Step 7: Build Prospect Dashboard in Power BI
Create a visual dashboard that presents AI-generated prospect scores, pipeline metrics, and gift officer portfolio views in an accessible format for the development team. This dashboard becomes the daily working tool for the major gifts program and the primary deliverable the client's leadership sees.
Power BI Desktop is free; Power BI Pro license ($10/user/month) is required for sharing dashboards via the Power BI Service. If the client has Microsoft 365 E5 (available at nonprofit pricing), Power BI Pro is included. For clients who cannot afford Power BI Pro licenses, create a shared Excel workbook on SharePoint as a simpler alternative. The dashboard should be reviewed and refined during the validation phase based on client feedback. Export a PDF version weekly for leadership who prefer static reports.
Step 8: Configure Automated Alerts & Workflow Triggers
Set up automated notifications that alert gift officers when high-scoring prospects are identified, when donor behavior changes significantly (e.g., a mid-level donor's score jumps into Tier 1), or when a prospect hasn't been contacted within the defined cultivation timeline. This ensures AI insights translate into timely action.
Power Automate Flow for New High-Score Prospect Alert
CRM Workflow for Stale Prospect Follow-up (Raiser's Edge NXT)
Salesforce Workflow (if applicable)
Automated alerts are essential for adoption—without them, staff will forget to check the AI dashboard regularly. Start with a simple email alert for Tier 1 prospects and expand to more sophisticated workflows after the client team is comfortable. Be careful not to create alert fatigue: limit to 2-5 high-priority alerts per week. Configure a weekly digest email summarizing all score changes as a secondary notification.
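If the client opts for a scripted pipeline rather than Power Automate, the alert-fatigue guidance above (2-5 immediate alerts, everything else in a weekly digest) can be sketched as follows; structures and names here are illustrative assumptions, not a vendor API:

```python
from datetime import datetime

def select_alerts(tier1_prospects, max_alerts=5):
    """Cap immediate alerts to avoid alert fatigue; the remainder rolls
    into the weekly digest. Expects dicts with 'name' and 'score' keys."""
    ranked = sorted(tier1_prospects, key=lambda p: p['score'], reverse=True)
    return ranked[:max_alerts], ranked[max_alerts:]

def weekly_digest(score_changes):
    """Plain-text digest of tier movements: (name, old_tier, new_tier),
    where Tier 1 is the highest, so a smaller number is an upgrade."""
    lines = [f'Score changes for week of {datetime.now():%Y-%m-%d}:']
    for name, old, new in score_changes:
        direction = 'UP' if new < old else 'DOWN'
        lines.append(f'  {name}: Tier {old} -> Tier {new} ({direction})')
    return '\n'.join(lines)
```

The digest string can be handed to the same SMTP helper used by the sync monitor later in this guide.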
Step 9: Validate AI Scoring Accuracy with Development Staff
Before going live, conduct a structured validation session with the client's development team. Present the AI-generated prospect list to experienced gift officers and prospect researchers who have institutional knowledge. Compare AI recommendations against their professional judgment to identify false positives, missed prospects, and scoring anomalies. This is critical for building staff trust in the system.
- Send validation results to AI vendor for model tuning if needed:
- DonorSearch: Contact support@donorsearch.net with validation findings
- Dataro: Use in-app feedback mechanism to flag scoring issues
This validation step is the make-or-break moment for client adoption. If gift officers don't trust the AI scores, they won't use the system. Emphasize the 'surprise' prospects — donors the AI identified as high potential that staff hadn't noticed. These concrete wins build credibility. If agreement rate is below 60%, investigate data quality issues before proceeding. Common causes: outdated wealth data, missing engagement signals, or misclassified gift types in the CRM.
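The 60% agreement threshold above can be computed directly from the validation session worksheets. A minimal sketch, assuming both lists hold CRM constituent IDs:

```python
def agreement_rate(ai_tier1_ids, officer_flagged_ids) -> float:
    """Fraction of AI Tier 1 prospects that experienced gift officers
    also flag as viable. Per the guidance above, below ~0.60 investigate
    data quality before proceeding to go-live."""
    ai = set(ai_tier1_ids)
    if not ai:
        return 0.0
    return len(ai & set(officer_flagged_ids)) / len(ai)
```

For example, if officers independently endorse 3 of the AI's 4 Tier 1 picks, the rate is 0.75 and the engagement can proceed; the unendorsed pick may still be a legitimate 'surprise' prospect worth discussing.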
Step 10: Conduct Staff Training & Go-Live
Deliver comprehensive training to all staff who will interact with the AI prospect identification system. Training should cover: interpreting AI scores, using the dashboard, integrating AI insights into gift officer workflows, understanding what the AI can and cannot do, and maintaining data quality going forward. Document everything for future reference.
Training Session Agenda (2-3 hours)
Module 1: Understanding AI Prospect Scoring (30 min)
- What the AI model analyzes (giving history, engagement, wealth indicators)
- How propensity scores are calculated (high-level, non-technical)
- What scores mean: Tier 1/2/3/4 definitions and recommended actions
- Limitations: AI is a tool, not a replacement for relationship knowledge
Module 2: Using the Dashboard (45 min)
- Live demo: Power BI dashboard navigation
- Filtering by score range, geography, gift officer
- Exporting prospect lists for meeting preparation
- Reading the engagement heatmap
Module 3: Workflow Integration (45 min)
- Daily routine: Check new high-score alerts (email + dashboard)
- Weekly routine: Review portfolio changes and score movements
- Monthly routine: Portfolio review meeting using AI pipeline data
- How to log contact reports in CRM to improve future AI scoring
Module 4: Data Stewardship (30 min)
- Why data quality matters for AI accuracy
- How to properly enter new gifts, update addresses, log interactions
- Process for flagging incorrect AI scores or data errors
- Annual data hygiene procedures
Deliver Training Materials
- Record training session (Zoom/Teams recording)
- Create quick-reference PDF: 'AI Prospect Dashboard Cheat Sheet'
- Create 1-page workflow guide: 'Gift Officer Daily/Weekly AI Checklist'
- Store all docs in shared SharePoint/Google Drive folder
Schedule training no more than 1 week before go-live to maximize retention. Provide a recorded session for staff who miss training or join later. The single most important training outcome is that gift officers understand the daily alert email and know how to act on it. Over-training on technical details reduces adoption; focus on practical workflows. Plan a 30-minute follow-up Q&A session 2 weeks after go-live to address questions that arise from real usage.
Step 11: Configure Compliance & Data Governance Controls
Implement data governance controls to ensure the AI prospect identification system complies with applicable privacy laws and organizational policies. This is especially critical for nonprofits operating in Colorado, Oregon, Delaware, Maryland, Minnesota, or New Jersey where nonprofit exemptions from state privacy laws are limited or nonexistent.
76% of nonprofits lack an AI governance policy. Providing this as part of your implementation creates significant value and differentiates your MSP. For nonprofits in Colorado and Oregon specifically, conduct a detailed compliance review as these states do not exempt nonprofits from their consumer privacy laws. Consider engaging a privacy attorney for clients with significant operations in these states. The AI acceptable use policy template should be delivered as a Word document the client can adopt with minimal modification.
Custom AI Components
Donor Data Quality Audit Script
Type: skill
A Python script that analyzes a CSV export of donor data from any CRM and generates a comprehensive data quality report. The report identifies missing fields, duplicate records, invalid emails, inconsistent gift coding, and overall data readiness for AI prospect scoring. This is run during Phase 1 to assess the client's data before platform deployment.
Implementation
#!/usr/bin/env python3
"""Donor Data Quality Audit Script for AI Prospect Identification Projects
Usage: python data_quality_audit.py --input donor_export.csv --output audit_report.html
"""
import pandas as pd
import numpy as np
import argparse
import re
from datetime import datetime, timedelta
from collections import Counter
def load_data(filepath):
df = pd.read_csv(filepath, low_memory=False, encoding='utf-8-sig')
df.columns = [c.strip().lower().replace(' ', '_') for c in df.columns]
return df
def detect_column_mapping(df):
"""Auto-detect common CRM column names and map to standard fields."""
mappings = {
'donor_id': ['constituent_id', 'donor_id', 'id', 'contact_id', 'account_number', 'record_id'],
'first_name': ['first_name', 'first', 'fname', 'given_name'],
'last_name': ['last_name', 'last', 'lname', 'surname', 'family_name'],
'email': ['email', 'email_address', 'primary_email', 'e-mail'],
'phone': ['phone', 'phone_number', 'primary_phone', 'home_phone', 'mobile'],
'address': ['address', 'street', 'address_line_1', 'mailing_address', 'street_address'],
'city': ['city', 'mailing_city'],
'state': ['state', 'st', 'mailing_state', 'province'],
'zip': ['zip', 'zip_code', 'postal_code', 'zipcode', 'mailing_zip'],
'total_giving': ['total_giving', 'total_gifts', 'lifetime_giving', 'total_donated', 'cumulative_giving'],
'last_gift_date': ['last_gift_date', 'last_donation_date', 'most_recent_gift', 'latest_gift_date'],
'last_gift_amount': ['last_gift_amount', 'last_donation_amount', 'most_recent_gift_amount'],
'first_gift_date': ['first_gift_date', 'first_donation_date', 'earliest_gift_date'],
'donor_status': ['status', 'donor_status', 'constituent_status', 'record_status'],
'deceased': ['deceased', 'is_deceased', 'deceased_flag'],
'do_not_contact': ['do_not_contact', 'dnc', 'solicit_flag', 'no_contact']
}
result = {}
for standard, options in mappings.items():
for opt in options:
if opt in df.columns:
result[standard] = opt
break
return result
def analyze_completeness(df, col_map):
results = []
for field, col in col_map.items():
total = len(df)
non_null = df[col].notna().sum()
        non_empty = df[col].fillna('').astype(str).str.strip().replace('', np.nan).notna().sum()  # fillna first so NaN is not counted as the string 'nan'
pct_complete = round((non_empty / total) * 100, 1)
results.append({'Field': field, 'Column': col, 'Total_Records': total,
'Populated': non_empty, 'Missing': total - non_empty,
'Completeness_%': pct_complete,
'Status': 'GOOD' if pct_complete >= 90 else 'WARNING' if pct_complete >= 70 else 'CRITICAL'})
return pd.DataFrame(results)
def detect_duplicates(df, col_map):
dupe_checks = []
if 'first_name' in col_map and 'last_name' in col_map:
name_col = [col_map['first_name'], col_map['last_name']]
if 'zip' in col_map:
name_col.append(col_map['zip'])
dupes = df[df.duplicated(subset=name_col, keep=False)]
dupe_checks.append({'Method': 'Name + ZIP', 'Duplicate_Records': len(dupes),
'Duplicate_Groups': len(dupes.drop_duplicates(subset=name_col)),
'Pct_of_Total': round((len(dupes)/len(df))*100, 1)})
if 'email' in col_map:
email_col = col_map['email']
email_dupes = df[df[email_col].notna() & df.duplicated(subset=[email_col], keep=False)]
dupe_checks.append({'Method': 'Email Address', 'Duplicate_Records': len(email_dupes),
'Duplicate_Groups': len(email_dupes.drop_duplicates(subset=[email_col])),
'Pct_of_Total': round((len(email_dupes)/len(df))*100, 1)})
return pd.DataFrame(dupe_checks)
def validate_emails(df, col_map):
if 'email' not in col_map:
return {'valid': 0, 'invalid': 0, 'missing': len(df)}
email_col = col_map['email']
emails = df[email_col].dropna().astype(str)
pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
valid = emails.str.match(pattern).sum()
invalid = len(emails) - valid
missing = len(df) - len(emails)
return {'valid': int(valid), 'invalid': int(invalid), 'missing': int(missing)}
def analyze_giving_history(df, col_map):
results = {}
if 'last_gift_date' in col_map:
dates = pd.to_datetime(df[col_map['last_gift_date']], errors='coerce')
now = datetime.now()
results['donors_gave_last_12mo'] = int((dates > now - timedelta(days=365)).sum())
results['donors_gave_last_36mo'] = int((dates > now - timedelta(days=1095)).sum())
results['donors_lapsed_3plus_years'] = int((dates <= now - timedelta(days=1095)).sum())
results['earliest_gift_on_file'] = str(dates.min().date()) if dates.notna().any() else 'N/A'
results['years_of_history'] = round((now - dates.min()).days / 365.25, 1) if dates.notna().any() else 0
if 'total_giving' in col_map:
giving = pd.to_numeric(df[col_map['total_giving']], errors='coerce')
results['total_donors_with_gifts'] = int(giving[giving > 0].count())
results['median_lifetime_giving'] = round(float(giving[giving > 0].median()), 2)
results['mean_lifetime_giving'] = round(float(giving[giving > 0].mean()), 2)
results['donors_above_1000'] = int((giving >= 1000).sum())
results['donors_above_5000'] = int((giving >= 5000).sum())
results['donors_above_10000'] = int((giving >= 10000).sum())
return results
def ai_readiness_score(completeness_df, dupe_df, giving_stats, total_records):
score = 100
deductions = []
# Record count
if total_records < 500:
score -= 30
deductions.append('CRITICAL: Fewer than 500 records (minimum for DonorSearch)')
elif total_records < 10000:
score -= 10
deductions.append('WARNING: Fewer than 10,000 records (Dataro recommends 10K+ for best results)')
# Data completeness
for _, row in completeness_df.iterrows():
if row['Status'] == 'CRITICAL':
score -= 10
deductions.append(f"CRITICAL: {row['Field']} only {row['Completeness_%']}% complete")
elif row['Status'] == 'WARNING':
score -= 5
deductions.append(f"WARNING: {row['Field']} only {row['Completeness_%']}% complete")
# Duplicates
if not dupe_df.empty and dupe_df['Pct_of_Total'].max() > 15:
score -= 15
deductions.append(f"CRITICAL: {dupe_df['Pct_of_Total'].max()}% duplicate records detected")
elif not dupe_df.empty and dupe_df['Pct_of_Total'].max() > 5:
score -= 5
deductions.append(f"WARNING: {dupe_df['Pct_of_Total'].max()}% duplicate records detected")
# Giving history depth
years = giving_stats.get('years_of_history', 0)
if years < 3:
score -= 20
deductions.append(f'CRITICAL: Only {years} years of giving history (3-5 years recommended)')
elif years < 5:
score -= 5
deductions.append(f'WARNING: Only {years} years of giving history (5+ years ideal for Dataro)')
return max(0, min(100, score)), deductions
def generate_html_report(completeness_df, dupe_df, email_stats, giving_stats, readiness_score, deductions, total_records):
readiness_color = '#27ae60' if readiness_score >= 80 else '#f39c12' if readiness_score >= 60 else '#e74c3c'
html = f"""<!DOCTYPE html><html><head><title>Donor Data Quality Audit Report</title>
<style>body{{font-family:Arial,sans-serif;margin:40px;}}table{{border-collapse:collapse;width:100%;margin:20px 0;}}
th,td{{border:1px solid #ddd;padding:8px;text-align:left;}}th{{background:#2c3e50;color:white;}}
.good{{color:#27ae60;font-weight:bold;}}.warning{{color:#f39c12;font-weight:bold;}}.critical{{color:#e74c3c;font-weight:bold;}}
.score-box{{font-size:48px;color:{readiness_color};font-weight:bold;text-align:center;padding:20px;border:3px solid {readiness_color};border-radius:10px;width:200px;margin:20px auto;}}
h1{{color:#2c3e50;}}h2{{color:#34495e;border-bottom:2px solid #3498db;padding-bottom:5px;}}</style></head><body>
<h1>Donor Data Quality Audit Report</h1>
<p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}</p>
<p>Total Records Analyzed: <strong>{total_records:,}</strong></p>
<h2>AI Readiness Score</h2>
<div class='score-box'>{readiness_score}/100</div>
<ul>{''.join(f'<li class="{"critical" if "CRITICAL" in d else "warning"}">{d}</li>' for d in deductions)}</ul>
<h2>Field Completeness</h2>{completeness_df.to_html(index=False, classes='completeness')}
<h2>Duplicate Analysis</h2>{dupe_df.to_html(index=False) if not dupe_df.empty else '<p>No duplicate analysis possible with available columns.</p>'}
<h2>Email Validation</h2><ul><li>Valid: {email_stats['valid']:,}</li><li>Invalid format: {email_stats['invalid']:,}</li><li>Missing: {email_stats['missing']:,}</li></ul>
<h2>Giving History Analysis</h2><ul>{''.join(f'<li><strong>{k.replace("_"," ").title()}</strong>: {v}</li>' for k,v in giving_stats.items())}</ul>
<h2>Recommendations</h2><ol>
<li>{'De-duplicate records before AI platform ingestion' if not dupe_df.empty and dupe_df['Pct_of_Total'].max() > 5 else 'Duplicate rate is acceptable'}</li>
<li>{'Validate and update email addresses using NeverBounce or ZeroBounce' if email_stats['invalid'] > 0 else 'Email data looks clean'}</li>
<li>{'Run NCOA address update to ensure mailing addresses are current' if 'address' in completeness_df['Field'].values and completeness_df[completeness_df['Field'] == 'address']['Completeness_%'].values[0] < 90 else 'Address data looks sufficient'}</li>
<li>Recommended AI Platform: {'DonorSearch Enhanced CORE (500-9,999 records)' if total_records < 10000 else 'Dataro or DonorSearch AI Full (10,000+ records)'}</li>
</ol></body></html>"""
return html
def main():
parser = argparse.ArgumentParser(description='Donor Data Quality Audit for AI Prospect Identification')
parser.add_argument('--input', required=True, help='Path to donor CSV export')
parser.add_argument('--output', default='audit_report.html', help='Output HTML report path')
args = parser.parse_args()
print(f'Loading data from {args.input}...')
df = load_data(args.input)
print(f'Loaded {len(df):,} records with {len(df.columns)} columns')
col_map = detect_column_mapping(df)
print(f'Detected column mappings: {col_map}')
completeness = analyze_completeness(df, col_map)
duplicates = detect_duplicates(df, col_map)
emails = validate_emails(df, col_map)
giving = analyze_giving_history(df, col_map)
score, deductions = ai_readiness_score(completeness, duplicates, giving, len(df))
html = generate_html_report(completeness, duplicates, emails, giving, score, deductions, len(df))
with open(args.output, 'w') as f:
f.write(html)
print(f'Report generated: {args.output}')
print(f'AI Readiness Score: {score}/100')
if __name__ == '__main__':
    main()
CRM-to-AI Platform Sync Monitor
Type: integration
A Power Automate flow (or Python cron job) that monitors the data sync between the CRM and the AI prospect identification platform. It checks daily that record counts match, identifies sync failures, and alerts the MSP if data freshness exceeds 48 hours. This ensures prospect scores remain current and catches integration breakdowns before the client notices.
Implementation:
#!/usr/bin/env python3
"""CRM-to-AI Platform Sync Health Monitor
Schedule via cron: 0 7 * * * /usr/bin/python3 /opt/msp/sync_monitor.py
Or run as Azure Function with Timer Trigger.
"""
import requests
import json
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime, timedelta
import os
# Configuration - store in environment variables or Azure Key Vault
CONFIG = {
'crm_type': os.getenv('CRM_TYPE', 'raiser_edge_nxt'), # raiser_edge_nxt | salesforce | donorperfect
'crm_api_url': os.getenv('CRM_API_URL', 'https://api.sky.blackbaud.com/constituent/v1'),
'crm_api_key': os.getenv('CRM_API_KEY'),
'crm_oauth_token': os.getenv('CRM_OAUTH_TOKEN'),
'ai_platform': os.getenv('AI_PLATFORM', 'donorsearch'), # donorsearch | dataro
'ai_api_url': os.getenv('AI_API_URL'),
'ai_api_key': os.getenv('AI_API_KEY'),
'smtp_server': os.getenv('SMTP_SERVER', 'smtp.office365.com'),
'smtp_port': int(os.getenv('SMTP_PORT', '587')),
'smtp_user': os.getenv('SMTP_USER'),
'smtp_pass': os.getenv('SMTP_PASS'),
'alert_recipients': os.getenv('ALERT_RECIPIENTS', 'msp-alerts@yourmsp.com').split(','),
'client_name': os.getenv('CLIENT_NAME', 'Nonprofit Client'),
'sync_freshness_hours': int(os.getenv('SYNC_FRESHNESS_HOURS', '48')),
'record_count_tolerance_pct': float(os.getenv('RECORD_TOLERANCE_PCT', '2.0')),
}
def get_crm_record_count():
"""Get active donor count from CRM."""
if CONFIG['crm_type'] == 'raiser_edge_nxt':
headers = {
'Bb-Api-Subscription-Key': CONFIG['crm_api_key'],
'Authorization': f"Bearer {CONFIG['crm_oauth_token']}"
}
resp = requests.get(f"{CONFIG['crm_api_url']}/constituents?limit=1", headers=headers)
return resp.json().get('count', 0)
elif CONFIG['crm_type'] == 'salesforce':
headers = {'Authorization': f"Bearer {CONFIG['crm_oauth_token']}"}
query = 'SELECT COUNT() FROM Contact WHERE npe01__Donor__c = true'
        resp = requests.get(f"{CONFIG['crm_api_url']}/query", params={'q': query}, headers=headers)  # params= handles URL-encoding of the SOQL query
return resp.json().get('totalSize', 0)
elif CONFIG['crm_type'] == 'donorperfect':
headers = {'Authorization': f"Bearer {CONFIG['crm_api_key']}"}
resp = requests.get(f"{CONFIG['crm_api_url']}/donors?count_only=true", headers=headers)
return resp.json().get('count', 0)
return 0
def get_ai_platform_status():
"""Get record count and last sync timestamp from AI platform."""
headers = {'Authorization': f"Bearer {CONFIG['ai_api_key']}"}
if CONFIG['ai_platform'] == 'donorsearch':
resp = requests.get(f"{CONFIG['ai_api_url']}/account/sync-status", headers=headers)
data = resp.json()
return {'record_count': data.get('total_records', 0),
'last_sync': data.get('last_sync_timestamp', ''),
'sync_status': data.get('status', 'unknown')}
elif CONFIG['ai_platform'] == 'dataro':
resp = requests.get(f"{CONFIG['ai_api_url']}/v1/integration/status", headers=headers)
data = resp.json()
return {'record_count': data.get('synced_contacts', 0),
'last_sync': data.get('last_prediction_run', ''),
'sync_status': data.get('integration_status', 'unknown')}
return {'record_count': 0, 'last_sync': '', 'sync_status': 'unknown'}
def check_sync_health():
    issues = []
    crm_count = get_crm_record_count()
    ai_status = get_ai_platform_status()
    ai_count = ai_status['record_count']
    # Check record count discrepancy
    if crm_count > 0 and ai_count > 0:
        discrepancy = abs(crm_count - ai_count) / crm_count * 100
        if discrepancy > CONFIG['record_count_tolerance_pct']:
            issues.append(f"RECORD COUNT MISMATCH: CRM has {crm_count:,} records, AI platform has {ai_count:,} ({discrepancy:.1f}% discrepancy, threshold is {CONFIG['record_count_tolerance_pct']}%)")
    elif ai_count == 0:
        issues.append(f"CRITICAL: AI platform reports 0 records. CRM has {crm_count:,}. Sync may be broken.")
    # Check sync freshness
    if ai_status['last_sync']:
        last_sync = datetime.fromisoformat(ai_status['last_sync'].replace('Z', '+00:00'))
        hours_since = (datetime.now(last_sync.tzinfo) - last_sync).total_seconds() / 3600
        if hours_since > CONFIG['sync_freshness_hours']:
            issues.append(f"STALE DATA: Last sync was {hours_since:.0f} hours ago (threshold: {CONFIG['sync_freshness_hours']} hours). Last sync: {last_sync.strftime('%Y-%m-%d %H:%M')}")
    else:
        issues.append('WARNING: Unable to determine last sync timestamp.')
    # Check sync status
    if ai_status['sync_status'] not in ['active', 'healthy', 'connected', 'ok']:
        issues.append(f"SYNC STATUS ISSUE: AI platform reports sync status as '{ai_status['sync_status']}'")
    return {'healthy': len(issues) == 0, 'issues': issues, 'crm_count': crm_count,
            'ai_count': ai_count, 'last_sync': ai_status['last_sync'], 'sync_status': ai_status['sync_status']}
def send_alert(health_report):
    msg = MIMEMultipart('alternative')
    msg['Subject'] = f"[{'ALERT' if not health_report['healthy'] else 'OK'}] AI Sync Monitor - {CONFIG['client_name']}"
    msg['From'] = CONFIG['smtp_user']
    msg['To'] = ', '.join(CONFIG['alert_recipients'])
    body = f"""AI Prospect Platform Sync Health Report\n{'='*50}\nClient: {CONFIG['client_name']}\nTimestamp: {datetime.now().strftime('%Y-%m-%d %H:%M')}\nCRM Type: {CONFIG['crm_type']}\nAI Platform: {CONFIG['ai_platform']}\nCRM Record Count: {health_report['crm_count']:,}\nAI Platform Record Count: {health_report['ai_count']:,}\nLast Sync: {health_report['last_sync']}\nSync Status: {health_report['sync_status']}\nOverall Health: {'HEALTHY' if health_report['healthy'] else 'ISSUES DETECTED'}\n\n"""
    if health_report['issues']:
        body += 'Issues Detected:\n' + '\n'.join(f' - {i}' for i in health_report['issues'])
        body += '\n\nAction Required: Review integration settings and re-sync if necessary.'
    msg.attach(MIMEText(body, 'plain'))
    with smtplib.SMTP(CONFIG['smtp_server'], CONFIG['smtp_port']) as server:
        server.starttls()
        server.login(CONFIG['smtp_user'], CONFIG['smtp_pass'])
        server.sendmail(CONFIG['smtp_user'], CONFIG['alert_recipients'], msg.as_string())
def main():
    try:
        health = check_sync_health()
        if not health['healthy']:
            send_alert(health)
            print(f"ALERT sent: {len(health['issues'])} issues detected")
        else:
            print(f"Sync healthy: CRM={health['crm_count']:,}, AI={health['ai_count']:,}")
            # Send weekly OK report on Mondays
            if datetime.now().weekday() == 0:
                send_alert(health)
    except Exception as e:
        error_report = {'healthy': False, 'issues': [f'MONITOR ERROR: {str(e)}'], 'crm_count': 0, 'ai_count': 0, 'last_sync': 'unknown', 'sync_status': 'monitor_error'}
        send_alert(error_report)

if __name__ == '__main__':
    main()

Prospect Score Change Alert Workflow
Type: workflow
A Power Automate workflow that detects when a donor's AI prospect score crosses a tier threshold (e.g., moves from Tier 2 to Tier 1) and automatically creates a task for the assigned gift officer, sends an email notification, and logs the score change in the CRM. This ensures that high-potential prospects are acted on immediately when the AI model identifies a scoring change.
Implementation
Power Automate Flow: Prospect Score Change Alert
This can also be implemented as a Salesforce Flow or Raiser's Edge NXT Action.
Trigger & Flow Configuration
Trigger: Scheduled — runs daily at 7:30 AM after sync monitor
Condition 3a: Score Crossed Tier 1 Threshold
If: current_score >= 80 AND previous_score < 80
- YES branch — ACTION: Send email (V2): To: gift_officer_email, Subject: '🔥 New Tier 1 Prospect: {donor_name}', Body includes donor name, previous score, new score, estimated gift capacity, last gift amount/date, recommended action (schedule personal visit within 2 weeks), and CRM profile link
- YES branch — ACTION: Create a task (Planner): Plan: Major Gifts Pipeline, Bucket: New Tier 1 Prospects, Title: 'Contact {donor_name} - New Tier 1 AI Score: {current_score}', Due date: addDays(utcNow(), 14), Assigned to: gift_officer_email, Notes: 'AI identified this donor as a high-probability major gift prospect.'
- YES branch — ACTION: Update row in SharePoint List: alert_sent: Yes, alert_sent_date: utcNow()
Condition 3c: Score Dropped Below Tier 2
If: current_score < 60 AND previous_score >= 60
- YES branch — ACTION: Send email to gift officer: Subject: '⚠️ Score Decrease: {donor_name} dropped to Tier 3', Body: 'Review donor engagement and update CRM records.'
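Outside Power Automate, the two threshold conditions reduce to a pair of comparisons. A minimal Python sketch for testing the logic (the 80/60 cutoffs come from the conditions above; the function name is illustrative):

```python
TIER1_THRESHOLD = 80  # Condition 3a boundary
TIER2_THRESHOLD = 60  # Condition 3c boundary

def classify_score_change(previous_score, current_score):
    """Return which alert (if any) a score change should fire.

    Mirrors Conditions 3a and 3c above: crossing INTO Tier 1 fires the
    hot-prospect alert; dropping OUT of Tier 2 fires the decrease warning.
    """
    if current_score >= TIER1_THRESHOLD and previous_score < TIER1_THRESHOLD:
        return 'new_tier1'
    if current_score < TIER2_THRESHOLD and previous_score >= TIER2_THRESHOLD:
        return 'dropped_below_tier2'
    return None  # no tier boundary crossed; no alert
```

Note that a donor already in Tier 1 who improves further (e.g., 81 to 95) fires nothing, which matches the flow's intent of alerting only on boundary crossings.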
SharePoint List Schema: Prospect_Scores_History
Create this list to track score changes over time. Configure the following columns:
- donor_id (Single line of text, required)
- donor_name (Single line of text)
- gift_officer_email (Person or Group)
- previous_score (Number, 0-100)
- current_score (Number, 0-100)
- previous_tier (Choice: Tier 1, Tier 2, Tier 3, Tier 4)
- current_tier (Choice: Tier 1, Tier 2, Tier 3, Tier 4)
- score_change_date (Date)
- capacity_estimate (Currency)
- last_gift_amount (Currency)
- last_gift_date (Date)
- crm_profile_link (Hyperlink)
- alert_sent (Yes/No)
- alert_sent_date (Date)
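The list can be created by hand in SharePoint, or provisioned via the Microsoft Graph create-list endpoint. A sketch of the request body, assuming the Graph columnDefinition facets (text, number, choice, dateTime, currency, personOrGroup, hyperlinkOrPicture, boolean) map to the schema as shown — verify against the tenant before relying on it:

```python
TIER_CHOICES = ['Tier 1', 'Tier 2', 'Tier 3', 'Tier 4']

def build_prospect_scores_list_payload():
    """Build the body for POST /sites/{site-id}/lists (Microsoft Graph).

    Each entry is a columnDefinition; the facet key determines the type.
    """
    def text(name): return {'name': name, 'text': {}}
    def number(name): return {'name': name, 'number': {}}
    def date(name): return {'name': name, 'dateTime': {}}
    def tier(name): return {'name': name, 'choice': {'choices': TIER_CHOICES}}
    return {
        'displayName': 'Prospect_Scores_History',
        'columns': [
            text('donor_id'), text('donor_name'),
            {'name': 'gift_officer_email', 'personOrGroup': {}},
            number('previous_score'), number('current_score'),
            tier('previous_tier'), tier('current_tier'),
            date('score_change_date'),
            {'name': 'capacity_estimate', 'currency': {}},
            {'name': 'last_gift_amount', 'currency': {}},
            date('last_gift_date'),
            {'name': 'crm_profile_link', 'hyperlinkOrPicture': {}},
            {'name': 'alert_sent', 'boolean': {}},
            date('alert_sent_date'),
        ],
    }

# POST this payload to https://graph.microsoft.com/v1.0/sites/{site-id}/lists
# with a token holding Sites.Manage.All; the site ID is tenant-specific.
```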
Daily Score Refresh Script
Run this script before the Power Automate flow triggers. It pulls the latest scores from the AI platform and populates the SharePoint tracking list.
# pulls AI prospect scores and updates SharePoint Prospect_Scores_History
# list. Run before the Power Automate flow triggers at 7:30 AM.
import requests
import json
from datetime import datetime

def refresh_prospect_scores(ai_api_url, ai_api_key, sharepoint_site, sp_list_id, sp_access_token):
    """Pull latest scores from AI platform and update SharePoint tracking list."""
    # Get current scores from AI platform
    headers = {'Authorization': f'Bearer {ai_api_key}'}
    resp = requests.get(f'{ai_api_url}/prospects/scores', headers=headers)
    current_scores = {p['donor_id']: p for p in resp.json()['prospects']}
    # Get previous scores from SharePoint
    sp_headers = {'Authorization': f'Bearer {sp_access_token}',
                  'Content-Type': 'application/json'}
    items_url = f'https://graph.microsoft.com/v1.0/sites/{sharepoint_site}/lists/{sp_list_id}/items'
    sp_resp = requests.get(f'{items_url}?$expand=fields', headers=sp_headers)
    existing = {item['fields']['donor_id']: item for item in sp_resp.json().get('value', [])}

    def get_tier(score):
        if score >= 80: return 'Tier 1'
        elif score >= 60: return 'Tier 2'
        elif score >= 40: return 'Tier 3'
        return 'Tier 4'

    changes = []
    for donor_id, data in current_scores.items():
        new_score = data.get('propensity_score', 0)
        new_tier = get_tier(new_score)
        old_score = existing.get(donor_id, {}).get('fields', {}).get('current_score', 0)
        old_tier = get_tier(old_score) if old_score else 'Tier 4'
        if new_tier != old_tier:
            fields = {
                'donor_id': donor_id,
                'donor_name': data.get('name', ''),
                'previous_score': old_score,
                'current_score': new_score,
                'previous_tier': old_tier,
                'current_tier': new_tier,
                'score_change_date': datetime.utcnow().isoformat(),
                'capacity_estimate': data.get('estimated_capacity', 0),
                'last_gift_amount': data.get('last_gift_amount', 0),
                'alert_sent': False
            }
            if donor_id in existing:
                # PATCH the item's fields resource; the item URL must not
                # carry the $expand query string used for the GET above
                item_id = existing[donor_id]['id']
                requests.patch(f'{items_url}/{item_id}/fields', headers=sp_headers, json=fields)
            else:
                requests.post(items_url, headers=sp_headers, json={'fields': fields})
            changes.append(f"{data.get('name')}: {old_tier} -> {new_tier} (Score: {old_score} -> {new_score})")
    return changes

Major Gift Pipeline Dashboard Template
Type: prompt
A Power BI dashboard template specification with DAX measures and visualization configurations for the major gift prospect pipeline. This template is imported into Power BI Desktop and connected to the client's data sources. It provides four dashboard pages: Executive Summary, Prospect Scoring Detail, Gift Officer Portfolio, and Engagement Heatmap.
Implementation
Data Model Tables Required
Table 1: Prospects — Source: AI Platform export (DonorSearch or Dataro) joined with CRM data
- DonorID (text, primary key)
- FullName (text)
- Email (text)
- City (text)
- State (text)
- AIScore (whole number, 0-100)
- ScoreTier (text: Tier 1/2/3/4)
- EstimatedCapacity (currency)
- RecommendedAskAmount (currency)
- LastGiftAmount (currency)
- LastGiftDate (date)
- FirstGiftDate (date)
- LifetimeGiving (currency)
- GiftCount (whole number)
- AssignedGiftOfficer (text)
- LastContactDate (date)
- Tags (text, comma-separated: Hidden Gem, Mystery Mogul, Planned Giving, etc.)
Table 2: EngagementSignals — Source: CRM engagement data
- DonorID (text, foreign key)
- EngagementType (text: Email Open, Event Attendance, Volunteer, Meeting, Gift, Website Visit)
- EngagementDate (date)
- EngagementDetail (text)
Table 3: ScoreHistory — Source: SharePoint Prospect_Scores_History list
- DonorID (text, foreign key)
- ScoreDate (date)
- Score (whole number)
- Tier (text)
Table 4: Calendar — Auto-generated date table for time intelligence
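Before wiring these tables into Power BI, it is worth validating that the AI platform export actually matches the Prospects schema. A pandas sketch (column names as listed in Table 1; the uniqueness and range checks are additions suggested by the model's relationships, not platform requirements):

```python
import pandas as pd

# Required columns for Table 1 (Prospects), from the data model above
PROSPECTS_COLUMNS = [
    'DonorID', 'FullName', 'Email', 'City', 'State', 'AIScore', 'ScoreTier',
    'EstimatedCapacity', 'RecommendedAskAmount', 'LastGiftAmount', 'LastGiftDate',
    'FirstGiftDate', 'LifetimeGiving', 'GiftCount', 'AssignedGiftOfficer',
    'LastContactDate', 'Tags',
]

def validate_prospects_export(df):
    """Return a list of problems that would break the Power BI data model."""
    problems = [f'missing column: {c}' for c in PROSPECTS_COLUMNS if c not in df.columns]
    if 'DonorID' in df.columns and df['DonorID'].duplicated().any():
        problems.append('DonorID is not unique (breaks the 1:many relationships)')
    if 'AIScore' in df.columns and not df['AIScore'].between(0, 100).all():
        problems.append('AIScore outside 0-100 range')
    return problems
```

Run this against each refresh export; an empty list means the file is safe to load.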
DAX Measures — Key Performance Indicators
// paste into the Model view Measures table (DAX comments use //)
Total Prospects = COUNTROWS(Prospects)
Tier 1 Count = CALCULATE(COUNTROWS(Prospects), Prospects[ScoreTier] = "Tier 1")
Tier 2 Count = CALCULATE(COUNTROWS(Prospects), Prospects[ScoreTier] = "Tier 2")
Total Pipeline Value = SUMX(
FILTER(Prospects, Prospects[AIScore] >= 60),
Prospects[RecommendedAskAmount]
)
Avg Days Since Contact = AVERAGEX(
FILTER(Prospects, Prospects[AIScore] >= 60),
DATEDIFF(Prospects[LastContactDate], TODAY(), DAY)
)
Prospects Needing Contact = CALCULATE(
COUNTROWS(Prospects),
Prospects[AIScore] >= 60,
DATEDIFF(Prospects[LastContactDate], TODAY(), DAY) > 30
)
New Tier 1 This Month = CALCULATE(
COUNTROWS(ScoreHistory),
ScoreHistory[Tier] = "Tier 1",
MONTH(ScoreHistory[ScoreDate]) = MONTH(TODAY()),
YEAR(ScoreHistory[ScoreDate]) = YEAR(TODAY())
)
Conversion Rate = DIVIDE(
CALCULATE(COUNTROWS(Prospects), Prospects[LifetimeGiving] >= [MajorGiftThreshold], Prospects[AIScore] >= 60),
CALCULATE(COUNTROWS(Prospects), Prospects[AIScore] >= 60),
0
)
Engagement Depth = AVERAGEX(
Prospects,
CALCULATE(
COUNTROWS(EngagementSignals),
FILTER(EngagementSignals, EngagementSignals[EngagementDate] >= TODAY() - 365)
)
)
Hidden Gems = CALCULATE(
COUNTROWS(Prospects),
CONTAINSSTRING(Prospects[Tags], "Hidden Gem")
)

Page 1: Executive Pipeline Summary
Layout: 1280×720. Background: White with #2C3E50 header bar.
- Row 1 — KPI Cards: [Total Prospects] titled 'Total Rated Prospects' | [Tier 1 Count] titled 'Tier 1 (Ready to Ask)' with conditional red formatting if 0 | [Total Pipeline Value] titled 'Estimated Pipeline Value' formatted as $#,##0 | [New Tier 1 This Month] titled 'New Tier 1 This Month'
- Row 2 — Funnel Chart (left 50%): Values = Count by ScoreTier, Category = ScoreTier. Colors: Tier 1 = #e74c3c, Tier 2 = #f39c12, Tier 3 = #3498db, Tier 4 = #95a5a6
- Row 2 — Donut Chart (right 50%): Pipeline Value by Gift Officer
- Row 3 — Line Chart: ScoreHistory count by month, filtered to Tier 1+2 entries. Title: 'Major Gift Pipeline Trend (12 Months)'
Page 2: Prospect Scoring Detail
- Slicer (top): ScoreTier multi-select, State, AssignedGiftOfficer
- Table columns: FullName | AIScore | EstimatedCapacity | RecommendedAskAmount | LastGiftAmount | LastGiftDate | Tags
- Conditional formatting on AIScore: 80–100 green, 60–79 yellow, 40–59 orange, <40 gray
- Sortable by all columns; default sort: AIScore descending
- Tooltip on hover: Show engagement count and last 3 engagement activities
Page 3: Gift Officer Portfolio View
- Slicer (top): AssignedGiftOfficer single-select
- KPI Row: Assigned Prospects | Portfolio Total Value | Avg Days Since Contact | Overdue Contacts
- Table: Prospect list for selected officer with action due dates
- Gauge: Contact cadence compliance (% of prospects contacted within 30 days)
Page 4: Engagement Heatmap
- Matrix: Rows = Top 50 donors by AI score, Columns = Engagement Types, Values = Count of engagements in last 12 months. Conditional formatting: Gradient white (0) to dark blue (10+)
- Highlight callout: Donors with AIScore >= 80 but low engagement = 'Cultivation Opportunity'
- Highlight callout: Donors with high engagement but no gift officer assigned = 'Unassigned Hot Prospects'
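The two callouts are simple joins over the Prospects and EngagementSignals tables. A pandas sketch (the "low" and "high" engagement cutoffs of 3 and 10 are illustrative assumptions, not values from the AI platform):

```python
import pandas as pd

def heatmap_callouts(prospects, engagement_counts, low=3, high=10):
    """Flag the two Page 4 callouts.

    `prospects` is the Prospects table; `engagement_counts` is a Series of
    engagements per DonorID over the last 12 months. `low`/`high` cutoffs
    are illustrative and should be tuned per client.
    """
    df = prospects.copy()
    df['Engagements12M'] = df['DonorID'].map(engagement_counts).fillna(0)
    # High AI score but little recent contact = Cultivation Opportunity
    cultivation = df[(df['AIScore'] >= 80) & (df['Engagements12M'] < low)]
    # Heavy engagement but nobody assigned = Unassigned Hot Prospect
    unassigned_hot = df[(df['Engagements12M'] >= high) &
                        (df['AssignedGiftOfficer'].isna() | (df['AssignedGiftOfficer'] == ''))]
    return cultivation, unassigned_hot
```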
Donor Engagement Score Calculator
Type: skill
A custom engagement scoring algorithm that supplements the AI platform's propensity scores by calculating an internal engagement score based on the nonprofit's specific touchpoints. This score is stored in the CRM and fed back to the AI platform as an additional signal. It weights recent interactions more heavily and accounts for recency, frequency, and breadth of engagement.
Implementation:
#!/usr/bin/env python3
# calculates a 0–100 engagement score based on recency, frequency, and
# breadth of donor interactions
"""Donor Engagement Score Calculator
Calculates a 0-100 engagement score based on recency, frequency, and breadth
of donor interactions. Designed to supplement AI prospect propensity scores.
Usage:
python engagement_scorer.py --input engagements.csv --donors donors.csv --output scored_donors.csv
Input CSV format (engagements.csv):
donor_id, engagement_type, engagement_date, engagement_value
Engagement types: gift, email_open, email_click, event_attended, volunteer_hours,
meeting, phone_call, website_visit, social_media, survey_response,
board_service, committee_service, peer_referral
"""
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import argparse
# Engagement type weights (configurable per client)
ENGAGEMENT_WEIGHTS = {
    'gift': 10.0,
    'meeting': 8.0,
    'board_service': 9.0,
    'committee_service': 7.0,
    'event_attended': 6.0,
    'volunteer_hours': 5.0,
    'peer_referral': 7.0,
    'phone_call': 5.0,
    'email_click': 3.0,
    'survey_response': 4.0,
    'email_open': 1.5,
    'website_visit': 1.0,
    'social_media': 1.0,
}
# Recency decay: more recent = higher multiplier
def recency_multiplier(days_ago):
    """Exponential decay: interactions today = 1.0x, 1 year ago = 0.5x, 2 years ago = 0.25x"""
    half_life = 365  # days
    return np.power(0.5, days_ago / half_life)
def calculate_engagement_scores(engagements_df, donors_df):
    now = datetime.now()
    engagements_df['engagement_date'] = pd.to_datetime(engagements_df['engagement_date'], errors='coerce')
    engagements_df['days_ago'] = (now - engagements_df['engagement_date']).dt.days
    engagements_df['days_ago'] = engagements_df['days_ago'].clip(lower=0)
    # Apply weights and recency decay
    engagements_df['base_weight'] = engagements_df['engagement_type'].map(ENGAGEMENT_WEIGHTS).fillna(1.0)
    engagements_df['recency_mult'] = engagements_df['days_ago'].apply(recency_multiplier)
    engagements_df['weighted_score'] = engagements_df['base_weight'] * engagements_df['recency_mult']
    # Aggregate per donor
    donor_scores = engagements_df.groupby('donor_id').agg(
        raw_score=('weighted_score', 'sum'),
        total_interactions=('engagement_type', 'count'),
        unique_types=('engagement_type', 'nunique'),
        most_recent=('engagement_date', 'max'),
        gift_count=('engagement_type', lambda x: (x == 'gift').sum()),
        last_gift_days=('days_ago', lambda x: x[engagements_df.loc[x.index, 'engagement_type'] == 'gift'].min() if (engagements_df.loc[x.index, 'engagement_type'] == 'gift').any() else 9999),
    ).reset_index()
    # Breadth bonus: engaging across multiple channels is a strong signal
    max_types = len(ENGAGEMENT_WEIGHTS)
    donor_scores['breadth_bonus'] = (donor_scores['unique_types'] / max_types) * 15
    # Frequency bonus: consistent engagement matters
    # Calculate interactions per month over last 24 months
    recent = engagements_df[engagements_df['days_ago'] <= 730]
    monthly_freq = recent.groupby('donor_id').agg(
        months_active=('engagement_date', lambda x: x.dt.to_period('M').nunique())
    ).reset_index()
    donor_scores = donor_scores.merge(monthly_freq, on='donor_id', how='left')
    donor_scores['months_active'] = donor_scores['months_active'].fillna(0)
    donor_scores['frequency_bonus'] = (donor_scores['months_active'] / 24) * 10
    # Final score: raw + breadth + frequency, normalized to 0-100
    donor_scores['total_raw'] = donor_scores['raw_score'] + donor_scores['breadth_bonus'] + donor_scores['frequency_bonus']
    # Normalize using percentile ranking (0-100 scale)
    donor_scores['engagement_score'] = donor_scores['total_raw'].rank(pct=True) * 100
    donor_scores['engagement_score'] = donor_scores['engagement_score'].round(0).astype(int)
    # Engagement tier
    donor_scores['engagement_tier'] = pd.cut(
        donor_scores['engagement_score'],
        bins=[0, 25, 50, 75, 100],
        labels=['Low', 'Moderate', 'High', 'Very High'],
        include_lowest=True
    )
    # Merge with donor master
    result = donors_df.merge(
        donor_scores[['donor_id', 'engagement_score', 'engagement_tier', 'total_interactions',
                      'unique_types', 'most_recent', 'months_active']],
        on='donor_id', how='left'
    )
    result['engagement_score'] = result['engagement_score'].fillna(0).astype(int)
    # Cast away the categorical dtype before filling with a label outside its categories
    result['engagement_tier'] = result['engagement_tier'].astype(object).fillna('None')
    return result
def main():
    parser = argparse.ArgumentParser(description='Calculate donor engagement scores')
    parser.add_argument('--input', required=True, help='Engagements CSV file')
    parser.add_argument('--donors', required=True, help='Donors CSV file')
    parser.add_argument('--output', default='scored_donors.csv', help='Output CSV')
    parser.add_argument('--weights-config', default=None, help='Optional JSON file to override engagement weights')
    args = parser.parse_args()
    if args.weights_config:
        import json
        with open(args.weights_config) as f:
            custom_weights = json.load(f)
        ENGAGEMENT_WEIGHTS.update(custom_weights)
        print(f'Loaded custom weights from {args.weights_config}')
    engagements = pd.read_csv(args.input)
    donors = pd.read_csv(args.donors)
    print(f'Processing {len(engagements):,} engagement records for {len(donors):,} donors...')
    result = calculate_engagement_scores(engagements, donors)
    result.to_csv(args.output, index=False)
    print(f'Scored donors exported to {args.output}')
    print("\nEngagement Score Distribution:")
    print(result['engagement_tier'].value_counts().sort_index())
    print("\nTop 10 Most Engaged Donors:")
    print(result.nlargest(10, 'engagement_score')[['donor_id', 'engagement_score', 'engagement_tier', 'total_interactions', 'unique_types']].to_string(index=False))

if __name__ == '__main__':
    main()

AI Governance Policy Generator for Nonprofits
Type: prompt
A structured prompt template for generating a customized AI Acceptable Use Policy for nonprofit clients. This policy document covers donor data handling, AI tool usage guidelines, bias monitoring, transparency requirements, and staff responsibilities. The MSP runs this prompt with client-specific details to produce a ready-to-adopt policy document.
Implementation
# AI Governance Policy Generator Prompt Template
# Use with: ChatGPT-4, Claude, or Copilot
# Replace all {PLACEHOLDERS} with client-specific information before running
---
SYSTEM PROMPT:
You are a nonprofit governance and data privacy expert. Generate a comprehensive
AI Acceptable Use Policy for a 501(c)(3) nonprofit organization. The policy must
be practical, board-ready, and compliant with applicable state privacy laws.
Write in clear, non-technical language appropriate for nonprofit board members
and development staff.
USER PROMPT:
Please generate an AI Acceptable Use Policy for our nonprofit organization with
the following details:
- Organization Name: {ORGANIZATION_NAME}
- EIN: {EIN}
- State of Incorporation: {STATE}
- Operating States: {OPERATING_STATES_LIST}
- Annual Revenue: {ANNUAL_REVENUE}
- Number of Donor Records: {RECORD_COUNT}
- AI Tools in Use: {AI_TOOLS_LIST} (e.g., DonorSearch AI, Dataro, Power BI)
- CRM Platform: {CRM_PLATFORM}
- Staff Size: {STAFF_SIZE}
- Has existing data privacy policy: {YES_NO}
- Processes donor credit card data: {YES_NO}
- Has international (EU/UK) donors: {YES_NO}
The policy must include these sections:
1. PURPOSE AND SCOPE
- Why the organization uses AI tools
- Which staff and systems are covered
- Relationship to existing data governance policies
2. APPROVED AI TOOLS AND USES
- Enumerated list of approved AI platforms and their purposes
- Approved use cases (prospect identification, donor segmentation, lapse prediction)
- Prohibited uses (donor score sharing outside org, automated solicitation without human review, using AI scores for employment or volunteer decisions)
3. DONOR DATA HANDLING
- What donor data is shared with AI platforms
- Data minimization principles
- Vendor data processing agreements required
- Data retention and deletion schedules
- Donor opt-out rights and process
4. TRANSPARENCY AND DISCLOSURE
- Language to add to privacy policy regarding AI use
- When and how to disclose AI involvement to donors
- Board reporting requirements on AI tool usage
5. BIAS AND FAIRNESS
- Quarterly review of AI-generated prospect lists for demographic bias
- Process for investigating and addressing scoring anomalies
- Prohibition on using AI scores as sole basis for donor cultivation decisions
6. STATE PRIVACY LAW COMPLIANCE
- Specific requirements for {OPERATING_STATES_LIST}
- Special attention to Colorado Consumer Privacy Act (if applicable)
- Special attention to Oregon Consumer Privacy Act (if applicable, effective July 1, 2025 for nonprofits)
- Consent requirements and opt-out mechanisms
7. SECURITY REQUIREMENTS
- MFA required for all AI platform access
- Role-based access controls
- Annual security review of AI vendor SOC 2 compliance
- Incident response for AI-related data breaches
8. STAFF RESPONSIBILITIES
- Required training before AI tool access
- Restrictions on entering donor data into general-purpose AI (ChatGPT, etc.)
- Reporting obligations for suspected misuse
9. GOVERNANCE AND REVIEW
- Policy review schedule (annual minimum)
- Responsible officer (typically Director of Development or COO)
- Board oversight requirements
- Policy amendment process
Format as a professional policy document with section numbers, effective date placeholder, and signature lines for Board Chair and Executive Director.

MSP Instructions
Testing & Validation
Client Handoff
The client handoff meeting should be a structured 90-minute session with the Director of Development, Major Gifts Officers, Prospect Researcher (if applicable), and Executive Director. Cover the following:
Success Criteria to Review Together
Schedule a 30-minute check-in for 2 weeks post-launch and a formal 90-day review meeting to assess adoption and ROI.
Maintenance
Ongoing MSP Managed Service Responsibilities:
Daily (Automated)
- CRM-to-AI platform sync health monitor runs at 7:00 AM (automated script/Power Automate). MSP receives alert only if issues detected.
- Power BI dashboard data refresh at 6:00 AM. Monitor for refresh failures via Power BI Service alerts.
Weekly
- Review sync monitor weekly summary report (sent automatically on Mondays).
- Verify AI platform prediction refresh completed (Dataro refreshes weekly; DonorSearch varies by tier).
- Check Power Automate flow run history for any failed alert deliveries.
- Estimated time: 30 minutes/week.
Monthly
- Run data quality spot check: export 50 random donor records and verify key fields (name, address, email, last gift) are accurate and current.
- Review platform usage metrics: number of logins, reports generated, prospects viewed. Flag if usage drops significantly (indicates adoption issues).
- Check for platform updates or new features from AI vendor (DonorSearch/Dataro release notes).
- Verify all user accounts are current (disable departed staff, add new hires).
- Estimated time: 2 hours/month.
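The monthly spot check above can be partially scripted. A sketch that samples records and reports the blank rate per key field (the column names assume a generic CRM export and will need mapping to the client's actual export):

```python
import pandas as pd

# Key fields named in the monthly checklist; rename to match the CRM export
KEY_FIELDS = ['name', 'address', 'email', 'last_gift']

def spot_check(donors, sample_size=50, seed=None):
    """Sample donor records and report the % blank for each key field."""
    sample = donors.sample(n=min(sample_size, len(donors)), random_state=seed)
    report = {}
    for field in KEY_FIELDS:
        # A field counts as blank if it is null or whitespace-only
        blank = sample[field].isna() | (sample[field].astype(str).str.strip() == '')
        report[field] = round(blank.mean() * 100, 1)
    return report
```

The percentages give an objective trend line month over month; the human review of accuracy and currency still happens on the sampled records themselves.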
Quarterly
- Comprehensive AI model performance review with client's Director of Development. Analyze: (a) How many AI-identified Tier 1 prospects received a solicitation? (b) What was the conversion rate? (c) Are there systematic blind spots in the scoring? (d) Should the major gift threshold be adjusted?
- Data quality audit using the Donor Data Quality Audit Script. Compare AI Readiness Score to baseline.
- Review and update engagement signal weights if the client has added new touchpoint types.
- Verify compliance controls: opt-out list is being honored, privacy policy is current, AI governance policy is being followed.
- Generate quarterly ROI report: new major gifts attributed to AI-identified prospects vs. platform and service costs.
- Estimated time: 4-6 hours/quarter.
Annually
- Full CRM data hygiene: NCOA address update, email validation, deceased record suppression, comprehensive de-duplication.
- AI vendor contract renewal review: evaluate pricing, compare to competitive alternatives, negotiate terms.
- Privacy policy annual review and update for any new state laws taking effect.
- AI Acceptable Use Policy annual review with client board.
- Security audit: verify MFA compliance, review API key rotation, confirm vendor SOC 2 reports are current.
- Platform upgrade/migration assessment: evaluate whether to upgrade tiers (e.g., Enhanced CORE → full DSAi) based on results.
- Estimated time: 8-12 hours/year.
Model Retraining Triggers (contact AI vendor)
- Client undergoes major organizational change (merger, new program launch, capital campaign)
- Significant shift in donor base composition (e.g., large acquisition of new donors)
- AI scoring accuracy drops below 65% in quarterly validation review
- Client changes their major gift threshold definition
- More than 2 years since last model retrain (Dataro auto-refreshes; DonorSearch may require manual request)
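The 65% accuracy trigger needs a concrete measurement to be actionable. One reasonable reading is Tier 1 precision: the share of Tier 1-scored donors who actually gave at the major-gift level during the review window. A sketch (the 80-point Tier 1 cutoff matches the rest of this guide; the function name and inputs are illustrative):

```python
def tier1_precision(prospects, major_gift_donor_ids):
    """Share of Tier 1-scored donors who gave at the major-gift level.

    `prospects` is an iterable of (donor_id, ai_score) pairs from the start
    of the quarter; `major_gift_donor_ids` is the set of donors who made a
    qualifying gift during the quarter. Returns None if no Tier 1 donors
    existed to validate against.
    """
    tier1 = [d for d, score in prospects if score >= 80]
    if not tier1:
        return None  # nothing to validate this quarter
    hits = sum(1 for d in tier1 if d in major_gift_donor_ids)
    return hits / len(tier1)
```

A result below 0.65 over a full quarter would trip the retraining trigger above; small Tier 1 pools make the ratio noisy, so consider pooling two quarters for clients with few rated prospects.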
SLA Recommendations
- Critical issues (sync failure, platform outage): 4-hour response, 24-hour resolution
- Standard issues (scoring anomalies, report errors): 1-business-day response, 5-business-day resolution
- Enhancement requests (new dashboards, workflow changes): scoped and quoted within 5 business days
- Escalation path: MSP Level 1 → MSP Level 2 (CRM specialist) → AI vendor support → AI vendor account manager
Alternatives
Blackbaud Prospect Insights Native Add-on
For clients already using Blackbaud Raiser's Edge NXT, skip the third-party AI platform entirely and activate Blackbaud's native Prospect Insights and Prospect Insights Pro add-ons. These are built directly into the CRM, eliminating integration complexity. Prospect Insights uses Blackbaud's own AI models to identify major gift and planned giving prospects with prescriptive prioritizations.
Custom Python ML Pipeline with Open-Source Tools
Build a custom major gift propensity model using Python (scikit-learn, XGBoost, or LightGBM), pandas for data processing, and Power BI for visualization. The MSP extracts donor data from the CRM, trains a classification model on historical major gifts, and deploys scoring as an automated pipeline running on Azure (using the nonprofit's $2,000 annual Azure credits).
Kindsight (iWave) Premium Intelligence Platform
Deploy Kindsight's iWave platform as the primary prospect intelligence tool, with NonprofitOS generative AI co-pilot for automated prospect research. iWave aggregates 44 vetted data sources to build comprehensive donor profiles with wealth indicators, philanthropic history, and giving capacity ratings.
AgileSoftLabs White-Label Platform (MSP-Branded Solution)
Instead of reselling a third-party AI platform, deploy AgileSoftLabs' white-label nonprofit CRM and fundraising platform under the MSP's own brand. The platform includes AI-powered analytics, predictive donor insights, and compliance automation. The MSP controls the client relationship, pricing, and branding entirely.
Dataro as Primary with DonorSearch Enrichment
Use Dataro ($250-$499/month) as the primary AI scoring engine for propensity predictions, supplemented by DonorSearch's philanthropic database for wealth screening and capacity ratings. This two-platform approach provides both predictive scoring (Dataro) and descriptive intelligence (DonorSearch) at a moderate combined cost.