
Implementation Guide: Identify major gift prospects based on giving history and engagement signals

Step-by-step implementation guide for deploying AI to identify major gift prospects based on giving history and engagement signals for nonprofit organization clients.

Hardware Procurement

Staff Workstation - Development Officer / Prospect Researcher

Dell Latitude 5550 (Intel Core Ultra 5, 16GB RAM, 256GB SSD) | Qty: 2

$950 per unit MSP cost via TechSoup or Dell nonprofit channel / $1,200 suggested resale

Primary workstations for gift officers and prospect researchers to access AI dashboards, CRM, and prospect reports simultaneously. 16GB RAM recommended for running browser-based CRM, AI platform, and Power BI dashboards concurrently.

External Monitor - Dual Display Setup

Dell P2425H 24-inch FHD IPS Monitor | Qty: 4

$180 per unit MSP cost / $250 suggested resale

Dual-monitor setup for each workstation allows gift officers to view AI prospect scores and donor profiles on one screen while working in the CRM on the other. Two monitors per workstation, two workstations.

Monitor Stand / Dual Arm Mount

AmazonBasics Dual Monitor Stand (B00MIBN16O) | Qty: 2

$30 per unit MSP cost / $50 suggested resale

Ergonomic dual-monitor mounting for each workstation to maximize desk space and viewing comfort for prospect researchers working extended sessions.

Software Procurement

DonorSearch AI (DSAi) - Enhanced CORE

DonorSearch | Enhanced CORE | Annual subscription

$1,695/year (Tier 1 annual contract); $1,000/year for INN member organizations

Primary AI prospect identification platform. Analyzes 800+ data points per donor against the world's largest philanthropic database to generate major gift propensity scores with 81% accuracy. Enhanced CORE is the entry-level AI tier ideal for small-to-mid-size nonprofits new to predictive analytics.

DonorSearch AI (DSAi) - Full Custom Predictive Modeling

DonorSearch | SaaS | Annual subscription

$3,000–$8,000/year (custom quote based on database size and features)

Full-featured AI platform for mid-to-large nonprofits. Custom predictive models trained on the organization's specific data patterns. Includes advanced customization, deeper scoring granularity, and enhanced reporting. Use this tier for clients with 5,000+ donor records.

Dataro ProspectAI

Dataro | SaaS | Monthly subscription, per-seat. Plans: Pro (up to 5 users, 250 reports/month); Core platform (expanded features); Enterprise (custom-quoted)

$250/month (Pro Plan); $499/month (Core platform); Enterprise custom-quoted

Alternative or complementary AI prospect identification platform. Generates propensity scores for major giving, single giving, regular giving, and lapse prediction. Best for organizations with 10,000+ records and 5+ years of giving history. Predictions refresh weekly. Lower entry cost than DonorSearch.

Blackbaud Raiser's Edge NXT + Prospect Insights

Blackbaud | SaaS | Annual subscription | Qty: 1

$4,000–$10,000+/year for mid-sized nonprofit (CRM + Prospect Insights add-on); pricing is custom-quoted

If client already uses Raiser's Edge NXT, Prospect Insights is the native AI add-on that delivers AI-driven major gift prospect recommendations and prescriptive prioritizations directly within the CRM. Eliminates integration complexity. Prospect Insights Pro (launched Nov 2023) adds planned giving identification.

Salesforce Nonprofit Cloud - Enterprise Edition

Salesforce | Enterprise Edition | Qty: 10 free licenses via the Power of Us Program; additional licenses per-seat

10 free Enterprise licenses via Power of Us Program; additional licenses at $60/user/month (80% discount from standard pricing)

Alternative CRM backbone for nonprofits preferring Salesforce ecosystem. Einstein AI features provide donor engagement summaries and predictive scoring. Integrates natively with DonorSearch and Dataro via pre-built connectors. Most customizable long-term option.

DonorPerfect

DonorPerfect (SofterWare) | SaaS | Monthly subscription

$89–$299/month depending on plan and record count

Budget-friendly CRM for small nonprofits. Includes native DonorSearch integration for AI screening. Suitable for organizations under 2,000 donors.

Bloomerang

Bloomerang | SaaS | Monthly subscription

$125/month (scales by record count)

Mid-tier CRM with built-in donor retention analytics and native DonorSearch integration. Good fit for nonprofits prioritizing donor retention alongside major gift prospecting.

Microsoft 365 Business Basic (Nonprofit)

Microsoft | SaaS | Per-seat, annual (nonprofit grant) | Qty: Up to 300 licenses

Free for up to 300 licenses for eligible nonprofits; 75% discount on premium plans

Collaboration suite (Teams, SharePoint, Exchange Online) for nonprofit staff. SharePoint used for storing prospect research reports and sharing AI-generated prospect lists across the development team.

Microsoft Azure Credits (Nonprofit Grant)

Microsoft | Cloud credits | Annual grant

$2,000/year in Azure credits for eligible nonprofits (free)

Covers Azure hosting for Power BI Service, Azure Functions for data pipeline automation, and any supplementary cloud compute for custom data processing workflows.

Microsoft Power BI Pro

Microsoft | SaaS | Per-seat, monthly

$10/user/month ($120/year); nonprofit discounts available via Microsoft 365 E5 bundle

Dashboard visualization platform for creating executive-level fundraising dashboards showing AI prospect scores, pipeline value, gift officer assignments, and portfolio performance. Desktop version is free; Pro license needed for sharing dashboards across the organization.

Mailchimp Standard (Nonprofit Discount)

Intuit Mailchimp | SaaS | Monthly subscription

~$13–$20/month for Standard plan (15% nonprofit discount available)

Email marketing platform for targeted outreach to AI-identified prospect segments. Syncs donor segments from CRM to enable personalized cultivation campaigns for major gift prospects.

Prerequisites

Installation Steps

Step 1: Conduct Data Quality Audit & Gap Analysis

Before any platform deployment, perform a comprehensive audit of the client's existing CRM data. Export a full donor dataset and analyze it for completeness, accuracy, and consistency. This step determines data readiness and identifies cleanup work required before the AI platform can generate reliable propensity scores. Poor data quality is the #1 cause of failed AI prospect identification projects.

1. Export donor data from Raiser's Edge NXT via SKY API (example using PowerShell)
2. For Salesforce NPSP, use Data Loader CLI
3. Run the Data Quality Analysis Script (Python)

Export donor data from Raiser's Edge NXT via SKY API:

```powershell
Install-Module -Name SKYAPIPowerShell -Scope CurrentUser
Connect-SKYAPI -ClientId 'YOUR_CLIENT_ID' -ClientSecret 'YOUR_CLIENT_SECRET' -RedirectUri 'https://localhost'
$donors = Get-SKYAPIConstituent -Limit 10000
$donors | Export-Csv -Path 'C:\DataAudit\donor_export.csv' -NoTypeInformation
```

Salesforce NPSP export using Data Loader CLI:

```bash
java -cp dataloader.jar -Dsalesforce.config.dir=conf process csvExtractDonors
```

Run the Data Quality Analysis Script (Python):

```bash
pip install pandas numpy
python data_quality_audit.py --input donor_export.csv --output audit_report.html
```
Note

Document all findings in a Data Quality Audit Report. Typical issues found: 20-40% of records missing email addresses, 10-15% duplicate records, inconsistent gift type coding, missing engagement data. This report becomes the basis for Phase 2 cleanup work and sets client expectations. Allocate 8-16 hours for this step depending on database size.

Step 2: Execute Data Cleanup & Standardization

Based on the audit findings, systematically clean and standardize the donor database. This includes de-duplication, address standardization via NCOA (National Change of Address), email validation, deceased record suppression, and standardization of gift type and fund designation codes. This is typically the longest and most labor-intensive phase.

1. De-duplication in Raiser's Edge NXT: navigate to Administration > Duplicate Management > Find Duplicates
2. Set matching rules: Last Name + First Name + Street Address + ZIP
3. Review and merge duplicates manually (automated merge is not recommended for donor data)
4. NCOA address update via Blackbaud: navigate to Administration > Address Processing > Run NCOA Update
5. This requires an active NCOA subscription through Blackbaud
6. Standardize gift types in the CRM: create a mapping document of legacy codes to standard codes, e.g. 'CC' → 'Credit Card', 'CHK' → 'Check', 'ACH' → 'Electronic'
7. Apply the mapping via batch update in the CRM admin panel

Email validation using the NeverBounce or ZeroBounce API:

```bash
pip install requests pandas
python validate_emails.py --input donor_export.csv --api_key 'YOUR_NEVERBOUNCE_KEY' --output validated_emails.csv
```
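As a first pass before the manual Raiser's Edge review, the Name + ZIP matching rule above can be approximated in pandas to flag candidate duplicate groups for researchers. This is only a sketch: the column names (`first_name`, `last_name`, `zip`) are illustrative and must be adapted to the client's actual export headers, and it deliberately flags rather than merges, consistent with step 3.

```python
import pandas as pd

def flag_duplicates(df: pd.DataFrame) -> pd.DataFrame:
    """Flag likely duplicate donor records on Last Name + First Name + ZIP.

    Adds a 'dupe_group' key and a boolean 'possible_duplicate' column so
    researchers can review matched groups side by side. Matching is
    case-insensitive and uses only the 5-digit ZIP; no automated merging.
    """
    key = (
        df["last_name"].str.strip().str.lower() + "|"
        + df["first_name"].str.strip().str.lower() + "|"
        + df["zip"].astype(str).str.strip().str[:5]
    )
    df = df.assign(dupe_group=key)
    df["possible_duplicate"] = df.duplicated(subset="dupe_group", keep=False)
    return df

# Illustrative records: the first two differ only in casing and ZIP+4.
donors = pd.DataFrame({
    "first_name": ["Jane", "JANE", "Robert"],
    "last_name": ["Smith", "smith", "Lee"],
    "zip": ["21201", "21201-4455", "97205"],
})
flagged = flag_duplicates(donors)
print(flagged["possible_duplicate"].tolist())  # [True, True, False]
```

Researchers would then sort the export by `dupe_group` and work through each flagged group in the CRM's merge tool.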
Note

Budget 20-60 hours for data cleanup depending on database size and quality. For databases over 10,000 records, consider using a specialized nonprofit data hygiene service like TrueGivers ($0.02-0.05/record) or Blackbaud's Target Analytics data enrichment. Do NOT skip this step—AI models trained on dirty data produce unreliable prospect scores. Document all changes in a Data Cleanup Log for client records.

Step 3: Register for Nonprofit Technology Discounts

Before procuring software, ensure the client is registered with TechSoup, Microsoft Nonprofit, and Salesforce Power of Us (if applicable) to access significant discounts. TechSoup members save an average of $17,000 over the course of their membership. Microsoft provides up to 300 free M365 licenses and $2,000/year in Azure credits.

1. TechSoup registration: visit https://www.techsoup.org/join and complete nonprofit validation
2. Gather required documents: IRS determination letter, EIN, annual budget documentation
3. Allow 3-7 business days for TechSoup validation
4. Microsoft nonprofit registration: visit https://nonprofit.microsoft.com/en-us/getting-started
5. Supply the TechSoup validation token to Microsoft
6. Claim 300 free Microsoft 365 Business Basic licenses
7. Apply for the Azure nonprofit grant ($2,000/year in credits)
8. Salesforce Power of Us (if using Salesforce): visit https://www.salesforce.org/power-of-us/
9. Apply for 10 free Enterprise Edition licenses
10. Provide 501(c)(3) documentation for the Salesforce application
Note

TechSoup validation is a prerequisite for Microsoft and many other vendor nonprofit programs. Start this process in Week 1 as it can take up to 2 weeks for full validation. If the client already has TechSoup membership, verify it is current and that the organization profile is up to date. Keep copies of all registration confirmations in the project documentation folder.

Step 4: Provision AI Prospect Identification Platform

Set up the primary AI platform account. For most engagements, DonorSearch AI Enhanced CORE is the recommended starting point. For clients with 10,000+ records and mature data, Dataro is a strong alternative. Contact the vendor's sales team to initiate the subscription, negotiate nonprofit pricing, and obtain API credentials.

DonorSearch AI Setup

1. Contact DonorSearch sales: sales@donorsearch.net or 800-818-4845
2. Request Enhanced CORE tier pricing for your client's database size
3. Provide: organization name, EIN, CRM platform, estimated record count
4. Receive: account credentials, API key, integration documentation
5. Log into https://app.donorsearch.net with the provided credentials
6. Navigate to Settings > Organization Profile > Complete all fields

Dataro Setup (Alternative)

1. Contact Dataro sales: https://www.dataro.io/book-a-demo
2. Request ProspectAI Pro Plan ($250/month)
3. Provide: CRM type, record count, years of giving history available
4. Receive: account provisioning email with setup wizard link
5. Complete the onboarding wizard at https://app.dataro.io
Note

DonorSearch typically requires a signed annual contract. Request a pilot period or proof-of-concept scoring on a subset of records if available. Dataro offers month-to-month billing which reduces client commitment risk. For Blackbaud clients already using Raiser's Edge NXT, inquire about bundled Prospect Insights pricing—it may be more cost-effective than a separate AI platform. Keep all API keys in the MSP's secure credential vault (e.g., IT Glue, Hudu, or Passportal).

Step 5: Configure CRM-to-AI Platform Integration

Establish the data connection between the client's CRM and the AI platform. This is the critical technical step that enables donor data to flow from the CRM into the AI scoring engine and prospect scores to flow back. The integration method depends on the CRM platform.

Raiser's Edge NXT + DonorSearch Integration

1. In Raiser's Edge NXT, navigate to: Control Panel > Applications
2. Register a new application to get SKY API credentials
3. In the DonorSearch portal: Settings > Integrations > Blackbaud Raiser's Edge NXT
4. Enter the SKY API subscription key and authorize the OAuth connection
5. Map fields: Constituent ID, Name, Address, Email, Gift Amount, Gift Date, Fund
6. Run the initial sync and verify the record count matches

Salesforce NPSP + DonorSearch Integration

1. In Salesforce: Setup > App Manager > Connected Apps
2. Create a new Connected App for DonorSearch with OAuth 2.0
3. Install the DonorSearch AppExchange package: https://appexchange.salesforce.com/
4. In DonorSearch: Settings > Integrations > Salesforce
5. Authorize with Salesforce admin credentials
6. Map objects: Contact, Account, Opportunity (as gifts), Campaign (as engagement)

Salesforce NPSP + Dataro Integration

Dataro provides a free pre-built Salesforce connector.

1. Install the Dataro managed package from the Salesforce AppExchange
2. In Dataro: Integrations > Salesforce > Connect
3. Authorize OAuth and select the NPSP data model
4. Map: Contact → Donor; Opportunity → Gift; CampaignMember → Engagement
5. Enable bi-directional sync so propensity scores write back to Contact records

DonorPerfect + DonorSearch Integration

1. In DonorPerfect: Admin > Integrations > DonorSearch
2. Enter the DonorSearch API key
3. Enable automatic screening for new donors
4. Configure the screening schedule: a weekly full refresh is recommended
Note

The initial data sync can take 2-24 hours depending on database size. Do NOT run the initial sync during business hours as it may impact CRM performance. Schedule for overnight or weekend. Verify the sync by comparing record counts: CRM active donors vs. AI platform ingested records. A discrepancy of more than 2% indicates a mapping or filtering issue that must be resolved before proceeding. Document the field mapping in the project runbook.
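The post-sync verification described above reduces to a small helper worth keeping in the project runbook. A minimal sketch (the 2% threshold mirrors the guidance in the note; the counts are illustrative):

```python
def sync_discrepancy(crm_count: int, platform_count: int) -> tuple[float, bool]:
    """Return the percentage discrepancy between CRM active-donor count and
    AI-platform ingested-record count, plus whether it exceeds the 2%
    threshold that should block go-live until mapping/filtering is fixed."""
    if crm_count == 0:
        raise ValueError("CRM export contains no active donor records")
    pct = abs(crm_count - platform_count) / crm_count * 100
    return round(pct, 2), pct > 2.0

# Illustrative counts from a CRM export and the platform's ingestion report.
pct, needs_review = sync_discrepancy(crm_count=18450, platform_count=18122)
print(pct, needs_review)  # 1.78 False
```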

Step 6: Configure AI Model Parameters & Scoring Thresholds

Configure the AI platform's scoring model to align with the client's specific fundraising program. Define what constitutes a 'major gift' for this organization, set scoring thresholds for prospect tiers, and configure which engagement signals should influence the propensity model. This step requires close collaboration with the client's Director of Development.

DonorSearch AI Dashboard

1. Navigate to: Model Configuration > Major Gift Definition
2. Set the major gift threshold (example: $5,000 for a mid-size org; $1,000 for a small org)
3. Configure the time horizon: 'Likelihood of major gift within next 12 months'
4. Set prospect tiers:
   • Tier 1 (Hot): Score 80-100 → Immediate gift officer assignment
   • Tier 2 (Warm): Score 60-79 → Cultivation queue
   • Tier 3 (Emerging): Score 40-59 → Long-term pipeline
   • Tier 4 (Monitor): Score 0-39 → Automated nurture only
5. Configure engagement signal weights (if the platform allows):
   • Recent gift (last 6 months): High weight
   • Event attendance: Medium weight
   • Email engagement (opens/clicks): Medium weight
   • Volunteer hours: Medium weight
   • Board/committee membership: High weight
   • Wealth indicator match: High weight

Dataro

1. Navigate to: Models > Major Giving Model > Configure
2. Set the minimum gift amount for 'major gift' classification
3. Select the training data window (recommended: 3-5 years)
4. Enable all available engagement signals
5. Click 'Train Model' (initial training takes 24-72 hours)
6. Enable the weekly model refresh schedule
Note

The major gift threshold varies dramatically by organization size. A small community nonprofit may define major gifts at $500, while a university foundation sets it at $25,000+. ALWAYS confirm this with the client's development leadership before configuring. DonorSearch's model is largely pre-built and uses their philanthropic database for scoring; Dataro trains a custom model on the client's own data. For Dataro, the initial model training requires at least 50 historical major gifts to have sufficient training data for reliable predictions.

Step 7: Build Prospect Dashboard in Power BI

Create a visual dashboard that presents AI-generated prospect scores, pipeline metrics, and gift officer portfolio views in an accessible format for the development team. This dashboard becomes the daily working tool for the major gifts program and the primary deliverable the client's leadership sees.

1. Install Power BI Desktop (free) on the MSP workstation. Download from: https://powerbi.microsoft.com/en-us/downloads/
2. Connect to data sources:
   • DonorSearch: export prospect scores as CSV or connect via API
   • CRM: connect via ODBC/API (a Salesforce connector is built into Power BI)
   • Raiser's Edge: use the SKY API connector or export to Excel/CSV
3. Create the following dashboard pages:
   • Page 1: Executive Pipeline Summary. KPI cards (total prospects identified, total estimated pipeline value), funnel chart (Tier 1 → Tier 2 → Tier 3 → Tier 4 counts), trend line (new prospects identified per month).
   • Page 2: Prospect Scoring Detail. Table (Donor Name, Score, Estimated Capacity, Last Gift, Last Contact), filters (score range, gift officer assigned, geographic region), conditional formatting (green 80+, yellow 60-79, orange 40-59, gray <40).
   • Page 3: Gift Officer Portfolio View. Slicer (select gift officer), cards (assigned prospects count, total portfolio value, average days since last contact), table (assigned prospects with next action due date).
   • Page 4: Engagement Signal Heatmap. Matrix (donors vs. engagement types: gifts, events, emails, volunteering); highlight donors with multiple engagement signals but no gift officer assignment.
4. Publish to Power BI Service: File > Publish > select workspace > Publish, then configure scheduled refresh: Data > Scheduled Refresh > Daily at 6:00 AM
5. Share the dashboard with client users: Workspace > Access > Add users (viewer role for gift officers, contributor role for managers)
Note

Power BI Desktop is free; Power BI Pro license ($10/user/month) is required for sharing dashboards via the Power BI Service. If the client has Microsoft 365 E5 (available at nonprofit pricing), Power BI Pro is included. For clients who cannot afford Power BI Pro licenses, create a shared Excel workbook on SharePoint as a simpler alternative. The dashboard should be reviewed and refined during the validation phase based on client feedback. Export a PDF version weekly for leadership who prefer static reports.

Step 8: Configure Automated Alerts & Workflow Triggers

Set up automated notifications that alert gift officers when high-scoring prospects are identified, when donor behavior changes significantly (e.g., a mid-level donor's score jumps into Tier 1), or when a prospect hasn't been contacted within the defined cultivation timeline. This ensures AI insights translate into timely action.

Power Automate Flow for New High-Score Prospect Alert

1. Navigate to https://flow.microsoft.com
2. Create a new Automated Flow
3. Trigger: 'When a row is added or modified' (Dataverse/SharePoint list)
4. Condition: ProspectScore >= 80 AND GiftOfficerAssigned = blank
5. Action: send an email notification to the Major Gifts Director. Subject: 'New High-Score Prospect Identified: {DonorName}'; body: 'Score: {Score} | Estimated Capacity: {Capacity} | Last Gift: {LastGift}'
6. Action: create a task in Planner/To-Do for gift officer assignment

CRM Workflow for Stale Prospect Follow-up (Raiser's Edge NXT)

1. Navigate to: Lists > New Smart List
2. Criteria: AI Score >= 60 AND Last Contact Date > 30 days ago
3. Save as 'Overdue Prospect Follow-ups'
4. Set up an email alert: Administration > Email Alerts > New
5. Schedule: weekly on Monday at 8:00 AM
6. Recipients: the assigned gift officer for each prospect

Salesforce Workflow (if applicable)

1. Setup > Process Builder > New Process
2. Object: Contact
3. Criteria: AI_Prospect_Score__c >= 80 AND Owner = null
4. Action: create a Task assigned to the Major Gifts Director
5. Action: send an Email Alert with prospect details
6. Activate the Process
Note

Automated alerts are essential for adoption—without them, staff will forget to check the AI dashboard regularly. Start with a simple email alert for Tier 1 prospects and expand to more sophisticated workflows after the client team is comfortable. Be careful not to create alert fatigue: limit to 2-5 high-priority alerts per week. Configure a weekly digest email summarizing all score changes as a secondary notification.

Step 9: Validate AI Scoring Accuracy with Development Staff

Before going live, conduct a structured validation session with the client's development team. Present the AI-generated prospect list to experienced gift officers and prospect researchers who have institutional knowledge. Compare AI recommendations against their professional judgment to identify false positives, missed prospects, and scoring anomalies. This is critical for building staff trust in the system.

1. Export the top 50 AI-scored prospects from the platform
2. Export the bottom 50 AI-scored donors who have given $1,000+ in the past
3. Create a validation spreadsheet with columns: Donor Name | AI Score | AI Estimated Capacity | Staff Assessment (Agree/Disagree) | Staff Notes
4. Schedule a 2-hour validation session with the Director of Development, Major Gifts Officers, and a Prospect Researcher (if available)
5. Review each prospect and capture staff feedback
6. Calculate the agreement rate: (Agreements / Total Reviewed) * 100. Target: a 70%+ agreement rate indicates the model is reliable; below 60%, review data quality and model configuration
7. Document 'surprise' finds: donors the AI identified that staff hadn't considered. These are the key value demonstrations for the client
8. Send validation results to the AI vendor for model tuning if needed:
   • DonorSearch: contact support@donorsearch.net with validation findings
   • Dataro: use the in-app feedback mechanism to flag scoring issues
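The agreement-rate arithmetic above can be scripted so the validation spreadsheet is scored the same way in every session. A small sketch (the verdict thresholds follow the guidance in step 6):

```python
def agreement_rate(assessments: list[str]) -> float:
    """Percentage of reviewed prospects where staff agreed with the AI score.
    Input is the Staff Assessment column ('Agree'/'Disagree', any casing)."""
    agrees = sum(1 for a in assessments if a.strip().lower() == "agree")
    return round(agrees / len(assessments) * 100, 1)

# Illustrative session results for five reviewed prospects.
session = ["Agree", "Agree", "Disagree", "agree", "Agree"]
rate = agreement_rate(session)
verdict = ("model is reliable" if rate >= 70
           else "review model configuration" if rate >= 60
           else "investigate data quality")
print(rate, "->", verdict)  # 80.0 -> model is reliable
```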
Note

This validation step is the make-or-break moment for client adoption. If gift officers don't trust the AI scores, they won't use the system. Emphasize the 'surprise' prospects — donors the AI identified as high potential that staff hadn't noticed. These concrete wins build credibility. If agreement rate is below 60%, investigate data quality issues before proceeding. Common causes: outdated wealth data, missing engagement signals, or misclassified gift types in the CRM.

Step 10: Conduct Staff Training & Go-Live

Deliver comprehensive training to all staff who will interact with the AI prospect identification system. Training should cover: interpreting AI scores, using the dashboard, integrating AI insights into gift officer workflows, understanding what the AI can and cannot do, and maintaining data quality going forward. Document everything for future reference.

Training Session Agenda (2-3 hours)

Module 1: Understanding AI Prospect Scoring (30 min)

  • What the AI model analyzes (giving history, engagement, wealth indicators)
  • How propensity scores are calculated (high-level, non-technical)
  • What scores mean: Tier 1/2/3/4 definitions and recommended actions
  • Limitations: AI is a tool, not a replacement for relationship knowledge

Module 2: Using the Dashboard (45 min)

  • Live demo: Power BI dashboard navigation
  • Filtering by score range, geography, gift officer
  • Exporting prospect lists for meeting preparation
  • Reading the engagement heatmap

Module 3: Workflow Integration (45 min)

  • Daily routine: Check new high-score alerts (email + dashboard)
  • Weekly routine: Review portfolio changes and score movements
  • Monthly routine: Portfolio review meeting using AI pipeline data
  • How to log contact reports in CRM to improve future AI scoring

Module 4: Data Stewardship (30 min)

  • Why data quality matters for AI accuracy
  • How to properly enter new gifts, update addresses, log interactions
  • Process for flagging incorrect AI scores or data errors
  • Annual data hygiene procedures

Deliver Training Materials

  • Record training session (Zoom/Teams recording)
  • Create quick-reference PDF: 'AI Prospect Dashboard Cheat Sheet'
  • Create 1-page workflow guide: 'Gift Officer Daily/Weekly AI Checklist'
  • Store all docs in shared SharePoint/Google Drive folder
Note

Schedule training no more than 1 week before go-live to maximize retention. Provide a recorded session for staff who miss training or join later. The single most important training outcome is that gift officers understand the daily alert email and know how to act on it. Over-training on technical details reduces adoption; focus on practical workflows. Plan a 30-minute follow-up Q&A session 2 weeks after go-live to address questions that arise from real usage.

Step 11: Configure Compliance & Data Governance Controls

Implement data governance controls to ensure the AI prospect identification system complies with applicable privacy laws and organizational policies. This is especially critical for nonprofits operating in Colorado, Oregon, Delaware, Maryland, Minnesota, or New Jersey where nonprofit exemptions from state privacy laws are limited or nonexistent.

1. Update the nonprofit's privacy policy to include an AI analytics disclosure. Add language such as: 'We may use automated analytical tools to analyze giving patterns and engagement data to better understand and serve our donors. This analysis helps us identify donors who may be interested in expanded partnership opportunities with our organization.'
2. Configure data retention policies in the CRM. In Raiser's Edge NXT, navigate to Administration > Data Management > Retention Policies and set:
   • Inactive donor records archived after 7 years
   • Engagement data (email opens, event attendance) retained for 5 years
   • Financial transaction records retained per IRS requirements (7+ years)
3. Implement an opt-out mechanism:
   • Add an 'AI Analytics Opt-Out' attribute/field to donor records in the CRM
   • Configure the AI platform integration to exclude opted-out donors from scoring
   • Add an opt-out link to the email communications footer
   • Document the opt-out request handling procedure for staff
4. Create an AI Acceptable Use Policy document:
   • Define approved uses of AI prospect scores
   • Define prohibited uses (e.g., discriminatory targeting, sharing scores externally)
   • Define data access controls: who can see scores and capacity ratings
   • Define incident response: what to do if donor data is exposed
5. Enable audit logging:
   • CRM: ensure the audit trail is enabled for all donor record changes
   • AI platform: enable access logging to track who views prospect reports
   • Power BI: enable activity logging in the admin portal
Note

76% of nonprofits lack an AI governance policy. Providing this as part of your implementation creates significant value and differentiates your MSP. For nonprofits in Colorado and Oregon specifically, conduct a detailed compliance review as these states do not exempt nonprofits from their consumer privacy laws. Consider engaging a privacy attorney for clients with significant operations in these states. The AI acceptable use policy template should be delivered as a Word document the client can adopt with minimal modification.

Custom AI Components

Donor Data Quality Audit Script

Type: Skill. A Python script that analyzes a CSV export of donor data from any CRM and generates a comprehensive data quality report. The report identifies missing fields, duplicate records, invalid emails, inconsistent gift coding, and overall data readiness for AI prospect scoring. It is run during Phase 1 to assess the client's data before platform deployment.

Implementation

Run this script against a CSV donor export to generate an HTML data quality report with an AI readiness score.
python
#!/usr/bin/env python3
"""Donor Data Quality Audit Script for AI Prospect Identification Projects
Usage: python data_quality_audit.py --input donor_export.csv --output audit_report.html
"""
import pandas as pd
import numpy as np
import argparse
import re
from datetime import datetime, timedelta
from collections import Counter

def load_data(filepath):
    df = pd.read_csv(filepath, low_memory=False, encoding='utf-8-sig')
    df.columns = [c.strip().lower().replace(' ', '_') for c in df.columns]
    return df

def detect_column_mapping(df):
    """Auto-detect common CRM column names and map to standard fields."""
    mappings = {
        'donor_id': ['constituent_id', 'donor_id', 'id', 'contact_id', 'account_number', 'record_id'],
        'first_name': ['first_name', 'first', 'fname', 'given_name'],
        'last_name': ['last_name', 'last', 'lname', 'surname', 'family_name'],
        'email': ['email', 'email_address', 'primary_email', 'e-mail'],
        'phone': ['phone', 'phone_number', 'primary_phone', 'home_phone', 'mobile'],
        'address': ['address', 'street', 'address_line_1', 'mailing_address', 'street_address'],
        'city': ['city', 'mailing_city'],
        'state': ['state', 'st', 'mailing_state', 'province'],
        'zip': ['zip', 'zip_code', 'postal_code', 'zipcode', 'mailing_zip'],
        'total_giving': ['total_giving', 'total_gifts', 'lifetime_giving', 'total_donated', 'cumulative_giving'],
        'last_gift_date': ['last_gift_date', 'last_donation_date', 'most_recent_gift', 'latest_gift_date'],
        'last_gift_amount': ['last_gift_amount', 'last_donation_amount', 'most_recent_gift_amount'],
        'first_gift_date': ['first_gift_date', 'first_donation_date', 'earliest_gift_date'],
        'donor_status': ['status', 'donor_status', 'constituent_status', 'record_status'],
        'deceased': ['deceased', 'is_deceased', 'deceased_flag'],
        'do_not_contact': ['do_not_contact', 'dnc', 'solicit_flag', 'no_contact']
    }
    result = {}
    for standard, options in mappings.items():
        for opt in options:
            if opt in df.columns:
                result[standard] = opt
                break
    return result

def analyze_completeness(df, col_map):
    results = []
    for field, col in col_map.items():
        total = len(df)
        non_null = df[col].notna().sum()
        non_empty = df[col].astype(str).str.strip().replace('', np.nan).notna().sum()
        pct_complete = round((non_empty / total) * 100, 1)
        results.append({'Field': field, 'Column': col, 'Total_Records': total,
                        'Populated': non_empty, 'Missing': total - non_empty,
                        'Completeness_%': pct_complete,
                        'Status': 'GOOD' if pct_complete >= 90 else 'WARNING' if pct_complete >= 70 else 'CRITICAL'})
    return pd.DataFrame(results)

def detect_duplicates(df, col_map):
    dupe_checks = []
    if 'first_name' in col_map and 'last_name' in col_map:
        name_col = [col_map['first_name'], col_map['last_name']]
        if 'zip' in col_map:
            name_col.append(col_map['zip'])
        dupes = df[df.duplicated(subset=name_col, keep=False)]
        dupe_checks.append({'Method': 'Name + ZIP', 'Duplicate_Records': len(dupes),
                            'Duplicate_Groups': len(dupes.drop_duplicates(subset=name_col)),
                            'Pct_of_Total': round((len(dupes)/len(df))*100, 1)})
    if 'email' in col_map:
        email_col = col_map['email']
        email_dupes = df[df[email_col].notna() & df.duplicated(subset=[email_col], keep=False)]
        dupe_checks.append({'Method': 'Email Address', 'Duplicate_Records': len(email_dupes),
                            'Duplicate_Groups': len(email_dupes.drop_duplicates(subset=[email_col])),
                            'Pct_of_Total': round((len(email_dupes)/len(df))*100, 1)})
    return pd.DataFrame(dupe_checks)

def validate_emails(df, col_map):
    if 'email' not in col_map:
        return {'valid': 0, 'invalid': 0, 'missing': len(df)}
    email_col = col_map['email']
    emails = df[email_col].dropna().astype(str)
    pattern = r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$'
    valid = emails.str.match(pattern).sum()
    invalid = len(emails) - valid
    missing = len(df) - len(emails)
    return {'valid': int(valid), 'invalid': int(invalid), 'missing': int(missing)}

def analyze_giving_history(df, col_map):
    results = {}
    if 'last_gift_date' in col_map:
        dates = pd.to_datetime(df[col_map['last_gift_date']], errors='coerce')
        now = datetime.now()
        results['donors_gave_last_12mo'] = int((dates > now - timedelta(days=365)).sum())
        results['donors_gave_last_36mo'] = int((dates > now - timedelta(days=1095)).sum())
        results['donors_lapsed_3plus_years'] = int((dates <= now - timedelta(days=1095)).sum())
        results['earliest_gift_on_file'] = str(dates.min().date()) if dates.notna().any() else 'N/A'
        results['years_of_history'] = round((now - dates.min()).days / 365.25, 1) if dates.notna().any() else 0
    if 'total_giving' in col_map:
        giving = pd.to_numeric(df[col_map['total_giving']], errors='coerce')
        results['total_donors_with_gifts'] = int(giving[giving > 0].count())
        results['median_lifetime_giving'] = round(float(giving[giving > 0].median()), 2)
        results['mean_lifetime_giving'] = round(float(giving[giving > 0].mean()), 2)
        results['donors_above_1000'] = int((giving >= 1000).sum())
        results['donors_above_5000'] = int((giving >= 5000).sum())
        results['donors_above_10000'] = int((giving >= 10000).sum())
    return results

def ai_readiness_score(completeness_df, dupe_df, giving_stats, total_records):
    score = 100
    deductions = []
    # Record count
    if total_records < 500:
        score -= 30
        deductions.append('CRITICAL: Fewer than 500 records (minimum for DonorSearch)')
    elif total_records < 10000:
        score -= 10
        deductions.append('WARNING: Fewer than 10,000 records (Dataro recommends 10K+ for best results)')
    # Data completeness
    for _, row in completeness_df.iterrows():
        if row['Status'] == 'CRITICAL':
            score -= 10
            deductions.append(f"CRITICAL: {row['Field']} only {row['Completeness_%']}% complete")
        elif row['Status'] == 'WARNING':
            score -= 5
            deductions.append(f"WARNING: {row['Field']} only {row['Completeness_%']}% complete")
    # Duplicates
    if not dupe_df.empty and dupe_df['Pct_of_Total'].max() > 15:
        score -= 15
        deductions.append(f"CRITICAL: {dupe_df['Pct_of_Total'].max()}% duplicate records detected")
    elif not dupe_df.empty and dupe_df['Pct_of_Total'].max() > 5:
        score -= 5
        deductions.append(f"WARNING: {dupe_df['Pct_of_Total'].max()}% duplicate records detected")
    # Giving history depth
    years = giving_stats.get('years_of_history', 0)
    if years < 3:
        score -= 20
        deductions.append(f'CRITICAL: Only {years} years of giving history (3-5 years recommended)')
    elif years < 5:
        score -= 5
        deductions.append(f'WARNING: Only {years} years of giving history (5+ years ideal for Dataro)')
    return max(0, min(100, score)), deductions

def generate_html_report(completeness_df, dupe_df, email_stats, giving_stats, readiness_score, deductions, total_records):
    readiness_color = '#27ae60' if readiness_score >= 80 else '#f39c12' if readiness_score >= 60 else '#e74c3c'
    html = f"""<!DOCTYPE html><html><head><title>Donor Data Quality Audit Report</title>
    <style>body{{font-family:Arial,sans-serif;margin:40px;}}table{{border-collapse:collapse;width:100%;margin:20px 0;}}
    th,td{{border:1px solid #ddd;padding:8px;text-align:left;}}th{{background:#2c3e50;color:white;}}
    .good{{color:#27ae60;font-weight:bold;}}.warning{{color:#f39c12;font-weight:bold;}}.critical{{color:#e74c3c;font-weight:bold;}}
    .score-box{{font-size:48px;color:{readiness_color};font-weight:bold;text-align:center;padding:20px;border:3px solid {readiness_color};border-radius:10px;width:200px;margin:20px auto;}}
    h1{{color:#2c3e50;}}h2{{color:#34495e;border-bottom:2px solid #3498db;padding-bottom:5px;}}</style></head><body>
    <h1>Donor Data Quality Audit Report</h1>
    <p>Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}</p>
    <p>Total Records Analyzed: <strong>{total_records:,}</strong></p>
    <h2>AI Readiness Score</h2>
    <div class='score-box'>{readiness_score}/100</div>
    <ul>{''.join(f'<li class="{"critical" if "CRITICAL" in d else "warning"}">{d}</li>' for d in deductions)}</ul>
    <h2>Field Completeness</h2>{completeness_df.to_html(index=False, classes='completeness')}
    <h2>Duplicate Analysis</h2>{dupe_df.to_html(index=False) if not dupe_df.empty else '<p>No duplicate analysis possible with available columns.</p>'}
    <h2>Email Validation</h2><ul><li>Valid: {email_stats['valid']:,}</li><li>Invalid format: {email_stats['invalid']:,}</li><li>Missing: {email_stats['missing']:,}</li></ul>
    <h2>Giving History Analysis</h2><ul>{''.join(f'<li><strong>{k.replace("_"," ").title()}</strong>: {v}</li>' for k,v in giving_stats.items())}</ul>
    <h2>Recommendations</h2><ol>
    <li>{'De-duplicate records before AI platform ingestion' if not dupe_df.empty and dupe_df['Pct_of_Total'].max() > 5 else 'Duplicate rate is acceptable'}</li>
    <li>{'Validate and update email addresses using NeverBounce or ZeroBounce' if email_stats['invalid'] > 0 else 'Email data looks clean'}</li>
    <li>{'Run NCOA address update to ensure mailing addresses are current' if 'address' in completeness_df['Field'].values and completeness_df[completeness_df['Field'] == 'address']['Completeness_%'].values[0] < 90 else 'Address data looks sufficient'}</li>
    <li>Recommended AI Platform: {'DonorSearch Enhanced CORE (500-9,999 records)' if total_records < 10000 else 'Dataro or DonorSearch AI Full (10,000+ records)'}</li>
    </ol></body></html>"""
    return html

def main():
    parser = argparse.ArgumentParser(description='Donor Data Quality Audit for AI Prospect Identification')
    parser.add_argument('--input', required=True, help='Path to donor CSV export')
    parser.add_argument('--output', default='audit_report.html', help='Output HTML report path')
    args = parser.parse_args()
    
    print(f'Loading data from {args.input}...')
    df = load_data(args.input)
    print(f'Loaded {len(df):,} records with {len(df.columns)} columns')
    
    col_map = detect_column_mapping(df)
    print(f'Detected column mappings: {col_map}')
    
    completeness = analyze_completeness(df, col_map)
    duplicates = detect_duplicates(df, col_map)
    emails = validate_emails(df, col_map)
    giving = analyze_giving_history(df, col_map)
    score, deductions = ai_readiness_score(completeness, duplicates, giving, len(df))
    
    html = generate_html_report(completeness, duplicates, emails, giving, score, deductions, len(df))
    with open(args.output, 'w') as f:
        f.write(html)
    print(f'Report generated: {args.output}')
    print(f'AI Readiness Score: {score}/100')

if __name__ == '__main__':
    main()

CRM-to-AI Platform Sync Monitor

Type: integration

A Power Automate flow (or Python cron job) that monitors the data sync between the CRM and the AI prospect identification platform. It checks daily that record counts match, identifies sync failures, and alerts the MSP if data freshness exceeds 48 hours. This ensures prospect scores remain current and catches integration breakdowns before the client notices.

Implementation:

Power Automate Flow Definition (JSON export) — Flow Name: CRM-AI Sync Health Monitor — Trigger: Recurrence - Daily at 7:00 AM. For MSPs preferring a Python-based approach, use this script with cron.
python
#!/usr/bin/env python3
"""CRM-to-AI Platform Sync Health Monitor
Schedule via cron: 0 7 * * * /usr/bin/python3 /opt/msp/sync_monitor.py
Or run as Azure Function with Timer Trigger.
"""
import requests
import json
import smtplib
from email.mime.text import MIMEText
from email.mime.multipart import MIMEMultipart
from datetime import datetime, timedelta
import os

# Configuration - store in environment variables or Azure Key Vault
CONFIG = {
    'crm_type': os.getenv('CRM_TYPE', 'raiser_edge_nxt'),  # raiser_edge_nxt | salesforce | donorperfect
    'crm_api_url': os.getenv('CRM_API_URL', 'https://api.sky.blackbaud.com/constituent/v1'),
    'crm_api_key': os.getenv('CRM_API_KEY'),
    'crm_oauth_token': os.getenv('CRM_OAUTH_TOKEN'),
    'ai_platform': os.getenv('AI_PLATFORM', 'donorsearch'),  # donorsearch | dataro
    'ai_api_url': os.getenv('AI_API_URL'),
    'ai_api_key': os.getenv('AI_API_KEY'),
    'smtp_server': os.getenv('SMTP_SERVER', 'smtp.office365.com'),
    'smtp_port': int(os.getenv('SMTP_PORT', '587')),
    'smtp_user': os.getenv('SMTP_USER'),
    'smtp_pass': os.getenv('SMTP_PASS'),
    'alert_recipients': os.getenv('ALERT_RECIPIENTS', 'msp-alerts@yourmsp.com').split(','),
    'client_name': os.getenv('CLIENT_NAME', 'Nonprofit Client'),
    'sync_freshness_hours': int(os.getenv('SYNC_FRESHNESS_HOURS', '48')),
    'record_count_tolerance_pct': float(os.getenv('RECORD_TOLERANCE_PCT', '2.0')),
}

def get_crm_record_count():
    """Get active donor count from CRM."""
    if CONFIG['crm_type'] == 'raiser_edge_nxt':
        headers = {
            'Bb-Api-Subscription-Key': CONFIG['crm_api_key'],
            'Authorization': f"Bearer {CONFIG['crm_oauth_token']}"
        }
        resp = requests.get(f"{CONFIG['crm_api_url']}/constituents?limit=1", headers=headers)
        return resp.json().get('count', 0)
    elif CONFIG['crm_type'] == 'salesforce':
        headers = {'Authorization': f"Bearer {CONFIG['crm_oauth_token']}"}
        query = 'SELECT COUNT() FROM Contact WHERE npe01__Donor__c = true'
        resp = requests.get(f"{CONFIG['crm_api_url']}/query?q={query}", headers=headers)
        return resp.json().get('totalSize', 0)
    elif CONFIG['crm_type'] == 'donorperfect':
        headers = {'Authorization': f"Bearer {CONFIG['crm_api_key']}"}
        resp = requests.get(f"{CONFIG['crm_api_url']}/donors?count_only=true", headers=headers)
        return resp.json().get('count', 0)
    return 0

def get_ai_platform_status():
    """Get record count and last sync timestamp from AI platform."""
    headers = {'Authorization': f"Bearer {CONFIG['ai_api_key']}"}
    if CONFIG['ai_platform'] == 'donorsearch':
        resp = requests.get(f"{CONFIG['ai_api_url']}/account/sync-status", headers=headers)
        data = resp.json()
        return {'record_count': data.get('total_records', 0),
                'last_sync': data.get('last_sync_timestamp', ''),
                'sync_status': data.get('status', 'unknown')}
    elif CONFIG['ai_platform'] == 'dataro':
        resp = requests.get(f"{CONFIG['ai_api_url']}/v1/integration/status", headers=headers)
        data = resp.json()
        return {'record_count': data.get('synced_contacts', 0),
                'last_sync': data.get('last_prediction_run', ''),
                'sync_status': data.get('integration_status', 'unknown')}
    return {'record_count': 0, 'last_sync': '', 'sync_status': 'unknown'}

def check_sync_health():
    issues = []
    crm_count = get_crm_record_count()
    ai_status = get_ai_platform_status()
    ai_count = ai_status['record_count']
    
    # Check record count discrepancy
    if crm_count > 0 and ai_count > 0:
        discrepancy = abs(crm_count - ai_count) / crm_count * 100
        if discrepancy > CONFIG['record_count_tolerance_pct']:
            issues.append(f"RECORD COUNT MISMATCH: CRM has {crm_count:,} records, AI platform has {ai_count:,} ({discrepancy:.1f}% discrepancy, threshold is {CONFIG['record_count_tolerance_pct']}%)")
    elif ai_count == 0:
        issues.append(f"CRITICAL: AI platform reports 0 records. CRM has {crm_count:,}. Sync may be broken.")
    
    # Check sync freshness
    if ai_status['last_sync']:
        last_sync = datetime.fromisoformat(ai_status['last_sync'].replace('Z', '+00:00'))
        hours_since = (datetime.now(last_sync.tzinfo) - last_sync).total_seconds() / 3600
        if hours_since > CONFIG['sync_freshness_hours']:
            issues.append(f"STALE DATA: Last sync was {hours_since:.0f} hours ago (threshold: {CONFIG['sync_freshness_hours']} hours). Last sync: {last_sync.strftime('%Y-%m-%d %H:%M')}")
    else:
        issues.append('WARNING: Unable to determine last sync timestamp.')
    
    # Check sync status
    if ai_status['sync_status'] not in ['active', 'healthy', 'connected', 'ok']:
        issues.append(f"SYNC STATUS ISSUE: AI platform reports sync status as '{ai_status['sync_status']}'")
    
    return {'healthy': len(issues) == 0, 'issues': issues, 'crm_count': crm_count,
            'ai_count': ai_count, 'last_sync': ai_status['last_sync'], 'sync_status': ai_status['sync_status']}

def send_alert(health_report):
    msg = MIMEMultipart('alternative')
    msg['Subject'] = f"[{'ALERT' if not health_report['healthy'] else 'OK'}] AI Sync Monitor - {CONFIG['client_name']}"
    msg['From'] = CONFIG['smtp_user']
    msg['To'] = ', '.join(CONFIG['alert_recipients'])
    body = f"""AI Prospect Platform Sync Health Report\n{'='*50}\nClient: {CONFIG['client_name']}\nTimestamp: {datetime.now().strftime('%Y-%m-%d %H:%M')}\nCRM Type: {CONFIG['crm_type']}\nAI Platform: {CONFIG['ai_platform']}\nCRM Record Count: {health_report['crm_count']:,}\nAI Platform Record Count: {health_report['ai_count']:,}\nLast Sync: {health_report['last_sync']}\nSync Status: {health_report['sync_status']}\nOverall Health: {'HEALTHY' if health_report['healthy'] else 'ISSUES DETECTED'}\n\n"""
    if health_report['issues']:
        body += 'Issues Detected:\n' + '\n'.join(f'  - {i}' for i in health_report['issues'])
        body += '\n\nAction Required: Review integration settings and re-sync if necessary.'
    msg.attach(MIMEText(body, 'plain'))
    with smtplib.SMTP(CONFIG['smtp_server'], CONFIG['smtp_port']) as server:
        server.starttls()
        server.login(CONFIG['smtp_user'], CONFIG['smtp_pass'])
        server.sendmail(CONFIG['smtp_user'], CONFIG['alert_recipients'], msg.as_string())

def main():
    try:
        health = check_sync_health()
        if not health['healthy']:
            send_alert(health)
            print(f"ALERT sent: {len(health['issues'])} issues detected")
        else:
            print(f"Sync healthy: CRM={health['crm_count']:,}, AI={health['ai_count']:,}")
            # Send weekly OK report on Mondays
            if datetime.now().weekday() == 0:
                send_alert(health)
    except Exception as e:
        error_report = {'healthy': False, 'issues': [f'MONITOR ERROR: {str(e)}'], 'crm_count': 0, 'ai_count': 0, 'last_sync': 'unknown', 'sync_status': 'monitor_error'}
        send_alert(error_report)

if __name__ == '__main__':
    main()

Prospect Score Change Alert Workflow

Type: workflow

A Power Automate workflow that detects when a donor's AI prospect score crosses a tier threshold (e.g., moves from Tier 2 to Tier 1) and automatically creates a task for the assigned gift officer, sends an email notification, and logs the score change in the CRM. This ensures that high-potential prospects are acted on immediately when the AI model identifies a scoring change.

Implementation

Power Automate Flow: Prospect Score Change Alert

This can also be implemented as a Salesforce Flow or Raiser's Edge NXT Action.

Trigger & Flow Configuration

Trigger: Scheduled — runs daily at 7:30 AM after sync monitor

1. TRIGGER: Recurrence — Frequency: Day, Interval: 1, Start time: 07:30 AM (client timezone)
2. ACTION: Get rows from SharePoint List 'Prospect_Scores_History' — Site: https://clientorg.sharepoint.com/sites/development, List: Prospect_Scores_History, Filter: Modified ge '@{addDays(utcNow(), -1)}'
3. ACTION: Apply to each (row in Prospect_Scores_History)

Condition 3a: Score Crossed Tier 1 Threshold

If: current_score >= 80 AND previous_score < 80

  • YES branch — ACTION: Send email (V2): To: gift_officer_email, Subject: '🔥 New Tier 1 Prospect: {donor_name}', Body includes donor name, previous score, new score, estimated gift capacity, last gift amount/date, recommended action (schedule personal visit within 2 weeks), and CRM profile link
  • YES branch — ACTION: Create a task (Planner): Plan: Major Gifts Pipeline, Bucket: New Tier 1 Prospects, Title: 'Contact {donor_name} - New Tier 1 AI Score: {current_score}', Due date: addDays(utcNow(), 14), Assigned to: gift_officer_email, Notes: 'AI identified this donor as a high-probability major gift prospect.'
  • YES branch — ACTION: Update row in SharePoint List: alert_sent: Yes, alert_sent_date: utcNow()

Condition 3c: Score Dropped Below Tier 2

If: current_score < 60 AND previous_score >= 60

  • YES branch — ACTION: Send email to gift officer: Subject: '⚠️ Score Decrease: {donor_name} dropped to Tier 3', Body: 'Review donor engagement and update CRM records.'
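For MSPs implementing the alert logic in a script rather than Power Automate, the tier-crossing conditions above reduce to two comparisons. This is an illustrative sketch; the thresholds mirror the flow conditions (Tier 1 at score ≥ 80, Tier 2 at score ≥ 60):

```python
# Illustrative Python equivalent of the Power Automate tier-threshold
# conditions. Thresholds mirror the flow: Tier 1 >= 80, Tier 2 >= 60.
TIER1_THRESHOLD = 80
TIER2_THRESHOLD = 60

def classify_score_change(previous_score, current_score):
    """Return the alert a score change should trigger, or None."""
    if current_score >= TIER1_THRESHOLD and previous_score < TIER1_THRESHOLD:
        return 'new_tier1'            # Condition 3a: email + Planner task
    if current_score < TIER2_THRESHOLD and previous_score >= TIER2_THRESHOLD:
        return 'dropped_below_tier2'  # Condition 3c: notify gift officer
    return None
```

A donor moving 75 → 85 triggers the Tier 1 alert; 85 → 90 triggers nothing, since the threshold was already crossed.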

SharePoint List Schema: Prospect_Scores_History

Create this list to track score changes over time. Configure the following columns:

  • donor_id (Single line of text, required)
  • donor_name (Single line of text)
  • gift_officer_email (Person or Group)
  • previous_score (Number, 0-100)
  • current_score (Number, 0-100)
  • previous_tier (Choice: Tier 1, Tier 2, Tier 3, Tier 4)
  • current_tier (Choice: Tier 1, Tier 2, Tier 3, Tier 4)
  • score_change_date (Date)
  • capacity_estimate (Currency)
  • last_gift_amount (Currency)
  • last_gift_date (Date)
  • crm_profile_link (Hyperlink)
  • alert_sent (Yes/No)
  • alert_sent_date (Date)
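The list can be created by hand in SharePoint, or provisioned programmatically via the Microsoft Graph lists API. The sketch below builds the create-list payload for the schema above; a real deployment would POST it to `https://graph.microsoft.com/v1.0/sites/{site-id}/lists` with an app-only bearer token (the helper function and column facets shown are a minimal illustration, not a full provisioning script):

```python
# Builds the Microsoft Graph create-list payload for the
# Prospect_Scores_History schema above. Sketch only: POST the returned JSON
# to /sites/{site-id}/lists with an authorized token to provision the list.
def build_prospect_list_payload():
    def col(name, facet, **opts):
        # Graph columnDefinition: one facet key per column type
        return {'name': name, facet: opts}

    tiers = ['Tier 1', 'Tier 2', 'Tier 3', 'Tier 4']
    return {
        'displayName': 'Prospect_Scores_History',
        'list': {'template': 'genericList'},
        'columns': [
            col('donor_id', 'text'),
            col('donor_name', 'text'),
            col('gift_officer_email', 'personOrGroup'),
            col('previous_score', 'number', minimum=0, maximum=100),
            col('current_score', 'number', minimum=0, maximum=100),
            col('previous_tier', 'choice', choices=tiers),
            col('current_tier', 'choice', choices=tiers),
            col('score_change_date', 'dateTime'),
            col('capacity_estimate', 'currency'),
            col('last_gift_amount', 'currency'),
            col('last_gift_date', 'dateTime'),
            col('crm_profile_link', 'hyperlinkOrPicture'),
            col('alert_sent', 'boolean'),
            col('alert_sent_date', 'dateTime'),
        ],
    }
```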

Daily Score Refresh Script

Run this script before the Power Automate flow triggers. It pulls the latest scores from the AI platform and populates the SharePoint tracking list.

Daily Score Refresh Script
python
# pulls AI prospect scores and updates SharePoint Prospect_Scores_History
# list. Run before the Power Automate flow triggers at 7:30 AM.

import requests
import json
from datetime import datetime

def refresh_prospect_scores(ai_api_url, ai_api_key, sharepoint_site, sp_list_id, sp_access_token):
    """Pull latest scores from AI platform and update SharePoint tracking list."""
    # Get current scores from AI platform
    headers = {'Authorization': f'Bearer {ai_api_key}'}
    resp = requests.get(f'{ai_api_url}/prospects/scores', headers=headers)
    current_scores = {p['donor_id']: p for p in resp.json()['prospects']}
    
    # Get previous scores from SharePoint
    sp_headers = {'Authorization': f'Bearer {sp_access_token}',
                  'Content-Type': 'application/json'}
    # Keep the item-collection URL separate from the ?$expand=fields query
    # string so item-level PATCH/POST URLs are built correctly
    sp_base = f'https://graph.microsoft.com/v1.0/sites/{sharepoint_site}/lists/{sp_list_id}/items'
    sp_resp = requests.get(f'{sp_base}?$expand=fields', headers=sp_headers)
    existing = {item['fields']['donor_id']: item for item in sp_resp.json().get('value', [])}
    
    def get_tier(score):
        if score >= 80: return 'Tier 1'
        elif score >= 60: return 'Tier 2'
        elif score >= 40: return 'Tier 3'
        return 'Tier 4'
    
    changes = []
    for donor_id, data in current_scores.items():
        new_score = data.get('propensity_score', 0)
        new_tier = get_tier(new_score)
        old_score = existing.get(donor_id, {}).get('fields', {}).get('current_score', 0)
        old_tier = get_tier(old_score) if old_score else 'Tier 4'
        
        if new_tier != old_tier:
            fields = {
                'donor_id': donor_id,
                'donor_name': data.get('name', ''),
                'previous_score': old_score,
                'current_score': new_score,
                'previous_tier': old_tier,
                'current_tier': new_tier,
                'score_change_date': datetime.utcnow().isoformat(),
                'capacity_estimate': data.get('estimated_capacity', 0),
                'last_gift_amount': data.get('last_gift_amount', 0),
                'alert_sent': False
            }
            if donor_id in existing:
                # Graph updates an existing list item via PATCH on /items/{id}/fields
                item_id = existing[donor_id]['id']
                requests.patch(f'{sp_base}/{item_id}/fields', headers=sp_headers, json=fields)
            else:
                # New items are created via POST with the values wrapped in 'fields'
                requests.post(sp_base, headers=sp_headers, json={'fields': fields})
            changes.append(f"{data.get('name')}: {old_tier} -> {new_tier} (Score: {old_score} -> {new_score})")
    
    return changes

Major Gift Pipeline Dashboard Template

Type: prompt

A Power BI dashboard template specification with DAX measures and visualization configurations for the major gift prospect pipeline. This template is imported into Power BI Desktop and connected to the client's data sources. It provides four dashboard pages: Executive Summary, Prospect Scoring Detail, Gift Officer Portfolio, and Engagement Heatmap.

Implementation

Data Model Tables Required

Table 1: Prospects — Source: AI Platform export (DonorSearch or Dataro) joined with CRM data

  • DonorID (text, primary key)
  • FullName (text)
  • Email (text)
  • City (text)
  • State (text)
  • AIScore (whole number, 0-100)
  • ScoreTier (text: Tier 1/2/3/4)
  • EstimatedCapacity (currency)
  • RecommendedAskAmount (currency)
  • LastGiftAmount (currency)
  • LastGiftDate (date)
  • FirstGiftDate (date)
  • LifetimeGiving (currency)
  • GiftCount (whole number)
  • AssignedGiftOfficer (text)
  • LastContactDate (date)
  • Tags (text, comma-separated: Hidden Gem, Mystery Mogul, Planned Giving, etc.)

Table 2: EngagementSignals — Source: CRM engagement data

  • DonorID (text, foreign key)
  • EngagementType (text: Email Open, Event Attendance, Volunteer, Meeting, Gift, Website Visit)
  • EngagementDate (date)
  • EngagementDetail (text)

Table 3: ScoreHistory — Source: SharePoint Prospect_Scores_History list

  • DonorID (text, foreign key)
  • ScoreDate (date)
  • Score (whole number)
  • Tier (text)

Table 4: Calendar — Auto-generated date table for time intelligence
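One way to generate the Calendar table is the standard DAX date-table pattern below; the 2015 start date is an illustrative assumption and should match the earliest gift date in the client's history. Mark the table as a Date Table and relate Calendar[Date] to ScoreHistory[ScoreDate] and the gift date columns.

```dax
// Auto-generated date table for time intelligence. Adjust the start date
// to cover the client's full giving history before relating it to facts.
Calendar =
ADDCOLUMNS(
    CALENDAR(DATE(2015, 1, 1), TODAY()),
    "Year", YEAR([Date]),
    "MonthNumber", MONTH([Date]),
    "Month", FORMAT([Date], "MMM YYYY")
)
```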

DAX Measures — Key Performance Indicators

DAX measures for Power BI
dax
// Paste each measure below into the model via Modeling > New Measure in Power BI Desktop

Total Prospects = COUNTROWS(Prospects)

Tier 1 Count = CALCULATE(COUNTROWS(Prospects), Prospects[ScoreTier] = "Tier 1")

Tier 2 Count = CALCULATE(COUNTROWS(Prospects), Prospects[ScoreTier] = "Tier 2")

Total Pipeline Value = SUMX(
    FILTER(Prospects, Prospects[AIScore] >= 60),
    Prospects[RecommendedAskAmount]
)

Avg Days Since Contact = AVERAGEX(
    FILTER(Prospects, Prospects[AIScore] >= 60),
    DATEDIFF(Prospects[LastContactDate], TODAY(), DAY)
)

Prospects Needing Contact = CALCULATE(
    COUNTROWS(Prospects),
    Prospects[AIScore] >= 60,
    DATEDIFF(Prospects[LastContactDate], TODAY(), DAY) > 30
)

New Tier 1 This Month = CALCULATE(
    COUNTROWS(ScoreHistory),
    ScoreHistory[Tier] = "Tier 1",
    MONTH(ScoreHistory[ScoreDate]) = MONTH(TODAY()),
    YEAR(ScoreHistory[ScoreDate]) = YEAR(TODAY())
)

// Conversion Rate depends on a [MajorGiftThreshold] measure; $10,000 is a
// common starting point, adjust to the client's major gift definition
MajorGiftThreshold = 10000

Conversion Rate = DIVIDE(
    CALCULATE(COUNTROWS(Prospects), Prospects[LifetimeGiving] >= [MajorGiftThreshold], Prospects[AIScore] >= 60),
    CALCULATE(COUNTROWS(Prospects), Prospects[AIScore] >= 60),
    0
)

Engagement Depth = AVERAGEX(
    Prospects,
    CALCULATE(
        COUNTROWS(EngagementSignals),
        FILTER(EngagementSignals, EngagementSignals[EngagementDate] >= TODAY() - 365)
    )
)

Hidden Gems = CALCULATE(
    COUNTROWS(Prospects),
    CONTAINSSTRING(Prospects[Tags], "Hidden Gem")
)

Page 1: Executive Pipeline Summary

Layout: 1280×720. Background: White with #2C3E50 header bar.

  • Row 1 — KPI Cards: [Total Prospects] titled 'Total Rated Prospects' | [Tier 1 Count] titled 'Tier 1 (Ready to Ask)' with conditional red formatting if 0 | [Total Pipeline Value] titled 'Estimated Pipeline Value' formatted as $#,##0 | [New Tier 1 This Month] titled 'New Tier 1 This Month'
  • Row 2 — Funnel Chart (left 50%): Values = Count by ScoreTier, Category = ScoreTier. Colors: Tier 1 = #e74c3c, Tier 2 = #f39c12, Tier 3 = #3498db, Tier 4 = #95a5a6
  • Row 2 — Donut Chart (right 50%): Pipeline Value by Gift Officer
  • Row 3 — Line Chart: ScoreHistory count by month, filtered to Tier 1+2 entries. Title: 'Major Gift Pipeline Trend (12 Months)'

Page 2: Prospect Scoring Detail

  • Slicer (top): ScoreTier multi-select, State, AssignedGiftOfficer
  • Table columns: FullName | AIScore | EstimatedCapacity | RecommendedAskAmount | LastGiftAmount | LastGiftDate | Tags
  • Conditional formatting on AIScore: 80–100 green, 60–79 yellow, 40–59 orange, <40 gray
  • Sortable by all columns; default sort: AIScore descending
  • Tooltip on hover: Show engagement count and last 3 engagement activities

Page 3: Gift Officer Portfolio View

  • Slicer (top): AssignedGiftOfficer single-select
  • KPI Row: Assigned Prospects | Portfolio Total Value | Avg Days Since Contact | Overdue Contacts
  • Table: Prospect list for selected officer with action due dates
  • Gauge: Contact cadence compliance (% of prospects contacted within 30 days)

Page 4: Engagement Heatmap

  • Matrix: Rows = Top 50 donors by AI score, Columns = Engagement Types, Values = Count of engagements in last 12 months. Conditional formatting: Gradient white (0) to dark blue (10+)
  • Highlight callout: Donors with AIScore >= 80 but low engagement = 'Cultivation Opportunity'
  • Highlight callout: Donors with high engagement but no gift officer assigned = 'Unassigned Hot Prospects'
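The 'Cultivation Opportunity' callout can be driven by a measure along these lines; the cutoff of fewer than 3 engagements in the last 12 months is an illustrative assumption to tune against the client's data, and the pattern assumes the Prospects-to-EngagementSignals relationship described above:

```dax
// Counts high-score prospects with little recent engagement. The
// fewer-than-3-engagements cutoff is an assumption; adjust per client.
Cultivation Opportunities =
COUNTROWS(
    FILTER(
        Prospects,
        Prospects[AIScore] >= 80 &&
        CALCULATE(
            COUNTROWS(EngagementSignals),
            EngagementSignals[EngagementDate] >= TODAY() - 365
        ) < 3
    )
)
```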

Donor Engagement Score Calculator

Type: skill

A custom engagement scoring algorithm that supplements the AI platform's propensity scores by calculating an internal engagement score based on the nonprofit's specific touchpoints. This score is stored in the CRM and fed back to the AI platform as an additional signal. It weights recent interactions more heavily and accounts for recency, frequency, and breadth of engagement.

Implementation:

Donor Engagement Score Calculator
python
#!/usr/bin/env python3
"""Donor Engagement Score Calculator
Calculates a 0-100 engagement score based on recency, frequency, and breadth
of donor interactions. Designed to supplement AI prospect propensity scores.

Usage:
  python engagement_scorer.py --input engagements.csv --donors donors.csv --output scored_donors.csv

Input CSV format (engagements.csv):
  donor_id, engagement_type, engagement_date, engagement_value

Engagement types: gift, email_open, email_click, event_attended, volunteer_hours,
                  meeting, phone_call, website_visit, social_media, survey_response,
                  board_service, committee_service, peer_referral
"""
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import argparse

# Engagement type weights (configurable per client)
ENGAGEMENT_WEIGHTS = {
    'gift': 10.0,
    'meeting': 8.0,
    'board_service': 9.0,
    'committee_service': 7.0,
    'event_attended': 6.0,
    'volunteer_hours': 5.0,
    'peer_referral': 7.0,
    'phone_call': 5.0,
    'email_click': 3.0,
    'survey_response': 4.0,
    'email_open': 1.5,
    'website_visit': 1.0,
    'social_media': 1.0,
}

# Recency decay: more recent = higher multiplier
def recency_multiplier(days_ago):
    """Exponential decay: interactions today = 1.0x, 1 year ago = 0.5x, 2 years ago = 0.25x"""
    half_life = 365  # days
    return np.power(0.5, days_ago / half_life)

def calculate_engagement_scores(engagements_df, donors_df):
    now = datetime.now()
    engagements_df['engagement_date'] = pd.to_datetime(engagements_df['engagement_date'], errors='coerce')
    engagements_df['days_ago'] = (now - engagements_df['engagement_date']).dt.days
    engagements_df['days_ago'] = engagements_df['days_ago'].clip(lower=0)
    
    # Apply weights and recency decay
    engagements_df['base_weight'] = engagements_df['engagement_type'].map(ENGAGEMENT_WEIGHTS).fillna(1.0)
    engagements_df['recency_mult'] = engagements_df['days_ago'].apply(recency_multiplier)
    engagements_df['weighted_score'] = engagements_df['base_weight'] * engagements_df['recency_mult']
    
    # Aggregate per donor
    donor_scores = engagements_df.groupby('donor_id').agg(
        raw_score=('weighted_score', 'sum'),
        total_interactions=('engagement_type', 'count'),
        unique_types=('engagement_type', 'nunique'),
        most_recent=('engagement_date', 'max'),
        gift_count=('engagement_type', lambda x: (x == 'gift').sum()),
        last_gift_days=('days_ago', lambda x: x[engagements_df.loc[x.index, 'engagement_type'] == 'gift'].min() if (engagements_df.loc[x.index, 'engagement_type'] == 'gift').any() else 9999),
    ).reset_index()
    
    # Breadth bonus: engaging across multiple channels is a strong signal
    max_types = len(ENGAGEMENT_WEIGHTS)
    donor_scores['breadth_bonus'] = (donor_scores['unique_types'] / max_types) * 15
    
    # Frequency bonus: consistent engagement matters
    # Calculate interactions per month over last 24 months
    recent = engagements_df[engagements_df['days_ago'] <= 730]
    monthly_freq = recent.groupby('donor_id').agg(
        months_active=('engagement_date', lambda x: x.dt.to_period('M').nunique())
    ).reset_index()
    donor_scores = donor_scores.merge(monthly_freq, on='donor_id', how='left')
    donor_scores['months_active'] = donor_scores['months_active'].fillna(0)
    donor_scores['frequency_bonus'] = (donor_scores['months_active'] / 24) * 10
    
    # Final score: raw + breadth + frequency, normalized to 0-100
    donor_scores['total_raw'] = donor_scores['raw_score'] + donor_scores['breadth_bonus'] + donor_scores['frequency_bonus']
    
    # Normalize using percentile ranking (0-100 scale)
    donor_scores['engagement_score'] = donor_scores['total_raw'].rank(pct=True) * 100
    donor_scores['engagement_score'] = donor_scores['engagement_score'].round(0).astype(int)
    
    # Engagement tier
    donor_scores['engagement_tier'] = pd.cut(
        donor_scores['engagement_score'],
        bins=[0, 25, 50, 75, 100],
        labels=['Low', 'Moderate', 'High', 'Very High'],
        include_lowest=True
    )
    
    # Merge with donor master
    result = donors_df.merge(
        donor_scores[['donor_id', 'engagement_score', 'engagement_tier', 'total_interactions',
                       'unique_types', 'most_recent', 'months_active']],
        on='donor_id', how='left'
    )
    result['engagement_score'] = result['engagement_score'].fillna(0).astype(int)
    # engagement_tier comes from pd.cut as a Categorical; cast to object so the
    # 'None' fill value does not raise for being outside the defined categories
    result['engagement_tier'] = result['engagement_tier'].astype(object).fillna('None')
    
    return result

def main():
    parser = argparse.ArgumentParser(description='Calculate donor engagement scores')
    parser.add_argument('--input', required=True, help='Engagements CSV file')
    parser.add_argument('--donors', required=True, help='Donors CSV file')
    parser.add_argument('--output', default='scored_donors.csv', help='Output CSV')
    parser.add_argument('--weights-config', default=None, help='Optional JSON file to override engagement weights')
    args = parser.parse_args()
    
    if args.weights_config:
        import json
        with open(args.weights_config) as f:
            custom_weights = json.load(f)
            ENGAGEMENT_WEIGHTS.update(custom_weights)
            print(f'Loaded custom weights from {args.weights_config}')
    
    engagements = pd.read_csv(args.input)
    donors = pd.read_csv(args.donors)
    
    print(f'Processing {len(engagements):,} engagement records for {len(donors):,} donors...')
    result = calculate_engagement_scores(engagements, donors)
    
    result.to_csv(args.output, index=False)
    print(f'Scored donors exported to {args.output}')
    print(f"\nEngagement Score Distribution:")
    print(result['engagement_tier'].value_counts().sort_index())
    print(f"\nTop 10 Most Engaged Donors:")
    print(result.nlargest(10, 'engagement_score')[['donor_id', 'engagement_score', 'engagement_tier', 'total_interactions', 'unique_types']].to_string(index=False))

if __name__ == '__main__':
    main()
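For a quick sanity check of the normalization logic, the percentile ranking and `pd.cut` tiering used in the script can be exercised on a toy set of raw totals (values are illustrative, not drawn from any client data):

```python
import pandas as pd

# Toy raw engagement totals to exercise the percentile-rank
# normalization and tier binning from calculate_engagement_scores().
raw = pd.DataFrame({'donor_id': [101, 102, 103, 104],
                    'total_raw': [2.0, 10.0, 25.0, 60.0]})
raw['engagement_score'] = (raw['total_raw'].rank(pct=True) * 100).round(0).astype(int)
raw['engagement_tier'] = pd.cut(raw['engagement_score'],
                                bins=[0, 25, 50, 75, 100],
                                labels=['Low', 'Moderate', 'High', 'Very High'],
                                include_lowest=True)
print(raw[['donor_id', 'engagement_score', 'engagement_tier']])
```

With four distinct raw totals, `rank(pct=True)` yields scores of 25, 50, 75, and 100, placing one donor in each tier; real portfolios will cluster, which is exactly what the percentile normalization is meant to surface.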

AI Governance Policy Generator for Nonprofits

Type: Prompt. A structured prompt template for generating a customized AI Acceptable Use Policy for nonprofit clients. The policy document covers donor data handling, AI tool usage guidelines, bias monitoring, transparency requirements, and staff responsibilities. The MSP runs this prompt with client-specific details to produce a ready-to-adopt policy document.

Implementation

AI Governance Policy Generator Prompt Template — replace all {PLACEHOLDERS} before running
text
# AI Governance Policy Generator Prompt Template
# Use with: ChatGPT-4, Claude, or Copilot
# Replace all {PLACEHOLDERS} with client-specific information before running

---
SYSTEM PROMPT:
You are a nonprofit governance and data privacy expert. Generate a comprehensive
AI Acceptable Use Policy for a 501(c)(3) nonprofit organization. The policy must
be practical, board-ready, and compliant with applicable state privacy laws.
Write in clear, non-technical language appropriate for nonprofit board members
and development staff.

USER PROMPT:
Please generate an AI Acceptable Use Policy for our nonprofit organization with
the following details:

- Organization Name: {ORGANIZATION_NAME}
- EIN: {EIN}
- State of Incorporation: {STATE}
- Operating States: {OPERATING_STATES_LIST}
- Annual Revenue: {ANNUAL_REVENUE}
- Number of Donor Records: {RECORD_COUNT}
- AI Tools in Use: {AI_TOOLS_LIST} (e.g., DonorSearch AI, Dataro, Power BI)
- CRM Platform: {CRM_PLATFORM}
- Staff Size: {STAFF_SIZE}
- Has existing data privacy policy: {YES_NO}
- Processes donor credit card data: {YES_NO}
- Has international (EU/UK) donors: {YES_NO}

The policy must include these sections:

1. PURPOSE AND SCOPE
   - Why the organization uses AI tools
   - Which staff and systems are covered
   - Relationship to existing data governance policies

2. APPROVED AI TOOLS AND USES
   - Enumerated list of approved AI platforms and their purposes
   - Approved use cases (prospect identification, donor segmentation, lapse prediction)
   - Prohibited uses (donor score sharing outside org, automated solicitation without human review, using AI scores for employment or volunteer decisions)

3. DONOR DATA HANDLING
   - What donor data is shared with AI platforms
   - Data minimization principles
   - Vendor data processing agreements required
   - Data retention and deletion schedules
   - Donor opt-out rights and process

4. TRANSPARENCY AND DISCLOSURE
   - Language to add to privacy policy regarding AI use
   - When and how to disclose AI involvement to donors
   - Board reporting requirements on AI tool usage

5. BIAS AND FAIRNESS
   - Quarterly review of AI-generated prospect lists for demographic bias
   - Process for investigating and addressing scoring anomalies
   - Prohibition on using AI scores as sole basis for donor cultivation decisions

6. STATE PRIVACY LAW COMPLIANCE
   - Specific requirements for {OPERATING_STATES_LIST}
   - Special attention to Colorado Consumer Privacy Act (if applicable)
   - Special attention to Oregon Consumer Privacy Act (if applicable, effective July 1, 2025 for nonprofits)
   - Consent requirements and opt-out mechanisms

7. SECURITY REQUIREMENTS
   - MFA required for all AI platform access
   - Role-based access controls
   - Annual security review of AI vendor SOC 2 compliance
   - Incident response for AI-related data breaches

8. STAFF RESPONSIBILITIES
   - Required training before AI tool access
   - Restrictions on entering donor data into general-purpose AI (ChatGPT, etc.)
   - Reporting obligations for suspected misuse

9. GOVERNANCE AND REVIEW
   - Policy review schedule (annual minimum)
   - Responsible officer (typically Director of Development or COO)
   - Board oversight requirements
   - Policy amendment process

Format as a professional policy document with section numbers, effective date placeholder, and signature lines for Board Chair and Executive Director.

MSP Instructions

1. Fill in all {PLACEHOLDERS} with client information gathered during onboarding.
2. Run through Claude or GPT-4 to generate an initial draft.
3. Review the output for accuracy and client-specific customization.
4. Present to the client's Executive Director and legal counsel for review.
5. Finalize and present to the Board of Directors for adoption.
6. Store the signed copy in the client documentation folder.
7. Schedule an annual review reminder in the PSA/ticketing system.
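Step 1 can be partially automated. The sketch below is a hypothetical helper (`fill_placeholders` is not part of any vendor tooling) that substitutes {PLACEHOLDERS} from a client-info dict and fails loudly if any are left unfilled, so an incomplete onboarding questionnaire never reaches the LLM:

```python
import re

def fill_placeholders(template: str, client: dict) -> str:
    """Replace {KEY} tokens with client values; raise if any placeholder remains."""
    filled = template
    for key, value in client.items():
        filled = filled.replace('{' + key + '}', str(value))
    leftover = re.findall(r'\{[A-Z_]+\}', filled)
    if leftover:
        raise ValueError(f'Unfilled placeholders: {leftover}')
    return filled

# Illustrative client values only
template = 'Organization Name: {ORGANIZATION_NAME}\nCRM Platform: {CRM_PLATFORM}'
client = {'ORGANIZATION_NAME': 'Riverbend Food Bank', 'CRM_PLATFORM': 'Bloomerang'}
print(fill_placeholders(template, client))
```

The leftover-placeholder check is the important part: a policy draft generated with a literal `{EIN}` in it is an embarrassing deliverable to catch only at board review.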

Testing & Validation

Client Handoff

The client handoff meeting should be a structured 90-minute session with the Director of Development, Major Gifts Officers, Prospect Researcher (if applicable), and Executive Director. Cover the following:

1. Solution Overview (15 min): Review what was implemented, how the AI scoring works at a high level, and what data feeds the model. Set expectations: AI is a prioritization tool that augments human judgment, not a replacement for relationship-based fundraising.
2. Dashboard Walkthrough (20 min): Live demonstration of all 4 Power BI dashboard pages. Show how to filter by tier, gift officer, geography, and score range. Demonstrate how to export a prospect list for a portfolio review meeting. Show the mobile Power BI app for on-the-go access.
3. Daily/Weekly Workflow Review (15 min): Walk through the 'Gift Officer Daily AI Checklist' document. Morning routine: check email alerts for new Tier 1 prospects. Weekly routine: review portfolio score changes in dashboard. Monthly routine: full portfolio review meeting using AI pipeline data.
4. Alert System Demo (10 min): Trigger a test alert and show it flowing through: email notification → Planner task → SharePoint log. Explain how to customize alert thresholds if needed.
5. Data Stewardship Training (15 min): Emphasize that AI accuracy depends on data quality. Train on proper CRM data entry for new gifts, contact reports, event attendance, and address updates. Explain the annual data hygiene process.
6. Compliance & Governance Review (10 min): Present the AI Acceptable Use Policy for adoption. Review the updated privacy policy language. Explain the donor opt-out process and staff responsibilities.
7. Documentation Handoff (5 min): Deliver the following documents: (a) AI Prospect Dashboard Quick Reference Guide (PDF), (b) Gift Officer Daily/Weekly AI Checklist (PDF), (c) System Architecture Diagram showing all integrations, (d) AI Acceptable Use Policy (Word), (e) Vendor contact information and support escalation paths, (f) Data Quality Audit baseline report, (g) Recorded training session video.

Success Criteria to Review Together

Note: Schedule a 30-minute check-in for 2 weeks post-launch and a formal 90-day review meeting to assess adoption and ROI.

Maintenance

Ongoing MSP Managed Service Responsibilities:

Daily (Automated)

  • CRM-to-AI platform sync health monitor runs at 7:00 AM (automated script/Power Automate). MSP receives alert only if issues detected.
  • Power BI dashboard data refresh at 6:00 AM. Monitor for refresh failures via Power BI Service alerts.

Weekly

  • Review sync monitor weekly summary report (sent automatically on Mondays).
  • Verify AI platform prediction refresh completed (Dataro refreshes weekly; DonorSearch varies by tier).
  • Check Power Automate flow run history for any failed alert deliveries.
  • Estimated time: 30 minutes/week.

Monthly

  • Run data quality spot check: export 50 random donor records and verify key fields (name, address, email, last gift) are accurate and current.
  • Review platform usage metrics: number of logins, reports generated, prospects viewed. Flag if usage drops significantly (indicates adoption issues).
  • Check for platform updates or new features from AI vendor (DonorSearch/Dataro release notes).
  • Verify all user accounts are current (disable departed staff, add new hires).
  • Estimated time: 2 hours/month.
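The monthly spot check lends itself to a short script. This is a minimal sketch; the field names are assumptions to adapt to the client's CRM export schema:

```python
import pandas as pd

def spot_check(donors: pd.DataFrame, key_fields: list, n: int = 50) -> pd.Series:
    """Sample up to n random donor records and count missing values per key field."""
    sample = donors.sample(n=min(n, len(donors)))
    return sample[key_fields].isna().sum()

# Hypothetical CRM export; real exports will have more columns and records.
donors = pd.DataFrame({'name': ['A. Smith', None, 'C. Jones'],
                       'email': ['a@example.org', 'b@example.org', None],
                       'last_gift_date': ['2024-05-01', '2024-06-12', '2024-01-30']})
print(spot_check(donors, ['name', 'email', 'last_gift_date']))
```

Missing-value counts are only a first pass; "accurate and current" still requires eyeballing the sampled records against recent gift activity, as described above.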

Quarterly

  • Comprehensive AI model performance review with client's Director of Development. Analyze: (a) How many AI-identified Tier 1 prospects received a solicitation? (b) What was the conversion rate? (c) Are there systematic blind spots in the scoring? (d) Should the major gift threshold be adjusted?
  • Data quality audit using the Donor Data Quality Audit Script. Compare AI Readiness Score to baseline.
  • Review and update engagement signal weights if the client has added new touchpoint types.
  • Verify compliance controls: opt-out list is being honored, privacy policy is current, AI governance policy is being followed.
  • Generate quarterly ROI report: new major gifts attributed to AI-identified prospects vs. platform and service costs.
  • Estimated time: 4-6 hours/quarter.
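The quarterly ROI line item reduces to simple arithmetic once gift totals have been attributed. A minimal sketch, assuming the client has already agreed on an attribution method for "AI-identified" gifts (that methodology is out of scope here, and the dollar figures below are illustrative):

```python
def quarterly_roi(attributed_major_gifts: float,
                  platform_cost: float,
                  service_cost: float) -> float:
    """Return ROI as a multiple: (attributed gifts - costs) / costs."""
    costs = platform_cost + service_cost
    return (attributed_major_gifts - costs) / costs

# e.g. $45,000 in attributed gifts vs. one quarter of a $1,695/yr platform
# subscription plus $1,500 in quarterly MSP service fees (illustrative numbers)
print(f'{quarterly_roi(45_000, 1_695 / 4, 1_500):.1f}x')
```

Reporting the result as a multiple ("22x") rather than a percentage tends to land better in board presentations, where the comparison is against the combined platform and service spend.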

Annually

  • Full CRM data hygiene: NCOA address update, email validation, deceased record suppression, comprehensive de-duplication.
  • AI vendor contract renewal review: evaluate pricing, compare to competitive alternatives, negotiate terms.
  • Privacy policy annual review and update for any new state laws taking effect.
  • AI Acceptable Use Policy annual review with client board.
  • Security audit: verify MFA compliance, review API key rotation, confirm vendor SOC 2 reports are current.
  • Platform upgrade/migration assessment: evaluate whether to upgrade tiers (e.g., Enhanced CORE → full DSAi) based on results.
  • Estimated time: 8-12 hours/year.

Model Retraining Triggers (contact AI vendor)

  • Client undergoes major organizational change (merger, new program launch, capital campaign)
  • Significant shift in donor base composition (e.g., large acquisition of new donors)
  • AI scoring accuracy drops below 65% in quarterly validation review
  • Client changes their major gift threshold definition
  • More than 2 years since last model retrain (Dataro auto-refreshes; DonorSearch may require manual request)
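One way to approximate the 65% accuracy check from the quarterly validation review is a tier-level hit rate: the share of top-tier prospects who actually made a major gift in the review window. Column names here are placeholders for the client's schema, and vendors may define accuracy differently, so treat this as a rough internal proxy:

```python
import pandas as pd

def tier1_hit_rate(validation: pd.DataFrame) -> float:
    """Share of Tier 1 prospects that made a major gift during the window."""
    tier1 = validation[validation['tier'] == 'Tier 1']
    if tier1.empty:
        return float('nan')
    return float(tier1['gave_major_gift'].mean())

# Illustrative validation extract (real reviews cover a full quarter's portfolio)
validation = pd.DataFrame({'tier': ['Tier 1', 'Tier 1', 'Tier 1', 'Tier 2'],
                           'gave_major_gift': [True, True, False, False]})
rate = tier1_hit_rate(validation)
if rate < 0.65:  # retraining-trigger threshold from the checklist above
    print(f'Hit rate {rate:.0%} is below 65%; contact the AI vendor about retraining')
```

Small Tier 1 cohorts make this number noisy, so compare against a trailing four-quarter average before invoking the retraining trigger.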

SLA Recommendations

  • Critical issues (sync failure, platform outage): 4-hour response, 24-hour resolution
  • Standard issues (scoring anomalies, report errors): 1-business-day response, 5-business-day resolution
  • Enhancement requests (new dashboards, workflow changes): scoped and quoted within 5 business days
  • Escalation path: MSP Level 1 → MSP Level 2 (CRM specialist) → AI vendor support → AI vendor account manager

Alternatives

Blackbaud Prospect Insights Native Add-on

For clients already using Blackbaud Raiser's Edge NXT, skip the third-party AI platform entirely and activate Blackbaud's native Prospect Insights and Prospect Insights Pro add-ons. These are built directly into the CRM, eliminating integration complexity. Prospect Insights uses Blackbaud's own AI models to identify major gift and planned giving prospects with prescriptive prioritizations.

Custom Python ML Pipeline with Open-Source Tools

Build a custom major gift propensity model using Python (scikit-learn, XGBoost, or LightGBM), pandas for data processing, and Power BI for visualization. The MSP extracts donor data from the CRM, trains a classification model on historical major gifts, and deploys scoring as an automated pipeline running on Azure (using the nonprofit's $2,000 annual Azure credits).

Kindsight (iWave) Premium Intelligence Platform

Deploy Kindsight's iWave platform as the primary prospect intelligence tool, with NonprofitOS generative AI co-pilot for automated prospect research. iWave aggregates 44 vetted data sources to build comprehensive donor profiles with wealth indicators, philanthropic history, and giving capacity ratings.

AgileSoftLabs White-Label Platform (MSP-Branded Solution)

Instead of reselling a third-party AI platform, deploy AgileSoftLabs' white-label nonprofit CRM and fundraising platform under the MSP's own brand. The platform includes AI-powered analytics, predictive donor insights, and compliance automation. The MSP controls the client relationship, pricing, and branding entirely.

Dataro as Primary with DonorSearch Enrichment

Use Dataro ($250-$499/month) as the primary AI scoring engine for propensity predictions, supplemented by DonorSearch's philanthropic database for wealth screening and capacity ratings. This two-platform approach provides both predictive scoring (Dataro) and descriptive intelligence (DonorSearch) at a moderate combined cost.
