
Implementation Guide: Research foundation grant opportunities matching organizational focus areas
A step-by-step implementation guide for MSPs deploying an AI agent that researches foundation grant opportunities matched to a Non-Profit Organization client's focus areas.
Hardware Procurement
Cloud Virtual Machine for n8n Self-Hosted Deployment
$120–$150/month MSP cost (offset by $2,000/year Azure nonprofit credit) / $200–$275/month suggested resale
Hosts the n8n workflow automation engine, PostgreSQL database, and RAG vector store. Self-hosted deployment gives the MSP full control over agent orchestration, avoids n8n Cloud execution limits, and keeps sensitive organizational data within a managed environment. Azure is preferred because eligible nonprofits receive $2,000/year in free Azure credits through Microsoft Philanthropies.
Staff Workstation (if client needs upgrade)
Dell Latitude 5550 Laptop
$950–$1,100 per unit MSP cost / $1,200–$1,400 suggested resale
Standard workstations for grant development staff to interact with the AI agent dashboard, review matched opportunities, and manage the grant pipeline. Only needed if client's existing hardware does not meet minimum browser requirements (Chrome 120+ or Edge 120+, 8GB+ RAM). Most non-profits will use existing workstations.
External Monitor for Grant Research Workflow
Dell P2425H 24-inch IPS Monitor
$180–$220 per unit MSP cost / $250–$300 suggested resale
Dual-monitor setup recommended for grant research comparison workflows — one screen for the AI agent dashboard and CRM pipeline, the other for grant application portals and document editing. Optional but significantly improves staff productivity.
Software Procurement
Instrumentl
$299–$499/month (Standard to Advanced AI plan) / Resale at $399–$649/month
Primary grant discovery and funder intelligence platform. AI-powered matching across 400,000+ funder profiles with 32,000+ active grants. Provides the structured grant data feed that the autonomous agent consumes, scores, and routes to the CRM. Includes deadline tracking, funder 990 analysis, and saved search alerts.
Grantable
$25–$75/month (nonprofit discount for orgs under $500K budget) / Resale at $75–$150/month
AI-assisted grant writing and funder prospecting. Used as the proposal drafting layer when the agent identifies high-match opportunities. Includes proprietary prospecting engine, funder intelligence, and AI content generation tuned for grant narratives.
n8n Community Edition (Self-Hosted)
$0/month software cost (hosting costs covered by Azure VM) / Included in managed service bundle
Core workflow automation and AI agent orchestration platform. Provides visual workflow builder with 400+ native integrations, HTTP request nodes for API calls, code nodes for custom logic, and native AI agent nodes with tool-calling capabilities. Runs the scheduled grant scanning, LLM-powered analysis, CRM sync, and notification workflows. License type: Sustainable Use License (free for self-hosted, unlimited executions).
OpenAI API (GPT-4.1 and GPT-4.1-mini)
$30–$150/month depending on volume (GPT-4.1: $2/1M input, $8/1M output; GPT-4.1-mini: $0.40/1M input, $1.60/1M output) / Resale at 40% markup: $42–$210/month
Powers the AI agent's reasoning capabilities. GPT-4.1-mini handles high-volume grant scanning and initial eligibility classification (cheap, fast). GPT-4.1 handles deep grant-to-organization matching analysis, scoring justification, and proposal outline generation (more capable, used selectively). Zero data retention on API tier protects organizational data.
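The two-pass cost split can be sanity-checked with simple arithmetic. A minimal sketch using the per-token prices listed above; the monthly volumes (roughly 1,000 grants/day screened, 10% escalated) and token counts per call are illustrative assumptions, not measurements:

```javascript
// Illustrative monthly OpenAI cost estimate for the two-pass matching pipeline.
// Prices are the per-token rates quoted above; all volumes are ASSUMPTIONS to adjust.
const PRICES = {
  'gpt-4.1':      { input: 2.00 / 1e6, output: 8.00 / 1e6 },  // $ per token
  'gpt-4.1-mini': { input: 0.40 / 1e6, output: 1.60 / 1e6 },
};

// Cost of one pass: calls x (input tokens + output tokens) at the model's rates.
function passCost(model, calls, inTokensPerCall, outTokensPerCall) {
  const p = PRICES[model];
  return calls * (inTokensPerCall * p.input + outTokensPerCall * p.output);
}

// Assumed volume: ~30,000 grants/month screened by mini, 10% escalated to GPT-4.1.
const screening = passCost('gpt-4.1-mini', 30000, 1200, 300);
const deep = passCost('gpt-4.1', 3000, 4000, 1500);
console.log(`screening ~ $${screening.toFixed(2)}/mo, deep ~ $${deep.toFixed(2)}/mo`);
```

Under these assumptions the screening pass costs roughly $29/month and the deep pass about $60/month, which lands inside the $30–$150 range quoted above.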
PostgreSQL 16
$0/month (runs on Azure VM) / Included in managed service bundle
Production database for n8n workflow data, grant opportunity records, matching scores, and agent execution logs. Also hosts pgvector extension for the RAG vector store that indexes the organization's mission documents and past proposals.
Salesforce Nonprofit Cloud (Power of Us)
$0/month for first 10 users via Power of Us program / MSP charges $150–$300/month for configuration and management
CRM platform where matched grant opportunities are pushed as pipeline records with deadlines, eligibility scores, and recommended actions. Tracks the full grant lifecycle from discovery through submission and reporting. Free tier for nonprofits makes this the most cost-effective CRM option. If client already uses Bloomerang or DonorPerfect, those can be substituted.
Microsoft 365 Business Basic (Nonprofit)
$0/month via Microsoft Philanthropies / MSP charges $5–$10/user/month for management
Provides SharePoint Online for document storage (grant proposals, funder research, organization mission docs), Outlook for automated notification emails, Teams for collaboration on grant applications, and OneDrive for individual file management. Free for 501(c)(3) organizations.
Qdrant Vector Database
$0/month (runs as Docker container on Azure VM) / Included in managed service bundle
High-performance vector database for the RAG knowledge base. Stores embeddings of the organization's mission statement, strategic plan, program descriptions, past grant proposals, and funder communications. Enables the AI agent to retrieve contextually relevant organizational information when scoring grant matches and generating proposal outlines. Alternative: use pgvector extension in PostgreSQL for simpler deployments.
Prerequisites
- Active 501(c)(3) status with current IRS determination letter (required for nonprofit software discounts and grant eligibility verification)
- Existing donor CRM system with API access enabled — Salesforce NPSP (preferred, free via Power of Us), Bloomerang, or DonorPerfect. If no CRM exists, Salesforce NPSP will be deployed as part of this project.
- Documented organizational mission statement, current strategic plan, and list of program focus areas (minimum 1-page mission summary and 3–5 defined focus areas for AI matching configuration)
- At least 3–5 past grant proposals or applications in digital format (PDF or Word) for RAG knowledge base seeding — more is better for matching accuracy
- Cloud productivity suite — Microsoft 365 (free for nonprofits) or Google Workspace (free for nonprofits) — with admin access for the MSP to configure SharePoint/Drive integrations
- Stable internet connection: minimum 25 Mbps download / 10 Mbps upload at the primary office location
- Modern web browser on all staff workstations: Chrome 120+ or Microsoft Edge 120+
- Designated grant program staff member (1–2 people) who will serve as the primary users and provide feedback during tuning — minimum 4 hours/week availability during implementation
- Azure account registered through Microsoft Philanthropies nonprofit program to claim $2,000/year in free credits (MSP can assist with registration in Phase 1)
- OpenAI API account with payment method configured and $50 initial credit loaded (MSP will create and manage this under the client's organization)
- Firewall/network configuration allowing outbound HTTPS (port 443) to: api.openai.com, instrumentl.com, login.salesforce.com, n8n self-hosted domain, and qdrant container port
- Written authorization from the nonprofit's executive director or board-designated officer approving the use of AI tools for grant research and proposal drafting (important for compliance and funder disclosure requirements)
Installation Steps
Step 1: Provision Azure Infrastructure and Configure Nonprofit Credits
Set up the Azure environment that will host the n8n agent orchestration platform, PostgreSQL database, and Qdrant vector store. Register the nonprofit for Microsoft Azure nonprofit credits ($2,000/year) if not already done. Create a resource group, deploy the virtual machine, and configure networking.
# Create resource group
az group create --name rg-grantscout-prod --location eastus
# Create virtual machine (B4ms: 4 vCPU, 16GB RAM)
az vm create \
--resource-group rg-grantscout-prod \
--name vm-grantscout-01 \
--image Canonical:ubuntu-24_04-lts:server:latest \
--size Standard_B4ms \
--admin-username grantscoutadmin \
--generate-ssh-keys \
--os-disk-size-gb 128 \
--public-ip-sku Standard
# Open required ports
az vm open-port --resource-group rg-grantscout-prod --name vm-grantscout-01 --port 443 --priority 100
az vm open-port --resource-group rg-grantscout-prod --name vm-grantscout-01 --port 5678 --priority 110
# Configure NSG to restrict n8n port 5678 to MSP IP range only
az network nsg rule update \
--resource-group rg-grantscout-prod \
--nsg-name vm-grantscout-01NSG \
--name open-port-5678 \
--source-address-prefixes <MSP_OFFICE_IP>/32
Azure nonprofit credit approval typically takes 3–5 business days. Start this process at project kickoff. The B4ms VM size provides headroom for the n8n workflows, PostgreSQL, and Qdrant running concurrently. Monitor resource usage in the first month and downscale to B2ms if utilization is consistently below 40%. Ensure SSH keys are stored securely in the MSP's credential vault.
Step 2: Install Docker, Docker Compose, and Base Infrastructure on Azure VM
Install Docker Engine and Docker Compose on the Ubuntu VM to containerize all services (n8n, PostgreSQL, Qdrant). Containerization simplifies updates, backups, and disaster recovery.
sudo apt update && sudo apt upgrade -y
sudo apt install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
sudo usermod -aG docker grantscoutadmin
newgrp docker
docker --version
docker compose version
sudo mkdir -p /opt/grantscout/{n8n,postgres,qdrant,backups,rag-docs}
sudo chown -R grantscoutadmin:grantscoutadmin /opt/grantscout
cd /opt/grantscout
Always use Docker's official repository, not the Ubuntu-bundled docker.io package, for the latest security patches and features. The /opt/grantscout directory will be the root for all project files. Set up automated daily snapshots of the Azure VM disk as an additional backup layer.
Step 3: Deploy PostgreSQL, Qdrant, and n8n via Docker Compose
Create the Docker Compose configuration that runs all three core services: PostgreSQL 16 (with pgvector for optional vector search), Qdrant (dedicated vector database for RAG), and n8n (workflow automation and agent orchestration). All services communicate over an internal Docker network.
cat > /opt/grantscout/docker-compose.yml << 'EOF'
version: '3.8'
services:
postgres:
image: pgvector/pgvector:pg16
container_name: grantscout-postgres
restart: always
environment:
POSTGRES_DB: n8n
POSTGRES_USER: n8n_user
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
volumes:
- /opt/grantscout/postgres/data:/var/lib/postgresql/data
ports:
- '127.0.0.1:5432:5432'
healthcheck:
test: ['CMD-SHELL', 'pg_isready -U n8n_user -d n8n']
interval: 10s
timeout: 5s
retries: 5
qdrant:
image: qdrant/qdrant:v1.12.1
container_name: grantscout-qdrant
restart: always
volumes:
- /opt/grantscout/qdrant/storage:/qdrant/storage
ports:
- '127.0.0.1:6333:6333'
environment:
QDRANT__SERVICE__API_KEY: ${QDRANT_API_KEY}
n8n:
image: n8nio/n8n:latest
container_name: grantscout-n8n
restart: always
environment:
- DB_TYPE=postgresdb
- DB_POSTGRESDB_HOST=postgres
- DB_POSTGRESDB_PORT=5432
- DB_POSTGRESDB_DATABASE=n8n
- DB_POSTGRESDB_USER=n8n_user
- DB_POSTGRESDB_PASSWORD=${POSTGRES_PASSWORD}
- N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
- N8N_HOST=${N8N_HOST}
- N8N_PORT=5678
- N8N_PROTOCOL=https
- WEBHOOK_URL=https://${N8N_HOST}/
- N8N_BASIC_AUTH_ACTIVE=true
- N8N_BASIC_AUTH_USER=${N8N_AUTH_USER}
- N8N_BASIC_AUTH_PASSWORD=${N8N_AUTH_PASSWORD}
- GENERIC_TIMEZONE=America/New_York
- N8N_LOG_LEVEL=info
ports:
- '5678:5678'
volumes:
- /opt/grantscout/n8n/data:/home/node/.n8n
- /opt/grantscout/rag-docs:/home/node/rag-docs
depends_on:
postgres:
condition: service_healthy
qdrant:
condition: service_started
EOF
# Set CLIENT_DOMAIN in the shell first, e.g. export CLIENT_DOMAIN=clientdomain.org
cat > /opt/grantscout/.env << EOF
POSTGRES_PASSWORD=$(openssl rand -base64 24)
QDRANT_API_KEY=$(openssl rand -base64 32)
N8N_ENCRYPTION_KEY=$(openssl rand -hex 32)
N8N_HOST=grantscout.${CLIENT_DOMAIN}
N8N_AUTH_USER=mspadmin
N8N_AUTH_PASSWORD=$(openssl rand -base64 16)
EOF
chmod 600 /opt/grantscout/.env
cd /opt/grantscout
docker compose up -d
docker compose ps
docker compose logs --tail=50
Record all generated credentials in the MSP's credential vault (e.g., IT Glue, Hudu, or Passportal) immediately. The N8N_ENCRYPTION_KEY is especially important — if lost, all stored credentials in n8n workflows become unrecoverable. Use the pgvector image instead of standard PostgreSQL to enable vector search as a fallback if Qdrant has issues. The N8N_HOST should be set to the actual subdomain that will be configured with an SSL certificate in the next step.
Step 4: Configure Reverse Proxy with SSL and DNS
Set up Caddy as a reverse proxy with automatic HTTPS/SSL certificate provisioning via Let's Encrypt. Configure DNS to point the n8n subdomain to the Azure VM's public IP. This secures the n8n web interface and webhook endpoints.
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install -y caddy
# Unquoted EOF so the shell expands ${CLIENT_DOMAIN}; sudo tee (not sudo cat >) so the redirect has root permissions
sudo tee /etc/caddy/Caddyfile > /dev/null << EOF
grantscout.${CLIENT_DOMAIN} {
reverse_proxy localhost:5678
encode gzip
log {
output file /var/log/caddy/n8n-access.log
format json
}
}
EOF
sudo mkdir -p /var/log/caddy
sudo systemctl enable caddy
sudo systemctl restart caddy
curl -I https://grantscout.${CLIENT_DOMAIN}
az network nsg rule delete --resource-group rg-grantscout-prod --nsg-name vm-grantscout-01NSG --name open-port-5678
Before this step, create a DNS A record pointing grantscout.{clientdomain}.com to the Azure VM's public IP address. Caddy automatically provisions and renews Let's Encrypt SSL certificates. After Caddy is running and SSL is confirmed, remove the direct port 5678 NSG rule so n8n is only accessible through the encrypted Caddy proxy. If the client doesn't have a domain or DNS management capability, use an MSP-owned domain (e.g., grantscout-clientname.mspdomain.com).
Step 5: Configure Instrumentl Account and API Access
Set up the Instrumentl subscription, configure the organization profile with mission-aligned focus areas, and establish the data feed that the n8n agent will consume. Instrumentl does not have a public REST API, so we will use browser automation or their email digest + webhook approach for automated data ingestion.
Host: outlook.office365.com (for M365) or imap.gmail.com (for Google)
Port: 993
User: grants-inbox@clientdomain.org
Password: <app-specific-password>
Instrumentl's strength is its comprehensive funder database and AI matching, but it lacks a public API for direct programmatic access. The recommended approach is to use Instrumentl's email digest feature as the data ingestion mechanism: Instrumentl sends daily/weekly digest emails with matched grants, and n8n monitors the email inbox via IMAP, parses the grant data, and feeds it into the agent pipeline. For clients on the Advanced AI plan, also explore Instrumentl's newer features that may include webhook or integration capabilities. The MSP should also manually export the initial grant list as CSV for the RAG knowledge base seeding.
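The IMAP-parsing step above can be sketched as an n8n Code node. The per-grant line format below is a hypothetical example — inspect a real Instrumentl digest email and adjust the regex to the actual layout before relying on it:

```javascript
// Sketch of a digest-parsing Code node. The line format is an ASSUMPTION:
// "Grant Name | Funder | $10,000 - $50,000 | Due: 2025-03-01"
function parseDigestLines(emailText) {
  const line = /^(.+?) \| (.+?) \| \$([\d,]+) - \$([\d,]+) \| Due: (\d{4}-\d{2}-\d{2})$/;
  return emailText.split('\n').flatMap(raw => {
    const m = raw.trim().match(line);
    if (!m) return []; // skip headers, footers, and anything that doesn't match
    return [{
      grant_name: m[1],
      funder_name: m[2],
      amount_min: Number(m[3].replace(/,/g, '')), // strip thousands separators
      amount_max: Number(m[4].replace(/,/g, '')),
      deadline: m[5],
      source: 'Instrumentl',
    }];
  });
}
```

Parsed records then flow into the matching pipeline and, ultimately, the Salesforce upsert; unparseable digests should be routed to a review queue rather than silently dropped.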
Step 6: Configure Salesforce Nonprofit Cloud CRM
Set up Salesforce NPSP (Nonprofit Success Pack) using the free Power of Us licenses, configure custom objects for grant pipeline tracking, and create the API connected app that n8n will use to push matched grant opportunities into the CRM.
Grant_Opportunity__c Custom Object Fields
- Grant_Name__c (Text, 255)
- Funder_Name__c (Text, 255)
- Funding_Amount_Min__c (Currency)
- Funding_Amount_Max__c (Currency)
- Deadline__c (Date)
- Focus_Area__c (Picklist: Education, Health, Environment, Arts, Social Services, Other)
- Match_Score__c (Percent)
- Match_Rationale__c (Long Text Area, 5000)
- Status__c (Picklist: New, Reviewing, Applying, Submitted, Awarded, Declined, Expired)
- Source__c (Text: Instrumentl, Web Scrape, Manual)
- Eligibility_Summary__c (Long Text Area, 3000)
- Application_URL__c (URL)
- AI_Proposal_Outline__c (Long Text Area, 10000)
- Agent_Discovery_Date__c (Date)
- External_ID__c (Text, unique, external ID - for upsert operations)
Connected App Settings for n8n API Access
- Connected App Name: GrantScout n8n Integration
- API Name: GrantScout_n8n
- Enable OAuth Settings: checked
- Callback URL: https://grantscout.clientdomain.com/rest/oauth2-credential/callback
- Selected OAuth Scopes: Full access (full), Perform requests at any time (refresh_token)
- Require Secret for Web Server Flow: checked
Salesforce Power of Us approval takes 1–2 weeks. Start the application immediately at project kickoff, in parallel with Azure provisioning. If the client already has a CRM (Bloomerang, DonorPerfect, Little Green Light), adapt the integration approach — Bloomerang has a REST API, DonorPerfect has XML-based API. The custom Grant_Opportunity__c object is purpose-built for this project and does not interfere with standard NPSP donation tracking objects. Create a dedicated integration user (not a named user license) to avoid issues when staff accounts change.
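The External_ID__c field above is what makes the daily push idempotent: Salesforce's REST API supports upsert-by-external-ID via a PATCH request, so re-running the scan updates existing records instead of duplicating them. A minimal sketch of the request n8n's HTTP Request node would issue — the API version (v61.0) and helper name are illustrative:

```javascript
// Builds an upsert-by-external-ID request for the Grant_Opportunity__c custom object.
// PATCH to .../sobjects/<Object>/<ExternalIdField>/<value> inserts or updates in one call.
function buildUpsertRequest(instanceUrl, externalId, grant) {
  return {
    method: 'PATCH',
    url: `${instanceUrl}/services/data/v61.0/sobjects/Grant_Opportunity__c/External_ID__c/${encodeURIComponent(externalId)}`,
    headers: { 'Content-Type': 'application/json' },
    // Note: the external ID field itself must NOT be repeated in the body.
    body: JSON.stringify({
      Grant_Name__c: grant.grant_name,
      Funder_Name__c: grant.funder_name,
      Deadline__c: grant.deadline,
      Match_Score__c: grant.match_score,
      Status__c: 'New',
    }),
  };
}
```

A stable external ID (e.g., a hash of funder + grant name) lets the agent safely re-process the same digest twice.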
Step 7: Build the RAG Knowledge Base with Organization Documents
Collect the nonprofit's mission-critical documents, generate vector embeddings using OpenAI's embedding model, and store them in Qdrant. This knowledge base enables the AI agent to understand the organization's specific focus areas, past grant history, and proposal language when scoring new opportunities.
scp -r ./client-docs/* grantscoutadmin@<VM_IP>:/opt/grantscout/rag-docs/
curl -X PUT 'http://localhost:6333/collections/org_knowledge' \
-H 'Content-Type: application/json' \
-H "api-key: ${QDRANT_API_KEY}" \
-d '{
"vectors": {
"size": 1536,
"distance": "Cosine"
},
"optimizers_config": {
"indexing_threshold": 1000
}
}'
curl 'http://localhost:6333/collections/org_knowledge' \
-H "api-key: ${QDRANT_API_KEY}"
The RAG knowledge base is the single most important factor in matching quality. Spend extra time ensuring comprehensive document collection. At minimum, you need: (1) mission statement, (2) program descriptions, (3) at least 3 past proposals. The vector dimension of 1536 corresponds to OpenAI's text-embedding-3-small model. If using text-embedding-3-large (dimension 3072), update the collection size accordingly. Document processing and embedding will be handled by the n8n RAG Ingestion workflow built in Step 9.
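Once the collection exists, the matching agent retrieves organizational context through Qdrant's points/search endpoint: embed the grant description, then search for the most similar chunks. A minimal sketch of the request an n8n HTTP Request node would send; the doc_type filter is an optional illustration, not a requirement:

```javascript
// Builds a Qdrant similarity-search request against the org_knowledge collection.
function buildQdrantSearch(queryVector, limit = 5) {
  return {
    method: 'POST',
    // 'qdrant' resolves via the Docker network; use localhost:6333 from the host
    url: 'http://qdrant:6333/collections/org_knowledge/points/search',
    body: {
      vector: queryVector,   // 1536-dim embedding of the grant description
      limit,                 // top-k organizational chunks to retrieve
      with_payload: true,    // return the stored text, not just point IDs
      filter: {              // optional: bias retrieval toward mission documents
        should: [{ key: 'doc_type', match: { value: 'mission' } }],
      },
    },
  };
}
```

The returned payloads (mission text, program descriptions, past-proposal excerpts) are concatenated into the GPT-4.1 matching prompt in Step 10.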
Step 8: Configure n8n Credentials and Base Settings
Log into the n8n web interface, configure all required API credentials (OpenAI, Salesforce, IMAP, Qdrant), and set up the base environment that workflows will use.
Type: OpenAI
API Key: sk-...
Organization ID: org-... (optional)
Type: Salesforce
Client ID: <Connected App Consumer Key>
Client Secret: <Connected App Consumer Secret>
Redirect URL: https://grantscout.clientdomain.com/rest/oauth2-credential/callback
Environment: Production
- Click 'Connect' and authorize via Salesforce login
Type: IMAP
Host: outlook.office365.com
Port: 993
User: grants-inbox@clientdomain.org
Password: <app-password>
SSL/TLS: true
Type: Header Auth
Name: api-key
Value: <QDRANT_API_KEY from .env>
Type: Microsoft OAuth2
Register app at Azure AD > App Registrations
Grant permissions: Files.ReadWrite.All, Sites.ReadWrite.All
All credentials are encrypted at rest using the N8N_ENCRYPTION_KEY set in the .env file. Document every credential's source and expiration date in the MSP's documentation system. OpenAI API keys have no expiration but should be rotated quarterly as a security best practice. Salesforce OAuth tokens auto-refresh but require re-authorization if the integration user's password changes. Create a maintenance calendar reminder to check credential health monthly.
Step 9: Deploy n8n RAG Document Ingestion Workflow
Build and activate the n8n workflow that processes the organization's documents into vector embeddings and stores them in Qdrant. This workflow handles PDF parsing, text chunking, embedding generation, and vector storage. It runs once for initial ingestion and can be re-triggered when new documents are added.
# points_count should be > 0 (typically 50-500 depending on doc volume)
# Verify embeddings were stored:
curl 'http://localhost:6333/collections/org_knowledge' \
-H "api-key: ${QDRANT_API_KEY}" | python3 -m json.tool
Text chunking strategy significantly impacts retrieval quality. Use roughly 800-token chunks with 200-token overlap for grant-related documents (the ingestion workflow approximates this with word counts). Mission statements and program descriptions should be stored as complete documents (no chunking) since they are often retrieved in full. Run this workflow any time the client provides updated strategic plans, new program descriptions, or wants to add past proposals to improve matching.
Step 10: Deploy Core Grant Scanning and Matching Agent Workflow
Build and activate the primary autonomous agent workflow in n8n. This workflow runs on a daily schedule: it monitors the Instrumentl email digest, extracts grant opportunity data, performs AI-powered matching against the organization's RAG knowledge base, scores each opportunity, and pushes qualified matches to Salesforce CRM with detailed analysis.
Test with a manual execution first:
The daily scan schedule should be set for early morning (6:00 AM) so matched grants are ready for staff review when they start their workday. The agent uses a two-pass approach: Pass 1 uses GPT-4.1-mini for fast, cheap initial screening of all grants. Pass 2 uses GPT-4.1 for deep analysis of only the grants that pass the initial screen (typically 10-20% of total). This dramatically reduces API costs while maintaining matching quality. Expect the first few weeks to require match threshold tuning — start with a 60% match threshold and adjust based on staff feedback.
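The two-pass gating described above can be sketched as plain routing logic. This is a sketch under stated assumptions — Pass 1 (GPT-4.1-mini) is assumed to return a rough 0–100 eligibility score, and the cutoffs mirror the suggested 60% starting threshold; both are meant to be tuned, and the function names are hypothetical:

```javascript
// Two-pass routing: cheap screen first, expensive deep analysis only on survivors.
const SCREEN_CUTOFF = 40;   // Pass 1: discard obvious non-matches cheaply
const MATCH_THRESHOLD = 60; // Pass 2: minimum deep-analysis score pushed to the CRM

function routeGrants(grants, deepScore) {
  // grants: [{ quick_score: 0-100, ... }]; deepScore: per-grant GPT-4.1 call (expensive)
  const screened = grants.filter(g => g.quick_score >= SCREEN_CUTOFF);
  return screened
    .map(g => ({ ...g, match_score: deepScore(g) }))
    .filter(g => g.match_score >= MATCH_THRESHOLD);
}
```

Because deepScore only runs on the screened subset, lowering SCREEN_CUTOFF trades API cost for recall — a useful lever during the tuning phase.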
Step 11: Deploy Notification and Reporting Workflows
Build supporting workflows that send email notifications to grant staff when high-match opportunities are found, generate weekly summary reports, and provide deadline reminder alerts. These ensure the autonomous agent's findings actually reach the right people at the right time.
Email notifications should be sent from a recognized organizational email address (e.g., grants-system@clientdomain.org) to avoid spam filtering. Work with the client to determine the notification preferences: some organizations want immediate alerts for 80%+ matches, others prefer a daily digest. The deadline reminder workflow is critical for grant compliance — missed deadlines are one of the top reasons nonprofits lose funding opportunities. Consider adding a Microsoft Teams channel notification as an additional alert channel.
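The deadline-reminder workflow reduces to bucketing upcoming grants into alert tiers. A minimal sketch — the 3/14/30-day cutoffs are suggested defaults to confirm with the client, not fixed requirements:

```javascript
// Classifies a grant deadline into an alert tier relative to "today".
function deadlineTier(deadlineISO, today = new Date()) {
  const days = Math.ceil((new Date(deadlineISO) - today) / 86400000); // ms per day
  if (days < 0) return 'expired';   // flag for Status__c = Expired in the CRM
  if (days <= 3) return 'urgent';   // immediate email + Teams alert
  if (days <= 14) return 'soon';    // include in daily digest
  if (days <= 30) return 'upcoming';// include in weekly summary
  return 'none';
}
```

The workflow queries open Grant_Opportunity__c records each morning, computes the tier, and sends alerts only on tier transitions so staff aren't re-notified daily.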
Step 12: Configure Automated Backup and Monitoring
Set up automated daily backups of all critical data (PostgreSQL database, n8n workflows, Qdrant vectors, and configuration files) and basic monitoring to alert the MSP if any services go down.
cat > /opt/grantscout/backups/backup.sh << 'EOFBACKUP'
#!/bin/bash
# Load credentials (QDRANT_API_KEY, N8N_AUTH_USER/PASSWORD) — cron runs with a bare environment
set -a; source /opt/grantscout/.env; set +a
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_DIR=/opt/grantscout/backups/${TIMESTAMP}
mkdir -p ${BACKUP_DIR}
# Backup PostgreSQL
docker exec grantscout-postgres pg_dump -U n8n_user n8n > ${BACKUP_DIR}/postgres_n8n.sql
# Backup Qdrant snapshots
curl -X POST 'http://localhost:6333/collections/org_knowledge/snapshots' \
-H "api-key: ${QDRANT_API_KEY}"
cp -r /opt/grantscout/qdrant/storage/snapshots/* ${BACKUP_DIR}/
# Backup n8n workflows (export all via API)
curl -H 'Authorization: Basic '$(echo -n ${N8N_AUTH_USER}:${N8N_AUTH_PASSWORD} | base64) \
http://localhost:5678/api/v1/workflows > ${BACKUP_DIR}/n8n_workflows.json
# Backup configuration files
cp /opt/grantscout/.env ${BACKUP_DIR}/
cp /opt/grantscout/docker-compose.yml ${BACKUP_DIR}/
# Compress
tar -czf /opt/grantscout/backups/grantscout_${TIMESTAMP}.tar.gz -C ${BACKUP_DIR} .
rm -rf ${BACKUP_DIR}
# Upload to Azure Blob Storage
az storage blob upload \
--account-name grantscoutbackups \
--container-name daily-backups \
--file /opt/grantscout/backups/grantscout_${TIMESTAMP}.tar.gz \
--name grantscout_${TIMESTAMP}.tar.gz
# Retain only last 30 local backups
ls -t /opt/grantscout/backups/*.tar.gz | tail -n +31 | xargs rm -f
echo "Backup completed: grantscout_${TIMESTAMP}.tar.gz"
EOFBACKUP
chmod +x /opt/grantscout/backups/backup.sh
(crontab -l 2>/dev/null; echo "0 2 * * * /opt/grantscout/backups/backup.sh >> /var/log/grantscout-backup.log 2>&1") | crontab -
az storage account create --name grantscoutbackups --resource-group rg-grantscout-prod --location eastus --sku Standard_LRS
az storage container create --name daily-backups --account-name grantscoutbackups
cat > /opt/grantscout/health-check.sh << 'EOFHEALTH'
#!/bin/bash
SERVICES=("grantscout-postgres" "grantscout-qdrant" "grantscout-n8n")
for SVC in "${SERVICES[@]}"; do
if ! docker inspect --format='{{.State.Running}}' $SVC 2>/dev/null | grep -q true; then
echo "ALERT: $SVC is DOWN at $(date)" | mail -s "GrantScout Service Alert" msp-alerts@mspdomain.com
docker restart $SVC
fi
done
EOFHEALTH
chmod +x /opt/grantscout/health-check.sh
(crontab -l 2>/dev/null; echo "*/5 * * * * /opt/grantscout/health-check.sh >> /var/log/grantscout-health.log 2>&1") | crontab -
Azure Blob Storage costs are minimal (~$0.02/GB/month for Hot tier). For enhanced monitoring, consider integrating with the MSP's existing RMM tool (ConnectWise Automate, Datto RMM, NinjaRMM) by deploying an agent on the VM. The health check script is a basic watchdog — for production environments, consider Uptime Kuma (open-source, self-hosted) or Grafana + Prometheus for comprehensive dashboard monitoring. Test the backup restore process during implementation to verify it works before going live.
Step 13: Perform Initial Tuning and Acceptance Testing
Run the complete system through a structured testing cycle with the client's grant staff. Feed known past grants through the system to validate matching accuracy, tune the match score thresholds, and verify end-to-end data flow from discovery to CRM to notification.
Expect 2–3 tuning iterations over the first 2 weeks. The most common adjustments are: (1) match threshold is too high (missing good opportunities) or too low (too much noise), (2) certain focus areas need higher weighting, (3) grant size filters need adjustment. Document every tuning change in the MSP's ticketing system for future reference. Keep the client's grant staff closely involved in this phase — their domain expertise is essential for calibration. Plan for 4–8 hours of MSP time for tuning.
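Threshold tuning goes faster when staff feedback is turned into numbers. A minimal sketch, assuming each reviewed grant carries the agent's score and a staff relevant/not-relevant label; the function name is illustrative:

```javascript
// Precision/recall at a candidate threshold, from staff-labeled review data.
// scored: [{ score: 0-100, relevant: true|false }, ...]
function thresholdStats(scored, threshold) {
  const flagged = scored.filter(s => s.score >= threshold);      // what the agent surfaced
  const truePos = flagged.filter(s => s.relevant).length;        // surfaced AND useful
  const allRelevant = scored.filter(s => s.relevant).length;     // everything staff wanted
  return {
    precision: flagged.length ? truePos / flagged.length : 0,    // low => too much noise
    recall: allRelevant ? truePos / allRelevant : 0,             // low => missing good grants
  };
}
```

Computing these at several candidate thresholds during each tuning iteration turns "too much noise" / "missing opportunities" feedback into a concrete threshold adjustment.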
Custom AI Components
RAG Document Ingestion Workflow
Type: workflow
n8n workflow that processes the nonprofit's organizational documents (mission statements, strategic plans, past proposals, program descriptions) into vector embeddings and stores them in the Qdrant vector database. This creates the knowledge base that enables contextual grant matching. Handles PDF extraction, text chunking, OpenAI embedding generation, and Qdrant upsert operations.
n8n Workflow: RAG Document Ingestion
Workflow Structure (Node-by-Node)
Node 1: Manual Trigger
- Type: Manual Trigger
- Purpose: Allows MSP to run ingestion on-demand when new docs are added
Node 2: Read Directory
- Type: Read/Write Files from Disk
- Operation: List
- Directory Path: /home/node/rag-docs
- Filter: *.pdf, *.docx, *.txt
Node 3: Loop Over Files
- Type: Split In Batches
- Batch Size: 1
Node 4: Read File Content
- Type: Read/Write Files from Disk
- Operation: Read
- File Path: ={{$json.path}}
Node 5: Extract Text
- Type: Extract From File
- Operation: Extract text from PDF/DOCX
- Source: Binary data from previous node
Node 6: Chunk Text (Code Node)
- Type: Code (JavaScript)
// splits document into overlapping word-based chunks with doc_type metadata
// tagging
const text = $input.first().json.data;
const fileName = $input.first().json.fileName;
const CHUNK_SIZE = 800;
const OVERLAP = 200;
const chunks = [];
const words = text.split(/\s+/);
// Special handling: don't chunk short documents (threshold in words, matching CHUNK_SIZE)
if (words.length < CHUNK_SIZE * 1.5) {
chunks.push({
text: text,
metadata: {
source: fileName,
chunk_index: 0,
total_chunks: 1,
doc_type: fileName.includes('mission') ? 'mission' :
fileName.includes('strategic') ? 'strategic_plan' :
fileName.includes('program') ? 'program_description' :
fileName.includes('proposal') ? 'past_proposal' :
fileName.includes('annual') ? 'annual_report' : 'general'
}
});
} else {
const words = text.split(/\s+/);
let start = 0;
let chunkIndex = 0;
while (start < words.length) {
const end = Math.min(start + CHUNK_SIZE, words.length);
const chunkText = words.slice(start, end).join(' ');
chunks.push({
text: chunkText,
metadata: {
source: fileName,
chunk_index: chunkIndex,
total_chunks: Math.ceil(words.length / (CHUNK_SIZE - OVERLAP)),
doc_type: fileName.includes('mission') ? 'mission' :
fileName.includes('strategic') ? 'strategic_plan' :
fileName.includes('program') ? 'program_description' :
fileName.includes('proposal') ? 'past_proposal' :
fileName.includes('annual') ? 'annual_report' : 'general'
}
});
start += (CHUNK_SIZE - OVERLAP);
chunkIndex++;
}
}
return chunks.map(c => ({ json: c }));
Node 7: Generate Embeddings
- Type: OpenAI
- Operation: Create Embedding
- Model: text-embedding-3-small
- Input: ={{$json.text}}
- Credentials: OpenAI API
Node 8: Upsert to Qdrant (HTTP Request)
- Type: HTTP Request
- Method: PUT
- URL: http://qdrant:6333/collections/org_knowledge/points
- Authentication: Header Auth (Qdrant)
# PUT to /collections/org_knowledge/points
{
"points": [
{
"id": "{{$runIndex}}-{{$json.metadata.chunk_index}}",
"vector": {{$json.embedding}},
"payload": {
"text": "{{$json.text}}",
"source": "{{$json.metadata.source}}",
"doc_type": "{{$json.metadata.doc_type}}",
"chunk_index": {{$json.metadata.chunk_index}},
"ingested_at": "{{$now.toISO()}}"
}
}
]
}
Node 9: Log Results
- Type: Code
return [{ json: { status: 'Ingested', file: $input.first().json.metadata.source, timestamp: new Date().toISOString() } }]
Configuration Notes
- Use UUID v4 for point IDs in production (replace $runIndex-based IDs)
- The Qdrant URL uses the Docker service name 'qdrant' since n8n and Qdrant are on the same Docker network
- text-embedding-3-small produces 1536-dimension vectors at $0.02/1M tokens — very cost-effective
- Re-run this workflow whenever new organizational documents are provided
Grant Opportunity Matching Agent
Type: agent
The core autonomous AI agent that orchestrates grant discovery, matching, scoring, and CRM pipeline management. Implemented as an n8n workflow with an AI Agent node that uses GPT-4.1 for reasoning and has access to tools for RAG retrieval, web search, Salesforce CRM operations, and grant database querying. Runs daily on a schedule and can also be triggered manually.
Implementation
n8n Workflow: Grant Scout - Daily Scan & Match
Trigger Node: Schedule Trigger - ...
High Match Alert Notification
Type: workflow
Triggered when the main scanning agent identifies a grant opportunity with a match score of 80% or higher. Sends an immediate, formatted email alert to the grants team with the match analysis, proposal outline, and direct link to the opportunity. Also posts to a Microsoft Teams channel if configured.
Implementation
n8n Workflow: Grant Scout - High Match Alert
Node 1: Webhook Trigger
- Type: Webhook
- Path: /high-match-alert
- Method: POST
- Receives data from main Grant Scout workflow Node 11
Node 2: Format Email (Code Node)
// formats HTML email body and subject line based on match score
const grant = $input.first().json;
const priorityEmoji = grant.match_score >= 90 ? '🔥' : '⭐';
const priorityLabel = grant.match_score >= 90 ? 'EXCELLENT MATCH' : 'STRONG MATCH';
const htmlBody = `
<div style="font-family: Arial, sans-serif; max-width: 600px; margin: 0 auto;">
<div style="background-color: #2E7D32; color: white; padding: 20px; border-radius: 8px 8px 0 0;">
<h1 style="margin: 0;">${priorityEmoji} ${priorityLabel}: ${grant.match_score}% Match</h1>
</div>
<div style="background-color: #f5f5f5; padding: 20px; border: 1px solid #ddd;">
<h2 style="color: #333; margin-top: 0;">${grant.Grant_Name__c}</h2>
<p><strong>Funder:</strong> ${grant.Funder_Name__c}</p>
<p><strong>Amount:</strong> $${(grant.Funding_Amount_Min__c || 0).toLocaleString()} - $${(grant.Funding_Amount_Max__c || 0).toLocaleString()}</p>
<p><strong>Deadline:</strong> <span style="color: #D32F2F; font-weight: bold;">${grant.Deadline__c || 'Check funder website'}</span></p>
<p><strong>Match Score:</strong> ${Math.round(grant.Match_Score__c * 100)}%</p>
<div style="background: white; padding: 15px; border-radius: 4px; margin: 15px 0;">
<h3 style="margin-top: 0; color: #1565C0;">Why This Matches</h3>
<p>${grant.Match_Rationale__c}</p>
</div>
<div style="background: white; padding: 15px; border-radius: 4px; margin: 15px 0;">
<h3 style="margin-top: 0; color: #1565C0;">Eligibility Assessment</h3>
<p>${grant.Eligibility_Summary__c}</p>
</div>
${grant.AI_Proposal_Outline__c ? `
<div style="background: white; padding: 15px; border-radius: 4px; margin: 15px 0;">
<h3 style="margin-top: 0; color: #1565C0;">Draft Proposal Outline</h3>
<pre style="white-space: pre-wrap; font-family: inherit;">${grant.AI_Proposal_Outline__c}</pre>
</div>` : ''}
<div style="text-align: center; margin-top: 20px;">
${grant.Application_URL__c ? `<a href="${grant.Application_URL__c}" style="background-color: #1565C0; color: white; padding: 12px 24px; text-decoration: none; border-radius: 4px; font-weight: bold;">View Grant Details →</a>` : ''}
</div>
</div>
<div style="background-color: #e0e0e0; padding: 10px 20px; border-radius: 0 0 8px 8px; font-size: 12px; color: #666;">
<p>Discovered by GrantScout AI Agent on ${new Date().toLocaleDateString()}. Review in <a href="https://login.salesforce.com">Salesforce CRM</a>.</p>
<p style="color: #999;">This analysis was generated by AI and should be verified by grants staff before taking action.</p>
</div>
</div>`;
return [{ json: { htmlBody, subject: `${priorityEmoji} Grant Match ${grant.match_score}%: ${grant.Grant_Name__c} (${grant.Funder_Name__c})` } }];
Node 3: Send Email via Microsoft Graph
- Type: Microsoft Graph API (HTTP Request)
- Method: POST
- URL: https://graph.microsoft.com/v1.0/users/grants-system@clientdomain.org/sendMail
{
"message": {
"subject": "{{$json.subject}}",
"body": { "contentType": "HTML", "content": "{{$json.htmlBody}}" },
"toRecipients": [
{ "emailAddress": { "address": "grantmanager@clientdomain.org" } },
{ "emailAddress": { "address": "executivedirector@clientdomain.org" } }
]
}
}
- Note: htmlBody contains double quotes, so build the request body with an expression (e.g. wrap it in JSON.stringify) rather than pasting {{$json.htmlBody}} inside a quoted JSON string
Node 4 (Optional): Post to Microsoft Teams
- Type: Microsoft Teams
- Operation: Send Message
- Channel: #grants-pipeline
- Message: Adaptive Card with grant summary
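If the Teams step is enabled, the Adaptive Card payload can be assembled in a preceding Code node. A minimal sketch (the card layout and function name are assumptions, not part of the workflow above; fields follow the Salesforce custom fields used earlier):

```javascript
// Sketch: build a minimal Adaptive Card (schema v1.4) for the Teams node.
function buildTeamsCard(grant) {
  return {
    type: 'AdaptiveCard',
    $schema: 'http://adaptivecards.io/schemas/adaptive-card.json',
    version: '1.4',
    body: [
      { type: 'TextBlock', weight: 'Bolder', size: 'Large',
        text: `${grant.match_score}% Match: ${grant.Grant_Name__c}` },
      { type: 'FactSet', facts: [
        { title: 'Funder', value: grant.Funder_Name__c },
        { title: 'Deadline', value: grant.Deadline__c || 'Check funder website' },
      ]},
    ],
    // Only attach a button when the opportunity has a URL
    actions: grant.Application_URL__c
      ? [{ type: 'Action.OpenUrl', title: 'View Grant', url: grant.Application_URL__c }]
      : [],
  };
}
```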
Weekly Grant Summary Report Generator
Type: workflow Generates and sends a formatted weekly summary report every Monday morning showing all grants discovered in the past 7 days, their match scores, status updates, and upcoming deadlines. Provides leadership visibility into the AI agent's performance and the grant pipeline health.
Implementation
Node 1: Schedule Trigger
- Cron: 0 8 * * 1 (every Monday at 8:00 AM, client timezone)
Node 2: Query Salesforce - New Discoveries
- Type: Salesforce
- Operation: Query
SELECT Grant_Name__c, Funder_Name__c, Funding_Amount_Min__c, Funding_Amount_Max__c, Deadline__c, Match_Score__c, Status__c, Match_Rationale__c, Agent_Discovery_Date__c FROM Grant_Opportunity__c WHERE Agent_Discovery_Date__c = LAST_N_DAYS:7 ORDER BY Match_Score__c DESC
- Note: use LAST_N_DAYS:7 rather than LAST_WEEK (the previous calendar week) so the query matches the report's rolling 7-day window and the connector query in the integration section
Node 3: Query Salesforce - Upcoming Deadlines
- Type: Salesforce
SELECT Grant_Name__c, Funder_Name__c, Deadline__c, Status__c, Match_Score__c FROM Grant_Opportunity__c WHERE Deadline__c >= TODAY AND Deadline__c <= NEXT_N_DAYS:30 AND Status__c IN ('Reviewing', 'Applying') ORDER BY Deadline__c ASC
Node 4: Query Salesforce - Pipeline Stats
- Type: Salesforce
SELECT Status__c, COUNT(Id) cnt, AVG(Match_Score__c) avg_score FROM Grant_Opportunity__c WHERE Agent_Discovery_Date__c >= THIS_QUARTER GROUP BY Status__c
Node 5: Generate Report (Code Node)
// Node references must match the node titles above exactly
const newGrants = $('Query Salesforce - New Discoveries').all();
const deadlines = $('Query Salesforce - Upcoming Deadlines').all();
const stats = $('Query Salesforce - Pipeline Stats').all(); // available for an optional by-status breakdown
const weekStart = new Date();
weekStart.setDate(weekStart.getDate() - 7);
let grantsTable = newGrants.map(g => `
<tr>
<td style="padding: 8px; border-bottom: 1px solid #eee;">${g.json.Grant_Name__c}</td>
<td style="padding: 8px; border-bottom: 1px solid #eee;">${g.json.Funder_Name__c}</td>
<td style="padding: 8px; border-bottom: 1px solid #eee;">$${(g.json.Funding_Amount_Min__c || 0).toLocaleString()}</td>
<td style="padding: 8px; border-bottom: 1px solid #eee;">
<span style="background: ${g.json.Match_Score__c >= 0.8 ? '#4CAF50' : g.json.Match_Score__c >= 0.6 ? '#FFC107' : '#F44336'}; color: white; padding: 2px 8px; border-radius: 12px;">
${Math.round(g.json.Match_Score__c * 100)}%
</span>
</td>
<td style="padding: 8px; border-bottom: 1px solid #eee;">${g.json.Deadline__c || 'TBD'}</td>
<td style="padding: 8px; border-bottom: 1px solid #eee;">${g.json.Status__c}</td>
</tr>`).join('');
let deadlinesList = deadlines.map(d => `
<li style="margin: 8px 0;">
<strong>${d.json.Deadline__c}</strong> — ${d.json.Grant_Name__c} (${d.json.Funder_Name__c})
[Status: ${d.json.Status__c}]
</li>`).join('');
const htmlReport = `
<div style="font-family: Arial, sans-serif; max-width: 800px; margin: 0 auto;">
<h1 style="color: #1565C0;">📊 Weekly Grant Scout Report</h1>
<p>Week of ${weekStart.toLocaleDateString()} — Generated ${new Date().toLocaleDateString()}</p>
<div style="display: flex; gap: 15px; margin: 20px 0;">
<div style="background: #E3F2FD; padding: 15px; border-radius: 8px; flex: 1; text-align: center;">
<div style="font-size: 24px; font-weight: bold; color: #1565C0;">${newGrants.length}</div>
<div>New Opportunities</div>
</div>
<div style="background: #E8F5E9; padding: 15px; border-radius: 8px; flex: 1; text-align: center;">
<div style="font-size: 24px; font-weight: bold; color: #2E7D32;">${newGrants.filter(g => g.json.Match_Score__c >= 0.7).length}</div>
<div>Strong Matches (70%+)</div>
</div>
<div style="background: #FFF3E0; padding: 15px; border-radius: 8px; flex: 1; text-align: center;">
<div style="font-size: 24px; font-weight: bold; color: #E65100;">${deadlines.length}</div>
<div>Deadlines Next 30 Days</div>
</div>
</div>
<h2>New Opportunities This Week</h2>
<table style="width: 100%; border-collapse: collapse;">
<thead><tr style="background: #f5f5f5;">
<th style="padding: 8px; text-align: left;">Grant</th>
<th style="padding: 8px; text-align: left;">Funder</th>
<th style="padding: 8px; text-align: left;">Amount</th>
<th style="padding: 8px; text-align: left;">Match</th>
<th style="padding: 8px; text-align: left;">Deadline</th>
<th style="padding: 8px; text-align: left;">Status</th>
</tr></thead>
<tbody>${grantsTable}</tbody>
</table>
<h2>⏰ Upcoming Deadlines</h2>
<ul>${deadlinesList || '<li>No upcoming deadlines in the next 30 days</li>'}</ul>
<hr style="margin: 20px 0;">
<p style="color: #999; font-size: 12px;">Report generated by GrantScout AI. Review all opportunities in <a href="https://login.salesforce.com">Salesforce</a>.</p>
</div>`;
return [{ json: { htmlReport, subject: `📊 Weekly Grant Scout Report — ${newGrants.length} New Opportunities (${newGrants.filter(g => g.json.Match_Score__c >= 0.7).length} Strong Matches)` } }];
Node 6: Send Email via Microsoft Graph
- Same pattern as High Match Alert, but sent to broader distribution (ED, grants team, board chair if desired)
Grant Match Scoring Prompt
Type: prompt The core system prompt used by the GPT-4.1 deep analysis agent for scoring grant opportunities against the nonprofit's organizational profile. This prompt is critical to matching quality and should be tuned during the initial implementation period based on staff feedback.
Implementation
## System Prompt: Grant Match Deep Analysis
You are an expert grant analyst employed by {{org_name}}. You have deep expertise in nonprofit fundraising, grant writing, and funder research. Your role is to evaluate grant opportunities and provide actionable intelligence to the grants team.
## Organization Profile
- **Name:** {{org_name}}
- **Mission:** {{org_mission}}
- **Focus Areas:** {{org_focus_areas}}
- **Geographic Service Area:** {{org_geography}}
- **Annual Operating Budget:** {{org_budget_range}}
- **Organization Type:** {{org_type}}
- **Tax Status:** 501(c)(3)
## Organizational Context from Knowledge Base
{{rag_retrieved_context}}
## Scoring Rubric
Score each grant 0-100 using this weighted rubric:
### Mission Alignment (35 points)
- 30-35: Grant focus directly matches a core organizational program
- 20-29: Grant focus aligns with organizational mission but not a current program
- 10-19: Tangential alignment; would require new programming
- 0-9: No meaningful alignment
### Eligibility Fit (25 points)
- 20-25: Organization clearly meets all stated eligibility criteria
- 15-19: Organization likely meets criteria; minor uncertainties
- 5-14: Significant eligibility questions or partial fit
- 0-4: Likely ineligible
### Geographic Match (15 points)
- 13-15: Grant serves organization's exact service area
- 8-12: Regional overlap or national grant accessible to org's area
- 3-7: Partial geographic overlap
- 0-2: No geographic fit
### Funding Appropriateness (15 points)
- 13-15: Grant amount aligns with organization's typical grant size and budget
- 8-12: Amount is reasonable but may be stretch or undersized
- 3-7: Amount may be mismatched with org capacity
- 0-2: Significant mismatch
### Strategic Value (10 points)
- 8-10: Aligns with strategic plan priorities; potential for multi-year or relationship building
- 4-7: Useful but not strategically critical
- 0-3: Low strategic value
## Output Format
Respond with ONLY a valid JSON object:
{
"match_score": <integer 0-100>,
"score_breakdown": {
"mission_alignment": <0-35>,
"eligibility_fit": <0-25>,
"geographic_match": <0-15>,
"funding_appropriateness": <0-15>,
"strategic_value": <0-10>
},
"match_rationale": "<3-5 sentences explaining the score, referencing specific organizational programs, past grants, or strategic priorities from the knowledge base>",
"eligibility_summary": "<Assessment of eligibility requirements and any flags or concerns>",
"proposal_outline": "<If score >= 60: structured outline with suggested project title, 3-4 key points to emphasize, programs to highlight, suggested budget, key staff/partnerships. If score < 60: null>",
"strategic_notes": "<Any additional strategic context: funder history, timing considerations, complementary opportunities, risks>",
"recommended_action": "<APPLY if score >= 75 | REVIEW if 50-74 | SKIP if < 50>",
"priority": "<HIGH if score >= 80 | MEDIUM if 60-79 | LOW if < 60>",
"estimated_effort_hours": <number: estimated hours to complete application>
}
## Important Guidelines
1. Be honest and calibrated — do not inflate scores to seem helpful
2. Reference specific organizational programs, past proposals, or strategic plan items from the knowledge base when available
3. Flag any red flags (e.g., funder typically funds larger orgs, grant requires matching funds, political/mission conflicts)
4. If the grant description is vague, note this and suggest the team investigate further before committing time
5. Consider the organization's capacity — a small org should not pursue 20 applications simultaneously
6. Always disclose uncertainty — if you're unsure about eligibility, say so explicitly
Tuning Parameters
- Temperature: 0.3 (more deterministic and consistent scoring)
- Max Tokens: 2000
- Model: GPT-4.1 (for deep analysis) or GPT-4.1-mini (for initial screening)
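Because the prompt requires strict JSON output, the downstream n8n Code node should parse the model's response defensively. A sketch under stated assumptions (function name and fence-stripping behavior are illustrative, not part of the workflow): models occasionally wrap JSON in markdown fences or return a breakdown that disagrees with the stated total, so re-derive the total and thresholds from the rubric rather than trusting the model's own fields.

```javascript
// Sketch: defensive parsing of the scoring agent's JSON response.
function parseMatchResult(raw) {
  // Strip markdown fences if the model added them despite instructions
  const cleaned = raw
    .replace(/^```(?:json)?\s*/i, '')
    .replace(/```\s*$/, '')
    .trim();
  const result = JSON.parse(cleaned);

  const b = result.score_breakdown || {};
  const sum = (b.mission_alignment || 0) + (b.eligibility_fit || 0) +
    (b.geographic_match || 0) + (b.funding_appropriateness || 0) +
    (b.strategic_value || 0);

  // Recompute total, action, and priority from the rubric thresholds
  result.match_score = sum;
  result.recommended_action = sum >= 75 ? 'APPLY' : sum >= 50 ? 'REVIEW' : 'SKIP';
  result.priority = sum >= 80 ? 'HIGH' : sum >= 60 ? 'MEDIUM' : 'LOW';
  return result;
}
```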
Tuning Notes for MSP
Salesforce CRM Integration Connector
Type: integration n8n credential configuration and helper functions for bidirectional Salesforce NPSP integration. Handles OAuth2 authentication, Grant_Opportunity__c record upserts, SOQL queries for deduplication and reporting, and status update webhooks.
OAuth2 Setup in n8n
Key SOQL Queries Used by Workflows
Deduplication Check
SELECT Id, External_ID__c, Match_Score__c, Status__c
FROM Grant_Opportunity__c
WHERE External_ID__c = '{external_id}'
Weekly Report - New Discoveries
SELECT Grant_Name__c, Funder_Name__c, Funding_Amount_Min__c,
Funding_Amount_Max__c, Deadline__c, Match_Score__c,
Status__c, Match_Rationale__c, Agent_Discovery_Date__c
FROM Grant_Opportunity__c
WHERE Agent_Discovery_Date__c = LAST_N_DAYS:7
ORDER BY Match_Score__c DESC
Upcoming Deadlines
SELECT Grant_Name__c, Funder_Name__c, Deadline__c, Status__c, Match_Score__c
FROM Grant_Opportunity__c
WHERE Deadline__c >= TODAY AND Deadline__c <= NEXT_N_DAYS:30
AND Status__c IN ('Reviewing', 'Applying')
ORDER BY Deadline__c ASC
Pipeline Statistics
SELECT Status__c, COUNT(Id) cnt, AVG(Match_Score__c) avg_score
FROM Grant_Opportunity__c
WHERE Agent_Discovery_Date__c >= THIS_QUARTER
GROUP BY Status__c
Expired Grant Cleanup (run monthly)
SELECT Id FROM Grant_Opportunity__c
WHERE Deadline__c < TODAY AND Status__c IN ('New', 'Reviewing')
AND Deadline__c != null
Upsert Logic (n8n Code Node)
// preserves human-managed fields
// Generate consistent External_ID for deduplication
function generateExternalId(funderName, grantName) {
return `${funderName}-${grantName}`
.toLowerCase()
.replace(/[^a-z0-9]+/g, '-')
.substring(0, 100);
}
// Upsert preserves human-modified fields
// Only update these fields if the record already exists:
const UPSERT_FIELDS = [
  'Grant_Name__c', 'Funder_Name__c', 'Funding_Amount_Min__c',
  'Funding_Amount_Max__c', 'Deadline__c', 'Match_Score__c',
  'Match_Rationale__c', 'Eligibility_Summary__c',
  'AI_Proposal_Outline__c', 'Application_URL__c'
];
// DO NOT overwrite these fields (human-managed):
// Status__c (staff may have changed from 'New' to 'Applying')
// Focus_Area__c (staff may have re-categorized)

// Sketch (function name illustrative): build the record passed to the
// Salesforce upsert node, keeping only AI-managed fields so that
// Status__c and Focus_Area__c survive re-runs
function buildUpsertRecord(grant) {
  const record = { External_ID__c: generateExternalId(grant.Funder_Name__c, grant.Grant_Name__c) };
  for (const field of UPSERT_FIELDS) {
    if (grant[field] !== undefined) record[field] = grant[field];
  }
  return record;
}
Salesforce Custom Report Types
Create these report types for staff dashboards:
Testing & Validation
- UNIT TEST - RAG Ingestion: Upload a 3-page test PDF to /opt/grantscout/rag-docs/, run the RAG Document Ingestion workflow, and verify that Qdrant collection 'org_knowledge' shows new points by querying (see code block) — points_count should increase by 5-15 depending on document length
- UNIT TEST - Email Parsing: Send a test email to grants-inbox@clientdomain.org formatted like an Instrumentl digest with 3 mock grants. Trigger the IMAP node in the main workflow and verify the Code node correctly extracts grant_name, funder_name, amount, and deadline for all 3 grants
- UNIT TEST - OpenAI API Connectivity: In n8n, create a simple test workflow with an OpenAI node, send a test prompt ('Hello, respond with OK'), and verify a successful 200 response with valid output. Check that the API key has sufficient credits
- UNIT TEST - Salesforce Integration: Create a test Grant_Opportunity__c record via the n8n Salesforce node with all required fields populated. Verify the record appears in Salesforce with correct field values. Then update the record via upsert using the same External_ID__c and verify fields are updated without creating a duplicate
- UNIT TEST - Qdrant Similarity Search: Generate an embedding for the text 'youth education programs in urban communities' using the OpenAI embedding node, then search Qdrant with that vector. Verify that results return relevant chunks from the organization's program descriptions or mission documents
- INTEGRATION TEST - End-to-End Grant Processing: Send 5 mock Instrumentl digest emails containing grants with varying relevance (2 highly relevant, 2 tangentially relevant, 1 irrelevant based on the org's mission). Run the full Daily Scan & Match workflow. Verify: (a) all 5 are parsed, (b) GPT-4.1-mini correctly flags the irrelevant one as not relevant, (c) the 4 relevant ones receive deep analysis with scores, (d) all 4 appear as Grant_Opportunity__c records in Salesforce with populated Match_Score, Match_Rationale, and Eligibility_Summary fields
- INTEGRATION TEST - High Match Alert: Ensure that at least one of the test grants scores above 80%. Verify that the High Match Alert email is received by the designated grant staff email within 5 minutes of workflow completion. Check that the email contains the correct grant name, score, rationale, and clickable URL
- INTEGRATION TEST - Deduplication: Run the Daily Scan workflow twice with the same grant data. Verify that Salesforce contains only one record per grant (no duplicates) and that the upsert correctly updated rather than created new records. Check the External_ID__c field is consistent
- INTEGRATION TEST - Weekly Summary Report: Ensure at least 3 test grants exist in Salesforce from the past 7 days. Manually trigger the Weekly Summary Report workflow. Verify: (a) email is delivered, (b) HTML formatting renders correctly in Outlook/Gmail, (c) all test grants appear in the table, (d) pipeline statistics are accurate
- PERFORMANCE TEST - API Cost Estimation: Run the full workflow with 20 test grants and record the total OpenAI API token usage from the OpenAI dashboard. Extrapolate to expected monthly volume (500-1000 grants/month) and verify the cost falls within the $30-$150/month budget estimate. If over budget, adjust the initial screening threshold to filter more aggressively
- SECURITY TEST - Credential Protection: Verify that n8n credentials are encrypted by checking that the database does not contain plaintext API keys (see code block) — values should be encrypted blobs, not readable keys
- SECURITY TEST - Network Access: From an external IP (not the MSP office), attempt to access the n8n interface at https://grantscout.clientdomain.com — verify that basic auth is required. Attempt to access port 5678 directly — verify it is blocked by the NSG rule. Verify SSL certificate is valid and auto-renewing via Caddy
- RESILIENCE TEST - Service Recovery: Stop one Docker container at a time (see code block). Verify that the health check script detects each outage within 5 minutes and automatically restarts the container. Check logs at /var/log/grantscout-health.log for alert entries
- USER ACCEPTANCE TEST - Staff Validation: Present 10 AI-scored grant opportunities to the nonprofit's grants manager. Have them independently rate each grant's relevance on a 1-10 scale. Calculate correlation between AI scores and human scores — target r >= 0.7. If below 0.6, the prompt and RAG knowledge base need significant tuning before go-live
- BACKUP TEST - Disaster Recovery: Run the backup script manually. Copy the backup to a separate test VM. Restore PostgreSQL, Qdrant, and n8n from the backup. Verify all workflows, credentials (re-enter encryption key), and data are intact. Document the restore procedure and time-to-recovery (target: under 2 hours)
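The correlation target in the staff-validation test can be checked with a short script. A minimal Pearson sketch (function name illustrative): pair each grant's AI match score with the grants manager's 1-10 rating and compute r.

```javascript
// Sketch: Pearson correlation between AI scores and human ratings.
// Target r >= 0.7 before go-live.
function pearson(xs, ys) {
  const n = xs.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / n;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}
```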
# RAG Ingestion: Query Qdrant to verify points_count increased after
# ingestion
curl http://localhost:6333/collections/org_knowledge -H 'api-key: <key>'

# Credential Protection: Check that credential values are encrypted blobs,
# not plaintext keys
docker exec grantscout-postgres psql -U n8n_user -d n8n -c "SELECT * FROM credentials_entity"

# Service Recovery: Stop each container one at a time to test health check
# detection and auto-restart
docker stop grantscout-n8n
docker stop grantscout-postgres
docker stop grantscout-qdrant
Client Handoff
...
Client Handoff Checklist
Training Sessions (Schedule 2 sessions, 90 minutes each)
Session 1: Grant Pipeline Management (for Grant Staff)
- How to review AI-matched grants in Salesforce: navigating the Grant_Opportunity__c list view, understanding match scores and rationale
- How to update grant status (New → Reviewing → Applying → Submitted → Awarded/Declined)
- Understanding the Match Score breakdown: what each component means and how to interpret the AI's reasoning
- How to provide feedback on match quality: process for flagging false positives/negatives so the MSP can tune the system
- Using the AI-generated proposal outlines: how to use them as starting points, not final products
- Email alert system: what triggers alerts, how to customize notification preferences
- Deadline management: how the reminder system works and how to update deadlines in Salesforce
Session 2: System Overview (for Executive Director + IT Contact)
- Architecture overview: what runs where, what the monthly costs are, and what the MSP manages
- When to add new documents to the RAG knowledge base (new strategic plan, new programs, annual reports)
- How to request changes: adding focus areas, adjusting match thresholds, adding notification recipients
- Monthly reporting: how to read the weekly summary reports and pipeline dashboards
- AI disclosure requirements: when and how to disclose AI use in grant applications per funder requirements
- Escalation path: who to contact at the MSP for issues, and expected response times
Documentation to Leave Behind
Success Criteria to Review Together at Handoff
Maintenance
Ongoing MSP Maintenance Responsibilities
Weekly Tasks (30 minutes/week)
- Review n8n workflow execution logs for errors or warnings — check https://grantscout.clientdomain.com > Executions tab
- Verify daily scan workflow ran successfully each day (7 successful executions expected)
- Check OpenAI API usage dashboard for unexpected cost spikes
- Review health check logs at /var/log/grantscout-health.log for any service restart events
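Part of this weekly review can be scripted against n8n's public REST API (GET /api/v1/executions with the X-N8N-API-KEY header). A sketch of the filtering step, assuming the response's `data` array of executions with `workflowId`, `status`, and `startedAt` fields and a hypothetical workflow ID:

```javascript
// Sketch: count successful runs of the daily scan workflow over the
// trailing 7 days from an n8n executions API response — expect 7.
function countRecentSuccesses(executions, workflowId, now = new Date()) {
  const weekAgo = new Date(now.getTime() - 7 * 24 * 60 * 60 * 1000);
  return executions.filter(e =>
    e.workflowId === workflowId &&
    e.status === 'success' &&
    new Date(e.startedAt) >= weekAgo
  ).length;
}
```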
Monthly Tasks (2-3 hours/month)
- Review and apply Docker image updates (test in staging first if available)
- Check Azure VM resource utilization (CPU, RAM, disk) and right-size if needed
- Review Salesforce storage usage (free tier has limited data storage)
- Run expired grant cleanup query to archive old records
- Review backup integrity: download latest backup and spot-check file contents
- Check SSL certificate status (Caddy auto-renews, but verify)
- Review OpenAI API billing and compare to budget forecast
- Generate monthly utilization report for client: grants scanned, matches found, match accuracy based on staff feedback
docker compose pull && docker compose up -d
Quarterly Tasks (4-6 hours/quarter)
- Schedule 30-minute review call with client's grant staff to collect feedback on match quality
- Tune matching prompt based on accumulated feedback (adjust scoring rubric weights, add/remove focus areas)
- Review and update RAG knowledge base if client has new strategic plans, programs, or annual reports
- Rotate API keys (OpenAI, Qdrant) as security best practice
- Review n8n release notes for new features or security patches; plan upgrades
- Audit Salesforce connected app permissions and integration user access
- Review Instrumentl plan usage and features — recommend plan changes if needed
- Update documentation for any system changes made during the quarter
Annual Tasks (8-12 hours/year)
- Comprehensive system health review: performance benchmarking, cost optimization analysis
- Azure nonprofit credit renewal (apply before expiration)
- Instrumentl subscription renewal negotiation
- Full disaster recovery test: restore from backup to clean VM
- Review compliance landscape: check for new state privacy laws, funder AI policies, or federal grant requirements affecting the system
- Strategic roadmap discussion with client: should the system expand (e.g., add automated proposal drafting, funder relationship scoring, multi-org support)?
SLA Considerations
- Response Time: 4-hour response for system-down issues during business hours (M-F 8AM-6PM client timezone); next-business-day for non-critical issues
- Resolution Time: 8-hour resolution target for critical issues (no daily scans running); 48-hour for non-critical
- Uptime Target: 99% monthly uptime for the n8n agent and Salesforce integration (excludes planned maintenance)
- Planned Maintenance Window: Sundays 2-6 AM for updates and patches
- Data Recovery: 4-hour RTO (Recovery Time Objective) from daily backups; RPO (Recovery Point Objective) of 24 hours
Escalation Path
Model Retraining / Prompt Update Triggers
- Client provides feedback that match accuracy has degraded (>20% false positive/negative rate)
- Organization undergoes strategic pivot, adds new programs, or changes focus areas
- OpenAI releases new model version (e.g., GPT-4.2) — evaluate on test set before switching
- Instrumentl changes their email digest format (breaks parser — update Code node)
- New funder disclosure requirements emerge that need to be reflected in proposal outlines
Alternatives
SaaS-Only Approach (Option A)
Use Instrumentl ($299-$499/mo) and Grantable ($25-$75/mo) as standalone SaaS platforms without custom agent orchestration. Staff manually reviews Instrumentl's AI-matched grants and uses Grantable for AI-assisted writing. No n8n, no custom workflows, no self-hosted infrastructure. CRM integration is limited to manual entry or Instrumentl's built-in tracking.
Full Custom Multi-Agent System (Option C - CrewAI)
Build a fully custom multi-agent system using CrewAI (open-source Python framework) with specialized agent roles: a Researcher Agent that scans multiple grant databases and foundation websites, an Analyst Agent that performs deep matching with RAG, a Writer Agent that generates complete draft proposals, and a Manager Agent that orchestrates the crew and handles CRM operations. Deployed as a Python application on the Azure VM alongside a custom web dashboard.
Salesforce Agentforce Native Approach
For nonprofits already deeply invested in Salesforce, use Salesforce Agentforce ($325/user/mo on top of Nonprofit Cloud) to build the grant matching agent natively within the Salesforce ecosystem. Leverages Salesforce's Data Cloud for organizational knowledge, Einstein AI for matching, and Flow for workflow automation. Eliminates the need for n8n, separate vector database, or external hosting.
WHEN TO RECOMMEND: Organizations already paying for Salesforce beyond the free tier; nonprofits with existing Salesforce admin staff or consulting relationships; when compliance requirements mandate keeping all data within a single SOC 2 certified platform; MSPs that are Salesforce consulting partners.
Granter.ai Standalone Agent
Use Granter.ai as an all-in-one autonomous grant agent platform. Granter.ai provides 24/7 opportunity monitoring across 8,000+ grants, autonomous eligibility checks, and full application drafting — essentially a pre-built version of what the primary approach builds custom. Minimal MSP involvement beyond setup and monitoring.
Microsoft Copilot Studio + Power Automate Approach
Build the grant matching agent using Microsoft's low-code AI platform: Copilot Studio for the conversational AI interface and Power Automate for workflow orchestration. Leverages the nonprofit's free Microsoft 365 licenses and Azure credits. Uses Azure OpenAI Service instead of direct OpenAI API for enterprise data controls.
Want early access to the full toolkit?