
Implementation Guide: Score and rank applicants against job requirements using resume and assessment data
Step-by-step implementation guide for deploying AI to score and rank applicants against job requirements using resume and assessment data for HR & Staffing clients.
Hardware Procurement
Dell UltraSharp U2723QE 27" 4K USB-C Monitor
$420 per unit (MSP cost) / $550 suggested resale
Secondary monitor for recruiters to enable side-by-side candidate comparison — viewing AI-ranked candidate list alongside full resume detail. Improves recruiter throughput during high-volume screening. One per recruiter workstation.
Dell OptiPlex 7020 Micro Desktop
$950 per unit (MSP cost) / $1,250 suggested resale
Shared kiosk or conference room workstations for hiring managers to review AI-scored candidate dashboards during intake meetings. Only needed if client lacks modern workstations in interview rooms. Includes i5-14500T, 16GB RAM, 256GB SSD — sufficient for browser-based SaaS access.
Software Procurement
Manatal ATS — Enterprise Plan
$35/user/month billed annually ($4,200/year for 10 users) — MSP negotiated; resell at $45/user/month
Core applicant tracking system with built-in AI recommendation engine that scores and ranks candidates against job requirements. Includes AI-powered candidate sourcing, resume parsing, customizable pipeline stages, and integrations with LinkedIn, Indeed, and 2,500+ job boards.
OpenAI API — GPT-5.4 mini
$0.15/million input tokens + $0.60/million output tokens; estimated $25–$75/month for 500–2,000 applicants/month
Powers the custom scoring enhancement layer for detailed, criteria-weighted resume analysis with explainable scoring breakdowns. Used when clients need granular scoring beyond Manatal's built-in AI.
OpenAI API — text-embedding-3-small
$0.02/million tokens; estimated $2–$5/month for typical volume
Generates semantic embeddings for job descriptions and resumes to compute cosine similarity scores for semantic matching — identifies candidates whose experience closely aligns with job requirements even when terminology differs.
Azure OpenAI Service
~10% premium over standard OpenAI pricing; estimated $30–$85/month
Enterprise-grade alternative to direct OpenAI API for clients requiring SOC 2 compliance, data residency guarantees, or GDPR-compliant processing. Recommended for clients with EU operations or regulated industries. Deploy through MSP's existing Microsoft CSP agreement for 15–20% margin.
Brainner AI Resume Screening
$34–$99/month base + $9.95 per 100 additional candidates over quota
Alternative bolt-on AI screening layer for clients who wish to retain their existing ATS (Greenhouse, Lever, Bullhorn, etc.) rather than migrating to Manatal. Provides explainable AI scoring per candidate with ATS-agnostic integration.
Zoho Recruit — Enterprise Plan
$50/user/month billed annually; partner commission 20–30%
Budget-conscious alternative ATS with AI candidate matching for clients already in the Zoho ecosystem. Strong for small staffing firms needing ATS + CRM without high cost.
TestGorilla Pre-Employment Assessments
$75–$115/month (Scale plan); resell at $150/month
Pre-employment skills testing platform that generates structured assessment scores fed into the AI scoring pipeline. Provides cognitive ability, role-specific skills, and personality assessments that complement resume-based scoring.
Zapier — Professional Plan
$49.99/month for 2,000 tasks; resell bundled in managed services
Integration middleware connecting Manatal/ATS to HRIS (BambooHR, Rippling), communication tools (Slack, Teams), and the custom scoring webhook. Used when native integrations are insufficient.
Microsoft Entra ID P1 (Azure AD)
$6/user/month (often included in Microsoft 365 Business Premium)
SAML SSO and conditional access for all AI hiring platforms. Enforces MFA, enables single sign-on to Manatal and other tools, and provides audit logging for compliance.
BABL AI Bias Audit Service
$5,000–$15,000 per annual audit; MSP markup 15–25%
Third-party independent bias audit required by NYC Local Law 144 and recommended as best practice everywhere. Audits AI scoring tool for disparate impact across protected categories and provides publishable audit summary.
Prerequisites
- Stable internet connectivity of 50+ Mbps at all recruiter workstations with <100ms latency to cloud providers (Azure/AWS US regions)
- Modern web browsers installed on all recruiter machines — Google Chrome 120+ or Microsoft Edge 120+ required for full Manatal dashboard functionality
- Microsoft 365 Business Premium or equivalent identity provider (Azure AD/Entra ID) configured with SAML SSO capability for all recruiter and hiring manager accounts
- Firewall rules permitting outbound HTTPS (port 443) traffic to: *.manatal.com, api.openai.com, *.azurewebsites.net, *.zapier.com, and *.testgorilla.com
- Existing structured job descriptions for at least 5–10 active roles — each must include: job title, required skills/qualifications, preferred qualifications, years of experience, education requirements, and key responsibilities
- Administrative access to any existing ATS system (if migrating data) with ability to export candidate records in CSV or via API
- Designated client-side project sponsor: HR Manager or Head of Recruiting with authority to define scoring criteria, approve workflows, and sign off on compliance disclosures
- List of all active job boards and sourcing channels currently used (LinkedIn, Indeed, ZipRecruiter, etc.) with login credentials for integration setup
- If operating in New York City: legal counsel review of NYC Local Law 144 obligations and commitment to fund annual bias audit ($5,000–$15,000)
- Python 3.10+ runtime environment available (Azure Functions or AWS Lambda) if deploying the custom scoring enhancement layer — MSP provisions this in their managed cloud tenant
- DNS and email sending capability (SPF/DKIM configured) for automated candidate notification emails from the ATS platform
- Data processing agreement (DPA) executed with client covering candidate PII handling, retention periods, and GDPR obligations if processing EU candidates
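The connectivity and firewall prerequisites above can be spot-checked from a recruiter workstation before rollout. A minimal sketch using only the Python standard library; the domain list mirrors the firewall rule in the prerequisites, with wildcard entries reduced to a representative hostname for a basic TCP probe:

```python
import socket

# Outbound HTTPS targets from the firewall prerequisite; wildcard
# patterns are reduced to a representative host for the probe.
REQUIRED_ENDPOINTS = [
    "*.manatal.com",
    "api.openai.com",
    "*.azurewebsites.net",
    "*.zapier.com",
    "*.testgorilla.com",
]

def probe_host(domain: str) -> str:
    """Turn a firewall-rule pattern into a connectable hostname:
    '*.manatal.com' becomes 'manatal.com'; exact names pass through."""
    return domain[2:] if domain.startswith("*.") else domain

def check_outbound_https(domains: list[str], timeout: float = 5.0) -> dict[str, bool]:
    """Attempt a TCP connection to port 443 for each domain pattern."""
    results = {}
    for domain in domains:
        try:
            with socket.create_connection((probe_host(domain), 443), timeout=timeout):
                results[domain] = True
        except OSError:
            results[domain] = False
    return results

# Usage (run on-site before go-live):
#   for d, ok in check_outbound_https(REQUIRED_ENDPOINTS).items():
#       print(("OK  " if ok else "FAIL"), d)
```

A TCP probe confirms the firewall path only; it does not validate TLS interception or proxy rules, so follow up with a browser login test on each platform.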
Installation Steps
Step 1: Discovery & Workflow Audit
Conduct a 2–4 hour discovery session with the client's HR team to document their current hiring workflow end-to-end. Map each stage from job requisition creation through offer acceptance. Identify pain points, volume metrics (applicants per role, roles per month, time-to-screen), current ATS usage, job board integrations, and assessment tools. Document the client's scoring priorities — which qualifications matter most, how they currently shortlist candidates, and any compliance obligations (NYC, CO, IL, GDPR). This shapes the entire configuration.
Use a shared document or whiteboard. Capture: (1) number of recruiters and hiring managers, (2) monthly applicant volume, (3) top 10 most-hired roles, (4) existing tech stack, (5) compliance jurisdiction. This discovery output drives every subsequent configuration decision. Budget 1 day for this phase.
Step 2: Provision Manatal ATS Tenant
Create the Manatal Enterprise organization. Sign up at manatal.com using the MSP's partner/reseller email, then configure the client's organization as a sub-account. Select the Enterprise plan ($35/user/month annually) to get unlimited jobs and candidates plus the full AI recommendation engine. Create user accounts for each recruiter and hiring manager identified in discovery.
Manatal offers a 14-day free trial — use this to configure and test before committing to a paid subscription. The Enterprise plan is required for unlimited candidates and full AI scoring. If the client has fewer than 5 recruiters and limited job volume, the Professional plan ($15/user/month) may suffice, but it caps at 15 active jobs.

Step 3: Configure SSO and Security
Integrate Manatal with the client's Microsoft Entra ID (Azure AD) for SAML-based single sign-on. This ensures recruiters use their existing corporate credentials, MFA is enforced, and access is logged for compliance. Configure conditional access policies to restrict Manatal access to managed devices and approved locations.
If the client does not have Microsoft 365 Business Premium or Entra ID P1, SSO may not be available. In that case, enforce strong passwords (16+ characters) directly in Manatal and enable Manatal's built-in 2FA. Document this as a security risk in the handoff.
Step 4: Import Historical Data & Configure Job Templates
Migrate existing candidate data from the client's current system into Manatal. Then create standardized job templates for the client's most common roles, embedding structured requirements that the AI engine will score against. Each job template must include weighted scoring criteria.
Data Migration
Job Template Configuration (repeat for each of client's top 10 roles)
The quality of AI scoring is directly proportional to the quality of structured job descriptions. Spend significant time here — poorly defined requirements produce unreliable rankings. Work with the HR sponsor to refine each template. Manatal's AI scores against the skills, experience, and education fields, so these must be comprehensive.
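A working convention is to capture each template's weighted criteria in a small structured record so the weights are explicit, auditable, and sum-checked before they are entered into the ATS. A hypothetical sketch (field names are illustrative, not Manatal's schema):

```python
# Hypothetical job template record; field names are illustrative,
# not Manatal's actual API schema.
JOB_TEMPLATE = {
    "title": "Senior Accountant",
    "required_skills": ["GAAP", "month-end close", "NetSuite"],
    "preferred_skills": ["CPA", "SOX controls"],
    "min_years_experience": 5,
    "education": "Bachelor's in Accounting or Finance",
    "weights": {
        "skills": 0.40,
        "experience": 0.30,
        "education": 0.15,
        "responsibilities": 0.15,
    },
}

def validate_weights(template: dict, tolerance: float = 1e-9) -> bool:
    """Scoring weights must be non-negative and sum to exactly 1.0."""
    weights = template["weights"].values()
    return all(w >= 0 for w in weights) and abs(sum(weights) - 1.0) <= tolerance
```

Running `validate_weights` on every template during the HR sponsor review catches the most common configuration error: weights that silently drift away from 100% as criteria are added or removed.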
Step 5: Connect Job Boards & Sourcing Channels
Integrate Manatal with the client's active job boards and sourcing platforms so that incoming applications automatically flow into the ATS and get scored by the AI engine. Configure LinkedIn, Indeed, ZipRecruiter, and any niche staffing boards the client uses.
- LinkedIn Integration: Navigate to Settings > Integrations > Job Boards
- LinkedIn Integration Step 1: Select LinkedIn > Connect
- LinkedIn Integration Step 2: Authenticate with client's LinkedIn Recruiter admin account
- LinkedIn Integration Step 3: Enable auto-import of applicants from LinkedIn postings
- Indeed Integration Step 1: Select Indeed > Connect
- Indeed Integration Step 2: Enter Indeed employer account credentials
- Indeed Integration Step 3: Enable Indeed Sponsored Jobs sync (if applicable)
- ZipRecruiter Integration Step 1: Select ZipRecruiter > Connect
- ZipRecruiter Integration Step 2: Authenticate with ZipRecruiter employer account
- Career Page Step 1: Navigate to Settings > Career Page
- Career Page Step 2: Customize branding (logo, colors, description)
- Career Page Step 3: Copy embed code or custom domain CNAME
careers.clientdomain.com -> CNAME manatal-careers.manatal.com
Verify propagation: nslookup careers.clientdomain.com
Manatal integrates with 2,500+ job boards. Not all require paid board subscriptions — many are free aggregation boards. LinkedIn Recruiter integration requires the client to have an active LinkedIn Recruiter license. If they only have a basic LinkedIn account, candidates from LinkedIn must be imported manually or via the Manatal Chrome extension.
Step 6: Configure AI Scoring Criteria & Weights
Fine-tune Manatal's AI recommendation engine to align with the client's hiring priorities. The AI scores candidates from 0–100 based on skills match, experience match, education match, and job description relevance. Configure the relative importance (weights) of each factor per job category.
Manatal's AI scoring improves over time as recruiters provide feedback (qualifying/disqualifying candidates). The first 2 weeks are a calibration period — expect to manually review and correct AI rankings. Document the target score thresholds with the HR sponsor: e.g., 80+ = auto-advance to phone screen, 50–79 = recruiter review, <50 = auto-reject (with human review option for compliance).
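The threshold routing agreed with the HR sponsor can be encoded directly, keeping the auto-reject path explicitly flagged for human sign-off. A minimal sketch using the example thresholds above (80 / 50 are the illustrative values from this step, to be tuned per client):

```python
def route_candidate(score: int) -> str:
    """Route a 0-100 AI score per the thresholds agreed with the HR sponsor.
    Note: the reject path still requires recruiter sign-off; no candidate
    is rejected by the score alone (a compliance requirement, see Step 9)."""
    if score >= 80:
        return "auto-advance-phone-screen"
    if score >= 50:
        return "recruiter-review"
    return "reject-pending-human-review"
```

Keeping the routing rule in one auditable function (or its equivalent ATS automation) makes it easy to show an auditor exactly where each threshold sits and that no fully automated rejection exists.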
Step 7: Deploy Custom Scoring Enhancement Layer (Optional)
For clients needing deeper scoring explainability or custom criteria weighting beyond Manatal's built-in AI, deploy a custom scoring microservice using the OpenAI GPT-5.4 mini API. This service receives candidate data via webhook from Manatal, runs detailed criteria-by-criteria analysis, generates an explainable score breakdown, and writes results back to Manatal via API.
az login
az group create --name rg-ai-hiring-scoring --location eastus
az functionapp create --resource-group rg-ai-hiring-scoring --consumption-plan-location eastus --runtime python --runtime-version 3.11 --functions-version 4 --name func-applicant-scorer-<clientid> --storage-account stairhiring<clientid>
az functionapp config appsettings set --name func-applicant-scorer-<clientid> --resource-group rg-ai-hiring-scoring --settings OPENAI_API_KEY=sk-xxxxxxxxxxxx MANATAL_API_KEY=xxxxxxxxxxxx MANATAL_BASE_URL=https://api.manatal.com/open/v3
cd applicant-scorer
func azure functionapp publish func-applicant-scorer-<clientid>
This step is OPTIONAL and only recommended for clients with complex scoring requirements (e.g., staffing agencies managing 50+ distinct job categories with nuanced weighting). For most SMB clients, Manatal's built-in AI scoring is sufficient. The custom layer adds $25–$75/month in API costs plus MSP development time (16–24 hours initial build). Bill as a one-time project ($4,000–$6,000) plus ongoing managed service ($500/month).
Step 8: Integrate Pre-Employment Assessments
Connect TestGorilla (or the client's existing assessment platform) to the ATS so that assessment scores are incorporated into the overall candidate ranking. Configure automated assessment invitations triggered when a candidate reaches a specific pipeline stage.
TestGorilla + Manatal Integration via Zapier
TestGorilla offers 400+ scientifically validated assessments across cognitive ability, programming skills, language proficiency, and personality. Work with the HR sponsor to select 2–3 assessments per job category. If the client already uses Criteria Corp, Vervoe, or HireVue for assessments, substitute accordingly — Zapier supports most major platforms. Assessment data significantly improves ranking accuracy when combined with resume-based AI scoring.
Step 9: Configure Compliance Disclosures & Human-in-the-Loop Safeguards
Implement all required regulatory disclosures and ensure the system enforces human review before any adverse employment decisions. This is critical for legal compliance in NYC, Colorado, Illinois, and GDPR jurisdictions — and is best practice everywhere.
THIS STEP IS NON-NEGOTIABLE. Failure to implement proper disclosures and human review can expose the client to significant legal liability. NYC Local Law 144 penalties range from $500–$1,500 per violation per candidate. EEOC treats AI scoring as a selection procedure subject to Title VII disparate impact analysis. Always recommend the client engage employment counsel to review the compliance configuration. Document everything for the client handoff.
Step 10: Integration Testing with Historical Candidate Data
Before going live, run the AI scoring system against a known set of historical candidates whose hiring outcomes are already determined. This validates that the AI rankings correlate with actual hiring decisions and identifies any scoring anomalies that need calibration.
curl -H 'Authorization: Token <MANATAL_API_KEY>' 'https://api.manatal.com/open/v3/candidates/?job_id=<test_job_id>&ordering=-score' | python -m json.tool > test_scores.json
import json
scores = json.load(open('test_scores.json'))
# Compare AI rank vs actual hire decision
# Target: 70%+ of actual hires should appear in AI's top 30%
print('Validation complete - review scores vs outcomes')
This calibration step is crucial for building recruiter trust in the AI system. If the AI consistently ranks known-good hires in its top 30%, recruiters will adopt the tool. If correlation is poor, revisit job description quality and scoring criteria weights before go-live. Plan 5–10 business days for this phase including iterations.
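The "70%+ of actual hires in the AI's top 30%" target can be computed mechanically once historical outcomes are labeled. A sketch, assuming each record carries the candidate's AI score and a boolean hired flag (field names are illustrative, not Manatal's API schema):

```python
def top_k_hit_rate(candidates: list[dict], top_fraction: float = 0.30) -> float:
    """Fraction of actual hires that land in the AI's top-scored slice.
    Each record needs 'ai_score' (0-100) and 'hired' (bool); these field
    names are illustrative placeholders for the exported test data."""
    ranked = sorted(candidates, key=lambda c: c["ai_score"], reverse=True)
    cutoff = max(1, round(len(ranked) * top_fraction))
    top_slice = ranked[:cutoff]
    hires = [c for c in candidates if c["hired"]]
    if not hires:
        return 0.0
    return sum(1 for c in top_slice if c["hired"]) / len(hires)
```

A result of 0.70 or higher meets the go-live target; anything materially lower means revisiting job templates and weights before expanding the rollout.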
Step 11: Recruiter Training & Controlled Rollout
Train all recruiters and hiring managers on the new AI-scored workflow. Begin with a controlled rollout on 2–3 job postings before expanding to all open roles. Provide hands-on training sessions and written documentation.
- Week 1: Enable AI scoring on 2-3 highest-volume roles only
- Week 2: Review scoring accuracy, gather recruiter feedback, adjust criteria
- Week 3: Expand to all active roles
- Week 4: Full production — all new job postings use AI scoring by default
Recruiter adoption is the biggest risk factor. Common resistance points: 'AI will replace my job' (address: AI handles initial screening so you can focus on relationship building and assessment), 'I don't trust the scores' (address: show calibration results, emphasize scores as decision-support not decision-making). Schedule a follow-up training session 2 weeks post-launch to address real-world questions.
Step 12: Go-Live & Production Monitoring Setup
Transition from controlled rollout to full production. Configure monitoring dashboards and alerting so the MSP can proactively manage the platform. Establish the ongoing managed service cadence.
az monitor metrics alert create --name 'ScoringFunctionErrors' --resource-group rg-ai-hiring-scoring --scopes /subscriptions/<sub-id>/resourceGroups/rg-ai-hiring-scoring/providers/Microsoft.Web/sites/func-applicant-scorer-<clientid> --condition 'total requests where resultType == Failed > 5' --window-size 1h --evaluation-frequency 5m --action-group <msp-alerts-action-group>
The first 30 days post go-live require the most MSP attention. Plan for 2–4 hours/week of support during this period: answering recruiter questions, tuning scoring criteria, resolving integration hiccups. After stabilization, ongoing effort drops to 2–4 hours/month for routine maintenance.
Custom AI Components
Applicant Scoring Microservice
Type: agent
A serverless Azure Function that receives webhook events from Manatal when a new candidate applies, retrieves the candidate's resume and the job requirements, runs a detailed criteria-by-criteria analysis using GPT-5.4 mini, generates a weighted score with explainable reasoning, and writes the results back to the candidate record in Manatal. This provides granular, auditable scoring beyond the ATS's built-in AI.
Implementation
# applicant-scorer/function_app.py
# Azure Functions v4 Python - HTTP Trigger
import azure.functions as func
import json
import os
import logging
import hmac
import hashlib
import httpx
from openai import OpenAI
app = func.FunctionApp()
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
MANATAL_API_KEY = os.environ['MANATAL_API_KEY']
MANATAL_BASE_URL = os.environ.get('MANATAL_BASE_URL', 'https://api.manatal.com/open/v3')
WEBHOOK_SECRET = os.environ.get('WEBHOOK_SECRET', '')
SCORING_PROMPT = """You are an expert HR recruiter and hiring analyst. You will be given:
1. A JOB DESCRIPTION with requirements
2. A CANDIDATE RESUME
Score the candidate against each requirement category below on a scale of 0-100.
Provide a brief explanation for each score.
Categories and weights:
- Technical Skills Match (weight: 30%): How well do the candidate's technical skills align with required and preferred skills?
- Experience Level Match (weight: 25%): Does the candidate's years and type of experience match requirements?
- Education Match (weight: 15%): Does the candidate's education meet minimum and preferred requirements?
- Industry Relevance (weight: 15%): Has the candidate worked in relevant industries or domains?
- Overall Resume Quality (weight: 15%): Is the resume well-structured, clear, and does it demonstrate impact?
Respond ONLY with valid JSON in this exact format:
{
"overall_score": <weighted_average_0_to_100>,
"categories": {
"technical_skills": {"score": <0-100>, "weight": 0.30, "explanation": "<brief explanation>"},
"experience_level": {"score": <0-100>, "weight": 0.25, "explanation": "<brief explanation>"},
"education": {"score": <0-100>, "weight": 0.15, "explanation": "<brief explanation>"},
"industry_relevance": {"score": <0-100>, "weight": 0.15, "explanation": "<brief explanation>"},
"resume_quality": {"score": <0-100>, "weight": 0.15, "explanation": "<brief explanation>"}
},
"top_strengths": ["<strength 1>", "<strength 2>", "<strength 3>"],
"key_gaps": ["<gap 1>", "<gap 2>"],
"recommendation": "STRONG_MATCH" | "GOOD_MATCH" | "PARTIAL_MATCH" | "WEAK_MATCH"
}"""
def verify_webhook_signature(payload: bytes, signature: str) -> bool:
if not WEBHOOK_SECRET:
return True # Skip verification if no secret configured
expected = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
return hmac.compare_digest(expected, signature)
def get_candidate_details(candidate_id: str) -> dict:
headers = {'Authorization': f'Token {MANATAL_API_KEY}'}
with httpx.Client() as http:
resp = http.get(f'{MANATAL_BASE_URL}/candidates/{candidate_id}/', headers=headers)
resp.raise_for_status()
return resp.json()
def get_job_details(job_id: str) -> dict:
headers = {'Authorization': f'Token {MANATAL_API_KEY}'}
with httpx.Client() as http:
resp = http.get(f'{MANATAL_BASE_URL}/jobs/{job_id}/', headers=headers)
resp.raise_for_status()
return resp.json()
def extract_resume_text(candidate: dict) -> str:
"""Extract resume text from candidate record.
Manatal stores parsed resume content in candidate profile fields."""
parts = []
if candidate.get('resume_content'):
parts.append(candidate['resume_content'])
if candidate.get('experiences'):
for exp in candidate['experiences']:
parts.append(f"Experience: {exp.get('title', '')} at {exp.get('company', '')} ({exp.get('start_date', '')}-{exp.get('end_date', 'Present')}): {exp.get('description', '')}")
if candidate.get('educations'):
for edu in candidate['educations']:
parts.append(f"Education: {edu.get('degree', '')} in {edu.get('field', '')} from {edu.get('school', '')} ({edu.get('end_date', '')})")
if candidate.get('skills'):
parts.append(f"Skills: {', '.join([s.get('name', '') for s in candidate['skills']])}")
return '\n'.join(parts) if parts else 'No resume content available'
def build_job_description_text(job: dict) -> str:
parts = [f"Job Title: {job.get('position_name', '')}"]
if job.get('description'):
parts.append(f"Description: {job['description']}")
if job.get('requirements'):
parts.append(f"Requirements: {job['requirements']}")
if job.get('skills'):
parts.append(f"Required Skills: {', '.join([s.get('name', '') for s in job['skills']])}")
if job.get('experience_level'):
parts.append(f"Experience Level: {job['experience_level']}")
if job.get('education_level'):
parts.append(f"Education: {job['education_level']}")
return '\n'.join(parts)
def score_candidate(job_text: str, resume_text: str) -> dict:
response = client.chat.completions.create(
model='gpt-5.4-mini',
messages=[
{'role': 'system', 'content': SCORING_PROMPT},
{'role': 'user', 'content': f'JOB DESCRIPTION:\n{job_text}\n\n---\n\nCANDIDATE RESUME:\n{resume_text}'}
],
temperature=0.1,
max_tokens=1000,
response_format={'type': 'json_object'}
)
return json.loads(response.choices[0].message.content)
def update_candidate_with_score(candidate_id: str, score_result: dict):
headers = {
'Authorization': f'Token {MANATAL_API_KEY}',
'Content-Type': 'application/json'
}
note_text = f"""\U0001F916 AI Scoring Results (Custom Enhancement Layer)
\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501\u2501
Overall Score: {score_result['overall_score']}/100
Recommendation: {score_result['recommendation']}
Category Breakdown:
\u2022 Technical Skills: {score_result['categories']['technical_skills']['score']}/100 - {score_result['categories']['technical_skills']['explanation']}
\u2022 Experience Level: {score_result['categories']['experience_level']['score']}/100 - {score_result['categories']['experience_level']['explanation']}
\u2022 Education: {score_result['categories']['education']['score']}/100 - {score_result['categories']['education']['explanation']}
\u2022 Industry Relevance: {score_result['categories']['industry_relevance']['score']}/100 - {score_result['categories']['industry_relevance']['explanation']}
\u2022 Resume Quality: {score_result['categories']['resume_quality']['score']}/100 - {score_result['categories']['resume_quality']['explanation']}
Top Strengths: {', '.join(score_result['top_strengths'])}
Key Gaps: {', '.join(score_result['key_gaps'])}"""
with httpx.Client() as http:
# Add scoring note to candidate
http.post(
f'{MANATAL_BASE_URL}/candidates/{candidate_id}/notes/',
headers=headers,
json={'content': note_text}
)
# Update custom field with numeric score for sorting/filtering
http.patch(
f'{MANATAL_BASE_URL}/candidates/{candidate_id}/',
headers=headers,
json={'custom_fields': {'ai_enhanced_score': score_result['overall_score']}}
)
@app.route(route='score_candidate', methods=['POST'])
def score_candidate_endpoint(req: func.HttpRequest) -> func.HttpResponse:
logging.info('Applicant scoring webhook triggered')
# Verify webhook signature
signature = req.headers.get('X-Webhook-Signature', '')
if not verify_webhook_signature(req.get_body(), signature):
return func.HttpResponse('Unauthorized', status_code=401)
try:
payload = req.get_json()
candidate_id = payload.get('candidate_id') or payload.get('data', {}).get('candidate_id')
job_id = payload.get('job_id') or payload.get('data', {}).get('job_id')
if not candidate_id or not job_id:
return func.HttpResponse('Missing candidate_id or job_id', status_code=400)
# Fetch full details from Manatal API
candidate = get_candidate_details(candidate_id)
job = get_job_details(job_id)
# Extract text
resume_text = extract_resume_text(candidate)
job_text = build_job_description_text(job)
# Score with GPT-5.4 mini
score_result = score_candidate(job_text, resume_text)
# Write results back to Manatal
update_candidate_with_score(candidate_id, score_result)
logging.info(f'Scored candidate {candidate_id} for job {job_id}: {score_result["overall_score"]}/100')
return func.HttpResponse(
json.dumps({'status': 'scored', 'candidate_id': candidate_id, 'score': score_result['overall_score']}),
mimetype='application/json',
status_code=200
)
except Exception as e:
logging.error(f'Scoring error: {str(e)}')
return func.HttpResponse(f'Error: {str(e)}', status_code=500)
# requirements.txt
azure-functions
openai>=1.30.0
httpx>=0.27.0
# host.json
{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
}
},
"extensionBundle": {
"id": "Microsoft.Azure.Functions.ExtensionBundle",
"version": "[4.*, 5.0.0)"
}
}
Semantic Job-Resume Matching Engine
Type: skill
A supplementary scoring component that uses OpenAI text-embedding-3-small to compute semantic similarity between a job description and a resume. Unlike keyword matching, this captures meaning — for example, a resume mentioning 'built microservices on AWS Lambda' would semantically match a job requiring 'serverless cloud architecture experience' even though the exact phrases differ. The cosine similarity score (0.0–1.0) is incorporated as a bonus factor in the overall scoring.
Implementation:
# Semantic similarity scoring using OpenAI text-embedding-3-small
# semantic_matcher.py
# Can be integrated into the scoring microservice or run standalone
import numpy as np
from openai import OpenAI
import os
client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
def get_embedding(text: str, model: str = 'text-embedding-3-small') -> list[float]:
"""Generate embedding vector for input text."""
# Truncate to ~8000 tokens worth of text to stay within limits
text = text[:32000] # Rough character limit
response = client.embeddings.create(
input=text,
model=model
)
return response.data[0].embedding
def cosine_similarity(vec_a: list[float], vec_b: list[float]) -> float:
"""Compute cosine similarity between two vectors."""
a = np.array(vec_a)
b = np.array(vec_b)
return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
def compute_semantic_match(job_description: str, resume_text: str) -> dict:
"""Compute semantic similarity between job description and resume.
Returns:
dict with similarity_score (0.0-1.0), match_level, and scaled_score (0-100)
"""
job_embedding = get_embedding(job_description)
resume_embedding = get_embedding(resume_text)
similarity = cosine_similarity(job_embedding, resume_embedding)
# Scale similarity to a more intuitive range
# Typical cosine similarities for text range from 0.5-0.95
# Map 0.5-0.95 to 0-100 for scoring purposes
scaled_score = max(0, min(100, (similarity - 0.5) * (100 / 0.45)))
if scaled_score >= 75:
match_level = 'HIGH'
elif scaled_score >= 50:
match_level = 'MEDIUM'
elif scaled_score >= 25:
match_level = 'LOW'
else:
match_level = 'VERY_LOW'
return {
'raw_cosine_similarity': round(similarity, 4),
'scaled_score': round(scaled_score, 1),
'match_level': match_level
}
def batch_rank_candidates(job_description: str, candidates: list[dict]) -> list[dict]:
"""Rank multiple candidates against a single job description.
Args:
job_description: Full job description text
candidates: List of dicts with 'id' and 'resume_text' keys
Returns:
Sorted list of candidates with semantic scores, highest first
"""
job_embedding = get_embedding(job_description)
results = []
for candidate in candidates:
resume_embedding = get_embedding(candidate['resume_text'])
similarity = cosine_similarity(job_embedding, resume_embedding)
scaled_score = max(0, min(100, (similarity - 0.5) * (100 / 0.45)))
results.append({
'candidate_id': candidate['id'],
'semantic_score': round(scaled_score, 1),
'raw_similarity': round(similarity, 4)
})
results.sort(key=lambda x: x['semantic_score'], reverse=True)
# Add rank
for i, r in enumerate(results):
r['rank'] = i + 1
return results
# Integration with main scoring microservice:
# In score_candidate() function, add after GPT-5.4 mini scoring:
#
# semantic_result = compute_semantic_match(job_text, resume_text)
# final_score = (score_result['overall_score'] * 0.7) + (semantic_result['scaled_score'] * 0.3)
# score_result['semantic_match'] = semantic_result
# score_result['final_composite_score'] = round(final_score, 1)
Candidate Disclosure Notice Template
Type: prompt
Standardized legal disclosure text that must be displayed to all candidates before AI screening is applied. Covers NYC Local Law 144, Colorado AI Act, Illinois requirements, and GDPR Article 22. This template should be reviewed by employment counsel and customized per client.
Implementation:
# AI Screening Disclosure Notice
[CLIENT COMPANY NAME] — Candidate Notice
Use of Artificial Intelligence in Our Hiring Process
As part of our commitment to fair and efficient hiring, [CLIENT COMPANY NAME] uses automated employment decision tools (AEDTs) to assist in evaluating candidates for open positions. We want you to be fully informed about how these tools work and your rights.
What AI Tools We Use
We use AI-powered software to:
- Parse and analyze resume content against job requirements
- Score and rank candidates based on qualifications match
- Suggest candidates who may be strong fits for open positions
How It Works
Our AI screening system evaluates your application materials (resume, cover letter, assessment results) against the stated requirements for the position you applied to. The system considers factors including:
- Skills and qualifications match
- Relevant experience
- Education credentials
- Assessment scores (if applicable)
The AI generates a preliminary score to help our recruiting team prioritize candidate review. This score is used as a decision-support tool only — all hiring decisions are made by human recruiters and hiring managers.
Your Rights
- Human Review: No candidate is rejected solely by an automated system. A human recruiter reviews all candidates, including those with lower AI scores.
- Alternative Process: You may request that your application be reviewed without the use of AI screening tools. To make this request, email [COMPLIANCE_EMAIL] with the subject line "AI Opt-Out Request" and your application reference number.
- Data Access: You may request information about what data was collected and how it was used in your evaluation. Contact [COMPLIANCE_EMAIL].
- Data Deletion: You may request deletion of your personal data from our systems at any time, subject to legal retention requirements.
Bias Audit (NYC Applicants)
In compliance with New York City Local Law 144, an independent bias audit of our AI screening tools was conducted on [AUDIT_DATE] by [AUDITOR_NAME]. A summary of the most recent bias audit results is available at [AUDIT_RESULTS_URL].
EU/UK Applicants (GDPR)
Pursuant to GDPR Article 22, you have the right not to be subject to a decision based solely on automated processing that produces legal effects concerning you. Our AI screening is used as a decision-support tool with mandatory human review, and does not constitute solely automated decision-making. You may exercise your rights under GDPR Articles 13-22 by contacting our Data Protection Officer at [DPO_EMAIL].
Contact
For questions about our use of AI in hiring, contact: [COMPLIANCE_CONTACT_NAME] [COMPLIANCE_EMAIL] [COMPLIANCE_PHONE]
*Last updated: [DATE]*
*This notice satisfies requirements under NYC Local Law 144, Colorado SB 24-205, Illinois HB 3773, and GDPR Articles 13-14 and 22. Please have employment counsel review and customize this template for your specific jurisdiction and implementation.*
MSP Implementation Instructions — Complete all steps below before go-live.
Recruiter Scoring Dashboard Automation
Type: workflow
A Zapier-based automation workflow that aggregates AI scores from Manatal's built-in AI, the custom scoring microservice, and assessment platforms, then posts a consolidated summary to a shared Slack channel and/or Microsoft Teams channel. Gives recruiters a real-time feed of top-scored candidates requiring immediate attention.
Implementation
# Trigger on Manatal candidate score update, filter 75+, post to Slack and
# email hiring manager
# Zapier Workflow: Top Candidate Alert
# Trigger: Manatal webhook - Candidate score updated
# Conditions and Actions:
workflow_name: "AI Top Candidate Alert"
trigger:
  app: webhooks_by_zapier
  event: catch_hook
  webhook_url: "https://hooks.zapier.com/hooks/catch/<zap_id>/"
  # Configure Manatal to POST to this URL when candidate notes are updated
  # (triggered after custom scoring microservice writes results)
filter:
  conditions:
    - field: "{{body__custom_fields__ai_enhanced_score}}"
      operator: "greater_than"
      value: 75
      # Only alert on candidates scoring 75+
action_1:
  app: slack  # or microsoft_teams
  event: send_channel_message
  channel: "#hiring-top-candidates"
  message_template: |
    🌟 *High-Scoring Candidate Alert*
    *Candidate:* {{body__first_name}} {{body__last_name}}
    *Position:* {{body__job_name}}
    *AI Score:* {{body__custom_fields__ai_enhanced_score}}/100
    *Recommendation:* {{body__custom_fields__ai_recommendation}}
    *Top Strengths:*
    {{body__custom_fields__top_strengths}}
    *Key Gaps:*
    {{body__custom_fields__key_gaps}}
    👉 <{{body__candidate_url}}|View in Manatal>
    _Please review within 24 hours to maintain candidate engagement._
action_2:
  app: email_by_zapier
  event: send_outbound_email
  to: "{{hiring_manager_email}}"
  subject: "⭐ Top Candidate: {{body__first_name}} {{body__last_name}} for {{body__job_name}} (Score: {{body__custom_fields__ai_enhanced_score}})"
  body: |
    A high-scoring candidate has been identified by our AI screening system.
    Candidate: {{body__first_name}} {{body__last_name}}
    Position: {{body__job_name}}
    AI Score: {{body__custom_fields__ai_enhanced_score}}/100
    Please log into Manatal to review: {{body__candidate_url}}

# Additional Zapier Zap: Weekly Digest
workflow_name: "Weekly AI Screening Summary"
trigger:
  app: schedule_by_zapier
  event: every_week
  day: monday
  time: "08:00"
action_1:
  app: webhooks_by_zapier
  event: custom_request
  method: GET
  url: "https://api.manatal.com/open/v3/candidates/"
  headers:
    Authorization: "Token {{manatal_api_key}}"
  params:
    created_after: "{{7_days_ago}}"
    ordering: "-score"
    limit: 50
action_2:
  app: slack
  event: send_channel_message
  channel: "#hiring-weekly-digest"
  message_template: |
    📊 *Weekly AI Screening Summary*
    _Week of {{current_date}}_
    • Total candidates screened: {{total_count}}
    • Candidates scoring 80+: {{high_match_count}}
    • Candidates scoring 50-79: {{medium_match_count}}
    • Top candidate: {{top_candidate_name}} (Score: {{top_score}})
    👉 <https://app.manatal.com|View Full Dashboard>
MSP Setup Instructions
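Before go-live, the catch hook can be smoke-tested by POSTing a sample payload shaped like the Manatal webhook the zap expects. The sketch below is illustrative: the field names mirror the zap's `{{body__...}}` references, but the values and the `<zap_id>` placeholder are assumptions — substitute the real catch-hook URL and a real candidate record before testing.

```python
# Hypothetical smoke-test payload for the "AI Top Candidate Alert" zap.
# Field names mirror the zap's {{body__...}} references; values are made up.
import json

ZAP_HOOK_URL = 'https://hooks.zapier.com/hooks/catch/<zap_id>/'  # placeholder

def build_test_payload(score: int = 82) -> dict:
    """Build a sample candidate payload shaped like the Manatal POST the zap expects."""
    return {
        'first_name': 'Test',
        'last_name': 'Candidate',
        'job_name': 'Senior Accountant',
        'candidate_url': 'https://app.manatal.com/candidates/12345',
        'custom_fields': {
            'ai_enhanced_score': score,
            'ai_recommendation': 'Strong match - advance to phone screen',
            'top_strengths': '- 8 yrs relevant experience\n- CPA license',
            'key_gaps': '- No NetSuite exposure',
        },
    }

def should_alert(payload: dict, threshold: int = 75) -> bool:
    """Mirror the zap's filter: alert only when the score is strictly above 75."""
    return payload['custom_fields']['ai_enhanced_score'] > threshold

if __name__ == '__main__':
    payload = build_test_payload()
    print(json.dumps(payload, indent=2))
    # To actually fire the zap, POST it (requires httpx or requests):
    #   httpx.post(ZAP_HOOK_URL, json=payload).raise_for_status()
```

Mirroring the filter condition locally (`should_alert`) makes it easy to verify that a 75-point candidate does *not* trigger the alert — the Zapier filter uses `greater_than`, not greater-or-equal.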
Bias Monitoring Report Generator
Type: skill
A Python script that runs monthly to analyze AI scoring patterns across demographic groups (when data is available) to proactively identify potential disparate impact before formal annual bias audits. Generates a compliance report for the MSP and client HR team.
Implementation
# bias_monitor.py
# Monthly bias monitoring script - run as Azure Function on timer trigger
# or as a scheduled task on MSP's management server
import httpx
import json
import os
from datetime import datetime, timedelta
from collections import defaultdict
import statistics

MANATAL_API_KEY = os.environ['MANATAL_API_KEY']
MANATAL_BASE_URL = os.environ.get('MANATAL_BASE_URL', 'https://api.manatal.com/open/v3')
ALERT_EMAIL = os.environ.get('ALERT_EMAIL', 'msp-compliance@example.com')


def fetch_recent_candidates(days: int = 30) -> list[dict]:
    """Fetch all candidates scored in the last N days."""
    headers = {'Authorization': f'Token {MANATAL_API_KEY}'}
    since = (datetime.utcnow() - timedelta(days=days)).isoformat()
    candidates = []
    url = f'{MANATAL_BASE_URL}/candidates/?created_after={since}&limit=100'
    with httpx.Client() as http:
        while url:
            resp = http.get(url, headers=headers)
            resp.raise_for_status()
            data = resp.json()
            candidates.extend(data.get('results', []))
            url = data.get('next')  # Pagination
    return candidates


def analyze_score_distribution(candidates: list[dict]) -> dict:
    """Analyze AI score distributions and flag potential disparate impact."""
    # Group scores by available demographic proxies.
    # NOTE: Direct demographic data may not be available.
    # This analyzes score distribution patterns and flags anomalies.
    scores = []
    scores_by_source = defaultdict(list)
    scores_by_job = defaultdict(list)
    advancement_by_score_band = defaultdict(lambda: {'total': 0, 'advanced': 0})
    for c in candidates:
        score = c.get('custom_fields', {}).get('ai_enhanced_score')
        if score is None:
            # Fall back to Manatal's built-in score if custom score not available
            score = c.get('score')
        if score is None:
            continue
        score = float(score)
        scores.append(score)
        # Group by source channel
        source = c.get('source', {}).get('name', 'Unknown')
        scores_by_source[source].append(score)
        # Group by job
        for app in c.get('applications', []):
            job_name = app.get('job', {}).get('position_name', 'Unknown')
            scores_by_job[job_name].append(score)
        # Track advancement rates by score band
        stage = c.get('stage', {}).get('name', '')
        advanced = stage.lower() in ['phone screen', 'interview', 'offer', 'hired']
        if score >= 80:
            band = '80-100'
        elif score >= 60:
            band = '60-79'
        elif score >= 40:
            band = '40-59'
        else:
            band = '0-39'
        advancement_by_score_band[band]['total'] += 1
        if advanced:
            advancement_by_score_band[band]['advanced'] += 1

    # Calculate statistics
    report = {
        'report_date': datetime.utcnow().isoformat(),
        'period_days': 30,
        'total_candidates_scored': len(scores),
        'overall_stats': {
            'mean_score': round(statistics.mean(scores), 1) if scores else 0,
            'median_score': round(statistics.median(scores), 1) if scores else 0,
            'stdev': round(statistics.stdev(scores), 1) if len(scores) > 1 else 0,
            'min': min(scores) if scores else 0,
            'max': max(scores) if scores else 0
        },
        'scores_by_source': {},
        'scores_by_job': {},
        'advancement_rates': {},
        'flags': []
    }

    # Analyze by source - flag if any source's mean differs by >15 points
    overall_mean = report['overall_stats']['mean_score']
    for source, src_scores in scores_by_source.items():
        src_mean = round(statistics.mean(src_scores), 1)
        report['scores_by_source'][source] = {
            'count': len(src_scores),
            'mean_score': src_mean
        }
        if abs(src_mean - overall_mean) > 15 and len(src_scores) >= 10:
            report['flags'].append(
                f'SOURCE DISPARITY: Candidates from {source} have mean score '
                f'{src_mean} vs overall {overall_mean} (n={len(src_scores)})'
            )

    # Analyze advancement rates by score band (4/5ths rule check)
    for band, data in advancement_by_score_band.items():
        rate = data['advanced'] / data['total'] if data['total'] > 0 else 0
        report['advancement_rates'][band] = {
            'total': data['total'],
            'advanced': data['advanced'],
            'rate': round(rate * 100, 1)
        }

    # Check 4/5ths rule across score bands
    rates = {b: d['advanced'] / d['total']
             for b, d in advancement_by_score_band.items()
             if d['total'] >= 5}
    if rates:
        max_rate = max(rates.values())
        for band, rate in rates.items():
            if max_rate > 0 and rate / max_rate < 0.8:
                report['flags'].append(
                    f'4/5THS RULE WARNING: Score band {band} has advancement rate '
                    f'{rate:.1%} vs highest band rate {max_rate:.1%} '
                    f'(ratio: {rate/max_rate:.2f} < 0.80)'
                )
    return report


def generate_report_text(report: dict) -> str:
    """Generate human-readable bias monitoring report."""
    lines = [
        '=' * 60,
        'AI HIRING BIAS MONITORING REPORT',
        f'Generated: {report["report_date"]}',
        f'Period: Last {report["period_days"]} days',
        f'Candidates Analyzed: {report["total_candidates_scored"]}',
        '=' * 60,
        '',
        '--- OVERALL SCORE DISTRIBUTION ---',
        f'Mean: {report["overall_stats"]["mean_score"]}',
        f'Median: {report["overall_stats"]["median_score"]}',
        f'Std Dev: {report["overall_stats"]["stdev"]}',
        f'Range: {report["overall_stats"]["min"]} - {report["overall_stats"]["max"]}',
        '',
        '--- SCORES BY SOURCE CHANNEL ---'
    ]
    for source, data in report['scores_by_source'].items():
        lines.append(f'  {source}: mean={data["mean_score"]}, n={data["count"]}')
    lines.extend(['', '--- ADVANCEMENT RATES BY SCORE BAND ---'])
    for band, data in report['advancement_rates'].items():
        lines.append(f'  {band}: {data["rate"]}% ({data["advanced"]}/{data["total"]})')
    if report['flags']:
        lines.extend(['', '⚠️ --- FLAGS REQUIRING REVIEW ---'])
        for flag in report['flags']:
            lines.append(f'  ⚠️ {flag}')
        lines.append('')
        lines.append('ACTION REQUIRED: Review flagged items with HR team and employment counsel.')
        lines.append('Consider engaging bias audit firm if flags persist for 2+ consecutive months.')
    else:
        lines.extend(['', '✅ No bias flags detected this period.'])
    lines.extend(['', '=' * 60])
    return '\n'.join(lines)


if __name__ == '__main__':
    candidates = fetch_recent_candidates(30)
    report = analyze_score_distribution(candidates)
    report_text = generate_report_text(report)
    print(report_text)
    # Save report
    filename = f'bias_report_{datetime.utcnow().strftime("%Y%m%d")}.json'
    with open(filename, 'w') as f:
        json.dump(report, f, indent=2)
    print(f'\nReport saved to {filename}')
    # If flags exist, this should trigger an email alert
    # (implement via Azure Logic App, SendGrid, or MSP's alerting system)
    if report['flags']:
        print(f'\n⚠️ {len(report["flags"])} flags detected - escalation required')
Deployment
Testing & Validation
- CONNECTIVITY TEST: From each recruiter workstation, verify HTTPS access to app.manatal.com, api.openai.com, and hooks.zapier.com by loading each URL in Chrome — confirm no firewall blocks or SSL errors
- SSO TEST: Have each recruiter sign into Manatal via the SSO login page — verify they are redirected to Microsoft Entra ID, prompted for MFA, and returned to Manatal with correct role permissions. Test with one account from a non-compliant device to confirm conditional access blocks appropriately
- RESUME PARSING TEST: Upload 10 resumes in varied formats (PDF, DOCX, plain text) to Manatal via bulk import. Verify that skills, experience, education, and contact information are correctly parsed and populated in candidate profiles. Flag any parsing failures for format-specific investigation
- AI SCORING ACCURACY TEST: Import 50 historical candidates with known outcomes (25 hired, 25 rejected) against 3 test job postings. Run AI scoring and verify that at least 70% of hired candidates appear in the AI's top 30% ranking. Document the correlation rate and adjust scoring criteria if below threshold
- CUSTOM SCORING WEBHOOK TEST: Trigger the Azure Function by manually POSTing a test payload with a valid candidate_id and job_id. Verify the function returns a 200 response with a valid score JSON, and confirm the scoring note appears on the candidate record in Manatal within 60 seconds
- SEMANTIC MATCHING TEST: Run the semantic matcher against 5 job-resume pairs where the resume uses different terminology than the job description (e.g., 'serverless architecture' vs 'AWS Lambda functions'). Verify cosine similarity scores are above 0.70 for known-good matches
- JOB BOARD INTEGRATION TEST: Post a test job to LinkedIn, Indeed, and ZipRecruiter via Manatal. Apply to the test job from an external device. Verify the application appears in Manatal within 15 minutes with correct source attribution and AI scoring triggered automatically
- ASSESSMENT INTEGRATION TEST: Move a test candidate to the 'AI Screened - High Match' pipeline stage. Verify the Zapier automation triggers a TestGorilla assessment invitation email within 5 minutes. Complete the assessment and verify the score is written back to the candidate record in Manatal
- COMPLIANCE DISCLOSURE TEST: Navigate to the client's career page (careers.clientdomain.com) and verify the AI Screening Disclosure Notice is visible before application submission. Confirm the consent checkbox is present and required. Submit a test application and verify the consent timestamp is recorded
- HUMAN-IN-THE-LOOP TEST: Create a test candidate with an AI score below 50. Verify the candidate is automatically moved to the 'Human Review Required' pipeline stage. Confirm that NO auto-reject action is taken and that a recruiter must manually disposition the candidate
- DATA RETENTION TEST: Verify that the data retention policy is active by checking Manatal settings — confirm rejected candidates are set to auto-delete after 12 months and all candidate data after 24 months. Submit a test data deletion request via the GDPR workflow and verify the candidate record is purged within 72 hours
- SLACK/TEAMS ALERT TEST: Score a test candidate above 75 and verify the Zapier workflow posts a formatted alert to the #hiring-top-candidates Slack/Teams channel within 5 minutes, including candidate name, job title, score, and Manatal link
- END-TO-END WORKFLOW TEST: Simulate a complete hiring workflow — post a job, receive an application from a job board, verify AI scoring triggers, confirm the candidate appears ranked correctly on the recruiter dashboard, move through pipeline stages, trigger assessment, receive assessment score, and verify final candidate record contains all scoring data
- LOAD TEST: Import 200 resumes simultaneously via bulk upload and verify that AI scoring completes for all candidates within 30 minutes without errors or timeout failures in the custom scoring webhook
- BIAS MONITORING TEST: Run the bias monitoring script against the test dataset and verify it generates a valid report with score distributions, source breakdowns, advancement rates, and correctly flags any 4/5ths rule violations in the synthetic data
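The semantic matching test above reduces to a cosine similarity over embedding vectors. A minimal sketch of the math follows — in production the vectors come from OpenAI's text-embedding-3-small (1,536 dimensions); the 4-dimensional vectors here are made-up values for illustration only.

```python
# Cosine similarity as used in the semantic matching test.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical embeddings for 'serverless architecture' (job) vs
# 'AWS Lambda functions' (resume) - real vectors come from the embeddings API.
job_vec = [0.12, 0.85, 0.33, 0.41]
resume_vec = [0.10, 0.80, 0.35, 0.45]
score = cosine_similarity(job_vec, resume_vec)
print(f'similarity: {score:.3f}')  # known-good pairs should land above the 0.70 bar
```

If known-good pairs score below 0.70, check that both texts were embedded with the same model and that long resumes are chunked rather than truncated.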
Client Handoff
Client Handoff Checklist
Training Sessions to Deliver
Documentation to Leave Behind
Success Criteria to Review Together
Maintenance
Ongoing MSP Maintenance Responsibilities
Weekly (First 30 Days) → Biweekly (Days 31-90) → Monthly (Ongoing)
- Scoring Accuracy Review: Check recruiter feedback on AI scores — are recruiters consistently overriding scores? If override rate exceeds 30%, investigate and recalibrate scoring criteria. Log in to Manatal, pull the weekly screening summary report, and compare AI rankings against recruiter dispositions.
- Integration Health Check: Verify all Zapier zaps are running (check Zapier dashboard for failed tasks). Confirm job board feeds are flowing (check latest application dates). Test custom scoring webhook with a ping request. Verify Slack/Teams alerts are posting.
- User Management: Process new recruiter account requests, deactivate departed staff, update role permissions as needed.
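The 30% override threshold in the scoring accuracy review can be computed mechanically from exported dispositions. The sketch below assumes a simple record shape (`ai_score`, `disposition`) and a score-75 advance cutoff — both are assumptions; adapt the field names and stages to the client's Manatal export.

```python
# Override rate: how often recruiters disposition contrary to the AI's
# recommendation. Record shape is hypothetical - adapt to your Manatal export.
def override_rate(records: list[dict]) -> float:
    """Fraction of candidates where the recruiter's decision contradicted the AI.

    A record counts as an override when the AI recommended advancing
    (score >= 75) but the recruiter rejected, or vice versa.
    """
    if not records:
        return 0.0
    overrides = 0
    for r in records:
        ai_advance = r['ai_score'] >= 75
        recruiter_advance = r['disposition'] in ('phone screen', 'interview', 'offer', 'hired')
        if ai_advance != recruiter_advance:
            overrides += 1
    return overrides / len(records)

sample = [
    {'ai_score': 88, 'disposition': 'interview'},  # agree
    {'ai_score': 91, 'disposition': 'rejected'},   # override
    {'ai_score': 40, 'disposition': 'rejected'},   # agree
    {'ai_score': 35, 'disposition': 'offer'},      # override
]
print(f'override rate: {override_rate(sample):.0%}')  # above 30% -> recalibrate
```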
Monthly
- Bias Monitoring Report: Run the bias_monitor.py script (or verify the Azure Function timer trigger ran successfully). Review the output report for any flags. Escalate flags to client HR manager and recommend corrective action. Store report in Azure Blob for audit trail.
- Platform Updates: Check Manatal release notes for new features or changes that affect AI scoring. Test any updates in a sandbox job before applying to production workflows.
- API Usage & Cost Review: Check OpenAI API usage dashboard — ensure token consumption is within expected range ($25–$75/month for typical volume). Flag any anomalous spikes that could indicate integration loops or abuse.
- Security Review: Verify SSO is still enforced, review Manatal audit logs for unusual access patterns, confirm MFA is active for all users.
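For the API cost review, a rough estimator built from the GPT-5.4 mini prices quoted earlier ($0.15/M input, $0.60/M output tokens) helps spot anomalous spikes. The per-candidate token counts below are assumptions, not measurements — real spend also includes retries, multiple jobs per candidate, and embedding calls, so calibrate against the OpenAI usage dashboard.

```python
# Rough monthly OpenAI cost estimator for the custom scoring calls.
# Per-candidate token counts are assumed values; adjust to observed usage.
INPUT_PRICE_PER_M = 0.15   # USD per million input tokens (GPT-5.4 mini)
OUTPUT_PRICE_PER_M = 0.60  # USD per million output tokens

def estimate_monthly_cost(candidates_per_month: int,
                          input_tokens_each: int = 6_000,
                          output_tokens_each: int = 1_000) -> float:
    """Estimated monthly USD spend for per-candidate scoring calls."""
    input_cost = candidates_per_month * input_tokens_each / 1e6 * INPUT_PRICE_PER_M
    output_cost = candidates_per_month * output_tokens_each / 1e6 * OUTPUT_PRICE_PER_M
    return round(input_cost + output_cost, 2)

for volume in (500, 2_000):
    print(f'{volume} candidates/month -> ~${estimate_monthly_cost(volume)}')
```

If observed spend deviates sharply from the estimate at the client's known volume, look for integration loops (the same candidate re-scored on every webhook fire) before assuming legitimate growth.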
Quarterly
- Scoring Criteria Calibration: Meet with client HR team (1-hour session) to review scoring effectiveness. Adjust criteria weights, add/remove skills from job templates, and update score thresholds based on 3 months of hiring data.
- Compliance Review: Review any new AI hiring regulations enacted since last quarter. Update disclosure text if needed. Verify data retention policies are executing correctly (check that old records are being purged on schedule).
- Vendor Review: Check Manatal subscription status and renewal terms. Review any pricing changes. Evaluate whether the client has outgrown their current plan tier.
Annually
- Bias Audit: Coordinate with BABL AI or Holistic AI for the annual independent bias audit (required for NYC, recommended everywhere). Budget 2–4 weeks for the audit process. Publish updated audit summary on career page. Cost: $5,000–$15,000.
- Full System Review: Comprehensive review of the entire AI hiring stack — scoring accuracy trends over 12 months, integration reliability, user adoption metrics, compliance posture, and ROI analysis.
- License Renewal: Renew Manatal subscription (annual billing). Review Zapier plan tier. Renew TestGorilla subscription. Confirm Azure Function consumption costs.
SLA Considerations
- Platform Availability: Manatal SaaS uptime SLA is 99.9%. MSP responsible for monitoring and reporting outages to client within 1 hour.
- Scoring Latency: Custom scoring webhook should return results within 30 seconds per candidate. If latency exceeds 60 seconds, investigate Azure Function cold starts or OpenAI API throttling.
- Integration Failures: Zapier failures should be detected and resolved within 4 business hours. Configure Zapier error notifications to MSP support inbox.
- Security Incidents: Any unauthorized access to candidate data must be reported to client within 24 hours per DPA terms. Engage MSP's incident response process.
Escalation Path
Model Retraining / Recalibration Triggers
- Client adds a new job category not previously scored (requires new job template and criteria review)
- Recruiter override rate exceeds 30% for any job category over a 30-day period
- Bias monitoring flags persist for 2+ consecutive months
- Client expands to a new jurisdiction with AI hiring regulations
- Manatal releases a major AI engine update (review impact on scoring consistency)
- OpenAI deprecates GPT-5.4 mini model (migrate to successor model, re-validate scoring)
Alternatives
Brainner AI Bolt-On with Existing ATS
Instead of migrating to Manatal, deploy Brainner ($34–$99/month) as an AI screening overlay on top of the client's existing ATS (Greenhouse, Lever, Bullhorn, BambooHR, etc.). Brainner ingests candidates via ATS integration, applies AI scoring with explainable match reasoning, and writes results back to the existing system. No ATS migration required.
- PROS: Zero disruption to existing recruiter workflows; no ATS migration risk; ATS-agnostic; explainable AI scoring built-in; lower monthly cost than full ATS replacement.
- CONS: Additional tool to manage alongside existing ATS (two vendors instead of one); limited to screening — no ATS features like pipeline management; per-candidate overage charges ($9.95/100 candidates) can add up for high-volume staffing agencies; less integrated experience than an all-in-one platform.
- RECOMMEND WHEN: Client has a well-functioning ATS they do not want to replace, or when the ATS has features (like Bullhorn's VMS integrations) that cannot be replicated in Manatal.
Zoho Recruit for Zoho Ecosystem Clients
Deploy Zoho Recruit ($25–$50/user/month) with built-in AI candidate matching for clients already using Zoho One, Zoho CRM, or other Zoho products. Leverages existing identity management, integrates natively with Zoho's suite, and offers a free tier for very small operations.
- PROS: Native integration with Zoho ecosystem (CRM, email, analytics); lower cost than Manatal Enterprise; free plan available for 1 active job; 20–30% MSP partner commission through Zoho partner program; familiar interface for Zoho users.
- CONS: AI scoring capabilities are less sophisticated than Manatal's dedicated AI recommendation engine; free and lower tiers are severely limited; not ideal for staffing agencies with high candidate volumes; fewer job board integrations than Manatal.
- RECOMMEND WHEN: Client is already a Zoho shop with 3+ Zoho products deployed, or is extremely budget-constrained (<$500/month total budget for the project).
Fully Custom Build with OpenAI API + Open Source Stack
Build a completely custom applicant scoring pipeline using OpenAI GPT-5.4 mini for scoring, text-embedding-3-small for semantic matching, spaCy for NLP parsing, Apache Tika for document extraction, and PostgreSQL for data storage. Deploy on Azure Functions or AWS Lambda. Integrate via API with the client's existing ATS.
Tradeoffs
- PROS: Maximum customization of scoring criteria and weights
- PROS: No vendor lock-in
- PROS: Lowest per-candidate cost at scale ($0.01–$0.05/candidate)
- PROS: Full control over AI model selection and prompt engineering
- PROS: Can be white-labeled as MSP's proprietary solution
- CONS: Highest implementation complexity (8–16 weeks, requires ML/NLP developer expertise)
- CONS: MSP bears full responsibility for accuracy, bias, and compliance
- CONS: No vendor support — all troubleshooting falls on MSP
- CONS: Requires ongoing model maintenance as OpenAI releases new versions
- CONS: Significant upfront development cost ($15,000–$30,000)
RECOMMEND WHEN: Client is a large staffing agency (50+ recruiters) with unique scoring requirements that no SaaS product addresses, or the MSP wants to build a reusable white-label AI screening product to sell across multiple clients.
Enterprise Platform: Greenhouse + Eightfold AI
Deploy Greenhouse as the ATS ($6,000–$25,000+/year) with Eightfold AI as the talent intelligence layer ($650+/month for mid-market). Provides the most advanced AI-driven talent matching, skills-based hiring, internal mobility intelligence, and deep-learning candidate scoring.
Tradeoffs
- PROS: Most advanced AI in the market (deep learning, not just NLP)
- PROS: Best-in-class structured hiring and bias reduction
- PROS: Greenhouse's 500+ integrations
- PROS: Eightfold's skills ontology covers 1M+ skills
- PROS: Excellent for diversity hiring goals
- PROS: Strong enterprise compliance and audit trail
- CONS: Highest total cost ($30,000–$75,000+/year combined)
- CONS: Long implementation timeline (4–12 weeks with vendor professional services)
- CONS: Overkill for SMBs with <50 employees or <500 applicants/month
- CONS: Requires dedicated HR operations staff to manage
- CONS: Vendor professional services fees on top of subscription
RECOMMEND WHEN: Client is a mid-market to enterprise staffing operation (100+ employees, 1,000+ applicants/month) with budget for best-in-class tooling, strong diversity hiring goals, and dedicated HR operations staff.
Bullhorn + Amplify AI for Staffing Agencies
Deploy Bullhorn ATS/CRM (custom enterprise pricing) with the Bullhorn Amplify AI module specifically designed for staffing agencies. Amplify AI automates candidate matching, job matching, and placement optimization using staffing-specific AI models.
Tradeoffs
- PROS: Purpose-built for staffing agencies (not generic HR)
- PROS: 110+ VMS integrations for contract staffing
- PROS: AI optimized for placement revenue, not just hiring
- PROS: Strong CRM capabilities for candidate relationship management
- PROS: Dominant market position in staffing industry
- CONS: Enterprise pricing (typically $100+/user/month, custom quotes only)
- CONS: No published pricing makes budgeting difficult
- CONS: Long sales cycle and implementation
- CONS: Not suitable for corporate HR teams (staffing agencies only)
- CONS: Heavy platform — overkill for small agencies with <10 recruiters
RECOMMEND WHEN: Client is a staffing/recruiting agency (not a corporate HR department) with 10+ recruiters, active VMS/MSP vendor relationships, and budget for an enterprise-grade staffing platform.