
Implementation Guide: Score and rank applicants against job requirements using resume and assessment data

Step-by-step implementation guide for deploying AI to score and rank applicants against job requirements using resume and assessment data for HR & Staffing clients.

Hardware Procurement

Dell UltraSharp U2723QE 27" 4K USB-C Monitor

Dell Technologies · U2723QE · Qty: 10

$420 per unit (MSP cost) / $550 suggested resale

Secondary monitor for recruiters to enable side-by-side candidate comparison — viewing AI-ranked candidate list alongside full resume detail. Improves recruiter throughput during high-volume screening. One per recruiter workstation.

Dell OptiPlex 7020 Micro Desktop

Dell Technologies · D15U006 · Qty: 2

$950 per unit (MSP cost) / $1,250 suggested resale

Shared kiosk or conference room workstations for hiring managers to review AI-scored candidate dashboards during intake meetings. Only needed if client lacks modern workstations in interview rooms. Includes i5-14500T, 16GB RAM, 256GB SSD — sufficient for browser-based SaaS access.

Software Procurement

Manatal ATS — Enterprise Plan

Manatal · Enterprise Plan · Qty: 10 users

$35/user/month billed annually ($4,200/year for 10 users) — MSP negotiated; resell at $45/user/month

Core applicant tracking system with built-in AI recommendation engine that scores and ranks candidates against job requirements. Includes AI-powered candidate sourcing, resume parsing, customizable pipeline stages, and integrations with LinkedIn, Indeed, and 2,500+ job boards.

OpenAI API — GPT-5.4 mini

OpenAI · GPT-5.4 mini · Usage-based API

$0.15/million input tokens + $0.60/million output tokens; estimated $25–$75/month for 500–2,000 applicants/month

Powers the custom scoring enhancement layer for detailed, criteria-weighted resume analysis with explainable scoring breakdowns. Used when clients need granular scoring beyond Manatal's built-in AI.

OpenAI API — text-embedding-3-small

OpenAI · text-embedding-3-small · Usage-based API

$0.02/million tokens; estimated $2–$5/month for typical volume

Generates semantic embeddings for job descriptions and resumes to compute cosine similarity scores for semantic matching — identifies candidates whose experience closely aligns with job requirements even when terminology differs.

Azure OpenAI Service

Microsoft · Usage-based API (CSP)

~10% premium over standard OpenAI pricing; estimated $30–$85/month

Enterprise-grade alternative to direct OpenAI API for clients requiring SOC 2 compliance, data residency guarantees, or GDPR-compliant processing. Recommended for clients with EU operations or regulated industries. Deploy through MSP's existing Microsoft CSP agreement for 15–20% margin.
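Switching the custom scoring layer between direct OpenAI and Azure OpenAI is a small client swap. A minimal sketch, assuming an Azure OpenAI resource with the scoring model already deployed — the endpoint, api_version, and deployment name below are placeholders, not values from this guide:

azure_openai_client.py
python
# Azure OpenAI drop-in for the scoring layer (sketch; endpoint, api_version,
# and deployment name are client-specific placeholders)
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ['AZURE_OPENAI_API_KEY'],
    azure_endpoint='https://<client-resource>.openai.azure.com',
    api_version='2024-06-01',
)

# Azure routes requests by *deployment* name rather than base model name
response = client.chat.completions.create(
    model='<scoring-deployment-name>',
    messages=[{'role': 'user', 'content': 'ping'}],
)
print(response.choices[0].message.content)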

Brainner AI Resume Screening

Brainner · SaaS subscription + usage-based overage

$34–$99/month base + $9.95 per 100 additional candidates over quota

Alternative bolt-on AI screening layer for clients who wish to retain their existing ATS (Greenhouse, Lever, Bullhorn, etc.) rather than migrating to Manatal. Provides explainable AI scoring per candidate with ATS-agnostic integration.

Zoho Recruit

Zoho · Per-user SaaS subscription

$50/user/month billed annually; partner commission 20–30%

Budget-conscious alternative ATS with AI candidate matching for clients already in the Zoho ecosystem. Strong for small staffing firms needing ATS + CRM without high cost.

TestGorilla Pre-Employment Assessments

TestGorilla · Scale plan · Qty: 1 subscription

$75–$115/month (Scale plan); resell at $150/month

Pre-employment skills testing platform that generates structured assessment scores fed into the AI scoring pipeline. Provides cognitive ability, role-specific skills, and personality assessments that complement resume-based scoring.

Zapier — Professional Plan

Zapier · Professional Plan · Qty: 1 subscription

$49.99/month for 2,000 tasks; resell bundled in managed services

Integration middleware connecting Manatal/ATS to HRIS (BambooHR, Rippling), communication tools (Slack, Teams), and the custom scoring webhook. Used when native integrations are insufficient.

Microsoft Entra ID P1 (Azure AD)

Microsoft · Per-user SaaS (via Microsoft 365)

$6/user/month (often included in Microsoft 365 Business Premium)

SAML SSO and conditional access for all AI hiring platforms. Enforces MFA, enables single sign-on to Manatal and other tools, and provides audit logging for compliance.

BABL AI Bias Audit Service

BABL AI · Project-based engagement · Qty: Per annual audit

$5,000–$15,000 per annual audit; MSP markup 15–25%

Third-party independent bias audit required by NYC Local Law 144 and recommended as best practice everywhere. Audits AI scoring tool for disparate impact across protected categories and provides publishable audit summary.

Prerequisites

  • Stable internet connectivity of 50+ Mbps at all recruiter workstations with <100ms latency to cloud providers (Azure/AWS US regions)
  • Modern web browsers installed on all recruiter machines — Google Chrome 120+ or Microsoft Edge 120+ required for full Manatal dashboard functionality
  • Microsoft 365 Business Premium or equivalent identity provider (Azure AD/Entra ID) configured with SAML SSO capability for all recruiter and hiring manager accounts
  • Firewall rules permitting outbound HTTPS (port 443) traffic to: *.manatal.com, api.openai.com, *.azurewebsites.net, *.zapier.com, and *.testgorilla.com
  • Existing structured job descriptions for at least 5–10 active roles — each must include: job title, required skills/qualifications, preferred qualifications, years of experience, education requirements, and key responsibilities (a sample template follows this list)
  • Administrative access to any existing ATS (if migrating data) with the ability to export candidate records in CSV or via API
  • Designated client-side project sponsor: HR Manager or Head of Recruiting with authority to define scoring criteria, approve workflows, and sign off on compliance disclosures
  • List of all active job boards and sourcing channels currently used (LinkedIn, Indeed, ZipRecruiter, etc.) with login credentials for integration setup
  • If operating in New York City: legal counsel review of NYC Local Law 144 obligations and commitment to fund annual bias audit ($5,000–$15,000)
  • Python 3.10+ runtime environment available (Azure Functions or AWS Lambda) if deploying the custom scoring enhancement layer — MSP provisions this in their managed cloud tenant
  • DNS and email sending capability (SPF/DKIM configured) for automated candidate notification emails from the ATS platform
  • Data processing agreement (DPA) executed with client covering candidate PII handling, retention periods, and GDPR obligations if processing EU candidates
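The structured job descriptions called out above map directly onto the fields the AI scores against. A minimal sketch of one role captured as data — the field names are illustrative, not a Manatal schema:

Sample structured job description
python
# one role captured as structured data (illustrative field names, not a Manatal schema)
job_description = {
    'job_title': 'Senior Software Engineer',
    'required_skills': ['Python', 'AWS', 'REST APIs', 'PostgreSQL'],
    'preferred_skills': ['Kubernetes', 'Terraform', 'CI/CD'],
    'years_experience': '5+ years software development',
    'education': "Bachelor's in Computer Science or equivalent",
    'key_responsibilities': [
        'Design and ship backend services',
        'Own reliability of production APIs',
    ],
}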

Installation Steps

Step 1: Discovery & Workflow Audit

Conduct a 2–4 hour discovery session with the client's HR team to document their current hiring workflow end-to-end. Map each stage from job requisition creation through offer acceptance. Identify pain points, volume metrics (applicants per role, roles per month, time-to-screen), current ATS usage, job board integrations, and assessment tools. Document the client's scoring priorities — which qualifications matter most, how they currently shortlist candidates, and any compliance obligations (NYC, CO, IL, GDPR). This shapes the entire configuration.

Note

Use a shared document or whiteboard. Capture: (1) number of recruiters and hiring managers, (2) monthly applicant volume, (3) top 10 most-hired roles, (4) existing tech stack, (5) compliance jurisdiction. This discovery output drives every subsequent configuration decision. Budget 1 day for this phase.

Step 2: Provision Manatal ATS Tenant

Create the Manatal Enterprise organization. Sign up at manatal.com using the MSP's partner/reseller email, then configure the client's organization as a sub-account. Select the Enterprise plan ($35/user/month annually) to get unlimited jobs and candidates plus the full AI recommendation engine. Create user accounts for each recruiter and hiring manager identified in discovery.

1. Navigate to https://www.manatal.com and click 'Start Free Trial'
2. Complete organization setup: Company Name, Industry (Staffing/Recruiting), Company Size
3. Under Settings > Subscription, upgrade to the Enterprise Plan ($35/user/month billed annually)
4. Navigate to Settings > Team Members > Invite Members
5. Add each recruiter with role 'Recruiter' and each hiring manager with role 'Hiring Manager'
6. Set the organization timezone and default language

Note

Manatal offers a 14-day free trial — use this to configure and test before committing to paid subscription. The Enterprise plan is required for unlimited candidates and full AI scoring. If the client has fewer than 5 recruiters and limited job volume, the Professional plan ($15/user/month) may suffice but caps at 15 active jobs.

Step 3: Configure SSO and Security

Integrate Manatal with the client's Microsoft Entra ID (Azure AD) for SAML-based single sign-on. This ensures recruiters use their existing corporate credentials, MFA is enforced, and access is logged for compliance. Configure conditional access policies to restrict Manatal access to managed devices and approved locations.

1. In Microsoft Entra Admin Center (entra.microsoft.com): Navigate to Enterprise Applications > New Application > Create your own application
2. Name: 'Manatal ATS' > Select 'Integrate any other application' > Create
3. Under Single sign-on, select SAML
4. Set Identifier (Entity ID): https://app.manatal.com/saml/metadata
5. Set Reply URL: https://app.manatal.com/saml/acs
6. Download the Federation Metadata XML
7. In the Manatal Admin Panel: Navigate to Settings > Security > SAML SSO
8. Upload the Federation Metadata XML from Entra ID
9. Enable 'Require SSO for all users'
10. Test with one recruiter account before enforcing
11. In Entra ID Conditional Access, create policy 'Manatal Access Policy'. Assignments: All recruiters + hiring managers security group
12. Conditions: Any device, any location
13. Grant: Require MFA + Require compliant device
14. Session: Sign-in frequency 12 hours

Note

If the client does not have Microsoft 365 Business Premium or Entra ID P1, SSO may not be available. In that case, enforce strong passwords (16+ characters) directly in Manatal and enable Manatal's built-in 2FA. Document this as a security risk in the handoff.

Step 4: Import Historical Data & Configure Job Templates

Migrate existing candidate data from the client's current system into Manatal. Then create standardized job templates for the client's most common roles, embedding structured requirements that the AI engine will score against. Each job template must include weighted scoring criteria.

Data Migration

1. Export candidates from the existing ATS as CSV (columns: name, email, phone, resume_url, source, stage, applied_date)
2. In Manatal: Settings > Data Import > Upload CSV
3. Map CSV columns to Manatal fields
4. Run the import and verify the record count matches the source
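
Before uploading in step 2, a quick pre-flight check on the export catches missing columns and gives you the record count to verify after import. A sketch, assuming the export file is named candidates_export.csv with the columns from step 1:

Validate the ATS export before import
python
# csv_preflight.py - sanity-check the ATS export before Manatal import (sketch)
import csv

REQUIRED = {'name', 'email', 'phone', 'resume_url', 'source', 'stage', 'applied_date'}

with open('candidates_export.csv', newline='', encoding='utf-8') as f:
    reader = csv.DictReader(f)
    missing = REQUIRED - set(reader.fieldnames or [])
    rows = list(reader)

if missing:
    raise SystemExit(f'Export is missing columns: {sorted(missing)}')
blank_emails = sum(1 for r in rows if not r['email'].strip())
print(f'{len(rows)} records ready for import; {blank_emails} rows have blank emails')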

Job Template Configuration (repeat for each of client's top 10 roles)

1. Navigate to Jobs > Create Job
2. Fill in the structured fields:
   • Job Title (e.g., 'Senior Software Engineer')
   • Department (e.g., 'Engineering')
   • Location (e.g., 'Remote - US')
   • Employment Type: Full-time
   • Required Skills: ['Python', 'AWS', 'REST APIs', 'PostgreSQL'] (tag-based)
   • Required Experience: '5+ years software development'
   • Required Education: 'Bachelor's in Computer Science or equivalent'
   • Nice-to-Have Skills: ['Kubernetes', 'Terraform', 'CI/CD']
   • Salary Range: $130,000 - $160,000
3. Save as Template: Settings > Job Templates > Save Current as Template
4. Repeat for all high-volume roles

Note

The quality of AI scoring is directly proportional to the quality of structured job descriptions. Spend significant time here — poorly defined requirements produce unreliable rankings. Work with the HR sponsor to refine each template. Manatal's AI scores against the skills, experience, and education fields, so these must be comprehensive.

Step 5: Connect Job Boards & Sourcing Channels

Integrate Manatal with the client's active job boards and sourcing platforms so that incoming applications automatically flow into the ATS and get scored by the AI engine. Configure LinkedIn, Indeed, ZipRecruiter, and any niche staffing boards the client uses.

  • LinkedIn: Navigate to Settings > Integrations > Job Boards, select LinkedIn > Connect, authenticate with the client's LinkedIn Recruiter admin account, then enable auto-import of applicants from LinkedIn postings
  • Indeed: Select Indeed > Connect, enter the Indeed employer account credentials, then enable Indeed Sponsored Jobs sync (if applicable)
  • ZipRecruiter: Select ZipRecruiter > Connect and authenticate with the ZipRecruiter employer account
  • Career Page: Navigate to Settings > Career Page, customize branding (logo, colors, description), then copy the embed code or custom domain CNAME
Add CNAME record in client's DNS
dns
careers.clientdomain.com -> CNAME manatal-careers.manatal.com
Verify DNS propagation
shell
nslookup careers.clientdomain.com
Note

Manatal integrates with 2,500+ job boards. Not all require paid board subscriptions — many are free aggregation boards. LinkedIn Recruiter integration requires the client to have an active LinkedIn Recruiter license. If they only have a basic LinkedIn account, candidates from LinkedIn must be imported manually or via the Manatal Chrome extension.

Step 6: Configure AI Scoring Criteria & Weights

Fine-tune Manatal's AI recommendation engine to align with the client's hiring priorities. The AI scores candidates from 0–100 based on skills match, experience match, education match, and job description relevance. Configure the relative importance (weights) of each factor per job category.
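
The weighting idea reduces to a simple weighted average. A minimal sketch — the factor names and weights below are illustrative, not Manatal's internal model:

Weighted scoring sketch
python
# illustrative weighted average; factor names and weights are examples,
# not Manatal's internal scoring model
def weighted_score(factor_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Each factor score is 0-100; weights must sum to 1.0."""
    return round(sum(factor_scores[k] * weights[k] for k in weights), 1)

weights = {'skills': 0.40, 'experience': 0.30, 'education': 0.15, 'relevance': 0.15}
print(weighted_score({'skills': 85, 'experience': 70, 'education': 90, 'relevance': 60}, weights))
# -> 77.5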

1. In Manatal: Navigate to a specific Job > AI Recommendations tab
2. Ensure Required Skills are accurately tagged (the AI weighs these heavily)
3. Add 'Must-Have' qualifications in the job description body
4. Use the 'AI-Suggested Candidates' panel to review initial rankings
5. Mark good matches as 'Qualified' and poor matches as 'Not Fit'. This trains the AI's ranking model for future candidates
6. For granular control, use Pipeline Stage scoring: navigate to Settings > Pipeline > Custom Stages
7. Create stages: 'AI Screened - High Match (80-100)', 'AI Screened - Medium Match (50-79)', 'AI Screened - Low Match (0-49)'
8. Configure automation: auto-move candidates to a stage based on AI score threshold

Note

Manatal's AI scoring improves over time as recruiters provide feedback (qualifying/disqualifying candidates). The first 2 weeks are a calibration period; expect to manually review and correct AI rankings. Document the target score thresholds with the HR sponsor: e.g., 80+ = auto-advance to phone screen, 50–79 = recruiter review, <50 = route to human review (never auto-reject; see Step 9).
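
The stage automation in steps 6–8 is just a threshold lookup; a sketch mirroring the three stages defined above:

Score-to-stage routing sketch
python
# map an AI score to the pipeline stages created in step 7 (sketch)
def stage_for_score(score: float) -> str:
    if score >= 80:
        return 'AI Screened - High Match (80-100)'
    if score >= 50:
        return 'AI Screened - Medium Match (50-79)'
    return 'AI Screened - Low Match (0-49)'

assert stage_for_score(91) == 'AI Screened - High Match (80-100)'
assert stage_for_score(62) == 'AI Screened - Medium Match (50-79)'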

Step 7: Deploy Custom Scoring Enhancement Layer (Optional)

For clients needing deeper scoring explainability or custom criteria weighting beyond Manatal's built-in AI, deploy a custom scoring microservice using the OpenAI GPT-5.4 mini API. This service receives candidate data via webhook from Manatal, runs detailed criteria-by-criteria analysis, generates an explainable score breakdown, and writes results back to Manatal via API.

1. Provision Azure Function App for the scoring microservice
bash
az login
az group create --name rg-ai-hiring-scoring --location eastus
az storage account create --name stairhiring<clientid> --resource-group rg-ai-hiring-scoring --location eastus --sku Standard_LRS
az functionapp create --resource-group rg-ai-hiring-scoring --consumption-plan-location eastus --runtime python --runtime-version 3.11 --functions-version 4 --name func-applicant-scorer-<clientid> --storage-account stairhiring<clientid>
2. Set environment variables
bash
az functionapp config appsettings set --name func-applicant-scorer-<clientid> --resource-group rg-ai-hiring-scoring --settings OPENAI_API_KEY=sk-xxxxxxxxxxxx MANATAL_API_KEY=xxxxxxxxxxxx MANATAL_BASE_URL=https://api.manatal.com/open/v3
3. Deploy the scoring function (see custom_ai_components for full code)
bash
cd applicant-scorer
func azure functionapp publish func-applicant-scorer-<clientid>
4. Configure the Manatal webhook to trigger on new candidate applications: Navigate to Settings > Integrations > Webhooks > Add Webhook
5. Set Event to: 'Candidate Applied'
6. Set URL to: https://func-applicant-scorer-<clientid>.azurewebsites.net/api/score_candidate
7. Set Method to: POST
8. Set Secret to: <generate-256bit-secret>

Note

This step is OPTIONAL and only recommended for clients with complex scoring requirements (e.g., staffing agencies managing 50+ distinct job categories with nuanced weighting). For most SMB clients, Manatal's built-in AI scoring is sufficient. The custom layer adds $25–$75/month in API costs plus MSP development time (16–24 hours initial build). Bill as a one-time project ($4,000–$6,000) plus ongoing managed service ($500/month).
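
Before wiring up the Manatal webhook, the deployed function can be smoke-tested by POSTing a signed payload directly. A sketch, assuming a candidate/job pair that exists in the test tenant; the signature matches the HMAC check in the function code below:

Smoke-test the scoring webhook
python
# smoke_test_webhook.py - exercise the scoring endpoint with a signed payload
import hashlib
import hmac
import json

import httpx

FUNC_URL = 'https://func-applicant-scorer-<clientid>.azurewebsites.net/api/score_candidate'
WEBHOOK_SECRET = '<same-256bit-secret-configured-in-manatal>'

payload = json.dumps({'candidate_id': '12345', 'job_id': '678'}).encode()  # hypothetical IDs
signature = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()

resp = httpx.post(
    FUNC_URL,
    content=payload,
    headers={'Content-Type': 'application/json', 'X-Webhook-Signature': signature},
    timeout=60,
)
print(resp.status_code, resp.text)  # expect 200 with {"status": "scored", ...}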

Step 8: Integrate Pre-Employment Assessments

Connect TestGorilla (or the client's existing assessment platform) to the ATS so that assessment scores are incorporated into the overall candidate ranking. Configure automated assessment invitations triggered when a candidate reaches a specific pipeline stage.

TestGorilla + Manatal Integration via Zapier

1. In Zapier: Create New Zap
2. Trigger: Manatal > Candidate Moved to Stage (Stage: 'AI Screened - High Match (80-100)')
3. Action: TestGorilla > Invite Candidate to Assessment (Assessment: select the appropriate role-based assessment; Candidate Email: {{candidate_email}}; Candidate Name: {{candidate_name}})
4. Create a second Zap for results. Trigger: TestGorilla > Assessment Completed. Action: Manatal > Update Candidate (Custom Field 'Assessment Score': {{testgorilla_score}}; Add Note: 'TestGorilla Assessment: {{score}}% - {{assessment_name}}')
5. Test both Zaps with a test candidate
6. Turn on both Zaps

Note

TestGorilla offers 400+ scientifically validated assessments across cognitive ability, programming skills, language proficiency, and personality. Work with the HR sponsor to select 2–3 assessments per job category. If the client already uses Criteria Corp, Vervoe, or HireVue for assessments, substitute accordingly — Zapier supports most major platforms. Assessment data significantly improves ranking accuracy when combined with resume-based AI scoring.

Step 9: Configure Compliance Disclosures & Human-in-the-Loop Safeguards

Implement all required regulatory disclosures and ensure the system enforces human review before any adverse employment decisions. This is critical for legal compliance in NYC, Colorado, Illinois, and GDPR jurisdictions — and is best practice everywhere.

1. Candidate Disclosure Notice: Add to the career page and application form. In Manatal: Settings > Career Page > Custom Content. Add the disclosure text (see custom_ai_components for the template).
2. Configure the Human-in-the-Loop requirement. In Manatal: Settings > Pipeline > Automation Rules. CRITICAL: Do NOT configure auto-reject rules. Instead configure: candidates scoring <50 → move to 'Human Review Required' stage. Require a recruiter to manually review and take action on every candidate.
3. Create the Compliance Audit Trail. In Manatal: Settings > Activity Log > Enable Full Audit Logging. Ensure all candidate status changes are logged with user, timestamp, and reason.
4. Set the Data Retention Policy. In Manatal: Settings > Data Retention. Set auto-deletion of rejected candidates after 12 months and of all candidate data after 24 months (or per client policy). Enable the candidate data deletion request workflow (GDPR right to erasure).
5. If in NYC jurisdiction: schedule the bias audit. Contact BABL AI (hello@babl.ai) to schedule the initial bias audit. Provide: list of AI tools used, scoring methodology documentation, and demographic data if available. The audit takes 2–4 weeks and must be completed before go-live in NYC.

Warning

THIS STEP IS NON-NEGOTIABLE. Failure to implement proper disclosures and human review can expose the client to significant legal liability. NYC Local Law 144 penalties range from $500–$1,500 per violation per candidate. EEOC treats AI scoring as a selection procedure subject to Title VII disparate impact analysis. Always recommend the client engage employment counsel to review the compliance configuration. Document everything for the client handoff.

Step 10: Integration Testing with Historical Candidate Data

Before going live, run the AI scoring system against a known set of historical candidates whose hiring outcomes are already determined. This validates that the AI rankings correlate with actual hiring decisions and identifies any scoring anomalies that need calibration.

1. Select the test dataset: 50–100 historical candidates across 3–5 different roles. Include candidates who were hired, interviewed-but-rejected, and screened out. Export from the old ATS or compile from HR team records.
2. Import test candidates into Manatal (use a test job posting). Navigate to Jobs > Create Job > '[TEST] Senior Developer - Calibration'. Upload test resumes via bulk import.
3. Review AI scores against known outcomes. Export AI scores via the Manatal API (see command below).
4. Calculate the hit rate using the Python script below. Target: 70%+ of actual hires should appear in the AI's top 30%.
5. Document results and adjust scoring criteria if needed.
6. Delete the test job posting after validation.

Export AI scores for test candidates from Manatal API
bash
curl -H 'Authorization: Token <MANATAL_API_KEY>' 'https://api.manatal.com/open/v3/candidates/?job_id=<test_job_id>&ordering=-score' | python -m json.tool > test_scores.json
Compute the share of known hires ranked in the AI's top 30%
python
import json

# hired_ids.json: IDs of known hires, compiled from HR records (our file convention)
data = json.load(open('test_scores.json'))
candidates = data.get('results', data)
hired_ids = {str(i) for i in json.load(open('hired_ids.json'))}
ranked = sorted(candidates, key=lambda c: c.get('score') or 0, reverse=True)
top_30 = {str(c['id']) for c in ranked[: max(1, len(ranked) * 3 // 10)]}
hit_rate = len(top_30 & hired_ids) / max(1, len(hired_ids))
print(f'{hit_rate:.0%} of known hires rank in the AI top 30% (target: 70%+)')
Note

This calibration step is crucial for building recruiter trust in the AI system. If the AI consistently ranks known-good hires in its top 30%, recruiters will adopt the tool. If correlation is poor, revisit job description quality and scoring criteria weights before go-live. Plan 5–10 business days for this phase including iterations.

Step 11: Recruiter Training & Controlled Rollout

Train all recruiters and hiring managers on the new AI-scored workflow. Begin with a controlled rollout on 2–3 job postings before expanding to all open roles. Provide hands-on training sessions and written documentation.

1. Overview: What the AI scoring does and doesn't do (15 min)
2. Demo: Walk through a scored candidate list for a real open role (20 min)
3. Hands-on: Each recruiter reviews 10 AI-scored candidates and compares them to their own judgment (30 min)
4. Workflow changes: New pipeline stages, when human review is required (15 min)
5. Compliance: Candidate disclosure requirements, what to say if candidates ask about AI (15 min)
6. Q&A and feedback (25 min)
  • Week 1: Enable AI scoring on 2-3 highest-volume roles only
  • Week 2: Review scoring accuracy, gather recruiter feedback, adjust criteria
  • Week 3: Expand to all active roles
  • Week 4: Full production — all new job postings use AI scoring by default
Note

Recruiter adoption is the biggest risk factor. Common resistance points: 'AI will replace my job' (address: AI handles initial screening so you can focus on relationship building and assessment), 'I don't trust the scores' (address: show calibration results, emphasize scores as decision-support not decision-making). Schedule a follow-up training session 2 weeks post-launch to address real-world questions.

Step 12: Go-Live & Production Monitoring Setup

Transition from controlled rollout to full production. Configure monitoring dashboards and alerting so the MSP can proactively manage the platform. Establish the ongoing managed service cadence.

1. Confirm all job postings are active with AI scoring enabled
2. Verify all integrations are functioning: job board feeds flowing into Manatal, assessment invitations triggering correctly via Zapier, the custom scoring webhook responding (if deployed), and SSO authentication working for all users
Set up monitoring alert for custom scoring layer (if deployed)
bash
az monitor metrics alert create --name 'ScoringFunctionErrors' --resource-group rg-ai-hiring-scoring --scopes /subscriptions/<sub-id>/resourceGroups/rg-ai-hiring-scoring/providers/Microsoft.Web/sites/func-applicant-scorer-<clientid> --condition "total Http5xx > 5" --window-size 1h --evaluation-frequency 5m --action-group <msp-alerts-action-group>
1. Configure weekly reporting in Manatal: Reports > Scheduled Reports > Create. Report: 'Weekly AI Screening Summary'. Metrics: candidates screened, average AI score, top-ranked candidates per role, time-to-screen. Recipients: HR Manager + MSP account manager. Schedule: every Monday 8:00 AM
2. Document the go-live date and baseline metrics for SLA tracking
Note

The first 30 days post go-live require the most MSP attention. Plan for 2–4 hours/week of support during this period: answering recruiter questions, tuning scoring criteria, resolving integration hiccups. After stabilization, ongoing effort drops to 2–4 hours/month for routine maintenance.

Custom AI Components

Applicant Scoring Microservice

Type: agent

A serverless Azure Function that receives webhook events from Manatal when a new candidate applies, retrieves the candidate's resume and the job requirements, runs a detailed criteria-by-criteria analysis using GPT-5.4 mini, generates a weighted score with explainable reasoning, and writes the results back to the candidate record in Manatal. This provides granular, auditable scoring beyond the ATS's built-in AI.

Implementation

applicant-scorer/function_app.py
python
# applicant-scorer/function_app.py
# Azure Functions v4 Python - HTTP Trigger

import azure.functions as func
import json
import os
import logging
import hmac
import hashlib
import httpx
from openai import OpenAI

app = func.FunctionApp()

client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
MANATAL_API_KEY = os.environ['MANATAL_API_KEY']
MANATAL_BASE_URL = os.environ.get('MANATAL_BASE_URL', 'https://api.manatal.com/open/v3')
WEBHOOK_SECRET = os.environ.get('WEBHOOK_SECRET', '')

SCORING_PROMPT = """You are an expert HR recruiter and hiring analyst. You will be given:
1. A JOB DESCRIPTION with requirements
2. A CANDIDATE RESUME

Score the candidate against each requirement category below on a scale of 0-100.
Provide a brief explanation for each score.

Categories and weights:
- Technical Skills Match (weight: 30%): How well do the candidate's technical skills align with required and preferred skills?
- Experience Level Match (weight: 25%): Does the candidate's years and type of experience match requirements?
- Education Match (weight: 15%): Does the candidate's education meet minimum and preferred requirements?
- Industry Relevance (weight: 15%): Has the candidate worked in relevant industries or domains?
- Overall Resume Quality (weight: 15%): Is the resume well-structured, clear, and does it demonstrate impact?

Respond ONLY with valid JSON in this exact format:
{
  "overall_score": <weighted_average_0_to_100>,
  "categories": {
    "technical_skills": {"score": <0-100>, "weight": 0.30, "explanation": "<brief explanation>"},
    "experience_level": {"score": <0-100>, "weight": 0.25, "explanation": "<brief explanation>"},
    "education": {"score": <0-100>, "weight": 0.15, "explanation": "<brief explanation>"},
    "industry_relevance": {"score": <0-100>, "weight": 0.15, "explanation": "<brief explanation>"},
    "resume_quality": {"score": <0-100>, "weight": 0.15, "explanation": "<brief explanation>"}
  },
  "top_strengths": ["<strength 1>", "<strength 2>", "<strength 3>"],
  "key_gaps": ["<gap 1>", "<gap 2>"],
  "recommendation": "STRONG_MATCH" | "GOOD_MATCH" | "PARTIAL_MATCH" | "WEAK_MATCH"
}"""


def verify_webhook_signature(payload: bytes, signature: str) -> bool:
    if not WEBHOOK_SECRET:
        return True  # Skip verification if no secret configured
    expected = hmac.new(WEBHOOK_SECRET.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)


def get_candidate_details(candidate_id: str) -> dict:
    headers = {'Authorization': f'Token {MANATAL_API_KEY}'}
    with httpx.Client() as http:
        resp = http.get(f'{MANATAL_BASE_URL}/candidates/{candidate_id}/', headers=headers)
        resp.raise_for_status()
        return resp.json()


def get_job_details(job_id: str) -> dict:
    headers = {'Authorization': f'Token {MANATAL_API_KEY}'}
    with httpx.Client() as http:
        resp = http.get(f'{MANATAL_BASE_URL}/jobs/{job_id}/', headers=headers)
        resp.raise_for_status()
        return resp.json()


def extract_resume_text(candidate: dict) -> str:
    """Extract resume text from candidate record.
    Manatal stores parsed resume content in candidate profile fields."""
    parts = []
    if candidate.get('resume_content'):
        parts.append(candidate['resume_content'])
    if candidate.get('experiences'):
        for exp in candidate['experiences']:
            parts.append(f"Experience: {exp.get('title', '')} at {exp.get('company', '')} ({exp.get('start_date', '')}-{exp.get('end_date', 'Present')}): {exp.get('description', '')}")
    if candidate.get('educations'):
        for edu in candidate['educations']:
            parts.append(f"Education: {edu.get('degree', '')} in {edu.get('field', '')} from {edu.get('school', '')} ({edu.get('end_date', '')})")
    if candidate.get('skills'):
        parts.append(f"Skills: {', '.join([s.get('name', '') for s in candidate['skills']])}")
    return '\n'.join(parts) if parts else 'No resume content available'


def build_job_description_text(job: dict) -> str:
    parts = [f"Job Title: {job.get('position_name', '')}"]
    if job.get('description'):
        parts.append(f"Description: {job['description']}")
    if job.get('requirements'):
        parts.append(f"Requirements: {job['requirements']}")
    if job.get('skills'):
        parts.append(f"Required Skills: {', '.join([s.get('name', '') for s in job['skills']])}")
    if job.get('experience_level'):
        parts.append(f"Experience Level: {job['experience_level']}")
    if job.get('education_level'):
        parts.append(f"Education: {job['education_level']}")
    return '\n'.join(parts)


def score_candidate(job_text: str, resume_text: str) -> dict:
    response = client.chat.completions.create(
        model='gpt-5.4-mini',
        messages=[
            {'role': 'system', 'content': SCORING_PROMPT},
            {'role': 'user', 'content': f'JOB DESCRIPTION:\n{job_text}\n\n---\n\nCANDIDATE RESUME:\n{resume_text}'}
        ],
        temperature=0.1,
        max_tokens=1000,
        response_format={'type': 'json_object'}
    )
    return json.loads(response.choices[0].message.content)


def update_candidate_with_score(candidate_id: str, score_result: dict):
    headers = {
        'Authorization': f'Token {MANATAL_API_KEY}',
        'Content-Type': 'application/json'
    }
    note_text = f"""🤖 AI Scoring Results (Custom Enhancement Layer)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Overall Score: {score_result['overall_score']}/100
Recommendation: {score_result['recommendation']}

Category Breakdown:
• Technical Skills: {score_result['categories']['technical_skills']['score']}/100 - {score_result['categories']['technical_skills']['explanation']}
• Experience Level: {score_result['categories']['experience_level']['score']}/100 - {score_result['categories']['experience_level']['explanation']}
• Education: {score_result['categories']['education']['score']}/100 - {score_result['categories']['education']['explanation']}
• Industry Relevance: {score_result['categories']['industry_relevance']['score']}/100 - {score_result['categories']['industry_relevance']['explanation']}
• Resume Quality: {score_result['categories']['resume_quality']['score']}/100 - {score_result['categories']['resume_quality']['explanation']}

Top Strengths: {', '.join(score_result['top_strengths'])}
Key Gaps: {', '.join(score_result['key_gaps'])}"""

    with httpx.Client() as http:
        # Add scoring note to candidate
        http.post(
            f'{MANATAL_BASE_URL}/candidates/{candidate_id}/notes/',
            headers=headers,
            json={'content': note_text}
        )
        # Update custom field with numeric score for sorting/filtering
        http.patch(
            f'{MANATAL_BASE_URL}/candidates/{candidate_id}/',
            headers=headers,
            json={'custom_fields': {'ai_enhanced_score': score_result['overall_score']}}
        )


@app.route(route='score_candidate', methods=['POST'])
def score_candidate_endpoint(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Applicant scoring webhook triggered')

    # Verify webhook signature
    signature = req.headers.get('X-Webhook-Signature', '')
    if not verify_webhook_signature(req.get_body(), signature):
        return func.HttpResponse('Unauthorized', status_code=401)

    try:
        payload = req.get_json()
        candidate_id = payload.get('candidate_id') or payload.get('data', {}).get('candidate_id')
        job_id = payload.get('job_id') or payload.get('data', {}).get('job_id')

        if not candidate_id or not job_id:
            return func.HttpResponse('Missing candidate_id or job_id', status_code=400)

        # Fetch full details from Manatal API
        candidate = get_candidate_details(candidate_id)
        job = get_job_details(job_id)

        # Extract text
        resume_text = extract_resume_text(candidate)
        job_text = build_job_description_text(job)

        # Score with GPT-5.4 mini
        score_result = score_candidate(job_text, resume_text)

        # Write results back to Manatal
        update_candidate_with_score(candidate_id, score_result)

        logging.info(f'Scored candidate {candidate_id} for job {job_id}: {score_result["overall_score"]}/100')

        return func.HttpResponse(
            json.dumps({'status': 'scored', 'candidate_id': candidate_id, 'score': score_result['overall_score']}),
            mimetype='application/json',
            status_code=200
        )
    except Exception as e:
        logging.error(f'Scoring error: {str(e)}')
        return func.HttpResponse(f'Error: {str(e)}', status_code=500)
applicant-scorer/requirements.txt
text
azure-functions
openai>=1.30.0
httpx>=0.27.0
applicant-scorer/host.json
json
{
  "version": "2.0",
  "logging": {
    "applicationInsights": {
      "samplingSettings": {
        "isEnabled": true,
        "excludedTypes": "Request"
      }
    }
  },
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}

Semantic Job-Resume Matching Engine

Type: skill

A supplementary scoring component that uses OpenAI text-embedding-3-small to compute semantic similarity between a job description and a resume. Unlike keyword matching, this captures meaning — for example, a resume mentioning 'built microservices on AWS Lambda' would semantically match a job requiring 'serverless cloud architecture experience' even though the exact phrases differ. The cosine similarity score (0.0–1.0) is incorporated as a bonus factor in the overall scoring.

Implementation:

semantic_matcher.py
python
# semantic_matcher.py
# Semantic similarity scoring using OpenAI text-embedding-3-small
# Can be integrated into the scoring microservice or run standalone

import numpy as np
from openai import OpenAI
import os

client = OpenAI(api_key=os.environ['OPENAI_API_KEY'])

def get_embedding(text: str, model: str = 'text-embedding-3-small') -> list[float]:
    """Generate embedding vector for input text."""
    # Truncate to ~8000 tokens worth of text to stay within limits
    text = text[:32000]  # Rough character limit
    response = client.embeddings.create(
        input=text,
        model=model
    )
    return response.data[0].embedding

def cosine_similarity(vec_a: list[float], vec_b: list[float]) -> float:
    """Compute cosine similarity between two vectors."""
    a = np.array(vec_a)
    b = np.array(vec_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def compute_semantic_match(job_description: str, resume_text: str) -> dict:
    """Compute semantic similarity between job description and resume.
    
    Returns:
        dict with similarity_score (0.0-1.0), match_level, and scaled_score (0-100)
    """
    job_embedding = get_embedding(job_description)
    resume_embedding = get_embedding(resume_text)
    
    similarity = cosine_similarity(job_embedding, resume_embedding)
    
    # Scale similarity to a more intuitive range
    # Typical cosine similarities for text range from 0.5-0.95
    # Map 0.5-0.95 to 0-100 for scoring purposes
    scaled_score = max(0, min(100, (similarity - 0.5) * (100 / 0.45)))
    
    if scaled_score >= 75:
        match_level = 'HIGH'
    elif scaled_score >= 50:
        match_level = 'MEDIUM'
    elif scaled_score >= 25:
        match_level = 'LOW'
    else:
        match_level = 'VERY_LOW'
    
    return {
        'raw_cosine_similarity': round(similarity, 4),
        'scaled_score': round(scaled_score, 1),
        'match_level': match_level
    }

def batch_rank_candidates(job_description: str, candidates: list[dict]) -> list[dict]:
    """Rank multiple candidates against a single job description.
    
    Args:
        job_description: Full job description text
        candidates: List of dicts with 'id' and 'resume_text' keys
    
    Returns:
        Sorted list of candidates with semantic scores, highest first
    """
    job_embedding = get_embedding(job_description)
    
    results = []
    for candidate in candidates:
        resume_embedding = get_embedding(candidate['resume_text'])
        similarity = cosine_similarity(job_embedding, resume_embedding)
        scaled_score = max(0, min(100, (similarity - 0.5) * (100 / 0.45)))
        results.append({
            'candidate_id': candidate['id'],
            'semantic_score': round(scaled_score, 1),
            'raw_similarity': round(similarity, 4)
        })
    
    results.sort(key=lambda x: x['semantic_score'], reverse=True)
    
    # Add rank
    for i, r in enumerate(results):
        r['rank'] = i + 1
    
    return results

# Integration with main scoring microservice:
# In score_candidate() function, add after GPT-5.4 mini scoring:
#
# semantic_result = compute_semantic_match(job_text, resume_text)
# final_score = (score_result['overall_score'] * 0.7) + (semantic_result['scaled_score'] * 0.3)
# score_result['semantic_match'] = semantic_result
# score_result['final_composite_score'] = round(final_score, 1)
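
For a standalone sanity check, the matcher can be run directly; the inputs below are illustrative and the printed values will vary:

Usage sketch
python
# usage sketch (illustrative inputs; requires OPENAI_API_KEY in the environment)
from semantic_matcher import compute_semantic_match

job = 'Seeking an engineer with serverless cloud architecture experience on AWS.'
resume = 'Built event-driven microservices on AWS Lambda and API Gateway.'
print(compute_semantic_match(job, resume))
# e.g. {'raw_cosine_similarity': 0.78, 'scaled_score': 62.2, 'match_level': 'MEDIUM'}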

Candidate Disclosure Notice Template

Type: prompt

Standardized legal disclosure text that must be displayed to all candidates before AI screening is applied. Covers NYC Local Law 144, Colorado AI Act, Illinois requirements, and GDPR Article 22. This template should be reviewed by employment counsel and customized per client.

Implementation:

plaintext
# AI Screening Disclosure Notice

[CLIENT COMPANY NAME] — Candidate Notice

Use of Artificial Intelligence in Our Hiring Process

As part of our commitment to fair and efficient hiring, [CLIENT COMPANY NAME] uses automated employment decision tools (AEDTs) to assist in evaluating candidates for open positions. We want you to be fully informed about how these tools work and your rights.

What AI Tools We Use

We use AI-powered software to:

  • Parse and analyze resume content against job requirements
  • Score and rank candidates based on qualifications match
  • Suggest candidates who may be strong fits for open positions

How It Works

Our AI screening system evaluates your application materials (resume, cover letter, assessment results) against the stated requirements for the position you applied to. The system considers factors including:

  • Skills and qualifications match
  • Relevant experience
  • Education credentials
  • Assessment scores (if applicable)

The AI generates a preliminary score to help our recruiting team prioritize candidate review. This score is used as a decision-support tool only — all hiring decisions are made by human recruiters and hiring managers.

Your Rights

  • Human Review: No candidate is rejected solely by an automated system. A human recruiter reviews all candidates, including those with lower AI scores.
  • Alternative Process: You may request that your application be reviewed without the use of AI screening tools. To make this request, email [COMPLIANCE_EMAIL] with the subject line "AI Opt-Out Request" and your application reference number.
  • Data Access: You may request information about what data was collected and how it was used in your evaluation. Contact [COMPLIANCE_EMAIL].
  • Data Deletion: You may request deletion of your personal data from our systems at any time, subject to legal retention requirements.

Bias Audit (NYC Applicants)

In compliance with New York City Local Law 144, an independent bias audit of our AI screening tools was conducted on [AUDIT_DATE] by [AUDITOR_NAME]. A summary of the most recent bias audit results is available at [AUDIT_RESULTS_URL].

EU/UK Applicants (GDPR)

Pursuant to GDPR Article 22, you have the right not to be subject to a decision based solely on automated processing that produces legal effects concerning you. Our AI screening is used as a decision-support tool with mandatory human review, and does not constitute solely automated decision-making. You may exercise your rights under GDPR Articles 13-22 by contacting our Data Protection Officer at [DPO_EMAIL].

Contact

For questions about our use of AI in hiring, contact: [COMPLIANCE_CONTACT_NAME] [COMPLIANCE_EMAIL] [COMPLIANCE_PHONE]

*Last updated: [DATE]* *This notice satisfies requirements under NYC Local Law 144, Colorado SB 24-205, Illinois HB 3773, and GDPR Articles 13-14 and 22. Please have employment counsel review and customize this template for your specific jurisdiction and implementation.*

Note

MSP Implementation Instructions — Complete all steps below before go-live.

1. Customize all [BRACKETED] fields with client information
2. Add this text to the Manatal career page (Settings > Career Page > Custom Content)
3. Add a consent checkbox on the application form: "I acknowledge that [CLIENT] uses AI tools in the screening process as described in the AI Screening Disclosure Notice above."
4. Store the consent timestamp in the candidate record (see the sketch after this list)
5. Have the client's employment counsel review before go-live
6. Update the audit date and URL annually after each bias audit
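
If Manatal does not expose a dedicated consent field, one option is to stamp a custom field via the open API when the application is submitted. A sketch; 'ai_disclosure_consent_at' is our naming convention, not a built-in field:

Record the consent timestamp via the Manatal API
python
# record_consent.py - stamp AI-disclosure consent on a candidate record (sketch;
# 'ai_disclosure_consent_at' is a custom field we define, not a Manatal built-in)
import os
from datetime import datetime, timezone

import httpx

MANATAL_API_KEY = os.environ['MANATAL_API_KEY']
BASE = 'https://api.manatal.com/open/v3'

def record_consent(candidate_id: str) -> None:
    resp = httpx.patch(
        f'{BASE}/candidates/{candidate_id}/',
        headers={'Authorization': f'Token {MANATAL_API_KEY}'},
        json={'custom_fields': {'ai_disclosure_consent_at': datetime.now(timezone.utc).isoformat()}},
    )
    resp.raise_for_status()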

Recruiter Scoring Dashboard Automation

Type: workflow

A Zapier-based automation workflow that aggregates AI scores from Manatal's built-in AI, the custom scoring microservice, and assessment platforms, then posts a consolidated summary to a shared Slack channel and/or Microsoft Teams channel. Gives recruiters a real-time feed of top-scored candidates requiring immediate attention.

Implementation

Zapier Workflow: Top Candidate Alert
yaml
# Zapier Workflow: Top Candidate Alert
# Trigger: Manatal webhook - candidate score updated; filter 75+,
# post to Slack and email the hiring manager

workflow_name: "AI Top Candidate Alert"
trigger:
  app: webhooks_by_zapier
  event: catch_hook
  webhook_url: "https://hooks.zapier.com/hooks/catch/<zap_id>/"
  # Configure Manatal to POST to this URL when candidate notes are updated
  # (triggered after custom scoring microservice writes results)

filter:
  conditions:
    - field: "{{body__custom_fields__ai_enhanced_score}}"
      operator: "greater_than"
      value: 75
  # Only alert on candidates scoring 75+

action_1:
  app: slack  # or microsoft_teams
  event: send_channel_message
  channel: "#hiring-top-candidates"
  message_template: |
    🌟 *High-Scoring Candidate Alert*
    
    *Candidate:* {{body__first_name}} {{body__last_name}}
    *Position:* {{body__job_name}}
    *AI Score:* {{body__custom_fields__ai_enhanced_score}}/100
    *Recommendation:* {{body__custom_fields__ai_recommendation}}
    
    *Top Strengths:*
    {{body__custom_fields__top_strengths}}
    
    *Key Gaps:*
    {{body__custom_fields__key_gaps}}
    
    👉 <{{body__candidate_url}}|View in Manatal>
    
    _Please review within 24 hours to maintain candidate engagement._
  
action_2:
  app: email_by_zapier
  event: send_outbound_email
  to: "{{hiring_manager_email}}"
  subject: "⭐ Top Candidate: {{body__first_name}} {{body__last_name}} for {{body__job_name}} (Score: {{body__custom_fields__ai_enhanced_score}})"
  body: |
    A high-scoring candidate has been identified by our AI screening system.
    
    Candidate: {{body__first_name}} {{body__last_name}}
    Position: {{body__job_name}}
    AI Score: {{body__custom_fields__ai_enhanced_score}}/100
    
    Please log into Manatal to review: {{body__candidate_url}}
Zapier Workflow: Weekly AI Screening Summary — Monday 8am schedule trigger, pull last 50 candidates from Manatal API, post digest to Slack
yaml
# Additional Zapier Zap: Weekly Digest
workflow_name: "Weekly AI Screening Summary"
trigger:
  app: schedule_by_zapier
  event: every_week
  day: monday
  time: "08:00"

action_1:
  app: webhooks_by_zapier
  event: custom_request
  method: GET
  url: "https://api.manatal.com/open/v3/candidates/"
  headers:
    Authorization: "Token {{manatal_api_key}}"
  params:
    created_after: "{{7_days_ago}}"
    ordering: "-score"
    limit: 50

action_2:
  app: slack
  event: send_channel_message
  channel: "#hiring-weekly-digest"
  message_template: |
    📊 *Weekly AI Screening Summary*
    _Week of {{current_date}}_
    
    • Total candidates screened: {{total_count}}
    • Candidates scoring 80+: {{high_match_count}}
    • Candidates scoring 50-79: {{medium_match_count}}
    • Top candidate: {{top_candidate_name}} (Score: {{top_score}})
    
    👉 <https://app.manatal.com|View Full Dashboard>

MSP Setup Instructions

1. Create a Zapier account under the MSP's master billing (Professional plan, $49.99/month)
2. Create the webhook catch URL in Zapier
3. Configure the Manatal webhook (Settings > Integrations > Webhooks) to POST candidate data when notes are updated
4. Set up the Slack/Teams channel with appropriate membership (recruiters + hiring managers)
5. Test with a sample candidate update
6. Create the weekly digest Zap using Zapier's schedule trigger
7. Document the Zapier Zap URLs and configuration in the client's runbook

Bias Monitoring Report Generator

Type: skill

A Python script that runs monthly to analyze AI scoring patterns across demographic groups (when data is available) to proactively identify potential disparate impact before formal annual bias audits. Generates a compliance report for the MSP and client HR team.

Implementation

bias_monitor.py — Monthly bias monitoring script
python
# bias_monitor.py
# Monthly bias monitoring script - run as Azure Function on timer trigger
# or as a scheduled task on MSP's management server

import httpx
import json
import os
from datetime import datetime, timedelta
from collections import defaultdict
import statistics

MANATAL_API_KEY = os.environ['MANATAL_API_KEY']
MANATAL_BASE_URL = os.environ.get('MANATAL_BASE_URL', 'https://api.manatal.com/open/v3')
ALERT_EMAIL = os.environ.get('ALERT_EMAIL', 'msp-compliance@example.com')

def fetch_recent_candidates(days: int = 30) -> list[dict]:
    """Fetch all candidates scored in the last N days."""
    headers = {'Authorization': f'Token {MANATAL_API_KEY}'}
    since = (datetime.utcnow() - timedelta(days=days)).isoformat()
    candidates = []
    url = f'{MANATAL_BASE_URL}/candidates/?created_after={since}&limit=100'
    
    with httpx.Client() as http:
        while url:
            resp = http.get(url, headers=headers)
            resp.raise_for_status()
            data = resp.json()
            candidates.extend(data.get('results', []))
            url = data.get('next')  # Pagination
    
    return candidates

def analyze_score_distribution(candidates: list[dict]) -> dict:
    """Analyze AI score distributions and flag potential disparate impact."""
    # Group scores by available demographic proxies
    # NOTE: Direct demographic data may not be available.
    # This analyzes score distribution patterns and flags anomalies.
    
    scores = []
    scores_by_source = defaultdict(list)
    scores_by_job = defaultdict(list)
    advancement_by_score_band = defaultdict(lambda: {'total': 0, 'advanced': 0})
    
    for c in candidates:
        score = c.get('custom_fields', {}).get('ai_enhanced_score')
        if score is None:
            # Fall back to Manatal's built-in score if custom score not available
            score = c.get('score')
        if score is None:
            continue
            
        score = float(score)
        scores.append(score)
        
        # Group by source channel
        source = c.get('source', {}).get('name', 'Unknown')
        scores_by_source[source].append(score)
        
        # Group by job
        for app in c.get('applications', []):
            job_name = app.get('job', {}).get('position_name', 'Unknown')
            scores_by_job[job_name].append(score)
        
        # Track advancement rates by score band
        stage = c.get('stage', {}).get('name', '')
        advanced = stage.lower() in ['phone screen', 'interview', 'offer', 'hired']
        
        if score >= 80:
            band = '80-100'
        elif score >= 60:
            band = '60-79'
        elif score >= 40:
            band = '40-59'
        else:
            band = '0-39'
        
        advancement_by_score_band[band]['total'] += 1
        if advanced:
            advancement_by_score_band[band]['advanced'] += 1
    
    # Calculate statistics
    report = {
        'report_date': datetime.utcnow().isoformat(),
        'period_days': 30,
        'total_candidates_scored': len(scores),
        'overall_stats': {
            'mean_score': round(statistics.mean(scores), 1) if scores else 0,
            'median_score': round(statistics.median(scores), 1) if scores else 0,
            'stdev': round(statistics.stdev(scores), 1) if len(scores) > 1 else 0,
            'min': min(scores) if scores else 0,
            'max': max(scores) if scores else 0
        },
        'scores_by_source': {},
        'scores_by_job': {},
        'advancement_rates': {},
        'flags': []
    }
    
    # Analyze by source - flag if any source's mean differs by >15 points
    overall_mean = report['overall_stats']['mean_score']
    for source, src_scores in scores_by_source.items():
        src_mean = round(statistics.mean(src_scores), 1)
        report['scores_by_source'][source] = {
            'count': len(src_scores),
            'mean_score': src_mean
        }
        if abs(src_mean - overall_mean) > 15 and len(src_scores) >= 10:
            report['flags'].append(
                f'SOURCE DISPARITY: Candidates from {source} have mean score '
                f'{src_mean} vs overall {overall_mean} (n={len(src_scores)})'
            )
    
    # Analyze advancement rates by score band (4/5ths rule check)
    for band, data in advancement_by_score_band.items():
        rate = data['advanced'] / data['total'] if data['total'] > 0 else 0
        report['advancement_rates'][band] = {
            'total': data['total'],
            'advanced': data['advanced'],
            'rate': round(rate * 100, 1)
        }
    
    # Check 4/5ths rule across score bands
    rates = {b: d['advanced'] / d['total'] 
             for b, d in advancement_by_score_band.items() 
             if d['total'] >= 5}
    if rates:
        max_rate = max(rates.values())
        for band, rate in rates.items():
            if max_rate > 0 and rate / max_rate < 0.8:
                report['flags'].append(
                    f'4/5THS RULE WARNING: Score band {band} has advancement rate '
                    f'{rate:.1%} vs highest band rate {max_rate:.1%} '
                    f'(ratio: {rate/max_rate:.2f} < 0.80)'
                )
    
    return report

def generate_report_text(report: dict) -> str:
    """Generate human-readable bias monitoring report."""
    lines = [
        '=' * 60,
        'AI HIRING BIAS MONITORING REPORT',
        f'Generated: {report["report_date"]}',
        f'Period: Last {report["period_days"]} days',
        f'Candidates Analyzed: {report["total_candidates_scored"]}',
        '=' * 60,
        '',
        '--- OVERALL SCORE DISTRIBUTION ---',
        f'Mean: {report["overall_stats"]["mean_score"]}',
        f'Median: {report["overall_stats"]["median_score"]}',
        f'Std Dev: {report["overall_stats"]["stdev"]}',
        f'Range: {report["overall_stats"]["min"]} - {report["overall_stats"]["max"]}',
        '',
        '--- SCORES BY SOURCE CHANNEL ---'
    ]
    
    for source, data in report['scores_by_source'].items():
        lines.append(f'  {source}: mean={data["mean_score"]}, n={data["count"]}')
    
    lines.extend(['', '--- ADVANCEMENT RATES BY SCORE BAND ---'])
    for band, data in report['advancement_rates'].items():
        lines.append(f'  {band}: {data["rate"]}% ({data["advanced"]}/{data["total"]})')
    
    if report['flags']:
        lines.extend(['', '⚠️ --- FLAGS REQUIRING REVIEW ---'])
        for flag in report['flags']:
            lines.append(f'  ⚠️ {flag}')
        lines.append('')
        lines.append('ACTION REQUIRED: Review flagged items with HR team and employment counsel.')
        lines.append('Consider engaging bias audit firm if flags persist for 2+ consecutive months.')
    else:
        lines.extend(['', '✅ No bias flags detected this period.'])
    
    lines.extend(['', '=' * 60])
    return '\n'.join(lines)

if __name__ == '__main__':
    candidates = fetch_recent_candidates(30)
    report = analyze_score_distribution(candidates)
    report_text = generate_report_text(report)
    print(report_text)
    
    # Save report
    filename = f'bias_report_{datetime.utcnow().strftime("%Y%m%d")}.json'
    with open(filename, 'w') as f:
        json.dump(report, f, indent=2)
    print(f'\nReport saved to {filename}')
    
    # If flags exist, this should trigger an email alert
    # (implement via Azure Logic App, SendGrid, or MSP's alerting system)
    if report['flags']:
        print(f'\n⚠️ {len(report["flags"])} flags detected - escalation required')

Deployment

1. Deploy as an Azure Function with a timer trigger (monthly on the 1st at 06:00 UTC)
2. Store reports in Azure Blob Storage for an audit trail
3. Configure SendGrid or the MSP's email system to send reports to the client HR manager and MSP compliance team
4. If any flags are detected, trigger escalation to the MSP account manager for review
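
If deployed as an Azure Function (Python v2 programming model), the timer wrapper is only a few lines. A sketch: the NCRONTAB expression '0 0 6 1 * *' fires at 06:00 UTC on the 1st of each month, and bias_monitor is assumed to be packaged alongside the function app:

Monthly timer trigger sketch
python
# function_app.py - monthly timer wrapper around bias_monitor (sketch)
import logging

import azure.functions as func

import bias_monitor  # the script above, packaged with the function app

app = func.FunctionApp()

@app.timer_trigger(schedule='0 0 6 1 * *', arg_name='timer')  # 06:00 UTC, 1st of month
def run_bias_monitor(timer: func.TimerRequest) -> None:
    candidates = bias_monitor.fetch_recent_candidates(30)
    report = bias_monitor.analyze_score_distribution(candidates)
    logging.info(bias_monitor.generate_report_text(report))
    if report['flags']:
        logging.warning('%d bias flags detected - escalation required', len(report['flags']))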

Testing & Validation

  • CONNECTIVITY TEST: From each recruiter workstation, verify HTTPS access to app.manatal.com, api.openai.com, and hooks.zapier.com by loading each URL in Chrome — confirm no firewall blocks or SSL errors
  • SSO TEST: Have each recruiter sign into Manatal via the SSO login page — verify they are redirected to Microsoft Entra ID, prompted for MFA, and returned to Manatal with correct role permissions. Test with one account from a non-compliant device to confirm conditional access blocks appropriately
  • RESUME PARSING TEST: Upload 10 resumes in varied formats (PDF, DOCX, plain text) to Manatal via bulk import. Verify that skills, experience, education, and contact information are correctly parsed and populated in candidate profiles. Flag any parsing failures for format-specific investigation
  • AI SCORING ACCURACY TEST: Import 50 historical candidates with known outcomes (25 hired, 25 rejected) against 3 test job postings. Run AI scoring and verify that at least 70% of hired candidates appear in the AI's top 30% ranking. Document the correlation rate and adjust scoring criteria if below threshold
  • CUSTOM SCORING WEBHOOK TEST: Trigger the Azure Function by manually POSTing a test payload with a valid candidate_id and job_id. Verify the function returns a 200 response with a valid score JSON, and confirm the scoring note appears on the candidate record in Manatal within 60 seconds
  • SEMANTIC MATCHING TEST: Run the semantic matcher against 5 job-resume pairs where the resume uses different terminology than the job description (e.g., 'serverless architecture' vs 'AWS Lambda functions'). Verify cosine similarity scores are above 0.70 for known-good matches
  • JOB BOARD INTEGRATION TEST: Post a test job to LinkedIn, Indeed, and ZipRecruiter via Manatal. Apply to the test job from an external device. Verify the application appears in Manatal within 15 minutes with correct source attribution and AI scoring triggered automatically
  • ASSESSMENT INTEGRATION TEST: Move a test candidate to the 'AI Screened - High Match' pipeline stage. Verify the Zapier automation triggers a TestGorilla assessment invitation email within 5 minutes. Complete the assessment and verify the score is written back to the candidate record in Manatal
  • COMPLIANCE DISCLOSURE TEST: Navigate to the client's career page (careers.clientdomain.com) and verify the AI Screening Disclosure Notice is visible before application submission. Confirm the consent checkbox is present and required. Submit a test application and verify the consent timestamp is recorded
  • HUMAN-IN-THE-LOOP TEST: Create a test candidate with an AI score below 50. Verify the candidate is automatically moved to the 'Human Review Required' pipeline stage. Confirm that NO auto-reject action is taken and that a recruiter must manually disposition the candidate
  • DATA RETENTION TEST: Verify that the data retention policy is active by checking Manatal settings — confirm rejected candidates are set to auto-delete after 12 months and all candidate data after 24 months. Submit a test data deletion request via the GDPR workflow and verify the candidate record is purged within 72 hours
  • SLACK/TEAMS ALERT TEST: Score a test candidate above 75 and verify the Zapier workflow posts a formatted alert to the #hiring-top-candidates Slack/Teams channel within 5 minutes, including candidate name, job title, score, and Manatal link
  • END-TO-END WORKFLOW TEST: Simulate a complete hiring workflow — post a job, receive an application from a job board, verify AI scoring triggers, confirm the candidate appears ranked correctly on the recruiter dashboard, move through pipeline stages, trigger assessment, receive assessment score, and verify final candidate record contains all scoring data
  • LOAD TEST: Import 200 resumes simultaneously via bulk upload and verify that AI scoring completes for all candidates within 30 minutes without errors or timeout failures in the custom scoring webhook
  • BIAS MONITORING TEST: Run the bias monitoring script against the test dataset and verify it generates a valid report with score distributions, source breakdowns, advancement rates, and correctly flags any 4/5ths rule violations in the synthetic data
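
For the custom scoring webhook test, a minimal request sketch is below. The function URL is a placeholder, and the 'score' field name is an assumption about the shape of the score JSON:

import requests

FUNCTION_URL = 'https://<function-app>.azurewebsites.net/api/score_candidate'  # placeholder

payload = {'candidate_id': 'test-cand-001', 'job_id': 'test-job-001'}  # synthetic test IDs
resp = requests.post(FUNCTION_URL, json=payload, timeout=60)

assert resp.status_code == 200, f'Expected 200, got {resp.status_code}'
body = resp.json()
assert 'score' in body and 0 <= body['score'] <= 100, f'Unexpected score JSON: {body}'
print(f"Webhook OK, score={body['score']}")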

Client Handoff

Client Handoff Checklist

Training Sessions to Deliver

1. Recruiter Training (2 hours): Dashboard navigation, interpreting AI scores, reviewing ranked candidate lists, understanding score breakdowns, moving candidates through pipeline stages, when and how to override AI recommendations, compliance dos and don'ts
2. Hiring Manager Training (1 hour): Reading AI scoring summaries, approving candidates for next stages, using the comparison view with secondary monitors, understanding what AI can and cannot determine
3. HR Admin Training (1.5 hours): Creating and editing job templates, managing scoring criteria weights, running reports, handling candidate data requests (GDPR), managing user accounts and permissions
4. Compliance Training (1 hour): AI disclosure requirements by jurisdiction, handling candidate opt-out requests, what to say when candidates ask about AI, bias audit schedule and responsibilities

Documentation to Leave Behind

1. System Architecture Diagram: Visual showing data flow from job boards → Manatal → AI scoring → recruiter dashboard → HRIS
2. Recruiter Quick Reference Guide: 2-page PDF with screenshots showing daily workflows — how to view scores, filter candidates, advance pipeline stages
3. Job Template Creation Guide: Step-by-step instructions for creating new job templates with properly structured requirements for optimal AI scoring
4. Scoring Criteria Reference: Document explaining what each score category means, how weights are applied, and what score thresholds trigger which pipeline actions
5. Compliance Runbook: All disclosure text, consent mechanisms, opt-out procedures, data retention policies, bias audit schedule, and escalation contacts
6. Integration Map: Document listing all connected systems (job boards, assessment tools, HRIS, Slack/Teams), webhook URLs, API keys (stored securely), and Zapier zap IDs
7. Troubleshooting Guide: Common issues and resolutions — resume parsing failures, scoring delays, integration disconnections, SSO login problems
8. MSP Contact Card: Escalation path with SLA response times, support email/phone, account manager name

Success Criteria to Review Together

Maintenance

Ongoing MSP Maintenance Responsibilities

Weekly (First 30 Days) → Biweekly (Days 31-90) → Monthly (Ongoing)

  • Scoring Accuracy Review: Check recruiter feedback on AI scores — are recruiters consistently overriding them? If the override rate exceeds 30%, investigate and recalibrate the scoring criteria (a sketch of this check follows the list). Log in to Manatal, pull the weekly screening summary report, and compare AI rankings against recruiter dispositions.
  • Integration Health Check: Verify all Zapier zaps are running (check Zapier dashboard for failed tasks). Confirm job board feeds are flowing (check latest application dates). Test custom scoring webhook with a ping request. Verify Slack/Teams alerts are posting.
  • User Management: Process new recruiter account requests, deactivate departed staff, update role permissions as needed.
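
A sketch of the override-rate calculation referenced in the Scoring Accuracy Review item, assuming disposition records exported from the weekly screening summary. The field names (job_category, ai_recommendation, recruiter_decision) are illustrative, not Manatal's actual export schema:

from collections import defaultdict

def override_rates(dispositions: list[dict]) -> dict[str, float]:
    """Percent of candidates per job category where the recruiter's final
    decision contradicted the AI recommendation."""
    totals = defaultdict(int)
    overrides = defaultdict(int)
    for d in dispositions:
        cat = d['job_category']
        totals[cat] += 1
        if d['recruiter_decision'] != d['ai_recommendation']:
            overrides[cat] += 1
    return {cat: round(100 * overrides[cat] / totals[cat], 1) for cat in totals}

# Tiny synthetic example: one agreement, one override → 50% rate
sample = [
    {'job_category': 'engineering', 'ai_recommendation': 'advance', 'recruiter_decision': 'advance'},
    {'job_category': 'engineering', 'ai_recommendation': 'advance', 'recruiter_decision': 'reject'},
]
print(override_rates(sample))  # {'engineering': 50.0}
flagged = {c: r for c, r in override_rates(sample).items() if r > 30}  # recalibration candidates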

Monthly

  • Bias Monitoring Report: Run the bias_monitor.py script (or verify the Azure Function timer trigger ran successfully). Review the output report for any flags. Escalate flags to client HR manager and recommend corrective action. Store report in Azure Blob for audit trail.
  • Platform Updates: Check Manatal release notes for new features or changes that affect AI scoring. Test any updates in a sandbox job before applying to production workflows.
  • API Usage & Cost Review: Check OpenAI API usage dashboard — ensure token consumption is within expected range ($25–$75/month for typical volume). Flag any anomalous spikes that could indicate integration loops or abuse.
  • Security Review: Verify SSO is still enforced, review Manatal audit logs for unusual access patterns, confirm MFA is active for all users.

Quarterly

  • Scoring Criteria Calibration: Meet with client HR team (1-hour session) to review scoring effectiveness. Adjust criteria weights, add/remove skills from job templates, and update score thresholds based on 3 months of hiring data.
  • Compliance Review: Review any new AI hiring regulations enacted since last quarter. Update disclosure text if needed. Verify data retention policies are executing correctly (check that old records are being purged on schedule).
  • Vendor Review: Check Manatal subscription status and renewal terms. Review any pricing changes. Evaluate whether the client has outgrown their current plan tier.

Annually

  • Bias Audit: Coordinate with BABL AI or Holistic AI for the annual independent bias audit (required in NYC under Local Law 144, recommended everywhere). Budget 2–4 weeks for the audit process. Publish the updated audit summary on the career page. Cost: $5,000–$15,000.
  • Full System Review: Comprehensive review of the entire AI hiring stack — scoring accuracy trends over 12 months, integration reliability, user adoption metrics, compliance posture, and ROI analysis.
  • License Renewal: Renew Manatal subscription (annual billing). Review Zapier plan tier. Renew TestGorilla subscription. Confirm Azure Function consumption costs.

SLA Considerations

  • Platform Availability: Manatal SaaS uptime SLA is 99.9%. MSP responsible for monitoring and reporting outages to client within 1 hour.
  • Scoring Latency: Custom scoring webhook should return results within 30 seconds per candidate. If latency exceeds 60 seconds, investigate Azure Function cold starts or OpenAI API throttling.
  • Integration Failures: Zapier failures should be detected and resolved within 4 business hours. Configure Zapier error notifications to MSP support inbox.
  • Security Incidents: Any unauthorized access to candidate data must be reported to client within 24 hours per DPA terms. Engage MSP's incident response process.

Escalation Path

1. Tier 1 (MSP Help Desk): Password resets, basic navigation questions, user account changes — resolve within 4 hours
2. Tier 2 (MSP Technical Lead): Integration failures, scoring anomalies, configuration changes — resolve within 1 business day
3. Tier 3 (MSP AI Specialist / Vendor Support): Scoring algorithm issues, API failures, bias flag investigation — resolve within 3 business days
4. Vendor Escalation: Manatal support (support@manatal.com), OpenAI support, Zapier support — engage when platform-level issues are confirmed
5. Compliance Escalation: Bias flags or regulatory inquiries → Client's employment counsel + MSP compliance team — respond within 24 hours

Model Retraining / Recalibration Triggers

  • Client adds a new job category not previously scored (requires new job template and criteria review)
  • Recruiter override rate exceeds 30% for any job category over a 30-day period
  • Bias monitoring flags persist for 2+ consecutive months
  • Client expands to a new jurisdiction with AI hiring regulations
  • Manatal releases a major AI engine update (review impact on scoring consistency)
  • OpenAI deprecates GPT-5.4 mini model (migrate to successor model, re-validate scoring)

Alternatives

Brainner AI Bolt-On with Existing ATS

Instead of migrating to Manatal, deploy Brainner ($34–$99/month) as an AI screening overlay on top of the client's existing ATS (Greenhouse, Lever, Bullhorn, BambooHR, etc.). Brainner ingests candidates via ATS integration, applies AI scoring with explainable match reasoning, and writes results back to the existing system. No ATS migration required.

  • PROS: Zero disruption to existing recruiter workflows; no ATS migration risk; ATS-agnostic; explainable AI scoring built-in; lower monthly cost than full ATS replacement.
  • CONS: Additional tool to manage alongside existing ATS (two vendors instead of one); limited to screening — no ATS features like pipeline management; per-candidate overage charges ($9.95/100 candidates) can add up for high-volume staffing agencies; less integrated experience than an all-in-one platform.
  • RECOMMEND WHEN: Client has a well-functioning ATS they do not want to replace, or when the ATS has features (like Bullhorn's VMS integrations) that cannot be replicated in Manatal.

Zoho Recruit for Zoho Ecosystem Clients

Deploy Zoho Recruit ($25–$50/user/month) with built-in AI candidate matching for clients already using Zoho One, Zoho CRM, or other Zoho products. Leverages existing identity management, integrates natively with Zoho's suite, and offers a free tier for very small operations.

  • PROS: Native integration with Zoho ecosystem (CRM, email, analytics); lower cost than Manatal Enterprise; free plan available for 1 active job; 20–30% MSP partner commission through Zoho partner program; familiar interface for Zoho users.
  • CONS: AI scoring capabilities are less sophisticated than Manatal's dedicated AI recommendation engine; free and lower tiers are severely limited; not ideal for staffing agencies with high candidate volumes; fewer job board integrations than Manatal.
  • RECOMMEND WHEN: Client is already a Zoho shop with 3+ Zoho products deployed, or is extremely budget-constrained (<$500/month total budget for the project).

Fully Custom Build with OpenAI API + Open Source Stack

Build a completely custom applicant scoring pipeline using OpenAI GPT-5.4 mini for scoring, text-embedding-3-small for semantic matching, spaCy for NLP parsing, Apache Tika for document extraction, and PostgreSQL for data storage. Deploy on Azure Functions or AWS Lambda. Integrate via API with the client's existing ATS.
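
A skeleton of that pipeline, offered as a sketch only: Apache Tika for text extraction, text-embedding-3-small for semantic similarity, and the chat model for a criteria-weighted score. The prompt and returned JSON shape are illustrative, and the model identifier follows this guide's stack rather than a confirmed API name:

import json
import math

from openai import OpenAI
from tika import parser as tika_parser  # tika-python spins up a local Tika server on first use

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def extract_text(path: str) -> str:
    """Pull raw text from a resume file (PDF, DOCX, etc.) via Apache Tika."""
    return tika_parser.from_file(path).get('content') or ''

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def semantic_score(job_desc: str, resume_text: str) -> float:
    """Embedding-based similarity between the job description and the resume."""
    emb = client.embeddings.create(model='text-embedding-3-small',
                                   input=[job_desc, resume_text])
    return cosine(emb.data[0].embedding, emb.data[1].embedding)

def llm_score(job_desc: str, resume_text: str) -> dict:
    """Criteria-weighted score with rationale; prompt and JSON shape are illustrative."""
    resp = client.chat.completions.create(
        model='gpt-5.4-mini',  # model id assumed from this guide's stack
        messages=[{'role': 'user', 'content':
                   f'Score this resume against the job requirements 0-100 and '
                   f'return JSON {{"score": int, "rationale": str}}.\n\n'
                   f'JOB:\n{job_desc}\n\nRESUME:\n{resume_text}'}],
        response_format={'type': 'json_object'},
    )
    return json.loads(resp.choices[0].message.content)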

Tradeoffs

  • PROS: Maximum customization of scoring criteria and weights
  • PROS: No vendor lock-in
  • PROS: Lowest per-candidate cost at scale ($0.01–$0.05/candidate)
  • PROS: Full control over AI model selection and prompt engineering
  • PROS: Can be white-labeled as MSP's proprietary solution
  • CONS: Highest implementation complexity (8–16 weeks, requires ML/NLP developer expertise)
  • CONS: MSP bears full responsibility for accuracy, bias, and compliance
  • CONS: No vendor support — all troubleshooting falls on MSP
  • CONS: Requires ongoing model maintenance as OpenAI releases new versions
  • CONS: Significant upfront development cost ($15,000–$30,000)
Note

RECOMMEND WHEN: Client is a large staffing agency (50+ recruiters) with unique scoring requirements that no SaaS product addresses, or the MSP wants to build a reusable white-label AI screening product to sell across multiple clients.

Enterprise Platform: Greenhouse + Eightfold AI

Deploy Greenhouse as the ATS ($6,000–$25,000+/year) with Eightfold AI as the talent intelligence layer ($650+/month for mid-market). Provides the most advanced AI-driven talent matching, skills-based hiring, internal mobility intelligence, and deep-learning candidate scoring.

Tradeoffs

  • PROS: Most advanced AI in the market (deep learning, not just NLP)
  • PROS: Best-in-class structured hiring and bias reduction
  • PROS: Greenhouse's 500+ integrations
  • PROS: Eightfold's skills ontology covers 1M+ skills
  • PROS: Excellent for diversity hiring goals
  • PROS: Strong enterprise compliance and audit trail
  • CONS: Highest total cost ($30,000–$75,000+/year combined)
  • CONS: Long implementation timeline (4–12 weeks with vendor professional services)
  • CONS: Overkill for SMBs with <50 employees or <500 applicants/month
  • CONS: Requires dedicated HR operations staff to manage
  • CONS: Vendor professional services fees on top of subscription
Note

RECOMMEND WHEN: Client is a mid-market to enterprise staffing operation (100+ employees, 1,000+ applicants/month) with budget for best-in-class tooling, strong diversity hiring goals, and dedicated HR operations staff.

Bullhorn + Amplify AI for Staffing Agencies

Deploy Bullhorn ATS/CRM (custom enterprise pricing) with the Bullhorn Amplify AI module specifically designed for staffing agencies. Amplify AI automates candidate matching, job matching, and placement optimization using staffing-specific AI models.

Tradeoffs

  • PROS: Purpose-built for staffing agencies (not generic HR)
  • PROS: 110+ VMS integrations for contract staffing
  • PROS: AI optimized for placement revenue, not just hiring
  • PROS: Strong CRM capabilities for candidate relationship management
  • PROS: Dominant market position in staffing industry
  • CONS: Enterprise pricing (typically $100+/user/month, custom quotes only)
  • CONS: No published pricing makes budgeting difficult
  • CONS: Long sales cycle and implementation
  • CONS: Not suitable for corporate HR teams (staffing agencies only)
  • CONS: Heavy platform — overkill for small agencies with <10 recruiters
Note

RECOMMEND WHEN: Client is a staffing/recruiting agency (not a corporate HR department) with 10+ recruiters, active VMS/MSP vendor relationships, and budget for an enterprise-grade staffing platform.
