Ambient capture

Implementation Guide: Transcribe interviews and generate structured candidate evaluation summaries

Step-by-step implementation guide for deploying AI to transcribe interviews and generate structured candidate evaluation summaries for HR & Staffing clients.

Hardware Procurement

Jabra Speak2 75 Conference Speakerphone

Vendor: Jabra (GN Audio) | Model: Jabra Speak2 75 (2775-419 for MS Teams variant, 2775-109 for UC variant) | Qty: 3

$180 per unit MSP cost / $265 suggested resale per unit

Primary ambient audio capture device for in-person and hybrid interview rooms. Features 4 beamforming noise-cancelling microphones, USB-A/USB-C connectivity, Bluetooth with up to 32 hours wireless talk time, and 98 ft wireless range with optional Link 390 dongle. Supports rooms with up to 6 participants, ideal for 1-on-1 and small-panel interviews.

Poly Sync 60 Conference Speakerphone

Vendor: Poly (HP) | Model: Poly Sync 60 (216871-01, USB-A/USB-C) | Qty: 1

$400 per unit MSP cost / $560 suggested resale per unit

Large-room ambient capture device for panel interview rooms accommodating up to 12 participants. Microsoft Teams certified. Deployed in the client's main conference room where multi-interviewer panel sessions occur.

Jabra Link 390a USB Bluetooth Dongle

Vendor: Jabra (GN Audio) | Model: Jabra Link 390a (14208-22) | Qty: 3

$40 per unit MSP cost / $60 suggested resale per unit

Bluetooth dongle to extend wireless range of Jabra Speak2 75 speakerphones to laptops without reliable built-in Bluetooth. Ensures stable audio connection to the recording application up to 98 ft from the host device.

Rode NT-USB Mini USB Condenser Microphone

Vendor: Rode Microphones | Model: NT-USB Mini (NTUSB-MINI) | Qty: 1

$85 per unit MSP cost / $125 suggested resale per unit

High-fidelity USB microphone for a dedicated quiet interview room or recruiter desk where a single interviewer needs studio-grade audio quality. Used as an optional upgrade for rooms where premium transcription accuracy is critical (e.g., executive-level interviews).

Software Procurement

Metaview Pro

Vendor: Metaview | Licensing: Per-seat SaaS subscription | Qty: 10 seats

$50/seat/month billed annually ($600/seat/year). For 10 recruiters: $6,000/year

Primary interview intelligence platform. Auto-joins Zoom/Teams/Google Meet interviews as an AI notetaker, generates real-time transcription, produces structured interview notes and candidate summaries, and provides recruiting analytics. SOC 2 certified and GDPR-ready.

Zapier Team Plan

Vendor: Zapier | Licensing: SaaS subscription (usage-based tiers) | Qty: 2,000 tasks/month

$103.50/month ($1,242/year)

Workflow automation middleware to connect Metaview outputs (transcripts, summaries) to ATS platforms that lack native Metaview integration (e.g., Bullhorn, BambooHR, JobDiva). Also used to trigger Slack/email notifications and archive transcripts to cloud storage.

OpenAI API (GPT-5.4)

Vendor: OpenAI | Model: GPT-5.4 | Qty: Usage-based API

~$0.50–$1.50 per interview for custom summarization prompts. Estimated $50–$150/month for 100 interviews/month

Supplementary AI engine for generating custom structured candidate evaluation scorecards beyond Metaview's built-in summaries. Used in the Zapier workflow to transform raw transcripts into client-specific evaluation formats before pushing to the ATS.

Microsoft 365 Business Standard or Google Workspace Business Standard

Vendor: Microsoft or Google | Licensing: Per-seat SaaS subscription

$12.50/user/month (M365) or $14/user/month (Google Workspace) — assumed already in client environment

Provides the video conferencing platform (Microsoft Teams or Google Meet) and identity provider (Azure AD or Google Workspace) for SSO. Also provides cloud storage (OneDrive/SharePoint or Google Drive) for transcript archival.

Zoom Workplace Pro (if applicable)

Vendor: Zoom Video Communications | Licensing: Per-seat SaaS subscription

$13.33/user/month billed annually — only if client uses Zoom instead of Teams/Meet

Video conferencing platform for virtual interviews. Metaview's bot integrates natively with Zoom to join, record, and transcribe meetings. Zoom Pro or higher is required for meetings longer than 40 minutes.

Jotform or Typeform (Digital Consent Forms)

Vendor: Jotform / Typeform | Plan: Jotform Bronze or Typeform Basic | Licensing: SaaS subscription

$34–$50/month

Digital consent form platform for capturing candidate recording and AI analysis consent before each interview. Forms are sent via automated email workflow and responses are logged for compliance records.

Prerequisites

  • Active Applicant Tracking System (ATS) with API access enabled — supported ATS platforms include Greenhouse, Lever, and Ashby (native Metaview integration), or Bullhorn, BambooHR, Workable, and JobDiva (via Zapier integration). Confirm the client's ATS plan tier includes API access.
  • Video conferencing platform licensed and deployed — Zoom Pro/Business, Microsoft Teams (via M365 Business Standard+), or Google Meet (via Google Workspace Business Standard+). All interviewers must have licensed accounts.
  • Stable internet connectivity in all interview rooms — minimum 5 Mbps upload per concurrent interview session. WiFi should be WPA2/WPA3 Enterprise; wired Ethernet preferred for dedicated interview rooms.
  • Identity provider configured for SSO — Azure Active Directory (Entra ID), Google Workspace, or Okta. All recruiter and hiring manager accounts must be provisioned in the IdP.
  • Admin credentials for the client's ATS, video conferencing platform, email system, and identity provider. MSP must have delegated admin access or work alongside client IT admin.
  • Client legal counsel has reviewed and approved: (1) interview recording consent language, (2) AI disclosure notice for candidates, (3) data retention policy (recommended 12-month default), and (4) jurisdiction-specific compliance requirements (especially if hiring in CA, IL, NY, or EU).
  • Firewall and network allow HTTPS outbound (port 443) to: metaview.ai, api.openai.com, zapier.com, and the client's ATS API endpoint. No deep packet inspection should interfere with WebSocket connections used by meeting bots.
  • Chrome browser (latest stable version) installed on all recruiter/hiring manager workstations — required for Metaview browser extension and optimal meeting bot performance.
  • Designated interview rooms identified — confirm room count, seating capacity, existing AV equipment, and USB/power outlet availability for speakerphone deployment.
  • Client has designated a project champion (typically Head of Recruiting or HR Operations Manager) who will serve as the primary point of contact, make workflow decisions, and drive internal adoption.
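The outbound-HTTPS requirement above can be spot-checked from an interview-room workstation before go-live; a minimal Python sketch (hostnames come from the prerequisite list; `check_tcp` and `firewall_report` are illustrative helper names, and the client's ATS API host should be appended):

```python
# Quick outbound TCP reachability check for the HTTPS endpoints listed above.
import socket

ENDPOINTS = ["metaview.ai", "api.openai.com", "zapier.com"]

def check_tcp(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def firewall_report(hosts=ENDPOINTS) -> dict:
    """Map each host to whether outbound port 443 is reachable."""
    return {host: check_tcp(host) for host in hosts}
```

Any False entry in `firewall_report()` indicates a firewall or DNS block to resolve before the meeting bots go live; note this does not detect deep packet inspection interfering with WebSockets.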

Installation Steps

Step 1: Hardware Deployment — Speakerphones and Microphones

Unbox and deploy Jabra Speak2 75 units in each designated interview room and the Poly Sync 60 in the main conference/panel interview room. Connect each speakerphone via USB-C to the room's dedicated interview laptop or docking station. Insert the Jabra Link 390a Bluetooth dongle into each laptop's USB-A port if Bluetooth will be used wirelessly. For the optional Rode NT-USB Mini, mount it on the recruiter's desk using the integrated stand and connect via USB-C. Verify each device is recognized by the operating system as the default audio input device.

1
Windows — Verify audio device is recognized
powershell
Get-PnpDevice -Class AudioEndpoint | Where-Object {$_.FriendlyName -like '*Jabra*' -or $_.FriendlyName -like '*Poly*'}
2
macOS — Verify audio device
bash
system_profiler SPAudioDataType | grep -A 5 'Jabra\|Poly'
3
Set as default recording device (Windows PowerShell — adjust name as needed)
powershell
# Requires the third-party AudioDeviceCmdlets module: Install-Module -Name AudioDeviceCmdlets
$device = Get-AudioDevice -List | Where-Object {$_.Name -like '*Jabra Speak2 75*' -and $_.Type -eq 'Recording'}
Set-AudioDevice -ID $device.ID
Note

Firmware updates: Check Jabra Direct (https://www.jabra.com/software-and-services/jabra-direct) and Poly Lens (https://www.poly.com/us/en/products/services/poly-lens) for firmware updates before deployment. Update firmware on all devices during initial setup. Label each speakerphone with the room name/number using a label maker for asset tracking.

Step 2: Metaview Account Provisioning and SSO Configuration

Create the Metaview organizational account at app.metaview.ai. Configure SSO using the client's identity provider (Azure AD or Google Workspace) via SAML 2.0 or OAuth. Provision user accounts for all recruiters and hiring managers who will conduct interviews. Assign the 'Recruiter' role to active interviewers and 'Viewer' role to hiring managers who only need to review summaries.

1
Navigate to https://app.metaview.ai/signup and create org account
2
Go to Settings > Authentication > Configure SSO
3
For Azure AD: Register Metaview as an Enterprise Application in Entra ID
4
For Google Workspace: Add Metaview as a SAML app in Google Admin Console
5
Bulk invite users via CSV upload in Metaview Admin > Users > Import
Azure AD registration values for Metaview SSO configuration
text
# Azure AD App Registration settings:
App Registration > New Registration > Name: Metaview Interview AI
Redirect URI: https://app.metaview.ai/auth/callback
Copy Client ID and Tenant ID into Metaview SSO settings
Note

Metaview supports both SAML 2.0 and OAuth-based SSO. Confirm with Metaview support which method is available on the Pro plan. If SSO is only available on Enterprise tier, use email-based invitations with enforced strong passwords and require MFA via the IdP. Keep a record of all provisioned users for license tracking.

Step 3: Video Conferencing Integration

Connect Metaview to the client's video conferencing platform. For Zoom: navigate to Metaview Settings > Integrations > Zoom, authorize the connection using a Zoom admin account, and enable 'Auto-join scheduled interviews.' For Microsoft Teams: install the Metaview bot from the Teams App Store (or sideload via Teams Admin Center), authorize with M365 global admin credentials, and configure auto-join policies. For Google Meet: install the Metaview Chrome extension on all recruiter machines and authorize Google Calendar access.

Zoom:
1
Metaview Settings > Integrations > Zoom > Connect
2
Sign in with Zoom admin account
3
Approve requested permissions (read meetings, join meetings, record)
4
Enable 'Auto-join interviews' toggle

Microsoft Teams:
1
Sideload bot via PowerShell if not in Teams App Store
Sideload Metaview bot into Microsoft Teams via PowerShell
powershell
Connect-MicrosoftTeams
New-TeamsApp -DistributionMethod organization -Path ./metaview-teams-bot.zip

Google Meet:
1
Google Admin > Devices > Chrome > Apps & extensions
2
Add Metaview extension by ID from Chrome Web Store
3
Set installation policy to 'Force install' for Recruiters OU
Note

The Metaview bot joins meetings as a visible participant named 'Metaview Notetaker.' Inform all interviewers that this bot will appear. For compliance, the bot's presence serves as partial notice of recording, but explicit consent must still be obtained separately. Test the integration with a dummy meeting before going live.

Step 4: ATS Integration — Native or Zapier-Based

Connect Metaview to the client's ATS to enable automatic routing of interview summaries and transcripts to candidate records. For Greenhouse, Lever, or Ashby: use Metaview's native integration (Settings > Integrations > [ATS Name], authorize with ATS admin API key). For Bullhorn, BambooHR, or other ATS without native integration: configure Zapier workflows to receive Metaview webhook events and push structured data to the ATS via its API.

Greenhouse Native Integration

1
In Greenhouse: navigate to Settings > Dev Center > API Credentials > Create New
2
Set Type to: Harvest API Key
3
Set Permissions: Candidates (read/write), Scorecards (write), Interviews (read)
4
Copy the API key
5
In Metaview: navigate to Settings > Integrations > Greenhouse > Paste API key > Save

Bullhorn via Zapier

1
Create a Zapier account and connect Metaview (via webhook or native Zap)
2
Set Trigger: Metaview > Interview Completed
3
Set Action: Bullhorn > Create Note on Candidate Record
4
Map fields: candidate_name, interview_date, summary_text, scorecard_json

Example Zapier Webhook Payload from Metaview

Metaview webhook payload structure sent on interview completion
json
{
  "event": "interview.completed",
  "candidate": { "name": "Jane Doe", "email": "jane@example.com" },
  "interview": { "date": "2025-01-15", "duration_minutes": 42 },
  "transcript": "Full transcript text...",
  "summary": "AI-generated summary text..."
}
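For ATS platforms reached through a custom script rather than Zapier, the payload above maps to a candidate note in a few lines; an illustrative Python sketch (only the field names come from the example payload; the note layout itself is an assumption, not a Bullhorn API format):

```python
# Build an ATS note body from the Metaview webhook payload shown above.
from typing import Any

def build_candidate_note(payload: dict[str, Any]) -> str:
    candidate = payload["candidate"]
    interview = payload["interview"]
    return (
        f"Interview note for {candidate['name']} <{candidate['email']}>\n"
        f"Date: {interview['date']} ({interview['duration_minutes']} min)\n\n"
        f"Summary:\n{payload['summary']}"
    )

payload = {
    "event": "interview.completed",
    "candidate": {"name": "Jane Doe", "email": "jane@example.com"},
    "interview": {"date": "2025-01-15", "duration_minutes": 42},
    "transcript": "Full transcript text...",
    "summary": "AI-generated summary text...",
}
print(build_candidate_note(payload))
```
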
Note

API key permissions should follow the principle of least privilege. For Greenhouse, only grant Harvest API permissions for Candidates (read/write) and Scorecards (write). Store API keys in a password manager (e.g., IT Glue, Hudu) — never in plaintext documents. Test the integration by running a mock interview and verifying the summary appears in the correct candidate record within the ATS.

Step 5: Custom AI Summarization Pipeline via Zapier + OpenAI

Configure an enhanced summarization workflow that takes Metaview's raw transcript and generates a client-specific structured candidate evaluation scorecard using OpenAI GPT-5.4. This supplements Metaview's built-in summaries with custom evaluation criteria matching the client's hiring rubric. Create a multi-step Zapier workflow: (1) Trigger on Metaview interview completion webhook, (2) Send transcript to OpenAI GPT-5.4 with a structured prompt, (3) Parse the structured JSON response, (4) Push the scorecard to the ATS candidate record, (5) Send a Slack/email notification to the hiring manager.

1
Trigger — Webhooks by Zapier > Catch Hook: Copy the webhook URL and configure in Metaview > Settings > Webhooks. Event: interview.completed
2
Action — OpenAI > Send Prompt: Model: gpt-5.4 | System Prompt: (see custom_ai_components for full prompt) | User Prompt: {{transcript_text}} from Step 1 payload | Temperature: 0.3 | Max Tokens: 2000 | Response Format: JSON
3
Action — Formatter by Zapier > Utilities > JSON Parse: Input: {{openai_response}} from Step 2
4
Action — ATS API call (Greenhouse/Bullhorn/etc.): Push parsed scorecard fields to candidate record
5
Action — Slack > Send Channel Message (or Email by Zapier): Channel: #hiring-updates | Message: Candidate evaluation for {{candidate_name}} is ready
Note

The OpenAI API call costs approximately $0.50–$1.50 per interview transcript (depending on length), in line with the estimate in Software Procurement. Set a max_tokens limit of 2000 to control costs. A temperature of 0.3 keeps evaluations consistent across runs. Always test with 5-10 real interview transcripts before enabling in production. Monitor Zapier task usage — each 5-step workflow execution consumes 5 tasks from the plan quota.
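As a sanity check on the budget figures above, per-interview cost can be estimated from token counts; a rough Python sketch (the per-token rates and the words-to-tokens heuristic are assumptions for illustration, not published GPT-5.4 pricing):

```python
# Rough per-interview cost estimate: tokens ≈ words / 0.75 is a common heuristic.
# The per-token rates below are PLACEHOLDERS; substitute current OpenAI pricing.
INPUT_RATE_PER_1K = 0.005   # assumed $/1K input tokens
OUTPUT_RATE_PER_1K = 0.015  # assumed $/1K output tokens

def estimate_cost(transcript_words: int, max_output_tokens: int = 2000) -> float:
    input_tokens = transcript_words / 0.75
    return round(
        input_tokens / 1000 * INPUT_RATE_PER_1K
        + max_output_tokens / 1000 * OUTPUT_RATE_PER_1K,
        4,
    )

# A 45-minute interview runs roughly 6,000-8,000 words of transcript.
print(estimate_cost(7000))  # 0.0767
```
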

Step 6: Candidate Consent Capture Workflow

Implement a legally compliant consent capture workflow. Create a digital consent form (via Jotform or Typeform) that informs the candidate: (1) the interview will be recorded, (2) AI will be used to transcribe and analyze their responses, (3) what data will be retained and for how long, and (4) how to request data deletion. Integrate the consent form into the interview scheduling workflow so it is sent automatically when an interview is scheduled and must be completed before the interview begins.

1
Create new form at jotform.com with these required fields: Candidate Full Name (text), Candidate Email (email), Position Applied For (text), Consent Checkbox 1: 'I consent to this interview being recorded', Consent Checkbox 2: 'I understand AI will analyze my responses', Consent Checkbox 3: 'I acknowledge the data retention policy', Electronic Signature field, Date/Time (auto-populated)
2
Enable form submission notifications to recruiter email
3
Configure Jotform > Zapier integration
1
Zapier automation for consent — Trigger: Greenhouse > Interview Scheduled (or ATS equivalent)
2
Action 1: Jotform > Prefill form with candidate name, email, position
3
Action 2: Email by Zapier > Send consent form link to candidate
4
Filter: Only proceed to interview if consent form is completed
Consent record storage
plaintext
# Google Sheets or SharePoint row schema

# Store consent records:
# Action: Google Sheets or SharePoint > Add row with consent data
# Columns: Candidate Name, Email, Position, Consent Date, IP Address, Form ID
Critical

CRITICAL: Do not proceed with any recorded interview unless consent is captured and logged. For two-party consent states (CA, FL, PA, IL, MA, MD, MT, NV, NH, WA, DE), written consent is legally required. Even in one-party consent states, best practice is to always obtain explicit consent. Store consent records for the duration of the data retention period plus 2 years. Consult with client legal counsel to finalize consent language.
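The "only proceed if consent is completed" gate from the Zapier filter can be mirrored in script form; a minimal sketch (field names such as `recording_consent` are illustrative stand-ins for the three checkboxes above, not actual Jotform field IDs):

```python
# Gate recording on consent: all three checkboxes from the consent form above
# must be affirmative and the electronic signature must be present.
from datetime import datetime

REQUIRED_CONSENTS = ("recording_consent", "ai_analysis_consent", "retention_ack")

def consent_is_valid(record: dict) -> bool:
    if not record.get("signature"):
        return False
    return all(record.get(field) is True for field in REQUIRED_CONSENTS)

record = {
    "candidate_email": "jane@example.com",
    "recording_consent": True,
    "ai_analysis_consent": True,
    "retention_ack": True,
    "signature": "Jane Doe",
    "submitted_at": datetime(2025, 1, 14, 9, 30).isoformat(),
}
print(consent_is_valid(record))  # True
```
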

Step 7: Data Retention and Deletion Policy Configuration

Configure data retention policies across all systems to comply with the client's data governance requirements and applicable regulations. Set Metaview transcript retention to the agreed-upon period (recommend 12 months default). Configure automatic deletion workflows in Zapier to purge transcripts and summaries from secondary storage after the retention period. Establish a candidate data deletion request process compliant with GDPR Article 17 and applicable state laws.

1
Metaview retention configuration: Settings > Data & Privacy > Retention Period > Set to 12 months
2
Enable auto-deletion after retention period in Metaview
1
Cloud storage retention (SharePoint): Configure Information Management Policy in SharePoint document library — Library Settings > Information Management Policy > Enable Retention > Delete after 12 months
2
Cloud storage retention (Google Drive): Use Google Vault for retention management — admin.google.com > Apps > Google Workspace > Google Vault > Retention Rules > Create rule for Interview Transcripts folder with 12-month retention
SharePoint: Unlock site customization before applying Information Management Policy
powershell
Set-SPOSite -Identity https://tenant.sharepoint.com/sites/InterviewArchive -DenyAddAndCustomizePages $false
1
Candidate deletion request automation (Zapier) — Trigger: Jotform > 'Data Deletion Request' form submitted
2
Action 1: Metaview API > Delete candidate data (if API supports)
3
Action 2: Google Sheets > Mark consent record as 'DELETED'
4
Action 3: ATS API > Remove interview notes for candidate
5
Action 4: Email > Confirm deletion to candidate and compliance officer
Note

Data retention periods should be determined by client legal counsel based on their jurisdictions and applicable regulations. GDPR requires response to deletion requests within 30 days. US state laws vary. Document the retention policy in a client-facing privacy notice that is provided to candidates. Keep a deletion log for audit purposes.
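The retention purge in Step 7 reduces to date arithmetic; a hedged sketch (the record shape is illustrative and the 12-month window follows the recommendation above; confirm the actual period with client legal counsel):

```python
# Identify records past the retention window for deletion workflows.
from datetime import date, timedelta

RETENTION_DAYS = 365  # recommended 12-month default from Step 7

def records_to_purge(records: list, today: date) -> list:
    cutoff = today - timedelta(days=RETENTION_DAYS)
    return [r for r in records if date.fromisoformat(r["interview_date"]) < cutoff]

records = [
    {"candidate": "A", "interview_date": "2023-06-01"},
    {"candidate": "B", "interview_date": "2024-11-20"},
]
due = records_to_purge(records, today=date(2025, 1, 15))
print([r["candidate"] for r in due])  # ['A']
```
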

Step 8: Interview Room Configuration and Audio Optimization

Physically configure each interview room for optimal audio capture. Place the Jabra Speak2 75 at the center of the interview table, connected via USB-C to the room's laptop. Run a test recording to verify audio quality, speaker separation, and noise cancellation. For rooms with significant ambient noise (HVAC, street noise), adjust microphone gain settings and consider acoustic treatment (foam panels). Configure the room's laptop to use the speakerphone as the default audio input for Zoom/Teams/Meet.

1
Windows — Set Jabra as default audio input for video conferencing: Control Panel > Sound > Recording tab > Right-click 'Jabra Speak2 75' > Set as Default Device
2
macOS: System Preferences > Sound > Input > Select 'Jabra Speak2 75'
3
Zoom — Set audio device: Zoom Settings > Audio > Microphone: Jabra Speak2 75 — Check 'Automatically adjust microphone volume' OFF — Set microphone input level to 75-80%
4
Teams — Set audio device: Teams Settings > Devices > Audio devices > Jabra Speak2 75
Windows PowerShell
powershell
# Record a 60-second test clip at typical interview distance, then play it back
# and verify clarity, volume, and noise cancellation. The command below launches
# the built-in Sound Recorder app (the package name may vary by Windows version).

Start-Process "explorer.exe" "shell:AppsFolder\Microsoft.WindowsSoundRecorder_8wekyb3d8bbwe!App"
Note

Optimal speakerphone placement is center-table, within 3-6 feet of all speakers. Avoid placing near laptop fans, air vents, or windows. If the room has hard surfaces (glass, concrete), consider adding a felt desk pad under the speakerphone to reduce reflections. Run audio tests with 2-3 people speaking at interview volume levels. The speakerphone's noise cancellation works best when voices are the dominant sound source.

Step 9: Template and Scorecard Configuration

Work with the client's Head of Recruiting to configure structured interview templates and evaluation scorecard mappings. In Metaview, configure note templates that align with the client's interview stages (phone screen, technical interview, cultural fit, final round). In the custom OpenAI summarization pipeline, configure the evaluation prompt to map to the client's specific competency framework and rating scale. In the ATS, create or update scorecard templates to receive the AI-generated evaluation data.

1
Navigate to Metaview > Templates
2
Create templates for each interview type: Phone Screen Template (Key skills, salary expectations, availability, motivation), Technical Interview Template (Technical skills assessment, problem-solving, relevant experience), Cultural Fit Template (Values alignment, communication style, team dynamics), Final Round Template (Leadership potential, strategic thinking, overall recommendation)
3
Navigate to Greenhouse > Configure > Scorecards
4
For each interview stage, add attributes matching the AI output: Overall Rating (1-5 scale), Key Strengths (text), Areas of Concern (text), Skills Assessment (per-skill rating), Recommendation (Strong Yes / Yes / No / Strong No)
5
Map these fields in the Zapier ATS integration step
Note

Scorecard templates should be co-designed with the client's recruiting leadership. Spend at least 1-2 hours in a workshop session reviewing their current evaluation criteria, interview stages, and desired output format. The AI summarization prompt (see custom_ai_components) should be updated whenever the client changes their evaluation criteria. Document all template configurations in the client's runbook.

Step 10: Pilot Testing with Recruiter Cohort

Before full rollout, run a 2-week pilot with 2-3 designated recruiters. These recruiters should conduct their normal interview schedule with the full system active: ambient capture, transcription, AI summarization, and ATS integration. Collect detailed feedback on transcription accuracy, summary quality, workflow friction, and candidate reactions. Use this feedback to refine templates, prompts, and workflows before rolling out to all users.

1
Create a shared Google Sheet / Excel Online with columns: Date | Recruiter | Candidate | Position | Interview Type | Transcription Accuracy (1-5) | Summary Quality (1-5) | ATS Sync Success (Y/N) | Consent Captured (Y/N) | Issues/Feedback
2
Schedule pilot kickoff meeting — Attendees: MSP tech lead, client recruiting champion, pilot recruiters | Agenda: System walkthrough, consent workflow demo, feedback process | Duration: 90 minutes
3
Schedule pilot review meeting (end of week 2) — Review all pilot interviews in tracking sheet, identify transcription errors and pattern issues, review AI summary quality and ATS sync reliability, decide on prompt/template adjustments before full rollout
Note

Aim for at least 15-20 interviews during the pilot period to get statistically meaningful feedback. Common issues to watch for: (1) speaker misidentification in multi-person interviews, (2) technical jargon transcription errors, (3) AI summaries that are too generic, (4) consent form friction slowing down scheduling. Address all critical issues before proceeding to full rollout. Document all changes made during the pilot in the project change log.
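The pilot tracking sheet above lends itself to a quick roll-up at the week-2 review; an illustrative sketch (column names are shortened forms of the sheet headers):

```python
# Summarize pilot tracking-sheet rows into the review-meeting metrics.
def pilot_summary(rows: list) -> dict:
    n = len(rows)
    return {
        "interviews": n,
        "avg_transcription_accuracy": round(sum(r["transcription_accuracy"] for r in rows) / n, 2),
        "avg_summary_quality": round(sum(r["summary_quality"] for r in rows) / n, 2),
        "ats_sync_rate": round(sum(r["ats_sync_ok"] for r in rows) / n, 2),
        "consent_rate": round(sum(r["consent_captured"] for r in rows) / n, 2),
    }

rows = [
    {"transcription_accuracy": 4, "summary_quality": 5, "ats_sync_ok": True, "consent_captured": True},
    {"transcription_accuracy": 3, "summary_quality": 4, "ats_sync_ok": False, "consent_captured": True},
]
print(pilot_summary(rows))
```
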

Step 11: Full Rollout and User Training

After pilot validation and adjustments, roll out the system to all recruiters and hiring managers. Conduct training sessions in cohorts of 5-10 users. Training covers: (1) how to ensure the Metaview bot joins their interviews, (2) how to use speakerphones for in-person interviews, (3) how to review and edit AI-generated summaries, (4) how to manage the consent workflow, (5) where to find transcripts and scorecards in the ATS, and (6) how to handle candidate questions about recording/AI. Deploy the Metaview Chrome extension or Teams bot to all user machines.

1
Chrome extension mass deployment via Google Workspace Admin: Go to admin.google.com > Devices > Chrome > Apps & extensions > Users & browsers
2
Select the Recruiters organizational unit
3
Click + > Add from Chrome Web Store > Search 'Metaview'
4
Set Installation Policy: Force install
5
Save
6
For Intune-managed Windows devices (Teams bot deployment): In Microsoft Endpoint Manager > Apps > All Apps > Add
7
App type: Microsoft Store app (or .msix sideload)
8
Assign to 'Recruiters' group
Post-deployment verification commands for Metaview installation
powershell
# Post-deployment verification. Get-InstalledModule only lists PowerShell
# modules, so query installed app packages instead (MSIX/Store deployments):
Get-AppxPackage | Where-Object {$_.Name -like '*Metaview*'}
# Or check Chrome extensions: chrome://extensions/ on each machine
Note

Prepare a 1-page quick reference card for each recruiter with: (1) how to verify the bot is in the meeting, (2) how to check if recording consent was received, (3) how to access the transcript after the interview, (4) who to contact for technical issues. Schedule a 30-minute office hours session 1 week after rollout for Q&A. Track adoption metrics: % of interviews captured, % of summaries reviewed, % pushed to ATS.

Step 12: Post-Deployment Monitoring and Optimization

After full rollout, establish ongoing monitoring for system health, adoption, and quality. Configure alerts in Zapier for workflow failures. Set up a monthly review cadence with the client to review adoption metrics, transcription accuracy, summary quality, and compliance status. Optimize the AI summarization prompt based on recruiter feedback collected over the first 30 days of production use.

1
Zapier > Settings > Notifications > Enable 'Task Error' alerts
2
Set notification email to msp-alerts@yourmsp.com
3
Create a Zapier 'Error Handler' Zap: Trigger: Zapier Manager > Task Failed | Action: Slack > Post to #msp-monitoring channel
  • Monthly metrics to track (query from Metaview admin dashboard): Total interviews captured, Average transcription word count, User adoption rate (active users / total licensed users), Consent form completion rate, ATS sync success rate
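The monthly metrics listed above are simple ratios once the raw counts are exported from the Metaview dashboard and the consent log; an illustrative sketch (the input counts are examples, not targets from this guide):

```python
# Compute the monthly review metrics from raw counts.
def monthly_metrics(active_users: int, licensed_users: int,
                    consents_completed: int, interviews_captured: int,
                    ats_syncs_ok: int) -> dict:
    return {
        "user_adoption_rate": round(active_users / licensed_users, 2),
        "consent_completion_rate": round(consents_completed / interviews_captured, 2),
        "ats_sync_success_rate": round(ats_syncs_ok / interviews_captured, 2),
    }

print(monthly_metrics(8, 10, 95, 100, 97))
# {'user_adoption_rate': 0.8, 'consent_completion_rate': 0.95, 'ats_sync_success_rate': 0.97}
```
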
API usage monitoring (OpenAI)
bash
# Track monthly spend and per-interview cost. Use double quotes so the shell
# expands $OPENAI_API_KEY; the usage endpoint expects a date parameter.
curl "https://api.openai.com/v1/usage?date=$(date +%F)" \
  -H "Authorization: Bearer $OPENAI_API_KEY"
Note

Set a 30-day checkpoint to review and refine the AI summarization prompt. Common optimizations include: adding industry-specific terminology to improve accuracy, adjusting the scoring rubric based on recruiter feedback, and tuning the summary length. Establish an escalation path: Tier 1 (recruiter self-service for basic issues) → Tier 2 (MSP helpdesk for technical issues) → Tier 3 (MSP solutions architect for integration/prompt engineering).

Custom AI Components

Structured Candidate Evaluation Prompt

Type: prompt

A carefully engineered GPT-5.4 system prompt that transforms raw interview transcripts into structured candidate evaluation scorecards. The prompt enforces consistent evaluation criteria, uses a standardized rating scale, identifies key evidence from the transcript, and outputs valid JSON that can be directly mapped to ATS scorecard fields. This prompt is the core AI component that differentiates the solution from basic transcription.

Implementation

System Prompt for GPT-5.4 Candidate Evaluation
text
## System Prompt for GPT-5.4 Candidate Evaluation

You are an expert HR interview evaluator working for a staffing agency. Your task is to analyze an interview transcript and produce a structured candidate evaluation scorecard.

## Instructions

1. Read the entire interview transcript carefully.
2. Evaluate the candidate across the competency dimensions listed below.
3. For each competency, provide:
   - A rating from 1-5 (1=Poor, 2=Below Average, 3=Average, 4=Above Average, 5=Excellent)
   - 1-2 specific evidence quotes from the transcript supporting your rating
   - A brief justification (1-2 sentences)
4. Provide an overall recommendation: STRONG_YES, YES, MAYBE, NO, or STRONG_NO
5. Identify the top 3 strengths and top 3 concerns.
6. Generate a 150-word executive summary suitable for a hiring manager who has not read the transcript.
7. Flag any potential red flags or inconsistencies in the candidate's responses.

## Competency Dimensions

- **Technical Skills**: Relevant knowledge, tools, methodologies for the role
- **Communication**: Clarity, articulation, active listening, professional demeanor
- **Problem Solving**: Analytical thinking, creativity, structured approach to challenges
- **Experience Relevance**: How well past experience maps to the target role
- **Cultural Fit**: Alignment with team values, collaboration style, adaptability
- **Motivation & Interest**: Genuine enthusiasm for the role and company, career alignment
- **Leadership/Initiative**: Self-direction, ownership mentality, ability to influence

## Output Format

Respond ONLY with valid JSON in this exact structure:

{
  "candidate_name": "[Extracted from transcript or 'Unknown']",
  "position": "[Extracted from transcript or 'Unknown']",
  "interview_date": "[Extracted or current date]",
  "interview_duration_minutes": [estimated from transcript length],
  "competency_scores": {
    "technical_skills": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    },
    "communication": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    },
    "problem_solving": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    },
    "experience_relevance": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    },
    "cultural_fit": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    },
    "motivation_interest": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    },
    "leadership_initiative": {
      "rating": [1-5],
      "evidence": ["Direct quote 1", "Direct quote 2"],
      "justification": "Brief explanation"
    }
  },
  "overall_score": [weighted average, 1 decimal],
  "recommendation": "STRONG_YES | YES | MAYBE | NO | STRONG_NO",
  "top_strengths": ["Strength 1", "Strength 2", "Strength 3"],
  "top_concerns": ["Concern 1", "Concern 2", "Concern 3"],
  "red_flags": ["Flag 1 or empty array if none"],
  "executive_summary": "150-word summary for hiring manager",
  "follow_up_questions": ["Suggested follow-up question 1", "Suggested follow-up question 2"]
}

## Important Guidelines

- Base your evaluation ONLY on the content of the transcript. Do not make assumptions about the candidate's appearance, voice tone, accent, age, gender, or any protected characteristic.
- If a competency cannot be evaluated from the transcript (e.g., the topic was not discussed), set the rating to null and note 'Not assessed in this interview' in the justification.
- Be calibrated: a '3' rating means the candidate meets the basic standard for the role. Reserve '5' for truly exceptional responses with strong, specific evidence.
- Extract the candidate's name and position from the conversation context. If not clearly stated, use 'Unknown'.
- The executive summary should be written in third person and be suitable for pasting directly into an ATS.

Usage in Zapier

1. In the Zapier OpenAI step, configure the settings below.
2. Parse the JSON response in the next Zapier step using 'Formatter > Utilities > JSON Parse'.
3. Map individual fields to ATS scorecard fields as shown below.
Zapier OpenAI Step Configuration
text
Model: gpt-5.4
System Prompt: [paste the above system prompt]
User Prompt: `Please evaluate the following interview transcript:\n\n{{raw_transcript_from_metaview}}`
Temperature: 0.3
Max Tokens: 2000
Response Format: json_object
ATS Scorecard Field Mapping
text
`overall_score`        → ATS overall rating
`recommendation`       → ATS recommendation field
`executive_summary`    → ATS summary/notes field
`competency_scores.*`  → Individual ATS scorecard attributes
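Between the JSON Parse step and the field mapping, a Code by Zapier (Python) step can sanity-check the model's output before anything reaches the ATS. A minimal sketch, assuming equal competency weights (substitute the client's actual weighting; `check_evaluation` is an illustrative helper, not part of any vendor toolkit):

```python
# Sanity-check the parsed evaluation before syncing it to the ATS.
# WEIGHTS is an assumption (equal weighting) -- replace it with the
# client's actual competency weighting.
import json

WEIGHTS = {
    "technical_skills": 1, "communication": 1, "problem_solving": 1,
    "experience_relevance": 1, "cultural_fit": 1,
    "motivation_interest": 1, "leadership_initiative": 1,
}
VALID_RECOMMENDATIONS = {"STRONG_YES", "YES", "MAYBE", "NO", "STRONG_NO"}

def check_evaluation(raw_json: str) -> dict:
    ev = json.loads(raw_json)
    # Keep only competencies the model actually rated (null = not assessed)
    rated = {k: v["rating"] for k, v in ev["competency_scores"].items()
             if v.get("rating") is not None}
    if rated:
        total_weight = sum(WEIGHTS[k] for k in rated)
        # Recompute the weighted average rather than trusting the model's arithmetic
        ev["overall_score"] = round(
            sum(WEIGHTS[k] * r for k, r in rated.items()) / total_weight, 1)
    if ev.get("recommendation") not in VALID_RECOMMENDATIONS:
        raise ValueError(f"Unexpected recommendation: {ev.get('recommendation')!r}")
    return ev
```

Recomputing the weighted average guards against the occasional arithmetic slip in model output; an unexpected recommendation value fails loudly so it surfaces in the Zap's error handling rather than writing a bad scorecard.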

Type: workflow. A Zapier-based automation workflow that triggers when an interview is scheduled in the ATS, sends a digital consent form to the candidate, tracks completion, and gates interview recording on consent status. This is designed to ensure consent is captured before any interview is recorded, in every jurisdiction where the client operates.

Trigger: Greenhouse (or ATS) > Interview Scheduled

  • Trigger fires when a new interview event is created
  • Captures: candidate_name, candidate_email, position_name, interview_datetime, interviewer_name

Action 1: Jotform > Prefill Form

  • Form: 'Interview Recording & AI Consent'
  • Prefill candidate_name, candidate_email, position_name, interview_date
  • Generate unique prefilled form URL

Action 2: Email by Zapier > Send Email

  • To: {{candidate_email}}
  • Subject: 'Action Required: Interview Consent Form for {{position_name}} at {{company_name}}'
Consent email body template for Action 2
text
Dear {{candidate_name}},

Thank you for your upcoming interview for the {{position_name}} position scheduled on {{interview_datetime}}.

As part of our commitment to a thorough and fair hiring process, we use technology to record and transcribe interviews. An AI system will also generate a summary of your responses to help our hiring team make informed decisions.

Before your interview, please review and complete the consent form linked below:

{{prefilled_form_url}}

Key points:
• Your interview will be audio recorded and transcribed
• An AI system will analyze the transcript to generate evaluation notes
• Your data will be retained for {{retention_period}} months
• You may request deletion of your data at any time
• You may decline consent and still proceed with a traditional interview

If you have questions, please contact {{recruiter_email}}.

Best regards,
{{company_name}} Recruiting Team

Action 3: Google Sheets > Add Row to 'Consent Tracking' sheet

  • Columns: candidate_name, candidate_email, position, interview_date, consent_form_sent_date, consent_status='PENDING'

Trigger: Jotform > New Submission on 'Interview Recording & AI Consent'

Action 1: Google Sheets > Update Row in 'Consent Tracking'

  • Lookup: candidate_email
  • Update: consent_status='COMPLETED', consent_date={{submission_date}}, consent_ip={{submitter_ip}}

Action 2: Slack > Send Message

  • Channel: #recruiting-ops
  • Message: '✅ Consent received from {{candidate_name}} for {{position_name}} interview on {{interview_date}}'

Trigger: Schedule by Zapier > Every Hour

Action 1: Google Sheets > Lookup Row

  • Filter: consent_status='PENDING' AND interview_date is within 24 hours

Action 2: (Filter) Only continue if matching rows found

Action 3: Email by Zapier > Send Reminder

  • To: {{candidate_email}}
  • Subject: 'Reminder: Please Complete Consent Form Before Your Interview'
  • Body: [Reminder version of original email with form link]

Action 4: Slack > Send Message

  • Channel: #recruiting-ops
  • Message: '⚠️ Consent PENDING for {{candidate_name}} interview in <24 hours. Recruiter: {{recruiter_name}}'

Trigger: Schedule by Zapier > Every 30 Minutes

Action 1: Google Sheets > Lookup Row

  • Filter: consent_status='PENDING' AND interview_date is within 1 hour

Action 2: Slack > Send Urgent Message

  • Channel: #recruiting-ops
  • Message: '🚨 CONSENT NOT RECEIVED for {{candidate_name}} interview starting in <1 hour. Recording will NOT be enabled. Recruiter must obtain verbal consent or conduct interview without recording.'

Action 3: Email by Zapier > Notify Recruiter

  • To: {{recruiter_email}}
  • Subject: 'ACTION REQUIRED: No Consent for {{candidate_name}} Interview'
Critical

The candidate has not completed the recording consent form. You must either obtain verbal consent at the start of the interview (and document it) or conduct the interview without recording. Do NOT enable the Metaview bot without consent.

Jotform Consent Form Fields ('Interview Recording & AI Consent')

1. Candidate Full Name (text, required, prefilled)
2. Candidate Email (email, required, prefilled)
3. Position Applied For (text, required, prefilled)
4. Interview Date (date, required, prefilled)
5. Consent Statement (read-only text block with full disclosure)
6. Checkbox: 'I consent to my interview being audio recorded and transcribed' (required)
7. Checkbox: 'I understand that an AI system will generate a summary of my responses' (required)
8. Checkbox: 'I understand my data will be retained for [X] months and I can request deletion' (required)
9. Electronic Signature (required)
10. Hidden field: submitter_ip (auto-captured)
11. Hidden field: submission_timestamp (auto-captured)

Transcript-to-ATS Scorecard Sync Integration

Type: integration. A Zapier workflow that takes the structured JSON evaluation output from the GPT-5.4 prompt and maps it to specific ATS scorecard fields, creating or updating the candidate's scorecard in Greenhouse, Lever, or Bullhorn. Handles error cases, retries, and logging.

Zapier Workflow: ATS Scorecard Sync

For Greenhouse ATS

Trigger: Webhooks by Zapier > Catch Hook (receives parsed evaluation JSON from the summarization Zap)

Step 1: Lookup Candidate in Greenhouse

  • Action: Webhooks by Zapier > Custom Request
  • Method: GET
  • URL: https://harvest.greenhouse.io/v1/candidates?email={{candidate_email}}
  • Headers: Authorization: Basic {{base64_encoded_api_key}}, Content-Type: application/json
  • Parse response to extract: candidate_id, application_id
Greenhouse: Lookup candidate by email
http
GET https://harvest.greenhouse.io/v1/candidates?email={{candidate_email}}
Authorization: Basic {{base64_encoded_api_key}}
Content-Type: application/json
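The 'parse response to extract' step above can run in a Code by Zapier (Python) step. A sketch assuming the Harvest 'list candidates' response shape (a JSON array of candidate objects, each carrying an `applications` list); verify field names against the client's actual payload:

```python
# Extract candidate_id and application_id from the Harvest candidates
# response. Field names assume the Greenhouse "list candidates" schema;
# the most recently applied application is assumed to be the relevant one.
import json

def extract_ids(response_body: str) -> dict:
    candidates = json.loads(response_body)
    if not candidates:
        raise ValueError("No Greenhouse candidate matched this email")
    cand = candidates[0]  # email lookups should return a single match
    apps = cand.get("applications", [])
    if not apps:
        raise ValueError("Candidate has no applications")
    # Pick the newest application by applied_at (ISO timestamps sort lexically)
    newest = max(apps, key=lambda a: a.get("applied_at") or "")
    return {"candidate_id": cand["id"], "application_id": newest["id"]}
```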

Step 2: Get Interview ID

  • Action: Webhooks by Zapier > Custom Request
  • Method: GET
  • URL: https://harvest.greenhouse.io/v1/applications/{{application_id}}/scheduled_interviews
  • Headers: same as above
  • Filter for most recent interview, extract: interview_id
Greenhouse: Get scheduled interviews for application
http
GET https://harvest.greenhouse.io/v1/applications/{{application_id}}/scheduled_interviews
Authorization: Basic {{base64_encoded_api_key}}
Content-Type: application/json
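Likewise, a sketch for 'filter for most recent interview', assuming the Harvest scheduled-interviews response carries a `start.date_time` field; adjust if the actual payload differs:

```python
# Pick the most recent scheduled interview from the Harvest response.
# Assumes each interview object has a "start": {"date_time": ...} field
# with ISO-8601 timestamps, which sort correctly as strings.
import json

def latest_interview_id(response_body: str) -> int:
    interviews = json.loads(response_body)
    if not interviews:
        raise ValueError("No scheduled interviews on this application")
    latest = max(
        interviews,
        key=lambda iv: (iv.get("start") or {}).get("date_time") or "",
    )
    return latest["id"]
```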

Step 3: Submit Scorecard

  • Action: Webhooks by Zapier > Custom Request
  • Method: POST
  • URL: https://harvest.greenhouse.io/v1/applications/{{application_id}}/scorecards
  • Headers: same as above
Greenhouse: Submit scorecard with competency ratings
json
{
  "interview": {{interview_id}},
  "interviewer": {{interviewer_user_id}},
  "overall_recommendation": "{{map_recommendation_to_greenhouse_enum}}",
  "attributes": [
    {
      "name": "Technical Skills",
      "type": "rating",
      "rating": "{{competency_scores.technical_skills.rating}}",
      "notes": "{{competency_scores.technical_skills.justification}}\n\nEvidence: {{competency_scores.technical_skills.evidence}}"
    },
    {
      "name": "Communication",
      "type": "rating",
      "rating": "{{competency_scores.communication.rating}}",
      "notes": "{{competency_scores.communication.justification}}\n\nEvidence: {{competency_scores.communication.evidence}}"
    },
    {
      "name": "Problem Solving",
      "type": "rating",
      "rating": "{{competency_scores.problem_solving.rating}}",
      "notes": "{{competency_scores.problem_solving.justification}}\n\nEvidence: {{competency_scores.problem_solving.evidence}}"
    },
    {
      "name": "Experience Relevance",
      "type": "rating",
      "rating": "{{competency_scores.experience_relevance.rating}}",
      "notes": "{{competency_scores.experience_relevance.justification}}\n\nEvidence: {{competency_scores.experience_relevance.evidence}}"
    },
    {
      "name": "Cultural Fit",
      "type": "rating",
      "rating": "{{competency_scores.cultural_fit.rating}}",
      "notes": "{{competency_scores.cultural_fit.justification}}\n\nEvidence: {{competency_scores.cultural_fit.evidence}}"
    },
    {
      "name": "Motivation & Interest",
      "type": "rating",
      "rating": "{{competency_scores.motivation_interest.rating}}",
      "notes": "{{competency_scores.motivation_interest.justification}}\n\nEvidence: {{competency_scores.motivation_interest.evidence}}"
    },
    {
      "name": "Leadership / Initiative",
      "type": "rating",
      "rating": "{{competency_scores.leadership_initiative.rating}}",
      "notes": "{{competency_scores.leadership_initiative.justification}}\n\nEvidence: {{competency_scores.leadership_initiative.evidence}}"
    }
  ],
  "submitted_at": "{{current_iso_datetime}}"
}

Recommendation Mapping

  • STRONG_YES → definitely_yes (Greenhouse enum)
  • YES → yes
  • MAYBE → no_decision
  • NO → no
  • STRONG_NO → definitely_not
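The mapping above can be implemented as a simple lookup in a Code by Zapier (Python) step; this sketch adds a fallback to `no_decision` so an unexpected model output degrades gracefully instead of blocking the sync:

```python
# Map the AI recommendation enum to the Greenhouse enum, per the table above.
GREENHOUSE_RECOMMENDATION = {
    "STRONG_YES": "definitely_yes",
    "YES": "yes",
    "MAYBE": "no_decision",
    "NO": "no",
    "STRONG_NO": "definitely_not",
}

def map_recommendation(value: str) -> str:
    # Normalize whitespace/case, and fall back to "no_decision" for any
    # unexpected value rather than raising and halting the Zap.
    return GREENHOUSE_RECOMMENDATION.get((value or "").strip().upper(), "no_decision")
```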

Step 4: Add Interview Notes (Executive Summary)

  • Action: Webhooks by Zapier > Custom Request
  • Method: POST
  • URL: https://harvest.greenhouse.io/v1/candidates/{{candidate_id}}/activity_feed/notes
Greenhouse: Post executive summary as admin-only candidate note
json
{
  "user_id": {{system_user_id}},
  "body": "## AI-Generated Interview Summary\n\n{{executive_summary}}\n\n**Top Strengths:** {{top_strengths}}\n**Top Concerns:** {{top_concerns}}\n**Red Flags:** {{red_flags}}\n**Suggested Follow-ups:** {{follow_up_questions}}\n\n---\n_This summary was auto-generated by AI from the interview transcript. Please review for accuracy._",
  "visibility": "admin_only"
}

Step 5: Error Handling

  • Add a Zapier Paths step after Step 3:
  • Path A (success - HTTP 200/201): Continue to Step 4
  • Path B (failure - HTTP 4xx/5xx): Action: Slack > Send Message to #msp-monitoring with message: ❌ ATS scorecard sync failed for {{candidate_name}}. Error: {{http_status}} - {{error_body}}. Manual entry required.
  • Path B (failure): Action: Email > Notify MSP support team

For Bullhorn ATS (Staffing-specific)

Replace Step 3 with a request to the Bullhorn Note entity endpoint. Note that Bullhorn's REST API inverts the usual convention: entities are created with PUT (POST updates an existing entity), and every call must carry a valid BhRestToken:

Bullhorn: Store evaluation as a Note entity on the candidate record
json
PUT https://rest.bullhornstaffing.com/rest-services/{{corpToken}}/entity/Note?BhRestToken={{bh_rest_token}}

{
  "personReference": {"id": {{candidate_id}}},
  "action": "Interview AI Evaluation",
  "comments": "{{executive_summary}}\n\nOverall Score: {{overall_score}}/5\nRecommendation: {{recommendation}}\n\nStrengths: {{top_strengths}}\nConcerns: {{top_concerns}}"
}
Note

Bullhorn does not have native structured scorecards like Greenhouse. The evaluation is stored as a Note entity attached to the candidate record.

Custom Vocabulary and Jargon Enhancement

Type: skill. A pre-processing step that enhances transcription accuracy for industry-specific terminology, client-specific role names, technology stacks, and company names. This is particularly important for technical staffing firms, where domain-specific jargon may be mis-transcribed.

Implementation

For Metaview

1. Navigate to Metaview Settings > Transcription > Custom Vocabulary
2. Add domain-specific terms organized by category

Technology Terms (example for IT staffing)

Kubernetes, Terraform, CI/CD, DevOps, SRE, gRPC, GraphQL, PostgreSQL, Redis, Kafka, Elasticsearch, Docker, Ansible, Jenkins, GitHub Actions, TypeScript, React, Next.js, Node.js, Python, FastAPI, Django, AWS, Azure, GCP, Lambda, ECS, EKS, S3, CloudFormation, SOC 2, HIPAA, PCI DSS, GDPR, FedRAMP

Healthcare Staffing Terms (example)

RN, LPN, CNA, NP, PA-C, CRNA, BSN, MSN, DNP, EMR, EHR, Epic, Cerner, Meditech, MEDITECH, ICU, ER, OR, PACU, NICU, L&D, Med-Surg, JCAHO, CMS, OSHA, Joint Commission, BLS, ACLS, PALS, NRP, TNCC

Client-Specific Terms

  • Client company name and common misspellings
  • Client product names
  • Client-specific role titles
  • Names of hiring managers and interviewers
  • Office locations and department names

For OpenAI Whisper API (if using custom pipeline)

Add a post-processing step in the Zapier workflow using a Code by Zapier (Python) step:

Zapier Step: Code by Zapier (Python)
python
# post-processing transcript correction

import re

# Input: raw_transcript from transcription step
raw_transcript = input_data.get('transcript', '')

# Domain-specific corrections dictionary
corrections = {
    # Technology terms
    r'\bkubernetti\b': 'Kubernetes',
    r'\bkubernetes\b': 'Kubernetes',
    r'\bterra form\b': 'Terraform',
    r'\bci cd\b': 'CI/CD',
    r'\bdev ops\b': 'DevOps',
    r'\bgraph ql\b': 'GraphQL',
    r'\bpost gress\b': 'PostgreSQL',
    r'\bpost gres\b': 'PostgreSQL',
    r'\belastic search\b': 'Elasticsearch',
    r'\bnext js\b': 'Next.js',
    r'\bnode js\b': 'Node.js',
    r'\btype script\b': 'TypeScript',
    r'\breact js\b': 'React',
    r'\bfast api\b': 'FastAPI',
    # Healthcare terms
    r'\bjako\b': 'JCAHO',
    r'\bmed surge\b': 'Med-Surg',
    r'\bmedditech\b': 'MEDITECH',
    # Add client-specific corrections here
}

corrected_transcript = raw_transcript
for pattern, replacement in corrections.items():
    corrected_transcript = re.sub(pattern, replacement, corrected_transcript, flags=re.IGNORECASE)

# Code by Zapier (Python) returns data via the 'output' variable, not 'return'
output = {'corrected_transcript': corrected_transcript}

Maintenance

  • Review transcription accuracy monthly with recruiters
  • Add new terms as the client hires for new technical domains
  • Update interviewer name list when team changes occur
  • Keep the corrections dictionary in a shared Google Sheet that non-technical staff can update, with a Zapier workflow that syncs it to the processing step
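The last point can be sketched as follows, assuming the shared sheet exports two columns named `heard` and `correct_term` (illustrative names); phrases are regex-escaped so sheet entries can never break the correction pass:

```python
# Build the corrections dictionary from a two-column sheet export so
# non-technical staff can maintain it. Column names "heard" and
# "correct_term" are assumptions -- match them to the actual sheet.
import csv
import io
import re

def load_corrections(csv_text: str) -> dict:
    corrections = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        heard = row["heard"].strip()
        if heard:
            # Escape the phrase so punctuation in sheet entries is literal
            corrections[r"\b" + re.escape(heard) + r"\b"] = row["correct_term"].strip()
    return corrections

def apply_corrections(text: str, corrections: dict) -> str:
    # Case-insensitive pass, mirroring the Code by Zapier step above
    for pattern, replacement in corrections.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text
```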

Interview Quality Analytics Dashboard

Type: workflow. A Google Sheets-based analytics dashboard that aggregates interview data across all captured interviews, providing the client's recruiting leadership with insights on interviewer consistency, evaluation distribution, hiring funnel conversion, and system adoption metrics.

Implementation

Sheet 1: Raw Data (Auto-populated via Zapier)

Zapier workflow appends a row after each interview evaluation is generated.

Columns:

  • A: Interview Date
  • B: Candidate Name
  • C: Position
  • D: Interviewer
  • E: Interview Type
  • F: Duration (min)
  • G: Technical Score
  • H: Communication Score
  • I: Problem Solving Score
  • J: Experience Score
  • K: Cultural Fit Score
  • L: Motivation Score
  • M: Leadership Score
  • N: Overall Score
  • O: Recommendation
  • P: Consent Captured
  • Q: ATS Sync Status
  • R: Transcript Word Count

Sheet 2: Dashboard (Formulas)

System Adoption Metrics

System adoption formulas for Sheet 2 Dashboard
spreadsheet
Total Interviews Captured (MTD): =COUNTIFS(A:A, ">="&EOMONTH(TODAY(),-1)+1, A:A, "<="&TODAY())
Adoption Rate: =Total Captured / Total Scheduled (manual input or ATS API)
Consent Completion Rate: =COUNTIF(P:P, "YES") / COUNTA(P:P)
ATS Sync Success Rate: =COUNTIF(Q:Q, "SUCCESS") / COUNTA(Q:Q)

Evaluation Distribution Metrics

Evaluation distribution formulas for Sheet 2 Dashboard
spreadsheet
Avg Overall Score: =AVERAGE(N:N)
Std Dev of Scores: =STDEV(N:N)
Strong Yes Rate: =COUNTIF(O:O, "STRONG_YES") / COUNTA(O:O)
Yes Rate: =COUNTIF(O:O, "YES") / COUNTA(O:O)
No Rate: =COUNTIF(O:O, "NO") / COUNTA(O:O)

Interviewer Consistency

Built as a Pivot Table on Sheet 3 with the following configuration:

  • Rows: Interviewer Name
  • Values: AVG of Overall Score, STDEV of Overall Score, COUNT of interviews
  • Sort by: STDEV descending (highest variance = least consistent)

Time Savings Metrics

Time savings estimate formulas for Sheet 2 Dashboard
spreadsheet
Estimated Time Saved (hours): =COUNTA(A:A) * 0.5  (assuming 30 min saved per interview in note-taking)
Equivalent Recruiter Cost Saved: =Time Saved * $35  (avg recruiter hourly rate)

Sheet 3: Interviewer Calibration Report

Pivot table showing per-interviewer scoring patterns:

  • Average score per competency
  • Score distribution (how often each interviewer gives 1/2/3/4/5)
  • Recommendation distribution
  • Flags interviewers whose average scores are >1 standard deviation from the team mean
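The outlier flag in the last bullet can be reproduced in code if the report ever moves out of Sheets; a sketch using the population standard deviation of per-interviewer averages:

```python
# Flag interviewers whose mean overall score sits more than one standard
# deviation from the team mean, mirroring the pivot-table logic above.
from collections import defaultdict
from statistics import mean, pstdev

def flag_outlier_interviewers(rows):
    """rows: iterable of (interviewer_name, overall_score) tuples."""
    by_interviewer = defaultdict(list)
    for name, score in rows:
        by_interviewer[name].append(score)
    averages = {name: mean(scores) for name, scores in by_interviewer.items()}
    team_mean = mean(averages.values())
    team_sd = pstdev(averages.values())
    # Sorted for a stable report; empty when scores are too uniform to flag
    return sorted(
        name for name, avg in averages.items()
        if team_sd and abs(avg - team_mean) > team_sd
    )
```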

Zapier Automation to Populate Dashboard

After the ATS scorecard sync Zap completes, configure the following Zapier action:

  • Action: Google Sheets > Append Row to 'Interview Analytics' spreadsheet
  • Sheet: 'Raw Data'
  • Map all fields from the parsed evaluation JSON

Monthly Report Automation

  • Trigger: Schedule by Zapier > First Monday of each month
  • Action 1: Google Sheets > Get range from Dashboard sheet
  • Action 2: Email by Zapier > Send formatted report to client recruiting leadership
  • Subject: 'Monthly Interview Intelligence Report - {{month}} {{year}}'

Testing & Validation

  • AUDIO CAPTURE TEST: Place the Jabra Speak2 75 in each interview room, position two people at opposite ends of the table (6 ft apart), and conduct a 5-minute mock conversation at normal speaking volume. Record using the video conferencing app (Zoom/Teams). Play back the recording and verify both speakers are captured clearly with minimal background noise. Repeat with 4 people for panel room setup with the Poly Sync 60. Pass criteria: all speakers audible, no clipping, background noise reduced to near-silent levels.
  • TRANSCRIPTION ACCURACY TEST: Run 3 real or mock interviews (15+ minutes each) through the full Metaview transcription pipeline. Compare the transcript against manual review of the recording. Calculate Word Error Rate (WER) by counting incorrect/missing/inserted words divided by total words. Pass criteria: WER below 10% for clear audio; below 15% for challenging audio (accents, cross-talk). Document any consistently mis-transcribed terms for the custom vocabulary list.
  • AI SUMMARIZATION QUALITY TEST: Feed 5 diverse interview transcripts (varying positions, interview types, and candidate quality levels) through the GPT-5.4 evaluation prompt. Have the client's Head of Recruiting review each AI-generated scorecard alongside the transcript. Verify: (1) ratings are reasonably calibrated (no all-5s or all-1s without justification), (2) evidence quotes are accurate and exist in the transcript, (3) the executive summary is factual and not hallucinated, (4) the recommendation aligns with the evidence. Pass criteria: 4 out of 5 evaluations rated 'acceptable' or better by the recruiting lead.
  • ATS INTEGRATION TEST: Trigger the full pipeline for a test candidate in the ATS staging/sandbox environment. Verify: (1) the scorecard appears on the correct candidate record, (2) all 7 competency ratings are populated, (3) the recommendation field is correctly mapped, (4) the executive summary note is attached, (5) the data is visible to the correct user roles. Pass criteria: 100% field mapping accuracy, data appears within 5 minutes of interview completion.
  • CONSENT WORKFLOW TEST: Create a test interview in the ATS. Verify: (1) the consent form email is sent to the candidate within 5 minutes, (2) the form is prefilled with correct candidate/position data, (3) form submission updates the consent tracking sheet, (4) Slack notification fires upon consent completion, (5) 24-hour reminder fires if consent is pending, (6) the 1-hour alert fires and warns the recruiter to not enable recording. Pass criteria: all 6 checkpoints pass with correct data and timing.
  • DATA DELETION TEST: Submit a test data deletion request through the deletion request form. Verify: (1) the request is logged in the consent tracking sheet, (2) Metaview data for the test candidate is queued for deletion, (3) ATS interview notes are flagged/removed, (4) confirmation email is sent to the requestor, (5) deletion is completed within the specified timeframe (target: 72 hours, regulatory max: 30 days for GDPR). Pass criteria: all data stores purged and confirmation sent.
  • END-TO-END LATENCY TEST: Conduct a complete mock interview from start to finish and measure the time from interview end to: (1) transcript availability in Metaview, (2) AI evaluation scorecard generation, (3) scorecard appearing in the ATS, (4) Slack notification to hiring manager. Pass criteria: transcript available within 10 minutes, scorecard in ATS within 20 minutes of interview end, Slack notification within 25 minutes.
  • MULTI-SPEAKER DIARIZATION TEST: Conduct a 3-person panel interview (2 interviewers + 1 candidate) and verify the transcript correctly labels each speaker. Check that the AI evaluation prompt correctly identifies and evaluates only the candidate's responses, not the interviewers' questions. Pass criteria: speaker labeling accuracy above 90%, AI evaluation does not attribute interviewer statements to the candidate.
  • ZAPIER WORKFLOW FAILURE RECOVERY TEST: Deliberately trigger a failure in each Zapier workflow step (e.g., invalid API key, ATS timeout, malformed JSON). Verify: (1) error alerts fire to the #msp-monitoring Slack channel, (2) email notification reaches the MSP support queue, (3) the workflow gracefully fails without corrupting data, (4) manual retry succeeds after fixing the issue. Pass criteria: all failure modes produce alerts within 5 minutes and no data corruption occurs.
  • CONCURRENT INTERVIEW STRESS TEST: Schedule 3 interviews simultaneously across different rooms/video platforms. Verify all 3 are captured, transcribed, evaluated, and synced to the ATS without conflicts or data cross-contamination. Pass criteria: all 3 interviews produce correct, independent scorecards on the correct candidate records.
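The WER calculation in the transcription accuracy test above can be scripted rather than counted by hand; a minimal sketch using word-level edit distance:

```python
# Word Error Rate: (substitutions + insertions + deletions) divided by
# the number of words in the reference transcript.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # Standard dynamic-programming edit distance, computed over words
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution / match
        prev = curr
    return prev[-1] / max(len(ref), 1)
```

A WER below 0.10 passes the clear-audio criterion; run it against the manually reviewed transcript as the reference.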

Client Handoff

The client handoff should be conducted as a formal 3-hour session with the following stakeholders: Head of Recruiting (decision-maker), Recruiting Operations Manager (day-to-day admin), 2-3 lead recruiters (power users), and optionally the client's IT contact and legal/compliance officer.

Training Topics to Cover (with hands-on demos)

1. How the Metaview bot joins meetings — show on Zoom, Teams, and Meet; explain the bot's visible name and how candidates see it
2. In-room speakerphone usage — proper placement, connecting to laptop, switching between USB and Bluetooth, battery/charging management
3. Consent workflow — walk through the full candidate journey from scheduling to consent form completion; show the tracking spreadsheet; demonstrate what happens when consent is not received
4. Reviewing AI-generated summaries — show where summaries appear in the ATS; explain how to read the scorecard; demonstrate how to edit or override AI ratings; emphasize that AI scores are advisory, not final
5. Custom vocabulary management — show how to add new terms when hiring for new technical domains
6. Analytics dashboard — walk through the Google Sheets dashboard; explain each metric; show how to generate monthly reports
7. Data deletion requests — demonstrate the deletion request process; explain retention timeline
8. Troubleshooting common issues — bot didn't join, poor audio quality, transcript errors, ATS sync failures; provide the escalation path

Documentation to Leave Behind

1. Quick Reference Card (1-page laminated card for each recruiter): How to start a recorded interview, verify consent, and access the summary
2. Admin Guide (10-15 pages): Full system architecture diagram, all login credentials (stored in password manager), integration configuration details, custom vocabulary list, scorecard template specifications
3. Compliance Runbook (5-10 pages): Consent form language, data retention policy, deletion request procedure, jurisdiction-specific requirements, bias audit schedule
4. Troubleshooting Guide (5 pages): Common issues and resolution steps, escalation contacts, SLA expectations
5. Monthly Report Template: Pre-configured Google Sheets dashboard with instructions for generating and distributing monthly recruiting intelligence reports

Success Criteria to Review Together

Maintenance

Monthly Maintenance Tasks (MSP Responsibility)

1. Transcription Accuracy Audit (Monthly, 2 hours): Review 5-10 random transcripts against recordings. Calculate WER. Update custom vocabulary list with any new mis-transcribed terms. If WER exceeds 15%, investigate audio quality issues or escalate to Metaview support.
2. AI Summary Quality Review (Monthly, 2 hours): Review 5-10 AI-generated evaluations with the client's recruiting lead. Verify scoring calibration, evidence accuracy, and summary quality. Refine the GPT-5.4 evaluation prompt if systematic issues are identified (e.g., consistently harsh/lenient scoring, missing competency areas).
3. Compliance Audit (Monthly, 1 hour): Review consent tracking spreadsheet. Verify 100% consent capture rate. Check for any pending deletion requests. Confirm data retention policy is being enforced (transcripts older than retention period are purged). Document audit results.
4. Software Updates and Patching (Monthly, 1 hour): Check for Metaview platform updates and new features. Update Jabra/Poly speakerphone firmware via Jabra Direct/Poly Lens. Verify Zapier workflows are running without errors. Check OpenAI API for model updates or deprecation notices.
5. User Provisioning (As needed): Add/remove users as recruiters join/leave. Adjust Metaview seat count and license costs accordingly. Update interviewer names in custom vocabulary.
6. Analytics Report Distribution (Monthly, 30 minutes): Generate and send the monthly Interview Intelligence Report to client recruiting leadership. Include: interviews captured, adoption rate, average scores, interviewer consistency analysis, time savings estimate, system health summary.

Quarterly Maintenance Tasks

1. Prompt Engineering Review (Quarterly, 4 hours): Conduct a comprehensive review of the AI evaluation prompt. Analyze patterns across 50+ evaluations. Adjust scoring calibration, competency weighting, and output format based on accumulated feedback. A/B test prompt changes on 10 transcripts before deploying.
2. Integration Health Check (Quarterly, 2 hours): Test all API connections end-to-end. Verify ATS API keys haven't expired. Check Zapier task usage against plan limits. Review error logs for recurring failures.
3. Compliance Policy Review (Quarterly, 2 hours): Review regulatory landscape for new AI employment laws. Check NYC LL144 bias audit requirements. Monitor Colorado AI Act developments (effective Feb 2026). Update consent language and disclosure notices as needed. Coordinate with client legal counsel.
4. Hardware Inspection (Quarterly, 1 hour): Inspect speakerphones for physical damage, battery health, and firmware currency. Replace any degraded units. Clean microphone grilles.

SLA Considerations

  • Platform availability: Metaview SLA (vendor-managed); MSP monitors for outages
  • Workflow uptime: MSP targets 99% Zapier workflow success rate; alerts within 15 minutes of failure
  • Response time: Tier 1 issues (can't start recording) — 1 hour response; Tier 2 issues (ATS sync failure) — 4 hour response; Tier 3 issues (prompt engineering, integration rebuild) — 1 business day
  • Escalation path: Recruiter → Client Recruiting Ops Manager → MSP Helpdesk → MSP Solutions Architect → Metaview/vendor support

Model/Prompt Retraining Triggers

  • Client changes evaluation criteria or competency framework
  • Client adds new interview stages or types
  • WER exceeds 15% consistently for 2+ months
  • Recruiter satisfaction with AI summaries drops below 80%
  • OpenAI deprecates or updates the GPT-5.4 model
  • Client expands into new hiring domains (e.g., adding healthcare recruiting to an IT staffing firm)

Alternatives

BrightHire as Primary Platform (instead of Metaview)

Use BrightHire as the interview intelligence platform instead of Metaview. BrightHire offers deeper native ATS integrations (especially Greenhouse and Lever), 1-click scorecard completion, built-in bias audit compliance, and enterprise-grade features. It positions itself as the leader in interview intelligence for mid-market to enterprise staffing firms.

Custom API Pipeline (Deepgram + GPT-5.4 + ATS API)

Build a fully custom transcription and evaluation pipeline without a purpose-built interview intelligence platform. Use Deepgram Nova-3 API ($0.0043/min) for transcription with real-time speaker diarization, GPT-5.4 for summarization, and direct ATS API calls for scorecard submission. The MSP owns and operates the entire pipeline, typically deployed as a set of serverless functions (AWS Lambda or Azure Functions) triggered by webhooks.

Otter.ai Business + Manual Scorecard Workflow

Use Otter.ai Business ($20/seat/month) for transcription and AI-generated meeting summaries, with recruiters manually copying relevant sections into ATS scorecards. No automated ATS integration or custom AI evaluation — purely transcription and basic summarization as a productivity tool.

Self-Hosted Whisper + Local LLM (Air-Gapped / Data Sovereignty)

Deploy the entire solution on-premises or in a private cloud using open-source faster-whisper for transcription and an open-source LLM (Llama 3, Mistral, or Phi-3) for summarization. No data leaves the client's infrastructure. Requires an NVIDIA GPU server (T4 or RTX A2000+) and DevOps expertise.

Note

Recommendation: Only for clients with regulatory requirements mandating on-premises data processing (e.g., FedRAMP, defense, certain healthcare). For all other staffing firms, the SaaS approach is strongly preferred due to lower complexity and faster time-to-value.

Criteria Corp Interview Intelligence (IIQ)

Use Criteria Corp's newly launched Interview Intelligence (IIQ) platform, which provides AI-assisted question creation, automated interview scoring based on I/O psychology science, and integration with Criteria's broader assessment suite. The scoring is based solely on transcript content (not appearance or tone), designed to reduce bias.
