
Implementation Guide: Transcribe interviews and generate structured candidate evaluation summaries
Step-by-step implementation guide for deploying AI to transcribe interviews and generate structured candidate evaluation summaries for HR & Staffing clients.
Hardware Procurement
Jabra Speak2 75 Conference Speakerphone
$180 per unit MSP cost / $265 suggested resale per unit
Primary ambient audio capture device for in-person and hybrid interview rooms. Features 4 beamforming noise-cancelling microphones, USB-A/USB-C connectivity, Bluetooth with up to 32 hours wireless talk time, and 98 ft wireless range with the optional Link 390a dongle. Supports rooms with up to 6 participants, ideal for 1-on-1 and small-panel interviews.
Poly Sync 60 Conference Speakerphone
$400 per unit MSP cost / $560 suggested resale per unit
Large-room ambient capture device for panel interview rooms accommodating up to 12 participants. Microsoft Teams certified. Deployed in the client's main conference room where multi-interviewer panel sessions occur.
Jabra Link 390a USB Bluetooth Dongle
$40 per unit MSP cost / $60 suggested resale per unit
Bluetooth dongle to extend wireless range of Jabra Speak2 75 speakerphones to laptops without reliable built-in Bluetooth. Ensures stable audio connection to the recording application up to 98 ft from the host device.
Rode NT-USB Mini USB Condenser Microphone
$85 per unit MSP cost / $125 suggested resale per unit
High-fidelity USB microphone for a dedicated quiet interview room or recruiter desk where a single interviewer needs studio-grade audio quality. Used as an optional upgrade for rooms where premium transcription accuracy is critical (e.g., executive-level interviews).
Software Procurement
Metaview Pro
$50/seat/month billed annually ($600/seat/year). For 10 recruiters: $6,000/year
Primary interview intelligence platform. Auto-joins Zoom/Teams/Google Meet interviews as an AI notetaker, generates real-time transcription, produces structured interview notes and candidate summaries, and provides recruiting analytics. SOC 2 certified and GDPR-ready.
Zapier Team Plan
$103.50/month ($1,242/year)
Workflow automation middleware to connect Metaview outputs (transcripts, summaries) to ATS platforms that lack native Metaview integration (e.g., Bullhorn, BambooHR, JobDiva). Also used to trigger Slack/email notifications and archive transcripts to cloud storage.
OpenAI API (GPT-5.4)
~$0.50–$1.50 per interview for custom summarization prompts. Estimated $50–$150/month for 100 interviews/month
Supplementary AI engine for generating custom structured candidate evaluation scorecards beyond Metaview's built-in summaries. Used in the Zapier workflow to transform raw transcripts into client-specific evaluation formats before pushing to the ATS.
Microsoft 365 Business Standard or Google Workspace Business Standard
$12.50/user/month (M365) or $14/user/month (Google Workspace) — assumed already in client environment
Provides the video conferencing platform (Microsoft Teams or Google Meet) and identity provider (Azure AD or Google Workspace) for SSO. Also provides cloud storage (OneDrive/SharePoint or Google Drive) for transcript archival.
Zoom Workplace Pro (if applicable)
$13.33/user/month billed annually — only if client uses Zoom instead of Teams/Meet
Video conferencing platform for virtual interviews. Metaview's bot integrates natively with Zoom to join, record, and transcribe meetings. Zoom Pro or higher is required for meetings longer than 40 minutes.
Jotform or Typeform (Digital Consent Forms)
$34–$50/month
Digital consent form platform for capturing candidate recording and AI analysis consent before each interview. Forms are sent via automated email workflow and responses are logged for compliance records.
Prerequisites
- Active Applicant Tracking System (ATS) with API access enabled — supported ATS platforms include Greenhouse, Lever, and Ashby (native Metaview integration), or Bullhorn, BambooHR, Workable, and JobDiva (via Zapier integration). Confirm the client's ATS plan tier includes API access.
- Video conferencing platform licensed and deployed — Zoom Pro/Business, Microsoft Teams (via M365 Business Standard+), or Google Meet (via Google Workspace Business Standard+). All interviewers must have licensed accounts.
- Stable internet connectivity in all interview rooms — minimum 5 Mbps upload per concurrent interview session. WiFi should be WPA2/WPA3 Enterprise; wired Ethernet preferred for dedicated interview rooms.
- Identity provider configured for SSO — Azure Active Directory (Entra ID), Google Workspace, or Okta. All recruiter and hiring manager accounts must be provisioned in the IdP.
- Admin credentials for the client's ATS, video conferencing platform, email system, and identity provider. MSP must have delegated admin access or work alongside client IT admin.
- Client legal counsel has reviewed and approved: (1) interview recording consent language, (2) AI disclosure notice for candidates, (3) data retention policy (recommended 12-month default), and (4) jurisdiction-specific compliance requirements (especially if hiring in CA, IL, NY, or EU).
- Firewall and network allow HTTPS outbound (port 443) to: metaview.ai, api.openai.com, zapier.com, and the client's ATS API endpoint. No deep packet inspection should interfere with WebSocket connections used by meeting bots.
- Chrome browser (latest stable version) installed on all recruiter/hiring manager workstations — required for Metaview browser extension and optimal meeting bot performance.
- Designated interview rooms identified — confirm room count, seating capacity, existing AV equipment, and USB/power outlet availability for speakerphone deployment.
- Client has designated a project champion (typically Head of Recruiting or HR Operations Manager) who will serve as the primary point of contact, make workflow decisions, and drive internal adoption.
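The outbound-firewall prerequisite can be spot-checked before installation day with a short connectivity probe. A minimal sketch; append the client's actual ATS API hostname to the list before running:

```python
import socket

# Outbound HTTPS endpoints from the firewall prerequisite; append the
# client's ATS API hostname before running.
ENDPOINTS = ["metaview.ai", "api.openai.com", "zapier.com"]

def check_endpoint(host: str, port: int = 443, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ENDPOINTS:
        print(f"{host}:443 ->", "OK" if check_endpoint(host) else "BLOCKED")
```

Run this from a workstation on each interview-room VLAN; a BLOCKED result usually indicates a firewall rule or DNS filtering issue to resolve with client IT.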
Installation Steps
Step 1: Hardware Deployment — Speakerphones and Microphones
Unbox and deploy Jabra Speak2 75 units in each designated interview room and the Poly Sync 60 in the main conference/panel interview room. Connect each speakerphone via USB-C to the room's dedicated interview laptop or docking station. Insert the Jabra Link 390a Bluetooth dongle into each laptop's USB-A port if Bluetooth will be used wirelessly. For the optional Rode NT-USB Mini, mount it on the recruiter's desk using the integrated stand and connect via USB-C. Verify each device is recognized by the operating system as the default audio input device.
# Windows: confirm the speakerphone enumerates as an audio endpoint
Get-PnpDevice -Class AudioEndpoint | Where-Object {$_.FriendlyName -like '*Jabra*' -or $_.FriendlyName -like '*Poly*'}

# macOS: confirm via system_profiler
system_profiler SPAudioDataType | grep -A 5 'Jabra\|Poly'

# Windows: set the Jabra as the default recording device (requires the AudioDeviceCmdlets module)
$device = Get-AudioDevice -List | Where-Object {$_.Name -like '*Jabra Speak2 75*' -and $_.Type -eq 'Recording'}
Set-AudioDevice -ID $device.ID

Firmware updates: Check Jabra Direct (https://www.jabra.com/software-and-services/jabra-direct) and Poly Lens (https://www.poly.com/us/en/products/services/poly-lens) for firmware updates before deployment. Update firmware on all devices during initial setup. Label each speakerphone with the room name/number using a label maker for asset tracking.
Step 2: Metaview Account Provisioning and SSO Configuration
Create the Metaview organizational account at app.metaview.ai. Configure SSO using the client's identity provider (Azure AD or Google Workspace) via SAML 2.0 or OAuth. Provision user accounts for all recruiters and hiring managers who will conduct interviews. Assign the 'Recruiter' role to active interviewers and 'Viewer' role to hiring managers who only need to review summaries.
# Azure AD App Registration settings:
App Registration > New Registration > Name: Metaview Interview AI
Redirect URI: https://app.metaview.ai/auth/callback
Copy Client ID and Tenant ID into Metaview SSO settings

Metaview supports both SAML 2.0 and OAuth-based SSO. Confirm with Metaview support which method is available on the Pro plan. If SSO is only available on the Enterprise tier, use email-based invitations with enforced strong passwords and require MFA via the IdP. Keep a record of all provisioned users for license tracking.
Step 3: Video Conferencing Integration
Connect Metaview to the client's video conferencing platform. For Zoom: navigate to Metaview Settings > Integrations > Zoom, authorize the connection using a Zoom admin account, and enable 'Auto-join scheduled interviews.' For Microsoft Teams: install the Metaview bot from the Teams App Store (or sideload via Teams Admin Center), authorize with M365 global admin credentials, and configure auto-join policies. For Google Meet: install the Metaview Chrome extension on all recruiter machines and authorize Google Calendar access.
# Sideload the Metaview Teams bot via the MicrosoftTeams PowerShell module
Connect-MicrosoftTeams
New-TeamsApp -DistributionMethod organization -Path ./metaview-teams-bot.zip

The Metaview bot joins meetings as a visible participant named 'Metaview Notetaker.' Inform all interviewers that this bot will appear. For compliance, the bot's presence serves as partial notice of recording, but explicit consent must still be obtained separately. Test the integration with a dummy meeting before going live.
Step 4: ATS Integration — Native or Zapier-Based
Connect Metaview to the client's ATS to enable automatic routing of interview summaries and transcripts to candidate records. For Greenhouse, Lever, or Ashby: use Metaview's native integration (Settings > Integrations > [ATS Name], authorize with ATS admin API key). For Bullhorn, BambooHR, or other ATS without native integration: configure Zapier workflows to receive Metaview webhook events and push structured data to the ATS via its API.
Greenhouse Native Integration
Bullhorn via Zapier
Example Zapier Webhook Payload from Metaview
{
"event": "interview.completed",
"candidate": { "name": "Jane Doe", "email": "jane@example.com" },
"interview": { "date": "2025-01-15", "duration_minutes": 42 },
"transcript": "Full transcript text...",
"summary": "AI-generated summary text..."
}

API key permissions should follow the principle of least privilege. For Greenhouse, only grant Harvest API permissions for Candidates (read/write) and Scorecards (write). Store API keys in a password manager (e.g., IT Glue, Hudu) — never in plaintext documents. Test the integration by running a mock interview and verifying the summary appears in the correct candidate record within the ATS.
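A transform step (e.g., a Code step in Zapier) can validate the webhook body before anything is pushed to the ATS. A minimal sketch; the required-key set simply mirrors the example payload above:

```python
import json

# Keys expected in every interview.completed webhook, per the example payload.
REQUIRED_KEYS = {"event", "candidate", "interview", "transcript", "summary"}

def parse_metaview_event(raw: str) -> dict:
    """Parse and sanity-check a webhook body before pushing data to the ATS."""
    payload = json.loads(raw)
    missing = REQUIRED_KEYS - payload.keys()
    if missing:
        raise ValueError(f"payload missing keys: {sorted(missing)}")
    if payload["event"] != "interview.completed":
        raise ValueError(f"unexpected event type: {payload['event']}")
    return payload

sample = json.dumps({
    "event": "interview.completed",
    "candidate": {"name": "Jane Doe", "email": "jane@example.com"},
    "interview": {"date": "2025-01-15", "duration_minutes": 42},
    "transcript": "Full transcript text...",
    "summary": "AI-generated summary text...",
})
event = parse_metaview_event(sample)
```

Rejecting malformed payloads here keeps partial records out of the ATS and surfaces integration problems in the Zap history instead.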
Step 5: Custom AI Summarization Pipeline via Zapier + OpenAI
Configure an enhanced summarization workflow that takes Metaview's raw transcript and generates a client-specific structured candidate evaluation scorecard using OpenAI GPT-5.4. This supplements Metaview's built-in summaries with custom evaluation criteria matching the client's hiring rubric. Create a multi-step Zapier workflow: (1) Trigger on Metaview interview completion webhook, (2) Send transcript to OpenAI GPT-5.4 with a structured prompt, (3) Parse the structured JSON response, (4) Push the scorecard to the ATS candidate record, (5) Send a Slack/email notification to the hiring manager.
The OpenAI API call costs approximately $0.50–$1.50 per interview transcript depending on length, consistent with the budget estimate in Software Procurement. Set a max_tokens limit of 2000 to control costs. A temperature of 0.3 keeps evaluations consistent and low-variance, though not strictly deterministic. Always test with 5-10 real interview transcripts before enabling in production. Monitor Zapier task usage: each 5-step workflow execution consumes 5 tasks from the plan quota.
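The OpenAI step in this workflow can be sketched as a request builder that applies the settings above. A sketch only, not Metaview's or Zapier's actual implementation; the SYSTEM_PROMPT placeholder stands in for the evaluation prompt defined under Custom AI Components:

```python
# Placeholder for the full evaluation prompt defined under Custom AI Components.
SYSTEM_PROMPT = "[paste the candidate evaluation system prompt here]"

def build_evaluation_request(transcript: str, model: str = "gpt-5.4") -> dict:
    """Assemble the chat-completion request body with the pipeline settings."""
    return {
        "model": model,
        "temperature": 0.3,                          # low-variance scoring
        "max_tokens": 2000,                          # per-interview cost ceiling
        "response_format": {"type": "json_object"},  # force parseable JSON output
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {
                "role": "user",
                "content": "Please evaluate the following interview "
                           f"transcript:\n\n{transcript}",
            },
        ],
    }

body = build_evaluation_request("Interviewer: Tell me about your last role...")
```

Keeping the request assembly in one function makes it easy to audit the prompt, token limit, and temperature during the 30-day prompt review.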
Step 6: Compliance Workflow — Candidate Consent and AI Disclosure
Implement a legally compliant consent capture workflow. Create a digital consent form (via Jotform or Typeform) that informs the candidate: (1) the interview will be recorded, (2) AI will be used to transcribe and analyze their responses, (3) what data will be retained and for how long, and (4) how to request data deletion. Integrate the consent form into the interview scheduling workflow so it is sent automatically when an interview is scheduled and must be completed before the interview begins.
# Google Sheets or SharePoint row schema
# Store consent records:
# Action: Google Sheets or SharePoint > Add row with consent data
# Columns: Candidate Name, Email, Position, Consent Date, IP Address, Form ID

CRITICAL: Do not proceed with any recorded interview unless consent is captured and logged. For two-party consent states (CA, FL, PA, IL, MA, MD, MT, NV, NH, WA, DE), all parties' consent is legally required; capture it in writing. Even in one-party consent states, best practice is to always obtain explicit consent. Store consent records for the duration of the data retention period plus 2 years. Consult with client legal counsel to finalize consent language.
Step 7: Data Retention and Deletion Policy Configuration
Configure data retention policies across all systems to comply with the client's data governance requirements and applicable regulations. Set Metaview transcript retention to the agreed-upon period (recommend 12 months default). Configure automatic deletion workflows in Zapier to purge transcripts and summaries from secondary storage after the retention period. Establish a candidate data deletion request process compliant with GDPR Article 17 and applicable state laws.
# SharePoint Online Management Shell example for the transcript archive site
Set-SPOSite -Identity https://tenant.sharepoint.com/sites/InterviewArchive -DenyAddAndCustomizePages $false

Data retention periods should be determined by client legal counsel based on their jurisdictions and applicable regulations. GDPR requires a response to deletion requests within one month (extendable by two months for complex requests). US state laws vary. Document the retention policy in a client-facing privacy notice that is provided to candidates. Keep a deletion log for audit purposes.
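The date logic behind the automated purge workflow reduces to a simple retention check. A minimal sketch; the record layout is an assumption, and the 365-day default matches the recommended 12-month policy above:

```python
from datetime import date, timedelta

RETENTION_DAYS = 365  # 12-month default; confirm with client legal counsel

def is_expired(interview_date: date, today: date,
               retention_days: int = RETENTION_DAYS) -> bool:
    """True once a record is past the agreed retention window."""
    return today - interview_date > timedelta(days=retention_days)

def select_for_deletion(records, today: date) -> list:
    """records: iterable of (candidate_email, interview_date) pairs.
    Returns the emails whose transcripts and summaries should be purged."""
    return [email for email, d in records if is_expired(d, today)]
```

Each purge run should append the selected records to the deletion log before removing anything, so the audit trail survives the deletion itself.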
Step 8: Interview Room Configuration and Audio Optimization
Physically configure each interview room for optimal audio capture. Place the Jabra Speak2 75 at the center of the interview table, connected via USB-C to the room's laptop. Run a test recording to verify audio quality, speaker separation, and noise cancellation. For rooms with significant ambient noise (HVAC, street noise), adjust microphone gain settings and consider acoustic treatment (foam panels). Configure the room's laptop to use the speakerphone as the default audio input for Zoom/Teams/Meet.
# Record 60 seconds of conversational audio from typical interview distance,
# then play back and verify clarity, volume, and noise cancellation.
# Launch the Windows Voice Recorder app (the AUMID below may vary by Windows version):
Start-Process 'explorer.exe' 'shell:AppsFolder\Microsoft.WindowsSoundRecorder_8wekyb3d8bbwe!App'

Optimal speakerphone placement is center-table, within 3-6 feet of all speakers. Avoid placing near laptop fans, air vents, or windows. If the room has hard surfaces (glass, concrete), consider adding a felt desk pad under the speakerphone to reduce reflections. Run audio tests with 2-3 people speaking at interview volume levels. The speakerphone's noise cancellation works best when voices are the dominant sound source.
Step 9: Template and Scorecard Configuration
Work with the client's Head of Recruiting to configure structured interview templates and evaluation scorecard mappings. In Metaview, configure note templates that align with the client's interview stages (phone screen, technical interview, cultural fit, final round). In the custom OpenAI summarization pipeline, configure the evaluation prompt to map to the client's specific competency framework and rating scale. In the ATS, create or update scorecard templates to receive the AI-generated evaluation data.
Scorecard templates should be co-designed with the client's recruiting leadership. Spend at least 1-2 hours in a workshop session reviewing their current evaluation criteria, interview stages, and desired output format. The AI summarization prompt (see custom_ai_components) should be updated whenever the client changes their evaluation criteria. Document all template configurations in the client's runbook.
Step 10: Pilot Testing with Recruiter Cohort
Before full rollout, run a 2-week pilot with 2-3 designated recruiters. These recruiters should conduct their normal interview schedule with the full system active: ambient capture, transcription, AI summarization, and ATS integration. Collect detailed feedback on transcription accuracy, summary quality, workflow friction, and candidate reactions. Use this feedback to refine templates, prompts, and workflows before rolling out to all users.
Aim for at least 15-20 interviews during the pilot period to get statistically meaningful feedback. Common issues to watch for: (1) speaker misidentification in multi-person interviews, (2) technical jargon transcription errors, (3) AI summaries that are too generic, (4) consent form friction slowing down scheduling. Address all critical issues before proceeding to full rollout. Document all changes made during the pilot in the project change log.
Step 11: Full Rollout and User Training
After pilot validation and adjustments, roll out the system to all recruiters and hiring managers. Conduct training sessions in cohorts of 5-10 users. Training covers: (1) how to ensure the Metaview bot joins their interviews, (2) how to use speakerphones for in-person interviews, (3) how to review and edit AI-generated summaries, (4) how to manage the consent workflow, (5) where to find transcripts and scorecards in the ATS, and (6) how to handle candidate questions about recording/AI. Deploy the Metaview Chrome extension or Teams bot to all user machines.
# Post-deployment verification: confirm the Metaview app appears in installed programs
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' | Where-Object {$_.DisplayName -like '*Metaview*'}
# Or check Chrome extensions: chrome://extensions/ on each machine

Prepare a 1-page quick reference card for each recruiter with: (1) how to verify the bot is in the meeting, (2) how to check if recording consent was received, (3) how to access the transcript after the interview, (4) who to contact for technical issues. Schedule a 30-minute office hours session 1 week after rollout for Q&A. Track adoption metrics: % of interviews captured, % of summaries reviewed, % pushed to ATS.
Step 12: Post-Deployment Monitoring and Optimization
After full rollout, establish ongoing monitoring for system health, adoption, and quality. Configure alerts in Zapier for workflow failures. Set up a monthly review cadence with the client to review adoption metrics, transcription accuracy, summary quality, and compliance status. Optimize the AI summarization prompt based on recruiter feedback collected over the first 30 days of production use.
- Monthly metrics to track (query from Metaview admin dashboard): Total interviews captured, Average transcription word count, User adoption rate (active users / total licensed users), Consent form completion rate, ATS sync success rate
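The percentage metrics in the monthly review can be computed directly from the dashboard counts. A minimal sketch; the example numbers are illustrative, not client data:

```python
def percent(numerator: int, denominator: int) -> float:
    """Safe percentage helper for the monthly dashboard metrics."""
    return round(100.0 * numerator / denominator, 1) if denominator else 0.0

# Example month: 47 of 52 interviews captured, 8 of 10 licensed users active
metrics = {
    "capture_rate": percent(47, 52),
    "user_adoption_rate": percent(8, 10),
}
```

The same helper covers consent form completion rate and ATS sync success rate; dropping any metric below an agreed threshold should trigger follow-up at the monthly client review.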
# Track monthly spend and per-interview cost (double quotes so the key expands)
curl https://api.openai.com/v1/usage -H "Authorization: Bearer $OPENAI_API_KEY"

Set a 30-day checkpoint to review and refine the AI summarization prompt. Common optimizations include: adding industry-specific terminology to improve accuracy, adjusting the scoring rubric based on recruiter feedback, and tuning the summary length. Establish an escalation path: Tier 1 (recruiter self-service for basic issues) → Tier 2 (MSP helpdesk for technical issues) → Tier 3 (MSP solutions architect for integration/prompt engineering).
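Per-interview cost can be estimated from the token counts the API reports for each call. The prices below are placeholders, not published rates; substitute the current rates for the deployed model before relying on the numbers:

```python
# Hypothetical per-1K-token prices -- substitute the current published rates
# for the deployed model before relying on these estimates.
PRICE_IN_PER_1K = 0.005
PRICE_OUT_PER_1K = 0.015

def interview_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost of one summarization call from its token usage."""
    return round(prompt_tokens / 1000 * PRICE_IN_PER_1K
                 + completion_tokens / 1000 * PRICE_OUT_PER_1K, 4)
```

Summing this over a month of interviews gives the per-interview cost figure to compare against the $50–$150/month budget in Software Procurement.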
Custom AI Components
Structured Candidate Evaluation Prompt
Type: prompt

A carefully engineered GPT-5.4 system prompt that transforms raw interview transcripts into structured candidate evaluation scorecards. The prompt enforces consistent evaluation criteria, uses a standardized rating scale, identifies key evidence from the transcript, and outputs valid JSON that can be directly mapped to ATS scorecard fields. This prompt is the core AI component that differentiates the solution from basic transcription.
Implementation
## System Prompt for GPT-5.4 Candidate Evaluation
You are an expert HR interview evaluator working for a staffing agency. Your task is to analyze an interview transcript and produce a structured candidate evaluation scorecard.
## Instructions
1. Read the entire interview transcript carefully.
2. Evaluate the candidate across the competency dimensions listed below.
3. For each competency, provide:
- A rating from 1-5 (1=Poor, 2=Below Average, 3=Average, 4=Above Average, 5=Excellent)
- 1-2 specific evidence quotes from the transcript supporting your rating
- A brief justification (1-2 sentences)
4. Provide an overall recommendation: STRONG_YES, YES, MAYBE, NO, or STRONG_NO
5. Identify the top 3 strengths and top 3 concerns.
6. Generate a 150-word executive summary suitable for a hiring manager who has not read the transcript.
7. Flag any potential red flags or inconsistencies in the candidate's responses.
## Competency Dimensions
- **Technical Skills**: Relevant knowledge, tools, methodologies for the role
- **Communication**: Clarity, articulation, active listening, professional demeanor
- **Problem Solving**: Analytical thinking, creativity, structured approach to challenges
- **Experience Relevance**: How well past experience maps to the target role
- **Cultural Fit**: Alignment with team values, collaboration style, adaptability
- **Motivation & Interest**: Genuine enthusiasm for the role and company, career alignment
- **Leadership/Initiative**: Self-direction, ownership mentality, ability to influence
## Output Format
Respond ONLY with valid JSON in this exact structure:
{
"candidate_name": "[Extracted from transcript or 'Unknown']",
"position": "[Extracted from transcript or 'Unknown']",
"interview_date": "[Extracted or current date]",
"interview_duration_minutes": [estimated from transcript length],
"competency_scores": {
"technical_skills": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
},
"communication": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
},
"problem_solving": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
},
"experience_relevance": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
},
"cultural_fit": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
},
"motivation_interest": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
},
"leadership_initiative": {
"rating": [1-5],
"evidence": ["Direct quote 1", "Direct quote 2"],
"justification": "Brief explanation"
}
},
"overall_score": [weighted average, 1 decimal],
"recommendation": "STRONG_YES | YES | MAYBE | NO | STRONG_NO",
"top_strengths": ["Strength 1", "Strength 2", "Strength 3"],
"top_concerns": ["Concern 1", "Concern 2", "Concern 3"],
"red_flags": ["Flag 1 or empty array if none"],
"executive_summary": "150-word summary for hiring manager",
"follow_up_questions": ["Suggested follow-up question 1", "Suggested follow-up question 2"]
}
## Important Guidelines
- Base your evaluation ONLY on the content of the transcript. Do not make assumptions about the candidate's appearance, voice tone, accent, age, gender, or any protected characteristic.
- If a competency cannot be evaluated from the transcript (e.g., the topic was not discussed), set the rating to null and note 'Not assessed in this interview' in the justification.
- Be calibrated: a '3' rating means the candidate meets the basic standard for the role. Reserve '5' for truly exceptional responses with strong, specific evidence.
- Extract the candidate's name and position from the conversation context. If not clearly stated, use 'Unknown'.
- The executive summary should be written in third person and be suitable for pasting directly into an ATS.

Usage in Zapier
Model: gpt-5.4
System Prompt: [paste the above system prompt]
User Prompt: `Please evaluate the following interview transcript:\n\n{{raw_transcript_from_metaview}}`
Temperature: 0.3
Max Tokens: 2000
Response Format: json_object

Field mapping to ATS scorecard:
`overall_score` → ATS overall rating
`recommendation` → ATS recommendation field
`executive_summary` → ATS summary/notes field
`competency_scores.*` → Individual ATS scorecard attributes

Interview Consent Automation Workflow
Type: workflow

A Zapier-based automation workflow that triggers when an interview is scheduled in the ATS, sends a digital consent form to the candidate, tracks completion, and gates the interview recording on consent status. This supports compliance with recording consent requirements across all jurisdictions.
Zap 1: Send Consent Form When Interview Scheduled
Trigger: Greenhouse (or ATS) > Interview Scheduled
- Trigger fires when a new interview event is created
- Captures: candidate_name, candidate_email, position_name, interview_datetime, interviewer_name
Action 1: Jotform > Prefill Form
- Form: 'Interview Recording & AI Consent'
- Prefill candidate_name, candidate_email, position_name, interview_date
- Generate unique prefilled form URL
Action 2: Email by Zapier > Send Email
- To: {{candidate_email}}
- Subject: 'Action Required: Interview Consent Form for {{position_name}} at {{company_name}}'
Dear {{candidate_name}},
Thank you for your upcoming interview for the {{position_name}} position scheduled on {{interview_datetime}}.
As part of our commitment to a thorough and fair hiring process, we use technology to record and transcribe interviews. An AI system will also generate a summary of your responses to help our hiring team make informed decisions.
Before your interview, please review and complete the consent form linked below:
{{prefilled_form_url}}
Key points:
• Your interview will be audio recorded and transcribed
• An AI system will analyze the transcript to generate evaluation notes
• Your data will be retained for {{retention_period}} months
• You may request deletion of your data at any time
• You may decline consent and still proceed with a traditional interview
If you have questions, please contact {{recruiter_email}}.
Best regards,
{{company_name}} Recruiting Team

Action 3: Google Sheets > Add Row to 'Consent Tracking' sheet
- Columns: candidate_name, candidate_email, position, interview_date, consent_form_sent_date, consent_status='PENDING'
Zap 2: Track Consent Form Completion
Trigger: Jotform > New Submission on 'Interview Recording & AI Consent'
Action 1: Google Sheets > Update Row in 'Consent Tracking'
- Lookup: candidate_email
- Update: consent_status='COMPLETED', consent_date={{submission_date}}, consent_ip={{submitter_ip}}
Action 2: Slack > Send Message
- Channel: #recruiting-ops
- Message: '✅ Consent received from {{candidate_name}} for {{position_name}} interview on {{interview_date}}'
Zap 3: Consent Reminder (24 hours before interview)
Trigger: Schedule by Zapier > Every Hour
Action 1: Google Sheets > Lookup Row
- Filter: consent_status='PENDING' AND interview_date is within 24 hours
Action 2: (Filter) Only continue if matching rows found
Action 3: Email by Zapier > Send Reminder
- To: {{candidate_email}}
- Subject: 'Reminder: Please Complete Consent Form Before Your Interview'
- Body: [Reminder version of original email with form link]
Action 4: Slack > Send Message
- Channel: #recruiting-ops
- Message: '⚠️ Consent PENDING for {{candidate_name}} interview in <24 hours. Recruiter: {{recruiter_name}}'
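The hourly lookups in Zaps 3 and 4 reduce to the same time-window filter over the 'Consent Tracking' sheet. A minimal sketch; the tuple layout is an assumption matching the sheet columns:

```python
from datetime import datetime, timedelta

def pending_within(rows, now: datetime, hours: int) -> list:
    """Emails of candidates whose consent is still PENDING and whose
    interview starts within the next `hours` hours.
    rows: (candidate_email, consent_status, interview_datetime) tuples."""
    cutoff = now + timedelta(hours=hours)
    return [email for email, status, start in rows
            if status == "PENDING" and now <= start <= cutoff]
```

Zap 3 runs this with a 24-hour window for reminders; Zap 4 runs it with a 1-hour window to trigger the urgent recording-disabled alert.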
Zap 4: Consent Not Received — Disable Recording
Trigger: Schedule by Zapier > Every 30 Minutes
Action 1: Google Sheets > Lookup Row
- Filter: consent_status='PENDING' AND interview_date is within 1 hour
Action 2: Slack > Send Urgent Message
- Channel: #recruiting-ops
- Message: '🚨 CONSENT NOT RECEIVED for {{candidate_name}} interview starting in <1 hour. Recording will NOT be enabled. Recruiter must obtain verbal consent or conduct interview without recording.'
Action 3: Email by Zapier > Notify Recruiter
- To: {{recruiter_email}}
- Subject: 'ACTION REQUIRED: No Consent for {{candidate_name}} Interview'
The candidate has not completed the recording consent form. You must either obtain verbal consent at the start of the interview (and document it) or conduct the interview without recording. Do NOT enable the Metaview bot without consent.
Jotform Consent Form Fields
Transcript-to-ATS Scorecard Sync Integration
Type: integration

A Zapier workflow that takes the structured JSON evaluation output from the GPT-5.4 prompt and maps it to specific ATS scorecard fields, creating or updating the candidate's scorecard in Greenhouse, Lever, or Bullhorn. Handles error cases, retries, and logging.
Zapier Workflow: ATS Scorecard Sync
For Greenhouse ATS
Trigger: Webhooks by Zapier > Catch Hook (receives parsed evaluation JSON from the summarization Zap)
Step 1: Lookup Candidate in Greenhouse
- Action: Webhooks by Zapier > Custom Request
- Method: GET
- URL: https://harvest.greenhouse.io/v1/candidates?email={{candidate_email}}
- Headers: Authorization: Basic {{base64_encoded_api_key}}, Content-Type: application/json
- Parse response to extract: candidate_id, application_id
GET https://harvest.greenhouse.io/v1/candidates?email={{candidate_email}}
Authorization: Basic {{base64_encoded_api_key}}
Content-Type: application/json

Step 2: Get Interview ID
- Action: Webhooks by Zapier > Custom Request
- Method: GET
- URL: https://harvest.greenhouse.io/v1/applications/{{application_id}}/scheduled_interviews
- Headers: same as above
- Filter for most recent interview, extract: interview_id
GET https://harvest.greenhouse.io/v1/applications/{{application_id}}/scheduled_interviews
Authorization: Basic {{base64_encoded_api_key}}
Content-Type: application/json

Step 3: Submit Scorecard
- Action: Webhooks by Zapier > Custom Request
- Method: POST
- URL: https://harvest.greenhouse.io/v1/applications/{{application_id}}/scorecards
- Headers: same as above
{
"interview": {{interview_id}},
"interviewer": {{interviewer_user_id}},
"overall_recommendation": "{{map_recommendation_to_greenhouse_enum}}",
"attributes": [
{
"name": "Technical Skills",
"type": "rating",
"rating": "{{competency_scores.technical_skills.rating}}",
"notes": "{{competency_scores.technical_skills.justification}}\n\nEvidence: {{competency_scores.technical_skills.evidence}}"
},
{
"name": "Communication",
"type": "rating",
"rating": "{{competency_scores.communication.rating}}",
"notes": "{{competency_scores.communication.justification}}\n\nEvidence: {{competency_scores.communication.evidence}}"
},
{
"name": "Problem Solving",
"type": "rating",
"rating": "{{competency_scores.problem_solving.rating}}",
"notes": "{{competency_scores.problem_solving.justification}}\n\nEvidence: {{competency_scores.problem_solving.evidence}}"
},
{
"name": "Experience Relevance",
"type": "rating",
"rating": "{{competency_scores.experience_relevance.rating}}",
"notes": "{{competency_scores.experience_relevance.justification}}\n\nEvidence: {{competency_scores.experience_relevance.evidence}}"
},
{
"name": "Cultural Fit",
"type": "rating",
"rating": "{{competency_scores.cultural_fit.rating}}",
"notes": "{{competency_scores.cultural_fit.justification}}\n\nEvidence: {{competency_scores.cultural_fit.evidence}}"
},
{
"name": "Motivation & Interest",
"type": "rating",
"rating": "{{competency_scores.motivation_interest.rating}}",
"notes": "{{competency_scores.motivation_interest.justification}}\n\nEvidence: {{competency_scores.motivation_interest.evidence}}"
},
{
"name": "Leadership / Initiative",
"type": "rating",
"rating": "{{competency_scores.leadership_initiative.rating}}",
"notes": "{{competency_scores.leadership_initiative.justification}}\n\nEvidence: {{competency_scores.leadership_initiative.evidence}}"
}
],
"submitted_at": "{{current_iso_datetime}}"
}

Recommendation Mapping
- STRONG_YES → definitely_yes (Greenhouse enum)
- YES → yes
- MAYBE → no_decision
- NO → no
- STRONG_NO → definitely_not
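This mapping can be implemented as a dictionary lookup in the Zapier code step that feeds `{{map_recommendation_to_greenhouse_enum}}`. The fallback to no_decision for unrecognized values is an assumption, chosen so a malformed model output degrades gracefully rather than failing the sync:

```python
# Assumed fallback: unrecognized values map to no_decision rather than failing.
GREENHOUSE_RECOMMENDATION = {
    "STRONG_YES": "definitely_yes",
    "YES": "yes",
    "MAYBE": "no_decision",
    "NO": "no",
    "STRONG_NO": "definitely_not",
}

def map_recommendation(value: str) -> str:
    """Translate the prompt's recommendation enum to Greenhouse's enum."""
    return GREENHOUSE_RECOMMENDATION.get(value.strip().upper(), "no_decision")
```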
Step 4: Add Interview Notes (Executive Summary)
- Action: Webhooks by Zapier > Custom Request
- Method: POST
- URL: https://harvest.greenhouse.io/v1/candidates/{{candidate_id}}/activity_feed/notes
{
"user_id": {{system_user_id}},
"body": "## AI-Generated Interview Summary\n\n{{executive_summary}}\n\n**Top Strengths:** {{top_strengths}}\n**Top Concerns:** {{top_concerns}}\n**Red Flags:** {{red_flags}}\n**Suggested Follow-ups:** {{follow_up_questions}}\n\n---\n_This summary was auto-generated by AI from the interview transcript. Please review for accuracy._",
"visibility": "admin_only"
}
Step 5: Error Handling
- Add a Zapier Paths step after Step 3:
- Path A (success - HTTP 200/201): Continue to Step 4
- Path B (failure - HTTP 4xx/5xx): Action 1: Slack > Send Message to #msp-monitoring with message: ❌ ATS scorecard sync failed for {{candidate_name}}. Error: {{http_status}} - {{error_body}}. Manual entry required.
- Path B (failure, continued): Action 2: Email > Notify the MSP support team
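If Steps 4 and 5 are instead folded into a single Code by Zapier (Python) step, the POST and status branching can be sketched as follows. This is a sketch under assumptions: the field names are illustrative, and it assumes the Harvest API's Basic-auth scheme (API key as username, blank password).

```python
# Sketch: post the summary note (Step 4) and branch on HTTP status (Step 5).
import base64
import json
import urllib.error
import urllib.request

def build_note_payload(user_id: int, body_text: str) -> dict:
    return {"user_id": user_id, "body": body_text, "visibility": "admin_only"}

def post_candidate_note(api_key: str, candidate_id: str, payload: dict) -> dict:
    url = f"https://harvest.greenhouse.io/v1/candidates/{candidate_id}/activity_feed/notes"
    # Assumed auth scheme: HTTP Basic with the API key as username, blank password
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        method="POST",
        headers={"Content-Type": "application/json", "Authorization": f"Basic {token}"},
    )
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            return {"ok": True, "status": resp.status}  # Path A: continue
    except urllib.error.HTTPError as e:
        # Path B: surface status/body for the Slack and email alert steps
        return {"ok": False, "status": e.code, "error_body": e.read().decode()}
```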
For Bullhorn ATS (Staffing-specific)
Replace Step 3 with a POST request to the Bullhorn Note entity endpoint:
POST https://rest.bullhornstaffing.com/rest-services/{{corpToken}}/entity/Note
{
"personReference": {"id": {{candidate_id}}},
"action": "Interview AI Evaluation",
"comments": "{{executive_summary}}\n\nOverall Score: {{overall_score}}/5\nRecommendation: {{recommendation}}\n\nStrengths: {{top_strengths}}\nConcerns: {{top_concerns}}"
}
Bullhorn does not have native structured scorecards like Greenhouse; the evaluation is stored as a Note entity attached to the candidate record.
Custom Vocabulary and Jargon Enhancement
Type: skill
A pre-processing step that enhances transcription accuracy for industry-specific terminology, client-specific role names, technology stacks, and company names. This is particularly important for technical staffing firms, where domain-specific jargon is often mis-transcribed.
Implementation
For Metaview
Technology Terms (example for IT staffing)
Kubernetes, Terraform, CI/CD, DevOps, SRE, gRPC, GraphQL, PostgreSQL, Redis, Kafka, Elasticsearch, Docker, Ansible, Jenkins, GitHub Actions, TypeScript, React, Next.js, Node.js, Python, FastAPI, Django, AWS, Azure, GCP, Lambda, ECS, EKS, S3, CloudFormation, SOC 2, HIPAA, PCI DSS, GDPR, FedRAMP
Healthcare Staffing Terms (example)
RN, LPN, CNA, NP, PA-C, CRNA, BSN, MSN, DNP, EMR, EHR, Epic, Cerner, Meditech, MEDITECH, ICU, ER, OR, PACU, NICU, L&D, Med-Surg, JCAHO, CMS, OSHA, Joint Commission, BLS, ACLS, PALS, NRP, TNCC
Client-Specific Terms
- Client company name and common misspellings
- Client product names
- Client-specific role titles
- Names of hiring managers and interviewers
- Office locations and department names
For OpenAI Whisper API (if using custom pipeline)
Add a post-processing step in the Zapier workflow using a Code by Zapier (Python) step:
# post-processing transcript correction
import re
# Input: raw_transcript from transcription step
raw_transcript = input_data.get('transcript', '')
# Domain-specific corrections dictionary
corrections = {
# Technology terms
r'\bkubernetti\b': 'Kubernetes',
r'\bkubernetes\b': 'Kubernetes',
r'\bterra form\b': 'Terraform',
r'\bci cd\b': 'CI/CD',
r'\bdev ops\b': 'DevOps',
r'\bgraph ql\b': 'GraphQL',
r'\bpost gress\b': 'PostgreSQL',
r'\bpost gres\b': 'PostgreSQL',
r'\belastic search\b': 'Elasticsearch',
r'\bnext js\b': 'Next.js',
r'\bnode js\b': 'Node.js',
r'\btype script\b': 'TypeScript',
r'\breact js\b': 'React',
r'\bfast api\b': 'FastAPI',
# Healthcare terms
r'\bjako\b': 'JCAHO',
r'\bmed surge\b': 'Med-Surg',
r'\bmedditech\b': 'MEDITECH',
# Add client-specific corrections here
}
corrected_transcript = raw_transcript
for pattern, replacement in corrections.items():
    corrected_transcript = re.sub(pattern, replacement, corrected_transcript, flags=re.IGNORECASE)
return {'corrected_transcript': corrected_transcript}
Maintenance
- Review transcription accuracy monthly with recruiters
- Add new terms as the client hires for new technical domains
- Update interviewer name list when team changes occur
- Keep the corrections dictionary in a shared Google Sheet that non-technical staff can update, with a Zapier workflow that syncs it to the processing step
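The Google Sheet sync described above can be sketched as a small parsing step: the Zap fetches the sheet's published-CSV export and rebuilds the regex dictionary from it. The two-column layout (misheard term, correct term) is an assumption to be agreed with the client.

```python
# Sketch: build the corrections dictionary from a two-column CSV exported
# from the shared Google Sheet, then apply it exactly like the hand-written
# dictionary in the Code by Zapier step above.
import csv
import io
import re

def load_corrections(csv_text: str) -> dict:
    """Parse rows of (misheard term, correct term) into word-boundary regex rules."""
    corrections = {}
    for row in csv.reader(io.StringIO(csv_text)):
        if len(row) >= 2 and row[0].strip():
            corrections[r"\b" + re.escape(row[0].strip()) + r"\b"] = row[1].strip()
    return corrections

def apply_corrections(text: str, corrections: dict) -> str:
    for pattern, replacement in corrections.items():
        text = re.sub(pattern, replacement, text, flags=re.IGNORECASE)
    return text

rules = load_corrections("kubernetti,Kubernetes\nmed surge,Med-Surg\n")
print(apply_corrections("Kubernetti and med surge experience", rules))
# → Kubernetes and Med-Surg experience
```

Because terms are regex-escaped, non-technical staff can add entries containing dots or slashes (e.g. "Next.js") without breaking the pattern matching.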
Interview Quality Analytics Dashboard
Type: workflow
A Google Sheets-based analytics dashboard that aggregates data across all captured interviews, giving the client's recruiting leadership insight into interviewer consistency, evaluation distribution, hiring-funnel conversion, and system adoption.
Implementation:
Sheet 1: Raw Data (Auto-populated via Zapier)
Zapier workflow appends a row after each interview evaluation is generated.
Columns:
- A: Interview Date
- B: Candidate Name
- C: Position
- D: Interviewer
- E: Interview Type
- F: Duration (min)
- G: Technical Score
- H: Communication Score
- I: Problem Solving Score
- J: Experience Score
- K: Cultural Fit Score
- L: Motivation Score
- M: Leadership Score
- N: Overall Score
- O: Recommendation
- P: Consent Captured
- Q: ATS Sync Status
- R: Transcript Word Count
Sheet 2: Dashboard (Formulas)
System Adoption Metrics
Total Interviews Captured (MTD): =COUNTIFS(A:A, ">="&EOMONTH(TODAY(),-1)+1, A:A, "<="&TODAY())
Adoption Rate: =Total Captured / Total Scheduled (manual input or ATS API)
Consent Completion Rate: =COUNTIF(P:P, "YES") / COUNTA(P:P)
ATS Sync Success Rate: =COUNTIF(Q:Q, "SUCCESS") / COUNTA(Q:Q)
Evaluation Distribution Metrics
Avg Overall Score: =AVERAGE(N:N)
Std Dev of Scores: =STDEV(N:N)
Strong Yes Rate: =COUNTIF(O:O, "STRONG_YES") / COUNTA(O:O)
Yes Rate: =COUNTIF(O:O, "YES") / COUNTA(O:O)
No Rate: =COUNTIF(O:O, "NO") / COUNTA(O:O)
Interviewer Consistency
Built as a Pivot Table on Sheet 3 with the following configuration:
- Rows: Interviewer Name
- Values: AVG of Overall Score, STDEV of Overall Score, COUNT of interviews
- Sort by: STDEV descending (highest variance = least consistent)
Time Savings Metrics
Estimated Time Saved (hours): =COUNTA(A:A) * 0.5 (assumes 30 minutes of note-taking saved per interview)
Equivalent Recruiter Cost Saved: =Time Saved * 35 (assumes a $35 average recruiter hourly rate)
Sheet 3: Interviewer Calibration Report
Pivot table showing per-interviewer scoring patterns:
- Average score per competency
- Score distribution (how often each interviewer gives 1/2/3/4/5)
- Recommendation distribution
- Flags interviewers whose average scores are >1 standard deviation from the team mean
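The >1 standard deviation flag can also be computed outside Sheets, e.g. in a Code by Zapier step. A minimal sketch, assuming the data arrives as (interviewer, overall score) pairs:

```python
# Flags interviewers whose average overall score sits more than
# `threshold_sd` population standard deviations from the team mean,
# mirroring the calibration-report rule above.
from collections import defaultdict
from statistics import mean, pstdev

def flag_outlier_interviewers(rows, threshold_sd=1.0):
    by_interviewer = defaultdict(list)
    for interviewer, score in rows:
        by_interviewer[interviewer].append(score)
    averages = {name: mean(scores) for name, scores in by_interviewer.items()}
    team_mean = mean(averages.values())
    team_sd = pstdev(averages.values())
    if team_sd == 0:
        return []  # everyone scores identically; nothing to flag
    return [name for name, avg in averages.items()
            if abs(avg - team_mean) > threshold_sd * team_sd]

rows = [("Ana", 4.5), ("Ana", 4.8), ("Ben", 3.1), ("Ben", 3.0), ("Cy", 3.9)]
print(flag_outlier_interviewers(rows))  # ['Ana', 'Ben']
```

Flagged names can then feed the monthly calibration conversation rather than any automated action; small sample sizes per interviewer make this a discussion prompt, not a verdict.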
Zapier Automation to Populate Dashboard
After the ATS scorecard sync Zap completes, configure the following Zapier action:
- Action: Google Sheets > Append Row to 'Interview Analytics' spreadsheet
- Sheet: 'Raw Data'
- Map all fields from the parsed evaluation JSON
Monthly Report Automation
- Trigger: Schedule by Zapier > First Monday of each month
- Action 1: Google Sheets > Get range from Dashboard sheet
- Action 2: Email by Zapier > Send formatted report to client recruiting leadership
- Subject: 'Monthly Interview Intelligence Report - {{month}} {{year}}'
Testing & Validation
- AUDIO CAPTURE TEST: Place the Jabra Speak2 75 in each interview room, position two people at opposite ends of the table (6 ft apart), and conduct a 5-minute mock conversation at normal speaking volume. Record using the video conferencing app (Zoom/Teams). Play back the recording and verify both speakers are captured clearly with minimal background noise. Repeat with 4 people for panel room setup with the Poly Sync 60. Pass criteria: all speakers audible, no clipping, background noise reduced to near-silent levels.
- TRANSCRIPTION ACCURACY TEST: Run 3 real or mock interviews (15+ minutes each) through the full Metaview transcription pipeline. Compare the transcript against manual review of the recording. Calculate Word Error Rate (WER) by counting incorrect/missing/inserted words divided by total words. Pass criteria: WER below 10% for clear audio; below 15% for challenging audio (accents, cross-talk). Document any consistently mis-transcribed terms for the custom vocabulary list.
- AI SUMMARIZATION QUALITY TEST: Feed 5 diverse interview transcripts (varying positions, interview types, and candidate quality levels) through the GPT-5.4 evaluation prompt. Have the client's Head of Recruiting review each AI-generated scorecard alongside the transcript. Verify: (1) ratings are reasonably calibrated (no all-5s or all-1s without justification), (2) evidence quotes are accurate and exist in the transcript, (3) the executive summary is factual and not hallucinated, (4) the recommendation aligns with the evidence. Pass criteria: 4 out of 5 evaluations rated 'acceptable' or better by the recruiting lead.
- ATS INTEGRATION TEST: Trigger the full pipeline for a test candidate in the ATS staging/sandbox environment. Verify: (1) the scorecard appears on the correct candidate record, (2) all 7 competency ratings are populated, (3) the recommendation field is correctly mapped, (4) the executive summary note is attached, (5) the data is visible to the correct user roles. Pass criteria: 100% field mapping accuracy, data appears within 5 minutes of interview completion.
- CONSENT WORKFLOW TEST: Create a test interview in the ATS. Verify: (1) the consent form email is sent to the candidate within 5 minutes, (2) the form is prefilled with correct candidate/position data, (3) form submission updates the consent tracking sheet, (4) Slack notification fires upon consent completion, (5) 24-hour reminder fires if consent is pending, (6) the 1-hour alert fires and warns the recruiter to not enable recording. Pass criteria: all 6 checkpoints pass with correct data and timing.
- DATA DELETION TEST: Submit a test data deletion request through the deletion request form. Verify: (1) the request is logged in the consent tracking sheet, (2) Metaview data for the test candidate is queued for deletion, (3) ATS interview notes are flagged/removed, (4) confirmation email is sent to the requestor, (5) deletion is completed within the specified timeframe (target: 72 hours, regulatory max: 30 days for GDPR). Pass criteria: all data stores purged and confirmation sent.
- END-TO-END LATENCY TEST: Conduct a complete mock interview from start to finish and measure the time from interview end to: (1) transcript availability in Metaview, (2) AI evaluation scorecard generation, (3) scorecard appearing in the ATS, (4) Slack notification to hiring manager. Pass criteria: transcript available within 10 minutes, scorecard in ATS within 20 minutes of interview end, Slack notification within 25 minutes.
- MULTI-SPEAKER DIARIZATION TEST: Conduct a 3-person panel interview (2 interviewers + 1 candidate) and verify the transcript correctly labels each speaker. Check that the AI evaluation prompt correctly identifies and evaluates only the candidate's responses, not the interviewers' questions. Pass criteria: speaker labeling accuracy above 90%, AI evaluation does not attribute interviewer statements to the candidate.
- ZAPIER WORKFLOW FAILURE RECOVERY TEST: Deliberately trigger a failure in each Zapier workflow step (e.g., invalid API key, ATS timeout, malformed JSON). Verify: (1) error alerts fire to the #msp-monitoring Slack channel, (2) email notification reaches the MSP support queue, (3) the workflow gracefully fails without corrupting data, (4) manual retry succeeds after fixing the issue. Pass criteria: all failure modes produce alerts within 5 minutes and no data corruption occurs.
- CONCURRENT INTERVIEW STRESS TEST: Schedule 3 interviews simultaneously across different rooms/video platforms. Verify all 3 are captured, transcribed, evaluated, and synced to the ATS without conflicts or data cross-contamination. Pass criteria: all 3 interviews produce correct, independent scorecards on the correct candidate records.
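The WER calculation in the transcription accuracy test above can be sketched as a word-level Levenshtein alignment (substitutions + deletions + insertions over reference word count). This is a minimal illustration; a production test might use a dedicated library instead.

```python
# Word Error Rate via word-level edit distance, as used in the
# transcription accuracy test's pass criteria.
def wer(reference: str, hypothesis: str) -> float:
    ref = reference.lower().split()
    hyp = hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

ref = "the candidate has five years of kubernetes experience"
hyp = "the candidate has five years of kubernetti experience"
print(f"WER: {wer(ref, hyp):.1%}")  # 1 substitution over 8 words = 12.5%
```

Run the reference transcript (manual review) against the Metaview output and compare the result to the 10%/15% thresholds; terms that recur in the substitution errors go onto the custom vocabulary list.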
Client Handoff
The client handoff should be conducted as a formal 3-hour session with the following stakeholders: Head of Recruiting (decision-maker), Recruiting Operations Manager (day-to-day admin), 2-3 lead recruiters (power users), and optionally the client's IT contact and legal/compliance officer.
Training Topics to Cover (with hands-on demos)
Documentation to Leave Behind
Success Criteria to Review Together
Maintenance
Monthly Maintenance Tasks (MSP Responsibility)
Quarterly Maintenance Tasks
SLA Considerations
- Platform availability: Metaview SLA (vendor-managed); MSP monitors for outages
- Workflow uptime: MSP targets 99% Zapier workflow success rate; alerts within 15 minutes of failure
- Response time: Tier 1 issues (can't start recording) — 1 hour response; Tier 2 issues (ATS sync failure) — 4 hour response; Tier 3 issues (prompt engineering, integration rebuild) — 1 business day
- Escalation path: Recruiter → Client Recruiting Ops Manager → MSP Helpdesk → MSP Solutions Architect → Metaview/vendor support
Model/Prompt Retraining Triggers
- Client changes evaluation criteria or competency framework
- Client adds new interview stages or types
- WER exceeds 15% consistently for 2+ months
- Recruiter satisfaction with AI summaries drops below 80%
- OpenAI deprecates or updates the GPT-5.4 model
- Client expands into new hiring domains (e.g., adding healthcare recruiting to an IT staffing firm)
Alternatives
BrightHire as Primary Platform (instead of Metaview)
Use BrightHire as the interview intelligence platform instead of Metaview. BrightHire offers deeper native ATS integrations (especially Greenhouse and Lever), 1-click scorecard completion, built-in bias audit compliance, and enterprise-grade features. It positions itself as the leader in interview intelligence for mid-market to enterprise staffing firms.
Custom API Pipeline (Deepgram + GPT-5.4 + ATS API)
Build a fully custom transcription and evaluation pipeline without a purpose-built interview intelligence platform. Use Deepgram Nova-3 API ($0.0043/min) for transcription with real-time speaker diarization, GPT-5.4 for summarization, and direct ATS API calls for scorecard submission. The MSP owns and operates the entire pipeline, typically deployed as a set of serverless functions (AWS Lambda or Azure Functions) triggered by webhooks.
Otter.ai Business + Manual Scorecard Workflow
Use Otter.ai Business ($20/seat/month) for transcription and AI-generated meeting summaries, with recruiters manually copying relevant sections into ATS scorecards. No automated ATS integration or custom AI evaluation — purely transcription and basic summarization as a productivity tool.
Self-Hosted Whisper + Local LLM (Air-Gapped / Data Sovereignty)
Deploy the entire solution on-premises or in a private cloud using open-source faster-whisper for transcription and an open-source LLM (Llama 3, Mistral, or Phi-3) for summarization. No data leaves the client's infrastructure. Requires an NVIDIA GPU server (T4 or RTX A2000+) and DevOps expertise.
Recommendation: Only for clients with regulatory requirements mandating on-premises data processing (e.g., FedRAMP, defense, certain healthcare). For all other staffing firms, the SaaS approach is strongly preferred due to lower complexity and faster time-to-value.
Criteria Corp Interview Intelligence (IIQ)
Use Criteria Corp's newly launched Interview Intelligence (IIQ) platform, which provides AI-assisted question creation, automated interview scoring based on I/O psychology science, and integration with Criteria's broader assessment suite. The scoring is based solely on transcript content (not appearance or tone), designed to reduce bias.