
Implementation Guide: Analyze turnover patterns, engagement survey data, and identify retention risk factors

Step-by-step implementation guide for deploying AI to analyze turnover patterns, engagement survey data, and identify retention risk factors for HR & Staffing clients.

Hardware Procurement

Business Workstation for HR Analytics Users

Dell OptiPlex 7420 Small Form Factor (i5-14500, 16GB DDR5, 512GB NVMe SSD), Qty: 3

$950 per unit MSP cost / $1,200 suggested resale

Dedicated workstations for HR team members who will access Power BI dashboards, Lattice analytics, and run report exports. 16GB RAM and SSD ensure smooth rendering of complex Power BI visuals with multiple data sources. Quantity of 3 covers HR Director, HR Manager, and HRBP/Analyst roles.

Unified Threat Management Firewall

Fortinet FortiGate 40F (FG-40F), Qty: 1

$450 MSP cost / $650 suggested resale

Network perimeter security appliance to enforce TLS inspection, DNS filtering, and IPS on all traffic carrying employee PII between the client network and SaaS platforms. Required for compliance with data protection best practices when transmitting sensitive HR data to cloud services. If client already has a modern UTM appliance, this line item can be removed.

Network Switch (if office network upgrade needed)

Cisco Catalyst 1200-24T-4G (C1200-24T-4G), Qty: 1

$350 MSP cost / $500 suggested resale

Managed switch to create a dedicated VLAN for HR department workstations, isolating PII-carrying traffic from general office network. Supports 802.1Q VLAN tagging and basic QoS. Only required if existing network infrastructure lacks VLAN capability.

Software Procurement

Lattice Performance + Engagement + Analytics

Lattice Performance + Engagement + Analytics, Qty: per-seat SaaS

$11/user/month base + $6/user/month Engagement add-on + $4/user/month Analytics add-on = ~$21/user/month; $4,000/year minimum. For 300 employees: ~$75,600/year

Primary people analytics platform providing engagement pulse surveys, performance review aggregation, goal tracking, and AI-powered attrition risk modeling. The Analytics module surfaces turnover patterns by department, manager, tenure, and demographics. The new flight risk detection feature (launching summer 2025) analyzes team stability, sentiment, feedback patterns, and manager changes to flag individual-level attrition risk.

Microsoft Power BI Pro

Microsoft, per-seat SaaS (via CSP), Qty: 5 licenses for HR leadership

$14/user/month per license via CSP. 5 licenses for HR leadership = $70/month ($840/year). MSP margin: 12-18% through CSP indirect reseller program.

Custom HR analytics dashboards that pull data from Lattice APIs, HRIS exports, and payroll data to create executive-ready turnover visualizations, retention risk heatmaps, compensation-to-turnover correlation reports, and engagement trend analyses that go beyond Lattice's native reporting.

Finch Unified HRIS/Payroll API

Finch Starter Plan, Qty: 1-2 connections

$65/connection/month; $65–$130/month ($780–$1,560/year)

Unified API middleware that connects to 220+ HRIS and payroll systems through a single standardized data model. Eliminates the need to build and maintain individual integrations with BambooHR, ADP, Gusto, Paylocity, etc. Provides normalized employee directory, employment history, compensation, and org structure data that feeds into Power BI and custom analytics pipelines.

Microsoft 365 Business Premium (or E3/E5 if applicable)

Microsoft 365 Business Premium, Qty: per-seat

$22/user/month (Business Premium) via CSP. Assumed already in place for most clients. If net-new: 300 users = $6,600/month.

Foundation for Azure Active Directory / Entra ID (SSO and SCIM provisioning for Lattice), Microsoft Teams (survey notification delivery), Outlook (survey email distribution), SharePoint (documentation repository), and Power Automate (workflow automation for retention alerts). Required prerequisite — most clients will already have this.

Azure SQL Database (Serverless, General Purpose)

Microsoft, Serverless General Purpose

~$50-$150/month depending on usage (S1 tier for HR data warehouse: ~$30/month + storage). For a 300-person company, HR data is modest — expect $75/month average.

Cloud-hosted SQL database serving as the centralized HR data warehouse. Aggregates data from Finch API (HRIS/payroll), Lattice API (engagement/performance), and any additional CSV imports (exit interviews, historical termination data). Power BI connects directly to this warehouse for dashboard queries. Azure SQL is chosen for native integration with Power BI and M365 ecosystem.

Microsoft Power Automate

Microsoft, included with M365

$0 additional (included in M365 license). Premium connectors may require Power Automate Premium at $15/user/month if needed.

Workflow automation for retention alert escalation: when Lattice flags an employee as high flight risk or when a Power BI anomaly detection triggers, Power Automate sends notifications to the relevant HR manager and HRBP via Teams, email, or creates a task in Planner/To-Do for follow-up action planning.

Prerequisites

  • Active Microsoft 365 Business Premium, E3, or E5 tenant with Azure AD / Entra ID configured for the organization — required for SSO, SCIM provisioning, Power BI, and Power Automate
  • Existing HRIS system with current, accurate employee records (e.g., BambooHR, ADP Workforce Now, Gusto, Paylocity, UKG, Rippling). Must contain: employee ID, hire date, termination date and reason (for separated employees), department, manager, job title, location, and compensation data
  • Minimum 12 months of historical termination/separation data available in the HRIS or exportable as CSV. 24+ months is strongly preferred for meaningful predictive model training
  • Corporate email system (M365 Exchange Online or Google Workspace) capable of sending engagement survey invitations to all employees
  • Global Admin access to the M365 tenant for configuring SSO, SCIM, and Power BI workspace provisioning
  • HRIS Administrator credentials with API access enabled (or ability to generate API keys) for the Finch integration
  • Executive sponsor identified (VP HR or CHRO) who will champion the analytics initiative and ensure manager adoption
  • HR point of contact identified who will serve as the day-to-day platform administrator for Lattice and survey management
  • Reliable internet connectivity (25+ Mbps symmetric minimum) at all office locations where HR team accesses dashboards
  • Employee communication plan drafted and approved by legal/HR leadership — employees must be notified about data collection and analytics usage before engagement surveys launch
  • Budget approval for software licensing ($80,000–$95,000/year total for a 300-person org) and MSP implementation services ($10,000–$20,000 one-time + $1,500–$2,500/month ongoing)
  • If operating in NYC: awareness of Local Law 144 compliance requirements for Automated Employment Decision Tools — bias audit may be required if retention risk scores are used in employment decisions
  • Data Protection Impact Assessment (DPIA) completed or scheduled — required under GDPR for EU employees, best practice for all deployments handling employee PII at scale
  • Network infrastructure supports VLAN segmentation for HR workstations (or budget allocated for network switch upgrade)

Installation Steps

...

Step 1: Environment Assessment and Data Quality Audit

Before any software is deployed, conduct a thorough assessment of the client's existing HR technology stack, data quality, and readiness. This step prevents the most common failure mode in HR analytics: garbage data producing garbage insights. Connect to the client's HRIS (via admin login) and audit employee records for completeness. Generate a data quality scorecard covering: percentage of records with complete hire dates, department assignments, manager assignments, termination reasons (for separated employees), and compensation data. Identify gaps that must be remediated before analytics deployment.

bash
# Export employee data from HRIS for quality assessment (example for BambooHR API)
curl -u 'API_KEY:x' 'https://api.bamboohr.com/api/gateway.php/COMPANY_DOMAIN/v1/reports/custom?format=csv&fields=id,firstName,lastName,hireDate,terminationDate,department,division,jobTitle,supervisor,payRate,employmentStatus' -o hr_data_export.csv

# Quick data quality check using Python (run on MSP technician laptop)
python3 -c "
import pandas as pd
df = pd.read_csv('hr_data_export.csv')
print('Total Records:', len(df))
print('\nNull counts per column:')
print(df.isnull().sum())
print('\nCompleteness percentage:')
print((1 - df.isnull().sum() / len(df)) * 100)
print('\nTermination records with missing reason:')
termed = df[df['terminationDate'].notna()]
# terminationReason is not in the export above; count it only if present,
# otherwise add it to the report fields (if the HRIS exposes it)
missing = termed['terminationReason'].isnull().sum() if 'terminationReason' in df.columns else 'n/a (field not exported)'
print(f'{len(termed)} terminations, {missing} missing reason')
"
Note

Target minimum 95% completeness on core fields (hire date, department, manager, job title) before proceeding. If completeness is below 80%, allocate 1-2 weeks for client HR team to remediate data before continuing. Document all data quality findings in a formal Data Readiness Report for the client. This report becomes the baseline against which data quality improvements are measured.
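
The completeness thresholds above can be codified so the audit is repeatable at each review. A minimal sketch, assuming the BambooHR export and field names from this step (adjust the field list for other HRIS exports):

python
# Sketch: pass/warn/fail Data Readiness scorecard against the 95%/80%
# thresholds above. Field names assume the BambooHR export from this step.
import pandas as pd

CORE_FIELDS = ['hireDate', 'department', 'supervisor', 'jobTitle']

df = pd.read_csv('hr_data_export.csv')
scorecard = {
    f: round(df[f].notna().mean() * 100, 1) if f in df.columns else 0.0
    for f in CORE_FIELDS
}
print('Completeness by core field:', scorecard)

worst = min(scorecard.values())
if worst >= 95:
    print('PASS: proceed with deployment')
elif worst >= 80:
    print('WARN: remediate flagged fields, then re-run the audit')
else:
    print('FAIL: allocate 1-2 weeks for client data remediation first')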

Step 2: Provision Azure SQL Database for HR Data Warehouse

Create a centralized Azure SQL Database that will serve as the HR data warehouse, aggregating data from multiple sources (HRIS via Finch, Lattice engagement/performance data, and historical CSV imports). This warehouse is what Power BI will query for custom dashboards. Use the Azure Portal or CLI to provision a serverless General Purpose database in the client's Azure subscription (or the MSP's Azure tenant if managing on behalf of client).

bash
# Login to Azure CLI
az login

# Create resource group for HR Analytics
az group create --name rg-hr-analytics --location eastus

# Create Azure SQL Server (server names must be lowercase; substitute a lowercase client slug for CLIENT)
az sql server create \
  --name sql-hranalytics-CLIENT \
  --resource-group rg-hr-analytics \
  --location eastus \
  --admin-user hradmin \
  --admin-password 'GENERATE_STRONG_PASSWORD_HERE'

# Create serverless database (auto-pause after 60 min idle to save cost)
az sql db create \
  --resource-group rg-hr-analytics \
  --server sql-hranalytics-CLIENT \
  --name db-people-analytics \
  --edition GeneralPurpose \
  --family Gen5 \
  --capacity 2 \
  --compute-model Serverless \
  --auto-pause-delay 60 \
  --min-capacity 0.5 \
  --max-size 32GB

# Configure firewall to allow Azure services and MSP IP
az sql server firewall-rule create \
  --resource-group rg-hr-analytics \
  --server sql-hranalytics-CLIENT \
  --name AllowAzureServices \
  --start-ip-address 0.0.0.0 \
  --end-ip-address 0.0.0.0

az sql server firewall-rule create \
  --resource-group rg-hr-analytics \
  --server sql-hranalytics-CLIENT \
  --name AllowMSPOffice \
  --start-ip-address MSP_PUBLIC_IP \
  --end-ip-address MSP_PUBLIC_IP

# Enable Azure AD authentication
az sql server ad-admin create \
  --resource-group rg-hr-analytics \
  --server sql-hranalytics-CLIENT \
  --display-name 'HR Analytics Admin' \
  --object-id AZURE_AD_GROUP_OBJECT_ID
Note

Use Azure serverless tier to minimize costs — HR analytics queries are bursty (heavy during report generation, idle otherwise). Auto-pause after 60 minutes saves significant cost for SMB clients. Estimated monthly cost: $50-$150 depending on query volume. Store the admin password in the MSP's password vault (e.g., IT Glue, Hudu) immediately. Enable Transparent Data Encryption (TDE) which is on by default for Azure SQL. Consider enabling auditing and Advanced Threat Protection for compliance.

Step 3: Create HR Data Warehouse Schema

Connect to the Azure SQL Database and create the normalized schema for HR analytics. This schema consolidates employee master data, termination history, engagement survey responses, performance ratings, and compensation history into a star-schema optimized for Power BI DirectQuery and Import mode.

Azure SQL DDL
sql
-- HR star schema including dimension tables, fact tables, and date dimension
-- population

-- Connect to Azure SQL using sqlcmd or Azure Data Studio,
-- then execute the following DDL:

CREATE SCHEMA hr;
GO

-- Dimension: Employees (SCD Type 2 for historical tracking)
CREATE TABLE hr.dim_employees (
    employee_key INT IDENTITY(1,1) PRIMARY KEY,
    employee_id NVARCHAR(50) NOT NULL,
    first_name NVARCHAR(100),
    last_name NVARCHAR(100),
    email NVARCHAR(255),
    hire_date DATE NOT NULL,
    termination_date DATE NULL,
    termination_reason NVARCHAR(255) NULL,
    termination_type NVARCHAR(50) NULL, -- Voluntary, Involuntary, Retirement
    department NVARCHAR(100),
    division NVARCHAR(100),
    job_title NVARCHAR(200),
    job_level NVARCHAR(50),
    manager_employee_id NVARCHAR(50),
    location NVARCHAR(200),
    employment_type NVARCHAR(50), -- Full-time, Part-time, Contract
    tenure_months AS DATEDIFF(MONTH, hire_date, ISNULL(termination_date, GETDATE())),
    is_active BIT DEFAULT 1,
    effective_date DATE DEFAULT GETDATE(),
    end_date DATE NULL,
    is_current BIT DEFAULT 1,
    created_at DATETIME2 DEFAULT GETDATE(),
    updated_at DATETIME2 DEFAULT GETDATE()
);
GO

-- Dimension: Date (for time-series analysis)
CREATE TABLE hr.dim_date (
    date_key INT PRIMARY KEY, -- YYYYMMDD format
    full_date DATE NOT NULL,
    year INT,
    quarter INT,
    month INT,
    month_name NVARCHAR(20),
    week_of_year INT,
    day_of_week INT,
    is_weekend BIT,
    fiscal_year INT,
    fiscal_quarter INT
);
GO

-- Fact: Turnover Events
CREATE TABLE hr.fact_turnover (
    turnover_key INT IDENTITY(1,1) PRIMARY KEY,
    employee_key INT REFERENCES hr.dim_employees(employee_key),
    termination_date_key INT REFERENCES hr.dim_date(date_key),
    hire_date_key INT REFERENCES hr.dim_date(date_key),
    department NVARCHAR(100),
    manager_employee_id NVARCHAR(50),
    location NVARCHAR(200),
    tenure_at_termination_months INT,
    termination_reason NVARCHAR(255),
    termination_type NVARCHAR(50),
    was_regrettable BIT DEFAULT 0,
    last_performance_rating NVARCHAR(50),
    last_engagement_score DECIMAL(5,2),
    salary_at_termination DECIMAL(12,2),
    created_at DATETIME2 DEFAULT GETDATE()
);
GO

-- Fact: Engagement Survey Responses
CREATE TABLE hr.fact_engagement (
    response_key INT IDENTITY(1,1) PRIMARY KEY,
    employee_key INT REFERENCES hr.dim_employees(employee_key),
    survey_date_key INT REFERENCES hr.dim_date(date_key),
    survey_name NVARCHAR(200),
    overall_engagement_score DECIMAL(5,2),
    manager_effectiveness_score DECIMAL(5,2),
    career_development_score DECIMAL(5,2),
    compensation_satisfaction_score DECIMAL(5,2),
    work_life_balance_score DECIMAL(5,2),
    belonging_inclusion_score DECIMAL(5,2),
    recommendation_score DECIMAL(5,2), -- eNPS
    response_comments_sentiment NVARCHAR(20), -- Positive, Neutral, Negative
    created_at DATETIME2 DEFAULT GETDATE()
);
GO

-- Fact: Performance Reviews
CREATE TABLE hr.fact_performance (
    review_key INT IDENTITY(1,1) PRIMARY KEY,
    employee_key INT REFERENCES hr.dim_employees(employee_key),
    review_date_key INT REFERENCES hr.dim_date(date_key),
    review_period NVARCHAR(50),
    overall_rating NVARCHAR(50),
    numeric_rating DECIMAL(3,1),
    goals_met_percentage DECIMAL(5,2),
    manager_employee_id NVARCHAR(50),
    created_at DATETIME2 DEFAULT GETDATE()
);
GO

-- Fact: Compensation History
CREATE TABLE hr.fact_compensation (
    comp_key INT IDENTITY(1,1) PRIMARY KEY,
    employee_key INT REFERENCES hr.dim_employees(employee_key),
    effective_date_key INT REFERENCES hr.dim_date(date_key),
    base_salary DECIMAL(12,2),
    bonus_target_pct DECIMAL(5,2),
    total_comp DECIMAL(12,2),
    comp_ratio DECIMAL(5,2), -- ratio to market midpoint
    last_raise_pct DECIMAL(5,2),
    months_since_last_raise INT,
    created_at DATETIME2 DEFAULT GETDATE()
);
GO

-- Computed: Retention Risk Score (updated by ETL pipeline)
CREATE TABLE hr.retention_risk (
    risk_key INT IDENTITY(1,1) PRIMARY KEY,
    employee_key INT REFERENCES hr.dim_employees(employee_key),
    calculated_date DATE DEFAULT GETDATE(),
    risk_score DECIMAL(5,2), -- 0-100 scale
    risk_tier NVARCHAR(20), -- Low, Medium, High, Critical
    top_risk_factor_1 NVARCHAR(200),
    top_risk_factor_2 NVARCHAR(200),
    top_risk_factor_3 NVARCHAR(200),
    lattice_flight_risk NVARCHAR(20), -- from Lattice AI model
    model_version NVARCHAR(50),
    created_at DATETIME2 DEFAULT GETDATE()
);
GO

-- Populate date dimension (2020-2030)
DECLARE @StartDate DATE = '2020-01-01';
DECLARE @EndDate DATE = '2030-12-31';
WITH DateCTE AS (
    SELECT @StartDate AS dt
    UNION ALL
    SELECT DATEADD(DAY, 1, dt) FROM DateCTE WHERE dt < @EndDate
)
INSERT INTO hr.dim_date (date_key, full_date, year, quarter, month, month_name, week_of_year, day_of_week, is_weekend, fiscal_year, fiscal_quarter)
SELECT
    CAST(FORMAT(dt, 'yyyyMMdd') AS INT),
    dt,
    YEAR(dt), DATEPART(QUARTER, dt), MONTH(dt), DATENAME(MONTH, dt),
    DATEPART(WEEK, dt), DATEPART(WEEKDAY, dt),
    CASE WHEN DATEPART(WEEKDAY, dt) IN (1,7) THEN 1 ELSE 0 END,
    CASE WHEN MONTH(dt) >= 7 THEN YEAR(dt) + 1 ELSE YEAR(dt) END,
    CASE WHEN MONTH(dt) BETWEEN 7 AND 9 THEN 1 WHEN MONTH(dt) BETWEEN 10 AND 12 THEN 2 WHEN MONTH(dt) BETWEEN 1 AND 3 THEN 3 ELSE 4 END
FROM DateCTE OPTION (MAXRECURSION 5000);
GO
Note

This star schema is optimized for Power BI's Vertipaq engine. Use Import mode for historical data and DirectQuery only for real-time risk scores if needed. The dim_date table enables time-intelligence DAX calculations (YoY turnover comparison, rolling averages). Adjust fiscal year logic (currently July start) to match client's fiscal calendar. The retention_risk table will be populated by the custom Python scoring pipeline built in a later step.
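
Before connecting Power BI, the schema can be sanity-checked with a simple star-schema query from any workstation with database access. A sketch using pyodbc; the driver name and Azure AD interactive authentication are assumptions, so match the client's tooling:

python
# Sketch: monthly voluntary turnover counts by department from the star schema.
# Driver and auth settings are assumptions; adjust to the environment.
import pyodbc

SQL = """
SELECT d.year, d.month, t.department, COUNT(*) AS exits
FROM hr.fact_turnover t
JOIN hr.dim_date d ON t.termination_date_key = d.date_key
WHERE t.termination_type = 'Voluntary'
GROUP BY d.year, d.month, t.department
ORDER BY d.year, d.month, t.department;
"""

conn = pyodbc.connect(
    'DRIVER={ODBC Driver 18 for SQL Server};'
    'SERVER=sql-hranalytics-CLIENT.database.windows.net;'
    'DATABASE=db-people-analytics;'
    'Authentication=ActiveDirectoryInteractive;'
)
for row in conn.cursor().execute(SQL):
    print(row.year, row.month, row.department, row.exits)
conn.close()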

Step 4: Configure Finch API Integration for HRIS/Payroll Data Ingestion

Set up Finch as the unified API middleware to connect the client's HRIS and payroll systems to the Azure SQL data warehouse. Finch normalizes data from 220+ HR systems into a standard schema, eliminating the need for custom API integrations per vendor. Register for a Finch developer account, create the application, and configure the OAuth connection flow for the client's HRIS.

1. Register at https://dashboard.tryfinch.com and create a new application.
2. Note your client_id and client_secret from the Finch dashboard.
3. Generate the Finch Connect URL for the client's HR admin to authorize: https://connect.tryfinch.com/authorize?client_id=YOUR_CLIENT_ID&products=directory,individual,employment,payment,pay-statement&redirect_uri=https://your-msp-domain.com/finch/callback&sandbox=false
Exchange authorization code for Finch access token
bash
# 4. After client HR admin authorizes, exchange the auth code for an access token:
curl -X POST https://api.tryfinch.com/auth/token \
  -H 'Content-Type: application/json' \
  -d '{
    "client_id": "YOUR_CLIENT_ID",
    "client_secret": "YOUR_CLIENT_SECRET",
    "code": "AUTH_CODE_FROM_REDIRECT",
    "redirect_uri": "https://your-msp-domain.com/finch/callback"
  }'
Pull company directory to verify connection
bash
# 5. Test the connection by pulling the company directory:
curl https://api.tryfinch.com/employer/directory \
  -H 'Authorization: Bearer ACCESS_TOKEN' \
  -H 'Finch-API-Version: 2020-09-17'
Pull individual employee details
bash
# 6. Pull individual employee details:
curl -X POST https://api.tryfinch.com/employer/individual \
  -H 'Authorization: Bearer ACCESS_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'Finch-API-Version: 2020-09-17' \
  -d '{"requests": [{"individual_id": "EMPLOYEE_ID_FROM_DIRECTORY"}]}'
Pull employment details including hire date, title, department, and manager
bash
# 7. Pull employment details (hire date, title, department, manager):
curl -X POST https://api.tryfinch.com/employer/employment \
  -H 'Authorization: Bearer ACCESS_TOKEN' \
  -H 'Content-Type: application/json' \
  -H 'Finch-API-Version: 2020-09-17' \
  -d '{"requests": [{"individual_id": "EMPLOYEE_ID_FROM_DIRECTORY"}]}'
Note

Store the Finch access token securely in Azure Key Vault — not in code or config files. Finch tokens do not expire but can be revoked. The client's HR admin must authorize the connection through the Finch Connect UI; the MSP cannot bypass this. Finch Starter plan at $65/connection/month covers one HRIS connection. If the client uses separate HRIS and payroll providers (e.g., BambooHR for HRIS + ADP for payroll), you'll need 2 connections ($130/month). Test data mappings thoroughly — field names vary by underlying HRIS provider even through Finch's normalized API.
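
Since the token should never land in code or config files, it can be written to Key Vault immediately after the token exchange. A minimal sketch, assuming the Key Vault provisioned in Step 7 (kv-hr-CLIENT) and credentials available via `az login` or a managed identity:

python
# Sketch: exchange the Finch auth code and store the token in Key Vault in
# one step so the raw token never touches disk. Vault name assumes Step 7.
import requests
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

resp = requests.post('https://api.tryfinch.com/auth/token', json={
    'client_id': 'YOUR_CLIENT_ID',
    'client_secret': 'YOUR_CLIENT_SECRET',
    'code': 'AUTH_CODE_FROM_REDIRECT',
    'redirect_uri': 'https://your-msp-domain.com/finch/callback',
})
resp.raise_for_status()
token = resp.json()['access_token']

kv = SecretClient(vault_url='https://kv-hr-CLIENT.vault.azure.net/',
                  credential=DefaultAzureCredential())
kv.set_secret('FinchAccessToken', token)  # rotate quarterly per best practice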

Step 5: Deploy Lattice Platform and Configure SSO/SCIM

Provision the Lattice tenant for the client organization. Configure SAML-based Single Sign-On via Azure AD / Entra ID and enable SCIM automatic user provisioning so employee accounts are created and deactivated automatically based on HRIS data. Purchase the Performance, Engagement, and Analytics modules.

Lattice tenant provisioning, Azure AD SAML SSO configuration, SCIM attribute mapping, and admin panel setup
bash
# 1. Work with Lattice sales to provision tenant at https://lattice.com
# Request modules: Performance + Engagement + Analytics (+ Grow if budget allows)
# Lattice will provide: tenant URL (e.g., clientname.latticehq.com), admin credentials

# 2. Configure Azure AD SSO for Lattice:
# In Azure Portal > Entra ID > Enterprise Applications > New Application
# Search 'Lattice' in the gallery and add it
# Under Single Sign-On, select SAML
# Configure:
#   Identifier (Entity ID): https://clientname.latticehq.com
#   Reply URL (ACS): https://clientname.latticehq.com/auth/saml/callback
#   Sign-On URL: https://clientname.latticehq.com
#   Download the Federation Metadata XML and upload to Lattice admin panel

# 3. Configure SCIM Provisioning in Azure AD:
# In the Lattice Enterprise App > Provisioning
# Set Provisioning Mode to 'Automatic'
# Admin Credentials:
#   Tenant URL: https://clientname.latticehq.com/scim/v2
#   Secret Token: (generate from Lattice Admin > Integrations > SCIM)
# Test Connection, then configure attribute mappings:
#   Map: userPrincipalName -> userName
#   Map: displayName -> displayName
#   Map: givenName -> name.givenName
#   Map: surname -> name.familyName
#   Map: mail -> emails[type eq 'work'].value
#   Map: department -> urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department
#   Map: manager -> urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager

# 4. Set provisioning scope to 'Sync only assigned users and groups'
# Create an Azure AD group 'Lattice-Users' and assign all employees
# Start provisioning and monitor the provisioning logs

# 5. In Lattice Admin Panel:
#   - Set company fiscal year and review cycles
#   - Configure department/team hierarchy matching HRIS structure
#   - Enable Analytics module under Admin > Modules
#   - Enable Engagement module and configure survey templates
Note

SCIM provisioning typically takes 20-40 minutes for initial sync of 300 employees. Monitor the Azure AD provisioning logs for any mapping errors — common issues include department name mismatches and manager circular references. The Lattice admin panel allows configuration of the engagement survey cadence, review cycles, and analytics visibility permissions. Ensure that only HR leadership and designated HRBPs have access to individual-level analytics; managers should only see their direct reports. Lattice's minimum annual commitment is $4,000/year; for 300 employees at $21/user/month (Performance + Engagement + Analytics), expect ~$75,600/year.
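
Manager circular references, one of the common sync failures named above, can be caught before provisioning by walking the reporting chain in the HRIS export. A quick sketch; the employee_id and manager_employee_id column names are assumptions, so map them to the actual export:

python
# Sketch: detect circular manager chains before SCIM provisioning.
# Column names are assumptions; map them to the client's actual export.
import pandas as pd

df = pd.read_csv('hr_data_export.csv')
mgr = dict(zip(df['employee_id'], df['manager_employee_id']))

def in_cycle(emp_id) -> bool:
    """Walk up the reporting chain; revisiting a node means a cycle."""
    seen, cur = set(), emp_id
    while cur in mgr and pd.notna(mgr[cur]):
        if cur in seen:
            return True
        seen.add(cur)
        cur = mgr[cur]
    return False

cycles = sorted(e for e in mgr if in_cycle(e))
print('Employees in manager cycles:', cycles or 'none')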

Step 6: Import Historical Data and Configure Turnover Dashboards in Lattice

Load historical termination data, past performance review ratings, and any existing engagement survey results into Lattice to establish baseline metrics. Configure Lattice's built-in turnover and engagement analytics dashboards. This historical data is critical — without it, the AI attrition risk models have no training data.

1. Prepare the historical termination data CSV with these columns: employee_id, termination_date, termination_type (Voluntary/Involuntary), termination_reason, last_department, last_manager, tenure_months.
2. Prepare the historical performance data CSV: employee_id, review_period, overall_rating, reviewer_employee_id.
3. Upload via Lattice Admin > Data > Import, following Lattice's CSV import wizard for each data type and mapping columns to Lattice fields.
4. In Lattice Analytics (Analytics > Dashboards), create the following views:
   a. Turnover Dashboard: overall turnover rate, voluntary vs involuntary, by department, by manager, by tenure band, by quarter
   b. Engagement Dashboard: overall engagement score, by dimension, by department, trend over time
   c. Retention Risk Dashboard: flight risk distribution, risk by department, risk factor breakdown (once the Lattice AI model activates)
   d. Manager Effectiveness: engagement scores, turnover rates, and performance distribution by manager
5. Configure automated reporting (Lattice Admin > Notifications > Scheduled Reports): weekly engagement pulse summary to the HR Director, monthly turnover summary to the CHRO and HR Director, and quarterly full analytics report to the executive team.
Note

Lattice requires minimum 6 months of employee data before its AI flight risk model can generate predictions — 12+ months is recommended for reliable results. If the client has less than 12 months of data in their current HRIS, work with them to export from their previous HRIS or compile from payroll records. Historical data quality directly impacts the accuracy of all downstream analytics. Common import issues: date format mismatches (MM/DD/YYYY vs YYYY-MM-DD), employee ID format differences between systems, and terminated employee records missing from active HRIS exports.
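
The import issues listed above (date formats, ID formats) are worth normalizing in one pass before opening the Lattice wizard. A sketch against the termination CSV template from step 1 of this section; the 6-digit zero-padding is an assumption, so match the IDs in the current HRIS:

python
# Sketch: normalize dates and IDs in the historical termination CSV before
# the Lattice import. Columns follow the template in step 1 above.
import pandas as pd

df = pd.read_csv('historical_terminations.csv')

# Coerce mixed MM/DD/YYYY / YYYY-MM-DD values to ISO; failures become NaN
df['termination_date'] = pd.to_datetime(
    df['termination_date'], errors='coerce').dt.strftime('%Y-%m-%d')

# Standardize employee IDs; the 6-digit padding is an assumption
df['employee_id'] = df['employee_id'].astype(str).str.strip().str.zfill(6)

bad = df['termination_date'].isna().sum()
print(f'{bad} rows with unparseable termination_date; fix before import')
df.to_csv('historical_terminations_clean.csv', index=False)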

Step 7: Build ETL Pipeline for Automated Data Synchronization

Create an automated data pipeline that pulls employee data from Finch API, engagement/performance data from Lattice API, and loads it into the Azure SQL data warehouse on a scheduled basis. This pipeline ensures Power BI dashboards always reflect current data. Implement using Azure Data Factory or a lightweight Python script running on Azure Functions.

Option A: Python Azure Function for ETL
bash
# provision Function App, storage account, Key Vault secrets, and deploy

# Option A: Python Azure Function for ETL (recommended for cost efficiency)
# Create the storage account first (name must be 3-24 lowercase
# alphanumeric characters):
az storage account create \
  --name sthranalyticsclient \
  --resource-group rg-hr-analytics \
  --location eastus \
  --sku Standard_LRS

# Create the Function App:
az functionapp create \
  --resource-group rg-hr-analytics \
  --consumption-plan-location eastus \
  --runtime python \
  --runtime-version 3.11 \
  --functions-version 4 \
  --name func-hr-etl-CLIENT \
  --storage-account sthranalyticsclient \
  --os-type Linux

# Store secrets in Azure Key Vault:
az keyvault create --name kv-hr-CLIENT --resource-group rg-hr-analytics --location eastus
az keyvault secret set --vault-name kv-hr-CLIENT --name FinchAccessToken --value 'FINCH_TOKEN'
az keyvault secret set --vault-name kv-hr-CLIENT --name SqlConnectionString --value 'SERVER=sql-hranalytics-CLIENT.database.windows.net;DATABASE=db-people-analytics;UID=hradmin;PWD=PASSWORD'
az keyvault secret set --vault-name kv-hr-CLIENT --name LatticeApiKey --value 'LATTICE_API_KEY'

# Deploy the ETL function (see custom_ai_components for full source code)
# The function runs on a timer trigger: daily at 2 AM UTC
func azure functionapp publish func-hr-etl-CLIENT
Note

Azure Functions on the Consumption plan costs essentially nothing for this workload — expect under $5/month for daily ETL runs processing 300-500 employee records. The timer trigger runs at 2 AM UTC to avoid impacting HRIS/payroll system performance during business hours. Implement retry logic and error notifications: if the ETL fails, an alert should go to the MSP monitoring mailbox via Azure Monitor alerts. Keep the Finch access token in Key Vault and rotate quarterly as a security best practice.
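
The retry logic mentioned above can be a small wrapper around the ETL's HTTP calls, with a distinctive log line for an Azure Monitor alert rule to match. A sketch; the ETL_FATAL marker is an illustrative convention, not an Azure requirement:

python
# Sketch: exponential-backoff retry for Finch/Lattice calls inside the ETL.
# The 'ETL_FATAL' marker is a convention for an Azure Monitor log alert rule.
import logging
import time
import requests

def get_with_retry(url: str, headers: dict, params: dict = None,
                   attempts: int = 4) -> dict:
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.get(url, headers=headers, params=params, timeout=30)
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException as exc:
            if attempt == attempts:
                logging.error('ETL_FATAL: %s failed after %d attempts: %s',
                              url, attempts, exc)
                raise
            wait = 2 ** attempt  # 2s, 4s, 8s between attempts
            logging.warning('Attempt %d for %s failed (%s); retrying in %ds',
                            attempt, url, exc, wait)
            time.sleep(wait)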

Step 8: Design and Deploy Custom Power BI Retention Analytics Dashboards

Build a comprehensive Power BI dashboard suite that goes beyond Lattice's native analytics. These dashboards provide executive-level visualizations, cross-system correlation analysis (engagement × turnover × compensation), and predictive retention modeling outputs. Connect Power BI to the Azure SQL data warehouse and publish to the Power BI Service for web and Teams access.

1. Open Power BI Desktop on an MSP analyst workstation.
2. Connect to Azure SQL Database: Get Data > Azure SQL Database | Server: sql-hranalytics-CLIENT.database.windows.net | Database: db-people-analytics | Authentication: Azure Active Directory.
3. Import the following tables: hr.dim_employees, hr.dim_date, hr.fact_turnover, hr.fact_engagement, hr.fact_performance, hr.fact_compensation, hr.retention_risk.
4. Create data model relationships in Power BI: fact_turnover[employee_key] -> dim_employees[employee_key] | fact_turnover[termination_date_key] -> dim_date[date_key] | fact_engagement[employee_key] -> dim_employees[employee_key] | fact_engagement[survey_date_key] -> dim_date[date_key] | fact_performance[employee_key] -> dim_employees[employee_key] | fact_compensation[employee_key] -> dim_employees[employee_key] | retention_risk[employee_key] -> dim_employees[employee_key].
5. Create the following DAX measures (see the code block below).
6. Build dashboard pages: Page 1: Executive Summary (KPI cards: turnover rate, eNPS, headcount, avg tenure, risk count) | Page 2: Turnover Deep Dive (by dept, manager, tenure, reason; trend lines) | Page 3: Engagement Analysis (scores by dimension, dept heatmap, trend) | Page 4: Retention Risk Heatmap (dept × risk tier matrix, individual risk list for HRBP use) | Page 5: Manager Effectiveness (turnover + engagement by manager scatter plot) | Page 6: Compensation Impact (comp ratio vs turnover correlation, pay equity) | Page 7: Predictive Insights (flight risk model outputs, key drivers, recommended actions).
7. Configure Row-Level Security (RLS) via Power BI Desktop > Modeling > Manage Roles with three roles: HR_Admin (sees all), HR_Manager (sees own department), Executive (sees aggregates only, no individual names).
8. Publish to Power BI Service: Publish > select workspace 'HR Analytics - CLIENT' | configure scheduled refresh daily at 4 AM UTC (after the ETL completes at 2 AM) | pin key visuals to a shared dashboard | embed in the Microsoft Teams HR channel for easy access.
DAX measures for turnover rate, rolling 12-month turnover, average engagement score, and retention risk distribution
dax
-- Voluntary turnover rate, annualized over the months in filter context
Turnover Rate =
VAR _Exits =
    CALCULATE(
        COUNTROWS(fact_turnover),
        fact_turnover[termination_type] = "Voluntary"
    )
VAR _Headcount =
    CALCULATE(COUNTROWS(dim_employees), dim_employees[is_active] = 1)
VAR _Months = DISTINCTCOUNT(dim_date[month])
RETURN
    DIVIDE(_Exits, _Headcount, 0) * DIVIDE(12, _Months, 1)

-- Rolling 12-Month Turnover
Rolling 12M Turnover =
VAR _EndDate = MAX(dim_date[full_date])
VAR _StartDate = EDATE(_EndDate, -12)
RETURN
    DIVIDE(
        CALCULATE(
            COUNTROWS(fact_turnover),
            DATESBETWEEN(dim_date[full_date], _StartDate, _EndDate)
        ),
        CALCULATE(COUNTROWS(dim_employees), dim_employees[is_active] = 1),
        0
    )

-- Average Engagement Score
Avg Engagement = AVERAGE(fact_engagement[overall_engagement_score])

-- Retention Risk Distribution
High Risk Count =
    COUNTROWS(
        FILTER(retention_risk, retention_risk[risk_tier] IN {"High", "Critical"})
    )
High Risk % =
    DIVIDE(
        [High Risk Count],
        CALCULATE(COUNTROWS(dim_employees), dim_employees[is_active] = 1),
        0
    )
Note

Row-Level Security (RLS) is absolutely critical — without it, any Power BI user could see all employee data including compensation and risk scores. Test RLS thoroughly by logging in as different user roles. Set the scheduled refresh to run 2 hours after the ETL pipeline completes to ensure fresh data. Use Power BI's built-in anomaly detection on turnover trend lines to automatically flag unusual spikes. For the executive summary page, use large KPI card visuals with conditional formatting (red/yellow/green) based on benchmark thresholds. Export the .pbix file and store in the client's SharePoint document library as a backup.

Step 9: Configure Engagement Survey Program in Lattice

Design and deploy the employee engagement survey program that will feed the retention analytics. Configure both a comprehensive annual engagement survey and quarterly pulse surveys. The survey data becomes the richest input for retention risk prediction — engagement drop-offs are the strongest leading indicator of turnover.

1. Navigate to Engagement > Survey Templates.
2. Configure the Annual Engagement Survey:
   • Name: '[Client Name] Annual Engagement Survey 2025'
   • Frequency: annual (launch in Q1 or Q3 to avoid holiday periods)
   • Questions: Lattice's validated question bank covering Overall Engagement (5 questions), Manager Effectiveness (5), Career Growth & Development (4), Compensation & Benefits Satisfaction (3), Work-Life Balance & Wellbeing (4), Belonging & Inclusion (4), the eNPS question 'How likely are you to recommend [Company] as a place to work?' (0-10), plus 2 open-ended questions for qualitative feedback; ~28 questions total, estimated 8-12 minute completion time
   • Anonymity: minimum reporting threshold of 5 responses per group
   • Distribution: email plus Slack/Teams notification
   • Response window: 2 weeks
3. Configure Quarterly Pulse Surveys:
   • Name: '[Client Name] Q[X] Pulse Check'
   • Frequency: quarterly (offset from the annual survey by 3 months)
   • Questions: 5-7 rotating questions from the annual bank; include eNPS in every pulse for trend tracking
   • Response window: 1 week
4. Configure survey demographics/segments: enable slicing by Department, Location, Tenure Band, Job Level, Manager, and Age Band (if the client approves).
5. Set up manager action planning (Lattice Admin > Engagement > Action Plans): enable AI-suggested action items based on survey results and set manager notification for post-survey results access.
6. Configure Lattice > Integrations: enable Slack/Teams integration for survey reminders, HRIS sync for demographic segmentation, and API access for data export to the Power BI warehouse.
Note

The single most important success factor for engagement surveys is executive communication BEFORE launch. Work with the client's CHRO to send a company-wide message explaining why surveys are being conducted, how data will be used, and anonymity protections. Response rates below 70% produce unreliable data; target 80%+ by using multiple reminder channels. Set the anonymity threshold to 5 minimum responses — this prevents identifying individuals in small teams while still allowing useful department-level analysis. Lattice's Text Analytics feature will analyze open-ended responses and surface theme clusters, which feed into the retention risk factor analysis.
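
If survey results are ever aggregated outside Lattice (for example, in the warehouse ETL), the same 5-response anonymity floor should be enforced there too. A minimal sketch, assuming a response DataFrame shaped like the hr.fact_engagement table:

python
# Sketch: suppress any survey segment below the anonymity threshold when
# aggregating engagement scores in the warehouse pipeline.
import pandas as pd

MIN_GROUP_SIZE = 5  # mirrors the Lattice anonymity setting above

def safe_group_scores(responses: pd.DataFrame, group_col: str) -> pd.DataFrame:
    out = (responses.groupby(group_col)['overall_engagement_score']
                    .agg(n='count', avg_score='mean')
                    .reset_index())
    # Null out averages for groups too small to report safely
    out.loc[out['n'] < MIN_GROUP_SIZE, 'avg_score'] = None
    return out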

Step 10: Deploy Retention Risk Scoring Pipeline

Set up the custom retention risk scoring pipeline that combines Lattice's AI flight risk output with additional data signals from the HR data warehouse (compensation data, tenure patterns, manager turnover history) to produce a comprehensive risk score. This pipeline runs as a scheduled Azure Function that updates the hr.retention_risk table daily.

bash
# This is deployed as part of the Azure Function App created in Step 7
# See custom_ai_components section for the complete Python implementation

# Deploy the risk scoring function:
cd hr-analytics-functions/
func azure functionapp publish func-hr-etl-CLIENT

# Verify the function is running:
az functionapp function list \
  --name func-hr-etl-CLIENT \
  --resource-group rg-hr-analytics \
  --output table

# Check function execution logs:
func azure functionapp logstream func-hr-etl-CLIENT

# Manually trigger a test run:
curl -X POST 'https://func-hr-etl-CLIENT.azurewebsites.net/api/calculate_retention_risk' \
  -H 'x-functions-key: FUNCTION_KEY' \
  -H 'Content-Type: application/json' \
  -d '{"test_mode": true}'
Note

The retention risk model uses a weighted scoring approach rather than a black-box ML model for the initial deployment. This is intentional: weighted scoring is transparent, explainable, and doesn't require a data scientist for maintenance — making it sustainable for MSP-managed ongoing support. The weights can be calibrated after 6-12 months of actual turnover data by comparing predicted risk scores against who actually left. Once enough data accumulates (typically 50+ voluntary terminations), the model can be upgraded to a Random Forest or XGBoost classifier for higher accuracy. See the custom AI components section for the full implementation.
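
A minimal sketch of that weighted approach follows; the factor names, weights, and tier cut-offs are illustrative assumptions to be calibrated against the client's actual turnover, not the shipped model (see the custom AI components section for the full pipeline):

python
# Sketch: transparent weighted retention risk score. Weights and tiers are
# illustrative assumptions; calibrate after 6-12 months of turnover data.
def retention_risk_score(signals: dict) -> tuple:
    """signals: factor name -> normalized 0-1 value (1 = highest risk)."""
    weights = {
        'engagement_drop':     0.30,  # decline vs. prior survey
        'lattice_flight_risk': 0.25,  # Lattice AI flag mapped to 0 / 0.5 / 1
        'months_since_raise':  0.20,  # scaled, capped at 24 months
        'manager_change':      0.15,  # manager turnover in the last 6 months
        'low_comp_ratio':      0.10,  # below-market compensation ratio
    }
    score = 100 * sum(w * signals.get(k, 0.0) for k, w in weights.items())
    tier = ('Critical' if score >= 85 else 'High' if score >= 70
            else 'Medium' if score >= 40 else 'Low')
    return round(score, 2), tier

# Example: sharp engagement drop, Lattice flag set, 18 months since a raise
print(retention_risk_score({'engagement_drop': 0.8,
                            'lattice_flight_risk': 1.0,
                            'months_since_raise': 0.75}))  # (64.0, 'Medium')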

Step 11: Configure Power Automate Retention Alert Workflows

Build automated workflows that trigger when retention risk scores exceed defined thresholds or when engagement scores drop significantly. These alerts ensure HR managers take timely action on at-risk employees rather than discovering issues during quarterly reviews.

Power Automate Flow Definitions: High Flight Risk Alert, Engagement Score Drop Alert, New Termination Alert
text
# Power Automate Flow 1: High Flight Risk Alert
# Trigger: Scheduled (daily at 6 AM after risk scores update)
# 
# Flow steps:
# 1. Initialize variable: RiskThreshold = 75
# 2. Execute SQL query against Azure SQL:
#    SELECT e.first_name, e.last_name, e.department, e.job_title,
#           e.manager_employee_id, r.risk_score, r.risk_tier,
#           r.top_risk_factor_1, r.top_risk_factor_2, r.top_risk_factor_3
#    FROM hr.retention_risk r
#    JOIN hr.dim_employees e ON r.employee_key = e.employee_key
#    WHERE r.risk_tier IN ('High', 'Critical')
#    AND r.calculated_date = CAST(GETDATE() AS DATE)
#    AND e.is_active = 1
# 3. Apply to each result:
#    a. Get manager email from Azure AD using manager_employee_id
#    b. Send adaptive card to manager via Teams:
#       Title: '⚠️ Retention Alert: [Employee Name]'
#       Body: Risk Score, Risk Tier, Top 3 Risk Factors
#       Action button: 'Schedule 1:1' (deep link to Outlook calendar)
#       Action button: 'View Full Profile' (deep link to Lattice)
#    c. Send summary email to HR Director with all high-risk employees

# Power Automate Flow 2: Engagement Score Drop Alert
# Trigger: When a new row is added to fact_engagement (via SQL connector)
# Condition: If overall_engagement_score < previous_score - 15% 
#   AND previous_score was in top 50th percentile
# Action: Send Teams notification to HRBP for that department

# Power Automate Flow 3: New Termination Alert
# Trigger: When Finch webhook fires for employment status change to 'terminated'
# Action: Log to fact_turnover table, send notification to HR Director,
#         trigger exit interview scheduling workflow

# Configuration:
# 1. Go to https://make.powerautomate.com
# 2. Create each flow using the Premium SQL Server connector
# 3. Test each flow with sample data before going live
# 4. Set error handling: configure 'Run after' on failure to send
#    error notification to MSP monitoring mailbox
Note

Power Automate Premium connectors (SQL Server, HTTP) require at minimum one Power Automate Premium license ($15/user/month) assigned to the service account running the flows. The adaptive card in Teams is preferred over email because it's actionable — managers can schedule a 1:1 directly from the notification. Set up a Power Automate error monitoring flow that emails the MSP support address if any flow fails — this is part of the managed services SLA. Alert fatigue is real: start with only Critical tier alerts, then expand to High tier after managers demonstrate they're acting on alerts.

Step 12: Configure Network Security and Compliance Controls

Implement network-level security controls to protect employee PII in transit and at rest. Configure the FortiGate firewall (if installed), VLAN segmentation, and Azure security features to meet compliance requirements identified in the DPIA.

FortiGate VLAN and firewall policy configuration, and Azure SQL security hardening via CLI
shell
# 1. FortiGate 40F Configuration (if installed)
# Access FortiGate admin console at https://192.168.1.99

# Create HR VLAN interface:
config system interface
    edit 'HR-Analytics-VLAN'
        set vdom root
        set type vlan
        set vlanid 120
        set interface internal
        set ip 10.120.1.1 255.255.255.0
        set allowaccess ping https ssh
    next
end

# Create address objects for SaaS platforms:
config firewall address
    edit 'Lattice-Cloud'
        set type fqdn
        set fqdn 'latticehq.com'
    next
    edit 'Azure-SQL'
        set type fqdn
        set fqdn 'sql-hranalytics-CLIENT.database.windows.net'
    next
    edit 'PowerBI-Service'
        set type fqdn
        set fqdn 'app.powerbi.com'
    next
    edit 'Finch-API'
        set type fqdn
        set fqdn 'api.tryfinch.com'
    next
end

# Create firewall policy allowing HR VLAN to SaaS only:
config firewall policy
    edit 100
        set name 'HR-Analytics-Outbound'
        set srcintf 'HR-Analytics-VLAN'
        set dstintf 'wan1'
        set srcaddr 'all'
        set dstaddr 'Lattice-Cloud' 'Azure-SQL' 'PowerBI-Service' 'Finch-API'
        set action accept
        set schedule always
        set service HTTPS
        set ssl-ssh-profile certificate-inspection
        set logtraffic all
    next
end

# 2. Azure SQL Security Configuration:
# Enable Advanced Threat Protection:
az sql db threat-policy update \
  --resource-group rg-hr-analytics \
  --server sql-hranalytics-CLIENT \
  --name db-people-analytics \
  --state Enabled \
  --email-addresses 'security@msp-domain.com'

# Enable auditing:
az sql server audit-policy update \
  --resource-group rg-hr-analytics \
  --name sql-hranalytics-CLIENT \
  --state Enabled \
  --storage-account sthranalyticsclient
Entra ID Conditional Access policies (Azure Portal > Entra ID > Conditional Access):

1. Create a policy requiring MFA for Power BI and Lattice access
2. Create a policy blocking access from non-compliant devices
3. Create a policy restricting access to the corporate network and managed devices
Note

The FortiGate configuration is only needed if the client is getting a new firewall as part of this project. If they already have a managed firewall, add the equivalent VLAN and policy rules to their existing device. TLS 1.2+ is enforced by default on all SaaS platforms and Azure SQL — verify this is not downgraded by any SSL inspection appliance. Azure SQL Transparent Data Encryption (TDE) is enabled by default and encrypts data at rest. The Conditional Access policies are highly recommended but require Azure AD P1 or P2 licensing (included in M365 E3/E5 or available as standalone). Document all security configurations in the client's compliance binder.

Step 13: Launch First Engagement Survey and Validate End-to-End Data Flow

Conduct a pilot engagement survey with a small group (HR team + one department) to validate the entire data pipeline from survey response through to Power BI dashboard visualization and retention risk scoring. This is the integration smoke test before full company-wide launch.

1. In Lattice, create a pilot survey (Engagement > New Survey). Name: 'Pilot Engagement Survey - [Department Name]'; audience: the pilot department (30-50 employees is ideal); questions: the full annual survey set (28 questions); launch date: immediately; close date: 3 business days from launch.
2. Monitor the response rate in the Lattice real-time dashboard. Target: 70%+ in the pilot group.
3. After the survey closes, verify the data flow (see the commands below).
4. Open the Power BI Service and verify the dashboards refresh with pilot data.
5. Verify Power Automate alerts fire for any high-risk employees.
Verify Lattice API returns survey results
bash
# Check Lattice API returns survey data:
curl -X GET 'https://api.latticehq.com/v1/engagement/surveys/SURVEY_ID/results' \
  -H 'Authorization: Bearer LATTICE_API_KEY' \
  -H 'Content-Type: application/json'
Manually trigger ETL function to pull survey data into Azure
bash
# Manually trigger ETL function to pull survey data:
curl -X POST 'https://func-hr-etl-CLIENT.azurewebsites.net/api/sync_engagement_data' \
  -H 'x-functions-key: FUNCTION_KEY'
Verify pilot survey data landed in Azure SQL (run via Azure Data Studio or sqlcmd)
sql
SELECT COUNT(*) as response_count FROM hr.fact_engagement WHERE survey_name LIKE '%Pilot%';

SELECT department, AVG(overall_engagement_score) as avg_score
FROM hr.fact_engagement e
JOIN hr.dim_employees d ON e.employee_key = d.employee_key
WHERE survey_name LIKE '%Pilot%'
GROUP BY department;
Trigger retention risk score recalculation
bash
# Trigger retention risk recalculation:
curl -X POST 'https://func-hr-etl-CLIENT.azurewebsites.net/api/calculate_retention_risk' \
  -H 'x-functions-key: FUNCTION_KEY'
Verify retention risk scores were updated for today's date
sql
SELECT e.department, r.risk_tier, COUNT(*) as cnt
FROM hr.retention_risk r
JOIN hr.dim_employees e ON r.employee_key = e.employee_key
WHERE r.calculated_date = CAST(GETDATE() AS DATE)
GROUP BY e.department, r.risk_tier
ORDER BY e.department, r.risk_tier;
Note

The pilot survey is critical — do not skip this step and go straight to full company launch. Common issues discovered during pilot: SCIM sync didn't map department names correctly (causing blank segments in Lattice analytics), ETL function timeout on Lattice API pagination, Power BI scheduled refresh credentials expired, or Power Automate SQL connector lost authentication. Fix all issues discovered before full launch. Document pilot results and present to the HR Director as proof of concept before requesting approval for company-wide survey launch.

Step 14: Full Company Launch and HR Team Training

After successful pilot validation, launch the engagement survey company-wide, conduct formal training for the HR team on all analytics tools, and transition to steady-state operations. This step marks the shift from implementation to managed services.

1. Work with the CHRO on company-wide communication: an email from the CEO/CHRO explaining the engagement survey program, an FAQ addressing anonymity, data use, and privacy, and the timeline (survey open for 2 weeks).
2. Launch the full survey in Lattice (Engagement > New Survey): audience: All Active Employees; all 28 questions from the validated pilot template; reminders scheduled for Day 3, Day 7, and Day 12; close date: 14 days from launch.
3. During the survey period, monitor daily: response rate by department (target: 80%+), targeted reminders to low-response departments, and immediate resolution of any technical access issues.
4. Post-survey: run a final ETL sync to the warehouse, generate the Power BI executive summary report, schedule a results presentation with HR leadership, enable manager-level results access in Lattice, and activate the action planning module for managers.
5. Conduct training sessions (scheduled via Teams): Session 1: HR Director & CHRO, Executive Dashboard Training (1 hour); Session 2: HR Managers & HRBPs, Lattice Analytics Deep Dive (2 hours); Session 3: Department Managers, Reading Your Team's Results (1 hour); Session 4: HR Admin, Platform Administration & Survey Management (2 hours).
6. Deliver the documentation package (see the client_handoff section).
Note

The first full survey is a major organizational event — undercommunication is the biggest risk to response rates. Partner with the client's internal communications team if they have one. Expect 2-3 weeks of heavy support requests after launch as managers learn to interpret results. Schedule the MSP's first Quarterly Business Review 4-6 weeks after survey close to review findings and establish the ongoing cadence. Ensure the HR Admin is comfortable launching pulse surveys independently — this is the key skill transfer for sustainable operations.

Custom AI Components

Finch-to-Azure-SQL ETL Data Sync Function

Type: integration. Azure Function that pulls employee directory, employment details, and compensation data from the Finch unified API and upserts it into the Azure SQL HR data warehouse. Runs on a daily timer trigger at 2 AM UTC. Handles pagination, deduplication, SCD Type 2 updates for the employee dimension, and error logging.

sync_hris_data/__init__.py
python
# Azure Function main entry point (Python 3.11, Timer trigger)

# Azure Function: sync_hris_data
# File: sync_hris_data/__init__.py
# Runtime: Python 3.11
# Trigger: Timer (daily at 2 AM UTC)
# Requirements: azure-functions, pyodbc, requests, azure-keyvault-secrets, azure-identity

import azure.functions as func
import pyodbc
import requests
import json
import logging
import os
from datetime import datetime, date
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def main(mytimer: func.TimerRequest) -> None:
    logging.info('HRIS ETL sync started at %s', datetime.utcnow())
    
    # Retrieve secrets from Key Vault
    kv_url = os.environ['KEY_VAULT_URL']  # e.g., https://kv-hr-CLIENT.vault.azure.net/
    credential = DefaultAzureCredential()
    kv_client = SecretClient(vault_url=kv_url, credential=credential)
    
    finch_token = kv_client.get_secret('FinchAccessToken').value
    sql_conn_str = kv_client.get_secret('SqlConnectionString').value
    
    # Finch API base URL
    FINCH_BASE = 'https://api.tryfinch.com'
    FINCH_HEADERS = {
        'Authorization': f'Bearer {finch_token}',
        'Finch-API-Version': '2020-09-17',
        'Content-Type': 'application/json'
    }
    
    # Step 1: Pull company directory (all employees)
    logging.info('Pulling company directory from Finch...')
    directory = []
    has_more = True
    offset = 0
    while has_more:
        resp = requests.get(
            f'{FINCH_BASE}/employer/directory',
            headers=FINCH_HEADERS,
            params={'limit': 100, 'offset': offset}
        )
        resp.raise_for_status()
        data = resp.json()
        directory.extend(data.get('individuals', []))
        paging = data.get('paging', {})
        batch_count = paging.get('count', 0)
        if batch_count == 0:
            break  # guard against an empty page causing an infinite loop
        offset += batch_count
        has_more = offset < paging.get('total', 0)
    
    logging.info(f'Retrieved {len(directory)} employees from directory')
    
    # Step 2: Pull individual details in batches of 20
    individual_ids = [{'individual_id': emp['id']} for emp in directory]
    individuals = []
    for i in range(0, len(individual_ids), 20):
        batch = individual_ids[i:i+20]
        resp = requests.post(
            f'{FINCH_BASE}/employer/individual',
            headers=FINCH_HEADERS,
            json={'requests': batch}
        )
        resp.raise_for_status()
        individuals.extend(resp.json().get('responses', []))
    
    # Step 3: Pull employment details in batches of 20
    employments = []
    for i in range(0, len(individual_ids), 20):
        batch = individual_ids[i:i+20]
        resp = requests.post(
            f'{FINCH_BASE}/employer/employment',
            headers=FINCH_HEADERS,
            json={'requests': batch}
        )
        resp.raise_for_status()
        employments.extend(resp.json().get('responses', []))
    
    # Step 4: Merge and load into Azure SQL
    conn = pyodbc.connect(sql_conn_str)
    cursor = conn.cursor()
    
    upsert_count = 0
    # Batch responses include individual_id; build lookups keyed on it rather
    # than assuming response order matches request order
    ind_by_id = {r.get('individual_id'): r.get('body', {}) for r in individuals}
    empl_by_id = {r.get('individual_id'): r.get('body', {}) for r in employments}
    for emp_dir in directory:
        try:
            emp_id = emp_dir['id']
            ind = ind_by_id.get(emp_id, {})
            empl = empl_by_id.get(emp_id, {})
            
            first_name = ind.get('first_name', '')
            last_name = ind.get('last_name', '')
            email_list = ind.get('emails', [])
            email = email_list[0].get('data', '') if email_list else ''
            
            department = emp_dir.get('department', {}).get('name', '') if isinstance(emp_dir.get('department'), dict) else str(emp_dir.get('department', ''))
            manager_id = emp_dir.get('manager', {}).get('id', '') if isinstance(emp_dir.get('manager'), dict) else ''
            
            hire_date = empl.get('start_date', None)
            end_date = empl.get('end_date', None)
            title = empl.get('title', '')
            is_active = 1 if empl.get('is_active', True) else 0
            
            income = empl.get('income', {})
            if income:
                amount = income.get('amount', 0)
                unit = income.get('unit', 'yearly')
                currency = income.get('currency', 'usd')
                # Normalize to annual
                if unit == 'monthly':
                    annual_salary = (amount or 0) / 100 * 12  # Finch returns cents
                elif unit == 'hourly':
                    annual_salary = (amount or 0) / 100 * 2080
                else:
                    annual_salary = (amount or 0) / 100
            else:
                annual_salary = None
            
            location = empl.get('location', {})
            location_str = ''
            if location:
                city = location.get('city', '')
                state = location.get('state', '')
                location_str = f'{city}, {state}'.strip(', ')
            
            employment_type = empl.get('employment', {}).get('type', 'full_time') if isinstance(empl.get('employment'), dict) else 'full_time'
            
            # Upsert logic: check if employee exists, update if changed, insert if new
            cursor.execute(
                '''MERGE hr.dim_employees AS target
                USING (SELECT ? AS employee_id) AS source
                ON target.employee_id = source.employee_id AND target.is_current = 1
                WHEN MATCHED AND (
                    target.department != ? OR target.job_title != ? OR 
                    target.manager_employee_id != ? OR target.is_active != ?
                ) THEN UPDATE SET
                    end_date = CAST(GETDATE() AS DATE),
                    is_current = 0,
                    updated_at = GETDATE()
                WHEN NOT MATCHED THEN INSERT (
                    employee_id, first_name, last_name, email, hire_date,
                    termination_date, department, job_title, manager_employee_id,
                    location, employment_type, is_active, effective_date, is_current
                ) VALUES (
                    ?, ?, ?, ?, ?,
                    ?, ?, ?, ?,
                    ?, ?, ?, CAST(GETDATE() AS DATE), 1
                );''',
                emp_id, department, title, manager_id, is_active,
                emp_id, first_name, last_name, email, hire_date,
                end_date, department, title, manager_id,
                location_str, employment_type, is_active
            )
            
            # For changed records, insert new current version
            if cursor.rowcount > 0:
                cursor.execute(
                    '''IF NOT EXISTS (SELECT 1 FROM hr.dim_employees WHERE employee_id = ? AND is_current = 1)
                    INSERT INTO hr.dim_employees (
                        employee_id, first_name, last_name, email, hire_date,
                        termination_date, department, job_title, manager_employee_id,
                        location, employment_type, is_active, effective_date, is_current
                    ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, CAST(GETDATE() AS DATE), 1)''',
                    emp_id,
                    emp_id, first_name, last_name, email, hire_date,
                    end_date, department, title, manager_id,
                    location_str, employment_type, is_active
                )
            
            # Upsert compensation data
            if annual_salary:
                cursor.execute(
                    '''MERGE hr.fact_compensation AS target
                    USING (
                        SELECT employee_key FROM hr.dim_employees 
                        WHERE employee_id = ? AND is_current = 1
                    ) AS source ON target.employee_key = source.employee_key
                        AND target.effective_date_key = CAST(FORMAT(GETDATE(), 'yyyyMMdd') AS INT)
                    WHEN NOT MATCHED THEN INSERT (
                        employee_key, effective_date_key, base_salary, total_comp
                    ) VALUES (
                        source.employee_key,
                        CAST(FORMAT(GETDATE(), 'yyyyMMdd') AS INT),
                        ?, ?
                    );''',
                    emp_id, annual_salary, annual_salary
                )
            
            upsert_count += 1
            
        except Exception as e:
            logging.error(f'Error processing employee {emp_dir.get("id", "unknown")}: {str(e)}')
            continue
    
    conn.commit()
    cursor.close()
    conn.close()
    
    logging.info(f'ETL complete. Processed {upsert_count}/{len(directory)} employees.')
function.json — Timer trigger binding configuration (daily at 2 AM UTC)
json

{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "mytimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 2 * * *"
    }
  ]
}
requirements.txt — Python package dependencies
text
azure-functions
pyodbc
requests
azure-keyvault-secrets
azure-identity

Lattice Engagement Data Sync Function

Type: integration Azure Function that pulls engagement survey results and performance review data from the Lattice API and loads them into the Azure SQL data warehouse. Runs daily after the HRIS sync so the employee dimension is current before fact data loads.

Implementation:

Azure Function: sync_engagement_data
python
# Azure Function: sync_engagement_data
# File: sync_engagement_data/__init__.py
# Runtime: Python 3.11
# Trigger: Timer (daily at 2:30 AM UTC, 30 min after HRIS sync)

import azure.functions as func
import pyodbc
import requests
import logging
import os
from datetime import datetime
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def main(mytimer: func.TimerRequest) -> None:
    logging.info('Lattice engagement sync started at %s', datetime.utcnow())
    
    kv_url = os.environ['KEY_VAULT_URL']
    credential = DefaultAzureCredential()
    kv_client = SecretClient(vault_url=kv_url, credential=credential)
    
    lattice_key = kv_client.get_secret('LatticeApiKey').value
    sql_conn_str = kv_client.get_secret('SqlConnectionString').value
    
    LATTICE_BASE = 'https://api.latticehq.com/v1'
    LATTICE_HEADERS = {
        'Authorization': f'Bearer {lattice_key}',
        'Content-Type': 'application/json'
    }
    
    conn = pyodbc.connect(sql_conn_str)
    cursor = conn.cursor()
    
    # Pull engagement survey results
    # Note: Lattice API endpoints may vary; consult current docs
    try:
        # Get list of surveys
        resp = requests.get(f'{LATTICE_BASE}/engagement/surveys', headers=LATTICE_HEADERS, timeout=30)
        resp.raise_for_status()
        surveys = resp.json().get('data', [])
        
        for survey in surveys:
            survey_id = survey['id']
            survey_name = survey.get('name', '')
            survey_date = survey.get('closed_at', survey.get('created_at', ''))
            
            if not survey_date:
                continue
            
            # Get aggregated results by employee (if accessible via API)
            # Lattice may return anonymized/aggregated data depending on plan
            results_resp = requests.get(
                f'{LATTICE_BASE}/engagement/surveys/{survey_id}/results',
                headers=LATTICE_HEADERS,
                timeout=30
            )
            
            if results_resp.status_code == 200:
                results = results_resp.json().get('data', {})
                
                # Process dimension-level scores
                dimensions = results.get('dimensions', [])
                for dim in dimensions:
                    dim_name = dim.get('name', '')
                    segments = dim.get('segments', [])
                    
                    for segment in segments:
                        segment_type = segment.get('type', '')  # department, location, etc.
                        segment_value = segment.get('value', '')
                        avg_score = segment.get('average_score', 0)
                        response_count = segment.get('response_count', 0)
                        
                        # Store segment-level engagement data
                        cursor.execute(
                            '''INSERT INTO hr.fact_engagement_segments (
                                survey_name, survey_date, dimension_name,
                                segment_type, segment_value, avg_score, response_count
                            ) VALUES (?, ?, ?, ?, ?, ?, ?)''',
                            survey_name,
                            survey_date[:10] if survey_date else None,
                            dim_name, segment_type, segment_value,
                            avg_score, response_count
                        )
                
                # Process overall scores
                overall = results.get('overall', {})
                enps = results.get('enps', {})
                
                logging.info(
                    f'Survey "{survey_name}": Overall score={overall.get("score", "N/A")}, '
                    f'eNPS={enps.get("score", "N/A")}, '
                    f'Response rate={results.get("response_rate", "N/A")}'
                )
        
        conn.commit()
        logging.info('Engagement data sync complete.')
        
    except requests.exceptions.HTTPError as e:
        logging.error(f'Lattice API error: {e}')
        # Response truthiness is False for 4xx/5xx, so compare against None
        logging.error(f'Response: {e.response.text if e.response is not None else "No response"}')
    except Exception as e:
        logging.error(f'Engagement sync error: {str(e)}')
    finally:
        cursor.close()
        conn.close()

# function.json:
# {
#   "scriptFile": "__init__.py",
#   "bindings": [
#     {
#       "name": "mytimer",
#       "type": "timerTrigger",
#       "direction": "in",
#       "schedule": "0 30 2 * * *"
#     }
#   ]
# }
Note

Lattice's API availability varies by plan and version. For plans without full API access, use Lattice's CSV export feature combined with an Azure Blob Storage trigger:

1. HR admin exports the CSV from Lattice monthly
2. Uploads it to the designated Azure Blob container
3. A blob-triggered Azure Function parses the file and loads it to SQL

This is a pragmatic fallback that keeps the data pipeline intact.
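
A minimal sketch of the blob-triggered loader follows. The function name load_lattice_csv, the container path, and the CSV column headers are assumptions; map them to the actual Lattice export format before deploying.

Azure Function: load_lattice_csv (CSV fallback loader)
python
# Azure Function: load_lattice_csv (hypothetical name)
# File: load_lattice_csv/__init__.py
# Runtime: Python 3.11
# Trigger: Blob (fires when an HR admin uploads a Lattice CSV export)

import csv
import io
import logging
import os

import azure.functions as func
import pyodbc
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def main(blob: func.InputStream) -> None:
    logging.info('Processing Lattice CSV export: %s (%s bytes)', blob.name, blob.length)

    kv_client = SecretClient(vault_url=os.environ['KEY_VAULT_URL'],
                             credential=DefaultAzureCredential())
    conn = pyodbc.connect(kv_client.get_secret('SqlConnectionString').value)
    cursor = conn.cursor()

    # utf-8-sig strips the BOM that spreadsheet exports often prepend
    reader = csv.DictReader(io.StringIO(blob.read().decode('utf-8-sig')))
    loaded = 0
    for row in reader:
        # Assumed export columns: Survey, Close Date, Dimension,
        # Segment Type, Segment, Average Score, Responses
        cursor.execute(
            '''INSERT INTO hr.fact_engagement_segments (
                survey_name, survey_date, dimension_name,
                segment_type, segment_value, avg_score, response_count
            ) VALUES (?, ?, ?, ?, ?, ?, ?)''',
            row.get('Survey'), row.get('Close Date'), row.get('Dimension'),
            row.get('Segment Type'), row.get('Segment'),
            float(row.get('Average Score') or 0), int(row.get('Responses') or 0)
        )
        loaded += 1

    conn.commit()
    cursor.close()
    conn.close()
    logging.info('Loaded %d segment rows from %s', loaded, blob.name)

# function.json (container name is an assumption):
# {
#   "scriptFile": "__init__.py",
#   "bindings": [
#     {
#       "name": "blob",
#       "type": "blobTrigger",
#       "direction": "in",
#       "path": "lattice-exports/{name}",
#       "connection": "AzureWebJobsStorage"
#     }
#   ]
# }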

Weighted Retention Risk Scoring Engine

Type: agent Core retention risk calculation engine that combines multiple data signals into a composite 0-100 risk score for each active employee. Uses a transparent weighted scoring model that is explainable to HR leaders and auditable for compliance. Signals include: tenure risk bands, engagement score trends, compensation position, manager stability, performance trajectory, and time since last promotion. The model outputs risk scores, risk tiers (Low/Medium/High/Critical), and the top 3 contributing risk factors for each employee.

Implementation:

Azure Function: calculate_retention_risk
python
# Azure Function: calculate_retention_risk
# File: calculate_retention_risk/__init__.py
# Runtime: Python 3.11
# Trigger: Timer (daily at 3 AM UTC) or HTTP (for manual trigger)

import azure.functions as func
import pyodbc
import logging
import os
import json
from datetime import datetime, date
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# ============================================================
# RETENTION RISK SCORING MODEL v1.0
# ============================================================
# Methodology: Weighted composite scoring
# Total possible score: 100 (highest risk)
# Each factor contributes a weighted sub-score
# 
# Factor Weights (must sum to 100):
# - Tenure Risk Zone:           20 points
# - Engagement Score Trend:     25 points
# - Compensation Position:      15 points
# - Manager Stability:          10 points
# - Performance Trajectory:     10 points
# - Time Since Last Promotion:  10 points
# - Recent Organizational Change: 5 points
# - Work-Life Balance Signal:    5 points
# ============================================================

MODEL_VERSION = 'v1.0-weighted'

def calculate_tenure_risk(tenure_months: int) -> tuple:
    """Tenure-based flight risk. Research shows highest voluntary turnover
    occurs at 1-2 years and again at 4-5 years (post-vesting)."""
    if tenure_months is None:
        return 10, 'Unknown tenure'
    elif tenure_months < 6:
        return 8, 'New hire (< 6 months) - still in onboarding'
    elif 6 <= tenure_months < 12:
        return 14, 'Approaching 1-year mark - common early departure window'
    elif 12 <= tenure_months < 24:
        return 20, 'In 1-2 year high-risk window - peak voluntary turnover zone'
    elif 24 <= tenure_months < 36:
        return 14, '2-3 years tenure - moderate risk, seeking growth'
    elif 36 <= tenure_months < 48:
        return 10, '3-4 years - lower risk if engaged'
    elif 48 <= tenure_months < 60:
        return 16, '4-5 year mark - equity vesting cliff / career reassessment'
    elif 60 <= tenure_months < 84:
        return 8, '5-7 years - established, lower flight risk'
    else:
        return 4, '7+ years - deeply embedded, lowest tenure risk'

def calculate_engagement_risk(current_score: float, previous_score: float, 
                                company_avg: float) -> tuple:
    """Engagement survey score analysis. Both absolute level and trend matter."""
    if current_score is None:
        return 15, 'No engagement data available - unable to assess sentiment'
    
    risk = 0
    factors = []
    
    # Absolute score assessment (0-5 scale typical for Lattice)
    if current_score < 2.5:
        risk += 20
        factors.append('Very low engagement score (below 2.5/5)')
    elif current_score < 3.0:
        risk += 15
        factors.append('Low engagement score (below 3.0/5)')
    elif current_score < 3.5:
        risk += 8
        factors.append('Below-average engagement score')
    elif current_score >= 4.0:
        risk += 0
        factors.append('Strong engagement score')
    else:
        risk += 3
    
    # Trend analysis (drop from previous survey)
    if previous_score and current_score:
        change = current_score - previous_score
        if change <= -0.8:
            risk += 5
            factors.append(f'Significant engagement drop ({change:.1f} points)')
        elif change <= -0.4:
            risk += 3
            factors.append(f'Moderate engagement decline ({change:.1f} points)')
    
    # Below company average
    if company_avg and current_score < (company_avg - 0.5):
        risk += 2
        factors.append('Significantly below company average engagement')
    
    return min(risk, 25), '; '.join(factors) if factors else 'Engagement within normal range'

def calculate_compensation_risk(comp_ratio: float, months_since_raise: int,
                                  salary: float) -> tuple:
    """Compensation position relative to market and internal equity."""
    risk = 0
    factors = []
    
    if comp_ratio is not None:
        if comp_ratio < 0.80:
            risk += 12
            factors.append(f'Significantly below market (comp ratio: {comp_ratio:.0%})')
        elif comp_ratio < 0.90:
            risk += 8
            factors.append(f'Below market midpoint (comp ratio: {comp_ratio:.0%})')
        elif comp_ratio < 0.95:
            risk += 4
            factors.append(f'Slightly below market (comp ratio: {comp_ratio:.0%})')
        else:
            risk += 0
    else:
        risk += 5
        factors.append('Compensation market data unavailable')
    
    if months_since_raise is not None:
        if months_since_raise > 24:
            risk += 3
            factors.append(f'No raise in {months_since_raise} months')
        elif months_since_raise > 18:
            risk += 2
            factors.append(f'{months_since_raise} months since last raise')
    
    return min(risk, 15), '; '.join(factors) if factors else 'Compensation position adequate'

def calculate_manager_risk(manager_tenure_months: int, manager_turnover_count: int,
                            manager_engagement_avg: float) -> tuple:
    """Manager stability and effectiveness signals."""
    risk = 0
    factors = []
    
    if manager_tenure_months is not None and manager_tenure_months < 6:
        risk += 4
        factors.append('New manager (< 6 months) - transition risk')
    
    if manager_turnover_count is not None and manager_turnover_count >= 3:
        risk += 4
        factors.append(f'Manager has had {manager_turnover_count} reports leave in past 12 months')
    elif manager_turnover_count is not None and manager_turnover_count >= 2:
        risk += 2
        factors.append(f'Manager has had {manager_turnover_count} reports leave in past 12 months')
    
    if manager_engagement_avg is not None and manager_engagement_avg < 3.0:
        risk += 2
        factors.append(f'Team average engagement is low ({manager_engagement_avg:.1f}/5)')
    
    return min(risk, 10), '; '.join(factors) if factors else 'Manager stability adequate'

def calculate_performance_risk(current_rating: float, previous_rating: float) -> tuple:
    """Performance trajectory — both high performers (poached) and declining performers (disengaged) are risks."""
    risk = 0
    factors = []
    
    if current_rating is None:
        return 5, 'No performance data available'
    
    # High performers are retention risks (external demand)
    if current_rating >= 4.5:
        risk += 6
        factors.append('Top performer - high external market demand')
    elif current_rating >= 4.0:
        risk += 3
        factors.append('Strong performer - moderate external demand')
    
    # Declining performance signals disengagement
    if previous_rating and current_rating < previous_rating - 0.5:
        risk += 4
        factors.append(f'Performance declined from {previous_rating:.1f} to {current_rating:.1f}')
    
    return min(risk, 10), '; '.join(factors) if factors else 'Performance trajectory stable'

def calculate_promotion_risk(months_since_promotion: int, job_level: str) -> tuple:
    """Time since last promotion — career stagnation is a top driver of turnover."""
    if months_since_promotion is None:
        return 5, 'Promotion history unavailable'
    
    if months_since_promotion > 36:
        return 8, f'No promotion in {months_since_promotion} months - potential career stagnation'
    elif months_since_promotion > 24:
        return 5, f'{months_since_promotion} months since last promotion'
    elif months_since_promotion > 18:
        return 3, f'{months_since_promotion} months since last promotion'
    else:
        return 0, 'Recent promotion - career progression on track'

def calculate_org_change_risk(recent_reorg: bool, team_size_change_pct: float) -> tuple:
    """Recent organizational disruption signals."""
    risk = 0
    factors = []
    
    if recent_reorg:
        risk += 3
        factors.append('Department underwent recent reorganization')
    
    if team_size_change_pct is not None and abs(team_size_change_pct) > 25:
        risk += 2
        factors.append(f'Team size changed by {team_size_change_pct:.0f}% recently')
    
    return min(risk, 5), '; '.join(factors) if factors else 'No significant org disruption'

def assign_risk_tier(score: float) -> str:
    if score >= 75:
        return 'Critical'
    elif score >= 55:
        return 'High'
    elif score >= 35:
        return 'Medium'
    else:
        return 'Low'

def get_top_factors(factor_scores: list, n: int = 3) -> list:
    """Return top N contributing factors sorted by risk contribution."""
    sorted_factors = sorted(factor_scores, key=lambda x: x[0], reverse=True)
    return [f[1] for f in sorted_factors[:n] if f[0] > 0]


def main(req: func.HttpRequest = None, mytimer: func.TimerRequest = None):
    logging.info('Retention risk calculation started at %s', datetime.utcnow())
    
    kv_url = os.environ['KEY_VAULT_URL']
    credential = DefaultAzureCredential()
    kv_client = SecretClient(vault_url=kv_url, credential=credential)
    sql_conn_str = kv_client.get_secret('SqlConnectionString').value
    
    conn = pyodbc.connect(sql_conn_str)
    cursor = conn.cursor()
    
    # Idempotency guard: clear any scores already written today so manual
    # HTTP re-runs do not duplicate rows or inflate the summary counts
    cursor.execute('DELETE FROM hr.retention_risk WHERE calculated_date = CAST(GETDATE() AS DATE)')
    
    # Get company-wide engagement average for relative comparison
    cursor.execute('''
        SELECT AVG(overall_engagement_score) 
        FROM hr.fact_engagement e
        JOIN (
            SELECT employee_key, MAX(survey_date_key) as latest_survey
            FROM hr.fact_engagement GROUP BY employee_key
        ) latest ON e.employee_key = latest.employee_key 
            AND e.survey_date_key = latest.latest_survey
    ''')
    company_avg_engagement = cursor.fetchone()[0] or 3.5
    
    # Pull all active employees with their latest data points
    cursor.execute('''
        SELECT 
            e.employee_key,
            e.employee_id,
            e.department,
            e.job_title,
            e.job_level,
            e.tenure_months,
            e.manager_employee_id,
            -- Latest engagement
            eng_latest.overall_engagement_score as current_engagement,
            eng_prev.overall_engagement_score as previous_engagement,
            -- Latest performance
            perf_latest.numeric_rating as current_performance,
            perf_prev.numeric_rating as previous_performance,
            -- Compensation
            comp.comp_ratio,
            comp.months_since_last_raise,
            comp.base_salary,
            -- Manager metrics
            mgr_emp.tenure_months as manager_tenure,
            mgr_turnover.turnover_count as manager_turnover_12m,
            mgr_eng.avg_team_engagement as manager_team_engagement
        FROM hr.dim_employees e
        LEFT JOIN (
            SELECT employee_key, overall_engagement_score,
                   ROW_NUMBER() OVER (PARTITION BY employee_key ORDER BY survey_date_key DESC) as rn
            FROM hr.fact_engagement
        ) eng_latest ON e.employee_key = eng_latest.employee_key AND eng_latest.rn = 1
        LEFT JOIN (
            SELECT employee_key, overall_engagement_score,
                   ROW_NUMBER() OVER (PARTITION BY employee_key ORDER BY survey_date_key DESC) as rn
            FROM hr.fact_engagement
        ) eng_prev ON e.employee_key = eng_prev.employee_key AND eng_prev.rn = 2
        LEFT JOIN (
            SELECT employee_key, numeric_rating,
                   ROW_NUMBER() OVER (PARTITION BY employee_key ORDER BY review_date_key DESC) as rn
            FROM hr.fact_performance
        ) perf_latest ON e.employee_key = perf_latest.employee_key AND perf_latest.rn = 1
        LEFT JOIN (
            SELECT employee_key, numeric_rating,
                   ROW_NUMBER() OVER (PARTITION BY employee_key ORDER BY review_date_key DESC) as rn
            FROM hr.fact_performance
        ) perf_prev ON e.employee_key = perf_prev.employee_key AND perf_prev.rn = 2
        LEFT JOIN (
            SELECT employee_key, comp_ratio, months_since_last_raise, base_salary,
                   ROW_NUMBER() OVER (PARTITION BY employee_key ORDER BY effective_date_key DESC) as rn
            FROM hr.fact_compensation
        ) comp ON e.employee_key = comp.employee_key AND comp.rn = 1
        LEFT JOIN hr.dim_employees mgr_emp 
            ON e.manager_employee_id = mgr_emp.employee_id AND mgr_emp.is_current = 1
        LEFT JOIN (
            SELECT t.manager_employee_id, COUNT(*) as turnover_count
            FROM hr.fact_turnover t
            WHERE t.termination_date_key >= CAST(FORMAT(DATEADD(MONTH, -12, GETDATE()), 'yyyyMMdd') AS INT)
            AND t.termination_type = 'Voluntary'
            GROUP BY t.manager_employee_id
        ) mgr_turnover ON e.manager_employee_id = mgr_turnover.manager_employee_id
        LEFT JOIN (
            SELECT d.manager_employee_id, AVG(eng.overall_engagement_score) as avg_team_engagement
            FROM hr.dim_employees d
            JOIN (
                SELECT employee_key, overall_engagement_score,
                       ROW_NUMBER() OVER (PARTITION BY employee_key ORDER BY survey_date_key DESC) as rn
                FROM hr.fact_engagement
            ) eng ON d.employee_key = eng.employee_key AND eng.rn = 1
            WHERE d.is_current = 1 AND d.is_active = 1
            GROUP BY d.manager_employee_id
        ) mgr_eng ON e.manager_employee_id = mgr_eng.manager_employee_id
        WHERE e.is_current = 1 AND e.is_active = 1
    ''')
    
    employees = cursor.fetchall()
    columns = [desc[0] for desc in cursor.description]
    
    processed = 0
    for row in employees:
        emp = dict(zip(columns, row))
        
        # Calculate each risk factor
        tenure_score, tenure_factor = calculate_tenure_risk(emp['tenure_months'])
        engagement_score, engagement_factor = calculate_engagement_risk(
            emp['current_engagement'], emp['previous_engagement'], company_avg_engagement
        )
        comp_score, comp_factor = calculate_compensation_risk(
            emp['comp_ratio'], emp['months_since_last_raise'], emp['base_salary']
        )
        manager_score, manager_factor = calculate_manager_risk(
            emp['manager_tenure'], emp['manager_turnover_12m'], emp['manager_team_engagement']
        )
        performance_score, performance_factor = calculate_performance_risk(
            emp['current_performance'], emp['previous_performance']
        )
        # Promotion and org change would come from additional data sources
        # Using defaults for initial deployment
        promotion_score, promotion_factor = 5, 'Promotion data not yet available'
        org_change_score, org_change_factor = 2, 'Org change data not yet available'
        wlb_score = 3  # Default until WLB survey dimension is populated
        
        # Calculate composite score
        total_score = (
            tenure_score + engagement_score + comp_score + 
            manager_score + performance_score + promotion_score + 
            org_change_score + wlb_score
        )
        total_score = min(total_score, 100)
        
        risk_tier = assign_risk_tier(total_score)
        
        # Get top 3 risk factors
        factor_scores = [
            (tenure_score, tenure_factor),
            (engagement_score, engagement_factor),
            (comp_score, comp_factor),
            (manager_score, manager_factor),
            (performance_score, performance_factor),
            (promotion_score, promotion_factor),
            (org_change_score, org_change_factor),
        ]
        top_factors = get_top_factors(factor_scores, 3)
        
        # Insert today's risk score snapshot (today's prior rows were cleared above)
        cursor.execute(
            '''INSERT INTO hr.retention_risk (
                employee_key, calculated_date, risk_score, risk_tier,
                top_risk_factor_1, top_risk_factor_2, top_risk_factor_3,
                lattice_flight_risk, model_version
            ) VALUES (?, CAST(GETDATE() AS DATE), ?, ?, ?, ?, ?, ?, ?)''',
            emp['employee_key'],
            round(total_score, 2),
            risk_tier,
            top_factors[0] if len(top_factors) > 0 else None,
            top_factors[1] if len(top_factors) > 1 else None,
            top_factors[2] if len(top_factors) > 2 else None,
            None,  # Populated when Lattice flight risk API is available
            MODEL_VERSION
        )
        processed += 1
    
    conn.commit()
    
    # Log summary statistics
    cursor.execute('''
        SELECT risk_tier, COUNT(*) as cnt 
        FROM hr.retention_risk 
        WHERE calculated_date = CAST(GETDATE() AS DATE)
        GROUP BY risk_tier
    ''')
    summary = {row[0]: row[1] for row in cursor.fetchall()}
    
    cursor.close()
    conn.close()
    
    result_msg = f'Risk scoring complete. Processed {processed} employees. Distribution: {json.dumps(summary)}'
    logging.info(result_msg)
    
    if req:
        return func.HttpResponse(result_msg, status_code=200)
    return result_msg

# function.json (timer trigger version):
# {
#   "scriptFile": "__init__.py",
#   "bindings": [
#     {
#       "name": "mytimer",
#       "type": "timerTrigger",
#       "direction": "in",
#       "schedule": "0 0 3 * * *"
#     }
#   ]
# }

# function.json (HTTP trigger version for manual runs):
# {
#   "scriptFile": "__init__.py",
#   "bindings": [
#     {
#       "name": "req",
#       "type": "httpTrigger",
#       "direction": "in",
#       "authLevel": "function",
#       "methods": ["post"]
#     },
#     {
#       "name": "$return",
#       "type": "http",
#       "direction": "out"
#     }
#   ]
# }
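
To sanity-check the weights before go-live, the scoring functions can be exercised directly, for example in a local unit test. All inputs below are hypothetical and purely illustrative:

Worked example: composite risk score calculation
python
# Hypothetical employee: 18 months tenure, engagement dropped from 3.3 to
# 2.8 against a 3.6 company average, comp ratio 0.87 with 20 months since
# the last raise, a manager in seat 4 months with 2 recent departures, and
# a flat 3.5 performance rating.

tenure_s, _ = calculate_tenure_risk(18)                   # 20 (1-2 year window)
eng_s, _ = calculate_engagement_risk(2.8, 3.3, 3.6)       # 15 + 3 + 2 = 20
comp_s, _ = calculate_compensation_risk(0.87, 20, 95000)  # 8 + 2 = 10
mgr_s, _ = calculate_manager_risk(4, 2, 3.4)              # 4 + 2 = 6
perf_s, _ = calculate_performance_risk(3.5, 3.5)          # 0

# Defaults match the initial-deployment values in main(): promotion=5,
# org change=2, work-life balance=3
total = tenure_s + eng_s + comp_s + mgr_s + perf_s + 5 + 2 + 3
print(total, assign_risk_tier(total))  # 66 High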

Turnover Pattern Analysis DAX Measures

Type: skill A comprehensive library of DAX measures for Power BI that calculate key HR turnover and retention metrics. These measures power the executive dashboards and enable time-intelligence analysis (YoY comparison, rolling averages, trend detection). Includes annualized turnover rate, voluntary/involuntary split, cost of turnover estimation, tenure band analysis, and engagement-to-turnover correlation.

Implementation:

Power BI DAX Measures — HR Retention Analytics
dax
// ==============================================
// POWER BI DAX MEASURES — HR RETENTION ANALYTICS
// ==============================================
// Add these measures to the Power BI data model
// Assumes star schema with fact_turnover, fact_engagement,
// dim_employees, dim_date tables as defined in Step 3
// ==============================================

// --- HEADCOUNT MEASURES ---

Active Headcount = 
    COUNTROWS(
        FILTER(dim_employees, dim_employees[is_active] = TRUE() && dim_employees[is_current] = TRUE())
    )

Avg Headcount Period = 
    VAR _StartHC = CALCULATE([Active Headcount], FIRSTDATE(dim_date[full_date]))
    VAR _EndHC = CALCULATE([Active Headcount], LASTDATE(dim_date[full_date]))
    RETURN DIVIDE(_StartHC + _EndHC, 2)

// --- TURNOVER MEASURES ---

Total Terminations = 
    COUNTROWS(fact_turnover)

Voluntary Terminations = 
    CALCULATE(
        COUNTROWS(fact_turnover),
        fact_turnover[termination_type] = "Voluntary"
    )

Involuntary Terminations = 
    CALCULATE(
        COUNTROWS(fact_turnover),
        fact_turnover[termination_type] = "Involuntary"
    )

Regrettable Terminations = 
    CALCULATE(
        COUNTROWS(fact_turnover),
        fact_turnover[was_regrettable] = TRUE()
    )

Turnover Rate = 
    DIVIDE([Total Terminations], [Avg Headcount Period], 0)

Voluntary Turnover Rate = 
    DIVIDE([Voluntary Terminations], [Avg Headcount Period], 0)

Annualized Turnover Rate = 
    VAR _Months = DISTINCTCOUNT(dim_date[month])
    VAR _MonthlyRate = DIVIDE([Total Terminations], [Avg Headcount Period] * _Months, 0)
    RETURN _MonthlyRate * 12

Rolling 12M Turnover = 
    VAR _EndDate = MAX(dim_date[full_date])
    VAR _StartDate = EDATE(_EndDate, -12)
    VAR _Terms = CALCULATE(
        COUNTROWS(fact_turnover),
        DATESBETWEEN(dim_date[full_date], _StartDate, _EndDate)
    )
    VAR _AvgHC = CALCULATE([Avg Headcount Period], DATESBETWEEN(dim_date[full_date], _StartDate, _EndDate))
    RETURN DIVIDE(_Terms, _AvgHC, 0)

Turnover Rate YoY Change = 
    VAR _CurrentYear = [Annualized Turnover Rate]
    VAR _PriorYear = CALCULATE([Annualized Turnover Rate], SAMEPERIODLASTYEAR(dim_date[full_date]))
    RETURN _CurrentYear - _PriorYear

// --- COST OF TURNOVER ---

Estimated Turnover Cost = 
    // Industry benchmark: cost of replacing an employee = 0.5x to 2x annual salary
    // Using 1x as conservative default; adjust multiplier per client's industry
    VAR _CostMultiplier = 1.0
    RETURN 
        SUMX(
            fact_turnover,
            fact_turnover[salary_at_termination] * _CostMultiplier
        )

Avg Cost Per Termination = 
    DIVIDE([Estimated Turnover Cost], [Total Terminations], 0)

Regrettable Turnover Cost = 
    VAR _CostMultiplier = 1.5  // Higher cost for regrettable departures
    RETURN
        SUMX(
            FILTER(fact_turnover, fact_turnover[was_regrettable] = TRUE()),
            fact_turnover[salary_at_termination] * _CostMultiplier
        )

// --- TENURE ANALYSIS ---

Avg Tenure at Termination = 
    AVERAGE(fact_turnover[tenure_at_termination_months])

Avg Current Tenure = 
    AVERAGE(dim_employees[tenure_months])

First Year Turnover Rate = 
    VAR _FirstYearTerms = CALCULATE(
        COUNTROWS(fact_turnover),
        fact_turnover[tenure_at_termination_months] < 12
    )
    VAR _NewHires = CALCULATE(
        COUNTROWS(dim_employees),
        dim_employees[tenure_months] < 12
    ) + _FirstYearTerms
    RETURN DIVIDE(_FirstYearTerms, _NewHires, 0)

// --- ENGAGEMENT MEASURES ---

Avg Engagement Score = 
    AVERAGE(fact_engagement[overall_engagement_score])

eNPS = 
    VAR _Promoters = CALCULATE(
        COUNTROWS(fact_engagement),
        fact_engagement[recommendation_score] >= 9
    )
    VAR _Detractors = CALCULATE(
        COUNTROWS(fact_engagement),
        fact_engagement[recommendation_score] <= 6
    )
    VAR _Total = COUNTROWS(fact_engagement)
    RETURN DIVIDE(_Promoters - _Detractors, _Total, 0) * 100

Engagement Score Trend = 
    VAR _CurrentAvg = [Avg Engagement Score]
    VAR _PreviousAvg = CALCULATE(
        [Avg Engagement Score],
        PREVIOUSQUARTER(dim_date[full_date])
    )
    RETURN _CurrentAvg - _PreviousAvg

Manager Effectiveness Score = 
    AVERAGE(fact_engagement[manager_effectiveness_score])

Career Development Score = 
    AVERAGE(fact_engagement[career_development_score])

// --- RETENTION RISK MEASURES ---

High Risk Employees = 
    CALCULATE(
        COUNTROWS(retention_risk),
        retention_risk[risk_tier] IN {"High", "Critical"}
    )

High Risk Percentage = 
    DIVIDE([High Risk Employees], [Active Headcount], 0)

Avg Risk Score = 
    AVERAGE(retention_risk[risk_score])

Critical Risk Count = 
    CALCULATE(
        COUNTROWS(retention_risk),
        retention_risk[risk_tier] = "Critical"
    )

Risk Score KPI Color = 
    SWITCH(
        TRUE(),
        [High Risk Percentage] > 0.25, "Red",
        [High Risk Percentage] > 0.15, "Yellow",
        "Green"
    )

// --- MANAGER ANALYTICS ---

Manager Turnover Rate = 
    // Use in context of a manager slicer/filter
    VAR _ManagerTerms = [Total Terminations]
    VAR _ManagerHC = CALCULATE(
        COUNTROWS(dim_employees),
        dim_employees[is_active] = TRUE()
    )
    RETURN DIVIDE(_ManagerTerms, _ManagerHC, 0)

Span of Control = 
    CALCULATE(
        COUNTROWS(dim_employees),
        dim_employees[is_active] = TRUE()
    )

Retention Alert Adaptive Card Template

Type: prompt Microsoft Teams Adaptive Card JSON template used by Power Automate to send rich, interactive retention risk alerts to HR managers and HRBPs. The card displays the at-risk employee's key metrics, top risk factors, and action buttons for scheduling a check-in or viewing the full analytics profile.

Implementation:

Microsoft Teams Adaptive Card JSON payload for retention risk alerts
json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.4",
  "body": [
    {
      "type": "ColumnSet",
      "columns": [
        {
          "type": "Column",
          "width": "auto",
          "items": [
            {
              "type": "Image",
              "url": "https://cdn-icons-png.flaticon.com/512/6897/6897039.png",
              "size": "Small",
              "style": "Default"
            }
          ]
        },
        {
          "type": "Column",
          "width": "stretch",
          "items": [
            {
              "type": "TextBlock",
              "text": "⚠️ Retention Risk Alert",
              "weight": "Bolder",
              "size": "Medium",
              "color": "@{if(equals(triggerBody()?['risk_tier'],'Critical'),'Attention','Warning')}"
            },
            {
              "type": "TextBlock",
              "text": "An employee on your team has been flagged as @{triggerBody()?['risk_tier']} retention risk.",
              "isSubtle": true,
              "wrap": true,
              "spacing": "None"
            }
          ]
        }
      ]
    },
    {
      "type": "FactSet",
      "facts": [
        {
          "title": "Employee",
          "value": "@{triggerBody()?['first_name']} @{triggerBody()?['last_name']}"
        },
        {
          "title": "Department",
          "value": "@{triggerBody()?['department']}"
        },
        {
          "title": "Job Title",
          "value": "@{triggerBody()?['job_title']}"
        },
        {
          "title": "Risk Score",
          "value": "@{triggerBody()?['risk_score']}/100 (@{triggerBody()?['risk_tier']})"
        },
        {
          "title": "Tenure",
          "value": "@{triggerBody()?['tenure_months']} months"
        }
      ],
      "separator": true
    },
    {
      "type": "TextBlock",
      "text": "Top Risk Factors:",
      "weight": "Bolder",
      "spacing": "Medium"
    },
    {
      "type": "TextBlock",
      "text": "1. @{triggerBody()?['top_risk_factor_1']}",
      "wrap": true,
      "color": "Attention"
    },
    {
      "type": "TextBlock",
      "text": "2. @{triggerBody()?['top_risk_factor_2']}",
      "wrap": true,
      "color": "Warning",
      "isVisible": "@{not(empty(triggerBody()?['top_risk_factor_2']))}"
    },
    {
      "type": "TextBlock",
      "text": "3. @{triggerBody()?['top_risk_factor_3']}",
      "wrap": true,
      "color": "Warning",
      "isVisible": "@{not(empty(triggerBody()?['top_risk_factor_3']))}"
    },
    {
      "type": "TextBlock",
      "text": "Recommended Action: Schedule a 1:1 check-in within the next 48 hours to discuss career development, workload, and engagement. Review the full retention profile for context before the meeting.",
      "wrap": true,
      "spacing": "Medium",
      "isSubtle": true
    }
  ],
  "actions": [
    {
      "type": "Action.OpenUrl",
      "title": "📅 Schedule 1:1 Check-in",
      "url": "https://outlook.office.com/calendar/compose?subject=1:1%20Check-in%20with%20@{triggerBody()?['first_name']}&attendees=@{triggerBody()?['email']}&body=Regular%20check-in%20to%20discuss%20career%20development%20and%20engagement."
    },
    {
      "type": "Action.OpenUrl",
      "title": "📊 View Full Profile in Lattice",
      "url": "https://CLIENTNAME.latticehq.com/people/@{triggerBody()?['employee_id']}"
    },
    {
      "type": "Action.OpenUrl",
      "title": "📈 Open Retention Dashboard",
      "url": "https://app.powerbi.com/groups/WORKSPACE_ID/reports/REPORT_ID/ReportSection_RetentionRisk"
    }
  ]
}
Azure SQL query to retrieve high and critical retention risk employees — used in Power Automate 'Execute a SQL query' action (Connection: db-people-analytics)
sql
SELECT e.employee_key, e.employee_id, e.first_name, e.last_name,
       e.email, e.department, e.job_title, e.tenure_months,
       e.manager_employee_id, r.risk_score, r.risk_tier,
       r.top_risk_factor_1, r.top_risk_factor_2, r.top_risk_factor_3
FROM hr.retention_risk r
JOIN hr.dim_employees e ON r.employee_key = e.employee_key
WHERE r.risk_tier IN ('High', 'Critical')
  AND r.calculated_date = CAST(GETDATE() AS DATE)
  AND e.is_active = 1
1. Create a new Scheduled Cloud Flow (daily at 6 AM local time)
2. Add 'Execute a SQL query' action — Connection: Azure SQL - db-people-analytics — use the SQL query above
3. Add 'Apply to each' on the query results
4. Inside the loop: 'Get user profile (V2)' from Office 365 Users, using manager_employee_id to get the manager's email
5. Add 'Post adaptive card in a chat or channel' (Teams) — Post as: Flow bot | Post in: Chat with Flow bot | Recipient: manager email from step 4 | Adaptive Card: paste the JSON template above with dynamic content
6. After the loop: add 'Send an email (V2)' to the HR Director with a summary table of all high/critical risk employees
7. Add a 'Run after' failure handler: send an email to the MSP support mailbox with error details

Exit Interview Feedback Loop Workflow

Type: workflow Automated workflow that captures exit interview data when an employee terminates, links it back to their pre-departure retention risk score and engagement history, and generates an accuracy report for the retention risk model. This feedback loop is essential for model calibration — it tells us whether employees who left were correctly identified as high risk, enabling continuous improvement of the scoring weights.

Implementation:

Exit Interview Feedback Loop
python
# Workflow: Exit Interview Feedback Loop
# Trigger: Finch webhook for employment status change OR
#          manual entry when HR records a termination
# Platform: Power Automate + Azure Function

# Part 1: Power Automate Flow — Trigger and Data Collection
# =========================================================
# Flow Name: 'Exit Interview Capture & Model Feedback'
# 
# Step 1: Trigger — When an item is modified (SharePoint list: 'Terminations')
#   OR Recurrence (daily) with SQL query for new terminations
#   Query: SELECT e.employee_id, e.first_name, e.last_name, e.department,
#          e.termination_date, e.termination_reason, e.termination_type
#          FROM hr.dim_employees e
#          WHERE e.termination_date >= DATEADD(DAY, -1, GETDATE())
#          AND e.is_current = 1
#
# Step 2: For each new termination:
#   a. Create a Microsoft Forms survey link pre-populated with employee info
#   b. Send exit interview invitation email to the departing employee
#   c. Send notification to HR Manager: 'Please conduct exit interview with [Name]'
#   d. Set a 7-day reminder if exit interview not completed
#
# Step 3: When exit interview form is submitted:
#   a. Parse form responses
#   b. Call Azure Function 'process_exit_feedback' with responses

# Part 2: Azure Function — Model Feedback Processing
# ====================================================
# File: process_exit_feedback/__init__.py

import azure.functions as func
import pyodbc
import json
import logging
from datetime import datetime
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Processing exit interview feedback')
    
    body = req.get_json()
    employee_id = body.get('employee_id')
    
    # Exit interview data points
    primary_reason = body.get('primary_departure_reason')  # Career growth, Compensation, Manager, Work-life balance, Relocation, Other
    would_return = body.get('would_return')  # Yes/No/Maybe
    manager_rating = body.get('manager_satisfaction', 0)  # 1-5
    growth_rating = body.get('career_growth_satisfaction', 0)  # 1-5
    comp_rating = body.get('compensation_satisfaction', 0)  # 1-5
    culture_rating = body.get('culture_satisfaction', 0)  # 1-5
    recommendation = body.get('would_recommend', 0)  # 0-10 (exit eNPS)
    free_text = body.get('additional_comments', '')
    was_regrettable = body.get('was_regrettable', False)  # HR classification
    
    kv_url = os.environ['KEY_VAULT_URL']
    credential = DefaultAzureCredential()
    kv_client = SecretClient(vault_url=kv_url, credential=credential)
    sql_conn_str = kv_client.get_secret('SqlConnectionString').value
    
    conn = pyodbc.connect(sql_conn_str)
    cursor = conn.cursor()
    
    # Get the employee's last retention risk score before departure
    cursor.execute('''
        SELECT TOP 1 r.risk_score, r.risk_tier, 
               r.top_risk_factor_1, r.top_risk_factor_2, r.top_risk_factor_3
        FROM hr.retention_risk r
        JOIN hr.dim_employees e ON r.employee_key = e.employee_key
        WHERE e.employee_id = ?
        ORDER BY r.calculated_date DESC
    ''', employee_id)
    
    risk_row = cursor.fetchone()
    pre_departure_score = risk_row[0] if risk_row else None
    pre_departure_tier = risk_row[1] if risk_row else None
    predicted_factor_1 = risk_row[2] if risk_row else None
    
    # Store exit interview data
    cursor.execute('''
        INSERT INTO hr.exit_interviews (
            employee_id, interview_date, primary_reason,
            would_return, manager_rating, growth_rating,
            comp_rating, culture_rating, exit_enps,
            was_regrettable, pre_departure_risk_score,
            pre_departure_risk_tier, predicted_top_factor,
            actual_reason_matches_prediction
        ) VALUES (?, GETDATE(), ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    ''',
        employee_id, primary_reason,
        would_return, manager_rating, growth_rating,
        comp_rating, culture_rating, recommendation,
        was_regrettable, pre_departure_score,
        pre_departure_tier, predicted_factor_1,
        1 if _reason_matches_factor(primary_reason, predicted_factor_1) else 0
    )
    
    # Update fact_turnover with exit interview enrichment
    cursor.execute('''
        UPDATE t SET
            t.was_regrettable = ?,
            t.last_engagement_score = (
                SELECT TOP 1 overall_engagement_score
                FROM hr.fact_engagement fe
                JOIN hr.dim_employees e ON fe.employee_key = e.employee_key
                WHERE e.employee_id = ?
                ORDER BY fe.survey_date_key DESC
            )
        FROM hr.fact_turnover t
        JOIN hr.dim_employees e ON t.employee_key = e.employee_key
        WHERE e.employee_id = ?
    ''', was_regrettable, employee_id, employee_id)
    
    conn.commit()
    
    # Generate model accuracy update
    cursor.execute('''
        SELECT 
            COUNT(*) as total_exits,
            SUM(CASE WHEN pre_departure_risk_tier IN ('High', 'Critical') THEN 1 ELSE 0 END) as correctly_flagged,
            SUM(CASE WHEN actual_reason_matches_prediction = 1 THEN 1 ELSE 0 END) as reason_match,
            AVG(pre_departure_risk_score) as avg_pre_departure_score
        FROM hr.exit_interviews
        WHERE interview_date >= DATEADD(MONTH, -6, GETDATE())
    ''')
    accuracy = cursor.fetchone()
    
    accuracy_report = {
        'total_exits_6m': accuracy[0],
        'correctly_flagged_high_risk': accuracy[1],
        'detection_rate': round(accuracy[1] / accuracy[0] * 100, 1) if accuracy[0] > 0 else 0,
        'reason_prediction_accuracy': round(accuracy[2] / accuracy[0] * 100, 1) if accuracy[0] > 0 else 0,
        'avg_pre_departure_score': round(accuracy[3], 1) if accuracy[3] else 0
    }
    
    cursor.close()
    conn.close()
    
    logging.info(f'Exit feedback processed for {employee_id}. Model accuracy: {json.dumps(accuracy_report)}')
    
    return func.HttpResponse(
        json.dumps({'status': 'success', 'accuracy_report': accuracy_report}),
        mimetype='application/json'
    )

def _reason_matches_factor(actual_reason, predicted_factor):
    """Check if the actual departure reason aligns with the predicted risk factor."""
    if not actual_reason or not predicted_factor:
        return False
    reason_lower = actual_reason.lower()
    factor_lower = predicted_factor.lower()
    
    mappings = {
        'compensation': ['comp', 'salary', 'pay', 'market', 'raise'],
        'career growth': ['promotion', 'career', 'growth', 'stagnation', 'development'],
        'manager': ['manager', 'leadership', 'supervisor', 'team'],
        'work-life balance': ['balance', 'workload', 'burnout', 'hours'],
        'engagement': ['engagement', 'sentiment', 'disengaged'],
    }
    
    for category, keywords in mappings.items():
        if any(kw in reason_lower for kw in keywords):
            if any(kw in factor_lower for kw in keywords):
                return True
    return False

# SQL DDL for exit_interviews table (add to Step 3):
# CREATE TABLE hr.exit_interviews (
#     interview_key INT IDENTITY(1,1) PRIMARY KEY,
#     employee_id NVARCHAR(50),
#     interview_date DATETIME2,
#     primary_reason NVARCHAR(200),
#     would_return NVARCHAR(20),
#     manager_rating DECIMAL(3,1),
#     growth_rating DECIMAL(3,1),
#     comp_rating DECIMAL(3,1),
#     culture_rating DECIMAL(3,1),
#     exit_enps INT,
#     was_regrettable BIT,
#     pre_departure_risk_score DECIMAL(5,2),
#     pre_departure_risk_tier NVARCHAR(20),
#     predicted_top_factor NVARCHAR(200),
#     actual_reason_matches_prediction BIT,
#     created_at DATETIME2 DEFAULT GETDATE()
# );

Testing & Validation

  • HRIS Data Quality Test: Run the data quality audit script from Step 1 against the Azure SQL warehouse after initial ETL sync. Verify that at least 95% of active employee records have non-null values for: employee_id, hire_date, department, manager_employee_id, and job_title. Document any gaps and verify they match the original HRIS export (confirming no data loss in transit). The non-null threshold check is automated by the validation helper after this list.
  • Finch API Integration Test: Execute a manual Finch directory pull and compare the employee count returned against the HRIS admin panel's active employee count. Counts should match within 1% (small discrepancies from timing of same-day hires/terminations are acceptable). Verify at least 3 individual employee records match field-for-field between Finch API response and HRIS source.
  • Azure SQL Connectivity Test: From each HR workstation, open Azure Data Studio and connect to sql-hranalytics-CLIENT.database.windows.net using Azure AD authentication. Execute SELECT COUNT(*) FROM hr.dim_employees WHERE is_current = 1 and verify it returns the expected active headcount. This confirms firewall rules, VLAN routing, and Azure AD auth are working end-to-end.
  • Lattice SSO Login Test: From an HR workstation, navigate to clientname.latticehq.com. Verify that clicking 'Sign in with SSO' redirects to the Microsoft login page, authenticates via Azure AD, and returns the user to Lattice with the correct role and department visibility. Test with at least 3 users: HR Admin (full access), HR Manager (department-scoped), and a regular employee (self-view only).
  • SCIM Provisioning Test: Create a test user in Azure AD, assign them to the 'Lattice-Users' group, and verify within 40 minutes that the user appears in Lattice with correct name, email, and department. Then remove the user from the group and verify they are deprovisioned in Lattice within the next sync cycle.
  • Engagement Survey End-to-End Test: Launch the pilot survey in Lattice targeting 5-10 HR team members. Complete the survey from at least 3 different accounts. Verify: (a) survey email was received in Outlook, (b) Teams notification appeared, (c) survey responses are visible in Lattice analytics, (d) anonymity threshold prevents individual identification for groups under 5 respondents.
  • ETL Pipeline Execution Test: Manually trigger the sync_hris_data Azure Function via HTTP endpoint. Check Azure Function logs for successful completion. Query hr.dim_employees and verify record count and sample data accuracy. Then trigger sync_engagement_data and calculate_retention_risk functions sequentially. Verify hr.retention_risk table is populated with risk scores for all active employees; a scripted version of this sequence appears in the validation helper after this list.
  • Power BI Dashboard Validation Test: Open the published Power BI report in the browser. Verify: (a) all 7 dashboard pages load without errors, (b) active headcount KPI matches HRIS source, (c) turnover rate calculation is correct by manually calculating from the fact_turnover table, (d) engagement scores match Lattice analytics, (e) retention risk heatmap shows the expected distribution. Cross-reference 5 randomly selected data points against source systems.
  • Row-Level Security Test: Log into Power BI Service as 3 different users: HR Director (should see all departments), Engineering Manager (should see only Engineering department data), and Finance HRBP (should see only Finance data). Verify each user sees only their authorized scope. Attempt to access another department's data by modifying URL parameters — this should fail.
  • Power Automate Alert Test: Manually insert a test record into hr.retention_risk with risk_tier = 'Critical' and a valid employee_key. Wait for the next scheduled Power Automate flow run (or trigger manually). Verify: (a) the manager receives a Teams adaptive card with correct employee details and risk factors, (b) the action buttons (Schedule 1:1, View Profile, Open Dashboard) link to the correct URLs, (c) the HR Director receives the summary email. Delete the test record after validation.
  • Network Security Test: From an HR workstation on VLAN 120, verify HTTPS access to latticehq.com, app.powerbi.com, api.tryfinch.com, and the Azure SQL endpoint. Verify that non-HTTPS traffic to these endpoints is blocked. From a non-HR workstation on a different VLAN, attempt to access the Azure SQL endpoint — this should be blocked by firewall rules.
  • Compliance Validation: Review Azure SQL audit logs to confirm all access is logged. Verify TDE encryption is active on the database. Confirm that the Lattice anonymity threshold is set to 5 minimum responses. Review Power BI sharing settings to ensure reports are not shared with 'entire organization'. Document all findings in the DPIA compliance checklist.
  • Retention Risk Scoring Accuracy Test: Select 10 employees with known characteristics (e.g., a new hire at 8 months with low engagement, a senior employee with no raise in 24 months, a top performer with a new manager). Manually calculate their expected risk scores using the documented weight formulas. Compare against the actual scores in hr.retention_risk. Scores should match within ±2 points (rounding differences acceptable).
  • Disaster Recovery Test: Delete and recreate the Azure Function App from the source code repository. Verify the ETL pipeline resumes correctly on the next scheduled trigger. Restore the Power BI .pbix file from the SharePoint backup and republish — verify dashboards render correctly with current data.
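
The HRIS data quality and ETL pipeline tests above can be scripted for repeatable runs. The following is a minimal sketch: the function URLs, key, and connection string are environment placeholders, and it assumes the three sync/scoring functions are exposed for manual HTTP runs, as the ETL test above presumes.

Post-deployment validation helper
python
# Run from an HR analytics workstation after deployment or a DR rebuild.
# All connection values below are placeholders for your environment.

import pyodbc
import requests

SQL_CONN_STR = '<Azure SQL ODBC connection string>'          # placeholder
FUNC_BASE = 'https://<function-app>.azurewebsites.net/api'   # placeholder
FUNC_KEY = '<function key from the Azure portal>'            # placeholder

REQUIRED_FIELDS = ['employee_id', 'hire_date', 'department',
                   'manager_employee_id', 'job_title']

def trigger(name: str) -> None:
    """Invoke an HTTP-triggered function and fail fast on errors."""
    resp = requests.post(f'{FUNC_BASE}/{name}', params={'code': FUNC_KEY}, timeout=600)
    resp.raise_for_status()
    print(f'{name}: HTTP {resp.status_code}')

def validate(cursor) -> None:
    # Data quality: each required field should be >= 95% non-null (active staff)
    cursor.execute('SELECT COUNT(*) FROM hr.dim_employees '
                   'WHERE is_current = 1 AND is_active = 1')
    headcount = cursor.fetchval()
    for field in REQUIRED_FIELDS:
        cursor.execute(f'SELECT COUNT(*) FROM hr.dim_employees '
                       f'WHERE is_current = 1 AND is_active = 1 AND {field} IS NOT NULL')
        pct = (cursor.fetchval() / headcount * 100) if headcount else 0.0
        print(f'{field}: {pct:.1f}% non-null [{"OK" if pct >= 95 else "FAIL"}]')

    # Risk scoring: every active employee should have a score dated today
    cursor.execute('SELECT COUNT(*) FROM hr.retention_risk '
                   'WHERE calculated_date = CAST(GETDATE() AS DATE)')
    scored = cursor.fetchval()
    print(f'Headcount {headcount}, scored today {scored} '
          f'[{"OK" if scored == headcount else "FAIL"}]')

if __name__ == '__main__':
    for fn in ('sync_hris_data', 'sync_engagement_data', 'calculate_retention_risk'):
        trigger(fn)
    conn = pyodbc.connect(SQL_CONN_STR)
    validate(conn.cursor())
    conn.close()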

Client Handoff

The client handoff should be conducted as a formal 2-hour session with the HR Director, CHRO (if available), HR Managers, and the designated HR Platform Administrator. Cover the following topics:

1. Platform Access & Navigation (30 min)

  • Walk through Lattice login via SSO, dashboard navigation, and key analytics views
  • Demonstrate Power BI report access via browser and Teams tab
  • Show how to access the retention risk heatmap and drill into individual employee risk profiles
  • Confirm all attendees can log in and see their authorized data scope

2. Survey Management Training (20 min)

  • Train the HR Admin on launching pulse surveys in Lattice (audience selection, scheduling, reminders)
  • Demonstrate how to customize survey questions from the question bank
  • Review the recommended survey cadence: annual full survey + quarterly pulses
  • Discuss survey fatigue prevention: keep pulses to 5-7 questions, vary the topics

3. Interpreting Analytics & Taking Action (30 min)

  • Walk through each Power BI dashboard page explaining what the metrics mean and what action to take
  • Demonstrate the retention risk adaptive card alert in Teams and the expected response workflow
  • Review the 'Retention Playbook' document: for each risk factor, what interventions are recommended
  • Practice: take a real high-risk employee profile and walk through the investigation and action planning process

4. Documentation Delivery (10 min)

Deliver the following documents (stored in SharePoint):

  • Platform Administration Guide (Lattice admin tasks, user management, survey setup)
  • Power BI Dashboard User Guide (navigation, filters, export, common questions)
  • Retention Risk Model Documentation (how scores are calculated, weight factors, tier definitions)
  • Data Dictionary (all tables, fields, and their sources)
  • Integration Architecture Diagram (showing data flow from HRIS → Finch → Azure SQL → Power BI)
  • DPIA / Compliance Documentation (data processing records, privacy notices, consent records)
  • MSP Support Escalation Guide (how to reach MSP support, SLA terms, what constitutes an emergency)
  • Vendor Contact List (Lattice support, Finch support, Microsoft support channels)

5. Success Criteria Review (15 min)

  • Confirm all success criteria are met: dashboards are live, surveys are functional, risk scoring is running, alerts are firing, RLS is enforced
  • Review baseline metrics: current turnover rate, engagement score, headcount, and risk distribution
  • Set 90-day targets: e.g., achieve 80%+ survey response rate, reduce high-risk employee percentage by 10%, complete manager action plans for all Critical-tier employees
  • Schedule the first Quarterly Business Review (QBR) for 6-8 weeks post-launch

6. Ongoing Support Model (15 min)

  • Explain the managed services agreement: what the MSP monitors, maintains, and supports
  • Review the escalation process for technical issues vs. analytics questions
  • Confirm the monthly check-in schedule with the HR Admin
  • Discuss the model calibration process: after 6 months of exit interview data, the MSP will review and adjust risk scoring weights

Maintenance

Monthly MSP Responsibilities

  • Monitor Azure Function execution logs for ETL pipeline failures (check Azure Monitor alerts weekly; resolve within 4-hour SLA for Critical alerts, 24-hour for Warning)
  • Verify Finch API connection health (access token validity, data freshness); re-authorize if connection drops
  • Review Power BI scheduled refresh success/failure logs; troubleshoot credential expiration issues
  • Monitor Azure SQL Database performance metrics (DTU usage, storage consumption, connection failures)
  • Validate data accuracy: spot-check 5 random employee records against HRIS source monthly
  • Review Power Automate flow run history for failures; fix broken flows within 24 hours
  • Apply security patches to Azure Functions runtime and Python dependencies (run pip-audit; apply updates quarterly)
  • Generate monthly system health report for client HR Admin

Quarterly MSP Responsibilities

  • Conduct Quarterly Business Review (QBR) with HR leadership: present retention trends, model accuracy metrics, and recommended actions
  • Review retention risk model accuracy using exit interview feedback loop data: compare predicted risk tiers against actual departures; if detection rate falls below 60%, investigate and adjust scoring weights
  • Update Power BI dashboards with any new metrics or visualizations requested by client
  • Review and rotate Azure Key Vault secrets (Finch token, SQL credentials)
  • Audit Row-Level Security configuration: verify access controls match current org structure (especially after reorgs)
  • Review Lattice platform updates and enable beneficial new features
  • Update compliance documentation if new state AI employment laws take effect (monitor Illinois HB 3773 effective Jan 2026, Colorado AI Act effective June 2026)
  • Review Azure costs and optimize (right-size SQL tier, verify Function auto-pause is working)

Semi-Annual MSP Responsibilities

  • Comprehensive security review: Azure SQL audit logs, access patterns, anomaly detection alerts
  • Bias audit of retention risk scoring model: analyze risk score distribution by protected classes (gender, age, race/ethnicity if available) to check for disparate impact; if NYC Local Law 144 applies, coordinate independent audit
  • Model retraining assessment: once 50+ voluntary terminations have occurred with exit interview data, evaluate upgrading from weighted scoring to ML-based model (Random Forest/XGBoost); present recommendation and cost estimate to client
  • Review data retention policies: purge retention risk scores older than 24 months for separated employees unless legal hold applies (see the purge sketch after this list)
  • Platform license true-up: verify Lattice and Power BI license counts match actual user base (adjust for headcount changes)
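
A minimal sketch for the data-retention purge above, assuming the Step 3 schema; confirm legal-hold status with the client before running.

Retention risk score purge script
python
# Deletes risk scores older than 24 months for separated employees only.
# A sketch: table and column names follow the Step 3 schema.

import pyodbc

PURGE_SQL = '''
DELETE r
FROM hr.retention_risk r
JOIN hr.dim_employees e ON r.employee_key = e.employee_key
WHERE e.is_active = 0
  AND r.calculated_date < DATEADD(MONTH, -24, GETDATE());
'''

def purge_separated_scores(conn_str: str) -> int:
    """Run the 24-month purge and return the number of rows removed."""
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    cursor.execute(PURGE_SQL)
    deleted = cursor.rowcount
    conn.commit()
    conn.close()
    return deleted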

Model Retraining Triggers (escalate to MSP data analytics team)

  • Detection rate (% of voluntary departures that were flagged High/Critical) drops below 50%
  • False positive rate (% of Critical-flagged employees still active after 12 months) exceeds 70% (both rates can be computed with the monitoring sketch after this list)
  • Major organizational change (M&A, RIF, leadership change) that invalidates historical patterns
  • Client expands to new geographies or business lines with different workforce dynamics
  • More than 6 months since last weight calibration
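
The two rate-based triggers can be monitored with a scheduled query against the warehouse. The following is a minimal sketch, assuming the Step 3 schema and the hr.exit_interviews table defined earlier; the 12-month false-positive window is approximated by a one-month band of scores from roughly a year ago.

Retraining trigger monitor script
python
# Computes the detection rate and false positive rate that feed the
# retraining triggers above. A sketch against the Step 3 schema.

import pyodbc

DETECTION_SQL = '''
SELECT CAST(SUM(CASE WHEN pre_departure_risk_tier IN ('High','Critical')
                     THEN 1 ELSE 0 END) AS FLOAT) / NULLIF(COUNT(*), 0)
FROM hr.exit_interviews
WHERE interview_date >= DATEADD(MONTH, -6, GETDATE());
'''

FALSE_POSITIVE_SQL = '''
-- Critical-flagged ~12 months ago who are still active today.
-- Joins on the current dimension row, so employees whose SCD2 version
-- changed since scoring drop out (an acceptable approximation).
SELECT CAST(SUM(CASE WHEN e.is_active = 1 THEN 1 ELSE 0 END) AS FLOAT)
       / NULLIF(COUNT(*), 0)
FROM hr.retention_risk r
JOIN hr.dim_employees e ON r.employee_key = e.employee_key AND e.is_current = 1
WHERE r.risk_tier = 'Critical'
  AND r.calculated_date BETWEEN DATEADD(MONTH, -13, GETDATE())
                            AND DATEADD(MONTH, -12, GETDATE());
'''

def check_retraining_triggers(conn_str: str) -> None:
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    detection = cursor.execute(DETECTION_SQL).fetchval() or 0.0
    false_positive = cursor.execute(FALSE_POSITIVE_SQL).fetchval() or 0.0
    conn.close()
    if detection < 0.50:
        print(f'ESCALATE: detection rate {detection:.0%} is below 50%')
    if false_positive > 0.70:
        print(f'ESCALATE: false positive rate {false_positive:.0%} exceeds 70%')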

SLA Considerations

  • Platform availability: 99.5% uptime for dashboards during business hours (M-F 7AM-7PM client timezone)
  • ETL pipeline: data freshness within 24 hours of HRIS source
  • Alert delivery: retention risk alerts delivered within 4 hours of daily risk calculation
  • Incident response: Critical issues (complete system outage, data breach) — 1 hour response, 4 hour resolution. High (dashboard errors, ETL failures) — 4 hour response, 24 hour resolution. Medium (feature requests, report modifications) — 1 business day response, 5 business day resolution.
  • Escalation path: Tier 1 (MSP helpdesk) → Tier 2 (MSP cloud engineer) → Tier 3 (MSP data analytics specialist) → Vendor support (Lattice, Microsoft, Finch)

Alternatives

15Five + Power BI Lightweight Stack (SMB 25-150 employees)

Replace Lattice with 15Five for engagement surveys and basic performance management. 15Five Engage at $4/user/month provides pulse surveys, action planning, and benchmarking — at roughly one-fifth the cost of Lattice. Pair with Power BI Pro for custom turnover dashboards pulling data directly from HRIS exports (CSV) rather than through Finch API. Skip the Azure SQL data warehouse and Azure Functions; instead use Power BI dataflows for lightweight ETL.

Microsoft Viva Suite (M365-Native Approach)

For clients deeply embedded in the Microsoft ecosystem, deploy Microsoft Viva Suite ($12/user/month) which includes Viva Glint (engagement surveys replacing Lattice), Viva Insights (workplace analytics from M365 usage patterns), and Viva Pulse (manager-initiated quick surveys). Combine with Power BI for custom dashboards. All data stays within the Microsoft tenant, simplifying compliance. No third-party survey tool needed.

Note

Best for: Clients already on M365 E3/E5 who want a single-vendor approach and are uncomfortable with third-party employee data sharing. Strong choice for MSPs that specialize in Microsoft stack.

Visier People Analytics (Enterprise-Grade)

For larger clients (500+ employees) or staffing agencies managing large workforces, deploy Visier — the market-leading purpose-built people analytics platform. Visier offers 2,000+ pre-built HR metrics, validated predictive attrition models, and deep benchmarking against industry peers. It replaces both the analytics platform AND the custom Power BI dashboard build, as Visier's native visualization is enterprise-grade.

Custom Open-Source ML Pipeline (Advanced/Academic)

For clients with in-house data science capability, replace the weighted scoring engine with an open-source ML attrition model (e.g., scikit-learn Random Forest or XGBoost, the same upgrade path referenced in the semi-annual model retraining assessment) trained on the historical termination, engagement, and compensation data already in the Azure SQL warehouse. This trades the explainability of the weighted model for potentially higher predictive accuracy, and is only viable once sufficient labeled voluntary terminations have accumulated (the 50+ threshold noted under Maintenance).

BambooHR + Culture Amp Combination (Mid-Market All-in-One)

For clients using BambooHR as their HRIS, combine it with Culture Amp for engagement analytics and predictive turnover modeling. BambooHR provides native turnover reporting and demographic dashboards (included in Pro/Elite tiers), while Culture Amp adds best-in-class engagement surveys, Text Analytics, and predictive flight risk models. The two integrate natively, eliminating the need for Finch API middleware.
