
Implementation Guide: Generate employee handbook updates and policy communications
Step-by-step implementation guide for deploying AI to generate employee handbook updates and policy communications for HR & Staffing clients.
Hardware Procurement
End-User Workstations (Existing Fleet Verification)
$0 — existing client fleet is almost always sufficient. If refresh needed: $800–$1,200 per unit MSP cost / $1,100–$1,600 suggested resale
HR staff workstations for accessing SaaS handbook platforms, Microsoft 365 Copilot, and AI chat interfaces. Requires 8GB+ RAM, SSD, modern evergreen browser. No specialized hardware needed since all AI inference happens cloud-side.
Network Assessment (Existing Infrastructure Verification)
$0 — verify existing 50 Mbps+ symmetric business broadband with <100ms latency to Azure/AWS endpoints
All AI processing occurs via cloud APIs. Adequate bandwidth and low latency are essential for responsive SaaS tool usage and API calls. Run a latency test to api.openai.com and Azure endpoints before kickoff.
Software Procurement
SixFifty Employee Handbook Builder
$399/year per client
Legal-compliance-focused handbook builder providing state-by-state policy generation for all 50 states plus D.C. Serves as the compliance scaffolding layer ensuring AI-generated content meets jurisdictional requirements. Used to generate baseline compliant policy templates that AI then customizes and enhances.
AirMason Handbook Platform
$999/year (Startup tier, up to 99 employees); $2,000–$10,000/year for enterprise tiers
AI-powered compliance updates, drag-and-drop editor, branded formatting. Primary handbook authoring, distribution, and acknowledgment platform. Provides the employee-facing interactive handbook experience. Choose AirMason OR SixFifty based on client needs — AirMason for design-forward handbooks, SixFifty for legal-compliance-first approach.
Blissbook Policy Management Platform
$249/year base; Premium Support $199/month; Custom API Access $89/month
Alternative to AirMason focused on policy versioning, distribution tracking, and e-signature acknowledgment. Best for regulated environments requiring strong audit trails. Use when client's primary need is policy distribution and compliance tracking rather than visual design.
OpenAI API — GPT-4.1
$2.00 per million input tokens / $8.00 per million output tokens. Typical monthly cost for HR handbook use: $5–$25/month
Primary content generation engine for drafting policy sections, rewriting existing policies in plain language, generating multi-state policy variants, and creating employee communications. 1M token context window supports ingesting entire existing handbooks for context-aware updates.
OpenAI API — GPT-4o mini
$0.15 per million input tokens / $0.60 per million output tokens. Typical monthly cost: $1–$5/month
Cost-optimized model for high-volume, lower-complexity tasks: generating FAQ answers, formatting policy summaries, creating email notification drafts, and powering the employee Q&A bot for routine questions.
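A quick budgeting sketch in Python. The helper is illustrative; its default rates are the $/1M-token prices listed above, so substitute current prices for whichever model you deploy.

```python
# Estimate monthly API spend from expected token volumes (millions of tokens).
# Default rates are the mini-tier $/1M-token prices quoted above (assumption:
# verify current pricing before quoting a client).
def monthly_cost(input_millions: float, output_millions: float,
                 in_rate: float = 0.15, out_rate: float = 0.60) -> float:
    """Cost in USD for a month's usage, given millions of tokens in and out."""
    return input_millions * in_rate + output_millions * out_rate

# e.g. 10M input + 2M output tokens per month
print(f"${monthly_cost(10, 2):.2f}")
```

Even heavy Q&A-bot traffic stays in the single-digit dollar range at these rates, which is why the typical monthly cost estimate above is so low.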
Anthropic API — Claude Sonnet 4
$3.00 per million input tokens / $15.00 per million output tokens. Typical monthly cost: $10–$40/month
Alternative/supplementary content generation engine. Claude excels at nuanced, safety-conscious policy writing with strong guardrails against hallucination. Use for sensitive policy areas (harassment, discrimination, ADA accommodation, termination) where careful language is critical.
Microsoft 365 Business Premium
$22.00/user/month list price. MSP cost via CSP: ~$18.70–$19.80/user/month
Foundation platform providing Exchange Online, SharePoint Online (document storage/versioning), Teams (policy notification distribution), Entra ID (SSO/MFA), and Intune (device management). Required base for Copilot deployment.
Microsoft 365 Copilot
$30.00/user/month list price. MSP promotional pricing: 15% off standalone ($25.50/user/month) or 35% off when bundled with Business Standard.
Embedded AI assistant in Word, PowerPoint, Outlook, and Teams. HR staff use Copilot to draft policy documents in Word, create change management presentations in PowerPoint, compose policy update email communications in Outlook, and summarize feedback in Teams. Lowest-friction AI entry point for M365 clients.
Merge.dev Unified HRIS API
~$65/connected account/month
Unified API middleware that normalizes employee data across 60+ HRIS platforms (BambooHR, Rippling, ADP, Workday, Gusto, Paylocity). Enables single integration codebase to pull employee rosters, department structures, work locations, and hire dates for personalized handbook distribution and state-specific policy assignment.
n8n Workflow Automation
$0 (self-hosted) or $24/month (Cloud Starter) to $50/month (Cloud Pro)
iPaaS/workflow automation platform connecting HRIS webhooks, OpenAI API, handbook platforms, SharePoint, and Teams. Orchestrates the end-to-end policy generation and distribution pipeline. Preferred over Zapier for MSPs due to self-hosting option and no per-task pricing.
Pinecone Vector Database
Free Starter tier: $0/month, sufficient for roughly 100K vectors. Standard: ~$70/month for production RAG workloads
Vector database for Retrieval-Augmented Generation (RAG) over existing policy documents, employment law references, and prior handbook versions. Enables the AI to ground its policy generation in the client's actual existing content and authoritative legal sources rather than generating from scratch.
Prerequisites
- Active Microsoft 365 Business Premium (or higher) tenant with Entra ID (Azure AD) configured and MFA enforced for all admin and HR user accounts
- SharePoint Online site collection provisioned for HR document management with appropriate permission groups (HR Admins, HR Editors, HR Viewers)
- Client has identified and can provide: (a) current employee handbook in editable format (Word/PDF), (b) list of all states/jurisdictions where employees work, (c) current HRIS platform name and admin credentials, (d) employment attorney contact for legal review
- Business-grade internet connectivity verified: minimum 50 Mbps symmetric, latency <100ms to api.openai.com and Azure endpoints (run: curl -o /dev/null -s -w '%{time_total}' https://api.openai.com/v1/models)
- Global Admin or Application Admin access to the Microsoft 365 tenant for Copilot license assignment and app registration
- OpenAI Platform account created with billing configured and API key generated (organization-level, not personal). Store key securely — never in code repositories.
- Client HR team has designated a Handbook Owner (primary content approver) and 1–2 Handbook Editors who will be trained on the AI-assisted workflow
- Python 3.10+ or Node.js 18+ development environment available on MSP technician's workstation for API integration scripting and testing
- Client's employment attorney has been engaged and is available for content review during Phase 4 (2–4 week review window). MSP must confirm this before project kickoff — do NOT proceed without legal review commitment.
- DNS and email routing confirmed functional for policy distribution emails; SPF/DKIM/DMARC records configured to prevent handbook notification emails from being flagged as spam
# run to verify connectivity meets requirements
curl -o /dev/null -s -w '%{time_total}' https://api.openai.com/v1/models
Installation Steps
Step 1: Audit Current Handbook and Map Compliance Requirements
Before any technical work, perform a thorough audit of the client's existing employee handbook. Document every policy section, note which policies are federal vs. state-specific, identify gaps against current employment law requirements, and catalog the client's operational jurisdictions. This audit drives all subsequent configuration decisions.
# Create audit spreadsheet structure (PowerShell - creates SharePoint list)
Import-Module PnP.PowerShell
Connect-PnPOnline -Url https://[tenant].sharepoint.com/sites/HRDocs -Interactive
New-PnPList -Title 'Handbook Audit' -Template GenericList
Add-PnPField -List 'Handbook Audit' -DisplayName 'Policy Section' -InternalName 'PolicySection' -Type Text -AddToDefaultView
Add-PnPField -List 'Handbook Audit' -DisplayName 'Current Status' -InternalName 'CurrentStatus' -Type Choice -Choices 'Current','Outdated','Missing','Needs Legal Review' -AddToDefaultView
Add-PnPField -List 'Handbook Audit' -DisplayName 'Applicable Jurisdictions' -InternalName 'Jurisdictions' -Type Note -AddToDefaultView
Add-PnPField -List 'Handbook Audit' -DisplayName 'Compliance Risk Level' -InternalName 'RiskLevel' -Type Choice -Choices 'Low','Medium','High','Critical' -AddToDefaultView
Add-PnPField -List 'Handbook Audit' -DisplayName 'AI Generation Priority' -InternalName 'AIPriority' -Type Number -AddToDefaultView
This step is non-technical but critical. Allocate 4–8 hours for a thorough audit. Use the SixFifty or BLR compliance checklists as reference frameworks. The audit output directly determines which policies the AI will generate, update, or leave untouched. Multi-state employers will have significantly more sections to track.
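To make the 'AI Generation Priority' field actionable, the value can be computed rather than assigned by hand. A minimal Python sketch, assuming a simple weighting scheme of our own devising (the choice values mirror the SharePoint fields above; the weights are illustrative, not part of any compliance framework):

```python
# Illustrative triage logic: multiply compliance risk by how stale the policy
# is, so a Missing/High item outranks an Outdated/Medium one. The weights are
# an assumption to tune per client.
RISK_WEIGHT = {"Low": 1, "Medium": 2, "High": 3, "Critical": 4}
STATUS_WEIGHT = {"Current": 0, "Needs Legal Review": 1, "Outdated": 2, "Missing": 3}

def ai_priority(status: str, risk: str) -> int:
    """Higher number = generate/update sooner."""
    return RISK_WEIGHT[risk] * STATUS_WEIGHT[status]

# Hypothetical audit rows: (section, current status, risk level)
items = [
    ("Remote Work", "Missing", "High"),
    ("PTO", "Outdated", "Medium"),
    ("Dress Code", "Current", "Low"),
]
ranked = sorted(items, key=lambda it: ai_priority(it[1], it[2]), reverse=True)
print([name for name, _, _ in ranked])
```

The ranked output feeds directly into the order policies are run through the generation service in Step 6.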
Step 2: Provision SharePoint Document Infrastructure
Create the SharePoint site structure for handbook document management, version control, and collaboration. This becomes the central repository for all policy documents, AI-generated drafts, approved versions, and archived historical handbooks.
# Connect to SharePoint Online
Connect-PnPOnline -Url https://[tenant]-admin.sharepoint.com -Interactive
# Create dedicated HR Policy site (if not existing)
New-PnPSite -Type TeamSite -Title 'HR Policy Management' -Alias 'HRPolicyMgmt' -Description 'Central repository for employee handbook and policy documents'
# Connect to the new site
Connect-PnPOnline -Url https://[tenant].sharepoint.com/sites/HRPolicyMgmt -Interactive
# Create document libraries with versioning
New-PnPList -Title 'Policy Drafts' -Template DocumentLibrary
Set-PnPList -Identity 'Policy Drafts' -EnableVersioning $true -MajorVersions 50
New-PnPList -Title 'Approved Policies' -Template DocumentLibrary
Set-PnPList -Identity 'Approved Policies' -EnableVersioning $true -MajorVersions 100 -EnableMinorVersions $true
New-PnPList -Title 'Policy Templates' -Template DocumentLibrary
Set-PnPList -Identity 'Policy Templates' -EnableVersioning $true -MajorVersions 20
New-PnPList -Title 'Archived Handbooks' -Template DocumentLibrary
# Create metadata columns for policy tracking
Add-PnPField -List 'Approved Policies' -DisplayName 'Policy Category' -InternalName 'PolicyCategory' -Type Choice -Choices 'Employment Basics','Compensation & Benefits','Leave Policies','Workplace Conduct','Safety & Security','Technology & Data','Anti-Discrimination','State-Specific' -AddToDefaultView
Add-PnPField -List 'Approved Policies' -DisplayName 'Applicable States' -InternalName 'ApplicableStates' -Type Note -AddToDefaultView
Add-PnPField -List 'Approved Policies' -DisplayName 'Last Legal Review' -InternalName 'LastLegalReview' -Type DateTime -AddToDefaultView
Add-PnPField -List 'Approved Policies' -DisplayName 'Next Review Due' -InternalName 'NextReviewDue' -Type DateTime -AddToDefaultView
Add-PnPField -List 'Approved Policies' -DisplayName 'AI Generated' -InternalName 'AIGenerated' -Type Boolean -AddToDefaultView
# Set permissions - restrict Approved Policies library
Set-PnPList -Identity 'Approved Policies' -BreakRoleInheritance -CopyRoleAssignments
Enable versioning on ALL document libraries — this is critical for audit trails showing what AI generated vs. what humans edited. The 'AI Generated' boolean field is important for compliance tracking. Consider configuring retention policies via Microsoft Purview for regulatory compliance.
Step 3: Configure OpenAI API Access and Secure Key Management
Set up the OpenAI API account with appropriate billing limits, generate API keys, and store them securely using Azure Key Vault. This ensures API credentials are never exposed in code or configuration files.
# Step 3a: Create Azure Key Vault for API key storage
az login
az group create --name rg-hr-ai-integration --location eastus
az keyvault create --name kv-hr-handbook-ai --resource-group rg-hr-ai-integration --location eastus --sku standard
# Step 3b: Store the OpenAI API key (replace with actual key from platform.openai.com)
az keyvault secret set --vault-name kv-hr-handbook-ai --name openai-api-key --value 'sk-proj-XXXXXXXXXXXXXXXXXXXXXXXX'
# Step 3c: Store Anthropic API key as backup provider
az keyvault secret set --vault-name kv-hr-handbook-ai --name anthropic-api-key --value 'sk-ant-XXXXXXXXXXXXXXXXXXXXXXXX'
# Step 3d: Create a service principal for the integration application
az ad app create --display-name 'HR Handbook AI Integration'
# Note the appId from output, then:
az ad sp create --id [appId]
az keyvault set-policy --name kv-hr-handbook-ai --spn [appId] --secret-permissions get list
# Step 3e: Set OpenAI usage limits (configured in the web dashboard, not via API)
python3 << 'LIMITS_EOF'
# Configure at platform.openai.com/account/limits:
# - Set monthly budget cap to $50 (adjust based on client size)
# - Set per-request token limit to 16000 output tokens
# - Enable usage alerts at 50% and 80% of budget
print('Configure limits at: https://platform.openai.com/account/limits')
print('Recommended monthly cap: $50 for typical SMB HR usage')
LIMITS_EOF
Never store API keys in source code, environment variables on shared machines, or configuration files committed to Git. Azure Key Vault is billed separately as an Azure service, but standard-tier costs at this volume of secret retrievals are negligible (typically under $1/month). Set conservative spending limits initially ($50/month) — actual HR handbook usage will typically be $5–$25/month. You can always increase limits later.
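At runtime, integration scripts should pull the key from Key Vault rather than from local configuration. A minimal sketch using the azure-identity and azure-keyvault-secrets packages (helper names are ours, not from any SDK; vault and secret names match those created above). The Azure SDK imports are deferred so the URL helper works even without the packages installed:

```python
def vault_url(vault_name: str) -> str:
    # Azure Key Vault DNS convention
    return f"https://{vault_name}.vault.azure.net"

def get_secret(vault_name: str, secret_name: str) -> str:
    # Deferred imports: pip install azure-identity azure-keyvault-secrets
    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient
    client = SecretClient(vault_url(vault_name), DefaultAzureCredential())
    return client.get_secret(secret_name).value

print(vault_url('kv-hr-handbook-ai'))
# At runtime: openai_key = get_secret('kv-hr-handbook-ai', 'openai-api-key')
```

DefaultAzureCredential resolves the service principal created in Step 3d (via environment variables or managed identity), so no credential ever appears in the script itself.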
Step 4: Deploy SixFifty Handbook Builder and Import Existing Content
Set up the SixFifty Employee Handbook Builder as the compliance scaffolding layer. Configure it for the client's specific states of operation, import existing policy content, and generate baseline compliant policy templates for all applicable jurisdictions.
- Step 4a: Navigate to https://app.sixfifty.com and create client account
- Step 4b: Configure company profile: Company name, industry (HR & Staffing), employee count; States of operation (add ALL states where client has employees); Company size thresholds (50+ for FMLA, 15+ for ADA/Title VII, 20+ for COBRA)
- Step 4c: Run the SixFifty handbook questionnaire: Answer jurisdiction-specific questions (at-will status, leave policies, etc.); SixFifty auto-generates compliant policy language for selected states; Export generated policies as Word documents
- Step 4d: Download generated policies to SharePoint
- Step 4e: Create a compliance tracking list: Use the SharePoint list created in Step 1 to map SixFifty-generated policies to audit items
Connect-PnPOnline -Url https://[tenant].sharepoint.com/sites/HRPolicyMgmt -Interactive
Get-ChildItem -Path './sixfifty-exports/*.docx' | ForEach-Object {
    Add-PnPFile -Path $_.FullName -Folder 'Policy Templates'
}
SixFifty costs only $399/year and covers all 50 states — this is extremely cost-effective compliance insurance. The generated policies become the 'ground truth' templates that the AI content generation system references. If the client prefers AirMason ($999/year) for its superior design capabilities, use AirMason instead and skip this step. Both platforms can coexist if the client wants SixFifty for compliance and AirMason for formatting/distribution.
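The Step 4b company-size thresholds can also be checked in code when deciding which federal policies a handbook must include. A small Python sketch (the threshold table mirrors the figures above: 50+ for FMLA, 15+ for ADA/Title VII, 20+ for COBRA; the function name is ours):

```python
# Federal employee-count thresholds from Step 4b.
FEDERAL_THRESHOLDS = {"FMLA": 50, "COBRA": 20, "ADA": 15, "Title VII": 15}

def applicable_federal_policies(employee_count: int) -> list[str]:
    """Return the federal frameworks a handbook must cover at this headcount."""
    return sorted(law for law, n in FEDERAL_THRESHOLDS.items()
                  if employee_count >= n)

print(applicable_federal_policies(30))
```

For a 30-employee client this returns ADA, COBRA, and Title VII but not FMLA, which is exactly the distinction the SixFifty questionnaire encodes.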
Step 5: Set Up Pinecone Vector Database for RAG Pipeline
Deploy a Pinecone vector database index to store embeddings of the client's existing handbook, SixFifty compliance templates, and employment law references. This enables Retrieval-Augmented Generation (RAG) so the AI generates policy content grounded in authoritative source material rather than hallucinating legal language.
# Step 5a: Install required Python packages
pip install pinecone-client openai langchain langchain-openai tiktoken python-docx
# Step 5b: Initialize Pinecone index
python3 << 'EOF'
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key='YOUR_PINECONE_API_KEY')  # Store in Key Vault in production
# Create index for policy document embeddings
pc.create_index(
    name='hr-handbook-policies',
    dimension=3072,  # text-embedding-3-large dimension
    metric='cosine',
    spec=ServerlessSpec(cloud='aws', region='us-east-1')
)
print('Pinecone index created: hr-handbook-policies')
EOF
# Step 5c: Ingest existing handbook and SixFifty templates
python3 << 'INGEST_EOF'
import os
import hashlib
from pinecone import Pinecone
from openai import OpenAI
from docx import Document

# Initialize clients
pc = Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
index = pc.Index('hr-handbook-policies')
oai = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))

def chunk_document(filepath, chunk_size=1000, overlap=200):
    doc = Document(filepath)
    full_text = '\n'.join([p.text for p in doc.paragraphs if p.text.strip()])
    chunks = []
    for i in range(0, len(full_text), chunk_size - overlap):
        chunk = full_text[i:i + chunk_size]
        if len(chunk.strip()) > 50:
            chunks.append({
                'text': chunk,
                'source': os.path.basename(filepath),
                'chunk_index': len(chunks),
                'doc_hash': hashlib.md5(full_text.encode()).hexdigest()[:8]
            })
    return chunks

def embed_and_upsert(chunks, namespace='handbook'):
    batch_size = 50
    for i in range(0, len(chunks), batch_size):
        batch = chunks[i:i+batch_size]
        texts = [c['text'] for c in batch]
        response = oai.embeddings.create(
            model='text-embedding-3-large',
            input=texts
        )
        vectors = []
        for j, emb in enumerate(response.data):
            vec_id = f"{batch[j]['source']}_{batch[j]['chunk_index']}"
            vectors.append({
                'id': vec_id,
                'values': emb.embedding,
                'metadata': {
                    'text': batch[j]['text'],
                    'source': batch[j]['source'],
                    'chunk_index': batch[j]['chunk_index']
                }
            })
        index.upsert(vectors=vectors, namespace=namespace)
    print(f'Upserted {len(chunks)} vectors to namespace: {namespace}')

# Ingest all .docx files from policy templates directory
policy_dir = './policy-templates/'
for filename in os.listdir(policy_dir):
    if filename.endswith('.docx'):
        filepath = os.path.join(policy_dir, filename)
        chunks = chunk_document(filepath)
        embed_and_upsert(chunks, namespace='compliance-templates')
        print(f'Ingested: {filename} ({len(chunks)} chunks)')
print('RAG knowledge base initialization complete.')
INGEST_EOF
The RAG pipeline is what differentiates this solution from naive ChatGPT usage. By grounding generation in the client's actual existing policies and SixFifty's compliant templates, the AI produces contextually accurate, legally grounded content. Use text-embedding-3-large (3072 dimensions) for best retrieval quality. The free Pinecone tier (100K vectors) is sufficient for most single-client handbook deployments. For multi-client MSP deployments, use namespaces to isolate each client's data.
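To confirm the 100K-vector free tier is enough before ingesting, the chunk count can be estimated from document length. A rough Python helper matching the chunker's stride above (trailing chunks under 50 characters are dropped by the ingest script, so the true count can be slightly lower):

```python
import math

# The ingest chunker strides by (chunk_size - overlap) characters, so the
# chunk count is roughly ceil(length / stride). An estimate, not an exact count.
def expected_chunks(n_chars: int, chunk_size: int = 1000, overlap: int = 200) -> int:
    if n_chars <= 0:
        return 0
    return math.ceil(n_chars / (chunk_size - overlap))

# A 40-page handbook is very roughly 80,000 characters
print(expected_chunks(80_000))
```

Even a large multi-state handbook plus all SixFifty templates typically lands in the low thousands of vectors, far below the free-tier ceiling.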
Step 6: Deploy Core Policy Generation API Service
Build and deploy the core AI policy generation service that combines RAG retrieval with LLM generation to produce handbook sections. This service accepts a policy topic, retrieves relevant existing content and compliance templates, and generates updated policy language with jurisdiction-specific variants.
# Step 6a: Create the project directory structure
mkdir -p hr-handbook-ai/{api,prompts,config,tests}
cd hr-handbook-ai
python3 -m venv venv
source venv/bin/activate
pip install fastapi uvicorn openai pinecone-client python-docx azure-identity azure-keyvault-secrets pydantic jinja2
# Step 6b: Create main API service (save as api/main.py)
cat > api/main.py << 'API_EOF'
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Optional, List
import os
import json
from openai import OpenAI
from pinecone import Pinecone

app = FastAPI(title='HR Handbook AI Generator', version='1.0.0')

# Initialize clients (use Key Vault in production)
oai = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))
pc = Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
index = pc.Index('hr-handbook-policies')

class PolicyRequest(BaseModel):
    policy_topic: str
    states: List[str]
    company_name: str
    employee_count: int
    action: str = 'generate'  # generate | update | simplify
    existing_text: Optional[str] = None
    tone: str = 'professional'  # professional | friendly | formal
    additional_context: Optional[str] = None

class PolicyResponse(BaseModel):
    policy_title: str
    generated_content: str
    state_specific_notes: dict
    compliance_warnings: List[str]
    sources_referenced: List[str]
    model_used: str
    requires_legal_review: bool

def retrieve_context(query: str, namespace: str = 'compliance-templates', top_k: int = 8) -> tuple[str, list[str]]:
    query_embedding = oai.embeddings.create(
        model='text-embedding-3-large',
        input=[query]
    ).data[0].embedding
    results = index.query(
        vector=query_embedding,
        namespace=namespace,
        top_k=top_k,
        include_metadata=True
    )
    context_parts = []
    sources = []
    for match in results.matches:
        if match.score > 0.3:
            context_parts.append(match.metadata.get('text', ''))
            sources.append(match.metadata.get('source', 'unknown'))
    return '\n---\n'.join(context_parts), list(set(sources))

@app.post('/generate-policy', response_model=PolicyResponse)
async def generate_policy(request: PolicyRequest):
    # Retrieve relevant context from RAG
    context, sources = retrieve_context(
        f"{request.policy_topic} {' '.join(request.states)} employment policy"
    )
    system_prompt = f"""You are an expert HR policy writer for {request.company_name}, a company with {request.employee_count} employees operating in: {', '.join(request.states)}.
CRITICAL RULES:
1. Generate ONLY policy language — no legal advice.
2. Every policy MUST include the disclaimer: 'This policy has been drafted with AI assistance and must be reviewed by qualified employment counsel before adoption.'
3. Include state-specific variations where laws differ across the listed states.
4. Use {request.tone} tone appropriate for an employee handbook.
5. Flag any areas where state laws conflict or where legal counsel review is especially critical.
6. Reference applicable federal laws (FLSA, FMLA, ADA, Title VII, NLRA, OSHA) where relevant.
7. Never fabricate specific legal citations — if unsure of a specific statute number, describe the requirement generally and flag for legal review.
8. Include effective date placeholder: [EFFECTIVE DATE].
9. Include signature/acknowledgment block at the end of each policy.
"""
    # Precomputed so the f-string below contains no backslash expressions
    # (a SyntaxError on Python < 3.12)
    existing_block = (
        'EXISTING POLICY TEXT TO UPDATE:\n' + request.existing_text
        if request.existing_text else 'No existing policy — generate from scratch.'
    )
    context_block = (
        'ADDITIONAL CONTEXT: ' + request.additional_context
        if request.additional_context else ''
    )
    user_prompt = f"""ACTION: {request.action.upper()}
POLICY TOPIC: {request.policy_topic}
APPLICABLE STATES: {', '.join(request.states)}
{existing_block}
{context_block}
REFERENCE MATERIAL FROM COMPLIANCE DATABASE:
{context}
Please generate the complete policy section with:
1. Policy title
2. Purpose statement
3. Scope (who this applies to)
4. Detailed policy provisions
5. State-specific variations (if applicable)
6. Employee responsibilities
7. Employer responsibilities
8. Violation consequences
9. Related policies cross-references
10. Acknowledgment block
"""
    response = oai.chat.completions.create(
        model='gpt-4.1',
        messages=[
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_prompt}
        ],
        temperature=0.3,
        max_tokens=4000
    )
    generated = response.choices[0].message.content
    # Extract state-specific notes (simplified parsing)
    state_notes = {}
    for state in request.states:
        if state.upper() in generated.upper():
            state_notes[state] = f'State-specific provisions included for {state}'
        else:
            state_notes[state] = 'No state-specific variation identified — verify with counsel'
    warnings = [
        'AI-generated content — MUST be reviewed by employment attorney before publication',
        f'Multi-state employer ({len(request.states)} states) — verify jurisdiction-specific requirements'
    ]
    if request.employee_count >= 50:
        warnings.append('FMLA-covered employer (50+ employees) — ensure leave policies comply')
    if 'CA' in [s.upper() for s in request.states]:
        warnings.append('California employer — state has extensive additional employment requirements')
    return PolicyResponse(
        policy_title=request.policy_topic,
        generated_content=generated,
        state_specific_notes=state_notes,
        compliance_warnings=warnings,
        sources_referenced=sources,
        model_used='gpt-4.1',
        requires_legal_review=True
    )

@app.get('/health')
async def health():
    return {'status': 'healthy', 'version': '1.0.0'}
API_EOF
# Step 6c: Create Dockerfile for containerized deployment
cat > Dockerfile << 'DOCKER_EOF'
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY api/ ./api/
COPY prompts/ ./prompts/
EXPOSE 8000
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "8000"]
DOCKER_EOF
# Step 6d: Create requirements.txt
cat > requirements.txt << 'REQ_EOF'
fastapi==0.115.0
uvicorn==0.32.0
openai==1.55.0
pinecone-client==5.0.0
python-docx==1.1.0
azure-identity==1.19.0
azure-keyvault-secrets==4.9.0
pydantic==2.10.0
jinja2==3.1.4
REQ_EOF
# Step 6e: Test locally
uvicorn api.main:app --host 0.0.0.0 --port 8000 --reload
This is the core engine of the solution. Deploy to Azure Container Apps or Azure App Service for production. For simpler deployments, this can also run on any Docker-capable hosting. Temperature is set to 0.3 for consistent, factual policy language — do NOT increase above 0.5 for legal content. The mandatory legal review disclaimer is embedded in the system prompt and cannot be removed by user input.
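With the service running locally, the endpoint can be exercised with a payload like the following (company details and states are placeholders for illustration):

```python
import json

# Example request body for the /generate-policy endpoint. All values are
# placeholders; the field names match the PolicyRequest model.
payload = {
    "policy_topic": "Paid Time Off",
    "states": ["TX", "CO"],
    "company_name": "Example Staffing LLC",
    "employee_count": 120,
    "action": "generate",
    "tone": "professional",
}
body = json.dumps(payload)
# With the service running locally:
#   curl -X POST http://localhost:8000/generate-policy \
#        -H 'Content-Type: application/json' -d @request.json
print(body)
```

The response's compliance_warnings and requires_legal_review fields should be surfaced in whatever review UI the HR team uses, not silently discarded.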
Step 7: Configure Microsoft 365 Copilot for HR Team
Assign Microsoft 365 Copilot licenses to the designated HR team members and configure Copilot settings for optimal handbook drafting workflow. Set up SharePoint as a Copilot knowledge source so it can reference existing approved policies.
# Step 7a: Assign Copilot licenses via Microsoft 365 Admin Center PowerShell
Install-Module Microsoft.Graph -Scope CurrentUser
Connect-MgGraph -Scopes 'User.ReadWrite.All','Organization.Read.All'
# Get the Copilot SKU ID
Get-MgSubscribedSku | Where-Object { $_.SkuPartNumber -like '*Copilot*' } | Select-Object SkuPartNumber, SkuId
# Assign to HR users (replace with actual UPNs and SKU ID)
$HRUsers = @(
'hr.director@client.com',
'hr.manager@client.com',
'hr.generalist@client.com'
)
$CopilotSkuId = 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX' # from Get-MgSubscribedSku output
foreach ($user in $HRUsers) {
$userId = (Get-MgUser -Filter "userPrincipalName eq '$user'").Id
$addLicenses = @(
@{SkuId = $CopilotSkuId}
)
Set-MgUserLicense -UserId $userId -AddLicenses $addLicenses -RemoveLicenses @()
Write-Host "Copilot license assigned to: $user"
}
Copilot licenses should only be assigned to HR team members who will actively use AI-assisted drafting — typically 3–5 seats. At $30/user/month, selective assignment controls costs. The SharePoint knowledge source configuration is critical: it allows Copilot in Word to 'see' existing approved policies, enabling contextual updates. Remind the client that Copilot outputs still require human review before publication.
Step 8: Set Up HRIS Integration via Merge.dev
Configure the Merge.dev unified API to connect to the client's HRIS system. This integration pulls employee roster data, work locations, department structures, and hire dates to enable automated handbook distribution and state-specific policy assignment.
# Step 8a: Create Merge.dev account and configure HRIS integration
# Navigate to https://app.merge.dev and create account
# Add HRIS integration for client's specific platform (e.g., BambooHR, Rippling, ADP)
# Merge provides a Link Token workflow for secure OAuth connection
# Step 8b: Generate Link Token and connect client's HRIS
python3 << 'MERGE_EOF'
import requests
import os

MERGE_API_KEY = os.environ.get('MERGE_API_KEY')
MERGE_ACCOUNT_TOKEN = os.environ.get('MERGE_ACCOUNT_TOKEN')
headers = {
    'Authorization': f'Bearer {MERGE_API_KEY}',
    'X-Account-Token': MERGE_ACCOUNT_TOKEN,
    'Content-Type': 'application/json'
}
# List employees with work locations (for state-specific policy assignment)
response = requests.get(
    'https://api.merge.dev/api/hris/v1/employees',
    headers=headers,
    params={'include_remote_data': 'true', 'page_size': 100}
)
employees = response.json().get('results', [])
for emp in employees[:5]:  # Preview first 5
    name = f"{emp.get('first_name', '')} {emp.get('last_name', '')}"
    location = emp.get('work_location', {})
    state = location.get('state', 'Unknown') if location else 'Unknown'
    dept = emp.get('department', {})
    dept_name = dept.get('name', 'Unknown') if dept else 'Unknown'
    print(f'{name} | State: {state} | Dept: {dept_name}')
print(f'\nTotal employees found: {len(employees)}')
MERGE_EOF
# Step 8c: Create webhook for new hire events
python3 << 'WEBHOOK_EOF'
import requests
import os

headers = {
    'Authorization': f"Bearer {os.environ.get('MERGE_API_KEY')}",
    'Content-Type': 'application/json'
}
# Register webhook for new hire events
webhook_data = {
    'event': 'Employee.created',
    'target_url': 'https://[your-n8n-instance]/webhook/new-hire-handbook'
}
response = requests.post(
    'https://api.merge.dev/api/hris/v1/webhook-receivers',
    headers=headers,
    json=webhook_data
)
print(f'Webhook registered: {response.status_code}')
WEBHOOK_EOF
Merge.dev costs ~$65/connected account/month but dramatically simplifies HRIS integration by providing a single normalized API regardless of whether the client uses BambooHR, Rippling, ADP, Gusto, or any of 60+ platforms. The alternative is building direct API integrations per HRIS, which is only cost-effective if the MSP standardizes all clients on one HRIS. The new-hire webhook triggers automatic handbook assignment workflows.
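Once the roster is pulled, grouping employees by work-location state is what drives state-specific handbook assignment. An illustrative Python sketch over Merge-style records (field names mirror the preview script above; the sample data is fabricated for demonstration):

```python
from collections import defaultdict

# Group employees by work-location state so each group receives the matching
# state-specific handbook variant. Records with no location fall into
# 'Unknown' and should be resolved manually before distribution.
def employees_by_state(employees: list[dict]) -> dict[str, list[str]]:
    groups: dict[str, list[str]] = defaultdict(list)
    for emp in employees:
        loc = emp.get("work_location") or {}
        state = loc.get("state") or "Unknown"
        name = f"{emp.get('first_name', '')} {emp.get('last_name', '')}".strip()
        groups[state].append(name)
    return dict(groups)

# Fabricated sample records in the Merge employee shape
sample = [
    {"first_name": "Ana", "last_name": "Ruiz", "work_location": {"state": "TX"}},
    {"first_name": "Bo", "last_name": "Lee", "work_location": {"state": "CA"}},
    {"first_name": "Cy", "last_name": "Nguyen", "work_location": None},
]
print(employees_by_state(sample))
```

The 'Unknown' bucket is worth auditing before every distribution run: a missing work location usually means the HRIS record is incomplete, not that no state law applies.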
Step 9: Build n8n Workflow Automation Pipelines
Deploy n8n and configure automated workflows that orchestrate the end-to-end policy generation and distribution pipeline: new hire handbook delivery, policy update notifications, approval workflows, and compliance monitoring alerts.
# Step 9a: Deploy n8n via Docker
docker run -d --name n8n \
-p 5678:5678 \
-v n8n_data:/home/node/.n8n \
-e N8N_BASIC_AUTH_ACTIVE=true \
-e N8N_BASIC_AUTH_USER=admin \
-e N8N_BASIC_AUTH_PASSWORD='[STRONG_PASSWORD]' \
-e WEBHOOK_URL=https://[your-domain]/n8n/ \
n8nio/n8n:latest
n8n is preferred over Zapier for MSPs because it can be self-hosted (eliminating per-task costs) and supports complex branching logic needed for multi-state policy routing. For simpler deployments, Power Automate can replace n8n if the client is fully committed to the Microsoft ecosystem. The n8n instance should be deployed on Azure Container Apps or a small Azure VM ($15–$30/month) for production use.
Step 10: Deploy Employee Policy Q&A Bot in Microsoft Teams
Set up a Teams-based chatbot that allows employees to ask questions about company policies and receive AI-generated answers grounded in the approved handbook content via RAG. This reduces HR inquiry volume and ensures consistent policy interpretation.
# Step 10a: Register the bot in Azure
az bot create \
--resource-group rg-hr-ai-integration \
--name hr-policy-qa-bot \
--kind registration \
--display-name 'Policy Assistant' \
--description 'Ask questions about company policies and employee handbook' \
--endpoint 'https://[your-api-domain]/bot/messages' \
--sku F0
# Step 10b: Create Teams app manifest (save as manifest.json)
cat > teams-bot/manifest.json << 'MANIFEST_EOF'
{
"$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.17/MicrosoftTeams.schema.json",
"manifestVersion": "1.17",
"version": "1.0.0",
"id": "[BOT_APP_ID]",
"developer": {
"name": "[MSP Company Name]",
"websiteUrl": "https://[msp-website].com",
"privacyUrl": "https://[msp-website].com/privacy",
"termsOfUseUrl": "https://[msp-website].com/terms"
},
"name": { "short": "Policy Assistant", "full": "Employee Policy Q&A Assistant" },
"description": {
"short": "Ask questions about company policies",
"full": "AI-powered assistant that answers questions about company policies, employee handbook, and HR procedures based on approved policy documents."
},
"icons": { "outline": "outline.png", "color": "color.png" },
"accentColor": "#4F6BED",
"bots": [{
"botId": "[BOT_APP_ID]",
"scopes": ["personal", "team"],
"commandLists": [{
"scopes": ["personal"],
"commands": [
{ "title": "PTO Policy", "description": "Ask about paid time off" },
{ "title": "Leave Policy", "description": "Ask about leave policies" },
{ "title": "Remote Work", "description": "Ask about remote work policy" },
{ "title": "Benefits", "description": "Ask about employee benefits" }
]
}]
}],
"validDomains": ["[your-api-domain]"]
}
MANIFEST_EOF
# Step 10c: Deploy bot backend endpoint (add to existing FastAPI app)
# Add to api/main.py - see custom_ai_components for full bot handler code
The Q&A bot only answers from approved policy documents in the RAG database — it will not generate new policy content. This is intentional: the bot is an information retrieval tool, not a policy creation tool. Configure the bot to always include a disclaimer: 'This response is based on current company policy documents. For specific situations, please contact your HR representative.' The bot should escalate to a human HR contact for questions it cannot answer confidently.
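The "escalate when not confident" rule can be sketched as a simple check on retrieval scores. This is a minimal illustration, assuming retrieval returns (chunk, similarity) pairs; the 0.35 cutoff mirrors the Q&A threshold used in the bot handler code later in this guide.

```python
# Sketch of the escalation rule: if no retrieved policy chunk clears the
# similarity threshold, route the employee to a human HR contact instead
# of generating an answer. Threshold value is illustrative.
from typing import List, Tuple

def should_escalate(matches: List[Tuple[str, float]],
                    min_score: float = 0.35) -> bool:
    """Escalate to HR when no retrieved chunk clears the threshold."""
    return not any(score >= min_score for _, score in matches)

print(should_escalate([("PTO accrues at 1.25 days/month", 0.62)]))  # False
print(should_escalate([("unrelated text", 0.21)]))                  # True
```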
Step 11: Configure E-Signature Acknowledgment Workflow
Set up automated policy acknowledgment tracking so that when new or updated policies are distributed, employees must electronically acknowledge receipt and understanding. This creates a legally defensible audit trail.
# Option A: Using Blissbook's built-in e-signature (if Blissbook is the handbook platform)
# Configure at https://app.blissbook.com > Settings > E-Signatures
# Enable 'Require acknowledgment for all policy updates'
# Set reminder cadence: 3 days, 7 days, 14 days
# Option B: Using Power Automate with Approvals (for M365-native approach)
# Step 11a: Create a Power Automate flow:
# Trigger: When a file is modified in 'Approved Policies' SharePoint library
# Action 1: Get all employees from HRIS (via Merge.dev HTTP connector)
# Action 2: For each employee, create an Approval request
# Action 3: Send adaptive card via Teams with policy summary + acknowledge button
# Action 4: Log acknowledgment to SharePoint list 'Policy Acknowledgments'
# Action 5: Send reminder if not acknowledged within 7 days
# Action 6: Escalate to HR manager if not acknowledged within 14 days
# Step 11b: Create acknowledgment tracking list
Connect-PnPOnline -Url https://[tenant].sharepoint.com/sites/HRPolicyMgmt -Interactive
New-PnPList -Title 'Policy Acknowledgments' -Template GenericList
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Employee Name' -InternalName 'EmployeeName' -Type Text -AddToDefaultView
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Employee Email' -InternalName 'EmployeeEmail' -Type Text -AddToDefaultView
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Policy Name' -InternalName 'PolicyName' -Type Text -AddToDefaultView
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Policy Version' -InternalName 'PolicyVersion' -Type Text -AddToDefaultView
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Sent Date' -InternalName 'SentDate' -Type DateTime -AddToDefaultView
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Acknowledged Date' -InternalName 'AcknowledgedDate' -Type DateTime -AddToDefaultView
Add-PnPField -List 'Policy Acknowledgments' -DisplayName 'Status' -InternalName 'AckStatus' -Type Choice -Choices 'Pending','Acknowledged','Overdue','Escalated' -AddToDefaultView
Policy acknowledgment tracking is legally important — it demonstrates the employer communicated policies to employees. If using Blissbook ($249/year), e-signature and tracking are built in. For the M365-native approach, Power Automate + Approvals + SharePoint provides comparable functionality at no additional software cost beyond existing M365 licensing. The escalation workflow (HR manager notification after 14 days) ensures compliance gaps are addressed.
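The reminder/escalation cadence (remind at 7 days, escalate at 14) maps directly onto the `AckStatus` choice field above. A minimal sketch of that classification, using the field values from the SharePoint list; the logic itself is illustrative, not a Power Automate export:

```python
# Classify an acknowledgment record into the AckStatus values defined in the
# 'Policy Acknowledgments' list, following the Step 11 cadence:
# reminder after 7 days, HR-manager escalation after 14.
from datetime import date

def ack_status(sent: date, today: date, acknowledged: bool = False) -> str:
    if acknowledged:
        return "Acknowledged"
    days = (today - sent).days
    if days >= 14:
        return "Escalated"
    if days >= 7:
        return "Overdue"
    return "Pending"

print(ack_status(date(2024, 3, 1), date(2024, 3, 5)))   # Pending
print(ack_status(date(2024, 3, 1), date(2024, 3, 10)))  # Overdue
print(ack_status(date(2024, 3, 1), date(2024, 3, 20)))  # Escalated
```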
Step 12: Production Deployment, Security Hardening, and Go-Live
Deploy all components to production, configure security controls, run final integration tests, and execute the go-live checklist. This step transitions from development/testing to the live production environment.
# Step 12a: Deploy API to Azure Container Apps
az containerapp env create \
--name hr-handbook-env \
--resource-group rg-hr-ai-integration \
--location eastus
az containerapp create \
--name hr-handbook-api \
--resource-group rg-hr-ai-integration \
--environment hr-handbook-env \
--image [your-acr].azurecr.io/hr-handbook-api:1.0.0 \
--target-port 8000 \
--ingress external \
--min-replicas 1 \
--max-replicas 3 \
--secrets openai-key=[KEY_VAULT_REF] pinecone-key=[KEY_VAULT_REF] \
--env-vars OPENAI_API_KEY=secretref:openai-key PINECONE_API_KEY=secretref:pinecone-key
# Step 12b: Configure custom domain and TLS
az containerapp hostname add \
--name hr-handbook-api \
--resource-group rg-hr-ai-integration \
--hostname api-hr.[client-domain].com
# Step 12c: Enable Azure AD authentication on the API
az containerapp auth update \
--name hr-handbook-api \
--resource-group rg-hr-ai-integration \
--enabled true \
--action RequireAuthentication \
--provider MicrosoftIdentityPlatform \
--client-id [APP_CLIENT_ID] \
--tenant-id [TENANT_ID]
# Step 12d: Run go-live checklist
echo '=== GO-LIVE CHECKLIST ==='
echo '[ ] All API keys stored in Azure Key Vault (never in code/config)'
echo '[ ] MFA enforced for all HR users accessing handbook platforms'
echo '[ ] SharePoint permissions verified (no external sharing on HR docs)'
echo '[ ] Copilot data boundaries configured (no external data access)'
echo '[ ] Bot disclaimer text verified in all responses'
echo '[ ] Acknowledgment workflow tested end-to-end with test employee'
echo '[ ] Backup of existing handbook archived in Archived Handbooks library'
echo '[ ] Client employment attorney has signed off on initial AI-generated content'
echo '[ ] OpenAI usage limits configured ($50/month cap)'
echo '[ ] Monitoring alerts configured for API errors and usage spikes'
echo '[ ] Client HR team has completed training (see training checklist)'
echo '[ ] DNS and TLS configured for API endpoint'
echo '[ ] Incident response contact list documented and shared'
Do NOT go live until the employment attorney has reviewed and approved all AI-generated policy content. This is a hard gate — no exceptions. The Azure Container Apps deployment provides auto-scaling and built-in TLS. For smaller deployments, Azure App Service (B1 tier, ~$13/month) is a cost-effective alternative. Enable Application Insights for monitoring API performance and error tracking.
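The hard gate can be enforced mechanically rather than by eyeballing the echoed list. A minimal sketch, assuming the checklist is kept as plain text with `[x]`/`[ ]` markers as printed in Step 12d:

```python
# Parse a plain-text go-live checklist and surface any unchecked '[ ]' items.
# Go-live should be blocked while this list is non-empty.
def unchecked_items(checklist: str) -> list:
    return [line.strip()[4:] for line in checklist.splitlines()
            if line.strip().startswith("[ ]")]

checklist = """\
[x] All API keys stored in Azure Key Vault
[ ] Client employment attorney has signed off on initial AI-generated content
[x] DNS and TLS configured for API endpoint
"""
blockers = unchecked_items(checklist)
print(len(blockers))  # 1 (attorney sign-off still pending: do not go live)
```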
Custom AI Components
HR Policy Generation Prompt Library
Type: prompt
A curated library of system and user prompt templates optimized for generating specific types of HR policy content. Each prompt template includes compliance guardrails, jurisdiction awareness, and mandatory legal review disclaimers. This is the MSP's proprietary IP and competitive moat.
Implementation:
# store in prompts/policy_templates.json
# HR Policy Generation Prompt Library
# Store these templates in: prompts/policy_templates.json
{
"system_prompts": {
"policy_drafter": {
"id": "sys-policy-drafter-v1",
"template": "You are a professional HR policy writer assisting {{company_name}}, a {{industry}} company with {{employee_count}} employees operating in: {{states_list}}.\n\nYour role is to draft clear, professional employee handbook policy language. You are NOT providing legal advice.\n\nCRITICAL RULES:\n1. Every policy you generate MUST include this disclaimer at the top: 'DRAFT — AI-ASSISTED CONTENT — REQUIRES LEGAL REVIEW BEFORE ADOPTION'\n2. Include state-specific variations when laws differ across the listed states\n3. Flag areas requiring employment attorney review with [LEGAL REVIEW NEEDED] markers\n4. Use {{tone}} tone (professional/friendly/formal)\n5. Never fabricate specific statute numbers or case citations\n6. Always include an effective date placeholder: [EFFECTIVE DATE]\n7. Always include an acknowledgment signature block at the end\n8. Reference applicable federal laws by name (FLSA, FMLA, ADA, Title VII, NLRA, OSHA) where relevant\n9. For multi-state employers, present state-specific variations in a clearly marked table or section\n10. If you are uncertain about a specific legal requirement, state 'Consult with employment counsel regarding [specific topic]' rather than guessing",
"variables": ["company_name", "industry", "employee_count", "states_list", "tone"]
},
"policy_updater": {
"id": "sys-policy-updater-v1",
"template": "You are an HR policy revision specialist for {{company_name}}. Your task is to update existing policy language while preserving the company's established voice and intent.\n\nRULES:\n1. Clearly mark all changes using [ADDED], [MODIFIED], and [REMOVED] tags\n2. Preserve existing policy structure where possible\n3. Include a 'Summary of Changes' section at the top listing every modification\n4. Maintain the disclaimer: 'DRAFT — AI-ASSISTED UPDATE — REQUIRES LEGAL REVIEW'\n5. If updating for compliance with new legislation, cite the legislation by name and effective date\n6. Never remove safety-related provisions without flagging them\n7. Preserve all existing state-specific accommodations unless explicitly instructed to remove them",
"variables": ["company_name"]
},
"communication_writer": {
"id": "sys-comms-writer-v1",
"template": "You are an internal communications specialist for {{company_name}}. You write clear, engaging employee communications about policy changes and HR updates.\n\nSTYLE GUIDELINES:\n1. Write at an 8th-grade reading level for maximum accessibility\n2. Lead with what employees need to know and do\n3. Explain WHY the change is happening (without legalese)\n4. Include clear action items with deadlines\n5. Provide contact information for questions\n6. Keep emails under 500 words; link to full policy for details\n7. Use bullet points and headers for scannability\n8. Tone: {{tone}} and supportive — never condescending",
"variables": ["company_name", "tone"]
}
},
"user_prompt_templates": {
"pto_policy": {
"id": "usr-pto-v1",
"topic": "Paid Time Off (PTO) Policy",
"template": "Generate a comprehensive PTO policy with the following parameters:\n- PTO model: {{pto_model}} (accrual / lump-sum / unlimited)\n- Accrual rate (if applicable): {{accrual_rate}}\n- Maximum carryover: {{max_carryover}}\n- Payout on termination: {{payout_on_term}} (yes/no/state-dependent)\n- Blackout periods: {{blackout_periods}}\n- Minimum notice required: {{notice_period}}\n\nInclude state-specific requirements for: {{states_list}}\nNote: California, Montana, and several other states have specific PTO payout requirements.",
"variables": ["pto_model", "accrual_rate", "max_carryover", "payout_on_term", "blackout_periods", "notice_period", "states_list"]
},
"remote_work_policy": {
"id": "usr-remote-v1",
"topic": "Remote Work / Telecommuting Policy",
"template": "Generate a remote work policy covering:\n- Eligibility criteria: {{eligibility}}\n- Remote work arrangement types: {{arrangement_types}} (fully remote / hybrid / occasional)\n- Equipment and technology provisions: {{equipment_provisions}}\n- Work hours and availability expectations: {{availability}}\n- Home office safety and ergonomics: {{safety_reqs}}\n- Data security requirements: {{data_security}}\n- Communication and collaboration expectations: {{comm_expectations}}\n- Performance measurement approach: {{performance}}\n- Expense reimbursement: {{expense_policy}}\n\nInclude state-specific considerations for: {{states_list}}\nNote: Several states (CA, IL, NY, etc.) have specific remote work expense reimbursement requirements.",
"variables": ["eligibility", "arrangement_types", "equipment_provisions", "availability", "safety_reqs", "data_security", "comm_expectations", "performance", "expense_policy", "states_list"]
},
"harassment_policy": {
"id": "usr-harassment-v1",
"topic": "Anti-Harassment and Anti-Discrimination Policy",
"template": "Generate a comprehensive anti-harassment and anti-discrimination policy covering:\n- Protected classes (federal + state-specific): {{protected_classes}}\n- Types of prohibited conduct (with examples): harassment, discrimination, retaliation\n- Reporting procedures: {{reporting_channels}}\n- Investigation process overview\n- Confidentiality commitments\n- Anti-retaliation protections\n- Training requirements: {{training_reqs}}\n- Consequences for violations\n\nSTATES: {{states_list}}\n[LEGAL REVIEW NEEDED] This policy area is especially sensitive and jurisdiction-specific. CA, NY, IL, CT, DE, ME have mandatory anti-harassment training requirements.",
"variables": ["protected_classes", "reporting_channels", "training_reqs", "states_list"]
},
"social_media_policy": {
"id": "usr-social-v1",
"topic": "Social Media Policy",
"template": "Generate an employee social media policy that balances company interests with employee rights:\n- Company social media accounts management\n- Personal social media use guidelines\n- Confidentiality and proprietary information protections\n- Disclaimer requirements for personal posts about the company\n- Prohibited conduct on social media\n\n[LEGAL REVIEW NEEDED] NLRA Section 7 protects employees' rights to engage in protected concerted activity, including discussing working conditions on social media. This policy MUST NOT unlawfully restrict protected activity. Include explicit NLRA savings clause.",
"variables": []
},
"policy_change_email": {
"id": "usr-change-email-v1",
"topic": "Policy Change Communication Email",
"template": "Draft an employee communication email announcing the following policy change:\n\nPolicy: {{policy_name}}\nType of change: {{change_type}} (new policy / update / removal)\nEffective date: {{effective_date}}\nSummary of changes: {{change_summary}}\nAction required from employees: {{employee_action}}\nDeadline for action: {{action_deadline}}\nContact for questions: {{hr_contact}}\n\nKeep under 500 words. Include link placeholder [HANDBOOK LINK] for full policy.",
"variables": ["policy_name", "change_type", "effective_date", "change_summary", "employee_action", "action_deadline", "hr_contact"]
}
}
}
Usage Instructions
Replace {{variables}} with client-specific values.
Policy RAG Retrieval Agent
Type: agent
An intelligent retrieval agent that searches the Pinecone vector database for relevant existing policy content, compliance templates, and employment law references before generating new policy text. It dynamically selects the appropriate namespace and adjusts retrieval strategy based on the policy topic and jurisdictions involved.
Implementation:
# Policy RAG Retrieval Agent
# File: api/rag_agent.py
import os
from typing import List, Dict, Optional, Tuple
from openai import OpenAI
from pinecone import Pinecone
import json
class PolicyRAGAgent:
    """
    Intelligent retrieval agent for HR policy content generation.
    Searches multiple vector database namespaces and assembles
    relevant context for policy drafting.
    """

    NAMESPACES = {
        'handbook': "Client's existing handbook content",
        'compliance-templates': 'SixFifty/BLR compliant policy templates',
        'employment-law': 'Employment law summaries and requirements',
        'best-practices': 'HR best practice guides and SHRM resources'
    }

    # Map policy topics to relevant search queries
    TOPIC_QUERY_MAP = {
        'pto': ['paid time off policy', 'vacation accrual', 'PTO payout requirements', 'sick leave policy'],
        'remote_work': ['remote work policy', 'telecommuting', 'work from home', 'expense reimbursement remote'],
        'harassment': ['anti-harassment policy', 'discrimination prevention', 'Title VII compliance', 'hostile work environment'],
        'leave': ['FMLA leave', 'parental leave', 'medical leave', 'state paid leave requirements'],
        'compensation': ['compensation policy', 'pay transparency', 'overtime FLSA', 'equal pay requirements'],
        'termination': ['termination policy', 'at-will employment', 'severance', 'final paycheck requirements'],
        'social_media': ['social media policy', 'NLRA protected activity', 'online conduct', 'company reputation'],
        'safety': ['workplace safety', 'OSHA requirements', 'injury reporting', 'emergency procedures'],
        'benefits': ['employee benefits', 'health insurance', 'COBRA', 'ERISA requirements', '401k policy'],
        'onboarding': ['new hire onboarding', 'orientation', 'I-9 verification', 'employment eligibility']
    }

    # States with notably complex employment law
    HIGH_COMPLEXITY_STATES = ['CA', 'NY', 'MA', 'WA', 'CO', 'IL', 'CT', 'NJ', 'OR']

    def __init__(self):
        self.oai = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))
        self.pc = Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
        self.index = self.pc.Index('hr-handbook-policies')

    def classify_topic(self, user_query: str) -> str:
        """Use LLM to classify the policy topic from free-form input."""
        response = self.oai.chat.completions.create(
            model='gpt-5.4-mini',
            messages=[
                {'role': 'system', 'content': f'Classify the following HR policy request into one of these categories: {list(self.TOPIC_QUERY_MAP.keys())}. Respond with ONLY the category key.'},
                {'role': 'user', 'content': user_query}
            ],
            temperature=0,
            max_tokens=20
        )
        topic = response.choices[0].message.content.strip().lower()
        return topic if topic in self.TOPIC_QUERY_MAP else 'general'

    def get_embeddings(self, texts: List[str]) -> List[List[float]]:
        """Generate embeddings for a list of texts."""
        response = self.oai.embeddings.create(
            model='text-embedding-3-large',
            input=texts
        )
        return [item.embedding for item in response.data]

    def retrieve_from_namespace(
        self,
        query_embedding: List[float],
        namespace: str,
        top_k: int = 5,
        min_score: float = 0.3
    ) -> List[Dict]:
        """Retrieve relevant documents from a specific namespace."""
        results = self.index.query(
            vector=query_embedding,
            namespace=namespace,
            top_k=top_k,
            include_metadata=True
        )
        return [
            {
                'text': match.metadata.get('text', ''),
                'source': match.metadata.get('source', 'unknown'),
                'score': match.score,
                'namespace': namespace
            }
            for match in results.matches
            if match.score >= min_score
        ]

    def build_context(
        self,
        policy_topic: str,
        states: List[str],
        existing_text: Optional[str] = None
    ) -> Tuple[str, List[str], List[str]]:
        """
        Build comprehensive context for policy generation.
        Returns: (context_text, sources_list, compliance_flags)
        """
        # Determine search queries based on topic
        topic_key = self.classify_topic(policy_topic)
        search_queries = self.TOPIC_QUERY_MAP.get(topic_key, [policy_topic])

        # Add state-specific queries for high-complexity states
        for state in states:
            if state.upper() in self.HIGH_COMPLEXITY_STATES:
                search_queries.append(f'{policy_topic} {state} state requirements')

        # Generate embeddings for all queries
        all_embeddings = self.get_embeddings(search_queries)

        # Retrieve from all relevant namespaces
        all_results = []
        for query_emb in all_embeddings:
            for namespace in self.NAMESPACES:
                results = self.retrieve_from_namespace(
                    query_emb, namespace, top_k=3
                )
                all_results.extend(results)

        # Deduplicate and sort by relevance score
        seen_texts = set()
        unique_results = []
        for r in sorted(all_results, key=lambda x: x['score'], reverse=True):
            text_hash = hash(r['text'][:100])
            if text_hash not in seen_texts:
                seen_texts.add(text_hash)
                unique_results.append(r)

        # Take top 10 most relevant chunks
        top_results = unique_results[:10]

        # Build context string
        context_parts = []
        sources = []
        for i, r in enumerate(top_results):
            context_parts.append(
                f"[Source {i+1}: {r['source']} (relevance: {r['score']:.2f})]\n{r['text']}"
            )
            sources.append(f"{r['source']} ({r['namespace']})")
        context_text = '\n\n---\n\n'.join(context_parts)

        # Generate compliance flags
        compliance_flags = []
        if any(s.upper() == 'CA' for s in states):
            compliance_flags.append('CALIFORNIA: Extensive additional requirements — meal/rest breaks, PTO payout, expense reimbursement, anti-harassment training, pay transparency')
        if any(s.upper() == 'NY' for s in states):
            compliance_flags.append('NEW YORK: NYC-specific requirements may apply — paid safe/sick leave, salary transparency, AI bias audit (Local Law 144)')
        if len(states) > 3:
            compliance_flags.append(f'MULTI-STATE ({len(states)} states): High complexity — consider state-specific policy addendum approach')
        if topic_key in ['harassment', 'termination', 'compensation']:
            compliance_flags.append(f'HIGH-SENSITIVITY TOPIC ({topic_key}): Mandatory legal review before publication')

        return context_text, list(set(sources)), compliance_flags

# Integration with main API
def get_rag_agent() -> PolicyRAGAgent:
    """Factory function for dependency injection."""
    return PolicyRAGAgent()
Integration
Import and use in api/main.py:
from api.rag_agent import PolicyRAGAgent

agent = PolicyRAGAgent()
context, sources, flags = agent.build_context(
    policy_topic=request.policy_topic,
    states=request.states,
    existing_text=request.existing_text
)
New Hire Handbook Distribution Workflow
Type: workflow
An n8n workflow that triggers when a new employee is created in the HRIS (via Merge.dev webhook), determines their work state, assigns the appropriate state-specific handbook version, sends it via email with a Teams notification, and tracks acknowledgment. This automates the most common handbook distribution scenario.
Implementation:
Workflow Logic (Pseudocode):
Node 1: Webhook Trigger
{
"name": "New Hire Webhook",
"type": "n8n-nodes-base.webhook",
"parameters": {
"httpMethod": "POST",
"path": "new-hire-handbook",
"authentication": "headerAuth",
"responseMode": "onReceived",
"responseCode": 200
}
}
Node 2: Parse Employee Data
{
"name": "Parse Employee",
"type": "n8n-nodes-base.set",
"parameters": {
"values": {
"string": [
{"name": "employee_name", "value": "={{$json.data.first_name}} {{$json.data.last_name}}"},
{"name": "employee_email", "value": "={{$json.data.work_email}}"},
{"name": "work_state", "value": "={{$json.data.work_location.state}}"},
{"name": "department", "value": "={{$json.data.department.name}}"},
{"name": "start_date", "value": "={{$json.data.start_date}}"}
]
}
}
}
Node 3: Generate Welcome Email via OpenAI
{
"name": "Generate Welcome Email",
"type": "n8n-nodes-base.openAi",
"parameters": {
"resource": "chat",
"model": "gpt-5.4-mini",
"messages": {
"values": [
{
"role": "system",
"content": "You are an internal communications specialist. Write a warm, professional welcome email for a new employee receiving their employee handbook. Keep under 300 words. Include: welcome message, brief handbook overview, instruction to read and acknowledge, and HR contact for questions."
},
{
"role": "user",
"content": "Write a welcome email for {{$node['Parse Employee'].json.employee_name}} joining the {{$node['Parse Employee'].json.department}} department starting {{$node['Parse Employee'].json.start_date}}. Company: [CLIENT COMPANY NAME]. HR contact: [HR_CONTACT_EMAIL]."
}
]
},
"options": {
"temperature": 0.5,
"maxTokens": 500
}
}
}
Node 4: Send Email via Microsoft 365
{
"name": "Send Handbook Email",
"type": "n8n-nodes-base.microsoftOutlook",
"parameters": {
"resource": "message",
"operation": "send",
"toRecipients": "={{$node['Parse Employee'].json.employee_email}}",
"subject": "Welcome to [Company Name] — Your Employee Handbook",
"bodyContent": "={{$node['Generate Welcome Email'].json.message.content}}",
"bodyContentType": "HTML"
}
}
Node 5: Create SharePoint Acknowledgment Record
{
"name": "Create Ack Record",
"type": "n8n-nodes-base.microsoftSharePoint",
"parameters": {
"operation": "create",
"siteId": "[SHAREPOINT_SITE_ID]",
"listId": "[POLICY_ACKNOWLEDGMENTS_LIST_ID]",
"fields": {
"EmployeeName": "={{$node['Parse Employee'].json.employee_name}}",
"EmployeeEmail": "={{$node['Parse Employee'].json.employee_email}}",
"PolicyName": "Employee Handbook — Full",
"SentDate": "={{$now.toISO()}}",
"AckStatus": "Pending"
}
}
}
Employee Policy Q&A Bot Handler
Type: integration
Backend handler for the Microsoft Teams policy Q&A bot. Receives employee questions, retrieves relevant policy content via RAG from the approved handbook in Pinecone, generates accurate answers grounded in official policy documents, and always includes disclaimers and HR contact information.
Implementation:
# Employee Policy Q&A Bot Handler
# File: api/bot_handler.py
# Add this to the existing FastAPI application
from fastapi import Request, HTTPException
from pydantic import BaseModel
from typing import Optional
import os
import json
from openai import OpenAI
from pinecone import Pinecone

# Bot configuration
BOT_DISCLAIMER = (
    "\n\n---\n"
    "ℹ️ *This response is based on current company policy documents and is for "
    "informational purposes only. It does not constitute legal or HR advice. "
    "For specific situations or questions, please contact your HR representative "
    "at [HR_CONTACT_EMAIL].*"
)

NO_ANSWER_RESPONSE = (
    "I wasn't able to find a clear answer to your question in our current policy "
    "documents. Please reach out to your HR representative at [HR_CONTACT_EMAIL] "
    "for assistance with this question."
)

class BotMessage(BaseModel):
    type: str
    text: Optional[str] = None
    from_id: Optional[str] = None
    from_name: Optional[str] = None
    conversation_id: Optional[str] = None

oai = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))
pc = Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
index = pc.Index('hr-handbook-policies')

def retrieve_policy_context(question: str, top_k: int = 5) -> tuple:
    """Retrieve relevant policy content for the employee's question."""
    embedding = oai.embeddings.create(
        model='text-embedding-3-large',
        input=[question]
    ).data[0].embedding

    # Search approved handbook content only (not drafts)
    results = index.query(
        vector=embedding,
        namespace='handbook',
        top_k=top_k,
        include_metadata=True
    )

    context_chunks = []
    sources = []
    for match in results.matches:
        if match.score > 0.35:  # Higher threshold for Q&A accuracy
            context_chunks.append(match.metadata.get('text', ''))
            sources.append(match.metadata.get('source', 'Company Policy'))
    return '\n\n'.join(context_chunks), list(set(sources))

def generate_answer(question: str, context: str, sources: list) -> str:
    """Generate an answer grounded in policy documents."""
    if not context.strip():
        return NO_ANSWER_RESPONSE

    system_prompt = """You are a helpful employee policy assistant. Your ONLY job is to answer employee questions based on the provided company policy documents.
RULES:
1. ONLY answer based on the provided policy context — never make up or infer policy details
2. If the context doesn't contain enough information to fully answer, say so clearly
3. Quote relevant policy language when possible
4. Use simple, clear language (8th-grade reading level)
5. If the question involves a specific personal situation (e.g., 'Can I take leave for X?'), direct them to HR for personalized guidance
6. Never provide legal advice or interpret laws
7. Be friendly and helpful in tone
8. Keep answers concise (under 300 words)
9. If multiple policies are relevant, reference each by name"""

    user_prompt = f"""Employee question: {question}
Relevant policy documents:
{context}
Sources: {', '.join(sources)}
Provide a helpful, accurate answer based ONLY on the above policy content."""

    response = oai.chat.completions.create(
        model='gpt-5.4-mini',  # Cost-optimized for Q&A
        messages=[
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_prompt}
        ],
        temperature=0.2,  # Low temperature for factual accuracy
        max_tokens=500
    )
    return response.choices[0].message.content

# FastAPI endpoint for Teams bot
# This handles the Bot Framework webhook
async def handle_bot_message(request: Request):
    """Process incoming Teams bot messages."""
    body = await request.json()
    if body.get('type') != 'message':
        return {'status': 'ok'}  # Ignore non-message activities

    question = body.get('text', '').strip()
    user_name = body.get('from', {}).get('name', 'Employee')
    if not question:
        return {
            'type': 'message',
            'text': "Please type a question about company policies and I'll do my best to help! 📋"
        }

    # Log the query (for analytics, not storing PII)
    print(f'Policy Q&A query from {user_name}: {question[:100]}...')

    # Retrieve and generate
    context, sources = retrieve_policy_context(question)
    answer = generate_answer(question, context, sources)

    # Add source references and disclaimer
    if sources:
        answer += f"\n\n📄 *Sources: {', '.join(sources)}*"
    answer += BOT_DISCLAIMER

    return {
        'type': 'message',
        'text': answer
    }

# Register in main.py:
# app.post('/bot/messages')(handle_bot_message)
Key Design Decisions
- Uses GPT-5.4 mini for cost efficiency (Q&A volume can be high)
- Higher similarity threshold (0.35) than generation (0.30) to reduce false matches
- Only searches 'handbook' namespace (approved content), never drafts
- Temperature 0.2 for maximum factual accuracy
- Mandatory disclaimer on every response
- Graceful handling when no relevant policy found
- No PII stored in logs
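The effect of the two thresholds is easy to see side by side. A quick illustration with made-up similarity scores: the same match list filtered at the generation threshold (0.30) and at the stricter Q&A threshold (0.35).

```python
# Demo of threshold filtering with invented scores. The real handlers apply
# these cutoffs to Pinecone match scores; this sketch just shows how a
# 0.05 tighter threshold drops borderline matches.
def filter_matches(scores, threshold):
    return [s for s in scores if s > threshold]

scores = [0.62, 0.34, 0.31, 0.28]
print(len(filter_matches(scores, 0.30)))  # 3 matches pass for generation
print(len(filter_matches(scores, 0.35)))  # 1 match passes for Q&A
```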
Compliance Monitoring and Alert Agent
Type: agent
An automated agent that cross-references the client's existing handbook content against a curated set of fast-changing compliance areas, uses AI analysis to flag potential gaps, and generates alert notifications to the HR team when policies may need updating. Runs on a configurable schedule (default: weekly).
Implementation:
# Compliance Monitoring and Alert Agent
# File: api/compliance_monitor.py
# Runs as a scheduled task via an n8n cron trigger or an Azure Functions timer.
import os
import json
from datetime import datetime, timedelta
from typing import List, Dict
from openai import OpenAI
from pinecone import Pinecone
import requests
class ComplianceMonitor:
    """
    Monitors for potential compliance issues by cross-referencing
    the client's handbook content against known compliance topics
    and generating update recommendations.
    """

    # Key compliance areas to monitor
    COMPLIANCE_TOPICS = [
        {
            'area': 'Pay Transparency',
            'states': ['CA', 'CO', 'NY', 'WA', 'IL', 'HI', 'MD', 'NV', 'RI', 'CT'],
            'check_prompt': 'Review the company pay transparency policy for compliance with current state requirements. Check for: salary range disclosure requirements, pay equity provisions, prohibition on salary history inquiries.'
        },
        {
            'area': 'Paid Leave',
            'states': ['CA', 'CO', 'CT', 'MA', 'MD', 'NJ', 'NY', 'OR', 'RI', 'WA'],
            'check_prompt': 'Review the company leave policies for compliance with state paid family and medical leave laws, paid sick leave requirements, and any new leave entitlements.'
        },
        {
            'area': 'AI in Employment',
            'states': ['NY', 'IL', 'CO', 'CA'],
            'check_prompt': 'Check if the company uses any AI/automated tools for hiring, performance evaluation, or employment decisions that may require disclosure, bias audits, or impact assessments under current state laws.'
        },
        {
            'area': 'Remote Work',
            'states': ['ALL'],
            'check_prompt': 'Review remote work and telecommuting policies for compliance with expense reimbursement requirements, tax nexus implications, and state-specific remote worker protections.'
        },
        {
            'area': 'Anti-Harassment Training',
            'states': ['CA', 'CT', 'DE', 'IL', 'ME', 'NY'],
            'check_prompt': 'Verify that anti-harassment training requirements are being met per state mandates, including frequency, content requirements, and recordkeeping.'
        },
        {
            'area': 'Cannabis/Marijuana',
            'states': ['CA', 'CO', 'IL', 'NJ', 'NY', 'VA', 'CT', 'MT', 'NM', 'AZ'],
            'check_prompt': 'Review drug testing and substance abuse policies for compliance with state cannabis legalization laws and employee protections for off-duty use.'
        }
    ]

    def __init__(self, client_states: List[str], company_name: str):
        self.oai = OpenAI(api_key=os.environ.get('OPENAI_API_KEY'))
        self.pc = Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
        self.index = self.pc.Index('hr-handbook-policies')
        self.client_states = [s.upper() for s in client_states]
        self.company_name = company_name

    def get_applicable_topics(self) -> List[Dict]:
        """Filter compliance topics to those relevant to the client's states."""
        applicable = []
        for topic in self.COMPLIANCE_TOPICS:
            if topic['states'] == ['ALL'] or any(
                s in self.client_states for s in topic['states']
            ):
                matching_states = (
                    self.client_states if topic['states'] == ['ALL']
                    else [s for s in topic['states'] if s in self.client_states]
                )
                applicable.append({**topic, 'matching_states': matching_states})
        return applicable

    def retrieve_current_policy(self, topic_area: str) -> str:
        """Retrieve the client's current policy for a given topic."""
        embedding = self.oai.embeddings.create(
            model='text-embedding-3-large',
            input=[topic_area]
        ).data[0].embedding
        results = self.index.query(
            vector=embedding,
            namespace='handbook',
            top_k=5,
            include_metadata=True
        )
        texts = [m.metadata.get('text', '') for m in results.matches if m.score > 0.3]
        return '\n'.join(texts) if texts else 'NO EXISTING POLICY FOUND'

    def analyze_compliance_gap(self, topic: Dict, current_policy: str) -> Dict:
        """Use AI to analyze potential compliance gaps."""
        prompt = f"""You are an HR compliance analyst reviewing {self.company_name}'s employee handbook.
COMPLIANCE AREA: {topic['area']}
APPLICABLE STATES: {', '.join(topic['matching_states'])}
CURRENT POLICY TEXT:
{current_policy}
ANALYSIS REQUEST:
{topic['check_prompt']}
Provide your analysis in this JSON format:
{{
"risk_level": "low|medium|high|critical",
"findings": ["List of specific findings"],
"recommended_actions": ["List of specific recommended actions"],
"states_requiring_attention": ["List of specific states with issues"],
"urgency": "immediate|within_30_days|within_90_days|next_review_cycle"
}}
IMPORTANT: Be conservative in your assessment. Flag potential issues rather than dismissing them. Note: Your analysis is preliminary and should be verified by employment counsel."""
        response = self.oai.chat.completions.create(
            model='gpt-4.1',
            messages=[{'role': 'user', 'content': prompt}],
            temperature=0.2,
            response_format={'type': 'json_object'}
        )
        try:
            analysis = json.loads(response.choices[0].message.content)
        except json.JSONDecodeError:
            analysis = {
                'risk_level': 'medium',
                'findings': ['Analysis could not be parsed — manual review recommended'],
                'recommended_actions': ['Manually review this compliance area'],
                'states_requiring_attention': topic['matching_states'],
'urgency': 'within_30_days'
}
analysis['area'] = topic['area']
analysis['matching_states'] = topic['matching_states']
return analysis
def run_compliance_scan(self) -> Dict:
"""Execute full compliance scan and return results."""
applicable_topics = self.get_applicable_topics()
results = []
for topic in applicable_topics:
current_policy = self.retrieve_current_policy(topic['area'])
analysis = self.analyze_compliance_gap(topic, current_policy)
results.append(analysis)
# Sort by risk level
risk_order = {'critical': 0, 'high': 1, 'medium': 2, 'low': 3}
results.sort(key=lambda x: risk_order.get(x.get('risk_level', 'low'), 3))
scan_report = {
'scan_date': datetime.now().isoformat(),
'company': self.company_name,
'states_covered': self.client_states,
'topics_scanned': len(results),
'critical_findings': len([r for r in results if r['risk_level'] == 'critical']),
'high_findings': len([r for r in results if r['risk_level'] == 'high']),
'results': results,
'disclaimer': 'This AI-generated compliance scan is preliminary and does not constitute legal advice. All findings should be reviewed by qualified employment counsel.'
}
return scan_report
def format_alert_email(self, scan_report: Dict) -> str:
"""Format scan results as an HTML email for the HR team."""
critical_count = scan_report['critical_findings']
high_count = scan_report['high_findings']
html = f"""<h2>Weekly HR Compliance Scan Report</h2>
<p><strong>Company:</strong> {scan_report['company']}</p>
<p><strong>Scan Date:</strong> {scan_report['scan_date'][:10]}</p>
<p><strong>States Covered:</strong> {', '.join(scan_report['states_covered'])}</p>
<p><strong>Summary:</strong> {critical_count} critical, {high_count} high-priority findings</p>
<hr>"""
for result in scan_report['results']:
color = {'critical': '#d32f2f', 'high': '#f57c00', 'medium': '#fbc02d', 'low': '#388e3c'}
level_color = color.get(result['risk_level'], '#666')
html += f"""
<h3 style='color: {level_color}'>[{result['risk_level'].upper()}] {result['area']}</h3>
<p><strong>States:</strong> {', '.join(result.get('matching_states', []))}</p>
<p><strong>Urgency:</strong> {result.get('urgency', 'N/A')}</p>
<p><strong>Findings:</strong></p><ul>"""
for finding in result.get('findings', []):
html += f'<li>{finding}</li>'
html += '</ul><p><strong>Recommended Actions:</strong></p><ul>'
for action in result.get('recommended_actions', []):
html += f'<li>{action}</li>'
html += '</ul><hr>'
html += f"""<p style='color: #666; font-size: 12px'><em>{scan_report['disclaimer']}</em></p>"""
return html
# Usage (called by n8n cron workflow or Azure Functions timer):
# monitor = ComplianceMonitor(
# client_states=['CA', 'NY', 'TX', 'IL'],
# company_name='Acme Staffing Inc.'
# )
# report = monitor.run_compliance_scan()
# email_html = monitor.format_alert_email(report)
# Send email_html via Microsoft Graph API to HR team

Scheduling
Configure in n8n as a Cron trigger node:
- Weekly scan: 0 9 * * 1 (Monday 9 AM)
- Monthly deep scan: 0 9 1 * * (1st of month, 9 AM)
Expected API cost per weekly scan: ~$0.50–$2.00 (6 topics × GPT-4.1 analysis)
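The "send via Microsoft Graph" step left as a comment in the usage sketch above can be wired up against Graph's sendMail endpoint. This is a sketch: the payload shape follows Graph's documented message schema, but the sender address, recipients, and access-token acquisition are placeholders you would supply from your Azure AD app registration (with Mail.Send permission).

```python
GRAPH_SENDMAIL_URL = 'https://graph.microsoft.com/v1.0/users/{sender}/sendMail'

def build_sendmail_payload(subject: str, html_body: str, recipients: list[str]) -> dict:
    """Build a Microsoft Graph sendMail request body for an HTML email."""
    return {
        'message': {
            'subject': subject,
            'body': {'contentType': 'HTML', 'content': html_body},
            'toRecipients': [
                {'emailAddress': {'address': addr}} for addr in recipients
            ],
        },
        'saveToSentItems': True,
    }

def send_compliance_alert(access_token: str, sender: str, payload: dict) -> None:
    """POST the alert email via Graph; Graph returns 202 Accepted on success."""
    import requests  # deferred so payload building stays dependency-free
    resp = requests.post(
        GRAPH_SENDMAIL_URL.format(sender=sender),
        headers={'Authorization': f'Bearer {access_token}'},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
```

In the n8n workflow this replaces the final comment in the usage block: build the payload from `format_alert_email()` output and post it with an app-only token.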
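The quoted per-scan cost range can be sanity-checked with a back-of-envelope estimator. The token counts and per-1K-token prices below are illustrative assumptions only, not published rates; substitute current OpenAI pricing before budgeting.

```python
def estimate_scan_cost(topics: int,
                       input_tokens_per_topic: int = 8_000,   # assumed: prompt + retrieved policy chunks
                       output_tokens_per_topic: int = 1_000,  # assumed: JSON analysis
                       input_price_per_1k: float = 0.01,      # assumed rate, USD
                       output_price_per_1k: float = 0.03) -> float:
    """Rough per-scan API cost: (prompt + completion tokens) x price, per topic."""
    per_topic = (input_tokens_per_topic / 1000 * input_price_per_1k
                 + output_tokens_per_topic / 1000 * output_price_per_1k)
    return round(per_topic * topics, 4)
```

With these assumed inputs, a 6-topic scan lands around $0.66, inside the $0.50–$2.00 range above; heavier prompts or pricier models push it toward the top of the range.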
Testing & Validation
- POLICY GENERATION TEST: Submit a test policy request via the /generate-policy endpoint for 'Paid Time Off Policy' covering states CA, NY, and TX. Verify the response includes: (a) AI-generated disclaimer at the top, (b) state-specific sections for California PTO payout requirements, (c) [LEGAL REVIEW NEEDED] markers, (d) acknowledgment signature block, (e) response completes in under 30 seconds
- RAG RETRIEVAL ACCURACY TEST: Query the Pinecone index with 'harassment policy California training requirements' and verify that returned chunks include SixFifty compliance template content mentioning California's SB 1343 anti-harassment training mandate. Minimum relevance score should be >0.5 for top 3 results
- MULTI-STATE COMPLIANCE TEST: Generate an anti-harassment policy for a client operating in CA, NY, IL, CT, and TX. Verify that the output correctly identifies mandatory training requirements for CA (SB 1343), NY (state law plus NYC Local Law 96), IL (SB 75), and CT (Time's Up Act), and correctly notes that TX has no state-mandated training
- MICROSOFT 365 COPILOT VALIDATION: Open a blank Word document on an HR user's workstation, activate Copilot, and prompt: 'Draft a remote work policy for our company based on our existing HR policies in SharePoint.' Verify Copilot references content from the HR Policy Management SharePoint site and produces relevant output
- HRIS INTEGRATION TEST: Create a test employee record in the client's HRIS (BambooHR/Rippling/etc.), verify the Merge.dev webhook fires within 60 seconds, confirm the n8n workflow triggers, and validate that a test handbook email is sent to the test employee's email address with the correct state-specific handbook version
- E-SIGNATURE ACKNOWLEDGMENT TEST: Trigger a policy distribution to a test employee. Verify: (a) email arrives with handbook attachment/link, (b) Teams notification is sent, (c) SharePoint 'Policy Acknowledgments' list shows new record with Status='Pending', (d) after 7 days the reminder workflow fires, (e) after acknowledgment the Status updates to 'Acknowledged' with timestamp
- TEAMS Q&A BOT TEST: In Microsoft Teams, send the Policy Assistant bot the message 'How many PTO days do I get per year?' Verify: (a) bot responds within 10 seconds, (b) response cites specific policy content from the approved handbook, (c) response includes the mandatory disclaimer, (d) response includes HR contact information. Then send 'What is the company's policy on cryptocurrency investments?' (a question NOT covered by policy) and verify the bot responds with the NO_ANSWER_RESPONSE directing to HR
- COMPLIANCE MONITOR TEST: Run the ComplianceMonitor.run_compliance_scan() for a test client operating in CA, NY, and IL. Verify: (a) scan completes without errors, (b) results include findings for all applicable compliance topics, (c) California-specific findings are flagged, (d) risk levels are assigned to all results, (e) alert email HTML renders correctly in Outlook, (f) total API cost for scan is under $5
- SECURITY VALIDATION: Verify that (a) all API keys are stored in Azure Key Vault and not in any code files or environment variables on shared machines, (b) the FastAPI endpoint requires Azure AD authentication, (c) the SharePoint 'Approved Policies' library has broken role inheritance with restricted edit permissions, (d) MFA is enforced for all HR user accounts, (e) the bot cannot access the 'Policy Drafts' library (it reads only the 'handbook' namespace in Pinecone)
- CONTENT ACCURACY SMOKE TEST: Generate 5 different policy types (PTO, harassment, remote work, social media, leave) and have the client's HR director perform a qualitative review. Score each on a 1-5 scale for: accuracy, completeness, tone appropriateness, and state-specific coverage. All policies should score 3+ before go-live. Any score below 3 requires prompt template adjustment and regeneration.
- DISASTER RECOVERY TEST: Temporarily revoke the OpenAI API key and submit a policy generation request. Verify the system returns a graceful error message (not a stack trace) and the n8n workflow does not crash. Restore the key and verify normal operation resumes. Repeat with Pinecone key. Document that the handbook platform (SixFifty/AirMason) continues to function independently of the API service.
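Several checks from the policy generation test above (the AI disclaimer, the [LEGAL REVIEW NEEDED] markers, the acknowledgment block, the CA payout language) can be scripted rather than eyeballed. A minimal sketch; the marker strings follow the conventions used earlier in this guide, and the assumption that the endpoint returns the draft as plain text may need adjusting to your actual API schema.

```python
def validate_policy_response(policy_text: str) -> dict:
    """Run the automated checks from the POLICY GENERATION TEST against a draft.
    Assumes the /generate-policy endpoint returns the draft as plain text."""
    text_lower = policy_text.lower()
    checks = {
        'has_ai_disclaimer': 'ai-generated' in text_lower,
        'has_legal_review_markers': '[LEGAL REVIEW NEEDED]' in policy_text,
        'has_acknowledgment_block': 'acknowledg' in text_lower,
        'mentions_ca_pto_payout': 'payout' in text_lower and 'California' in policy_text,
    }
    checks['passed'] = all(checks.values())
    return checks
```

Run this against the test response and fail the go-live gate if `passed` is False; the per-check flags tell you which prompt template section to fix.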
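The graceful-failure behavior exercised by the disaster recovery test can be enforced with a thin boundary wrapper around the provider call, so a revoked key surfaces as a clean error payload rather than a stack trace that crashes the n8n workflow. This is a sketch: the fallback message text is a placeholder, and a FastAPI handler would return it with an appropriate HTTP status.

```python
FALLBACK_MESSAGE = (
    'Policy generation is temporarily unavailable. Your request has been '
    'logged; please retry shortly or contact the MSP service desk.'
)

def safe_generate(generate_fn, *args, **kwargs) -> dict:
    """Call the underlying generation function, converting any provider
    failure (auth, rate limit, outage) into a clean error payload instead of
    letting a stack trace propagate to the caller or the n8n workflow."""
    try:
        return {'ok': True, 'result': generate_fn(*args, **kwargs)}
    except Exception as exc:  # deliberate catch-all at the service boundary
        # Log the real exception server-side; expose only the friendly message.
        return {'ok': False, 'error': FALLBACK_MESSAGE,
                'detail_logged': type(exc).__name__}
```

The handbook platform (SixFifty/AirMason) is untouched by this path, which is what the DR test documents: only AI-assisted generation degrades.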
Client Handoff
Client Handoff Checklist
Training Sessions (Deliver 3 sessions, 60-90 minutes each)
Session 1: AI-Assisted Policy Drafting (HR Editors)
- How to use the policy generation API via the web interface or direct API calls
- Using Microsoft 365 Copilot in Word to draft and refine policy documents
- Understanding AI-generated content: what to trust, what to verify, where the [LEGAL REVIEW NEEDED] markers appear
- How to use prompt templates for different policy types (PTO, harassment, remote work, etc.)
- Hands-on exercise: Generate, review, and refine a PTO policy update
Session 2: Handbook Distribution & Acknowledgment Tracking (HR Managers)
- How the new hire handbook distribution workflow operates end-to-end
- Monitoring the Policy Acknowledgments SharePoint list
- Handling overdue acknowledgments and escalation procedures
- Using the compliance monitoring weekly reports
- How to initiate a policy update distribution to all employees
Session 3: Teams Policy Q&A Bot Administration (HR Admins)
- How the bot works and what it can/cannot answer
- How to update the RAG knowledge base when policies change (re-ingestion process)
- Monitoring bot usage and common employee questions
- When and how to escalate bot inquiries to human HR
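The re-ingestion process covered in Session 3 can be handed off as a short script. This is a sketch that reuses the index name, namespace, and embedding model from the ComplianceMonitor code above; the chunk size and overlap are tunable assumptions, not fixed requirements.

```python
import os

def chunk_policy(text: str, max_chars: int = 1_500, overlap: int = 200) -> list[str]:
    """Split an approved policy document into overlapping chunks for embedding."""
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks

def reingest_policy(doc_id: str, text: str) -> int:
    """Embed an updated policy and upsert it into the 'handbook' namespace,
    using the same model and index names as the ComplianceMonitor."""
    from openai import OpenAI
    from pinecone import Pinecone
    oai = OpenAI(api_key=os.environ['OPENAI_API_KEY'])
    index = Pinecone(api_key=os.environ['PINECONE_API_KEY']).Index('hr-handbook-policies')
    chunks = chunk_policy(text)
    embeddings = oai.embeddings.create(model='text-embedding-3-large', input=chunks)
    index.upsert(
        vectors=[
            (f'{doc_id}-{i}', e.embedding, {'text': c})
            for i, (c, e) in enumerate(zip(chunks, embeddings.data))
        ],
        namespace='handbook',
    )
    return len(chunks)
```

Run this once per newly approved policy document; deleting the old vectors for a superseded `doc_id` first keeps the bot from citing stale text.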
Documentation Package (Leave with Client)
Success Criteria Review (Conduct at 30-day post-go-live meeting)
Maintenance
Ongoing Maintenance Plan
Weekly Tasks (MSP — 1-2 hours/week)
- Review compliance monitoring scan results and flag critical/high findings to client HR team
- Check OpenAI and Pinecone API usage dashboards — verify costs are within budget ($50/month cap)
- Monitor n8n workflow execution logs for failures or errors in automated pipelines
- Review Teams bot interaction logs for unanswered questions (may indicate policy gaps)
- Verify automated backups of SharePoint policy libraries are completing successfully
Monthly Tasks (MSP — 2-4 hours/month)
- Update Pinecone RAG knowledge base with any newly approved policy documents
- Review and rotate API keys if security policy requires monthly rotation
- Patch and update n8n instance, Docker containers, and Python dependencies
- Generate monthly usage report for client: policies generated, bot queries handled, acknowledgment rates, compliance scan summary
- Review Microsoft 365 Copilot adoption metrics for HR users (are they actually using it?)
- Check for SixFifty/AirMason platform updates and apply new compliance templates
Quarterly Tasks (MSP + Client HR — 4-8 hours/quarter)
- Comprehensive compliance review: run deep compliance scan, cross-reference with employment law update services (SixFifty, BLR, SHRM)
- Prompt library optimization: review generated content quality, update prompt templates based on HR team feedback
- Re-ingest updated compliance templates from SixFifty/BLR into Pinecone vector database
- Quarterly business review with client: ROI assessment, usage metrics, satisfaction survey, roadmap discussion
- Test disaster recovery procedures (API failover, manual fallback process)
- Review and update bot response quality — add new policy content as the handbook evolves
Annual Tasks
- Full handbook regeneration cycle: use AI to generate a comprehensive update of the entire handbook against current employment law
- Employment attorney annual review engagement (client responsibility, MSP coordinates)
- Renew software subscriptions (SixFifty $399/yr, AirMason $999/yr if applicable)
- Architecture review: evaluate new AI models, assess whether to upgrade from GPT-4.1 to newer models
- Security audit: review all access permissions, API key storage, data retention policies
- Multi-state expansion check: has the client expanded to new states? Add new jurisdictions to compliance monitoring
SLA Considerations
- Response Time: Policy generation API — 4-hour response for critical issues, next business day for standard requests
- Uptime Target: 99.5% for API service and bot (Azure Container Apps SLA)
- Compliance Alerts: Critical compliance findings escalated to client within 24 hours of detection
- Content Update Turnaround: New policy drafts generated within 2 business days of request; compliance-triggered updates within 5 business days
Escalation Path
Model Update/Retraining Triggers
- OpenAI or Anthropic releases a new model version — test within 2 weeks, migrate if quality/cost improves
- Client adds operations in a new state — re-run SixFifty for new jurisdiction, update RAG database, update compliance monitoring
- Major employment law change (federal or in client's states) — immediate compliance scan + handbook update cycle
- HR team reports declining content quality — review and update prompt templates, re-ingest latest compliance templates
- Bot accuracy drops below 80% (based on HR team spot-checks) — re-ingest updated handbook, review embedding quality
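The 80% bot-accuracy trigger above implies a simple spot-check calculation. A minimal sketch, assuming the HR reviewer labels each sampled bot answer as correct or incorrect:

```python
def bot_accuracy(labels: list[bool]) -> float:
    """Share of spot-checked bot answers the HR reviewer marked correct."""
    if not labels:
        raise ValueError('Need at least one labeled sample')
    return sum(labels) / len(labels)

def needs_reingestion(labels: list[bool], threshold: float = 0.80) -> bool:
    """True when spot-check accuracy falls below the retraining trigger."""
    return bot_accuracy(labels) < threshold
```

A sample of 20 labeled answers per review cycle is a reasonable starting point; with smaller samples a single mislabel swings the rate by 5+ points, so treat a borderline result as a prompt to sample more, not to re-ingest immediately.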
Alternatives
Copilot-Only Approach (No Custom API)
Use Microsoft 365 Copilot as the sole AI content generation tool, with SixFifty or BLR for compliance templates and SharePoint for document management. No custom FastAPI service, no Pinecone RAG, no Teams bot. HR staff use Copilot in Word to draft policies referencing SharePoint-stored compliance templates.
AirMason-Centric Approach (Platform-First)
Use AirMason as the primary end-to-end platform for handbook creation, AI-assisted drafting, compliance updates, branded formatting, and employee distribution. Supplement with Copilot for ad-hoc content drafting. Eliminates the need for custom API development, Pinecone, and n8n.
Fully Custom Self-Hosted LLM Approach
Deploy an open-source LLM (e.g., Llama 3.1 70B or Mistral Large) on dedicated GPU infrastructure for completely air-gapped, on-premises policy generation. All data stays within the client's network — no external API calls.
ChatGPT Teams + Manual Workflow (Budget Approach)
Provide HR team members with ChatGPT Teams licenses ($25/user/month) and a documented prompt library (the same prompt templates from this guide). No custom integrations, no RAG pipeline, no bot. HR staff manually copy-paste prompts, generate content in ChatGPT, and handle distribution through existing email/SharePoint workflows.
BLR Handbook Builder + Power Automate (Microsoft-Native)
Use BLR Employee Handbook Builder ($400/year) for compliance templates, Microsoft 365 Copilot for content drafting, Power Automate for workflow automation (instead of n8n), and SharePoint + Power Apps for a custom handbook distribution portal. Entirely within the Microsoft ecosystem.