
Implementation Guide: Analyze project profitability and utilization rates by consultant and engagement type

Step-by-step implementation guide for deploying AI to analyze project profitability and utilization rates by consultant and engagement type for Professional Services clients.

Hardware Procurement

Dell PowerEdge T360 Tower Server

Dell Technologies | PowerEdge T360 | Qty: 1

$3,500 MSP cost (configured with 32GB RAM, 1TB SSD) / $5,000 suggested resale

On-premises server for Power BI Report Server deployment. Only required if the client has data residency requirements, regulatory constraints (e.g., government consulting), or cannot use cloud SaaS. Hosts Power BI Report Server, SQL Server database, and optional data warehouse. Most deployments will NOT need this — cloud-first is recommended.

HPE ProLiant ML30 Gen11 Tower Server

Hewlett Packard Enterprise | ProLiant ML30 Gen11 (P71383-001 base) | Qty: 1

$2,500 MSP cost (configured with 32GB RAM, 1TB SSD) / $3,800 suggested resale

Alternative on-premises server option. Lower cost than Dell T360 with comparable specs. Intel Xeon 6315P processor, supports up to 128GB RAM. Same use case as Dell above — only procure if on-premises deployment is required.

Client Workstation RAM Upgrade

DDR4/DDR5 SODIMM 16GB kit (varies by workstation model)

Crucial / Kingston | DDR4/DDR5 SODIMM 16GB kit (varies by workstation model) | Qty: 5

$40–$80 per module MSP cost / $80–$120 suggested resale

RAM upgrades for analyst/manager workstations that will run Power BI Desktop for report authoring. Power BI Desktop performs best with 32GB RAM; many existing workstations may only have 16GB. Upgrade the 3–5 workstations that will be used for dashboard authoring and heavy analytics work.

Software Procurement

BigTime PSA - Advanced Plan

BigTime Software | SaaS per-seat monthly | Qty: 30 users

$35/user/month for consultants on the Advanced plan; $45/user/month on the Premier plan for managers needing advanced analytics. For a 30-user firm: ~$1,050–$1,350/month ($12,600–$16,200/year)

Primary Professional Services Automation platform. Provides time tracking, project management, resource allocation, billing/invoicing, and built-in AI-driven utilization forecasting. Serves as the single system of record for all billable hours, project budgets, rate cards, and engagement types. Integrates directly with QuickBooks Online and Xero for accounting sync.

Microsoft Power BI Pro

Microsoft | SaaS per-seat monthly (via Microsoft CSP) | Qty: 5–10 manager/partner licenses

$14/user/month ($70–$140/month / $840–$1,680/year)

Business intelligence and analytics platform for building profitability and utilization dashboards. Power BI Pro enables report sharing, collaboration, and scheduled data refreshes. Connects to BigTime PSA via API, QuickBooks via connector, and HRIS via CSV/API imports. Required for all users who will view and interact with analytics dashboards.

Microsoft Power BI Premium Per User (PPU)

Microsoft | SaaS per-seat monthly (via Microsoft CSP) | Qty: 3–5 power users

$24/user/month ($72–$120/month or $864–$1,440/year for 3–5 users)

Upgraded Power BI tier that unlocks Copilot AI capabilities including natural language query generation, automated narrative insights, anomaly detection, and AI-powered report creation. Recommended for the 3–5 senior partners/managers who will most benefit from AI-assisted analysis. Can be mixed with Pro licenses for other users.

Microsoft 365 Business Premium

Microsoft | SaaS per-seat monthly (via Microsoft CSP) | Qty: 30 users

$22/user/month — $660/month ($7,920/year). Client may already have this.

Required foundation for Power BI Pro licensing. Also provides Azure AD/Entra ID for single sign-on across PSA and BI tools, Microsoft Teams for collaboration, and SharePoint for document management. Most professional services firms already have M365 — verify existing subscription tier.

Microsoft Power BI Desktop

Microsoft | Free desktop application | Qty: 3–5 authoring workstations

Free

Desktop application for building and authoring Power BI reports and dashboards. Installed on analyst/manager workstations. This is the primary development tool for creating all profitability and utilization dashboards. Requires Windows 10/11 64-bit.

Zapier - Professional Plan

Zapier | SaaS monthly | Qty: 1

$49.95–$69.95/month ($599–$839/year)

No-code integration platform for connecting data sources that lack direct Power BI connectors. Used to sync CRM deal data (HubSpot/Salesforce) into a staging spreadsheet or database for Power BI consumption, and to automate alerts when utilization or profitability thresholds are breached.

Azure SQL Database - Basic/Standard Tier

Microsoft | Azure SQL Database | S0 (10 DTU) recommended for SMB

$5–$75/month depending on DTU tier. S0 (10 DTU) at ~$15/month recommended for SMB.

Cloud-hosted SQL database serving as the central data warehouse for the analytics layer. Consolidates data from PSA, accounting, HRIS, and CRM into a single queryable source for Power BI. Provides the foundation for custom profitability calculations, historical trend analysis, and AI model training data.

Polaris PSA with ZeroTime

Replicon | SaaS per-seat monthly | Qty: 30 users

$49/user/month — $1,470/month ($17,640/year)

Alternative PSA platform with stronger native AI capabilities. SmartBudget feature optimizes revenue and creates rate cards for most profitable bidding scenarios. ZeroTime uses AI to auto-capture time from calendar, email, and app usage, reducing time entry friction. Recommended alternative for firms that prioritize AI-native features over ecosystem breadth.

Scoro - Essential Plan

Scoro | SaaS per-seat monthly | Qty: 30 users

$19.90/user/month (annual billing). For 30 users: $597/month ($7,164/year)

Alternative all-in-one PSA platform for firms wanting CRM + project management + finance + invoicing in a single tool. Best for smaller firms (5–50 users) that want to minimize the number of separate systems. Minimum 5 users required.

Prerequisites

  • Microsoft 365 Business Premium (or E3/E5) subscription active for all users who will access Power BI dashboards — verify current subscription tier before proceeding
  • Accounting system with API access enabled: QuickBooks Online (Essentials plan or higher), Xero (Standard or Premium), or Sage Intacct. The QBO/Xero account must have API access enabled and an admin-level API key or OAuth app configured
  • CRM system with API access: Salesforce (any edition with API access — Enterprise or above), HubSpot (Professional or above for API access), or Microsoft Dynamics 365
  • Active Azure subscription for Azure SQL Database provisioning — can be created as part of the M365 tenant
  • Minimum 6 months (ideally 12 months) of historical time and billing data available for import. Data must include: consultant names/IDs, project codes, hours logged, bill rates applied, and invoiced amounts. Data can be in CSV, Excel, or database export format
  • Documented and agreed-upon engagement type taxonomy — the firm must define 5–15 engagement types (e.g., Strategy Consulting, IT Implementation, Staff Augmentation, Training, Managed Services) before implementation begins
  • Documented fully-loaded cost rates for each consultant level/role. Must include base salary, benefits burden, and overhead allocation. Finance team sign-off required before dashboard development
  • Defined utilization targets by role/level (e.g., Senior Consultant: 75% billable, Manager: 60% billable, Partner: 40% billable). These targets are required for the utilization dashboard calculations
  • Firm leadership sign-off on data visibility policy: who can see individual-level utilization data vs. team-level aggregates only. Document this in writing before building role-based security in Power BI
  • All consultants must be trained on and committed to daily/weekly time entry discipline. This is the single most critical prerequisite — analytics are worthless without accurate time data
  • Client workstations for Power BI Desktop authoring must run Windows 10 or 11 (64-bit) with minimum 16GB RAM (32GB recommended). Power BI Desktop is not available for macOS
  • Reliable business internet connection: 25+ Mbps per concurrent analytics user
  • If GDPR applies (EU staff or clients): Data Protection Impact Assessment (DPIA) must be completed before deploying individual-level utilization tracking
  • DNS and firewall rules allowing outbound HTTPS to: *.powerbi.com, *.analysis.windows.net, *.bigtime.net (or chosen PSA vendor domain), *.database.windows.net (Azure SQL)

Installation Steps

Step 1: Discovery Workshop & KPI Definition

Conduct a 2–4 hour discovery workshop with firm leadership (managing partner, finance director, operations manager) to define the specific KPIs, engagement types, and profitability calculations that the analytics platform must support. This is the most critical step — getting the metrics wrong here means rebuilding dashboards later.

  • Engagement type taxonomy with definitions
  • Utilization calculation method (available hours denominator — typically 2,080 hrs/yr minus PTO/holidays)
  • Profitability calculation method (revenue minus direct labor cost, or revenue minus fully-loaded cost including overhead)
  • Role/level hierarchy and utilization targets
  • Dashboard audience and access levels
  • Current pain points and specific questions leadership wants answered
Note

Deliver a written KPI Definition Document for client sign-off before proceeding. The template should include: KPI name, formula, data source, target value, and dashboard owner. This document becomes the blueprint for all dashboard development. Allow 1–2 revision cycles. Do NOT skip this step — it prevents most scope creep later.
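To make the workshop outputs concrete, here is a minimal worked example of the two headline calculations (all figures are hypothetical; the exact denominator and cost components are whatever the workshop agrees on):
python
# Illustrative only: the figures below are hypothetical, not client data

# Utilization: billable hours divided by available hours
annual_hours = 2080
pto_holiday_hours = 200                     # assumed PTO + holiday allowance
available_hours = annual_hours - pto_holiday_hours   # 1,880 hrs/yr
billable_hours_ytd = 1410
utilization_rate = billable_hours_ytd / available_hours
print(f"Utilization: {utilization_rate:.1%}")         # 75.0%

# Profitability: revenue minus cost, using direct labor only or fully-loaded cost
revenue = 250_000
direct_labor_cost = 140_000                 # hours x hourly cost rate
overhead_allocation = 35_000                # only included in the fully-loaded view
gross_margin_pct = (revenue - direct_labor_cost) / revenue
fully_loaded_margin_pct = (revenue - direct_labor_cost - overhead_allocation) / revenue
print(f"Gross margin: {gross_margin_pct:.1%} | Fully-loaded margin: {fully_loaded_margin_pct:.1%}")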

Step 2: Provision Azure SQL Database as Central Data Warehouse

Create an Azure SQL Database instance that will serve as the central repository for consolidated analytics data. This warehouse sits between the source systems (PSA, accounting, HRIS) and Power BI, enabling complex joins, historical storage, and custom calculations that would be impossible with direct Power BI connections alone.

bash
az login
az group create --name rg-profitability-analytics --location eastus
az sql server create --name sql-profitability-[clientname] --resource-group rg-profitability-analytics --location eastus --admin-user sqladmin --admin-password [GenerateStrongPassword]
az sql db create --resource-group rg-profitability-analytics --server sql-profitability-[clientname] --name db-analytics --service-objective S0 --backup-storage-redundancy Local
az sql server firewall-rule create --resource-group rg-profitability-analytics --server sql-profitability-[clientname] --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
az sql server firewall-rule create --resource-group rg-profitability-analytics --server sql-profitability-[clientname] --name AllowOfficeIP --start-ip-address [CLIENT_OFFICE_IP] --end-ip-address [CLIENT_OFFICE_IP]
Note

S0 tier (10 DTU, 250GB) is sufficient for most SMB workloads and costs ~$15/month. Scale up to S1 or S2 if query performance degrades. Enable Azure AD authentication in addition to SQL auth for better security. Store the admin credentials in the client's password manager — never in plain text. Transparent Data Encryption (TDE) is enabled by default; leave it on. Consider enabling Auditing for SOC 2 compliance.
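Before building the schema, it is worth a quick connectivity check from a workstation inside the allowed office IP range. A minimal sketch using pyodbc (the same driver string used by the ETL scripts later in this guide), with the server name and credentials as placeholders:
python
import pyodbc

# Placeholder server name and credentials: substitute the values created above
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=sql-profitability-[clientname].database.windows.net;"
    "Database=db-analytics;Uid=sqladmin;Pwd=[password];"
    "Encrypt=yes;TrustServerCertificate=no;"
)

# A successful SELECT confirms the firewall rule and credentials are correct
conn = pyodbc.connect(conn_str, timeout=15)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
conn.close()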

Step 3: Create Data Warehouse Schema

Connect to the Azure SQL Database and create the core schema tables that will hold the consolidated analytics data. This dimensional model is optimized for profitability and utilization reporting.

sql
-- Connect via Azure Data Studio or SSMS to sql-profitability-[clientname].database.windows.net

CREATE SCHEMA analytics;
GO

-- Dimension Tables
CREATE TABLE analytics.dim_consultant (
    consultant_id INT PRIMARY KEY,
    consultant_name NVARCHAR(200),
    email NVARCHAR(200),
    role_level NVARCHAR(100), -- e.g., Analyst, Senior Consultant, Manager, Partner
    department NVARCHAR(100),
    hire_date DATE,
    termination_date DATE NULL,
    hourly_cost_rate DECIMAL(10,2), -- Fully loaded cost rate
    standard_bill_rate DECIMAL(10,2),
    annual_available_hours INT DEFAULT 2080, -- Adjusted for PTO/holidays
    utilization_target DECIMAL(5,2), -- e.g., 0.75 for 75%
    is_active BIT DEFAULT 1,
    last_updated DATETIME2 DEFAULT GETDATE()
);

CREATE TABLE analytics.dim_project (
    project_id INT PRIMARY KEY,
    project_name NVARCHAR(500),
    project_code NVARCHAR(50),
    client_id INT,
    client_name NVARCHAR(200),
    engagement_type NVARCHAR(100), -- From defined taxonomy
    billing_method NVARCHAR(50), -- T&M, Fixed-Fee, Retainer, Milestone
    project_status NVARCHAR(50), -- Active, Complete, On-Hold, Cancelled
    start_date DATE,
    end_date DATE NULL,
    budget_hours DECIMAL(10,2),
    budget_amount DECIMAL(12,2),
    contract_value DECIMAL(12,2),
    project_manager_id INT,
    created_date DATE,
    last_updated DATETIME2 DEFAULT GETDATE()
);

CREATE TABLE analytics.dim_date (
    date_key INT PRIMARY KEY, -- YYYYMMDD format
    full_date DATE,
    day_of_week INT,
    day_name NVARCHAR(10),
    week_number INT,
    month_number INT,
    month_name NVARCHAR(10),
    quarter_number INT,
    quarter_name NVARCHAR(6),
    year_number INT,
    fiscal_year INT,
    fiscal_quarter INT,
    is_weekday BIT,
    is_holiday BIT DEFAULT 0
);

CREATE TABLE analytics.dim_engagement_type (
    engagement_type_id INT PRIMARY KEY IDENTITY(1,1),
    engagement_type_name NVARCHAR(100),
    engagement_type_category NVARCHAR(100), -- Advisory, Implementation, Managed Services, etc.
    description NVARCHAR(500),
    target_margin DECIMAL(5,2)
);

-- Fact Tables
CREATE TABLE analytics.fact_time_entry (
    time_entry_id BIGINT PRIMARY KEY,
    consultant_id INT REFERENCES analytics.dim_consultant(consultant_id),
    project_id INT REFERENCES analytics.dim_project(project_id),
    date_key INT REFERENCES analytics.dim_date(date_key),
    hours_worked DECIMAL(6,2),
    is_billable BIT,
    bill_rate_applied DECIMAL(10,2),
    cost_rate_applied DECIMAL(10,2),
    billable_amount AS (hours_worked * bill_rate_applied),
    cost_amount AS (hours_worked * cost_rate_applied),
    time_entry_status NVARCHAR(50), -- Submitted, Approved, Invoiced
    notes NVARCHAR(1000),
    source_system NVARCHAR(50), -- BigTime, manual import, etc.
    last_updated DATETIME2 DEFAULT GETDATE()
);

CREATE TABLE analytics.fact_invoice (
    invoice_id INT PRIMARY KEY,
    project_id INT REFERENCES analytics.dim_project(project_id),
    client_id INT,
    invoice_date DATE,
    date_key INT REFERENCES analytics.dim_date(date_key),
    invoice_amount DECIMAL(12,2),
    amount_paid DECIMAL(12,2) DEFAULT 0,
    payment_date DATE NULL,
    invoice_status NVARCHAR(50), -- Draft, Sent, Paid, Overdue, Written-Off
    source_system NVARCHAR(50),
    last_updated DATETIME2 DEFAULT GETDATE()
);

CREATE TABLE analytics.fact_project_monthly_snapshot (
    snapshot_id BIGINT IDENTITY(1,1) PRIMARY KEY,
    project_id INT REFERENCES analytics.dim_project(project_id),
    snapshot_month DATE, -- First day of month
    total_hours_to_date DECIMAL(10,2),
    billable_hours_to_date DECIMAL(10,2),
    revenue_recognized_to_date DECIMAL(12,2),
    total_cost_to_date DECIMAL(12,2),
    budget_consumed_pct DECIMAL(5,2),
    gross_margin_pct AS CASE WHEN revenue_recognized_to_date > 0 THEN (revenue_recognized_to_date - total_cost_to_date) / revenue_recognized_to_date ELSE 0 END,
    last_updated DATETIME2 DEFAULT GETDATE()
);

-- Populate date dimension (10 years: 2020–2029)
DECLARE @StartDate DATE = '2020-01-01';
DECLARE @EndDate DATE = '2029-12-31';
WHILE @StartDate <= @EndDate
BEGIN
    INSERT INTO analytics.dim_date (date_key, full_date, day_of_week, day_name, week_number, month_number, month_name, quarter_number, quarter_name, year_number, fiscal_year, fiscal_quarter, is_weekday)
    VALUES (
        CAST(FORMAT(@StartDate,'yyyyMMdd') AS INT),
        @StartDate,
        DATEPART(WEEKDAY, @StartDate),
        DATENAME(WEEKDAY, @StartDate),
        DATEPART(WEEK, @StartDate),
        MONTH(@StartDate),
        DATENAME(MONTH, @StartDate),
        DATEPART(QUARTER, @StartDate),
        'Q' + CAST(DATEPART(QUARTER, @StartDate) AS NVARCHAR),
        YEAR(@StartDate),
        CASE WHEN MONTH(@StartDate) >= 7 THEN YEAR(@StartDate)+1 ELSE YEAR(@StartDate) END,
        CASE WHEN MONTH(@StartDate) >= 7 THEN DATEPART(QUARTER, @StartDate)-2 ELSE DATEPART(QUARTER, @StartDate)+2 END,
        CASE WHEN DATEPART(WEEKDAY, @StartDate) BETWEEN 2 AND 6 THEN 1 ELSE 0 END
    );
    SET @StartDate = DATEADD(DAY, 1, @StartDate);
END;
Note

The fiscal year calculation assumes a July fiscal year start (common in professional services). Adjust the CASE statement in dim_date population if the client uses a different fiscal year. The computed columns in fact_time_entry automatically calculate billable_amount and cost_amount. Index strategy: create non-clustered indexes on all foreign key columns and commonly filtered columns (date_key, consultant_id, project_id, is_billable, engagement_type) after initial data load.
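As one possible starting point for that index strategy, the statements below can be run once after the historical load in Step 7 (a sketch; the index names are suggestions and the column choices should be adjusted to the client's actual query patterns):
python
import pyodbc

# Placeholder connection string: the same Azure SQL database created in Step 2
conn = pyodbc.connect("Driver={ODBC Driver 18 for SQL Server};"
                      "Server=sql-profitability-[clientname].database.windows.net;"
                      "Database=db-analytics;Uid=sqladmin;Pwd=[password];Encrypt=yes;")
cursor = conn.cursor()

# Non-clustered indexes on foreign keys and commonly filtered columns (run once)
index_statements = [
    "CREATE NONCLUSTERED INDEX IX_time_entry_consultant ON analytics.fact_time_entry (consultant_id)",
    "CREATE NONCLUSTERED INDEX IX_time_entry_project ON analytics.fact_time_entry (project_id)",
    "CREATE NONCLUSTERED INDEX IX_time_entry_date ON analytics.fact_time_entry (date_key) INCLUDE (hours_worked, is_billable)",
    "CREATE NONCLUSTERED INDEX IX_time_entry_billable ON analytics.fact_time_entry (is_billable)",
    "CREATE NONCLUSTERED INDEX IX_invoice_project ON analytics.fact_invoice (project_id, date_key)",
    "CREATE NONCLUSTERED INDEX IX_project_engagement ON analytics.dim_project (engagement_type)",
]
for stmt in index_statements:
    cursor.execute(stmt)

conn.commit()
conn.close()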

Step 4: Deploy and Configure BigTime PSA Platform

Set up the BigTime PSA instance as the primary operational system for time tracking, project management, and billing. Configure it with the engagement types, rate cards, project codes, and accounting integration defined in the Discovery Workshop.

Note

BigTime setup is done through the web admin interface at app.bigtime.net. Key configuration steps:

  • Create the firm's organizational hierarchy (departments, teams)
  • Import or create all consultant profiles with standard bill rates
  • Set up the engagement type taxonomy as Project Types
  • Configure rate cards by engagement type and consultant level
  • Create a project code naming convention (e.g., [ClientCode]-[Year]-[SequenceNum])
  • Enable DCAA-compliant timekeeping if the firm has government contracts
  • Configure approval workflows (consultant → manager → partner)
  • Set up QuickBooks Online or Xero integration under Settings → Integrations

For QuickBooks: use the native BigTime-QBO connector, map BigTime project types to QBO classes, and map consultants to QBO employees. Test with a single project before enabling full sync. If migrating from spreadsheets or another tool, use BigTime's CSV import templates for historical project and time data.

Step 5: Configure Accounting System Integration

Establish bidirectional data flow between BigTime PSA and the client's accounting system (QuickBooks Online, Xero, or Sage Intacct). This ensures invoices generated in BigTime flow to accounting, and payment data flows back for cash collection tracking. Also configure the data pipeline from accounting to the Azure SQL warehouse for overhead allocation data.

1
After BigTime-QBO sync is configured, set up a nightly export to Azure SQL
2
Use Azure Data Factory or a Python script scheduled via Azure Functions
qbo_to_azure.py
python
# qbo_to_azure.py: nightly Azure Function (Timer Trigger) that syncs QuickBooks
# Online invoices into the Azure SQL analytics schema
# pip install intuit-oauth python-quickbooks pyodbc

import pyodbc
from quickbooks import QuickBooks
from quickbooks.objects.invoice import Invoice
from intuitlib.client import AuthClient

# OAuth2 connection to QBO
auth_client = AuthClient(
    client_id='[QBO_CLIENT_ID]',
    client_secret='[QBO_CLIENT_SECRET]',
    redirect_uri='https://[your-function-app].azurewebsites.net/api/callback',
    environment='production'
)

qb_client = QuickBooks(
    auth_client=auth_client,
    refresh_token='[REFRESH_TOKEN]',
    company_id='[COMPANY_ID]'
)

# Fetch invoices created or updated in the last 24 hours
from datetime import datetime, timedelta
yesterday = (datetime.now() - timedelta(days=1)).strftime('%Y-%m-%dT00:00:00')
invoices = Invoice.where(
    f"MetaData.LastUpdatedTime >= '{yesterday}'",
    qb=qb_client
)

# Write to Azure SQL
conn = pyodbc.connect('Driver={ODBC Driver 18 for SQL Server};Server=sql-profitability-[clientname].database.windows.net;Database=db-analytics;Uid=sqladmin;Pwd=[password];Encrypt=yes;TrustServerCertificate=no;')
cursor = conn.cursor()

for inv in invoices:
    cursor.execute('''
        MERGE analytics.fact_invoice AS target
        USING (VALUES (?, ?, ?, ?, ?, ?, ?, ?)) AS source
            (invoice_id, project_id, client_id, invoice_date, invoice_amount, amount_paid, payment_date, invoice_status)
        ON target.invoice_id = source.invoice_id
        WHEN MATCHED THEN UPDATE SET
            amount_paid = source.amount_paid,
            payment_date = source.payment_date,
            invoice_status = source.invoice_status,
            last_updated = GETDATE()
        WHEN NOT MATCHED THEN INSERT
            (invoice_id, project_id, client_id, invoice_date, date_key, invoice_amount, amount_paid, payment_date, invoice_status, source_system)
            VALUES (source.invoice_id, source.project_id, source.client_id, source.invoice_date,
                    CAST(FORMAT(TRY_CONVERT(DATE, source.invoice_date),'yyyyMMdd') AS INT),
                    source.invoice_amount, source.amount_paid, source.payment_date, source.invoice_status, 'QuickBooksOnline');
    ''', inv.Id, None, inv.CustomerRef.value, inv.TxnDate, float(inv.TotalAmt), float(inv.TotalAmt) - float(inv.Balance or 0), None, 'Sent')

conn.commit()
conn.close()
Note

The Python script above is a simplified example — production version must handle OAuth2 token refresh, error handling, logging, and project_id mapping (BigTime project codes to QBO references). For clients on Sage Intacct, use the Sage Intacct SDK for Python which provides a more structured API. QuickBooks Online API has rate limits of 500 requests per minute — the script should implement exponential backoff. Store all credentials in Azure Key Vault, not in code. Schedule this Azure Function to run nightly at 2:00 AM local time via a Timer Trigger CRON expression: 0 0 2 * * *
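A minimal sketch of the exponential backoff the note calls for: wrap each QBO request so that rate-limit (429) and transient server errors are retried with increasing delays (the retry counts and delays below are illustrative):
python
import time
import requests

def call_with_backoff(request_fn, max_retries=5, base_delay=2):
    """Call request_fn(); retry rate-limit (429) and server errors with exponential delays."""
    for attempt in range(max_retries):
        try:
            response = request_fn()
            if response.status_code not in (429, 500, 502, 503):
                return response
        except requests.exceptions.ConnectionError:
            pass  # transient network error: retry
        time.sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, 16s, 32s
    raise RuntimeError("QBO API call failed after retries")

# Example usage with a hypothetical raw QBO query endpoint and headers
# resp = call_with_backoff(lambda: requests.get(qbo_query_url, headers=auth_headers))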

Step 6: Build BigTime-to-Azure Data Pipeline for Time Entries

Create an automated data pipeline that extracts time entry and project data from BigTime PSA and loads it into the Azure SQL data warehouse. This is the core data feed for all utilization and profitability analytics.

bigtime_etl.py
python
# bigtime_etl.py: Azure Function (Timer Trigger, every 4 hours) that syncs BigTime
# time entries and projects into Azure SQL
# pip install requests pyodbc

import requests
import pyodbc
import os
from datetime import datetime, timedelta

BIGTIME_API_BASE = 'https://iq.bigtime.net/BigtimeData/api/v2'
BIGTIME_TOKEN = os.environ['BIGTIME_API_TOKEN']  # Store in Azure Key Vault
AZURE_SQL_CONN = os.environ['AZURE_SQL_CONNECTION_STRING']

headers = {
    'X-Auth-Token': BIGTIME_TOKEN,
    'X-Auth-Realm': os.environ['BIGTIME_FIRM_ID'],
    'Content-Type': 'application/json'
}

def sync_time_entries():
    """Fetch time entries updated in last 24 hours and upsert to Azure SQL"""
    yesterday = (datetime.now() - timedelta(days=1)).strftime('%Y-%m-%d')
    
    # Get time entries
    response = requests.get(
        f'{BIGTIME_API_BASE}/time/Sheet',
        headers=headers,
        params={'UpdatedAfter': yesterday}
    )
    entries = response.json()
    
    conn = pyodbc.connect(AZURE_SQL_CONN)
    cursor = conn.cursor()
    
    for entry in entries:
        cursor.execute('''
            MERGE analytics.fact_time_entry AS target
            USING (VALUES (?,?,?,?,?,?,?,?,?,?)) AS source
                (time_entry_id, consultant_id, project_id, date_key, hours_worked,
                 is_billable, bill_rate_applied, cost_rate_applied, time_entry_status, notes)
            ON target.time_entry_id = source.time_entry_id
            WHEN MATCHED THEN UPDATE SET
                hours_worked = source.hours_worked,
                bill_rate_applied = source.bill_rate_applied,
                time_entry_status = source.time_entry_status,
                last_updated = GETDATE()
            WHEN NOT MATCHED THEN INSERT
                (time_entry_id, consultant_id, project_id, date_key, hours_worked,
                 is_billable, bill_rate_applied, cost_rate_applied, time_entry_status, notes, source_system)
                VALUES (source.time_entry_id, source.consultant_id, source.project_id,
                        CAST(FORMAT(TRY_CONVERT(DATE, source.date_key),'yyyyMMdd') AS INT),
                        source.hours_worked, source.is_billable, source.bill_rate_applied,
                        source.cost_rate_applied, source.time_entry_status, source.notes, 'BigTime');
        ''', entry.get('Id'), entry.get('StaffSid'), entry.get('ProjectSid'),
            entry.get('Dt'), entry.get('Hours_Worked'), 
            1 if entry.get('Billable') else 0,
            entry.get('BillRate', 0), entry.get('CostRate', 0),
            entry.get('ApprovalStatus', 'Submitted'), entry.get('Notes', ''))
    
    conn.commit()
    conn.close()
    return f'Synced {len(entries)} time entries'

def sync_projects():
    """Fetch all active projects and update dim_project"""
    response = requests.get(
        f'{BIGTIME_API_BASE}/project',
        headers=headers
    )
    projects = response.json()
    
    conn = pyodbc.connect(AZURE_SQL_CONN)
    cursor = conn.cursor()
    
    for proj in projects:
        cursor.execute('''
            MERGE analytics.dim_project AS target
            USING (VALUES (?,?,?,?,?,?,?,?,?,?)) AS source
                (project_id, project_name, project_code, client_name, engagement_type,
                 billing_method, project_status, start_date, budget_hours, budget_amount)
            ON target.project_id = source.project_id
            WHEN MATCHED THEN UPDATE SET
                project_status = source.project_status,
                budget_hours = source.budget_hours,
                budget_amount = source.budget_amount,
                last_updated = GETDATE()
            WHEN NOT MATCHED THEN INSERT
                (project_id, project_name, project_code, client_name, engagement_type,
                 billing_method, project_status, start_date, budget_hours, budget_amount)
                VALUES (source.project_id, source.project_name, source.project_code,
                        source.client_name, source.engagement_type, source.billing_method,
                        source.project_status, source.start_date, source.budget_hours, source.budget_amount);
        ''', proj.get('ProjectSid'), proj.get('ProjectNm'), proj.get('ProjectCode'),
            proj.get('ClientNm'), proj.get('ProjectType', 'Unclassified'),
            proj.get('BillingType', 'T&M'), proj.get('Status', 'Active'),
            proj.get('StartDt'), proj.get('BudgetHrs', 0), proj.get('BudgetAmt', 0))
    
    conn.commit()
    conn.close()
    return f'Synced {len(projects)} projects'

# Main execution
if __name__ == '__main__':
    print(sync_time_entries())
    print(sync_projects())
Note

BigTime API v2 documentation is available at https://iq.bigtime.net/BigtimeData/api/v2/help. The API requires an auth token obtained by POSTing to /BigtimeData/api/v2/session with staff credentials. Store the token in Azure Key Vault and refresh as needed. The field names (StaffSid, ProjectSid, etc.) are BigTime-specific — verify against the actual API response structure during implementation. For Polaris PSA or Scoro, substitute their respective REST APIs. Run this function every 4 hours during business hours (CRON: 0 0 6-22/4 * * *) to keep dashboards reasonably current without hitting API rate limits.
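As a hedged illustration of the token request described above, the sketch below POSTs staff credentials to the session endpoint; the request and response field names are assumptions and must be verified against the BigTime API v2 documentation:
python
import os
import requests

# Field names in the request and response below are illustrative assumptions;
# confirm the exact payload against the BigTime API v2 documentation.
session_url = "https://iq.bigtime.net/BigtimeData/api/v2/session"
payload = {
    "UserId": os.environ["BIGTIME_USER"],      # staff account used for the integration
    "Pwd": os.environ["BIGTIME_PASSWORD"],
}
resp = requests.post(session_url, json=payload, timeout=30)
resp.raise_for_status()
session = resp.json()

# The returned token and firm ID become the X-Auth-Token / X-Auth-Realm headers
# used by bigtime_etl.py (store them in Azure Key Vault rather than env vars)
print(session.get("token"), session.get("firm"))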

Step 7: Import Historical Data

Load 6–12 months of historical time, billing, and project data into the Azure SQL warehouse. This historical data is essential for trend analysis, benchmarking, and AI model training. Data typically comes from CSV exports from the legacy system (spreadsheets, previous PSA tool, or accounting exports).

1
Prepare CSV files with standardized column headers matching the schema
2
Load via bcp (example for time entries)
3
After import, validate record counts
4
Validate totals against known financial data
Bulk import historical time entries using bcp utility
bash
bcp analytics.fact_time_entry in historical_time_entries.csv -S sql-profitability-[clientname].database.windows.net -d db-analytics -U sqladmin -P [password] -c -t ',' -F 2
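If bcp is not available on the machine doing the import, a Python alternative using pandas and pyodbc accomplishes the same load (a sketch; the CSV column names below are assumptions based on the standardized headers prepared in step 1):
python
import pandas as pd
import pyodbc

# Assumed CSV columns: time_entry_id, consultant_id, project_id, entry_date,
# hours_worked, is_billable, bill_rate, cost_rate
df = pd.read_csv("historical_time_entries.csv", parse_dates=["entry_date"])

conn = pyodbc.connect("Driver={ODBC Driver 18 for SQL Server};"
                      "Server=sql-profitability-[clientname].database.windows.net;"
                      "Database=db-analytics;Uid=sqladmin;Pwd=[password];Encrypt=yes;")
cursor = conn.cursor()
cursor.fast_executemany = True  # batch the inserts for a much faster load

cursor.executemany(
    """INSERT INTO analytics.fact_time_entry
       (time_entry_id, consultant_id, project_id, date_key, hours_worked,
        is_billable, bill_rate_applied, cost_rate_applied, time_entry_status, source_system)
       VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'HistoricalImport')""",
    [
        (int(r.time_entry_id), int(r.consultant_id), int(r.project_id),
         int(r.entry_date.strftime("%Y%m%d")), float(r.hours_worked),
         int(r.is_billable), float(r.bill_rate), float(r.cost_rate), "Approved")
        for r in df.itertuples(index=False)
    ],
)
conn.commit()
conn.close()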
Validate record counts and date ranges across all imported datasets
sql
SELECT 'Time Entries' AS dataset, COUNT(*) AS record_count, 
       MIN(full_date) AS earliest_date, MAX(full_date) AS latest_date
FROM analytics.fact_time_entry t
JOIN analytics.dim_date d ON t.date_key = d.date_key
UNION ALL
SELECT 'Projects', COUNT(*), MIN(start_date), MAX(start_date)
FROM analytics.dim_project
UNION ALL
SELECT 'Consultants', COUNT(*), MIN(hire_date), MAX(hire_date)
FROM analytics.dim_consultant
UNION ALL
SELECT 'Invoices', COUNT(*), MIN(invoice_date), MAX(invoice_date)
FROM analytics.fact_invoice;
Validate monthly revenue and cost totals against known financial data
sql
SELECT 
    YEAR(d.full_date) AS year,
    MONTH(d.full_date) AS month,
    SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END) AS billable_hours,
    SUM(t.hours_worked * t.bill_rate_applied) AS gross_revenue,
    SUM(t.hours_worked * t.cost_rate_applied) AS total_cost
FROM analytics.fact_time_entry t
JOIN analytics.dim_date d ON t.date_key = d.date_key
GROUP BY YEAR(d.full_date), MONTH(d.full_date)
ORDER BY year, month;
Critical

The monthly revenue totals from the warehouse MUST reconcile with the client's accounting system within 2% tolerance. If they don't, investigate data mapping issues before proceeding to dashboard development. Common issues: (1) missing time entries for certain consultants, (2) incorrect bill rate mapping, (3) duplicate entries from overlapping export periods, (4) timezone mismatches causing date assignment errors. Budget 2–3 days for data cleansing and reconciliation. Have the client's finance director validate the monthly summary against their P&L.
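One way to mechanize this reconciliation check is sketched below; the P&L export file and its column names are hypothetical and would come from the client's finance team:
python
import pandas as pd
import pyodbc

TOLERANCE = 0.02  # 2% per the reconciliation requirement above

conn = pyodbc.connect("Driver={ODBC Driver 18 for SQL Server};"
                      "Server=sql-profitability-[clientname].database.windows.net;"
                      "Database=db-analytics;Uid=sqladmin;Pwd=[password];Encrypt=yes;")

# Monthly gross revenue from the warehouse (same logic as the validation query above)
warehouse = pd.read_sql("""
    SELECT YEAR(d.full_date) AS year, MONTH(d.full_date) AS month,
           SUM(t.hours_worked * t.bill_rate_applied) AS gross_revenue
    FROM analytics.fact_time_entry t
    JOIN analytics.dim_date d ON t.date_key = d.date_key
    GROUP BY YEAR(d.full_date), MONTH(d.full_date)
""", conn)

# Hypothetical export from the accounting system with columns: year, month, pl_revenue
pl = pd.read_csv("monthly_pl_revenue.csv")

merged = warehouse.merge(pl, on=["year", "month"])
merged["variance_pct"] = (merged["gross_revenue"] - merged["pl_revenue"]).abs() / merged["pl_revenue"]
print(merged[merged["variance_pct"] > TOLERANCE])  # months that need investigation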

Step 8: Build Power BI Semantic Model and Core Measures

Create the Power BI semantic model (formerly dataset) that connects to the Azure SQL warehouse and defines all calculated measures for profitability and utilization analytics. This is the analytical engine that powers all dashboards.

Power BI DAX Measures
dax
-- Create these measures in Power BI Desktop → Model view → New measure

-- ===== UTILIZATION MEASURES =====

Total Hours = SUM(fact_time_entry[hours_worked])

Billable Hours = CALCULATE(SUM(fact_time_entry[hours_worked]), fact_time_entry[is_billable] = TRUE())

Non-Billable Hours = CALCULATE(SUM(fact_time_entry[hours_worked]), fact_time_entry[is_billable] = FALSE())

Available Hours = 
    SUMX(
        VALUES(dim_consultant[consultant_id]),
        DIVIDE(
            LOOKUPVALUE(dim_consultant[annual_available_hours], dim_consultant[consultant_id], dim_consultant[consultant_id]),
            12
        ) * DISTINCTCOUNT(dim_date[month_number])
    )

Utilization Rate = DIVIDE([Billable Hours], [Available Hours], 0)

Utilization Target = 
    AVERAGEX(
        VALUES(dim_consultant[consultant_id]),
        LOOKUPVALUE(dim_consultant[utilization_target], dim_consultant[consultant_id], dim_consultant[consultant_id])
    )

Utilization Variance = [Utilization Rate] - [Utilization Target]

Utilization RAG = 
    SWITCH(TRUE(),
        [Utilization Rate] >= [Utilization Target], "Green",
        [Utilization Rate] >= [Utilization Target] * 0.9, "Amber",
        "Red"
    )

-- ===== REVENUE MEASURES =====

Gross Revenue = SUMX(fact_time_entry, fact_time_entry[hours_worked] * fact_time_entry[bill_rate_applied])

Net Revenue = SUM(fact_invoice[invoice_amount])

Revenue per Billable Hour = DIVIDE([Gross Revenue], [Billable Hours], 0)

Realization Rate = DIVIDE([Net Revenue], [Gross Revenue], 0)

-- ===== COST MEASURES =====

Direct Labor Cost = SUMX(fact_time_entry, fact_time_entry[hours_worked] * fact_time_entry[cost_rate_applied])

Cost per Hour = DIVIDE([Direct Labor Cost], [Total Hours], 0)

-- ===== PROFITABILITY MEASURES =====

Gross Margin = [Gross Revenue] - [Direct Labor Cost]

Gross Margin % = DIVIDE([Gross Margin], [Gross Revenue], 0)

Net Margin = [Net Revenue] - [Direct Labor Cost]

Net Margin % = DIVIDE([Net Margin], [Net Revenue], 0)

Effective Multiplier = DIVIDE([Revenue per Billable Hour], [Cost per Hour], 0)

-- ===== PROJECT MEASURES =====

Budget Consumed % = 
    DIVIDE(
        SUM(fact_time_entry[hours_worked]),
        LOOKUPVALUE(dim_project[budget_hours], dim_project[project_id], MAX(dim_project[project_id])),
        0
    )

Budget Revenue Consumed % = 
    DIVIDE(
        [Gross Revenue],
        LOOKUPVALUE(dim_project[budget_amount], dim_project[project_id], MAX(dim_project[project_id])),
        0
    )

Project Health = 
    SWITCH(TRUE(),
        [Budget Consumed %] > 1.0, "Over Budget",
        [Budget Consumed %] > 0.85 && [Gross Margin %] < 0.15, "At Risk",
        [Gross Margin %] >= 0.20, "Healthy",
        "Monitor"
    )

-- ===== TREND MEASURES =====

Revenue MoM Growth = 
    VAR CurrentMonth = [Gross Revenue]
    VAR PriorMonth = CALCULATE([Gross Revenue], DATEADD(dim_date[full_date], -1, MONTH))
    RETURN DIVIDE(CurrentMonth - PriorMonth, PriorMonth, 0)

Utilization 3Mo Rolling Avg = 
    CALCULATE(
        [Utilization Rate],
        DATESINPERIOD(dim_date[full_date], MAX(dim_date[full_date]), -3, MONTH)
    )
Note

Create these measures in a dedicated Measures table in Power BI Desktop. Group them into display folders: Utilization, Revenue, Cost, Profitability, Project Health, Trends. The Available Hours calculation assumes monthly granularity — adjust if the client needs weekly views. The Realization Rate (Net Revenue / Gross Revenue) captures write-downs and discounts — this is often the most eye-opening metric for firm leadership. Test all measures against known historical data before building visuals.

Step 9: Build Power BI Dashboards

Create the four core dashboards in Power BI Desktop that deliver the profitability and utilization analytics. Each dashboard serves a specific audience and decision-making need. Publish to Power BI Service when complete.

Dashboard 1: Firm Overview (Audience: Partners/Leadership)

  • KPI cards: Firm-wide Utilization Rate, Gross Margin %, Revenue MTD, Revenue YTD
  • Utilization trend line chart (12-month rolling by month)
  • Revenue by engagement type (stacked bar chart)
  • Profitability by engagement type (clustered bar: revenue vs cost)
  • Top 10 / Bottom 10 projects by margin %
  • Filters: Date range, Department, Engagement Type

Dashboard 2: Consultant Utilization (Audience: Managers/Resource Planners)

  • Matrix: Consultant (rows) × Month (columns) with utilization % in cells
  • Conditional formatting: Green ≥ target, Amber ≥ 90% of target, Red < 90%
  • Billable vs Non-Billable hours stacked bar by consultant
  • Utilization vs Target scatter plot (consultant as dots)
  • Bench report: Consultants with < 50% utilization this month
  • Filters: Role Level, Department, Manager, Date Range

Dashboard 3: Project Profitability (Audience: Project Managers/Partners)

  • Project list table: Name, Client, Type, Budget, Actual Hours, Revenue, Cost, Margin %
  • Conditional formatting on Margin % (Red < 10%, Amber 10–20%, Green > 20%)
  • Budget burn-down chart (budget hours remaining over time)
  • Revenue vs Cost waterfall chart for selected project
  • Scope creep indicator: Actual/Budget hours ratio trend
  • Drill-through from any project to detailed time entries
  • Filters: Project Status, Engagement Type, Client, Date Range

Dashboard 4: Engagement Type Analysis (Audience: Partners/Strategy)

  • Engagement type comparison matrix: Avg Margin %, Avg Utilization, Revenue Share
  • Revenue mix pie/donut chart by engagement type
  • Margin trend by engagement type (line chart, 12 months)
  • Client profitability by engagement type (heatmap)
  • Win rate correlation (if CRM data available)
  • Filters: Date Range, Department

After Building All Four Dashboards

1
Configure Row-Level Security (RLS) in Power BI Desktop
2
Publish to Power BI Service workspace
3
Configure scheduled refresh (4x daily: 6am, 10am, 2pm, 6pm)
4
Set up email subscriptions for key stakeholders
5
Create Power BI App for clean end-user experience

Row-Level Security (RLS) Configuration

  • Create roles: Partner (see all), Manager (see their team), Consultant (see own data)
DAX filter for Consultant role
dax
-- Restricts each user to their own data (match on sign-in email)

[email] = USERPRINCIPALNAME()
DAX filter for Manager role
dax
-- Restricts the view to the manager's department (full expression in Step 11)

[department] = LOOKUPVALUE(...)
Note

Use the client's brand colors for the dashboard theme. Create a custom Power BI theme JSON file with the firm's primary and secondary colors, fonts, and logo. For Row-Level Security: test thoroughly by using 'View as Role' in Power BI Desktop before publishing. Scheduled refresh requires a Power BI data gateway if connecting to on-premises sources — for Azure SQL, direct cloud connection works without a gateway. Consider creating a mobile-optimized layout for each dashboard page for partners who want to check metrics on their phones. The four dashboards typically require 40–80 hours of development time depending on complexity and number of revision cycles.
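A minimal sketch of generating that theme file: name and dataColors are standard Power BI theme JSON properties, and the colors below are placeholders for the client's brand palette:
python
import json

# Placeholder brand colors: replace with the client's primary and secondary palette
theme = {
    "name": "[FirmName] Analytics Theme",
    "dataColors": ["#1F4E79", "#2E75B6", "#C55A11", "#70AD47", "#7F7F7F"],
    "background": "#FFFFFF",
    "foreground": "#252423",
    "tableAccent": "#1F4E79",
}

# Import the resulting file in Power BI Desktop via View → Themes → Browse for themes
with open("firm_theme.json", "w") as f:
    json.dump(theme, f, indent=2)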

Step 10: Configure Alerts and Automated Reports

Set up proactive alerting so that firm leadership is notified when key metrics breach thresholds, rather than requiring them to check dashboards manually. Configure automated email delivery of key reports.

1
Data-Driven Alerts (on dashboard tile pin): Pin the Firm Utilization Rate KPI card to a dashboard → Click '...' on the tile → 'Manage alerts' → Create alert: 'Firm Utilization Below Target' | Condition: Below | Threshold: 0.65 | Check every: 24 hours | Notification: Email + Push. Then pin Gross Margin % KPI card → Alert: 'Margin Below 15%' | Condition: Below | Threshold: 0.15 | Check every: 24 hours
2
Email Subscriptions: Navigate to each report page → Click 'Subscribe' → 'Subscribe others'. Weekly Utilization Report → Partners/Managers → Every Monday 8AM. Monthly Profitability Report → Partners → 1st of each month 8AM. Project Health Report → Project Managers → Every Friday 4PM
3
Power Automate Integration (for Slack/Teams notifications): Create flow: 'When a data-driven alert triggers' → 'Post to Teams channel' → Channel: #firm-analytics
Power Automate Teams notification message template
text
'Alert: {AlertTitle} - Current value: {AlertValue}'
Note

Power BI data-driven alerts only work on KPI cards, gauges, and single-number card visuals pinned to dashboards. For more complex alert conditions (e.g., individual consultant utilization dropping below target), use Power Automate with the Power BI connector. Email subscriptions can include up to 24 recipients per subscription — create separate subscriptions for different audience groups. Consider creating a Microsoft Teams channel (#profitability-analytics) where all alerts are posted for firm-wide visibility.

Step 11: Implement Row-Level Security and Access Controls

Configure role-based access to ensure consultants can only see their own data, managers can see their team's data, and partners can see firm-wide data. This is critical for both privacy compliance and organizational trust.

Define Roles in Power BI Desktop

1
Navigate to Power BI Desktop → Modeling tab → Manage Roles

Role 1: Partner (Full Access) — No DAX filter needed; partners see all data.

Role 2: Manager (Department Access) — Apply the following DAX filter to the dim_consultant table:

DAX Filter Expression for Manager role on dim_consultant table
dax
[department] = LOOKUPVALUE(
    dim_consultant[department],
    dim_consultant[email],
    USERPRINCIPALNAME()
)

Role 3: Consultant (Individual Access) — Apply the following DAX filter to the dim_consultant table:

DAX Filter Expression for Consultant role on dim_consultant table
dax
[email] = USERPRINCIPALNAME()

Assign Azure AD Security Groups After Publishing

1
Go to workspace → Dataset → Security
2
Add Azure AD security groups to each role: 'PS-Analytics-Partners' → Partner role, 'PS-Analytics-Managers' → Manager role, 'PS-Analytics-Consultants' → Consultant role
3
Test with 'Test as role' feature for each security group

Create Azure AD Security Groups via PowerShell

Create required Azure AD security groups for Power BI RLS role assignments
powershell
Connect-AzureAD
New-AzureADGroup -DisplayName 'PS-Analytics-Partners' -MailEnabled $false -SecurityEnabled $true -MailNickName 'ps-analytics-partners'
New-AzureADGroup -DisplayName 'PS-Analytics-Managers' -MailEnabled $false -SecurityEnabled $true -MailNickName 'ps-analytics-managers'
New-AzureADGroup -DisplayName 'PS-Analytics-Consultants' -MailEnabled $false -SecurityEnabled $true -MailNickName 'ps-analytics-consultants'
Note

RLS is enforced when viewing reports in Power BI Service or embedded — it does NOT apply when viewing in Power BI Desktop (Desktop always shows full data for the author). Test each role thoroughly by adding a test user to each Azure AD group and viewing the published report as that user. For GDPR compliance: if the firm has EU employees, document the RLS configuration as part of the Data Protection Impact Assessment (DPIA). The consultant-level RLS ensures each person only sees their own utilization, addressing the transparency resistance concern. Managers should only see direct reports — if the org hierarchy is complex, consider adding a manager_id column to dim_consultant for more precise filtering.

Step 12: Deploy Power BI App and End-User Onboarding

Package the dashboards into a Power BI App for clean end-user distribution, configure the scheduled refresh, and conduct user acceptance testing with the client's finance and operations leadership.

Publish the Power BI App

1
Navigate to the workspace containing the published reports
2
Click 'Create app'
3
Configure app settings: Name: '[FirmName] Profitability & Utilization Analytics', Description: 'Real-time project profitability, consultant utilization, and engagement analysis', Logo: Upload client firm logo, Theme color: Client brand primary color
4
Content tab: Select all four report pages — Reorder: Firm Overview → Utilization → Project Profitability → Engagement Analysis
5
Access tab: Specific individuals or groups: Add the three Azure AD security groups. Enable 'Allow users to connect to underlying datasets' for power users only
6
Click 'Publish app'

Configure Scheduled Refresh

1
Go to workspace → Dataset → Settings → Scheduled refresh
2
Data source credentials: Enter Azure SQL credentials
3
Schedule: 4 times daily (6:00, 10:00, 14:00, 18:00) in client timezone
4
Enable 'Send refresh failure notification to dataset owner'
5
Add MSP admin email as additional notification recipient

Test the App

1
Open in InPrivate/Incognito browser
2
Log in as a test user from each security role
3
Verify data visibility matches RLS expectations
4
Verify all visuals render correctly
5
Verify drill-through and cross-filtering work
6
Test on mobile device (Power BI mobile app)
Note

The Power BI App provides a read-only, curated experience that is simpler than workspace access. Users install the app from AppSource (internal) or a direct link. After initial publish, updates to the workspace reports must be explicitly re-published to the app — this gives the MSP control over when changes go live. Set up a dedicated Teams channel for user feedback during the first 30 days. Plan for 2–3 dashboard revision cycles based on user feedback before considering the analytics layer 'stable'.

Custom AI Components

Profitability Anomaly Detection Agent

Type: workflow

An automated workflow that runs daily to detect anomalies in project profitability and consultant utilization. It identifies projects that are trending toward unprofitability, consultants with unusual utilization patterns, and engagement types with declining margins. Results are surfaced as alerts in Microsoft Teams and as a summary report in Power BI.

Implementation

Architecture

  • Trigger: Azure Function Timer Trigger (runs weekday mornings at 7:00 AM, per the CRON expression in function.json)
  • Compute: Azure Function (Python 3.11)
  • Data Source: Azure SQL Database (analytics schema)
  • Notification: Microsoft Teams Webhook

Azure Function Code (anomaly_detection/__init__.py)

Azure Function — anomaly_detection/__init__.py
python
import azure.functions as func
import pyodbc
import json
import requests
import os
from datetime import datetime, timedelta
import statistics

def main(timer: func.TimerRequest) -> None:
    """Daily anomaly detection for project profitability and utilization."""
    
    conn_str = os.environ['AZURE_SQL_CONNECTION_STRING']
    teams_webhook = os.environ['TEAMS_WEBHOOK_URL']
    
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    
    anomalies = []
    
    # === ANOMALY 1: Projects trending toward negative margin ===
    cursor.execute('''
        WITH ProjectMetrics AS (
            SELECT 
                p.project_id,
                p.project_name,
                p.client_name,
                p.engagement_type,
                p.budget_amount,
                SUM(t.hours_worked * t.bill_rate_applied) AS revenue_to_date,
                SUM(t.hours_worked * t.cost_rate_applied) AS cost_to_date,
                SUM(t.hours_worked) AS hours_to_date,
                p.budget_hours,
                CASE WHEN SUM(t.hours_worked * t.bill_rate_applied) > 0
                    THEN (SUM(t.hours_worked * t.bill_rate_applied) - SUM(t.hours_worked * t.cost_rate_applied)) 
                         / SUM(t.hours_worked * t.bill_rate_applied)
                    ELSE 0 END AS current_margin_pct,
                CASE WHEN p.budget_hours > 0
                    THEN SUM(t.hours_worked) / p.budget_hours
                    ELSE 0 END AS budget_consumed_pct
            FROM analytics.dim_project p
            JOIN analytics.fact_time_entry t ON p.project_id = t.project_id
            WHERE p.project_status = 'Active'
            GROUP BY p.project_id, p.project_name, p.client_name, 
                     p.engagement_type, p.budget_amount, p.budget_hours
        )
        SELECT * FROM ProjectMetrics
        WHERE current_margin_pct < 0.10  -- Margin below 10%
           OR (budget_consumed_pct > 0.80 AND current_margin_pct < 0.15)  -- 80%+ budget used, thin margin
           OR budget_consumed_pct > 1.0  -- Over budget
        ORDER BY current_margin_pct ASC
    ''')
    
    at_risk_projects = cursor.fetchall()
    for proj in at_risk_projects:
        severity = 'CRITICAL' if proj.current_margin_pct < 0 else 'WARNING'
        anomalies.append({
            'type': 'Project Profitability',
            'severity': severity,
            'entity': proj.project_name,
            'client': proj.client_name,
            'detail': f'Margin: {proj.current_margin_pct:.1%} | Budget consumed: {proj.budget_consumed_pct:.1%} | Revenue: ${proj.revenue_to_date:,.0f} | Cost: ${proj.cost_to_date:,.0f}'
        })
    
    # === ANOMALY 2: Consultant utilization outside normal range ===
    cursor.execute('''
        WITH WeeklyUtil AS (
            SELECT 
                c.consultant_id,
                c.consultant_name,
                c.role_level,
                c.utilization_target,
                d.year_number,
                d.week_number,
                SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END) AS billable_hours,
                SUM(t.hours_worked) AS total_hours
            FROM analytics.dim_consultant c
            JOIN analytics.fact_time_entry t ON c.consultant_id = t.consultant_id
            JOIN analytics.dim_date d ON t.date_key = d.date_key
            WHERE c.is_active = 1
              AND d.full_date >= DATEADD(WEEK, -12, GETDATE())  -- Last 12 weeks
            GROUP BY c.consultant_id, c.consultant_name, c.role_level, 
                     c.utilization_target, d.year_number, d.week_number
        ),
        ConsultantStats AS (
            SELECT
                consultant_id,
                consultant_name,
                role_level,
                utilization_target,
                AVG(billable_hours / 40.0) AS avg_utilization,
                STDEV(billable_hours / 40.0) AS stdev_utilization,
                -- Current week (most recent)
                (SELECT TOP 1 billable_hours / 40.0 
                 FROM WeeklyUtil w2 
                 WHERE w2.consultant_id = WeeklyUtil.consultant_id 
                 ORDER BY year_number DESC, week_number DESC) AS current_week_util
            FROM WeeklyUtil
            GROUP BY consultant_id, consultant_name, role_level, utilization_target
            HAVING COUNT(*) >= 4  -- Need at least 4 weeks of data
        )
        SELECT * FROM ConsultantStats
        WHERE ABS(current_week_util - avg_utilization) > 2 * ISNULL(NULLIF(stdev_utilization, 0), 0.1)
           OR current_week_util < utilization_target * 0.7  -- More than 30% below target
        ORDER BY (current_week_util - utilization_target) ASC
    ''')
    
    util_anomalies = cursor.fetchall()
    for cons in util_anomalies:
        severity = 'WARNING' if cons.current_week_util < cons.utilization_target * 0.5 else 'INFO'
        anomalies.append({
            'type': 'Utilization Anomaly',
            'severity': severity,
            'entity': cons.consultant_name,
            'client': cons.role_level,
            'detail': f'Current: {cons.current_week_util:.1%} | 12-wk avg: {cons.avg_utilization:.1%} | Target: {cons.utilization_target:.1%}'
        })
    
    # === ANOMALY 3: Engagement type margin degradation ===
    cursor.execute('''
        WITH MonthlyEngagement AS (
            SELECT
                p.engagement_type,
                d.year_number,
                d.month_number,
                SUM(t.hours_worked * t.bill_rate_applied) AS revenue,
                SUM(t.hours_worked * t.cost_rate_applied) AS cost,
                CASE WHEN SUM(t.hours_worked * t.bill_rate_applied) > 0
                    THEN (SUM(t.hours_worked * t.bill_rate_applied) - SUM(t.hours_worked * t.cost_rate_applied))
                         / SUM(t.hours_worked * t.bill_rate_applied)
                    ELSE 0 END AS margin_pct
            FROM analytics.dim_project p
            JOIN analytics.fact_time_entry t ON p.project_id = t.project_id
            JOIN analytics.dim_date d ON t.date_key = d.date_key
            WHERE d.full_date >= DATEADD(MONTH, -6, GETDATE())
            GROUP BY p.engagement_type, d.year_number, d.month_number
        ),
        MarginTrend AS (
            SELECT
                engagement_type,
                AVG(margin_pct) AS avg_margin,
                -- Simple slope: compare the last 2 calendar months vs the prior 4 months
                AVG(CASE WHEN DATEDIFF(MONTH, DATEFROMPARTS(year_number, month_number, 1), GETDATE()) <= 1
                    THEN margin_pct END) AS recent_margin,
                AVG(CASE WHEN DATEDIFF(MONTH, DATEFROMPARTS(year_number, month_number, 1), GETDATE()) > 1
                    THEN margin_pct END) AS prior_margin
            FROM MonthlyEngagement
            GROUP BY engagement_type
            HAVING COUNT(*) >= 3
        )
        SELECT *, (recent_margin - prior_margin) AS margin_change
        FROM MarginTrend
        WHERE (recent_margin - prior_margin) < -0.05  -- Margin dropped 5+ percentage points
        ORDER BY margin_change ASC
    ''')
    
    engagement_anomalies = cursor.fetchall()
    for eng in engagement_anomalies:
        anomalies.append({
            'type': 'Engagement Margin Decline',
            'severity': 'WARNING',
            'entity': eng.engagement_type,
            'client': '',
            'detail': f'Recent margin: {eng.recent_margin:.1%} | Prior margin: {eng.prior_margin:.1%} | Change: {eng.margin_change:+.1%}'
        })
    
    conn.close()
    
    # === Send Teams Notification ===
    if anomalies:
        critical_count = sum(1 for a in anomalies if a['severity'] == 'CRITICAL')
        warning_count = sum(1 for a in anomalies if a['severity'] == 'WARNING')
        
        sections = []
        for a in sorted(anomalies, key=lambda x: ('CRITICAL','WARNING','INFO').index(x['severity'])):
            icon = '🔴' if a['severity'] == 'CRITICAL' else '🟡' if a['severity'] == 'WARNING' else '🔵'
            sections.append({
                'activityTitle': f"{icon} {a['severity']}: {a['type']}",
                'activitySubtitle': a['entity'] + (f" ({a['client']})" if a['client'] else ''),
                'text': a['detail']
            })
        
        card = {
            '@type': 'MessageCard',
            '@context': 'http://schema.org/extensions',
            'themeColor': 'FF0000' if critical_count > 0 else 'FFA500',
            'summary': f'Profitability Analytics: {critical_count} critical, {warning_count} warnings',
            'title': f'📊 Daily Profitability Alert — {datetime.now().strftime("%B %d, %Y")}',
            'sections': sections[:10],  # MessageCard payloads cap the section count; 10 is safe
            'potentialAction': [{
                '@type': 'OpenUri',
                'name': 'Open Analytics Dashboard',
                'targets': [{'os': 'default', 'uri': os.environ.get('POWERBI_APP_URL', '#')}]
            }]
        }
        
        requests.post(teams_webhook, json=card)

function.json

Azure Function binding configuration — function.json
json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 7 * * 1-5"
    }
  ]
}

Environment Variables (in Azure Function App Settings)

  • AZURE_SQL_CONNECTION_STRING: Full ODBC connection string to Azure SQL
  • TEAMS_WEBHOOK_URL: Incoming webhook URL for the Teams channel
  • POWERBI_APP_URL: Direct URL to the Power BI App

Deployment

Publish the Azure Function app
bash
func azure functionapp publish func-profitability-alerts-[clientname]
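Before relying on the scheduled run, the Teams webhook can be smoke-tested with a minimal MessageCard post (a sketch; run it locally with TEAMS_WEBHOOK_URL set to the same URL stored in the Function App settings):
python
import os
import requests

test_card = {
    "@type": "MessageCard",
    "@context": "http://schema.org/extensions",
    "themeColor": "0076D7",
    "summary": "Profitability alerts webhook test",
    "title": "Test: profitability anomaly alerts are wired up",
    "sections": [{"text": "If you can read this in the channel, the webhook URL is valid."}],
}

resp = requests.post(os.environ["TEAMS_WEBHOOK_URL"], json=test_card, timeout=30)
print(resp.status_code, resp.text)  # a 200 response indicates the post was accepted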

Utilization Forecasting Model

Type: skill

A predictive model that forecasts individual consultant utilization 4–8 weeks ahead based on current project assignments, pipeline data from CRM, and historical utilization patterns. This enables proactive resource rebalancing before bench time occurs. Implemented as a Python model that runs weekly and writes forecasts back to the Azure SQL warehouse for Power BI consumption.

Implementation

Model Approach

Uses a combination of:

1
Known future assignments: Hours already scheduled on active projects (from PSA)
2
Pipeline probability: Weighted hours from CRM pipeline (estimated delivery hours × win probability, with hours typically estimated from the deal amount); see the sketch after this list
3
Historical pattern: Seasonal and role-based utilization patterns from past 12 months
4
Exponential smoothing: For consultants without future assignments, forecast based on recent trend
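A minimal sketch of the pipeline weighting in item 2 above (the deal values, win probabilities, and delivery-spread assumption are hypothetical; in practice these rows come from the CRM sync):
python
# Hypothetical CRM pipeline rows: deal value, win probability, estimated delivery
# hours, and the consultant tentatively assigned to the work
pipeline = [
    {"consultant_id": 101, "deal_value": 120_000, "probability": 0.60, "estimated_hours": 400},
    {"consultant_id": 101, "deal_value": 45_000,  "probability": 0.30, "estimated_hours": 150},
    {"consultant_id": 102, "deal_value": 80_000,  "probability": 0.75, "estimated_hours": 260},
]

DELIVERY_WEEKS = 8  # assume pipeline work, if won, spreads evenly over the forecast horizon

expected_weekly_hours = {}
for deal in pipeline:
    weighted_hours = deal["estimated_hours"] * deal["probability"]   # probability-weighted hours
    weekly = weighted_hours / DELIVERY_WEEKS
    expected_weekly_hours[deal["consultant_id"]] = expected_weekly_hours.get(deal["consultant_id"], 0) + weekly

print(expected_weekly_hours)  # {101: 35.625, 102: 24.375} expected weekly billable hours from pipeline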

Code: utilization_forecast/__init__.py

utilization_forecast/__init__.py
python
# Weekly Azure Function that generates 8-week forward-looking utilization
# predictions

import azure.functions as func
import pyodbc
import numpy as np
from datetime import datetime, timedelta
import os
import json

def main(timer: func.TimerRequest) -> None:
    """Weekly utilization forecast — generates 8-week forward-looking predictions."""
    
    conn = pyodbc.connect(os.environ['AZURE_SQL_CONNECTION_STRING'])
    cursor = conn.cursor()
    
    cursor.execute('''
        SELECT 
            c.consultant_id,
            c.consultant_name,
            c.role_level,
            c.utilization_target,
            c.annual_available_hours,
            d.year_number,
            d.week_number,
            ISNULL(SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END), 0) AS billable_hours,
            ISNULL(SUM(t.hours_worked), 0) AS total_hours
        FROM analytics.dim_consultant c
        CROSS JOIN (
            SELECT DISTINCT year_number, week_number 
            FROM analytics.dim_date 
            WHERE full_date BETWEEN DATEADD(WEEK, -52, GETDATE()) AND GETDATE()
        ) d
        LEFT JOIN analytics.fact_time_entry t 
            ON c.consultant_id = t.consultant_id 
            AND t.date_key IN (SELECT date_key FROM analytics.dim_date 
                               WHERE year_number = d.year_number AND week_number = d.week_number)
        WHERE c.is_active = 1
        GROUP BY c.consultant_id, c.consultant_name, c.role_level, 
                 c.utilization_target, c.annual_available_hours,
                 d.year_number, d.week_number
        ORDER BY c.consultant_id, d.year_number, d.week_number
    ''')
    
    historical_data = cursor.fetchall()
    
    # Organize by consultant
    consultants = {}
    for row in historical_data:
        cid = row.consultant_id
        if cid not in consultants:
            consultants[cid] = {
                'name': row.consultant_name,
                'role': row.role_level,
                'target': float(row.utilization_target),
                'weekly_available': float(row.annual_available_hours) / 52,
                'weekly_billable': [],
                'weekly_total': []
            }
        consultants[cid]['weekly_billable'].append(float(row.billable_hours))
        consultants[cid]['weekly_total'].append(float(row.total_hours))
    
    # Generate 8-week forecast for each consultant
    forecasts = []
    today = datetime.now()
    
    for cid, data in consultants.items():
        billable_series = np.array(data['weekly_billable'])
        available = data['weekly_available']
        
        if len(billable_series) < 8:
            continue  # Not enough history
        
        # Holt's linear (double) exponential smoothing to estimate level and trend
        alpha = 0.3  # Smoothing factor for level
        beta = 0.1   # Smoothing factor for trend
        
        # Initialize from the earliest observations, then smooth across the full series
        level = float(billable_series[0])
        trend = float(billable_series[1] - billable_series[0])
        for observed in billable_series[1:]:
            prev_level = level
            level = alpha * observed + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        
        # Seasonal adjustment (compare same period last year if available)
        seasonal_factors = np.ones(8)
        if len(billable_series) >= 52:
            for w in range(8):
                # Lookup same week last year
                historical_week_idx = len(billable_series) - 52 + w
                if 0 <= historical_week_idx < len(billable_series):
                    hist_val = billable_series[historical_week_idx]
                    recent_avg = billable_series[-12:].mean()
                    if recent_avg > 0:
                        seasonal_factors[w] = hist_val / recent_avg
                    seasonal_factors[w] = max(0.5, min(1.5, seasonal_factors[w]))  # Bound
        
        # Generate forecasts
        for week_offset in range(1, 9):
            forecast_hours = max(0, (level + trend * week_offset) * seasonal_factors[week_offset - 1])
            forecast_util = min(1.0, forecast_hours / available) if available > 0 else 0
            forecast_date = today + timedelta(weeks=week_offset)
            
            forecasts.append((
                cid,
                data['name'],
                data['role'],
                forecast_date.strftime('%Y-%m-%d'),
                week_offset,
                round(forecast_hours, 1),
                round(forecast_util, 3),
                round(data['target'], 3),
                'exponential_smoothing',
                datetime.now().strftime('%Y-%m-%d %H:%M:%S')
            ))
    
    # Write forecasts to Azure SQL
    cursor.execute('''
        IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'forecast_utilization' AND schema_id = SCHEMA_ID('analytics'))
        CREATE TABLE analytics.forecast_utilization (
            forecast_id BIGINT IDENTITY(1,1) PRIMARY KEY,
            consultant_id INT,
            consultant_name NVARCHAR(200),
            role_level NVARCHAR(100),
            forecast_week_start DATE,
            weeks_ahead INT,
            forecast_billable_hours DECIMAL(6,1),
            forecast_utilization DECIMAL(5,3),
            utilization_target DECIMAL(5,3),
            model_version NVARCHAR(50),
            generated_at DATETIME2
        )
    ''')
    
    # Clear previous forecasts
    cursor.execute('DELETE FROM analytics.forecast_utilization')
    
    # Insert new forecasts
    cursor.executemany('''
        INSERT INTO analytics.forecast_utilization 
        (consultant_id, consultant_name, role_level, forecast_week_start, weeks_ahead,
         forecast_billable_hours, forecast_utilization, utilization_target, model_version, generated_at)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    ''', forecasts)
    
    conn.commit()
    conn.close()
    
    logging.info(f'Generated {len(forecasts)} utilization forecasts for {len(consultants)} consultants')

function.json

Azure Function timer trigger configuration — runs weekly at 6:00 AM every Monday
json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 6 * * 1"
    }
  ]
}

Power BI Integration

Add a new page to the Consultant Utilization dashboard with the following views:

  • Forecast Utilization Heatmap: Consultant (rows) × Week (columns), colored by forecast vs target
  • At-Risk Bench Report: Consultants forecasted to drop below 50% utilization in next 4 weeks
  • Capacity Planning View: Total available capacity (hours) by role level for next 8 weeks
DAX measures for the forecast dashboard
dax
Forecast Utilization = AVERAGE(forecast_utilization[forecast_utilization])

Forecast vs Target Gap = AVERAGE(forecast_utilization[forecast_utilization]) - AVERAGE(forecast_utilization[utilization_target])

Bench Risk Flag = IF([Forecast Utilization] < 0.50, "HIGH RISK", IF([Forecast Utilization] < 0.65, "MODERATE RISK", "OK"))

Engagement Profitability Insight Prompt

Type: prompt

A structured prompt template designed for use with Power BI Copilot (PPU tier) that enables firm leadership to ask natural language questions about profitability and utilization. The prompt provides context about the data model and firm-specific terminology so Copilot generates accurate, relevant answers.

Implementation

Copilot Pane Instructions (paste into Power BI report Copilot settings)

Power BI Copilot System Prompt
text
# paste into the 'Help Copilot give better answers' text box in Power BI
# report Copilot settings

You are an analytics assistant for a professional services firm. You help partners, managers, and consultants understand project profitability, consultant utilization, and engagement performance.

## Data Model Context
- **fact_time_entry**: Every hour logged by a consultant on a project. Key fields: hours_worked, is_billable, bill_rate_applied, cost_rate_applied
- **dim_consultant**: Consultant profiles with role_level (Analyst, Senior Consultant, Manager, Senior Manager, Partner), department, cost_rate, bill_rate, and utilization_target
- **dim_project**: Projects with client_name, engagement_type, billing_method (T&M, Fixed-Fee, Retainer), budget_hours, budget_amount, project_status
- **fact_invoice**: Invoices with amounts and payment status
- **forecast_utilization**: 8-week forward utilization forecasts per consultant

## Key Metrics Definitions
- **Utilization Rate** = Billable Hours / Available Hours (available = annual_available_hours / 12 per month)
- **Gross Margin %** = (Revenue - Direct Labor Cost) / Revenue
- **Realization Rate** = Invoiced Revenue / Gross Revenue (captures write-downs)
- **Effective Multiplier** = Revenue per Hour / Cost per Hour (target: 2.5-3.5x)
- **Budget Consumed %** = Actual Hours / Budget Hours

## Engagement Types
[CUSTOMIZE: List the client's specific engagement types here, e.g.]
- Strategy Consulting: Advisory engagements, typically T&M, target margin 35%+
- IT Implementation: Technology deployment projects, typically fixed-fee, target margin 20-25%
- Staff Augmentation: Resource placement, T&M, target margin 15-20%
- Training & Workshops: Delivery of training, fixed-fee, target margin 40%+
- Managed Services: Ongoing support, retainer, target margin 25-30%

## Utilization Targets by Role
- Analyst/Associate: 80%
- Senior Consultant: 75%
- Manager: 65%
- Senior Manager: 55%
- Partner/Director: 40%

## When answering questions:
1. Always specify the time period being analyzed
2. Compare metrics to targets when available
3. Highlight outliers (both positive and negative)
4. Suggest actionable next steps when identifying issues
5. Reference specific projects, consultants, or engagement types by name
6. Use currency formatting for financial values and percentage formatting for rates

Example Queries for Training Documentation

Provide these to client leadership as example questions they can ask Copilot:

1. "Which engagement types had the highest margin last quarter?"
2. "Show me the top 5 consultants by utilization this month vs their target"
3. "What is the profitability trend for IT Implementation projects over the last 6 months?"
4. "Which active projects are currently below 15% margin?"
5. "Compare the utilization of the Strategy team vs the Implementation team this quarter"
6. "What is our effective multiplier by role level?"
7. "Show me consultants who are forecasted to be below 50% utilization in the next 4 weeks"
8. "What was our realization rate by client last quarter?"
9. "Which clients are most profitable on a fully loaded basis?"
10. "Summarize our firm's financial health this month compared to last month"

Setup Instructions

1. In Power BI Service, open the report in Edit mode
2. Navigate to the Copilot pane (requires a PPU license)
3. Click the settings gear icon in the Copilot pane
4. Paste the system prompt into the 'Help Copilot give better answers' text box
5. Save the report and re-publish to the app
6. Test each example query and refine the prompt based on Copilot's responses

PSA-to-Warehouse ETL Orchestrator

Type: integration

A centralized ETL orchestration layer that coordinates all data movement between the PSA platform, accounting system, HRIS, and the Azure SQL data warehouse. Uses Azure Data Factory for production workloads or Azure Functions for simpler deployments. Handles incremental loads, error handling, retry logic, and data quality validation.

Implementation: Azure Data Factory Pipeline

For clients with more complex data integration needs (multiple source systems, large data volumes), use Azure Data Factory instead of standalone Azure Functions.

Pipeline Architecture

Pipeline activity tree
text
Pipeline: PL_Daily_ETL_Orchestrator
├── Activity 1: Extract BigTime Time Entries (Copy Activity → REST to SQL)
├── Activity 2: Extract BigTime Projects (Copy Activity → REST to SQL)
├── Activity 3: Extract BigTime Consultants (Copy Activity → REST to SQL)
├── Activity 4: Extract QBO Invoices (Copy Activity → REST to SQL)
├── Activity 5: Extract QBO Chart of Accounts (Copy Activity → REST to SQL)
├── Activity 6: Load HRIS Cost Rates (Copy Activity → CSV/SFTP to SQL)
├── Activity 7: Data Quality Validation (Stored Procedure Activity)
├── Activity 8: Monthly Snapshot Calculation (Stored Procedure Activity)
├── Activity 9: Notify on Failure (Web Activity → Teams webhook)
└── Activity 10: Trigger Power BI Refresh (Web Activity → Power BI REST API)

Azure Data Factory ARM Template (key pipeline definition)

Azure Data Factory ARM template — key pipeline definition
json
{
  "name": "PL_Daily_ETL_Orchestrator",
  "properties": {
    "activities": [
      {
        "name": "Extract_BigTime_TimeEntries",
        "type": "Copy",
        "dependsOn": [],
        "policy": {
          "retry": 3,
          "retryIntervalInSeconds": 60
        },
        "typeProperties": {
          "source": {
            "type": "RestSource",
            "httpRequestTimeout": "00:05:00",
            "requestInterval": "00.00:00:01.000",
            "requestMethod": "GET"
          },
          "sink": {
            "type": "AzureSqlSink",
            "writeBehavior": "upsert",
            "upsertSettings": {
              "useTempDB": true,
              "keys": ["time_entry_id"]
            },
            "sqlWriterStoredProcedureName": "[analytics].[usp_UpsertTimeEntries]"
          }
        }
      }
    ],
    "triggers": [
      {
        "name": "TR_Daily_0600",
        "type": "ScheduleTrigger",
        "typeProperties": {
          "recurrence": {
            "frequency": "Day",
            "interval": 1,
            "startTime": "2025-01-01T06:00:00Z",
            "timeZone": "Eastern Standard Time"
          }
        }
      }
    ]
  }
}
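
The template above shows only Activity 1. Activity 10 (the Power BI refresh) is a Web Activity that calls the Power BI REST API refresh endpoint; the Python sketch below shows the equivalent call for reference. The workspace and dataset IDs are placeholders, and acquiring an Azure AD access token for the service principal is assumed to happen elsewhere.

Power BI dataset refresh call (illustrative)
python
# Hedged sketch of Activity 10: queue a Power BI dataset refresh via the
# Power BI REST API. Workspace/dataset IDs are placeholders; access_token is
# assumed to come from an Azure AD app with dataset refresh permissions.
import os
import requests

def trigger_dataset_refresh(access_token):
    workspace_id = os.environ.get('POWERBI_WORKSPACE_ID', '<workspace-guid>')
    dataset_id = os.environ.get('POWERBI_DATASET_ID', '<dataset-guid>')
    url = (f'https://api.powerbi.com/v1.0/myorg/groups/{workspace_id}'
           f'/datasets/{dataset_id}/refreshes')
    response = requests.post(
        url,
        headers={'Authorization': f'Bearer {access_token}'},
        json={'notifyOption': 'NoNotification'}
    )
    response.raise_for_status()  # 202 Accepted means the refresh was queued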

Data Quality Validation Stored Procedure

Data quality validation stored procedure
sql
CREATE PROCEDURE [analytics].[usp_DataQualityCheck]
AS
BEGIN
    DECLARE @issues TABLE (check_name NVARCHAR(200), severity NVARCHAR(20), detail NVARCHAR(1000));
    
    -- Check 1: Orphaned time entries (no matching project)
    INSERT INTO @issues
    SELECT 'Orphaned Time Entries', 'ERROR',
           CAST(COUNT(*) AS NVARCHAR) + ' time entries with no matching project'
    FROM analytics.fact_time_entry t
    LEFT JOIN analytics.dim_project p ON t.project_id = p.project_id
    WHERE p.project_id IS NULL
    HAVING COUNT(*) > 0;
    
    -- Check 2: Time entries with zero bill rate on billable projects
    INSERT INTO @issues
    SELECT 'Zero Bill Rate', 'WARNING',
           CAST(COUNT(*) AS NVARCHAR) + ' billable entries with $0 bill rate'
    FROM analytics.fact_time_entry
    WHERE is_billable = 1 AND bill_rate_applied = 0
    HAVING COUNT(*) > 0;
    
    -- Check 3: Consultants with no time entries in last 2 weeks (active only)
    INSERT INTO @issues
    SELECT 'Missing Time Entries', 'WARNING',
           c.consultant_name + ' (' + c.role_level + ') has no time entries in last 14 days'
    FROM analytics.dim_consultant c
    WHERE c.is_active = 1
      AND NOT EXISTS (
          SELECT 1 FROM analytics.fact_time_entry t
          JOIN analytics.dim_date d ON t.date_key = d.date_key
          WHERE t.consultant_id = c.consultant_id
            AND d.full_date >= DATEADD(DAY, -14, GETDATE())
      );
    
    -- Check 4: Revenue variance vs accounting (monthly)
    INSERT INTO @issues
    SELECT 'Revenue Variance', 'ERROR',
           'Month ' + CAST(MONTH(GETDATE()) AS NVARCHAR) + ': PSA revenue $' + 
           CAST(CAST(psa_rev AS INT) AS NVARCHAR) + ' vs Invoice revenue $' + 
           CAST(CAST(inv_rev AS INT) AS NVARCHAR) + ' (variance: ' + 
           CAST(CAST(ABS(psa_rev - inv_rev) / NULLIF(psa_rev, 0) * 100 AS INT) AS NVARCHAR) + '%)'
    FROM (
        SELECT 
            SUM(t.hours_worked * t.bill_rate_applied) AS psa_rev
        FROM analytics.fact_time_entry t
        JOIN analytics.dim_date d ON t.date_key = d.date_key
        WHERE d.month_number = MONTH(GETDATE())
          AND d.year_number = YEAR(GETDATE())
          AND t.is_billable = 1
    ) psa
    CROSS JOIN (
        SELECT SUM(invoice_amount) AS inv_rev
        FROM analytics.fact_invoice
        WHERE MONTH(invoice_date) = MONTH(GETDATE())
          AND YEAR(invoice_date) = YEAR(GETDATE())
    ) inv
    WHERE ABS(psa_rev - inv_rev) / NULLIF(psa_rev, 0) > 0.05;  -- >5% variance
    
    -- Check 5: Cost rate validation
    INSERT INTO @issues
    SELECT 'Missing Cost Rate', 'ERROR',
           c.consultant_name + ' has $0 cost rate — profitability calculations will be inaccurate'
    FROM analytics.dim_consultant c
    WHERE c.is_active = 1 AND (c.hourly_cost_rate IS NULL OR c.hourly_cost_rate = 0);
    
    -- Log results
    INSERT INTO analytics.data_quality_log (check_date, check_name, severity, detail)
    SELECT GETDATE(), check_name, severity, detail FROM @issues;
    
    -- Return issues for pipeline notification
    SELECT * FROM @issues;
END;

Monthly Snapshot Stored Procedure

Monthly snapshot stored procedure
sql
CREATE PROCEDURE [analytics].[usp_GenerateMonthlySnapshot]
AS
BEGIN
    DECLARE @snapshot_month DATE = DATEFROMPARTS(YEAR(GETDATE()), MONTH(GETDATE()), 1);
    
    -- Delete existing snapshot for current month (allows re-run)
    DELETE FROM analytics.fact_project_monthly_snapshot 
    WHERE snapshot_month = @snapshot_month;
    
    INSERT INTO analytics.fact_project_monthly_snapshot
    (project_id, snapshot_month, total_hours_to_date, billable_hours_to_date,
     revenue_recognized_to_date, total_cost_to_date, budget_consumed_pct)
    SELECT
        p.project_id,
        @snapshot_month,
        SUM(t.hours_worked),
        SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END),
        SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked * t.bill_rate_applied ELSE 0 END),
        SUM(t.hours_worked * t.cost_rate_applied),
        CASE WHEN p.budget_hours > 0 
             THEN SUM(t.hours_worked) / p.budget_hours 
             ELSE 0 END
    FROM analytics.dim_project p
    JOIN analytics.fact_time_entry t ON p.project_id = t.project_id
    JOIN analytics.dim_date d ON t.date_key = d.date_key
    WHERE d.full_date <= EOMONTH(@snapshot_month)
    GROUP BY p.project_id, p.budget_hours;
END;

Deployment Commands

1. Log in to Azure
2. Create the Data Factory
3. Deploy the pipeline from the ARM template
4. Create linked services for the BigTime REST API and Azure SQL (configure via Azure Portal → Data Factory → Manage → Linked Services)
Azure CLI deployment commands
bash
az login

az datafactory create --resource-group rg-profitability-analytics --factory-name adf-profitability-[clientname] --location eastus

az deployment group create --resource-group rg-profitability-analytics --template-file adf-pipeline-template.json

Weekly Executive Summary Generator

Type: agent

An automated agent that generates a natural-language weekly executive summary of firm profitability and utilization metrics. It runs every Monday morning and delivers a formatted summary to the managing partner via email and Teams. The summary highlights key wins, concerns, trends, and recommended actions — eliminating the need for manual report preparation.

Architecture

  • Trigger: Azure Function Timer (Monday 7:30 AM)
  • Data: Azure SQL warehouse (queries key metrics)
  • AI: Azure OpenAI Service (GPT-4) for narrative generation
  • Delivery: Microsoft Graph API (email) + Teams webhook (an email sketch follows the function code below)

Code: weekly_summary/__init__.py

weekly_summary/__init__.py — Azure Function main entrypoint
python
import azure.functions as func
import pyodbc
import os
import json
import requests
from datetime import datetime, timedelta

def main(timer: func.TimerRequest) -> None:
    conn = pyodbc.connect(os.environ['AZURE_SQL_CONNECTION_STRING'])
    cursor = conn.cursor()
    
    # Gather key metrics for the past week and comparison period
    metrics = {}
    
    # Firm-wide utilization this week vs last week
    cursor.execute('''
        SELECT
            SUM(CASE WHEN d.full_date >= DATEADD(WEEK, -1, GETDATE()) AND t.is_billable = 1 
                     THEN t.hours_worked ELSE 0 END) AS this_week_billable,
            SUM(CASE WHEN d.full_date >= DATEADD(WEEK, -1, GETDATE()) 
                     THEN t.hours_worked ELSE 0 END) AS this_week_total,
            SUM(CASE WHEN d.full_date >= DATEADD(WEEK, -2, GETDATE()) 
                     AND d.full_date < DATEADD(WEEK, -1, GETDATE()) AND t.is_billable = 1 
                     THEN t.hours_worked ELSE 0 END) AS last_week_billable,
            SUM(CASE WHEN d.full_date >= DATEADD(WEEK, -2, GETDATE()) 
                     AND d.full_date < DATEADD(WEEK, -1, GETDATE()) 
                     THEN t.hours_worked ELSE 0 END) AS last_week_total
        FROM analytics.fact_time_entry t
        JOIN analytics.dim_date d ON t.date_key = d.date_key
    ''')
    row = cursor.fetchone()
    
    active_consultants = cursor.execute(
        'SELECT COUNT(*) FROM analytics.dim_consultant WHERE is_active = 1'
    ).fetchone()[0]
    
    weekly_available = active_consultants * 40
    this_week_billable = row.this_week_billable or 0
    last_week_billable = row.last_week_billable or 0
    metrics['this_week_util'] = this_week_billable / weekly_available if weekly_available > 0 else 0
    metrics['last_week_util'] = last_week_billable / weekly_available if weekly_available > 0 else 0
    metrics['this_week_billable_hours'] = this_week_billable
    metrics['active_consultants'] = active_consultants
    
    # Revenue and margin this week
    cursor.execute('''
        SELECT 
            SUM(t.hours_worked * t.bill_rate_applied) AS revenue,
            SUM(t.hours_worked * t.cost_rate_applied) AS cost
        FROM analytics.fact_time_entry t
        JOIN analytics.dim_date d ON t.date_key = d.date_key
        WHERE d.full_date >= DATEADD(WEEK, -1, GETDATE()) AND t.is_billable = 1
    ''')
    rev_row = cursor.fetchone()
    metrics['weekly_revenue'] = rev_row.revenue or 0
    metrics['weekly_cost'] = rev_row.cost or 0
    metrics['weekly_margin'] = (metrics['weekly_revenue'] - metrics['weekly_cost']) / metrics['weekly_revenue'] if metrics['weekly_revenue'] > 0 else 0
    
    # Top performing projects (by margin)
    cursor.execute('''
        SELECT TOP 3 p.project_name, p.client_name, p.engagement_type,
            SUM(t.hours_worked * t.bill_rate_applied) AS revenue,
            (SUM(t.hours_worked * t.bill_rate_applied) - SUM(t.hours_worked * t.cost_rate_applied)) 
                / NULLIF(SUM(t.hours_worked * t.bill_rate_applied), 0) AS margin_pct
        FROM analytics.fact_time_entry t
        JOIN analytics.dim_project p ON t.project_id = p.project_id
        JOIN analytics.dim_date d ON t.date_key = d.date_key
        WHERE d.full_date >= DATEADD(WEEK, -1, GETDATE()) AND t.is_billable = 1
        GROUP BY p.project_name, p.client_name, p.engagement_type
        HAVING SUM(t.hours_worked) > 10
        ORDER BY margin_pct DESC
    ''')
    metrics['top_projects'] = [{'name': r.project_name, 'client': r.client_name, 
                                 'type': r.engagement_type, 'margin': f'{r.margin_pct:.1%}'} 
                                for r in cursor.fetchall()]
    
    # At-risk projects
    cursor.execute('''
        SELECT TOP 5 p.project_name, p.client_name,
            (SUM(t.hours_worked * t.bill_rate_applied) - SUM(t.hours_worked * t.cost_rate_applied)) 
                / NULLIF(SUM(t.hours_worked * t.bill_rate_applied), 0) AS margin_pct,
            SUM(t.hours_worked) / NULLIF(p.budget_hours, 0) AS budget_pct
        FROM analytics.fact_time_entry t
        JOIN analytics.dim_project p ON t.project_id = p.project_id
        WHERE p.project_status = 'Active'
        GROUP BY p.project_name, p.client_name, p.budget_hours
        HAVING (SUM(t.hours_worked * t.bill_rate_applied) - SUM(t.hours_worked * t.cost_rate_applied)) 
                / NULLIF(SUM(t.hours_worked * t.bill_rate_applied), 0) < 0.15
        ORDER BY margin_pct ASC
    ''')
    metrics['at_risk_projects'] = [{'name': r.project_name, 'client': r.client_name,
                                     'margin': f'{r.margin_pct:.1%}', 'budget': f'{r.budget_pct:.0%}'}
                                    for r in cursor.fetchall()]
    
    # Under-utilized consultants
    cursor.execute('''
        SELECT TOP 5 c.consultant_name, c.role_level, c.utilization_target,
            ISNULL(SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END), 0) / 40.0 AS this_week_util
        FROM analytics.dim_consultant c
        LEFT JOIN analytics.fact_time_entry t ON c.consultant_id = t.consultant_id
            AND t.date_key IN (SELECT date_key FROM analytics.dim_date WHERE full_date >= DATEADD(WEEK, -1, GETDATE()))
        WHERE c.is_active = 1
        GROUP BY c.consultant_name, c.role_level, c.utilization_target
        HAVING ISNULL(SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END), 0) / 40.0 < c.utilization_target * 0.7
        ORDER BY this_week_util ASC
    ''')
    metrics['underutilized'] = [{'name': r.consultant_name, 'role': r.role_level,
                                  'actual': f'{r.this_week_util:.0%}', 'target': f'{r.utilization_target:.0%}'}
                                 for r in cursor.fetchall()]
    
    conn.close()
    
    # Generate narrative with Azure OpenAI
    prompt = f"""
    Generate a concise executive summary for a professional services firm's weekly performance.
    Write in a professional but direct tone. Use bullet points for key items. Include specific numbers.
    
    ## This Week's Data (ending {datetime.now().strftime('%B %d, %Y')})
    - Active consultants: {metrics['active_consultants']}
    - Firm utilization: {metrics['this_week_util']:.1%} (last week: {metrics['last_week_util']:.1%})
    - Billable hours: {metrics['this_week_billable_hours']:,.0f}
    - Weekly revenue: ${metrics['weekly_revenue']:,.0f}
    - Weekly gross margin: {metrics['weekly_margin']:.1%}
    
    ## Top Performing Projects
    {json.dumps(metrics['top_projects'], indent=2)}
    
    ## At-Risk Projects (below 15% margin)
    {json.dumps(metrics['at_risk_projects'], indent=2)}
    
    ## Under-Utilized Consultants (>30% below target)
    {json.dumps(metrics['underutilized'], indent=2)}
    
    Structure the summary as:
    1. **Week in Review** (2-3 sentences on overall performance)
    2. **Wins** (top projects/metrics to celebrate)
    3. **Watch Items** (at-risk projects and under-utilized staff)
    4. **Recommended Actions** (specific, actionable steps for this week)
    """
    
    openai_response = requests.post(
        f"{os.environ['AZURE_OPENAI_ENDPOINT']}/openai/deployments/{os.environ['AZURE_OPENAI_DEPLOYMENT']}/chat/completions?api-version=2024-02-01",
        headers={
            'api-key': os.environ['AZURE_OPENAI_KEY'],
            'Content-Type': 'application/json'
        },
        json={
            'messages': [
                {'role': 'system', 'content': 'You are a professional services firm analytics advisor. Write clear, actionable executive summaries.'},
                {'role': 'user', 'content': prompt}
            ],
            'temperature': 0.3,
            'max_tokens': 1000
        }
    )
    
    summary_text = openai_response.json()['choices'][0]['message']['content']
    
    # Send to Teams
    teams_card = {
        '@type': 'MessageCard',
        '@context': 'http://schema.org/extensions',
        'themeColor': '0078D4',
        'summary': 'Weekly Profitability Summary',
        'title': f'📊 Weekly Executive Summary — {datetime.now().strftime("%B %d, %Y")}',
        'sections': [{
            'text': summary_text.replace('\n', '<br>')
        }],
        'potentialAction': [{
            '@type': 'OpenUri',
            'name': 'Open Full Dashboard',
            'targets': [{'os': 'default', 'uri': os.environ.get('POWERBI_APP_URL', '#')}]
        }]
    }
    
    requests.post(os.environ['TEAMS_WEBHOOK_URL'], json=teams_card)
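
The function above posts to Teams only. For the email leg listed under Architecture, one option is the Microsoft Graph sendMail endpoint; the sketch below is illustrative, uses placeholder sender/recipient settings, and assumes an app registration with the Mail.Send application permission plus a token-acquisition step not shown here.

Email delivery via Microsoft Graph (illustrative)
python
# Hedged sketch of the email leg: send the generated summary via Microsoft
# Graph sendMail. SUMMARY_SENDER_UPN / MANAGING_PARTNER_EMAIL are placeholder
# settings; graph_token must be an app-only token with Mail.Send permission.
import os
import requests
from datetime import datetime

def send_summary_email(graph_token, summary_text):
    sender_upn = os.environ.get('SUMMARY_SENDER_UPN', 'reports@example.com')
    recipient = os.environ.get('MANAGING_PARTNER_EMAIL', 'partner@example.com')
    message = {
        'message': {
            'subject': f'Weekly Executive Summary — {datetime.now():%B %d, %Y}',
            'body': {
                'contentType': 'HTML',
                'content': summary_text.replace('\n', '<br>')
            },
            'toRecipients': [{'emailAddress': {'address': recipient}}]
        },
        'saveToSentItems': True
    }
    response = requests.post(
        f'https://graph.microsoft.com/v1.0/users/{sender_upn}/sendMail',
        headers={'Authorization': f'Bearer {graph_token}'},
        json=message
    )
    response.raise_for_status()  # Graph returns 202 Accepted on success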

function.json

function.json — Azure Function timer trigger binding
json
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "timer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 30 7 * * 1"
    }
  ]
}

Required Azure OpenAI Setup

1. Create an Azure OpenAI resource in the Azure Portal
2. Deploy a GPT-4 model (or a smaller deployment such as GPT-4o mini for cost savings)
3. Add environment variables: AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_KEY, AZURE_OPENAI_DEPLOYMENT
4. Estimated cost: <$5/month for weekly summaries (minimal token usage)
Note

Alternative for clients without Azure OpenAI access: replace the OpenAI call with a template-based summary built with plain Python string formatting. Less dynamic, but zero AI cost. A minimal sketch follows.
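
This is a hedged sketch of that fallback, assuming the metrics dictionary built earlier in the function; the section headings mirror the AI prompt structure and are illustrative only.

Template-based summary fallback (illustrative)
python
# Hedged sketch of a zero-AI fallback: format the metrics dictionary (built
# earlier in the function) into a fixed-structure summary using plain string
# formatting instead of calling Azure OpenAI.
def build_template_summary(metrics):
    lines = [
        '## Week in Review',
        (f"Firm utilization was {metrics['this_week_util']:.1%} "
         f"(prior week: {metrics['last_week_util']:.1%}) across "
         f"{metrics['active_consultants']} active consultants."),
        (f"Billable hours: {metrics['this_week_billable_hours']:,.0f} | "
         f"Revenue: ${metrics['weekly_revenue']:,.0f} | "
         f"Gross margin: {metrics['weekly_margin']:.1%}"),
        '',
        '## Wins',
    ]
    for p in metrics['top_projects']:
        lines.append(f"- {p['name']} ({p['client']}, {p['type']}): {p['margin']} margin")
    lines += ['', '## Watch Items']
    for p in metrics['at_risk_projects']:
        lines.append(f"- {p['name']} ({p['client']}): {p['margin']} margin, "
                     f"{p['budget']} of budget consumed")
    for c in metrics['underutilized']:
        lines.append(f"- {c['name']} ({c['role']}): {c['actual']} utilization vs "
                     f"{c['target']} target")
    return '\n'.join(lines)

# Usage inside main(): summary_text = build_template_summary(metrics)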

Testing & Validation

  • RECONCILIATION TEST: Compare monthly revenue totals from the Azure SQL warehouse against the client's accounting system (QBO/Xero/Intacct) P&L for the most recent 6 months. Variance must be within 2% for each month. If variance exceeds 2%, investigate bill rate mapping, missing time entries, or duplicate records before proceeding.
  • UTILIZATION CALCULATION TEST: For 5 randomly selected consultants, manually calculate their utilization rate for the most recent complete month using raw time entry data (a spot-check sketch follows this list). Compare against the Power BI dashboard value. Results must match within 0.5 percentage points.
  • MARGIN CALCULATION TEST: For 5 randomly selected completed projects, manually calculate gross margin (revenue - direct labor cost) / revenue using raw data. Compare against the Power BI Project Profitability dashboard. Results must match within 1 percentage point.
  • ROW-LEVEL SECURITY TEST: Log into Power BI Service as a test user from each of the three security roles (Partner, Manager, Consultant). Verify: (1) Partner sees all firm data, (2) Manager sees only their department's data, (3) Consultant sees only their own individual data. Attempt to access restricted data through drill-through and cross-filtering to confirm RLS cannot be bypassed.
  • DATA FRESHNESS TEST: Log a test time entry in BigTime at 9:00 AM. Verify it appears in the Azure SQL warehouse within 4 hours (by the next ETL cycle). Verify it appears in the Power BI dashboard after the next scheduled refresh. Document the actual end-to-end latency.
  • ALERT THRESHOLD TEST: Create a test project in BigTime with artificially low bill rates to produce a margin below 10%. Run the anomaly detection Azure Function manually and verify a Teams notification is generated within 5 minutes with the correct project name, margin percentage, and severity level.
  • FORECAST ACCURACY BASELINE: Run the utilization forecasting model and record its 4-week predictions for all consultants. After 4 weeks, compare predicted utilization against actual. Document the Mean Absolute Error (MAE) as the baseline for future model improvement. Acceptable initial MAE: <15 percentage points.
  • DASHBOARD PERFORMANCE TEST: Open each of the four Power BI dashboards and measure page load time. Target: <5 seconds for initial load, <2 seconds for filter/slicer changes. If performance is poor, optimize by reducing visual count per page, using aggregation tables, or upgrading Azure SQL tier.
  • INTEGRATION CONTINUITY TEST: Temporarily revoke the BigTime API token and verify the ETL pipeline: (1) fails gracefully without corrupting existing data, (2) sends a failure notification to the MSP admin via Teams/email, (3) automatically retries on the next scheduled run after the token is restored.
  • MOBILE RESPONSIVENESS TEST: Access all four dashboards via the Power BI mobile app (iOS and Android) on a partner's phone. Verify KPI cards are readable, charts render correctly, and drill-through navigation works on touch screens.
  • WEEKLY SUMMARY TEST: Trigger the Weekly Executive Summary Generator function manually. Verify: (1) it queries correct data from Azure SQL, (2) the AI-generated narrative is factually accurate against the raw data, (3) the Teams message renders with proper formatting, (4) the 'Open Dashboard' link works.
  • END-TO-END SMOKE TEST: Walk through a complete business scenario: (1) A consultant logs 8 hours on a project in BigTime, (2) The manager approves the time entry, (3) The ETL pipeline picks up the entry, (4) The Power BI dashboard reflects the updated hours and profitability, (5) The consultant can see their own entry via RLS-restricted view. Verify each step within expected SLA timeframes.
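
As an aid to the utilization and margin calculation tests above, the raw numbers can be pulled straight from the warehouse for side-by-side comparison with the dashboards. This is a hedged sketch that assumes the same Azure SQL schema used throughout this guide.

Manual spot-check script (illustrative)
python
# Hedged spot-check sketch: recompute one consultant's monthly utilization and
# one project's gross margin directly from the warehouse tables, for manual
# comparison against the Power BI dashboard values.
import os
import pyodbc

def spot_check(consultant_id, project_id, year, month):
    conn = pyodbc.connect(os.environ['AZURE_SQL_CONNECTION_STRING'])
    cursor = conn.cursor()

    # Utilization = billable hours / available hours for the month
    cursor.execute('''
        SELECT
            SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END) AS billable,
            MAX(c.annual_available_hours) / 12.0 AS available
        FROM analytics.fact_time_entry t
        JOIN analytics.dim_consultant c ON t.consultant_id = c.consultant_id
        JOIN analytics.dim_date d ON t.date_key = d.date_key
        WHERE t.consultant_id = ? AND d.year_number = ? AND d.month_number = ?
    ''', consultant_id, year, month)
    billable, available = cursor.fetchone()
    utilization = float(billable or 0) / float(available) if available else 0

    # Gross margin = (revenue - direct labor cost) / revenue, project to date
    cursor.execute('''
        SELECT
            SUM(CASE WHEN is_billable = 1 THEN hours_worked * bill_rate_applied ELSE 0 END) AS revenue,
            SUM(hours_worked * cost_rate_applied) AS cost
        FROM analytics.fact_time_entry
        WHERE project_id = ?
    ''', project_id)
    revenue, cost = cursor.fetchone()
    margin = (float(revenue or 0) - float(cost or 0)) / float(revenue) if revenue else 0

    conn.close()
    print(f'Utilization: {utilization:.1%} | Project margin: {margin:.1%}')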

Client Handoff

Client Handoff Checklist

Training Sessions to Deliver (3 sessions, 1.5 hours each)

Session 1: Partner/Leadership Dashboard Training

  • How to access the Power BI App and navigate between dashboards
  • Reading the Firm Overview dashboard: utilization trends, revenue by engagement type, profitability heatmap
  • Understanding the Project Profitability drill-through: identifying at-risk projects
  • Using Copilot AI (PPU users): example natural language queries for ad-hoc analysis
  • Setting up personal email subscriptions for weekly/monthly reports
  • Interpreting the Weekly Executive Summary and acting on recommendations

Session 2: Manager/PM Operational Training

  • Consultant Utilization dashboard: reading the heatmap, identifying bench risk, reviewing forecasts
  • Project Profitability deep-dive: budget burn-down tracking, scope creep indicators
  • Engagement Type Analysis: comparing performance across service lines
  • Using filters and slicers effectively for their team/department view
  • Understanding data refresh schedules and when to expect updated numbers
  • How to request dashboard modifications through the MSP support channel

Session 3: Time Entry Discipline & Data Quality

  • Why accurate time entry is the foundation of all analytics (show impact of bad data)
  • BigTime time entry best practices: daily entry, correct project coding, billable vs non-billable
  • Engagement type selection and project code standards
  • What happens when time entries are late, miscoded, or missing
  • Approval workflow responsibilities for managers

Documentation to Deliver

1. KPI Definition Document — Complete glossary of all metrics with formulas, targets, and data sources
2. Dashboard User Guide — PDF/video walkthrough of each dashboard page with annotated screenshots
3. Data Governance Policy — Rules for time entry deadlines, project coding standards, engagement type definitions, and cost rate update procedures
4. Access Control Matrix — Who has access to what data (Partner/Manager/Consultant roles)
5. Copilot Query Cookbook — 20+ example natural language queries with expected results
6. Escalation Procedures — How to report data issues, request changes, and contact MSP support
7. Architecture Diagram — Visual showing all systems, data flows, and integration points

Success Criteria to Review Together

30/60/90 Day Post-Launch Check-ins

  • Day 30: Review dashboard adoption (are people actually using it?), address first-round feedback, fix any data quality issues
  • Day 60: Assess forecast accuracy, refine utilization targets based on actual data, add any requested dashboard enhancements
  • Day 90: Full performance review against baseline metrics, discuss Phase 2 enhancements (predictive pricing, client lifetime value, resource optimization)

Maintenance

Ongoing Maintenance Plan

Daily Automated Tasks (No MSP Intervention Required)

  • ETL pipeline runs 4x daily (6am, 10am, 2pm, 6pm) — monitored via Azure Monitor alerts
  • Anomaly detection agent runs at 7:00 AM weekdays — delivers Teams alerts if thresholds breached
  • Power BI dataset refresh 4x daily — failure notifications sent to MSP admin automatically

Weekly MSP Tasks (30 minutes/week)

  • Monday: Review ETL pipeline run history in Azure Data Factory Monitor — check for failed runs, data quality warnings
  • Monday: Verify Weekly Executive Summary was generated and delivered correctly
  • Monday: Check Azure SQL database size and DTU utilization — ensure not approaching tier limits
  • Friday: Review data quality log table (analytics.data_quality_log) for new issues

Monthly MSP Tasks (2–4 hours/month)

  • Revenue reconciliation: Compare monthly warehouse totals against accounting P&L — investigate any variance >2%
  • Cost rate update: Coordinate with client HR/finance to update consultant cost rates if there are new hires, departures, or salary changes. Update dim_consultant table.
  • Utilization target review: Confirm targets still align with firm strategy; adjust if roles or expectations change
  • Forecast model accuracy check: Compare prior month's forecasts against actuals; document MAE trend
  • Azure resource review: Check Azure SQL DTU consumption, Azure Function execution counts, storage usage; right-size if needed
  • Power BI license audit: Verify all active users have appropriate license tier; remove departed employees
  • Security review: Verify Azure AD group memberships match current org chart; remove terminated employees from all access groups

Quarterly MSP Tasks (4–8 hours/quarter)

  • Dashboard enhancement cycle: Collect user feedback and implement 2–3 dashboard improvements per quarter
  • Forecast model retraining: Re-run the utilization forecast model with updated parameters if accuracy has degraded (MAE >15 percentage points)
  • Integration health check: Verify all API connections (BigTime, QBO, HRIS) are functioning; refresh OAuth tokens as needed
  • Compliance review: Verify SOC 2 audit trail integrity; confirm GDPR/CCPA data handling procedures are followed
  • Capacity planning: Review data volume growth and project Azure SQL scaling needs for next 6 months
  • Vendor update review: Check for BigTime, Power BI, and Azure SQL updates that may affect integrations or require configuration changes

Annual MSP Tasks

  • Full system audit: End-to-end review of data accuracy, security controls, and system performance
  • ASC 606 alignment check: Verify revenue recognition calculations still comply with current accounting standards
  • License renewal and optimization: Review all software subscriptions; negotiate renewals; assess whether tier changes are warranted
  • Strategic roadmap review: Meet with client leadership to discuss analytics maturity; plan Phase 2/3 enhancements

SLA Considerations

  • Data freshness SLA: Dashboard data no more than 6 hours old during business hours (covered by 4x daily refresh)
  • Issue response time: Critical data issues (wrong financial data) — 4 hour response, 24 hour resolution. Non-critical issues — 1 business day response, 1 week resolution.
  • Uptime: 99.5% for Azure SQL and Power BI Service (governed by Microsoft SLAs, which are 99.9%+)
  • Dashboard changes: Minor changes (filter additions, color changes) within 3 business days. Major changes (new dashboard pages, new data sources) scoped as change orders.

Escalation Path

1. Level 1 (Client internal): Data entry issues, access requests → Client's internal admin or operations manager
2. Level 2 (MSP support): Dashboard bugs, integration failures, data discrepancies → MSP help desk / assigned account engineer
3. Level 3 (MSP senior): Architecture changes, compliance issues, forecast model problems → MSP solution architect
4. Level 4 (Vendor): Platform bugs, API changes, licensing disputes → BigTime support / Microsoft support

Model Retraining Triggers

Retrain/recalibrate the utilization forecasting model when:

  • Forecast MAE exceeds 15 percentage points for 2 consecutive months
  • Firm adds or loses >20% of consultants (significant headcount change)
  • Firm enters a new practice area or engagement type not in historical data
  • Seasonal patterns shift significantly (e.g., new major client with different seasonality)
  • More than 12 months have passed since last recalibration
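
For the monthly accuracy check and the MAE trigger above, error can be measured by joining stored forecasts against actuals once the forecast weeks have elapsed. A hedged sketch, assuming the analytics.forecast_utilization table created earlier and weekly availability of annual_available_hours / 52:

Forecast MAE calculation (illustrative)
python
# Hedged sketch: compute Mean Absolute Error (in percentage points) between the
# stored 4-week-ahead forecasts and actual utilization for weeks that have
# already elapsed. Assumes the analytics.forecast_utilization table from the
# forecasting skill above.
import os
import pyodbc

def forecast_mae_pct_points():
    conn = pyodbc.connect(os.environ['AZURE_SQL_CONNECTION_STRING'])
    cursor = conn.cursor()
    cursor.execute('''
        SELECT f.forecast_utilization,
               SUM(CASE WHEN t.is_billable = 1 THEN t.hours_worked ELSE 0 END)
                   / (MAX(c.annual_available_hours) / 52.0) AS actual_utilization
        FROM analytics.forecast_utilization f
        JOIN analytics.dim_consultant c ON f.consultant_id = c.consultant_id
        JOIN analytics.dim_date d
            ON d.full_date BETWEEN f.forecast_week_start
                               AND DATEADD(DAY, 6, f.forecast_week_start)
        LEFT JOIN analytics.fact_time_entry t
            ON t.consultant_id = f.consultant_id AND t.date_key = d.date_key
        WHERE f.weeks_ahead = 4
          AND DATEADD(DAY, 6, f.forecast_week_start) < CAST(GETDATE() AS DATE)
        GROUP BY f.forecast_id, f.forecast_utilization
    ''')
    rows = cursor.fetchall()
    conn.close()
    if not rows:
        return 0.0
    errors = [abs(float(r.forecast_utilization) - float(r.actual_utilization or 0))
              for r in rows]
    return 100.0 * sum(errors) / len(errors)

# Retraining trigger: recalibrate if this exceeds 15 for two consecutive months.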

Alternatives

All-in-One PSA with Built-in Analytics (No BI Layer)

Use a PSA platform like BigTime, Scoro, or Polaris PSA with their native reporting and analytics dashboards, skipping the Power BI layer entirely. All profitability and utilization analytics are consumed within the PSA platform's built-in reporting module.

Replace BigTime + Power BI with Polaris PSA (Replicon), which positions itself as a 'self-driving PSA' with AI-native analytics including SmartBudget for profitability optimization and ZeroTime for automated time capture. Provides real-time insights, resource allocation AI, and profitability optimization without a separate BI layer.

Tableau + Salesforce-Native PSA (Certinia)

For firms deeply invested in the Salesforce ecosystem, use Certinia (formerly FinancialForce) as the PSA platform with Tableau as the analytics layer. Certinia runs natively on the Salesforce platform and its new Agentforce-based AI agents handle staffing and customer success automation.

Open-Source Analytics Stack (Metabase + PostgreSQL)

Replace Power BI with Metabase (open-source, self-hosted) and Azure SQL with self-hosted PostgreSQL. Eliminates all Microsoft licensing costs for the analytics layer while maintaining full dashboard capabilities.

Microsoft Fabric End-to-End (Advanced Analytics)

For firms wanting cutting-edge AI/ML capabilities, deploy the full Microsoft Fabric stack instead of simple Azure SQL + Power BI Pro. Includes Fabric Data Lakehouse, Data Warehouse, Data Science notebooks, and Real-Time Intelligence — all with built-in Copilot AI.
