
Implementation Guide: Analyze student assessment data to identify learning gaps and at-risk students

Step-by-step implementation guide for deploying AI to analyze student assessment data to identify learning gaps and at-risk students for Education clients.

Hardware Procurement

Wireless Access Points

Ubiquiti U6-Pro (UniFi 6 Professional) — Qty: 6

$159/unit MSP cost / $240/unit suggested resale

Provide reliable, high-density Wi-Fi coverage throughout school buildings so teachers and administrators can access cloud-hosted analytics dashboards, assessment platforms, and the SIS from classrooms, offices, and common areas without latency or drops.

PoE Managed Switch

Ubiquiti USW-24-PoE (UniFi Switch 24 PoE) — Qty: 1

$400/unit MSP cost / $600/unit suggested resale

Power the UniFi access points via PoE and provide centralized, managed Layer 2 switching for the school's data network. Enables VLAN segmentation to isolate student device traffic from administrative/analytics traffic.

Next-Generation Firewall / UTM

Fortinet FortiGate 40F (FG-40F) — Qty: 1

$500 hardware + $350/year FortiGuard licensing MSP cost / $800 hardware + $550/year licensing suggested resale

CIPA-compliant content filtering (required for E-Rate eligibility), intrusion prevention, VPN for remote admin access, and application-layer visibility. Protects all student and staff traffic and provides audit logging required for compliance.

On-Premises Server (if applicable)

On-Premises Server

Dell Technologies PowerEdge T350 (Xeon E-2334, 32GB DDR4 ECC, 2×2TB SATA RAID 1) — Qty: 1

$2,500/unit MSP cost / $3,750/unit suggested resale

Hosts the Ed-Fi Operational Data Store (ODS), PostgreSQL analytics database, and optionally Metabase or Apache Superset for self-hosted dashboards. Only required for districts with >2,000 students or state-mandated data residency requirements. Supports up to 128GB RAM for growth.

Network-Attached Storage for Backup

Synology RS1221+ (8-Bay Rackmount NAS) — Qty: 1

$1,100 chassis + $600 for 4×4TB Seagate IronWolf drives MSP cost / $2,500 total suggested resale

Provides on-site backup target for Ed-Fi ODS database dumps, SIS export archives, and compliance documentation. Implements the local tier of a 3-2-1 backup strategy alongside cloud backup (Veeam or Synology C2).

Uninterruptible Power Supply

APC by Schneider Electric SMT1500RM2U (Smart-UPS 1500VA 2U Rackmount) — Qty: 1

$600/unit MSP cost / $900/unit suggested resale

Protects the on-premises server and NAS from power fluctuations and brief outages. Provides 15–20 minutes of runtime for graceful shutdown, preventing data corruption in the Ed-Fi ODS/PostgreSQL database.

Administrator Endpoint Devices

Lenovo 100e Chromebook Gen 4 (82W00004US) — Qty: 5

$250/unit MSP cost / $350/unit suggested resale

Dedicated endpoints for school administrators and counselors to access Panorama/Otus dashboards, Power BI reports, and early-warning alert interfaces. Chromebooks are low-maintenance and integrate natively with Google Workspace for Education.

Software Procurement

Panorama Education - Student Success Platform

Panorama Education — SaaS, annual contract, per-district pricing

$15,000–$43,000/year depending on district size (500–5,000 students)

Core analytics platform: early warning system for at-risk students, MTSS intervention tracking, academic and SEL data dashboards, learning gap analysis by standard and demographic group. Ingests SIS, assessment, attendance, and behavior data. Includes Solara AI for predictive risk scoring.

Clever — SSO & Roster Sync

Free for basic SSO and roster sync; Premium: ~$3,000–$8,000/year for a mid-size district

Single Sign-On (SSO) for teachers and students across all EdTech platforms. Automated roster synchronization from SIS to Panorama and other tools via Clever Secure Sync. Eliminates manual CSV roster uploads and reduces integration errors.

Microsoft 365 A3 for Education

Microsoft 365 A3 for Education — Per-user SaaS, annual subscription (CSP resale)

$3.25/user/month for faculty; includes Power BI Pro. Suggested resale: $4.50–$5.50/user/month

Provides Microsoft Teams for collaboration, SharePoint for compliance documentation storage, Outlook/Exchange for alert email delivery, and Power BI Pro (included in A3) for building custom analytics dashboards beyond what Panorama provides natively. Azure AD (Entra ID) provides identity management backbone.

Microsoft Power BI Pro

Microsoft — Per-user SaaS

Included with M365 A3 license; standalone: $14/user/month (as of April 2025)

Supplemental dashboard and reporting layer for custom analytics: district-level trend reports, school board presentations, grant compliance reporting, and ad-hoc data exploration beyond Panorama's native dashboards. Connects to Ed-Fi ODS or direct SIS database for custom queries.

Ed-Fi ODS/API

Ed-Fi Alliance

$0 licensing; ~$200–$500/month cloud hosting or included on on-prem server

Open-source data standard and operational data store that normalizes student data from multiple sources (SIS, LMS, assessments) into a unified schema. Version 5.4 supports the 2024–2025 school year; Version 6.0 supports 2026–2028. Required for districts needing to aggregate data from multiple disparate systems or comply with state Ed-Fi mandates (e.g., Texas, Wisconsin).

Veeam Backup & Replication Community Edition

Veeam Software — Community Edition

Free (Community) or $500–$2,000/year for licensed edition. Suggested resale: $800–$3,000/year

Automated backup of on-premises Ed-Fi ODS database, server configurations, and compliance documentation to local NAS and cloud (Veeam Cloud Connect or Synology C2). Ensures data recoverability for FERPA compliance.

GoGuardian Admin

GoGuardian (Liminex Inc.) — SaaS, per-device annual license

$4.00–$7.50/device/year. Typical district: $2,000–$7,500/year

CIPA-compliant web content filtering for student Chromebooks and devices. Required for E-Rate eligibility. Provides classroom management features and screen monitoring. Complements FortiGate firewall-level filtering with device-level controls.

Renaissance Star Assessments (Optional Add-On)

Renaissance Learning Star Assessments — Per-student annual license

$4–$8/student/year; typical district: $4,000–$40,000/year

Computer-adaptive assessments in reading, math, and early literacy. Provides nationally-normed growth scores (SGP — Student Growth Percentiles) that feed directly into Panorama's learning gap analysis. If the district does not already have Star, MAP, or iReady, this is the recommended assessment platform to generate the input data the analytics solution requires.

Prerequisites

  • Active Student Information System (SIS) with current enrollment, demographics, attendance, and grade data. Supported: PowerSchool SIS, Infinite Campus, Skyward, Tyler SIS. Data must be current within the last 30 days.
  • At least one standardized assessment platform actively generating student performance data: Renaissance Star, NWEA MAP, Curriculum Associates iReady, or state-mandated assessments with exportable results.
  • Internet connectivity of at least 25 Mbps per 100 concurrent users; 100 Mbps+ recommended. Verify with speed tests from multiple classrooms during peak hours.
  • Google Workspace for Education or Microsoft 365 A1+ tenant configured with staff and student accounts for SSO federation.
  • Building-wide Wi-Fi coverage verified: no dead zones in classrooms, offices, or common areas where analytics dashboards will be accessed.
  • Designated district data administrator or IT contact with admin-level access to the SIS, assessment platform, and identity provider (Google Admin Console or Microsoft Entra ID).
  • Completed Data Processing Agreement (DPA) template approved by the district's legal counsel or data privacy officer, ready for vendor signature.
  • CIPA-compliant content filtering in place (or planned for simultaneous deployment) if the district receives E-Rate funding.
  • Minimum 2 years of historical assessment and attendance data available for export (CSV or API) to enable meaningful trend analysis and baseline risk scoring.
  • Administrative buy-in: signed project charter from the superintendent or principal authorizing student data sharing with the analytics platform under the FERPA 'school official' exception.
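The bandwidth prerequisite scales linearly with concurrent users. A minimal sizing sketch; the `required_bandwidth_mbps` helper is illustrative, and real sizing should also account for peak concurrency and per-application bandwidth profiles:

```python
# Quick sizing sketch for the connectivity prerequisite above. The helper
# name and the linear-scaling assumption are illustrative.

def required_bandwidth_mbps(concurrent_users, mbps_per_100_users=25.0):
    """Scale the '25 Mbps per 100 concurrent users' guideline linearly."""
    return concurrent_users / 100.0 * mbps_per_100_users

# A building with 400 concurrent users lands in the "100 Mbps+" tier
print(required_bandwidth_mbps(400))  # 100.0
```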

Installation Steps

...

Step 1: Discovery & Data Audit

Conduct a comprehensive audit of the client's existing data ecosystem. Document every system that holds student data: SIS, LMS, assessment platforms, special education systems, and communication tools. Assess data quality by pulling sample exports and checking for missing fields, inconsistent formatting, and duplicate records. Identify the primary SIS and its API capabilities. Map the complete data flow from enrollment through assessment to reporting.

Export student roster from PowerSchool via API and verify key data fields
bash
# Sample: Export student roster from PowerSchool via API to verify data structure
curl -X GET 'https://{district-powerschool-url}/ws/v1/district/student?page=1&pagesize=50' -H 'Authorization: Bearer {access_token}' -H 'Accept: application/json' | python3 -m json.tool > sample_roster.json
# Verify key fields exist
cat sample_roster.json | python3 -c "import json,sys; data=json.load(sys.stdin); students=data.get('students',{}).get('student',[]); fields=['name','student_number','grade_level','school_enrollment']; [print(f'{f}: {sum(1 for s in students if f in str(s))}/{len(students)} records') for f in fields]"
# Export sample assessment data (Renaissance Star example via SFTP)
sftp {star-sftp-user}@{star-sftp-host}:/exports/ <<< 'ls -la'
Note

This step typically takes 1–2 weeks. The single most important finding is the quality and completeness of the SIS data. If enrollment records are missing demographic data, grade levels, or have high rates of duplicate student IDs, remediation must happen before proceeding. Create a 'Data Readiness Scorecard' documenting each source system's readiness (green/yellow/red).
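The scorecard logic can be sketched in a few lines. Everything here is hypothetical: the `readiness_score` helper, the field names, and the 5%/15% green/yellow/red cutoffs are illustrative starting points, not SIS-mandated values:

```python
# Hypothetical sketch: score a roster export for the Data Readiness Scorecard.
# Thresholds and field names are illustrative assumptions.

def readiness_score(students, required=("name", "student_number", "grade_level")):
    total = max(len(students), 1)
    ids = [s.get("student_number") for s in students]
    duplicate_ids = len(ids) - len(set(ids))
    missing_fields = sum(1 for s in students for f in required if not s.get(f))
    problem_rate = (duplicate_ids + missing_fields) / total
    if problem_rate < 0.05:
        return "green"
    return "yellow" if problem_rate < 0.15 else "red"

roster = [
    {"name": "A", "student_number": "1001", "grade_level": "05"},
    {"name": "B", "student_number": "1001", "grade_level": "05"},  # duplicate ID
]
print(readiness_score(roster))  # red
```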

Step 2: Compliance Documentation & DPA Execution

Before any student data is shared with the analytics platform vendor, execute all required compliance documentation. Prepare and sign Data Processing Agreements (DPAs) with Panorama Education, Clever, and any other vendor that will receive student PII. Verify each vendor's SOC 2 Type II certification and FERPA compliance attestation. Draft the annual FERPA parental notification. For students under 13, prepare COPPA consent notices reflecting the June 2025 FTC amendments (separate consent required for non-integral third-party data sharing). Create the district's Student Data Privacy Bill of Rights (required in states like New York).

bash
# Check vendor compliance documentation — download and archive
mkdir -p /compliance/dpas /compliance/soc2_reports /compliance/ferpa_attestations
# Panorama DPA: request from https://www.panoramaed.com/privacy
# Clever DPA: available at https://clever.com/trust/privacy/dpa
# Create compliance tracking spreadsheet
cat > /compliance/vendor_compliance_tracker.csv << 'EOF'
Vendor,DPA_Signed,DPA_Date,SOC2_Verified,SOC2_Expiry,FERPA_Attestation,COPPA_Compliant,Data_Deletion_Policy,State_Privacy_Compliant
Panorama Education,No,,No,,No,No,No,No
Clever,No,,No,,No,No,No,No
Renaissance Learning,No,,No,,No,No,No,No
Microsoft (M365),No,,No,,No,No,No,No
EOF
Critical

CRITICAL: No student data may be transmitted to any vendor until the DPA is fully executed. This is a FERPA requirement and a direct legal liability for the MSP. Many districts hold a Student Data Privacy Consortium (SDPC) membership — check whether DPAs are already on file at https://privacy.a4l.org. The June 2025 COPPA amendments require separate parental consent for AI training use of children's data — verify with Panorama that student data is NOT used to train their models, or obtain separate consent.
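The tracker CSV created above can double as an automated gate. A sketch, with the tracker content inlined for illustration (in practice, read /compliance/vendor_compliance_tracker.csv):

```python
import csv
import io

# Sketch: block go-live for any vendor whose DPA is not yet signed.
# Column names match the tracker created in this step.
tracker = """Vendor,DPA_Signed,DPA_Date,SOC2_Verified
Panorama Education,Yes,2025-08-01,Yes
Clever,No,,No
"""

unsigned = [row["Vendor"] for row in csv.DictReader(io.StringIO(tracker))
            if row["DPA_Signed"] != "Yes"]
print("BLOCK:", unsigned)  # BLOCK: ['Clever']
```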

Step 3: Network Infrastructure Deployment

Install and configure network hardware to ensure reliable, secure connectivity for the analytics platform. Deploy Ubiquiti UniFi access points throughout the building, configure the PoE switch with VLANs (separate admin/student/IoT networks), and set up the FortiGate firewall with CIPA-compliant content filtering. Configure DNS to use the firewall's DNS proxy for content filtering enforcement.

shell
# --- UniFi Controller Setup (Cloud Key or self-hosted) ---
# Adopt APs via UniFi Network Application
# SSH into each AP for initial adoption if not auto-discovered:
ssh ubnt@{ap-ip-address}
set-inform http://{unifi-controller-ip}:8080/inform

# --- FortiGate 40F Initial Configuration ---
# Connect via console cable (115200 baud) or default IP 192.168.1.99
config system interface
  edit port1
    set ip 10.0.1.1/24
    set allowaccess ping https ssh
  next
end

# Configure VLAN for admin network
config system interface
  edit admin-vlan
    set vdom root
    set ip 10.0.10.1/24
    set allowaccess ping https ssh
    set interface port1
    set vlanid 10
  next
end

# Configure CIPA-compliant web filtering
config webfilter profile
  edit "CIPA-Student-Filter"
    set web-content-log enable
    config ftgd-wf
      config filters
        edit 1
          set category 2
          set action block
        next
        # Block: Adult/Mature Content, Malware, Phishing, Proxy Avoidance
        # Categories: 2,3,4,7,59,62,83,86,88,90
      end
    end
  next
end

# Apply to student VLAN policy
config firewall policy
  edit 1
    set srcintf "student-vlan"
    set dstintf port2
    set srcaddr all
    set dstaddr all
    set action accept
    set schedule always
    set service ALL
    set utm-status enable
    set webfilter-profile "CIPA-Student-Filter"
    set ssl-ssh-profile "certificate-inspection"
    set logtraffic all
  next
end
Note

VLAN segmentation is essential: admin traffic (analytics dashboards, SIS access) must be on a separate VLAN from student devices. This prevents students from potentially accessing admin-level analytics. Ensure the FortiGate's FortiGuard subscription is active for up-to-date web filtering categories. Document the firewall configuration and export the config file for backup.
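Before configuring the firewall, it is worth sanity-checking the VLAN plan on paper. A small sketch using the stdlib `ipaddress` module; the management and admin subnets mirror the config above, while the student subnet (VLAN 20, 10.0.20.0/24) is an assumed value not shown in the FortiGate snippet:

```python
import ipaddress

# Sketch: verify the planned subnets do not overlap before pushing config.
vlans = {
    "mgmt (port1)": "10.0.1.0/24",
    "admin (VLAN 10)": "10.0.10.0/24",
    "student (VLAN 20)": "10.0.20.0/24",  # assumed student subnet
}
nets = {name: ipaddress.ip_network(cidr) for name, cidr in vlans.items()}
names = sorted(nets)
overlaps = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if nets[a].overlaps(nets[b])]
print(overlaps)  # []
```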

Step 4: Identity Provider & SSO Configuration with Clever

Set up Clever as the SSO and roster synchronization middleware. Connect the district's SIS to Clever for automated roster sync. Configure SAML/OAuth federation between Clever and the district's identity provider (Google Workspace or Microsoft Entra ID). This creates the foundation for seamless, secure access to the analytics platform.

  1. Create district account at https://clever.com/signup/district
  2. Navigate to Settings > SIS Sync
  3. Select SIS type (e.g., PowerSchool, Infinite Campus)
  4. For PowerSchool: Install the Clever plugin from the PowerSchool Plugin Store — in PowerSchool Admin: System > Plugin Management > Install. Plugin name: 'Clever Secure Sync'. Configure sync scope: all schools or selected schools
  5. Configure Clever SSO with Google Workspace: in Google Admin Console: Apps > Web and mobile apps > Add app > Search for 'Clever'. Configure SAML SSO — Entity ID: https://clever.com/oauth/saml/metadata/{district_id}, ACS URL: https://clever.com/oauth/saml/consume, Name ID: Email address
  6. Verify roster sync: in Clever Dashboard: Data > SIS Sync > View sync status. Check that Students, Teachers, Sections, and Schools counts match the SIS
Clever API verification calls for MSP monitoring of district sync status
bash
# --- Clever API verification (for MSP monitoring) ---
curl -X GET 'https://api.clever.com/v3.0/districts/{district_id}' \
  -H 'Authorization: Bearer {clever_api_token}' \
  -H 'Accept: application/json'

# Check student count matches SIS
curl -X GET 'https://api.clever.com/v3.0/students?limit=1' \
  -H 'Authorization: Bearer {clever_api_token}' | python3 -c "import json,sys; d=json.load(sys.stdin); print(f'Total students synced: check X-Total header')"
Note

Clever's free tier covers basic SSO and roster sync for most districts. Only upgrade to Premium if the district needs advanced features like LaunchPad portal or district-level analytics on app usage. The initial roster sync takes 24–48 hours to fully process. Run a manual sync and verify student counts against the SIS before proceeding to platform setup. Common issues: duplicate students (different IDs in SIS vs. Clever), missing teacher-to-section assignments, and inactive students still syncing.
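The count verification can be scripted as a reconciliation check. A sketch: counts are hard-coded for illustration (in practice they come from the API calls above), and the 1% tolerance is an assumption to tune per district:

```python
# Sketch: reconcile student counts across SIS and Clever after the sync.

def reconcile(sis_count, clever_count, tolerance=0.01):
    """Flag the sync if Clever's count drifts more than `tolerance` from SIS."""
    drift = abs(sis_count - clever_count) / max(sis_count, 1)
    status = "OK" if drift <= tolerance else "INVESTIGATE"
    return status, round(drift, 4)

print(reconcile(4120, 4118))  # ('OK', 0.0005)
print(reconcile(4120, 3900))  # large gap: likely duplicates or inactive students
```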

Step 5: Panorama Education Platform Deployment

Deploy and configure the Panorama Education Student Success platform. Work with the Panorama implementation team (they assign a dedicated Customer Success Manager) to configure the data integration, set up early warning indicators, define risk tiers, and build the initial dashboard views. This step involves significant collaboration between the MSP, the district data admin, and the Panorama onboarding team.

Panorama Deployment: SIS Data Feed, Assessment Feed, and Early Warning Indicator Configuration
bash
# Panorama deployment is primarily a guided SaaS setup process.
# The MSP's role is to facilitate data integration and verify accuracy.

# --- Step 5a: Initiate Panorama Onboarding ---
# Contact: partnerships@panoramaed.com or your assigned Panorama CSM
# Provide: District name, NCES ID, SIS type, student count, assessment platforms

# --- Step 5b: Configure SIS Data Feed ---
# Option 1: Clever Secure Sync (recommended)
#   - In Panorama Admin > Integrations > Add Integration > Clever
#   - Authorize Panorama in Clever Dashboard > Applications > Add Application
#   - Map fields: student_id, grade_level, school, demographics

# Option 2: Ed-Fi ODS API Connection
#   - In Panorama Admin > Integrations > Add Integration > Ed-Fi
#   - Provide Ed-Fi ODS API URL: https://{edfi-server}/api/data/v3
#   - API Key and Secret from Ed-Fi Admin App

# Option 3: SFTP CSV Upload (fallback)
#   - Generate nightly CSV exports from SIS
#   - Upload to Panorama SFTP: sftp {district}@sftp.panoramaed.com
#   - File format: student_id,first_name,last_name,grade,school_id,enrollment_date

# --- Step 5c: Configure Assessment Data Feed ---
# For Renaissance Star:
#   - In Panorama Admin > Assessments > Add Source > Renaissance
#   - Provide Star API credentials or SFTP path
#   - Map: student_id, assessment_date, subject, scale_score, percentile_rank, SGP

# For NWEA MAP:
#   - Export from MAP Admin > Data Export > Comprehensive Student File
#   - Upload via Panorama SFTP or API integration

# --- Step 5d: Configure Early Warning Indicators ---
# In Panorama Admin > Student Success > Early Warning System:
# Indicator 1: Attendance Rate
#   - Yellow flag: < 90% attendance (Chronic absence threshold)
#   - Red flag: < 80% attendance (Severe chronic absence)
# Indicator 2: Course Grades
#   - Yellow flag: Any grade D or below
#   - Red flag: 2+ grades D/F or GPA < 1.5
# Indicator 3: Assessment Performance
#   - Yellow flag: Below 25th percentile on Star/MAP
#   - Red flag: Below 10th percentile or declining SGP (< 35)
# Indicator 4: Behavior
#   - Yellow flag: 2+ disciplinary referrals in a quarter
#   - Red flag: Suspension or 4+ referrals

# Overall Risk Tier Calculation:
# Low Risk: 0 flags
# Some Risk: 1 yellow flag
# At Risk: 2+ yellow flags OR 1 red flag
# High Risk: 2+ red flags
Note

Panorama's onboarding typically takes 4–8 weeks with their CSM guiding the process. The MSP's value-add is ensuring the SIS data is clean before it reaches Panorama, troubleshooting integration errors, and serving as the technical bridge between the district's IT team and Panorama's implementation team. Request access to Panorama's Partner Portal for MSP-specific documentation and training. The early warning thresholds should be calibrated with the district's leadership — the defaults above are research-based starting points from the What Works Clearinghouse.
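The risk-tier roll-up above can be mirrored in a few lines for spot-checking dashboard output. Panorama computes tiers internally; the `risk_tier` helper below is an illustrative re-implementation of the stated rules, not Panorama's actual algorithm:

```python
# Mirror of the risk-tier calculation listed in Step 5d.

def risk_tier(yellow_flags, red_flags):
    if red_flags >= 2:
        return "High Risk"
    if red_flags == 1 or yellow_flags >= 2:
        return "At Risk"
    if yellow_flags == 1:
        return "Some Risk"
    return "Low Risk"

print(risk_tier(0, 0), "|", risk_tier(1, 0), "|", risk_tier(2, 0), "|", risk_tier(0, 2))
# Low Risk | Some Risk | At Risk | High Risk
```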

Step 6: Ed-Fi ODS Deployment (On-Premises or Cloud)

For districts requiring a centralized data warehouse or those in states mandating Ed-Fi compliance (e.g., Texas PEIMS), deploy the Ed-Fi Operational Data Store (ODS) and API. This serves as the canonical data hub connecting SIS, assessment, and analytics platforms. Skip this step if Clever Secure Sync provides sufficient integration for a smaller district.

Ed-Fi ODS deployment options: Docker (Option A) and Windows Server (Option B), plus SIS-to-Ed-Fi connector configuration and sync scheduling
bash
# --- Option A: Docker-based deployment (Recommended for most MSPs) ---
# Prerequisites: Docker Engine 20.10+, Docker Compose v2, 8GB+ RAM

# Clone Ed-Fi Docker repository
git clone https://github.com/Ed-Fi-Alliance-OSS/Ed-Fi-ODS-Docker.git
cd Ed-Fi-ODS-Docker

# Configure environment
cp .env.example .env
nano .env
# Set these values:
# POSTGRES_USER=edfi_admin
# POSTGRES_PASSWORD={strong-random-password}
# ODS_POSTGRES_HOST=ed-fi-db-ods
# ADMIN_POSTGRES_HOST=ed-fi-db-admin
# API_MODE=SharedInstance
# LOGS_FOLDER=/var/log/edfi

# Launch the stack
docker compose -f docker-compose-ods-api.yml up -d

# Verify API is running
curl -s http://localhost:8001 | python3 -m json.tool
# Expected: JSON with version info and data model URLs

# --- Option B: Windows Server deployment ---
# Download installer from https://techdocs.ed-fi.org/
# EdFi.Suite3.Installer.WebApi.5.4.0.nupkg
# Run PowerShell installer:
Import-Module ./EdFi.Suite3.Installer.WebApi
$params = @{
    DbConnectionInfo = @{
        Server = '(local)'
        DatabaseName = 'EdFi_ODS'
        UseIntegratedSecurity = $true
    }
    WebSitePath = 'C:\inetpub\Ed-Fi\WebApi'
    WebSitePort = 443
    CertThumbprint = '{ssl-cert-thumbprint}'
}
Install-EdFiOdsWebApi @params

# --- Configure SIS-to-Ed-Fi sync ---
# For PowerSchool: Use Ed-Fi PowerSchool Connector
# Download from Ed-Fi Exchange: https://exchange.ed-fi.org
# Configure in connector settings:
# SIS_BASE_URL=https://{powerschool-url}/ws/v1
# EDFI_API_URL=https://{edfi-server}/api/data/v3
# EDFI_API_KEY={api-key-from-admin-app}
# EDFI_API_SECRET={api-secret-from-admin-app}

# Run initial full sync
./run-connector.sh --mode full

# Schedule nightly incremental sync via cron
echo '0 2 * * * cd /opt/edfi-connector && ./run-connector.sh --mode incremental >> /var/log/edfi-sync.log 2>&1' | crontab -
Note

Ed-Fi ODS v5.4 is the current production release for 2024–2025 school year. Version 6.0 (released 2025) targets 2026–2028. Plan for an upgrade path. Docker deployment is recommended for MSPs — it simplifies updates and isolates dependencies. For the Dell PowerEdge T350, allocate at minimum 16GB RAM to the Ed-Fi stack (PostgreSQL + API + Admin App). The initial full SIS sync can take several hours for large districts (>5,000 students). Schedule it during off-hours and monitor PostgreSQL logs for errors.
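The version returned by the API root (the curl health check above) should be pinned in monitoring so an unplanned upgrade is caught early. A sketch; the inlined `sample` payload shape is an assumption about the response:

```python
import json

# Sketch: confirm the deployed ODS reports the expected data-model version
# before scheduling syncs. `sample` stands in for the API root response.
sample = json.loads('{"version": "5.4", "suite": "3"}')

def check_ods_version(info, expected="5.4"):
    return info.get("version") == expected

print(check_ods_version(sample))  # True
```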

Step 7: Power BI Custom Dashboard Configuration

Deploy Microsoft Power BI Pro dashboards for custom analytics views that complement Panorama's native dashboards. These are particularly valuable for school board presentations, grant compliance reporting, and ad-hoc analysis that requires drill-down beyond what Panorama provides. Connect Power BI to the Ed-Fi ODS (or directly to the SIS database) as a data source.

Power BI Desktop: Ed-Fi ODS connection, DAX measures for at-risk scoring, and publish/RLS configuration
dax
// --- Power BI Desktop: Connect to Ed-Fi ODS PostgreSQL ---
// (DAX uses // for comments, not #)
// 1. Open Power BI Desktop (free download from Microsoft)
// 2. Get Data > PostgreSQL database
//    Server: {edfi-server-ip}:5432
//    Database: EdFi_ODS
//    Data Connectivity mode: DirectQuery (for real-time) or Import (for performance)

// 3. Load key tables:
//    - edfi.Student
//    - edfi.StudentSchoolAssociation
//    - edfi.StudentSectionAssociation
//    - edfi.Grade
//    - edfi.StudentAssessment
//    - edfi.StudentAssessmentStudentObjectiveAssessment
//    - edfi.AttendanceEvent (or DailyAttendance)
//    - edfi.DisciplineAction

// --- Alternative: Connect directly to PowerSchool Oracle/MSSQL ---
// If Ed-Fi ODS is not deployed, use PowerSchool's reporting database:
// ODBC connection to PowerSchool Oracle DB (read replica)
// Tables: Students, Attendance, StoredGrades, CC (course enrollments)

// --- Power BI DAX Measures for At-Risk Scoring ---
// Create these in Power BI Desktop > Modeling > New Measure:

// Attendance Rate per Student:
AttendanceRate = 
  DIVIDE(
    COUNTROWS(FILTER(Attendance, Attendance[EventCategory] = "Present")),
    COUNTROWS(Attendance),
    0
  )

// Chronic Absence Flag:
IsChronicallyAbsent = IF([AttendanceRate] < 0.90, 1, 0)

// Assessment Risk Flag:
AssessmentRisk = 
  IF(
    CALCULATE(MAX(StudentAssessment[PercentileRank])) < 25,
    IF(CALCULATE(MAX(StudentAssessment[PercentileRank])) < 10, "High", "Moderate"),
    "Low"
  )

// --- Publish to Power BI Service ---
// 1. Publish from Desktop to your M365 tenant workspace
// 2. Configure scheduled refresh (e.g., daily at 4 AM)
// 3. Set up Row-Level Security (RLS):
//    - Role: Teacher — filter StudentSectionAssociation[StaffUSI] = USERNAME()
//    - Role: Principal — filter StudentSchoolAssociation[SchoolId] = {school_id}
//    - Role: District Admin — no filter (sees all)
// 4. Share dashboard via Power BI app or embed in Microsoft Teams
Note

Row-Level Security (RLS) is CRITICAL for FERPA compliance — teachers must only see data for students in their assigned sections, and principals must only see students at their school. Test RLS thoroughly with the 'View as Role' feature before publishing. Schedule data refresh outside school hours to avoid impacting SIS performance. Power BI Pro licenses are included in Microsoft 365 A3 for Education — no additional cost if the district is already on A3.
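During validation it helps to recompute the attendance measures outside Power BI against a raw export and compare. A sketch with fabricated sample data for one student's last 100 school days:

```python
# Sketch: recompute the AttendanceRate and IsChronicallyAbsent measures in
# plain Python to spot-check the Power BI numbers. `events` is sample data.
events = ["Present"] * 85 + ["Absent"] * 15

attendance_rate = events.count("Present") / len(events)
is_chronically_absent = attendance_rate < 0.90  # mirrors the DAX flag
print(attendance_rate, is_chronically_absent)  # 0.85 True
```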

Step 8: Early Warning Alert System Configuration

Configure automated alerts so that teachers, counselors, and administrators receive timely notifications when a student's risk level changes. Panorama provides built-in email notifications; additionally, set up a Power Automate workflow for SMS/Teams alerts and weekly digest reports for principals.

  • Panorama Native Alerts — In Panorama Admin > Notifications:
      • 'Student Enters High Risk' alert → Email to: assigned counselor + homeroom teacher
      • 'Attendance Below 85%' alert → Email to: attendance coordinator
      • 'New Failing Grade' alert → Email to: subject teacher + department head
  • Power Automate Workflow for Enhanced Alerts — Create a new Automated Cloud Flow in Power Automate:
      • Trigger: Scheduled (daily at 7:00 AM)
      • Step 1: Run a SQL query against Ed-Fi ODS or Power BI dataset
      • Step 2: For each student returned — Send an Adaptive Card to Microsoft Teams channel 'Student-Support-Team' with Title: '⚠️ New High-Risk Student Alert', Body: '{FirstName} {LastSurname} - Grade {GradeLevel} - {SchoolName}', Details: 'Risk factors: {AttendanceFlag}, {GradeFlag}, {AssessmentFlag}', Action Button: 'View in Panorama' → Link to student profile
      • Step 3: Send weekly digest email to principal — Aggregate all risk level changes for the week, include: # new High Risk, # improved (moved down), # unchanged, attach Power BI PDF export of weekly risk dashboard
Power Automate Step 1: SQL query to identify new high-risk students
sql
SELECT StudentUniqueId, FirstName, LastSurname, RiskLevel
FROM vw_StudentRiskScores
WHERE RiskLevel = 'High'
  AND RiskLevelChangedDate = CURRENT_DATE - 1
Custom SQL View for Risk Score Aggregation — deploy to PostgreSQL (Ed-Fi ODS)
bash
psql -h localhost -U edfi_admin -d EdFi_ODS << 'EOSQL'
CREATE OR REPLACE VIEW vw_StudentRiskScores AS
SELECT
  s.StudentUniqueId,
  s.FirstName,
  s.LastSurname,
  ssa.SchoolId,
  g.GradeLevel,
  ROUND(att.AttendanceRate * 100, 1) AS AttendancePct,
  grades.GPA,
  assess.LatestPercentile,
  CASE
    WHEN att.AttendanceRate < 0.80 OR grades.GPA < 1.5 OR assess.LatestPercentile < 10
      THEN 'High'
    WHEN att.AttendanceRate < 0.90 OR grades.FailingCount > 0 OR assess.LatestPercentile < 25
      THEN 'Moderate'
    ELSE 'Low'
  END AS RiskLevel
FROM edfi.Student s
JOIN edfi.StudentSchoolAssociation ssa ON s.StudentUSI = ssa.StudentUSI
LEFT JOIN LATERAL (
  -- Alias to GradeLevel so the outer SELECT's g.GradeLevel reference resolves
  SELECT GradeLevelDescriptorId AS GradeLevel FROM edfi.StudentSchoolAssociation
  WHERE StudentUSI = s.StudentUSI ORDER BY EntryDate DESC LIMIT 1
  ) g ON true
LEFT JOIN LATERAL (
  -- Calculate attendance rate
  SELECT
    COUNT(CASE WHEN ae.AttendanceEventCategoryDescriptorId IN (SELECT DescriptorId FROM edfi.Descriptor WHERE CodeValue = 'In Attendance') THEN 1 END)::float / NULLIF(COUNT(*), 0) AS AttendanceRate
  FROM edfi.StudentSchoolAttendanceEvent ae
  WHERE ae.StudentUSI = s.StudentUSI
    AND ae.EventDate >= CURRENT_DATE - INTERVAL '90 days'
  ) att ON true
LEFT JOIN LATERAL (
  -- Calculate GPA and failing count
  SELECT
    AVG(CASE WHEN NumericGradeEarned IS NOT NULL THEN NumericGradeEarned END) / 25.0 AS GPA,
    COUNT(CASE WHEN LetterGradeEarned IN ('D','F') THEN 1 END) AS FailingCount
  FROM edfi.Grade gr
  WHERE gr.StudentUSI = s.StudentUSI
  ) grades ON true
LEFT JOIN LATERAL (
  -- Get latest assessment percentile
  SELECT PercentileRank AS LatestPercentile
  FROM edfi.StudentAssessmentPerformanceLevel sapl
  JOIN edfi.StudentAssessment sa ON sa.StudentAssessmentIdentifier = sapl.StudentAssessmentIdentifier
  WHERE sa.StudentUSI = s.StudentUSI
  ORDER BY sa.AdministrationDate DESC LIMIT 1
  ) assess ON true;
EOSQL
Note

The SQL view above is a starting template — adjust field names and descriptor IDs to match your specific Ed-Fi ODS configuration. The risk thresholds (80% attendance, 1.5 GPA, 10th percentile) are based on research-backed early warning indicators from the National High School Center. Calibrate with the district after the first month of data. If Ed-Fi ODS is not deployed, Panorama's native alerts are sufficient — skip the SQL view and Power Automate steps.
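The weekly digest aggregation (Step 3 of the flow above) reduces to counting the direction of each risk-level change. A sketch; the `changes` records, tier ordering, and bucket names are illustrative assumptions:

```python
from collections import Counter

# Sketch of the weekly digest roll-up: classify each change as new high
# risk, improved, or unchanged, then count per bucket.
TIERS = ["Low", "Moderate", "High"]
changes = [
    {"student": "S1", "old": "Moderate", "new": "High"},
    {"student": "S2", "old": "High", "new": "Moderate"},
    {"student": "S3", "old": "Low", "new": "Low"},
]

def direction(change):
    delta = TIERS.index(change["new"]) - TIERS.index(change["old"])
    if change["new"] == "High" and delta > 0:
        return "new_high_risk"
    return "improved" if delta < 0 else "unchanged"

digest = Counter(direction(c) for c in changes)
print(dict(digest))  # {'new_high_risk': 1, 'improved': 1, 'unchanged': 1}
```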

Step 9: Teacher and Administrator Training

Deliver structured professional development (PD) sessions to ensure educators can effectively use the analytics platform to identify at-risk students and take action. Training should be differentiated: administrators need strategic dashboard interpretation and MTSS workflow training; teachers need classroom-level data interpretation and intervention planning skills.

  • Training Session 1: Administrator Overview (2 hours) — Audience: Principals, Assistant Principals, Counselors, District Admin
      • Agenda Item 1: Platform login and navigation (15 min)
      • Agenda Item 2: Understanding the Early Warning Dashboard (30 min) — Risk tiers explained: Low / Some Risk / At Risk / High Risk; Drill-down: school → grade → classroom → student
      • Agenda Item 3: Student Profile Deep-Dive (20 min) — Viewing attendance trends, grade history, assessment trajectory; Adding notes and intervention assignments
      • Agenda Item 4: MTSS Workflow in Panorama (30 min) — Tier 1/2/3 intervention assignment; Progress monitoring and outcome tracking
      • Agenda Item 5: Power BI Custom Reports (15 min) — Accessing school-board-ready reports; Exporting PDF/PPT for presentations
      • Agenda Item 6: Q&A and Hands-on Practice (10 min)
  • Training Session 2: Teacher Workshop (90 min) — Audience: All classroom teachers (can be by department/grade)
      • Agenda Item 1: Logging in via Clever SSO (5 min)
      • Agenda Item 2: My Classroom Dashboard (20 min) — Viewing your students' risk levels; Color-coded indicators: what red/yellow/green means
      • Agenda Item 3: Understanding Assessment Data (20 min) — Star/MAP scores: what percentile and growth (SGP) mean; Identifying specific learning gaps by standard
      • Agenda Item 4: Taking Action: Intervention Logging (20 min) — Recording interventions you're already doing; Requesting Tier 2/3 support through the platform
      • Agenda Item 5: Data-Driven Instructional Adjustments (15 min) — Using gap analysis to plan differentiated lessons
      • Agenda Item 6: Quick Reference Guide Distribution (10 min)
Note

Schedule training during existing PD days — do not ask teachers to attend additional after-school sessions if possible. Create a 1-page Quick Reference Guide (laminated) with: how to login, how to read the risk dashboard, what to do if a student is flagged red. Record all training sessions for async viewing. Plan a follow-up 'Data Chat' session 4–6 weeks after go-live to address real-world questions once teachers have used the platform with their actual students.

Step 10: Pilot Launch & Data Validation

Launch the solution with a pilot group of 2–3 schools or specific grade levels (recommended: one elementary, one middle, one high school). Validate that data flows correctly from SIS through Clever to Panorama, that risk scores align with educator intuition, and that alerts fire correctly. Run a structured 3–4 week pilot before district-wide rollout.

Data Validation Queries
bash
# Student count reconciliation, assessment spot-check, risk score
# validation, and alert delivery test

# --- Data Validation Queries ---

# 1. Student Count Reconciliation
# Compare SIS total vs. Clever sync vs. Panorama imported
echo 'SIS Student Count:' && curl -s 'https://{powerschool-url}/ws/v1/district/student/count' -H 'Authorization: Bearer {token}'
echo 'Clever Student Count:' && curl -s 'https://api.clever.com/v3.0/students?count=true' -H 'Authorization: Bearer {clever_token}'
# Panorama: Check Admin Dashboard > Data > Import Summary

# 2. Assessment Data Spot-Check
# Pull 10 random students and verify their Star scores in Panorama match Star platform
psql -h localhost -U edfi_admin -d EdFi_ODS -c "
  SELECT s.StudentUniqueId, s.FirstName, s.LastSurname,
    sa.AdministrationDate, sasr.Result
  FROM edfi.Student s
  JOIN edfi.StudentAssessment sa ON s.StudentUSI = sa.StudentUSI
  JOIN edfi.StudentAssessmentScoreResult sasr ON sa.StudentAssessmentIdentifier = sasr.StudentAssessmentIdentifier
  ORDER BY RANDOM() LIMIT 10;
"

# 3. Risk Score Validation
# Export Panorama's at-risk list and compare with counselors' existing watch lists
# In Panorama: Reports > Early Warning > Export CSV
# Compare against the counselor's manually-maintained spreadsheet

# 4. Alert Delivery Test
# Manually trigger a test alert by temporarily lowering a test student's threshold
# Verify email delivery to the designated teacher within 15 minutes
Note

The most common issues during pilot: (1) Student ID mismatches between SIS and assessment platform — resolve by mapping alternate IDs in Clever; (2) Historical data not importing fully — work with Panorama CSM to re-run historical import; (3) Teachers confused by unfamiliar assessment metrics — prepare a 'Reading Your Dashboard' cheat sheet. The pilot is also the time to calibrate risk thresholds — if >40% of students are flagged 'At Risk,' the thresholds are too sensitive and need adjustment. Target: 15–25% of students flagged as some level of risk.
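The 15–25% calibration target is easy to check programmatically against the exported risk list. The sketch below assumes a `risk_tier` column containing the tier labels used in this guide — adjust the column and label names to match the platform's actual CSV export.

```python
import pandas as pd

FLAGGED = ('Some Risk', 'At Risk', 'High Risk')

def flagged_share(df, tier_col='risk_tier'):
    """Fraction of students carrying any risk flag."""
    return df[tier_col].isin(FLAGGED).mean()

# Toy export: 2 of 4 students flagged -> 0.50, well above the 15-25% target
sample = pd.DataFrame({'risk_tier': ['Low', 'High Risk', 'Low', 'At Risk']})
print(f'{flagged_share(sample):.0%} of students flagged')  # → 50% of students flagged
```

Run this after each threshold adjustment during the pilot so calibration decisions are based on the full export rather than dashboard spot-checks.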

Step 11: Full District Rollout & Go-Live

After a successful pilot validation, extend the deployment to all schools and grade levels in the district. Activate all remaining data feeds, provision all teacher and admin accounts, enable full alerting, and publish the Power BI dashboards to all authorized users. Communicate the launch to all staff via the district's official channels.

  • Clever: Extend sync to all schools — In Clever Dashboard > Schools > Enable remaining schools. Trigger a full re-sync: Clever Dashboard > Data > Sync Now
  • Panorama: Activate all schools — Contact Panorama CSM to expand scope from pilot schools to full district. Verify data import completion for all schools (Admin > Data > Import History)
  • Power BI: Publish district-wide dashboards — In Power BI Service: (1) Publish updated .pbix file with all schools included, (2) Update RLS roles to include all principals and teachers, (3) Create Power BI App: 'Student Success Analytics' with Audience: All Staff, including dashboards: At-Risk Overview, Learning Gap Analysis, Attendance Trends, (4) Share app link via email and embed in Teams 'Analytics' channel
  • Send Go-Live Communication — Subject: Student Success Analytics Platform Now Available. To: All-Staff distribution list. Include: Clever login link (https://clever.com/in/{district-slug}), Quick Reference Guide (PDF attachment), training video links (recorded from PD sessions), help desk contact ({msp-helpdesk-email} or ext. {phone}), and FAQ document link
Note

Plan go-live for a Monday or Tuesday early in a grading period — avoid launching right before report cards, standardized testing windows, or school breaks. Have MSP support staff available on-site at each school for the first 2 days of go-live for real-time troubleshooting and hallway coaching. Monitor Panorama login analytics in the first week — if fewer than 50% of teachers have logged in after 5 days, escalate to building principals for additional encouragement.
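The 50%-in-5-days adoption check can be computed per school from a login export joined to the staff roster. Column names (`email`, `school`) below are illustrative — map them to whatever the Panorama login-analytics export actually provides.

```python
import pandas as pd

def adoption_by_school(logins, roster):
    """Percent of rostered teachers who have logged in at least once, per school."""
    merged = roster.merge(logins.drop_duplicates('email'), on='email',
                          how='left', indicator=True)
    merged['logged_in'] = merged['_merge'] == 'both'
    return merged.groupby('school')['logged_in'].mean().mul(100).round(1)

# Toy data: one of two Elm ES teachers has logged in; the Oak MS teacher has not
roster = pd.DataFrame({'email': ['a@district.org', 'b@district.org', 'c@district.org'],
                       'school': ['Elm ES', 'Elm ES', 'Oak MS']})
logins = pd.DataFrame({'email': ['a@district.org']})
print(adoption_by_school(logins, roster))  # Elm ES: 50.0, Oak MS: 0.0 -> escalate Oak MS
```

Sorting the result ascending gives the escalation list for principal follow-up.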

Custom AI Components

Student Risk Prediction Model

Type: workflow. A custom Python-based machine learning pipeline that supplements Panorama's built-in early warning system with a more granular, locally trained predictive model. This model uses historical, district-specific data to predict the probability that a student will fail one or more courses or become chronically absent within the next grading period. It is particularly valuable for districts that want to own their model and data, or for tutoring organizations not large enough to justify a Panorama license.

Implementation:

student_risk_model.py — Student At-Risk Prediction Pipeline
python
# student_risk_model.py
# Student At-Risk Prediction Pipeline
# Requirements: pip install pandas scikit-learn sqlalchemy psycopg2-binary joblib

import pandas as pd
import numpy as np
from sqlalchemy import create_engine
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.metrics import classification_report, roc_auc_score
from sklearn.pipeline import Pipeline
import joblib
import logging
from datetime import datetime

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# --- Configuration ---
DB_CONNECTION = 'postgresql://edfi_admin:{password}@{server}:5432/EdFi_ODS'
MODEL_OUTPUT_PATH = '/opt/student-analytics/models/'
PREDICTION_OUTPUT_PATH = '/opt/student-analytics/predictions/'

# Risk prediction features
FEATURES = [
    'attendance_rate_90d',      # Attendance rate over last 90 days
    'attendance_rate_30d',      # Attendance rate over last 30 days
    'attendance_trend',         # Slope of weekly attendance over 90 days
    'gpa_current',              # Current cumulative GPA
    'gpa_previous_term',        # Previous term GPA
    'gpa_trend',                # GPA change from previous term
    'failing_course_count',     # Number of courses with D or F
    'assessment_percentile',    # Latest standardized assessment percentile
    'assessment_growth',        # Student Growth Percentile (SGP)
    'discipline_count_90d',     # Discipline incidents in last 90 days
    'grade_level_numeric',      # Numeric grade level (K=0, 1-12)
    'days_enrolled',            # Days enrolled in current school
    'course_load',              # Number of active course sections
    'is_sped',                  # Special education flag (0/1)
    'is_ell',                   # English Language Learner flag (0/1)
    'is_frl',                   # Free/Reduced Lunch flag (0/1)
]

TARGET = 'became_at_risk_next_period'  # Binary: 1 = student failed a course or became chronically absent

def extract_features(engine, as_of_date=None):
    """Extract student features from Ed-Fi ODS."""
    if as_of_date is None:
        as_of_date = datetime.now().strftime('%Y-%m-%d')
    
    query = f"""
    WITH student_base AS (
        SELECT
            s.StudentUSI,
            s.StudentUniqueId,
            s.FirstName,
            s.LastSurname,
            ssa.SchoolId,
            ssa.EntryDate,
            ('{as_of_date}'::date - ssa.EntryDate) AS days_enrolled,
            COALESCE(
                (SELECT COUNT(*) FROM edfi.StudentSectionAssociation sect
                 WHERE sect.StudentUSI = s.StudentUSI
                 AND sect.EndDate IS NULL), 0
            ) AS course_load
        FROM edfi.Student s
        JOIN edfi.StudentSchoolAssociation ssa ON s.StudentUSI = ssa.StudentUSI
        WHERE ssa.ExitWithdrawDate IS NULL
    ),
    attendance_90d AS (
        SELECT
            StudentUSI,
            COUNT(CASE WHEN AttendanceEventCategoryDescriptorId IN (
                SELECT DescriptorId FROM edfi.Descriptor WHERE CodeValue = 'In Attendance'
            ) THEN 1 END)::float / NULLIF(COUNT(*), 0) AS attendance_rate_90d
        FROM edfi.StudentSchoolAttendanceEvent
        WHERE EventDate >= '{as_of_date}'::date - INTERVAL '90 days'
          AND EventDate <= '{as_of_date}'::date
        GROUP BY StudentUSI
    ),
    attendance_30d AS (
        SELECT
            StudentUSI,
            COUNT(CASE WHEN AttendanceEventCategoryDescriptorId IN (
                SELECT DescriptorId FROM edfi.Descriptor WHERE CodeValue = 'In Attendance'
            ) THEN 1 END)::float / NULLIF(COUNT(*), 0) AS attendance_rate_30d
        FROM edfi.StudentSchoolAttendanceEvent
        WHERE EventDate >= '{as_of_date}'::date - INTERVAL '30 days'
          AND EventDate <= '{as_of_date}'::date
        GROUP BY StudentUSI
    ),
    grades_current AS (
        SELECT
            StudentUSI,
            AVG(NumericGradeEarned) / 25.0 AS gpa_current,
            COUNT(CASE WHEN LetterGradeEarned IN ('D','F') THEN 1 END) AS failing_course_count
        FROM edfi.Grade
        WHERE GradingPeriodDescriptorId = (
            SELECT DescriptorId FROM edfi.Descriptor
            WHERE CodeValue = 'First Semester' -- Adjust per grading period
            LIMIT 1
        )
        GROUP BY StudentUSI
    ),
    latest_assessment AS (
        SELECT DISTINCT ON (sa.StudentUSI)
            sa.StudentUSI,
            sasr.Result::float AS assessment_percentile
        FROM edfi.StudentAssessment sa
        JOIN edfi.StudentAssessmentScoreResult sasr
            ON sa.StudentAssessmentIdentifier = sasr.StudentAssessmentIdentifier
        WHERE sasr.AssessmentReportingMethodDescriptorId = (
            SELECT DescriptorId FROM edfi.Descriptor
            WHERE CodeValue = 'Percentile' LIMIT 1
        )
        ORDER BY sa.StudentUSI, sa.AdministrationDate DESC
    ),
    discipline AS (
        SELECT
            StudentUSI,
            COUNT(*) AS discipline_count_90d
        FROM edfi.DisciplineAction
        WHERE DisciplineDate >= '{as_of_date}'::date - INTERVAL '90 days'
        GROUP BY StudentUSI
    )
    SELECT
        sb.StudentUSI,
        sb.StudentUniqueId,
        sb.FirstName,
        sb.LastSurname,
        sb.SchoolId,
        COALESCE(a90.attendance_rate_90d, 1.0) AS attendance_rate_90d,
        COALESCE(a30.attendance_rate_30d, 1.0) AS attendance_rate_30d,
        COALESCE(a30.attendance_rate_30d, 1.0) - COALESCE(a90.attendance_rate_90d, 1.0) AS attendance_trend,
        COALESCE(gc.gpa_current, 2.5) AS gpa_current,
        2.5 AS gpa_previous_term,  -- Placeholder: join to prior grading period
        COALESCE(gc.gpa_current, 2.5) - 2.5 AS gpa_trend,
        COALESCE(gc.failing_course_count, 0) AS failing_course_count,
        COALESCE(la.assessment_percentile, 50.0) AS assessment_percentile,
        50.0 AS assessment_growth,  -- Placeholder: calculate SGP from consecutive assessments
        COALESCE(d.discipline_count_90d, 0) AS discipline_count_90d,
        sb.days_enrolled,
        sb.course_load
    FROM student_base sb
    LEFT JOIN attendance_90d a90 ON sb.StudentUSI = a90.StudentUSI
    LEFT JOIN attendance_30d a30 ON sb.StudentUSI = a30.StudentUSI
    LEFT JOIN grades_current gc ON sb.StudentUSI = gc.StudentUSI
    LEFT JOIN latest_assessment la ON sb.StudentUSI = la.StudentUSI
    LEFT JOIN discipline d ON sb.StudentUSI = d.StudentUSI;
    """
    
    df = pd.read_sql(query, engine)
    logger.info(f'Extracted features for {len(df)} students as of {as_of_date}')
    return df

def train_model(engine):
    """Train the risk prediction model on historical data."""
    logger.info('Starting model training...')
    
    # Extract historical features (would need labeled historical data)
    df = extract_features(engine)
    
    # For initial deployment, use rule-based labeling as proxy:
    # A student is 'at risk' if attendance < 90% OR GPA < 2.0 OR failing_course_count > 0
    df['became_at_risk_next_period'] = (
        (df['attendance_rate_90d'] < 0.90) |
        (df['gpa_current'] < 2.0) |
        (df['failing_course_count'] > 0)
    ).astype(int)
    
    feature_cols = [
        'attendance_rate_90d', 'attendance_rate_30d', 'attendance_trend',
        'gpa_current', 'gpa_trend', 'failing_course_count',
        'assessment_percentile', 'assessment_growth',
        'discipline_count_90d', 'days_enrolled', 'course_load'
    ]
    
    X = df[feature_cols].fillna(0)
    y = df['became_at_risk_next_period']
    
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42, stratify=y
    )
    
    pipeline = Pipeline([
        ('scaler', StandardScaler()),
        ('classifier', GradientBoostingClassifier(
            n_estimators=200,
            max_depth=4,
            learning_rate=0.1,
            min_samples_leaf=20,
            subsample=0.8,
            random_state=42
        ))
    ])
    
    # Cross-validation
    cv_scores = cross_val_score(pipeline, X_train, y_train, cv=5, scoring='roc_auc')
    logger.info(f'Cross-validation AUC: {cv_scores.mean():.3f} (+/- {cv_scores.std():.3f})')
    
    # Final training
    pipeline.fit(X_train, y_train)
    
    # Evaluation
    y_pred = pipeline.predict(X_test)
    y_proba = pipeline.predict_proba(X_test)[:, 1]
    
    logger.info(f'Test AUC: {roc_auc_score(y_test, y_proba):.3f}')
    logger.info(f'\n{classification_report(y_test, y_pred)}')
    
    # Feature importance
    importances = pipeline.named_steps['classifier'].feature_importances_
    for feat, imp in sorted(zip(feature_cols, importances), key=lambda x: -x[1]):
        logger.info(f'  {feat}: {imp:.3f}')
    
    # Save model (dated copy plus the 'latest' pointer that predict_risk loads)
    model_path = f'{MODEL_OUTPUT_PATH}risk_model_{datetime.now().strftime("%Y%m%d")}.pkl'
    joblib.dump(pipeline, model_path)
    joblib.dump(pipeline, f'{MODEL_OUTPUT_PATH}risk_model_latest.pkl')
    joblib.dump(feature_cols, f'{MODEL_OUTPUT_PATH}feature_cols.pkl')
    logger.info(f'Model saved to {model_path}')
    
    return pipeline, feature_cols

def predict_risk(engine, model=None, feature_cols=None):
    """Generate risk predictions for all current students."""
    if model is None:
        model = joblib.load(f'{MODEL_OUTPUT_PATH}risk_model_latest.pkl')
        feature_cols = joblib.load(f'{MODEL_OUTPUT_PATH}feature_cols.pkl')
    
    df = extract_features(engine)
    X = df[feature_cols].fillna(0)
    
    df['risk_probability'] = model.predict_proba(X)[:, 1]
    df['risk_tier'] = pd.cut(
        df['risk_probability'],
        bins=[0, 0.25, 0.50, 0.75, 1.0],
        labels=['Low', 'Some Risk', 'At Risk', 'High Risk'],
        include_lowest=True  # a probability of exactly 0.0 still bins as 'Low'
    )
    
    # Output predictions
    output_file = f'{PREDICTION_OUTPUT_PATH}risk_scores_{datetime.now().strftime("%Y%m%d")}.csv'
    df[['StudentUniqueId', 'FirstName', 'LastSurname', 'SchoolId',
        'risk_probability', 'risk_tier',
        'attendance_rate_90d', 'gpa_current', 'failing_course_count',
        'assessment_percentile', 'discipline_count_90d'
    ]].to_csv(output_file, index=False)
    
    logger.info(f'Predictions saved to {output_file}')
    logger.info(f'Risk distribution:\n{df["risk_tier"].value_counts()}')
    
    return df

if __name__ == '__main__':
    engine = create_engine(DB_CONNECTION)
    model, feature_cols = train_model(engine)
    predictions = predict_risk(engine, model, feature_cols)
    print(f'\nTop 20 Highest Risk Students:')
    print(predictions.nlargest(20, 'risk_probability')[[
        'StudentUniqueId', 'FirstName', 'LastSurname',
        'risk_probability', 'risk_tier'
    ]].to_string(index=False))

Deployment: Schedule via cron to run weekly:

Weekly cron schedule for risk model execution
cron
0 3 * * 1 cd /opt/student-analytics && python3 student_risk_model.py >> /var/log/student-risk.log 2>&1
Note

Model Retraining: Retrain at the start of each semester when new labeled outcome data (final grades, end-of-year status) is available. Replace the proxy labels with actual outcomes for improved accuracy.
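Concretely, retraining means joining a feature snapshot taken at the start of the term (via `extract_features(engine, as_of_date=...)` above) with the outcomes observed at the end of that term. The outcome column names below (`failed_any_course`, `chronically_absent`) are illustrative — derive them from final grades and attendance however your district defines them.

```python
import pandas as pd

def build_labeled_training_set(features_start_of_term, outcomes_end_of_term):
    """Join a start-of-term feature snapshot with observed end-of-term outcomes."""
    labeled = features_start_of_term.merge(outcomes_end_of_term,
                                           on='StudentUSI', how='inner')
    labeled['became_at_risk_next_period'] = (
        (labeled['failed_any_course'] == 1) |
        (labeled['chronically_absent'] == 1)
    ).astype(int)
    return labeled

# Toy example: student 1 failed a course, student 2 finished the term on track
features = pd.DataFrame({'StudentUSI': [1, 2], 'attendance_rate_90d': [0.88, 0.97]})
outcomes = pd.DataFrame({'StudentUSI': [1, 2],
                         'failed_any_course': [1, 0],
                         'chronically_absent': [0, 0]})
labeled = build_labeled_training_set(features, outcomes)
print(labeled['became_at_risk_next_period'].tolist())  # → [1, 0]
```

Feeding this labeled set into `train_model` in place of the rule-based proxy is what turns the pipeline into a genuine predictor rather than a restatement of the flagging rules.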

Learning Gap Analyzer

Type: agent. An AI agent that analyzes assessment results at the learning-standard level to identify specific skill gaps for individual students and classroom cohorts. It uses the OpenAI GPT-5.4 API to generate natural-language recommendations for teachers, translating raw assessment data into actionable instructional adjustments. This component bridges the gap between data and classroom practice.

Implementation

learning_gap_agent.py
python
# Learning Gap Analysis Agent using OpenAI GPT-5.4

# learning_gap_agent.py
# Learning Gap Analysis Agent using OpenAI GPT-5.4
# Requirements: pip install openai pandas sqlalchemy psycopg2-binary

import os
import json
import pandas as pd
from sqlalchemy import create_engine
from openai import OpenAI
from datetime import datetime
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

# --- Configuration ---
OPENAI_API_KEY = os.environ.get('OPENAI_API_KEY', '{your-api-key}')
DB_CONNECTION = os.environ.get('EDFI_DB_URL', 'postgresql://edfi_admin:{password}@{server}:5432/EdFi_ODS')

client = OpenAI(api_key=OPENAI_API_KEY)

SYSTEM_PROMPT = """You are an expert educational data analyst and instructional coach.
You analyze student assessment data to identify specific learning gaps and provide
actionable, teacher-friendly recommendations.

Rules:
1. Always reference specific learning standards (e.g., CCSS.MATH.4.NBT.5) when identifying gaps.
2. Prioritize gaps by impact: focus on foundational skills that block future learning.
3. Provide 2-3 specific, practical instructional strategies for each gap.
4. Use encouraging, strengths-based language (highlight what students CAN do).
5. For classroom-level analysis, identify patterns and suggest small-group configurations.
6. Never include student names in any output that could be shared beyond the teacher.
7. Format output as structured markdown for easy reading.
8. All recommendations must be grade-level appropriate and aligned to standards.
"""

from sqlalchemy import text  # bound parameters guard against SQL injection from CLI args

def get_student_assessment_data(engine, student_id):
    """Pull detailed assessment results by learning standard for a student."""
    query = text("""
    SELECT
        a.AssessmentTitle,
        a.AssessedGradeLevelDescriptorId,
        sa.AdministrationDate,
        oa.IdentificationCode AS standard_code,
        oa.Description AS standard_description,
        saoa.Result AS score,
        saoa.ResultDatatypeTypeDescriptorId
    FROM edfi.StudentAssessment sa
    JOIN edfi.Assessment a ON sa.AssessmentIdentifier = a.AssessmentIdentifier
    LEFT JOIN edfi.StudentAssessmentStudentObjectiveAssessment saoa
        ON sa.StudentAssessmentIdentifier = saoa.StudentAssessmentIdentifier
    LEFT JOIN edfi.ObjectiveAssessment oa
        ON saoa.IdentificationCode = oa.IdentificationCode
        AND saoa.AssessmentIdentifier = oa.AssessmentIdentifier
    WHERE sa.StudentUniqueId = :student_id
    ORDER BY sa.AdministrationDate DESC, oa.IdentificationCode;
    """)
    return pd.read_sql(query, engine, params={'student_id': student_id})

def get_classroom_assessment_data(engine, section_id):
    """Pull assessment results for all students in a section."""
    query = text("""
    SELECT
        s.StudentUniqueId,
        s.FirstName,
        a.AssessmentTitle,
        sa.AdministrationDate,
        oa.IdentificationCode AS standard_code,
        oa.Description AS standard_description,
        saoa.Result AS score
    FROM edfi.StudentSectionAssociation sect
    JOIN edfi.Student s ON sect.StudentUSI = s.StudentUSI
    JOIN edfi.StudentAssessment sa ON s.StudentUSI = sa.StudentUSI
    JOIN edfi.Assessment a ON sa.AssessmentIdentifier = a.AssessmentIdentifier
    LEFT JOIN edfi.StudentAssessmentStudentObjectiveAssessment saoa
        ON sa.StudentAssessmentIdentifier = saoa.StudentAssessmentIdentifier
    LEFT JOIN edfi.ObjectiveAssessment oa
        ON saoa.IdentificationCode = oa.IdentificationCode
    WHERE sect.SectionIdentifier = :section_id
      AND sect.EndDate IS NULL
    ORDER BY oa.IdentificationCode, s.StudentUniqueId;
    """)
    return pd.read_sql(query, engine, params={'section_id': section_id})

def analyze_student_gaps(student_id, engine):
    """Generate individual student learning gap analysis."""
    df = get_student_assessment_data(engine, student_id)
    if df.empty:
        return 'No assessment data available for this student.'
    
    # Prepare data summary for the LLM
    data_summary = df.to_dict(orient='records')
    
    user_prompt = f"""Analyze the following assessment data for Student ID: {student_id}.
Identify specific learning gaps by standard, prioritize them, and provide
actionable instructional recommendations.

Assessment Data:
{json.dumps(data_summary, indent=2, default=str)}

Please provide:
1. **Strengths Summary**: What standards/skills has this student mastered?
2. **Priority Learning Gaps**: Ranked list of gaps with specific standards
3. **Recommended Interventions**: 2-3 specific strategies per gap
4. **Suggested Resources**: Specific activities, websites, or materials
5. **Progress Monitoring Plan**: How to track improvement on each gap
"""
    
    response = client.chat.completions.create(
        model='gpt-5.4',
        messages=[
            {'role': 'system', 'content': SYSTEM_PROMPT},
            {'role': 'user', 'content': user_prompt}
        ],
        temperature=0.3,
        max_tokens=2000
    )
    
    return response.choices[0].message.content

def analyze_classroom_gaps(section_id, engine):
    """Generate classroom-level learning gap analysis with grouping suggestions."""
    df = get_classroom_assessment_data(engine, section_id)
    if df.empty:
        return 'No assessment data available for this section.'
    
    # Aggregate by standard
    standard_summary = df.groupby(['standard_code', 'standard_description']).agg(
        avg_score=('score', lambda x: pd.to_numeric(x, errors='coerce').mean()),
        student_count=('StudentUniqueId', 'nunique'),
        below_proficient=('score', lambda x: (pd.to_numeric(x, errors='coerce') < 70).sum())
    ).reset_index()
    
    # Student-level summary (anonymized for LLM)
    student_summary = df.groupby('StudentUniqueId').apply(
        lambda g: {
            'student_code': f"Student_{str(g.name)[-4:]}",  # last 4 chars of ID as anonymized code
            'standards_below_proficient': g[pd.to_numeric(g['score'], errors='coerce') < 70]['standard_code'].tolist(),
            'standards_proficient': g[pd.to_numeric(g['score'], errors='coerce') >= 70]['standard_code'].tolist()
        }
    ).tolist()
    
    user_prompt = f"""Analyze the following classroom assessment data for Section: {section_id}.

Standard-Level Summary (aggregated across all students):
{standard_summary.to_string(index=False)}

Student-Level Gap Patterns (anonymized):
{json.dumps(student_summary, indent=2, default=str)}

Please provide:
1. **Class-Wide Strengths**: Standards where most students are proficient
2. **Class-Wide Gaps**: Standards where >30% of students are below proficient
3. **Small Group Recommendations**: Suggest 3-4 skill-based groups with student codes
4. **Whole-Class Reteaching Priorities**: Top 2 standards to reteach to everyone
5. **Differentiated Instruction Plan**: Specific activities for each group
6. **Timeline**: Suggested 2-week plan to address top gaps
"""
    
    response = client.chat.completions.create(
        model='gpt-5.4',
        messages=[
            {'role': 'system', 'content': SYSTEM_PROMPT},
            {'role': 'user', 'content': user_prompt}
        ],
        temperature=0.3,
        max_tokens=3000
    )
    
    return response.choices[0].message.content

def generate_weekly_gap_report(engine, school_id):
    """Generate a weekly school-level learning gap summary for the principal."""
    query = f"""
    SELECT
        oa.IdentificationCode AS standard_code,
        oa.Description AS standard_description,
        COUNT(DISTINCT sa.StudentUSI) AS students_assessed,
        AVG(saoa.Result::numeric) AS avg_score,
        COUNT(CASE WHEN saoa.Result::numeric < 70 THEN 1 END) AS below_proficient_count,
        ROUND(COUNT(CASE WHEN saoa.Result::numeric < 70 THEN 1 END)::numeric / 
              NULLIF(COUNT(DISTINCT sa.StudentUSI), 0) * 100, 1) AS pct_below_proficient
    FROM edfi.StudentAssessment sa
    JOIN edfi.StudentSchoolAssociation ssa ON sa.StudentUSI = ssa.StudentUSI
    JOIN edfi.StudentAssessmentStudentObjectiveAssessment saoa
        ON sa.StudentAssessmentIdentifier = saoa.StudentAssessmentIdentifier
    JOIN edfi.ObjectiveAssessment oa
        ON saoa.IdentificationCode = oa.IdentificationCode
    WHERE ssa.SchoolId = {school_id}
      AND sa.AdministrationDate >= CURRENT_DATE - INTERVAL '30 days'
    GROUP BY oa.IdentificationCode, oa.Description
    HAVING COUNT(DISTINCT sa.StudentUSI) >= 5
    ORDER BY pct_below_proficient DESC;
    """
    df = pd.read_sql(query, engine)
    
    user_prompt = f"""Create a brief weekly learning gap summary for a school principal.

School-wide Assessment Gap Data (last 30 days):
{df.to_string(index=False)}

Provide:
1. **Executive Summary**: 2-3 sentences on overall school performance
2. **Top 5 Priority Standards**: Standards with highest percentage below proficient
3. **Recommended School-Wide Actions**: 2-3 actionable next steps
4. **Bright Spots**: Any standards showing strong performance to celebrate

Keep it concise — principals have 2 minutes to read this.
"""
    
    response = client.chat.completions.create(
        model='gpt-5.4',
        messages=[
            {'role': 'system', 'content': SYSTEM_PROMPT},
            {'role': 'user', 'content': user_prompt}
        ],
        temperature=0.3,
        max_tokens=1000
    )
    
    return response.choices[0].message.content

# --- CLI Interface ---
if __name__ == '__main__':
    import argparse
    parser = argparse.ArgumentParser(description='Learning Gap Analyzer')
    parser.add_argument('--mode', choices=['student', 'classroom', 'school'], required=True)
    parser.add_argument('--id', required=True, help='Student ID, Section ID, or School ID')
    args = parser.parse_args()
    
    engine = create_engine(DB_CONNECTION)
    
    if args.mode == 'student':
        result = analyze_student_gaps(args.id, engine)
    elif args.mode == 'classroom':
        result = analyze_classroom_gaps(args.id, engine)
    elif args.mode == 'school':
        result = generate_weekly_gap_report(engine, int(args.id))
    
    print(result)

Usage Examples

CLI usage examples for student, classroom, and school-level analysis modes
bash
# Individual student gap analysis
python3 learning_gap_agent.py --mode student --id 'STU12345'

# Classroom-level analysis with grouping suggestions
python3 learning_gap_agent.py --mode classroom --id 'SEC-MATH-4A'

# Weekly school principal summary
python3 learning_gap_agent.py --mode school --id 12345
Note

Cost Estimate: At ~$5/1M input tokens and ~$15/1M output tokens for GPT-5.4, a weekly run for a 2,000-student district analyzing 80 classrooms costs approximately $2–$5/week (~$10–$20/month). Schedule via cron or trigger after assessment data imports.
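The weekly figure can be sanity-checked from the stated rates. The per-call token counts below are assumptions for illustration; measure your actual counts from the API response's usage field.

```python
def weekly_cost(calls, in_tokens, out_tokens, in_rate=5.0, out_rate=15.0):
    """Estimated USD per weekly run, with rates in dollars per million tokens."""
    return calls * (in_tokens * in_rate + out_tokens * out_rate) / 1_000_000

# 80 classroom analyses at ~4,000 input / ~2,500 output tokens per call
print(f'${weekly_cost(80, 4000, 2500):.2f}/week')  # → $4.60/week
```

That lands inside the $2–$5/week range above; re-run the calculation after any change to prompt length or `max_tokens`.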

Warning

Privacy Note: Student names are anonymized before being sent to the OpenAI API; only student codes (the last 4 characters of the ID) are transmitted. Confirm that your OpenAI API traffic is excluded from model training (the default for API usage as of 2024) and document this handling in your DPA.

Automated Intervention Recommender Prompt

Type: prompt. A carefully engineered system prompt used within the Panorama platform's notes feature or as a standalone GPT-based tool for counselors and intervention specialists. It generates evidence-based intervention recommendations from a student's risk profile, following MTSS (Multi-Tiered System of Supports) frameworks.

Implementation:

MTSS Intervention Recommender — System Prompt

## Role
You are an MTSS (Multi-Tiered System of Supports) intervention specialist assistant. You help school counselors, intervention specialists, and teachers select evidence-based interventions for at-risk students based on their specific risk profile.

## Input Format
You will receive a student risk profile in this format:
- Grade Level: [K-12]
- Risk Factors: [list of active flags]
- Attendance Rate: [percentage]
- Current GPA: [number]
- Failing Courses: [list]
- Assessment Scores: [subject: percentile]
- Behavior Incidents: [count and type]
- Current Tier: [1, 2, or 3]
- Demographics: [ELL, SPED, FRL status — for culturally responsive recommendations]
- Previous Interventions Tried: [list, if any]

## Output Requirements

### 1. Risk Factor Analysis (2-3 sentences)
Briefly explain what the data tells us about this student's situation. Use empathetic, student-centered language. Never blame the student or family.

### 2. Recommended Tier of Support
Based on the risk profile, recommend Tier 1 (universal), Tier 2 (targeted), or Tier 3 (intensive). Justify the recommendation.

### 3. Evidence-Based Interventions (3-5 recommendations)
For each intervention:
- **Name**: Official intervention program name
- **Evidence Base**: Citation from What Works Clearinghouse (WWC) or similar
- **Description**: 1-2 sentences on what it involves
- **Frequency**: How often (e.g., 3x/week, 30 min sessions)
- **Duration**: How long to run before evaluating (e.g., 6-8 weeks)
- **Who Delivers**: Teacher, counselor, specialist, peer tutor
- **Progress Monitoring**: What metric to track and how often

### 4. Quick Wins (1-2 suggestions)
Immediate, low-effort actions the teacher can take THIS WEEK.

### 5. Family Engagement Recommendation
One specific suggestion for involving the family in supporting the student. Be culturally sensitive and avoid assumptions about family capacity.

### 6. Follow-Up Timeline
Recommend a check-in schedule:
- Weekly data review
- Formal progress monitoring every [X] weeks
- Intervention team meeting in [X] weeks to evaluate

## Constraints
- Only recommend interventions rated "Promising" or higher by WWC.
- For ELL students, prioritize interventions validated for multilingual learners.
- For SPED students, note that interventions must align with IEP goals.
- Never recommend reducing services or removing a student from supports.
- All recommendations must be implementable by a typical classroom teacher or school counselor without specialized clinical training (unless Tier 3).
- Keep total output under 500 words for counselor usability.

## Example Usage
**User Input:**
Grade Level: 7
Risk Factors: Chronic Absence, Declining Grades
Attendance Rate: 82%
Current GPA: 1.8
Failing Courses: Math 7, Science 7
Assessment Scores: Math: 18th percentile, Reading: 42nd percentile
Behavior Incidents: 1 (tardy referral)
Current Tier: 1 (no interventions yet)
Demographics: FRL
Previous Interventions: None

**Expected Output:**
[Agent generates structured MTSS recommendation following the template above]

Deployment: This prompt can be:

1. Used in a custom GPT (OpenAI ChatGPT Teams/Enterprise) accessible to counselors
2. Embedded in a simple Flask/FastAPI web app with a form interface
3. Integrated into Panorama's student profile notes via a bookmarklet or browser extension
4. Used via the OpenAI API in the learning_gap_agent.py script above

Cost: Negligible — each recommendation call uses ~1,500 tokens input + ~800 tokens output ≈ $0.02 per student recommendation at GPT-5.4 pricing.

SIS-to-Analytics Data Sync Monitor

Type: integration. A monitoring integration that runs as a scheduled task to verify data synchronization between the SIS (via Clever), the Ed-Fi ODS, and Panorama Education. It checks record counts, identifies sync failures, and sends alerts to the MSP's ticketing system (e.g., ConnectWise, Autotask, Halo PSA) when discrepancies are detected. This is the MSP's primary tool for proactive managed-service delivery.

Implementation

sync_monitor.py — SIS-to-Analytics Data Sync Health Monitor
python
# sync_monitor.py
# SIS-to-Analytics Data Sync Health Monitor
# Requirements: pip install requests psycopg2-binary

import requests
import psycopg2
import json
import os
import logging
from datetime import datetime, timedelta

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('/var/log/sync-monitor.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

# --- Configuration ---
CONFIG = {
    'clever': {
        'api_url': 'https://api.clever.com/v3.0',
        'token': os.environ.get('CLEVER_API_TOKEN', '{token}'),
    },
    'edfi': {
        'api_url': os.environ.get('EDFI_API_URL', 'https://{server}/api/data/v3'),
        'api_key': os.environ.get('EDFI_API_KEY', '{key}'),
        'api_secret': os.environ.get('EDFI_API_SECRET', '{secret}'),
    },
    'database': {
        'host': os.environ.get('DB_HOST', 'localhost'),
        'port': 5432,
        'dbname': 'EdFi_ODS',
        'user': os.environ.get('DB_USER', 'edfi_admin'),
        'password': os.environ.get('DB_PASSWORD', '{password}'),
    },
    'psa_webhook': {
        # ConnectWise Manage API or generic webhook
        'url': os.environ.get('PSA_WEBHOOK_URL', 'https://{connectwise-url}/v4_6_release/apis/3.0/service/tickets'),
        'api_key': os.environ.get('PSA_API_KEY', '{psa-key}'),
        'company_id': os.environ.get('PSA_COMPANY_ID', '{company}'),
    },
    'thresholds': {
        'max_count_discrepancy_pct': 5.0,   # Alert if counts differ by >5%
        'max_sync_age_hours': 26,             # Alert if last sync >26 hours ago
        'min_student_count': 10,              # Alert if fewer students than expected
    },
    'alert_email': os.environ.get('ALERT_EMAIL', 'alerts@{msp-domain}.com'),
}

def get_clever_counts():
    """Get student and teacher counts from Clever API."""
    headers = {'Authorization': f'Bearer {CONFIG["clever"]["token"]}'}
    
    students = requests.get(
        f'{CONFIG["clever"]["api_url"]}/students?limit=1',
        headers=headers
    )
    teachers = requests.get(
        f'{CONFIG["clever"]["api_url"]}/teachers?limit=1',
        headers=headers
    )
    sections = requests.get(
        f'{CONFIG["clever"]["api_url"]}/sections?limit=1',
        headers=headers
    )
    
    return {
        'students': int(students.headers.get('X-Total-Count', 0)),
        'teachers': int(teachers.headers.get('X-Total-Count', 0)),
        'sections': int(sections.headers.get('X-Total-Count', 0)),
        'status': 'ok' if students.status_code == 200 else 'error',
    }

def get_edfi_counts():
    """Get student counts from Ed-Fi ODS database."""
    conn = psycopg2.connect(**CONFIG['database'])
    cur = conn.cursor()
    
    counts = {}
    queries = {
        'students': "SELECT COUNT(DISTINCT s.StudentUSI) FROM edfi.Student s JOIN edfi.StudentSchoolAssociation ssa ON s.StudentUSI = ssa.StudentUSI WHERE ssa.ExitWithdrawDate IS NULL",
        'staff': "SELECT COUNT(DISTINCT sf.StaffUSI) FROM edfi.Staff sf JOIN edfi.StaffSchoolAssociation sfa ON sf.StaffUSI = sfa.StaffUSI",
        'sections': "SELECT COUNT(*) FROM edfi.Section WHERE id IS NOT NULL",
        'assessments': "SELECT COUNT(*) FROM edfi.StudentAssessment WHERE AdministrationDate >= CURRENT_DATE - INTERVAL '90 days'",
        'latest_change': "SELECT MAX(LastModifiedDate) FROM edfi.Student",
    }
    
    for key, query in queries.items():
        try:
            cur.execute(query)
            result = cur.fetchone()[0]
            counts[key] = result if result else 0
        except Exception as e:
            counts[key] = f'ERROR: {str(e)}'
            logger.error(f'Query failed for {key}: {e}')
    
    cur.close()
    conn.close()
    return counts

def check_sync_health():
    """Main health check: compare counts and detect issues."""
    issues = []
    
    logger.info('Starting sync health check...')
    
    # Get counts from both sources
    try:
        clever = get_clever_counts()
        logger.info(f'Clever counts: {clever}')
    except Exception as e:
        issues.append(f'CRITICAL: Cannot reach Clever API: {e}')
        clever = None
    
    try:
        edfi = get_edfi_counts()
        logger.info(f'Ed-Fi ODS counts: {edfi}')
    except Exception as e:
        issues.append(f'CRITICAL: Cannot connect to Ed-Fi ODS database: {e}')
        edfi = None
    
    # Compare student counts
    if clever and edfi and isinstance(edfi.get('students'), int):
        clever_students = clever.get('students', 0)
        edfi_students = edfi.get('students', 0)
        
        if clever_students > 0:
            discrepancy_pct = abs(clever_students - edfi_students) / clever_students * 100
            if discrepancy_pct > CONFIG['thresholds']['max_count_discrepancy_pct']:
                issues.append(
                    f'WARNING: Student count mismatch - Clever: {clever_students}, '
                    f'Ed-Fi ODS: {edfi_students} ({discrepancy_pct:.1f}% difference)'
                )
        
        if edfi_students < CONFIG['thresholds']['min_student_count']:
            issues.append(
                f'CRITICAL: Ed-Fi ODS has only {edfi_students} students - '
                f'possible sync failure or data loss'
            )
    
    # Check sync freshness
    if edfi and isinstance(edfi.get('latest_change'), datetime):
        age_hours = (datetime.now() - edfi['latest_change']).total_seconds() / 3600
        if age_hours > CONFIG['thresholds']['max_sync_age_hours']:
            issues.append(
                f'WARNING: Ed-Fi ODS data is {age_hours:.1f} hours old - '
                f'last update: {edfi["latest_change"]}. Expected nightly sync.'
            )
    
    # Check assessment data freshness
    if edfi and isinstance(edfi.get('assessments'), int):
        if edfi['assessments'] == 0:
            issues.append(
                'WARNING: No assessment records in the last 90 days - '
                'verify assessment data feed is active'
            )
    
    return issues

def create_psa_ticket(issues):
    """Create a ticket in the MSP's PSA (ConnectWise Manage example)."""
    severity = 'Critical' if any('CRITICAL' in i for i in issues) else 'Warning'
    
    ticket_data = {
        'summary': f'[Auto] Student Analytics Sync {severity} - {datetime.now().strftime("%Y-%m-%d")}',
        'initialDescription': '\n'.join(issues),
        'board': {'name': 'Education Managed Services'},
        'status': {'name': 'New'},
        'priority': {'id': 1 if severity == 'Critical' else 3},
        'company': {'id': CONFIG['psa_webhook']['company_id']},
    }
    # NOTE: ticket_data above is the ConnectWise Manage payload shape; this
    # function posts the generic webhook payload below by default. Swap in
    # ticket_data when targeting the ConnectWise REST API directly.

    # For generic webhook (Halo, Autotask, or Slack):
    webhook_payload = {
        'text': f'🚨 *Student Analytics Sync Alert* ({severity})\n' + '\n'.join(f'• {i}' for i in issues),
        'timestamp': datetime.now().isoformat(),
        'district': CONFIG['psa_webhook']['company_id'],
    }
    
    try:
        response = requests.post(
            CONFIG['psa_webhook']['url'],
            json=webhook_payload,
            headers={
                'Content-Type': 'application/json',
                'Authorization': f'Bearer {CONFIG["psa_webhook"]["api_key"]}'
            },
            timeout=30
        )
        logger.info(f'PSA ticket created: {response.status_code}')
    except Exception as e:
        logger.error(f'Failed to create PSA ticket: {e}')

def run_monitor():
    """Execute the full monitoring cycle."""
    issues = check_sync_health()
    
    if issues:
        logger.warning(f'Found {len(issues)} sync issues:')
        for issue in issues:
            logger.warning(f'  - {issue}')
        create_psa_ticket(issues)
    else:
        logger.info('All sync health checks passed ✓')
    
    # Log summary to file for trend analysis
    summary = {
        'timestamp': datetime.now().isoformat(),
        'issues_count': len(issues),
        'issues': issues,
        'status': 'ALERT' if issues else 'OK'
    }
    with open('/var/log/sync-monitor-history.jsonl', 'a') as f:
        f.write(json.dumps(summary) + '\n')

if __name__ == '__main__':
    run_monitor()
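The JSONL history written at the end of run_monitor() lends itself to simple trend reporting during weekly log review. A minimal sketch (the record fields match the summary dict in the script above; the reporting function itself is illustrative, not part of the monitor):

```python
# summarize_history.py: alert-rate summary over sync-monitor-history.jsonl
import json
from datetime import datetime, timedelta

def summarize_history(lines, days=7, now=None):
    """Count total runs and ALERT runs over the trailing `days` days.

    `lines` is any iterable of JSONL strings in the format written by
    run_monitor(). Returns (total_runs, alert_runs, alert_rate).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    total = alerts = 0
    for line in lines:
        rec = json.loads(line)
        if datetime.fromisoformat(rec['timestamp']) >= cutoff:
            total += 1
            if rec['status'] == 'ALERT':
                alerts += 1
    return total, alerts, (alerts / total if total else 0.0)

# Usage sketch:
#   with open('/var/log/sync-monitor-history.jsonl') as f:
#       total, alerts, rate = summarize_history(f)
```

A rising alert rate across QBR periods is a useful early signal that sync thresholds or the upstream feed need attention.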

Deployment

Install dependencies, configure environment variables, and schedule the monitor via cron
bash
# Install dependencies
pip install requests psycopg2-binary

# Set environment variables
export CLEVER_API_TOKEN='{token}'
export EDFI_API_URL='https://{server}/api/data/v3'
export DB_HOST='localhost'
export DB_PASSWORD='{password}'
export PSA_WEBHOOK_URL='https://{psa-url}/api/tickets'
export PSA_API_KEY='{key}'

# Schedule to run every 6 hours
crontab -e
# Add: 0 */6 * * * cd /opt/student-analytics && python3 sync_monitor.py

PSA Integration

Adapt the create_psa_ticket() function for your specific PSA:

  • ConnectWise Manage: Use REST API v4.6 with company/public API keys
  • Autotask/Datto: Use REST API v1.6 with API integration user
  • Halo PSA: Use REST API with client credentials OAuth
  • Generic: Post to a Slack/Teams webhook for simple alerting
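To make the adaptation concrete, payload construction can be factored out by PSA type. This is a hedged sketch: the ConnectWise field names mirror the ticket_data example in the script above and should be verified against the ConnectWise Manage API reference, and the fallback shape follows the generic Slack/Teams incoming-webhook format.

```python
from datetime import datetime

def build_alert_payload(psa_type, issues, company_id, when=None):
    """Build a POST body for the target PSA or webhook.

    'connectwise' mirrors the ticket_data shape used above (verify field
    names against the ConnectWise Manage API docs); anything else falls
    back to a generic Slack/Teams-style webhook payload.
    """
    when = when or datetime.now()
    severity = 'Critical' if any('CRITICAL' in i for i in issues) else 'Warning'
    if psa_type == 'connectwise':
        return {
            'summary': f'[Auto] Student Analytics Sync {severity} - {when.strftime("%Y-%m-%d")}',
            'initialDescription': '\n'.join(issues),
            'board': {'name': 'Education Managed Services'},
            'status': {'name': 'New'},
            'priority': {'id': 1 if severity == 'Critical' else 3},
            'company': {'id': company_id},
        }
    return {
        'text': f'Student Analytics Sync Alert ({severity})\n' + '\n'.join(f'- {i}' for i in issues),
        'timestamp': when.isoformat(),
        'district': company_id,
    }
```

create_psa_ticket() can then post the returned dict unchanged, keeping the PSA-specific mapping in one place.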

Assessment Data ETL Pipeline

Type: workflow. An automated Extract-Transform-Load (ETL) pipeline that pulls assessment data from Renaissance Star, NWEA MAP, or iReady via their respective APIs or SFTP exports, normalizes it to a common schema, and loads it into the Ed-Fi ODS or directly into the analytics platform. Runs nightly after assessments are administered.

Implementation:

assessment_etl.py
python
# assessment_etl.py
# Assessment Data ETL Pipeline
# Supports: Renaissance Star (SFTP/CSV), NWEA MAP (CSV), iReady (CSV)
# Requirements: pip install pandas paramiko sqlalchemy psycopg2-binary

import pandas as pd
import paramiko
import os
import glob
import logging
from sqlalchemy import create_engine
from datetime import datetime

logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# --- Configuration ---
CONFIG = {
    'star': {
        'sftp_host': os.environ.get('STAR_SFTP_HOST', 'sftp.renaissance.com'),
        'sftp_user': os.environ.get('STAR_SFTP_USER', '{user}'),
        'sftp_password': os.environ.get('STAR_SFTP_PASS', '{pass}'),
        'remote_path': '/exports/',
        'local_path': '/opt/student-analytics/data/star/',
    },
    'map': {
        'import_path': '/opt/student-analytics/data/map/',
    },
    'iready': {
        'import_path': '/opt/student-analytics/data/iready/',
    },
    'database': {
        'connection': os.environ.get('DB_URL', 'postgresql://edfi_admin:{password}@localhost:5432/EdFi_ODS'),
    }
}

# Common output schema
OUTPUT_COLUMNS = [
    'student_unique_id',
    'assessment_source',      # 'Star', 'MAP', 'iReady'
    'assessment_subject',     # 'Reading', 'Math', 'EarlyLiteracy'
    'administration_date',
    'scale_score',
    'percentile_rank',
    'growth_percentile',      # SGP or equivalent
    'grade_equivalent',
    'lexile_or_quantile',     # Lexile (reading) or Quantile (math)
    'proficiency_level',      # 'Below', 'Approaching', 'At', 'Above'
    'standard_scores',        # JSON: {"CCSS.MATH.4.NBT.5": 72, ...}
]

def extract_star_data():
    """Download latest Star assessment exports via SFTP."""
    logger.info('Extracting Star assessment data via SFTP...')
    
    transport = paramiko.Transport((CONFIG['star']['sftp_host'], 22))
    transport.connect(
        username=CONFIG['star']['sftp_user'],
        password=CONFIG['star']['sftp_password']
    )
    sftp = paramiko.SFTPClient.from_transport(transport)
    
    os.makedirs(CONFIG['star']['local_path'], exist_ok=True)
    
    remote_files = sftp.listdir(CONFIG['star']['remote_path'])
    csv_files = [f for f in remote_files if f.endswith('.csv') and 'Assessment' in f]
    
    for filename in csv_files:
        remote_path = CONFIG['star']['remote_path'] + filename
        local_path = CONFIG['star']['local_path'] + filename
        sftp.get(remote_path, local_path)
        logger.info(f'  Downloaded: {filename}')
    
    sftp.close()
    transport.close()
    return CONFIG['star']['local_path']

def _col(df, *names, default=''):
    """Return the first matching column as a Series; otherwise return a
    default-filled Series so downstream .astype()/pd.to_numeric() calls
    remain safe when an export omits a column."""
    for name in names:
        if name in df.columns:
            return df[name]
    return pd.Series([default] * len(df), index=df.index)

def transform_star(data_path):
    """Transform Star CSV exports to common schema."""
    logger.info('Transforming Star data...')
    
    all_files = glob.glob(os.path.join(data_path, '*.csv'))
    dfs = []
    
    for f in all_files:
        df = pd.read_csv(f)
        # Star typical columns: StudentID, AssessmentDate, Subject,
        # ScaledScore, PercentileRank, SGP, GradeEquivalent, ...
        
        transformed = pd.DataFrame({
            'student_unique_id': _col(df, 'StudentID', 'Student ID').astype(str),
            'assessment_source': 'Star',
            'assessment_subject': _col(df, 'AssessmentType', 'Subject', default='Unknown'),
            'administration_date': pd.to_datetime(_col(df, 'CompletedDate', 'Assessment Date', default=None)),
            'scale_score': pd.to_numeric(_col(df, 'ScaledScore', 'Scaled Score'), errors='coerce'),
            'percentile_rank': pd.to_numeric(_col(df, 'PercentileRank', 'Percentile Rank'), errors='coerce'),
            'growth_percentile': pd.to_numeric(_col(df, 'StudentGrowthPercentile', 'SGP'), errors='coerce'),
            'grade_equivalent': _col(df, 'GradeEquivalent', 'Grade Equivalent').astype(str),
            'lexile_or_quantile': _col(df, 'Lexile', 'Quantile').astype(str),
            'proficiency_level': _col(df, 'ProficiencyLevel', 'Benchmark Category', default='Unknown'),
            'standard_scores': '{}',  # Star provides domain scores, not standard-level
        })
        dfs.append(transformed)
    
    result = pd.concat(dfs, ignore_index=True) if dfs else pd.DataFrame(columns=OUTPUT_COLUMNS)
    logger.info(f'  Transformed {len(result)} Star assessment records')
    return result

def transform_map(data_path):
    """Transform NWEA MAP CSV exports to common schema."""
    logger.info('Transforming MAP data...')
    
    all_files = glob.glob(os.path.join(data_path, '*.csv'))
    dfs = []
    
    for f in all_files:
        df = pd.read_csv(f)
        # MAP typical columns: StudentID, TermName, TestRITScore,
        # TestPercentile, FallToFallProjectedGrowth, ...
        
        transformed = pd.DataFrame({
            'student_unique_id': _col(df, 'StudentID').astype(str),
            'assessment_source': 'MAP',
            'assessment_subject': _col(df, 'Discipline', 'Subject', default='Unknown'),
            'administration_date': pd.to_datetime(_col(df, 'TestStartDate', 'Test Start Date', default=None)),
            'scale_score': pd.to_numeric(_col(df, 'TestRITScore', 'RIT Score'), errors='coerce'),
            'percentile_rank': pd.to_numeric(_col(df, 'TestPercentile', 'Percentile'), errors='coerce'),
            'growth_percentile': pd.to_numeric(_col(df, 'GrowthIndex', default=None), errors='coerce'),
            'grade_equivalent': '',
            'lexile_or_quantile': _col(df, 'LexileScore', 'RITtoReading').astype(str),
            'proficiency_level': _col(df, 'AchievementQuintile', default='Unknown').astype(str),
            'standard_scores': '{}',
        })
        dfs.append(transformed)
    
    result = pd.concat(dfs, ignore_index=True) if dfs else pd.DataFrame(columns=OUTPUT_COLUMNS)
    logger.info(f'  Transformed {len(result)} MAP assessment records')
    return result

def load_to_database(df, engine):
    """Load transformed assessment data into the analytics database."""
    logger.info(f'Loading {len(df)} records to database...')
    
    # Deduplicate: keep latest record per student per subject per date
    df = df.drop_duplicates(
        subset=['student_unique_id', 'assessment_source', 'assessment_subject', 'administration_date'],
        keep='last'
    )
    
    # Load to staging table
    df.to_sql(
        'stg_assessment_import',
        engine,
        schema='analytics',
        if_exists='replace',
        index=False,
        method='multi',
        chunksize=1000
    )
    
    # Merge into main assessment table. The column list is explicit because
    # the target table also has a created_at default column, so SELECT *
    # would not line up; standard_scores is cast text -> jsonb.
    from sqlalchemy import text  # SQLAlchemy 2.x requires raw SQL wrapped in text()
    with engine.connect() as conn:
        conn.execute(text("""
            INSERT INTO analytics.student_assessments
                (student_unique_id, assessment_source, assessment_subject,
                 administration_date, scale_score, percentile_rank,
                 growth_percentile, grade_equivalent, lexile_or_quantile,
                 proficiency_level, standard_scores)
            SELECT s.student_unique_id, s.assessment_source, s.assessment_subject,
                   s.administration_date, s.scale_score, s.percentile_rank,
                   s.growth_percentile, s.grade_equivalent, s.lexile_or_quantile,
                   s.proficiency_level, s.standard_scores::jsonb
            FROM analytics.stg_assessment_import s
            WHERE NOT EXISTS (
                SELECT 1 FROM analytics.student_assessments a
                WHERE a.student_unique_id = s.student_unique_id
                AND a.assessment_source = s.assessment_source
                AND a.assessment_subject = s.assessment_subject
                AND a.administration_date = s.administration_date
            );
        """))
        conn.commit()
    
    logger.info('Data load complete ✓')

def run_etl():
    """Execute the full ETL pipeline."""
    engine = create_engine(CONFIG['database']['connection'])
    all_data = []
    
    # Star ETL
    try:
        star_path = extract_star_data()
        star_df = transform_star(star_path)
        all_data.append(star_df)
    except Exception as e:
        logger.error(f'Star ETL failed: {e}')
    
    # MAP ETL (expects manual CSV drop or automated download)
    try:
        if os.path.exists(CONFIG['map']['import_path']):
            map_df = transform_map(CONFIG['map']['import_path'])
            all_data.append(map_df)
    except Exception as e:
        logger.error(f'MAP ETL failed: {e}')
    
    # iReady CSVs (if used) would follow the same drop-folder pattern as MAP.
    # Combine and load
    if all_data:
        combined = pd.concat(all_data, ignore_index=True)
        load_to_database(combined, engine)
        logger.info(f'ETL pipeline complete: {len(combined)} total records processed')
    else:
        logger.warning('No assessment data processed in this run')

if __name__ == '__main__':
    run_etl()
Database Setup
sql
-- run once to create the analytics schema and student_assessments table

CREATE SCHEMA IF NOT EXISTS analytics;

CREATE TABLE IF NOT EXISTS analytics.student_assessments (
    student_unique_id VARCHAR(50),
    assessment_source VARCHAR(20),
    assessment_subject VARCHAR(50),
    administration_date DATE,
    scale_score NUMERIC,
    percentile_rank NUMERIC,
    growth_percentile NUMERIC,
    grade_equivalent VARCHAR(10),
    lexile_or_quantile VARCHAR(20),
    proficiency_level VARCHAR(30),
    standard_scores JSONB DEFAULT '{}',
    created_at TIMESTAMP DEFAULT NOW(),
    PRIMARY KEY (student_unique_id, assessment_source, assessment_subject, administration_date)
);

CREATE INDEX IF NOT EXISTS idx_assessments_student ON analytics.student_assessments(student_unique_id);
CREATE INDEX IF NOT EXISTS idx_assessments_date ON analytics.student_assessments(administration_date);
Cron schedule — runs the ETL pipeline nightly at 4:00 AM
cron
0 4 * * * cd /opt/student-analytics && python3 assessment_etl.py >> /var/log/assessment-etl.log 2>&1

Testing & Validation

  • STUDENT COUNT RECONCILIATION: Export total active student count from SIS (PowerSchool Admin > Reports > Student Count by School). Compare against Clever Dashboard sync count and Panorama Admin > Data > Import Summary. All three counts must match within 2% tolerance. Document any discrepancy and its cause (e.g., recent enrollments not yet synced).
  • SSO LOGIN TEST: Log in via Clever SSO with test accounts for each role — teacher, counselor, principal, district admin. Verify each role sees only the data they are authorized to view (teachers see only their assigned students; principals see only their school). Test from a Chromebook, a Windows laptop, and a mobile device (iPad/phone).
  • ASSESSMENT DATA ACCURACY SPOT-CHECK: Select 10 students at random. For each, manually look up their latest Star/MAP score in the assessment platform and compare it to what appears in Panorama and Power BI. All 10 must match exactly (score, percentile, date). If any mismatch, investigate the data pipeline before go-live.
  • EARLY WARNING ALERT DELIVERY: Identify or create a test scenario where a student crosses a risk threshold (e.g., attendance drops below 90%). Verify that (1) Panorama updates the student's risk tier within 24 hours, (2) the designated email alert is received by the correct teacher/counselor, and (3) the Power Automate Teams notification fires (if configured). Document the alert latency.
  • RISK TIER CALIBRATION: Run the early warning system across all pilot students and review the distribution. Expected: 60-75% Low Risk, 10-15% Some Risk, 8-12% At Risk, 3-8% High Risk. If >40% of students are flagged at any risk level, thresholds are miscalibrated — adjust in Panorama Admin and re-run. Have counselors review the top 20 highest-risk students and confirm alignment with their professional judgment.
  • POWER BI ROW-LEVEL SECURITY: Log into Power BI with a teacher account and verify the dashboard shows ONLY students in that teacher's sections. Attempt to access another school's data by modifying URL parameters — it must return no data. Log in as a principal and verify school-level filtering works correctly. Document with screenshots.
  • NETWORK PERFORMANCE: Run a speed test from 3 different classrooms during peak usage (9-10 AM). Verify minimum 25 Mbps download and <100ms latency to Panorama's servers (ping panoramaed.com). Open the Panorama dashboard and confirm page load time is under 5 seconds. If any classroom fails, investigate AP coverage or bandwidth contention.
  • COMPLIANCE DOCUMENTATION AUDIT: Verify all DPAs are signed and filed in SharePoint/Google Drive compliance folder. Confirm annual FERPA notification has been sent to parents. Verify COPPA consent forms are on file for all students under 13 whose data is shared with analytics platforms. Cross-check vendor SOC 2 reports are current (not expired). Run through the 10-point compliance checklist and document pass/fail for each item.
  • BACKUP AND RECOVERY TEST: If on-premises components are deployed, perform a full backup of the Ed-Fi ODS database using Veeam or pg_dump. Then perform a test restore to a separate database instance and verify record counts match. Document Recovery Time Objective (RTO) and Recovery Point Objective (RPO) — target: RPO < 24 hours, RTO < 4 hours.
  • CUSTOM AI COMPONENT VALIDATION: Run the student_risk_model.py script against the full student population. Verify output CSV contains all expected columns, risk_probability values are between 0 and 1, and risk_tier distribution is reasonable. Run learning_gap_agent.py for 3 test students and verify the GPT-5.4 output contains specific learning standards, actionable recommendations, and does not include student names in the output. Verify OpenAI API calls do not transmit full student names (only anonymized IDs).
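Several of these checks can be scripted rather than performed by hand. The student-count reconciliation in the first item, for instance, reduces to a tolerance comparison; a minimal sketch, with the SIS as the reference count and the 2% tolerance taken from the checklist (source names are illustrative):

```python
def reconcile_counts(counts, tolerance_pct=2.0):
    """Compare each source's active-student count against the SIS count.

    `counts` maps source name to count and must include a 'sis' entry.
    Returns a list of discrepancy messages; an empty list means the
    reconciliation passes.
    """
    issues = []
    sis = counts['sis']
    for source, n in counts.items():
        if source == 'sis' or sis == 0:
            continue
        diff_pct = abs(n - sis) / sis * 100
        if diff_pct > tolerance_pct:
            issues.append(f'{source}: {n} vs SIS {sis} ({diff_pct:.1f}% difference)')
    return issues
```

Any non-empty result should be documented with its cause (e.g., recent enrollments not yet synced) before sign-off.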

Client Handoff

The client handoff should be a structured 2-hour session with the district's leadership team (superintendent or assistant superintendent, director of curriculum/instruction, technology director, and building principals) plus a separate 90-minute session for the MTSS/intervention team. Cover the following topics:

Leadership Session (2 hours)

1. Solution Overview (15 min): What the platform does, what data it uses, and how it identifies at-risk students. Show the main dashboard with live district data.
2. Dashboard Walkthrough (30 min): District-level overview, drill-down by school, grade, and classroom. Show how to read risk tiers, attendance trends, and assessment growth. Demonstrate Power BI custom reports for school board presentations.
3. Early Warning System Demo (20 min): Show how alerts are triggered, who receives them, and what actions to take. Walk through the full workflow: alert received → student profile reviewed → intervention assigned → progress monitored.
4. Compliance & Privacy Review (15 min): Review all signed DPAs, FERPA notification documentation, COPPA consent status, and data retention policies. Provide the compliance binder (physical or digital) containing all documentation.
5. Roles & Responsibilities (15 min): Clarify who owns what: the MSP manages the technology infrastructure, data sync monitoring, and platform administration; the district owns data interpretation, intervention decisions, and parent communication.
6. Success Criteria Review (10 min): Review the agreed-upon KPIs: (a) >80% of teachers logging in at least weekly within 60 days, (b) >90% of high-risk students receiving a documented intervention within 2 weeks of flagging, (c) data sync uptime >99% monthly.
7. Support & Escalation (15 min): How to contact the MSP help desk, expected response times (SLA), and escalation paths. Provide the Quick Reference Card with contact info, login URLs, and common troubleshooting steps.
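The KPIs in the Success Criteria Review are plain threshold checks, so monthly reporting can evaluate them mechanically; a minimal sketch (the metric field names are hypothetical and would come from platform usage exports):

```python
def evaluate_kpis(metrics):
    """Evaluate the three agreed KPIs; rates are fractions in [0, 1]."""
    return {
        'teachers_weekly_login_over_80pct': metrics['weekly_login_rate'] > 0.80,
        'high_risk_intervention_2wk_over_90pct': metrics['intervention_within_2wk_rate'] > 0.90,
        'sync_uptime_over_99pct': metrics['sync_uptime'] > 0.99,
    }
```

Feeding these results into the QBR keeps the success-criteria conversation anchored to the numbers agreed at handoff.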

Documentation to Leave Behind

  • Laminated Quick Reference Guide (1-page) for every teacher
  • Administrator Guide (10-15 pages): platform navigation, report generation, user management
  • Compliance Binder: all signed DPAs, SOC 2 reports, FERPA notification copies, COPPA consent forms, data flow diagram, vendor contact list
  • Training video recordings (hosted on district's Google Drive or SharePoint)
  • Network documentation: IP addresses, VLAN map, firewall rules, Wi-Fi coverage map
  • Backup and disaster recovery runbook
  • MSP service agreement with SLA terms
  • Data dictionary: field definitions for all metrics shown in dashboards (attendance rate calculation, GPA formula, risk tier thresholds, percentile definitions)

Maintenance

Monthly Maintenance Tasks (MSP Responsibility)

  • Run sync_monitor.py health checks (automated 4x daily; review logs weekly)
  • Verify SIS-to-Clever-to-Panorama data sync accuracy via student count reconciliation
  • Review and resolve any PSA tickets generated by automated monitoring
  • Confirm Veeam backup completion for on-prem components; verify backup integrity monthly
  • Apply FortiGate firmware updates and FortiGuard signature updates (monthly maintenance window)
  • Update Chromebook management policies via Google Admin Console as needed
  • Review Power BI scheduled refresh success/failure logs

Quarterly Maintenance Tasks

  • Conduct Quarterly Business Review (QBR) with district leadership: review platform usage analytics (login frequency, report views), risk tier distribution trends, and intervention completion rates
  • Recalibrate early warning thresholds based on actual student outcomes from the prior quarter (did flagged students actually fail? did unflagged students fall through the cracks?)
  • Review and update DPAs for any new vendors or platform changes
  • Verify SOC 2 certifications remain current for all vendors
  • Run a tabletop exercise for data breach incident response
  • Review and prune user accounts (deprovisioned staff, new hires)

Semester/Annual Tasks

  • Retrain the custom risk prediction model (student_risk_model.py) at the start of each semester using labeled outcome data from the prior semester — this is the model retraining trigger
  • Coordinate with Panorama CSM on platform version updates and new feature rollouts
  • Update Ed-Fi ODS if a new version is released (annual cycle)
  • Send annual FERPA notification to parents
  • Conduct full backup and recovery test
  • Review and update training materials for new staff onboarding
  • Audit OpenAI API usage and costs; optimize prompts if costs exceed $25/month

SLA Considerations

  • Platform availability: 99.5% uptime (measured monthly, excluding vendor-side outages)
  • Data sync freshness: Student data updated within 24 hours of SIS changes
  • Alert delivery: Early warning notifications delivered within 4 hours of threshold crossing
  • Help desk response: Priority 1 (system down): 1 hour; Priority 2 (data issue): 4 hours; Priority 3 (question): 1 business day
  • Escalation path: Tier 1 (MSP help desk) → Tier 2 (MSP senior engineer) → Tier 3 (vendor support: Panorama CSM, Clever Support, Microsoft Support)
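For automated breach flagging in the PSA, the response targets above can be encoded directly; a minimal sketch (treating "1 business day" as 8 business hours is an assumption for calculation purposes, not stated policy):

```python
# Priority -> maximum response time in (business) hours, per the SLA bullets.
# ASSUMPTION: P3's "1 business day" is treated as 8 business hours.
SLA_RESPONSE_HOURS = {1: 1, 2: 4, 3: 8}

def response_breached(priority, hours_since_opened):
    """True once a ticket has exceeded its SLA response window."""
    return hours_since_opened > SLA_RESPONSE_HOURS[priority]
```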

Model Retraining Triggers (Custom AI Components)

  • Scheduled: Start of each semester (August/January)
  • Performance-triggered: If the risk model's precision drops below 70% (measured by comparing predictions to actual end-of-quarter outcomes)
  • Data-triggered: If the district adds a new assessment platform, changes grading scales, or undergoes significant enrollment changes (>10% shift)
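The performance trigger can be computed directly from labeled outcomes once end-of-quarter results post; a minimal sketch (here "flagged" means the model predicted at-risk, and "outcome" means the student actually had the adverse end-of-quarter result):

```python
def risk_model_precision(flagged, outcomes):
    """Precision of risk flags over parallel per-student boolean lists:
    of the students flagged at-risk, the fraction who had the outcome."""
    tp = sum(1 for f, o in zip(flagged, outcomes) if f and o)
    fp = sum(1 for f, o in zip(flagged, outcomes) if f and not o)
    return tp / (tp + fp) if (tp + fp) else 0.0

def should_retrain(flagged, outcomes, threshold=0.70):
    """Trigger retraining when precision falls below the 70% threshold."""
    return risk_model_precision(flagged, outcomes) < threshold
```

Precision alone will not catch unflagged students who fell through the cracks, which is why the quarterly recalibration review pairs it with counselor judgment.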

Alternatives

Otus All-in-One Platform (Consolidation Approach)

Replace Panorama Education with Otus ($2–$5/student/year) as the primary analytics and assessment platform. Otus combines LMS, standards-based gradebook, assessment authoring, and analytics dashboards in a single platform, reducing vendor sprawl and integration complexity. Particularly strong for districts that want to consolidate multiple tools.

Note

Recommend when: Budget is the primary constraint, the district has <1,000 students, or the district wants to replace its LMS simultaneously.

Custom Build with Power BI + Ed-Fi ODS (DIY Approach)

Instead of licensing a turnkey platform like Panorama, build a fully custom analytics solution using Microsoft Power BI connected to an Ed-Fi ODS data warehouse. The MSP develops custom dashboards, DAX measures for risk scoring, and Power Automate workflows for alerting. All logic is owned and maintained by the MSP.

Note

Recommend when: The district is already heavily invested in Microsoft 365 A5, has specific custom reporting needs that no turnkey platform satisfies, or has philosophical objections to sharing data with additional EdTech vendors.

Renaissance Star Analytics (Assessment-Centric Approach)

If the district already uses Renaissance Star assessments, leverage Star's built-in analytics and reporting dashboards rather than adding a separate analytics platform. Star provides growth reports, screening reports, and instructional planning pages that identify students below benchmark. Supplement with Star's 'Focus Skills' feature for learning gap identification.

SchoolAI + MagicSchool (AI-Native Lightweight Approach)

For small tutoring organizations or individual schools (not full districts), use SchoolAI ($14.99/month Pro plan) for AI-powered student interaction analytics and MagicSchool for teacher-facing AI tools. These platforms are designed for classroom-level use rather than district-wide analytics, but provide rapid time-to-value for identifying struggling students through AI-analyzed classroom interactions.

Open-Source Stack: Metabase + Ed-Fi + Python ML (Maximum Control)

Build an entirely open-source analytics stack: Ed-Fi ODS for data warehousing, Metabase (self-hosted, free) for dashboards, the custom Python risk prediction model for early warning, and Apache Superset as an alternative visualization layer. Host everything on the Dell PowerEdge T350 on-premises or on a cloud VM ($100–$200/month).
