
Implementation Guide: Provide personalized adaptive practice, adjusting difficulty in real time

Step-by-step implementation guide for deploying AI to provide personalized adaptive practice, adjusting difficulty in real time for Education clients.

Hardware Procurement

Student Chromebooks

Lenovo 100e Chromebook Gen 4 (82W00004US) | Qty: 200

$220/unit MSP cost / $300 suggested resale ($44,000 cost / $60,000 resale for fleet)

Primary student-facing devices for accessing the adaptive practice platform via Chrome browser. Chrome OS provides centralized management via Google Admin Console, automatic updates, and low maintenance overhead. Wi-Fi 6 support ensures reliable connectivity in dense classroom environments.

Teacher/Admin Laptops

Dell Latitude 3450 (i5-1345U, 16GB, 256GB SSD) | Qty: 15

$650/unit MSP cost / $850 suggested resale ($9,750 cost / $12,750 resale)

Teacher and administrator workstations for accessing the analytics dashboard, managing student rosters, reviewing AI interaction logs, and configuring curriculum content. Higher specs needed for running multiple browser tabs with real-time dashboards.

Wireless Access Points

UniFi U6-Pro

Ubiquiti U6-Pro-US | Qty: 8

$140/unit MSP cost / $200 suggested resale ($1,120 cost / $1,600 resale)

Wi-Fi 6 access points providing classroom wireless coverage. Each AP supports 25–30 concurrent Chromebook connections, so eight APs cover a typical small school or tutoring center with 10–15 instructional spaces plus common areas.

Network Switch

Ubiquiti UniFi USW-Pro-24-PoE | Qty: 1

$450 MSP cost / $600 suggested resale

24-port PoE managed switch powering all UniFi access points and connecting to the internet uplink. PoE eliminates the need for separate power adapters at each AP location.

Security Gateway/Firewall

UniFi Security Gateway Pro

Ubiquiti USG-PRO-4 | Qty: 1

$300 MSP cost / $425 suggested resale

Network gateway providing VLAN segmentation (separate student and staff networks), basic firewall rules, and traffic monitoring. Works with the UniFi ecosystem for unified management.

Chromebook Charging Carts

LLTM30-B 30-Unit Charging Cart

Luxor LLTM30-B | Qty: 7

$900/unit MSP cost / $1,200 suggested resale ($6,300 cost / $8,400 resale)

Secure charging and storage for Chromebook fleet. Each cart holds 30 devices with individual charging slots. 7 carts accommodate the full 200-device fleet with room for spares.

Edge AI Device (Optional)

Jetson Orin Nano Super Developer Kit

NVIDIA Jetson Orin Nano Super Developer Kit | Qty: 1

$249 MSP cost / $399 suggested resale

Optional edge compute device for running smaller open-source models locally in environments with unreliable internet. Can run quantized LLaMA-based models for basic tutoring when cloud APIs are unavailable. Not required for primary deployment.

Software Procurement

Google Workspace for Education Standard

Google Education Standard | Qty: per-domain, free tier

$0/year (Standard tier is free for qualifying education institutions)

Identity provider (IdP) for student and staff SSO. Google Admin Console manages Chromebook fleet policies, app deployment, and content filtering integration. Provides Google Classroom as the primary LMS integration target.

Clever Rostering & SSO

Clever Inc. | Middleware / SSO platform

$0 for the school (Clever charges application vendors, not schools)

Middleware platform that syncs student rosters, teacher assignments, and class enrollments from the SIS (PowerSchool, Infinite Campus, etc.) to the adaptive practice platform. Provides SSO so students log in once and access the AI tutor without separate credentials.

GoGuardian Admin + Teacher

GoGuardian | Per-student annual subscription | Qty: 200 students

$7.50/student/year ($1,500/year for 200 students)

CIPA-compliant web content filtering and classroom management. Filters inappropriate content while whitelisting AI platform domains. Teacher module allows real-time screen monitoring and URL blocking during adaptive practice sessions.

Google Gemini 2.0 Flash API

Google Gemini 2.0 Flash | Usage-based API

$0.10/M input tokens, $0.40/M output tokens; estimated $50–$150/month for 200 students at moderate usage

Primary LLM for high-volume student interactions: generating practice questions, providing immediate feedback, and basic hint generation. Most cost-effective option for the volume of interactions (estimated 500–1,500 tokens per student interaction, 20–50 interactions per student per day).
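As a sanity check on the usage figures above, monthly spend can be estimated directly from the per-token prices. This is a minimal sketch: the 60/40 input/output token split and 20 school days per month are assumptions, and it ignores system-prompt and retrieved-context overhead.

```python
# Back-of-envelope Gemini 2.0 Flash cost model using the prices above.
# Assumed: 60/40 input/output token split, 20 school days per month.
STUDENTS = 200
SCHOOL_DAYS = 20
PRICE_IN = 0.10 / 1_000_000   # $ per input token
PRICE_OUT = 0.40 / 1_000_000  # $ per output token

def monthly_cost(tokens_per_interaction: int, interactions_per_day: int) -> float:
    total = STUDENTS * SCHOOL_DAYS * interactions_per_day * tokens_per_interaction
    return total * 0.6 * PRICE_IN + total * 0.4 * PRICE_OUT

print(f"${monthly_cost(500, 20):.0f} to ${monthly_cost(1500, 50):.0f} per month")
```

Raw interaction tokens alone land below the published $50–$150 estimate; system prompts and retrieved context typically multiply input volume severalfold, which is where the remainder of the budget goes.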

OpenAI GPT-4.1 API

OpenAI GPT-4.1 | Qty: usage-based API

$2.00/M input tokens, $8.00/M output tokens; estimated $30–$100/month for complex reasoning tasks

Secondary LLM reserved for complex tutoring scenarios: multi-step problem explanations, Socratic dialogue chains, misconception diagnosis, and content generation for new curriculum items. Invoked only when the Assessment Agent detects a student is struggling or when generating new adaptive content.

Pinecone Vector Database

Pinecone | SaaS, tiered

$0 (free Starter tier for up to 100K vectors) or $50/month (Standard tier for production)

Stores vector embeddings of curriculum content (questions, explanations, worked examples) organized by topic, difficulty level, and prerequisite relationships. The Curriculum Agent queries Pinecone to find the next optimal practice item matching the student's current knowledge state.

LangGraph + LangSmith

LangChain Inc. | LangGraph (open source) + LangSmith (SaaS)

$0 for LangGraph; $39/month Developer tier for LangSmith (monitoring, tracing, evaluation)

LangGraph is the agent orchestration framework that manages the state machine for each student session—routing between Assessment, Tutoring, and Curriculum agents based on real-time performance signals. LangSmith provides production monitoring, trace logging, and prompt evaluation dashboards.
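The routing logic LangGraph manages can be sketched without the framework itself. The agent names, thresholds, and state fields below are illustrative, not the production graph:

```python
# Dependency-free sketch of the per-session routing that the LangGraph
# state machine performs. Thresholds and field names are illustrative.
from dataclasses import dataclass

@dataclass
class SessionState:
    consecutive_wrong: int = 0   # real-time performance signal
    needs_new_item: bool = True  # student is ready for the next item

def route(state: SessionState) -> str:
    """Choose the next agent from the current session signals."""
    if state.consecutive_wrong >= 3:
        return "tutoring_agent"    # escalate to GPT-4.1 for Socratic help
    if state.needs_new_item:
        return "curriculum_agent"  # query Pinecone for the next item
    return "assessment_agent"      # grade the response, update BKT state
```

In LangGraph proper, `route` becomes the conditional-edge function on a `StateGraph`, with each agent registered as a node.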

Azure App Service (B2 Plan)

Microsoft | B2 Plan | Qty: 1

$55/month (B2: 2 vCPUs, 3.5 GB RAM) scaling to $110/month (P1v3) during peak

Hosts the FastAPI backend application running the LangGraph agent orchestrator, student session management, and REST API endpoints consumed by the web frontend and LTI integration layer.

Azure Database for PostgreSQL (Flexible Server)

Microsoft | Burstable B2s: 2 vCPUs, 4 GB RAM, 32 GB storage | Qty: 1

$30–$65/month

Stores student profiles, knowledge state vectors (BKT parameters per skill per student), session logs, interaction history, teacher configuration settings, and curriculum metadata. PostgreSQL chosen for JSON support and pgvector extension compatibility.

Azure Blob Storage

Microsoft | Usage-based

$5–$15/month

Stores curriculum content assets (images, PDFs, worked example diagrams), student interaction audit logs (FERPA compliance), and exported analytics reports.

IXL Learning (Supplementary)

IXL Learning | Per-classroom or per-student annual

$369/year per 25-student classroom license; ~$5–$10/student/year for district pricing

Optional supplementary adaptive practice platform providing 17,000+ pre-built skill practice items across K-12 subjects. Used alongside the custom AI agent to cover subjects or grade levels where custom curriculum content has not yet been developed.

Khanmigo by Khan Academy

Khan Academy | Per-user monthly | Qty: 200 students

Free for teachers; $4/month per learner ($800/month for 200 students, or $9,600/year)

Supplementary AI tutor providing Socratic questioning across Khan Academy's content library. Can be deployed immediately while the custom agent is being developed, giving students AI-assisted practice from day one.

Prerequisites

  • Active Google Workspace for Education domain with all student and staff accounts provisioned and organized into Organizational Units (OUs) by grade level or class section
  • Student Information System (SIS) configured and populated with current enrollment data — supported SIS: PowerSchool (23% market share), FACTS SIS, Infinite Campus, or any SIS supporting Clever Secure Sync
  • Internet connectivity of at least 100 Mbps symmetrical (1 Gbps recommended for 200+ concurrent devices). Verify with speed test from instructional areas during peak hours
  • Clever district account created and SIS sync configured (https://clever.com/signup). Clever sync must be verified with accurate roster data before platform deployment
  • CIPA-compliant content filtering solution deployed (GoGuardian, Securly, or Cisco Umbrella). Required for E-Rate funding eligibility and legal compliance
  • Executed Data Processing Agreements (DPAs) with all AI vendors: Google (Gemini API), OpenAI, Pinecone, Microsoft Azure. Templates available from Student Data Privacy Consortium (SDPC) at https://privacy.a4l.org/
  • Parental consent forms collected for all students under 13 per COPPA requirements. Must explicitly cover AI-based tutoring interactions and data collection. Updated for June 2025 COPPA amendments prohibiting indefinite data retention
  • Python 3.11+ development environment with access to Git, Docker, and Azure CLI for the MSP engineer building the custom agent stack
  • LMS administrator access: Google Classroom admin credentials for API integration, or Canvas/Schoology admin access for LTI 1.3 tool configuration
  • Network firewall rules configured to allow outbound HTTPS (port 443) to: *.googleapis.com, api.openai.com, *.pinecone.io, *.azurewebsites.net, *.langchain.com, *.clever.com
  • Designated project lead at the client site (typically a curriculum coordinator or tech-savvy teacher) who will serve as the primary feedback contact during pilot deployment
  • Budget approval for Year 1 estimated costs: $55,000–$90,000 first-year total (hardware + development + software + MSP services), $18,000–$48,000/year recurring

Installation Steps

Step 1: Network Infrastructure Assessment and Upgrade

Perform a site survey of all instructional spaces to assess current Wi-Fi coverage, bandwidth, and network infrastructure. Document dead zones, AP placement requirements, and PoE switch port availability. Install Ubiquiti UniFi equipment if upgrades are needed.

1. Install the UniFi Network Controller on a local machine, or use UniFi Cloud
2. Access the controller at https://localhost:8443
3. Navigate to Devices > Adopt for each AP and switch
4. In UniFi Controller > Settings > Networks, create the 'Staff' network: VLAN ID 10, DHCP range 10.10.10.0/24
5. In UniFi Controller > Settings > Networks, create the 'Student' network: VLAN ID 20, DHCP range 10.20.20.0/24
6. Configure SSID 'SchoolStaff' on VLAN 10 with WPA3-Enterprise
7. Configure SSID 'StudentLearn' on VLAN 20 with WPA2-PSK or 802.1x
Install UniFi Network Controller
bash
sudo apt-get update && sudo apt-get install -y unifi  # requires the Ubiquiti apt repository to be configured first
  • VLAN 10: Staff network (teacher laptops, admin systems)
  • VLAN 20: Student network (Chromebooks, restricted internet)
  • VLAN 30: IoT/Infrastructure (APs, switches, management)
Verify bandwidth from student VLAN
bash
speedtest-cli --simple
Note

Schedule installation during non-instructional hours. Each U6-Pro AP covers approximately 2,500 sq ft with 25-30 concurrent devices. Mount APs at ceiling height (8-10 feet) centered in instructional areas. Ensure PoE budget on the USW-Pro-24-PoE (400W total) can power all 8 APs simultaneously (~13W each = 104W total).

Step 2: Chromebook Fleet Provisioning and MDM Configuration

Unbox, enroll, and configure all 200 Chromebooks into the Google Admin Console with appropriate policies for adaptive learning. Create device OUs matching classroom/grade structure and deploy kiosk or managed browser settings.

1. In the Google Admin Console (admin.google.com), navigate to Devices > Chrome > Settings > Device Settings
2. Select the 'Students' OU and apply the following policies:
  • Enrollment & Access — Forced re-enrollment: Enabled
  • Enrollment & Access — Verified mode: Required
  • Enrollment & Access — Guest mode: Disabled
  • Enrollment & Access — Sign-in restriction: Limit to @yourdomain.edu accounts
  • Network — Auto-connect to 'StudentLearn' SSID
  • Network — Managed network configurations pushed via policy
  • Content & Apps — URL blocking: Delegate to GoGuardian (installed as extension)
  • Content & Apps — Force-install extensions: GoGuardian, Clever Instant Login
  • Content & Apps — Bookmarks: Add adaptive practice platform URL
  • Power & Updates — Auto-update: Enabled on 'Stable' channel
  • Power & Updates — Release channel: Stable
  • Power & Updates — Scheduled reboot: Weekly at 2:00 AM Saturday
Then enroll each device:

1. Power on the Chromebook
2. Connect to Wi-Fi
3. Press Ctrl+Alt+E on the sign-in screen
4. Enter the enrollment token
Note

Order Chrome Enterprise Upgrade licenses ($50/device one-time or $50/device/year) if not already included in the Education license. This is required for full device management features. Consider using a zero-touch enrollment partner (CDW, SHI) to have devices pre-enrolled before delivery. Label each device with an asset tag matching the Google Admin serial number.

Step 3: Identity Provider and Rostering Configuration

Configure Google Workspace for Education as the identity provider, set up Clever rostering to sync student/teacher rosters from the SIS, and verify SSO flows work end-to-end.

```
# Step 3a: Verify Google Workspace user provisioning
# In Google Admin > Directory > Users, confirm all students and staff exist
# Students should be in OUs: /Students/Grade-K, /Students/Grade-1, etc.
# Staff should be in OUs: /Staff/Teachers, /Staff/Admins

# Step 3b: Configure Clever
# 1. Log in to https://clever.com...
```

Step 4: Cloud Infrastructure Deployment

Provision the Azure cloud environment that will host the custom adaptive practice agent backend, PostgreSQL database, and blob storage. Use infrastructure-as-code for reproducibility.

1. Install the Azure CLI
2. Log in to Azure
3. Create a resource group
4. Create an Azure App Service Plan (B2 tier for initial deployment)
5. Create a Web App for the FastAPI backend
6. Create a PostgreSQL Flexible Server
7. Enable the pgvector extension
8. Create a Blob Storage account
9. Create containers for content and logs
10. Configure Web App environment variables
Install Azure CLI and authenticate
bash
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az login
Create resource group
bash
az group create --name rg-adaptive-learning --location eastus
Create Azure App Service Plan (B2 tier)
bash
az appservice plan create \
  --name asp-adaptive-learning \
  --resource-group rg-adaptive-learning \
  --sku B2 \
  --is-linux
Create Web App for the FastAPI backend
bash
az webapp create \
  --name adaptive-practice-api \
  --resource-group rg-adaptive-learning \
  --plan asp-adaptive-learning \
  --runtime 'PYTHON:3.11'
Create PostgreSQL Flexible Server
bash
az postgres flexible-server create \
  --name adaptive-learning-db \
  --resource-group rg-adaptive-learning \
  --location eastus \
  --sku-name Standard_B2s \
  --tier Burstable \
  --storage-size 32 \
  --version 16 \
  --admin-user adaptiveadmin \
  --admin-password '<GENERATE_STRONG_PASSWORD>' \
  --public-access 0.0.0.0
Enable pgvector extension on PostgreSQL
bash
az postgres flexible-server parameter set \
  --resource-group rg-adaptive-learning \
  --server-name adaptive-learning-db \
  --name azure.extensions \
  --value vector
Create Blob Storage account
bash
az storage account create \
  --name adaptivelearningstorage \
  --resource-group rg-adaptive-learning \
  --location eastus \
  --sku Standard_LRS \
  --kind StorageV2
Create storage containers for content, logs, and analytics
bash
az storage container create --name curriculum-content --account-name adaptivelearningstorage
az storage container create --name interaction-logs --account-name adaptivelearningstorage
az storage container create --name analytics-exports --account-name adaptivelearningstorage
Configure Web App environment variables
bash
az webapp config appsettings set \
  --name adaptive-practice-api \
  --resource-group rg-adaptive-learning \
  --settings \
  GOOGLE_API_KEY='<GEMINI_API_KEY>' \
  OPENAI_API_KEY='<OPENAI_API_KEY>' \
  PINECONE_API_KEY='<PINECONE_API_KEY>' \
  DATABASE_URL='postgresql://adaptiveadmin:<PASSWORD>@adaptive-learning-db.postgres.database.azure.com:5432/adaptive_learning' \
  AZURE_STORAGE_CONNECTION_STRING='<CONNECTION_STRING>' \
  CLEVER_CLIENT_ID='<CLEVER_CLIENT_ID>' \
  CLEVER_CLIENT_SECRET='<CLEVER_CLIENT_SECRET>' \
  LANGSMITH_API_KEY='<LANGSMITH_KEY>' \
  LANGSMITH_PROJECT='adaptive-practice-prod' \
  ENVIRONMENT='production'
Note

Use Azure Key Vault for production secret management instead of environment variables. The B2 plan supports up to 200 concurrent WebSocket connections which is sufficient for initial deployment. Plan to scale to P1v3 ($110/month) when concurrent users exceed 150. Enable Azure Monitor and Application Insights from day one for observability. Set up daily automated backups for PostgreSQL with 7-day retention.

Step 5: Database Schema and Initial Data Setup

Create the PostgreSQL database schema for student profiles, knowledge state tracking (Bayesian Knowledge Tracing parameters), session logs, and curriculum metadata. Initialize with curriculum content.

PostgreSQL schema: students, skills, knowledge state, curriculum items, sessions, interactions, and performance indexes
sql
-- Connect first via psql (shell command, shown here as a comment):
-- psql 'postgresql://adaptiveadmin:<PASSWORD>@adaptive-learning-db.postgres.database.azure.com:5432/adaptive_learning'

-- Create extensions
CREATE EXTENSION IF NOT EXISTS vector;  -- pgvector's SQL extension name is 'vector'
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Core tables
CREATE TABLE students (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  clever_id VARCHAR(255) UNIQUE,
  google_id VARCHAR(255) UNIQUE,
  first_name VARCHAR(100) NOT NULL,
  last_name VARCHAR(100) NOT NULL,
  grade_level INTEGER,
  created_at TIMESTAMPTZ DEFAULT NOW(),
  updated_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE skills (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  subject VARCHAR(50) NOT NULL,
  domain VARCHAR(100) NOT NULL,
  skill_name VARCHAR(255) NOT NULL,
  grade_level INTEGER,
  difficulty_level FLOAT DEFAULT 0.5,
  prerequisite_skill_ids UUID[],
  description TEXT,
  standard_code VARCHAR(50)  -- e.g., CCSS.MATH.CONTENT.4.NF.A.1
);

CREATE TABLE student_knowledge_state (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  student_id UUID REFERENCES students(id),
  skill_id UUID REFERENCES skills(id),
  p_know FLOAT DEFAULT 0.3,       -- P(L_0): probability student knows the skill
  p_transit FLOAT DEFAULT 0.09,   -- P(T): probability of learning on each opportunity
  p_slip FLOAT DEFAULT 0.1,       -- P(S): probability of slipping (wrong despite knowing)
  p_guess FLOAT DEFAULT 0.2,      -- P(G): probability of guessing (right despite not knowing)
  mastery_level FLOAT DEFAULT 0.0, -- Derived mastery (0.0 to 1.0)
  total_attempts INTEGER DEFAULT 0,
  correct_attempts INTEGER DEFAULT 0,
  last_practiced_at TIMESTAMPTZ,
  UNIQUE(student_id, skill_id)
);

CREATE TABLE curriculum_items (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  skill_id UUID REFERENCES skills(id),
  item_type VARCHAR(50) NOT NULL, -- 'question', 'hint', 'explanation', 'worked_example'
  difficulty FLOAT NOT NULL,       -- 0.0 (easy) to 1.0 (hard)
  content JSONB NOT NULL,          -- {stem, choices, correct_answer, solution_steps}
  embedding vector(768),           -- For semantic search via pgvector
  metadata JSONB,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

CREATE TABLE sessions (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  student_id UUID REFERENCES students(id),
  started_at TIMESTAMPTZ DEFAULT NOW(),
  ended_at TIMESTAMPTZ,
  total_items_attempted INTEGER DEFAULT 0,
  total_correct INTEGER DEFAULT 0,
  skills_practiced UUID[],
  session_metadata JSONB
);

CREATE TABLE interactions (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  session_id UUID REFERENCES sessions(id),
  student_id UUID REFERENCES students(id),
  curriculum_item_id UUID REFERENCES curriculum_items(id),
  skill_id UUID REFERENCES skills(id),
  student_response TEXT,
  is_correct BOOLEAN,
  response_time_ms INTEGER,
  difficulty_at_time FLOAT,
  agent_feedback TEXT,
  agent_model_used VARCHAR(50),
  tokens_used INTEGER,
  created_at TIMESTAMPTZ DEFAULT NOW()
);

-- Indexes for performance
CREATE INDEX idx_knowledge_state_student ON student_knowledge_state(student_id);
CREATE INDEX idx_interactions_session ON interactions(session_id);
CREATE INDEX idx_interactions_student ON interactions(student_id);
CREATE INDEX idx_curriculum_items_skill ON curriculum_items(skill_id);
CREATE INDEX idx_curriculum_items_difficulty ON curriculum_items(difficulty);
Note

The Bayesian Knowledge Tracing (BKT) parameters (p_know, p_transit, p_slip, p_guess) are initialized with standard defaults from the literature. These will be updated in real-time as students interact with the system. The p_know value represents the system's current estimate of whether the student has mastered a skill. When p_know exceeds 0.95, the skill is considered mastered. Run VACUUM ANALYZE after bulk data imports.
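The BKT update itself is Bayes' rule plus a learning transition. A minimal sketch using the schema's default parameters (the real-time path would persist the result back to `student_knowledge_state`):

```python
# Bayesian Knowledge Tracing update with the schema's default parameters.

def bkt_update(p_know: float, correct: bool, p_transit: float = 0.09,
               p_slip: float = 0.1, p_guess: float = 0.2) -> float:
    """Posterior P(skill known) after observing one attempt."""
    if correct:
        # Correct answer: either knew and didn't slip, or guessed
        evidence = p_know * (1 - p_slip) + (1 - p_know) * p_guess
        posterior = p_know * (1 - p_slip) / evidence
    else:
        # Wrong answer: either knew but slipped, or didn't know
        evidence = p_know * p_slip + (1 - p_know) * (1 - p_guess)
        posterior = p_know * p_slip / evidence
    # Each attempt is also a learning opportunity: apply P(T)
    return posterior + (1 - posterior) * p_transit

p = 0.3                      # the schema's default prior
for _ in range(3):           # three consecutive correct answers
    p = bkt_update(p, True)  # p climbs past the 0.95 mastery threshold
```

With these defaults, three correct answers in a row are enough to cross the 0.95 mastery threshold from the 0.3 prior, which is why the difficulty bounds and threshold are exposed as teacher-configurable settings in Step 12.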

Step 6: Pinecone Vector Database Setup and Curriculum Embedding

Create the Pinecone index for curriculum content, generate embeddings for all practice items using Google's text-embedding model, and upsert them into Pinecone for semantic retrieval by the Curriculum Agent.

1. Install the required packages
2. Run the embedding script
Install required Python packages
bash
pip install pinecone google-generativeai  # 'pinecone' supersedes the deprecated pinecone-client
embed_curriculum.py
python
# embed_curriculum.py: generate embeddings and upsert curriculum items to Pinecone
import os
import json
import google.generativeai as genai
from pinecone import Pinecone, ServerlessSpec

# Initialize Pinecone
pc = Pinecone(api_key=os.environ['PINECONE_API_KEY'])

# Create the index if it doesn't exist (768 dims for Google text-embedding-004)
if 'curriculum-items' not in pc.list_indexes().names():
    pc.create_index(
        name='curriculum-items',
        dimension=768,
        metric='cosine',
        spec=ServerlessSpec(cloud='aws', region='us-east-1')
    )

index = pc.Index('curriculum-items')

# Initialize Google embedding model
genai.configure(api_key=os.environ['GOOGLE_API_KEY'])

def embed_text(text: str) -> list[float]:
    result = genai.embed_content(
        model='models/text-embedding-004',
        content=text
    )
    return result['embedding']

# Load curriculum items from JSON file
with open('curriculum_data.json', 'r') as f:
    items = json.load(f)

# Batch upsert to Pinecone
batch_size = 100
for i in range(0, len(items), batch_size):
    batch = items[i:i+batch_size]
    vectors = []
    for item in batch:
        text_to_embed = f"{item['skill_name']}: {item['content']['stem']}"
        embedding = embed_text(text_to_embed)
        vectors.append({
            'id': item['id'],
            'values': embedding,
            'metadata': {
                'skill_id': item['skill_id'],
                'subject': item['subject'],
                'grade_level': item['grade_level'],
                'difficulty': item['difficulty'],
                'item_type': item['item_type']
            }
        })
    index.upsert(vectors=vectors)
    print(f'Upserted batch {i//batch_size + 1}')
Run the embedding script
bash
python embed_curriculum.py
Note

Google text-embedding-004 produces 768-dimensional vectors and is optimized for retrieval tasks. Cost is approximately $0.00001 per 1K characters, making bulk embedding of a full K-12 curriculum (10,000-50,000 items) cost under $5. For initial deployment, start with 500-2,000 curriculum items covering the most-taught skills at the client's grade levels. The MSP or curriculum coordinator should prepare curriculum_data.json following the schema in the curriculum_items table. Many textbook publishers provide question banks in QTI format that can be converted.
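For retrieval, the Curriculum Agent must convert the student's mastery estimate into a Pinecone metadata filter. An illustrative mapping; the 0.1 "desirable difficulty" offset and band width are assumptions to tune, not fixed values:

```python
# Map a BKT mastery estimate to a Pinecone metadata filter that targets
# items slightly above the student's current level. Offsets are assumptions.

def difficulty_filter(p_know: float, skill_id: str) -> dict:
    target = min(1.0, p_know + 0.1)  # aim just beyond current mastery
    return {
        "skill_id": {"$eq": skill_id},
        "difficulty": {"$gte": max(0.0, target - 0.1),
                       "$lte": min(1.0, target + 0.1)},
    }

# Usage against the index created above (illustrative):
# index.query(vector=query_vec, top_k=5, include_metadata=True,
#             filter=difficulty_filter(p_know=0.4, skill_id=skill_id))
```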

Step 7: Deploy the LangGraph Adaptive Agent Backend

Deploy the core FastAPI application containing the LangGraph multi-agent system. This includes the Assessment Agent, Tutoring Agent, and Curriculum Agent, along with REST API endpoints for the frontend and LTI integration layer.

```
# Clone the project repository
git clone https://github.com/your-msp/adaptive-practice-agent.git
cd adaptive-practice-agent

# Project structure:
# adaptive-practice-agent/
# ├── app/
# │   ├── __init__.py
# │   ├── main.py    # FastAPI application entry
# │   ├...
```

Step 8: LTI 1.3 Integration with LMS

Configure the adaptive practice platform as an LTI 1.3 tool provider so it can be launched from within the client's LMS (Google Classroom via API, Canvas/Moodle via LTI). This enables seamless grade passback and deep linking.

Google Classroom Setup (API-based)

1. In Google Cloud Console > APIs & Services, enable the Google Classroom API
2. In Google Cloud Console > APIs & Services, enable the Google Drive API
3. Create OAuth 2.0 credentials with authorized redirect URI: https://adaptive-practice-api.azurewebsites.net/auth/google/callback

Canvas LMS Setup (LTI 1.3)

1. In Canvas Admin > Developer Keys > Add LTI Key, configure with: Title: Adaptive Practice AI | Redirect URIs: https://adaptive-practice-api.azurewebsites.net/lti/launch | Target Link URI: https://adaptive-practice-api.azurewebsites.net/lti/launch | OpenID Connect Initiation URL: https://adaptive-practice-api.azurewebsites.net/lti/login | JWK Method: Public JWK URL | Public JWK URL: https://adaptive-practice-api.azurewebsites.net/.well-known/jwks.json
2. Enable LTI Advantage Services: Can create and view assignment data in the gradebook (AGS), Can view assignment data in the gradebook (AGS), Can view names and roles (NRPS), Can create/update content items (Deep Linking)
3. In Canvas Admin > Settings > Apps > Add App, set Configuration Type to 'By Client ID' and enter the Client ID from the Developer Key
Install the PyLTI1p3 library in the backend
bash
pip install PyLTI1p3

Moodle Setup (LTI 1.3)

1. Navigate to Site Administration > Plugins > Activity modules > External tool > Manage tools
2. Add 'Adaptive Practice' as a preconfigured tool with the following settings: Tool URL: https://adaptive-practice-api.azurewebsites.net/lti/launch | LTI version: LTI 1.3 | Public keyset URL: https://adaptive-practice-api.azurewebsites.net/.well-known/jwks.json | Initiate login URL: https://adaptive-practice-api.azurewebsites.net/lti/login | Redirection URI(s): https://adaptive-practice-api.azurewebsites.net/lti/launch | Services: IMS LTI Assignment and Grade Services, IMS LTI Names and Role Provisioning
Note

Google Classroom does NOT support LTI—use the Google Classroom API directly for assignment creation and grade sync. For Schoology, note it only supports LTI 1.1, so use the Schoology REST API instead for full functionality. LTI 1.3 requires RSA key pairs; generate them with the command below. Store the private key in Azure Key Vault.

Generate RSA key pair for LTI 1.3
bash
openssl genrsa -out private.pem 2048 && openssl rsa -in private.pem -pubout -out public.pem

Step 9: Frontend Web Application Deployment

Deploy the student-facing web application that provides the adaptive practice interface. This is a responsive web app optimized for Chromebook screens that communicates with the FastAPI backend via REST and WebSocket APIs.

The frontend is a React/Next.js application. To deploy it:

1. Clone and build the frontend
2. Deploy to Azure Static Web Apps or Vercel (choose one option below)
3. Configure a custom domain in Azure DNS or your registrar: CNAME practice.yourdomain.com -> adaptive-practice-web.azurestaticapps.net
4. Enable HTTPS (automatic with Azure Static Web Apps)
5. Configure CORS on the backend
Clone and build the frontend application
bash
cd adaptive-practice-frontend
npm install
npm run build
Option A: Deploy to Azure Static Web Apps
bash
az staticwebapp create \
  --name adaptive-practice-web \
  --resource-group rg-adaptive-learning \
  --source https://github.com/your-msp/adaptive-practice-frontend \
  --location 'eastus2' \
  --branch main \
  --app-location '/' \
  --output-location '.next' \
  --login-with-github
Option B: Deploy to Vercel
bash
npx vercel --prod
Configure CORS on the backend to allow the frontend domain
bash
az webapp cors add \
  --name adaptive-practice-api \
  --resource-group rg-adaptive-learning \
  --allowed-origins 'https://practice.yourdomain.com'
Note

The frontend must be fully responsive for Chromebook screens (1366x768 typical resolution). Use large touch targets (minimum 44x44px) for younger students. Implement a distraction-free UI with minimal navigation. The practice interface should show: current question, answer input area, hint button, progress bar, and streak counter. Avoid ads, social features, or any non-educational content. Test on Chrome OS specifically using a Chromebook or ChromeOS Flex VM.

Step 10: Content Filtering and CIPA Compliance Configuration

Configure GoGuardian or an equivalent content filter to whitelist all adaptive learning platform domains while maintaining CIPA-compliant filtering for general web access. Set up teacher monitoring capabilities.

```
# GoGuardian Admin Configuration:
# 1. Log in to admin.goguardian.com
# 2. Navigate to Policies > Content Filtering
# 3. Create policy 'Adaptive Learning Allow' applied to the Student OU
#    Whitelist the following domains:
#    - practice.yourdomain.com (adaptive practice frontend)
#    - adapti...
```

Step 11: FERPA/COPPA Compliance Implementation

Implement all required privacy and compliance measures including parental consent workflows, data retention policies, audit logging, and vendor DPA verification.

1. Configure the data retention policy in PostgreSQL: create a scheduled job to purge old interaction data (per the 2025 COPPA amendments, no indefinite retention)
2. Create the audit log table
3. Configure API-level data minimization in app/config.py
4. Set the OpenAI API to not use data for training
5. Set Google Gemini API data handling
6. Create the parental consent tracking table
Data retention cleanup function and pg_cron schedule
sql
CREATE OR REPLACE FUNCTION cleanup_old_data() RETURNS void AS $$
BEGIN
  -- Archive interactions older than 3 years (adjust per district policy)
  INSERT INTO interactions_archive 
    SELECT * FROM interactions 
    WHERE created_at < NOW() - INTERVAL '3 years';
  DELETE FROM interactions 
    WHERE created_at < NOW() - INTERVAL '3 years';
  
  -- Delete session data older than 3 years
  DELETE FROM sessions 
    WHERE started_at < NOW() - INTERVAL '3 years';
  
  -- Log the cleanup
  INSERT INTO audit_log (action, details, performed_at)
    VALUES ('data_retention_cleanup', 
            json_build_object('deleted_before', NOW() - INTERVAL '3 years'),
            NOW());
END;
$$ LANGUAGE plpgsql;

-- Schedule via pg_cron (or Azure Automation Runbook)
SELECT cron.schedule('data-retention', '0 2 1 * *', 'SELECT cleanup_old_data()');
Audit log table definition
sql
CREATE TABLE audit_log (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  action VARCHAR(100) NOT NULL,
  user_id UUID,
  user_role VARCHAR(50),
  details JSONB,
  ip_address INET,
  performed_at TIMESTAMPTZ DEFAULT NOW()
);
app/config.py — API-level data minimization settings
python
COPPA_UNDER_13 = True  # Enable enhanced protections
DATA_COLLECTION_MINIMAL = True  # Only collect educational data
AI_TRAINING_OPT_OUT = True  # Never send student data for model training
Parental consent tracking table definition
sql
CREATE TABLE parental_consent (
  id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
  student_id UUID REFERENCES students(id),
  consent_type VARCHAR(100) NOT NULL,
  granted BOOLEAN NOT NULL,
  guardian_name VARCHAR(200),
  guardian_email VARCHAR(200),
  consent_date TIMESTAMPTZ,
  expiry_date TIMESTAMPTZ,
  ip_address INET,
  consent_document_url TEXT
);
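Before any AI interaction, the backend should gate on this table. A minimal sketch of the check, assuming consent rows are fetched as dicts keyed by the column names above (the 'ai_tutoring' consent_type value is an assumption):

```python
# COPPA consent gate: under-13 students need a granted, unexpired
# 'ai_tutoring' consent row before their first AI interaction.
from datetime import datetime, timezone

def ai_access_allowed(age: int, consent_rows: list[dict]) -> bool:
    if age >= 13:
        return True  # COPPA consent not required (FERPA still applies)
    now = datetime.now(timezone.utc)
    return any(
        row["consent_type"] == "ai_tutoring"
        and row["granted"]
        and (row.get("expiry_date") is None or row["expiry_date"] > now)
        for row in consent_rows
    )
```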
Note

Execute Data Processing Agreements (DPAs) with ALL vendors before going live: OpenAI, Google (Gemini API), Pinecone, Microsoft Azure. Use the Student Data Privacy Consortium (SDPC) National DPA templates at https://privacy.a4l.org/ — many vendors have pre-signed NDPAs available. For students under 13, parental consent must be obtained BEFORE the student's first AI interaction. Build a simple consent form page that parents can sign electronically. Store consent records for the duration of the student's enrollment plus 3 years. States like California (SOPIPA), New York (Ed Law 2-d), and Texas have additional requirements — verify state-specific laws for the client's jurisdiction. OpenAI API data usage policy: https://platform.openai.com/docs/models#how-we-use-your-data Google Gemini API terms: https://ai.google.dev/terms

Step 12: Teacher Analytics Dashboard Configuration

Deploy and configure the teacher-facing analytics dashboard that shows real-time student progress, mastery levels, time-on-task, and AI interaction summaries. Enable teacher controls for adjusting agent behavior.

The analytics dashboard is served from the same Next.js frontend at /dashboard route, accessible only to users with 'teacher' or 'admin' role.

Key Dashboard Views to Configure

1. Class Overview: heatmap of skill mastery across all students
2. Individual Student: detailed BKT parameters, interaction history, struggle alerts
3. Skill Analysis: which skills have lowest mastery rates class-wide
4. AI Interaction Log: review AI-generated questions and feedback (FERPA audit)
5. Usage Stats: time on task, sessions per week, items attempted

Teacher Configuration Options Exposed via Dashboard

  • Set minimum/maximum difficulty bounds per student or class
  • Enable/disable specific skills or topics for practice
  • Set target mastery threshold (default 0.95)
  • Configure 'struggle alert' threshold (e.g., alert after 5 consecutive wrong answers)
  • Review and approve AI-generated content before it's shown to students

Analytics API Endpoints

FastAPI backend dashboard endpoints (already deployed)
http
GET /api/v1/dashboard/class/{class_id}/overview
GET /api/v1/dashboard/student/{student_id}/progress
GET /api/v1/dashboard/class/{class_id}/skill-mastery
GET /api/v1/dashboard/student/{student_id}/interactions
POST /api/v1/dashboard/class/{class_id}/settings
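The aggregation behind the class overview endpoint can be sketched in a few lines. This is transport-agnostic logic under an assumed input shape of (student_id, skill_id, p_know) tuples, not the deployed schema:

```python
# Sketch of the aggregation behind GET .../class/{class_id}/overview:
# collapse per-student BKT estimates into a skill-by-student mastery grid
# plus per-skill class means for the heatmap. Input shape is assumed.
from collections import defaultdict

def class_overview(estimates: list[tuple[str, str, float]]) -> dict:
    """Return {'grid': {skill: {student: p_know}}, 'class_means': {...}}."""
    grid: dict[str, dict[str, float]] = defaultdict(dict)
    for student_id, skill_id, p_know in estimates:
        grid[skill_id][student_id] = p_know
    means = {skill: round(sum(s.values()) / len(s), 3)
             for skill, s in grid.items()}
    return {'grid': dict(grid), 'class_means': means}

overview = class_overview([
    ('s1', 'frac-add', 0.92), ('s2', 'frac-add', 0.48),
    ('s1', 'frac-sub', 0.30),
])
# class_means: frac-add -> 0.70, frac-sub -> 0.30
```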

Real-Time WebSocket for Live Updates

Pushes student progress events in real-time
http
WS /ws/dashboard/{class_id}
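Under the hood, the live-update channel is a per-class fan-out: each dashboard connection subscribes to its class, and progress events are pushed to every subscriber. A framework-agnostic sketch of that core (FastAPI/WebSocket glue omitted; class and field names are illustrative):

```python
# Sketch of the per-class fan-out behind WS /ws/dashboard/{class_id}.
# Each connected dashboard gets its own queue; publishing an event
# delivers it to every subscriber for that class.
import asyncio
from collections import defaultdict

class ProgressBroker:
    def __init__(self):
        self._subs: dict[str, set[asyncio.Queue]] = defaultdict(set)

    def subscribe(self, class_id: str) -> asyncio.Queue:
        q: asyncio.Queue = asyncio.Queue()
        self._subs[class_id].add(q)
        return q

    def unsubscribe(self, class_id: str, q: asyncio.Queue) -> None:
        self._subs[class_id].discard(q)

    async def publish(self, class_id: str, event: dict) -> None:
        for q in self._subs[class_id]:
            await q.put(event)

async def demo() -> dict:
    broker = ProgressBroker()
    q = broker.subscribe('class-4a')
    await broker.publish('class-4a', {'student': 's1', 'p_know': 0.72})
    return await q.get()

event = asyncio.run(demo())
```

In the real endpoint, the WebSocket handler would subscribe on connect, forward queue items to the socket, and unsubscribe on disconnect.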
Note

Teachers should be trained to check the dashboard at least weekly. Set up automated email digests (Monday morning) summarizing the prior week's class progress. The struggle alert feature is critical: when a student gets 5+ consecutive wrong answers on a skill, the teacher receives an email/push notification suggesting direct intervention. AI interaction logs are part of a student's education record under FERPA, so the school must be able to review them via the dashboard and produce them for parents or eligible students on request.
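The struggle-alert trigger described above amounts to a per-(student, skill) streak counter that fires once when the configured threshold is crossed. A sketch, with the notify callback standing in for the real email/push integration:

```python
# Sketch of the struggle-alert trigger: count consecutive wrong answers per
# (student, skill) and fire exactly once at the threshold. The notify
# callback is a placeholder for the email/push notification service.
from collections import defaultdict
from typing import Callable

class StruggleDetector:
    def __init__(self, threshold: int = 5,
                 notify: Callable[[str, str], None] = lambda s, k: None):
        self.threshold = threshold
        self.notify = notify
        self._streaks: dict[tuple[str, str], int] = defaultdict(int)

    def record(self, student_id: str, skill_id: str, is_correct: bool) -> bool:
        """Return True exactly when an alert fires."""
        key = (student_id, skill_id)
        if is_correct:
            self._streaks[key] = 0  # a correct answer resets the streak
            return False
        self._streaks[key] += 1
        if self._streaks[key] == self.threshold:  # fire once, not every miss
            self.notify(student_id, skill_id)
            return True
        return False

alerts = []
det = StruggleDetector(threshold=3, notify=lambda s, k: alerts.append((s, k)))
fired = [det.record('s1', 'frac-add', c)
         for c in [False, False, True, False, False, False]]
# the correct answer resets the streak; the alert fires on the 3rd
# consecutive miss after it
```

Firing only on the exact threshold (rather than `>=`) avoids re-notifying the teacher on every subsequent miss.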

Step 13: Pilot Deployment and Iterative Testing

Launch the adaptive practice platform with a pilot group of 1-2 classrooms (25-50 students) for 2-4 weeks before full rollout. Collect teacher and student feedback, monitor system performance, and iterate on agent behavior.

1. Select pilot classrooms (ideally one strong and one struggling class). Configure pilot students in Clever with a 'pilot' tag.
2. Monitor system performance during pilot via the LangSmith dashboard (https://smith.langchain.com/): monitor agent latency (target: <3 seconds per interaction), review interaction traces for quality, and track token usage and costs.
3. Run Azure Monitor queries for system health.
4. Run automated quality checks with pytest.
5. Collect feedback: daily 1-minute teacher survey (Google Form), weekly student satisfaction survey (emoji-based for younger students), and bi-weekly meeting with pilot teachers to review and adjust.
6. Iterate on prompts based on pilot feedback. Common adjustments: hint verbosity (often needs to be simpler for younger grades), encouragement tone (adjust for age appropriateness), difficulty ramp speed (often too fast initially), and question format (multiple choice vs. free response by grade level).
Azure Monitor query for system health metrics
bash
az monitor metrics list \
  --resource adaptive-practice-api \
  --resource-group rg-adaptive-learning \
  --resource-type Microsoft.Web/sites \
  --metrics Http5xx HttpResponseTime CpuPercentage MemoryPercentage
Automated quality checks: verifies Socratic method enforcement, difficulty adjustment logic, BKT parameter updates, content filtering, and PostgreSQL interaction logging
bash
python -m pytest tests/test_agent_quality.py -v
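One representative check in the spirit of that suite (the real tests/test_agent_quality.py is not shown here, so treat this as an assumed example): Socratic-method enforcement means tutoring feedback must never leak the answer verbatim.

```python
# Representative quality check (assumed, not the deployed test file):
# verify tutoring feedback never contains the answer as a whole token.
import re

def leaks_answer(feedback: str, answer: str) -> bool:
    """True if the answer appears in the feedback as a whole word/number."""
    return re.search(r'\b' + re.escape(answer) + r'\b', feedback) is not None

def test_socratic_no_answer_leak():
    answer = '42'
    good = 'What do you get if you multiply 6 by 7? Try it step by step.'
    bad = 'Almost! The correct answer is 42. Try again.'
    assert not leaks_answer(good, answer)
    assert leaks_answer(bad, answer)

test_socratic_no_answer_leak()
```

The word-boundary match keeps '42' from matching inside '142' while still catching 'is 42.' with trailing punctuation.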
Critical

The pilot phase is CRITICAL. Do not skip it. The most common failures in adaptive learning deployment are: (1) AI responses are too complex for the grade level, (2) difficulty ramps too fast causing frustration, (3) teachers feel excluded from the process. Budget 2-4 weeks minimum for pilot. Have the MSP engineer on-site or available via Teams/Zoom during the first 3 days of pilot. Adjust LLM prompts daily during the first week based on teacher feedback.

Step 14: Full Production Rollout

After successful pilot validation, roll out the adaptive practice platform to all 200 students. Scale infrastructure, complete teacher training, and establish ongoing monitoring and support procedures.

1. Scale Azure App Service if needed.
2. Update Clever rostering to include all students: in Clever Dashboard > Sharing Rules > expand to all sections/grades.
3. Push the adaptive practice app/bookmark to all student Chromebooks: in Google Admin > Devices > Chrome > Apps & Extensions > Students OU, add managed bookmark 'Adaptive Practice' -> https://practice.yourdomain.com.
4. Enable auto-scaling for the API (Azure).
5. Set up alerting.
6. Verify all students can log in: run a login test script that simulates SSO for each student.
Scale Azure App Service workers
bash
az webapp config set \
  --name adaptive-practice-api \
  --resource-group rg-adaptive-learning \
  --number-of-workers 2
Or scale up the App Service plan
bash
az appservice plan update \
  --name asp-adaptive-learning \
  --resource-group rg-adaptive-learning \
  --sku P1V3
Enable auto-scaling for the API
bash
az monitor autoscale create \
  --resource-group rg-adaptive-learning \
  --resource asp-adaptive-learning \
  --resource-type Microsoft.Web/serverfarms \
  --name autoscale-adaptive \
  --min-count 1 --max-count 3 --count 1

az monitor autoscale rule create \
  --resource-group rg-adaptive-learning \
  --autoscale-name autoscale-adaptive \
  --scale out 1 \
  --condition 'CpuPercentage > 70 avg 5m'

az monitor autoscale rule create \
  --resource-group rg-adaptive-learning \
  --autoscale-name autoscale-adaptive \
  --scale in 1 \
  --condition 'CpuPercentage < 30 avg 5m'
Set up alerting for high error rate
bash
az monitor metrics alert create \
  --name 'High Error Rate' \
  --resource-group rg-adaptive-learning \
  --scopes /subscriptions/<SUB_ID>/resourceGroups/rg-adaptive-learning/providers/Microsoft.Web/sites/adaptive-practice-api \
  --condition 'avg Http5xx > 5' \
  --window-size 5m \
  --action-group ops-team-alerts
Verify all students can log in via SSO simulation
bash
python scripts/verify_all_students_login.py
Note

Schedule full rollout during the start of a new grading period or unit when possible. Notify all parents via the school's communication channel (ParentSquare, ClassDojo, email) that the AI tutoring system is launching, referencing the consent forms collected earlier. Have the MSP helpdesk prepared for increased support tickets in the first week. Common issues: students can't log in (Clever sync), slow performance (bandwidth), and confusion about how to use the tool (training gap).

Custom AI Components

LangGraph Adaptive Practice Orchestrator

Type: workflow The core LangGraph state machine that orchestrates the entire adaptive practice session. It manages the flow between assessment, tutoring, and curriculum selection agents, maintaining per-student session state including knowledge estimates, current difficulty level, streak counters, and emotional state signals (frustration detection). The graph routes students through a continuous cycle: assess current knowledge → select appropriate difficulty content → present question → eva...

Assessment Agent

Type: agent Evaluates student responses to determine correctness, identifies specific misconceptions, and classifies the type of error (computational, conceptual, careless). Uses Google Gemini 2.0 Flash for fast evaluation of most responses, escalating to GPT-4.1 for ambiguous or open-ended responses. The agent handles multiple response types: multiple choice, numeric entry, short answer, and free-form text explanations. Implementation: ``` # app/agents/assessment.py import os import ...

Tutoring Agent

Type: agent Generates personalized, age-appropriate feedback and scaffolded hints using Socratic questioning techniques. Never reveals the answer directly — instead guides the student toward the solution through progressive hints. Adjusts tone and complexity based on grade level and frustration signals (consecutive incorrect answers). Uses Gemini Flash for routine feedback, GPT-4.1 for detailed explanations of complex misconceptions. Implementation: ``` # app/agents/tutoring.py import...

Curriculum Agent

Type: agent Selects the next optimal practice item for a student based on their current knowledge state, target difficulty level, prerequisite relationships, and variety constraints. Queries the Pinecone vector database to find semantically relevant content at the appropriate difficulty level, and can dynamically generate new questions via LLM when the content library lacks items at the exact difficulty needed. Implementation: ``` # app/agents/curriculum.py import os import json impor...

Bayesian Knowledge Tracing Model

Type: skill Implements the Bayesian Knowledge Tracing (BKT) algorithm that maintains and updates a probabilistic estimate of each student's knowledge state for each skill. BKT uses four parameters per skill: P(L₀) initial knowledge, P(T) probability of learning on each practice opportunity, P(S) probability of slipping (wrong despite knowing), and P(G) probability of guessing (right despite not knowing). After each student response, the model updates P(know) using Bayes' rule.

Implementation:

app/models/bkt.py
python
# app/models/bkt.py
from dataclasses import dataclass

@dataclass
class BKTParams:
    """Bayesian Knowledge Tracing parameters for a single student-skill pair."""
    p_know: float = 0.3      # P(L_0): Prior probability of knowing the skill
    p_transit: float = 0.09  # P(T): Probability of learning on each opportunity
    p_slip: float = 0.10     # P(S): Probability of incorrect response despite knowing
    p_guess: float = 0.20    # P(G): Probability of correct response despite not knowing


class BayesianKnowledgeTracing:
    """Standard BKT implementation with extensions for adaptive difficulty."""
    
    def update(self, p_know: float, is_correct: bool,
               p_transit: float = 0.09, p_slip: float = 0.10,
               p_guess: float = 0.20) -> float:
        """
        Update the probability of knowledge given an observed response.
        
        Uses the standard BKT update equations:
        1. Posterior P(know | observation) via Bayes' rule
        2. P(know_new) = P(know | obs) + P(transit) * (1 - P(know | obs))
        
        Args:
            p_know: Current probability student knows the skill
            is_correct: Whether the student answered correctly
            p_transit: Probability of learning on this opportunity
            p_slip: Probability of slipping (wrong despite knowing)
            p_guess: Probability of guessing (right despite not knowing)
        
        Returns:
            Updated p_know (float between 0 and 1)
        """
        # Step 1: Compute posterior P(know | observation)
        if is_correct:
            # P(know | correct) = P(correct | know) * P(know) / P(correct)
            p_correct_given_know = 1.0 - p_slip
            p_correct_given_not_know = p_guess
            p_correct = (p_correct_given_know * p_know) + \
                        (p_correct_given_not_know * (1.0 - p_know))
            
            if p_correct == 0:
                posterior = p_know
            else:
                posterior = (p_correct_given_know * p_know) / p_correct
        else:
            # P(know | incorrect) = P(incorrect | know) * P(know) / P(incorrect)
            p_incorrect_given_know = p_slip
            p_incorrect_given_not_know = 1.0 - p_guess
            p_incorrect = (p_incorrect_given_know * p_know) + \
                          (p_incorrect_given_not_know * (1.0 - p_know))
            
            if p_incorrect == 0:
                posterior = p_know
            else:
                posterior = (p_incorrect_given_know * p_know) / p_incorrect
        
        # Step 2: Account for learning (transition)
        # P(know_new) = P(know | obs) + P(transit) * (1 - P(know | obs))
        p_know_new = posterior + (p_transit * (1.0 - posterior))
        
        # Clamp to valid probability range
        return max(0.001, min(0.999, p_know_new))
    
    def get_mastery_status(self, p_know: float) -> str:
        """Classify mastery level for display."""
        if p_know >= 0.95:
            return 'mastered'
        elif p_know >= 0.80:
            return 'proficient'
        elif p_know >= 0.50:
            return 'developing'
        elif p_know >= 0.30:
            return 'beginning'
        else:
            return 'not_started'
    
    def estimate_items_to_mastery(self, p_know: float, p_transit: float = 0.09,
                                  target: float = 0.95) -> int:
        """Estimate how many correct responses needed to reach mastery."""
        if p_know >= target:
            return 0
        
        items = 0
        current = p_know
        while current < target and items < 100:
            # Simulate a correct response
            current = self.update(current, is_correct=True, p_transit=p_transit)
            items += 1
        return items
    
    def optimal_difficulty(self, p_know: float) -> float:
        """
        Calculate the optimal item difficulty for maximum learning.
        Based on the Zone of Proximal Development principle:
        items should be challenging but achievable.
        
        Returns difficulty between 0.0 and 1.0.
        """
        # Target ~70% success rate (well-established in educational research)
        # Difficulty should be slightly above current ability
        optimal = p_know + 0.1  # Slightly harder than current knowledge
        optimal = max(0.1, min(0.9, optimal))  # Clamp to valid range
        return round(optimal, 2)
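To make the update equations concrete, here is a worked trace using a standalone copy of the same update step with the default parameters P(T)=0.09, P(S)=0.10, P(G)=0.20 (reproduced here so the example is self-contained):

```python
# Standalone copy of the BKT update above, traced from the prior P(L0)=0.3
# through five consecutive correct answers and then one miss.
def bkt_update(p_know: float, is_correct: bool,
               p_transit: float = 0.09, p_slip: float = 0.10,
               p_guess: float = 0.20) -> float:
    if is_correct:
        num = (1.0 - p_slip) * p_know
        den = num + p_guess * (1.0 - p_know)
    else:
        num = p_slip * p_know
        den = num + (1.0 - p_guess) * (1.0 - p_know)
    posterior = num / den if den else p_know
    # Transition step: P(know_new) = posterior + P(T) * (1 - posterior)
    return max(0.001, min(0.999, posterior + p_transit * (1.0 - posterior)))

p = 0.3  # P(L0)
trace = []
for _ in range(5):
    p = bkt_update(p, is_correct=True)
    trace.append(round(p, 3))
# trace rises monotonically: roughly 0.689, 0.917, 0.982, 0.996, 0.999
p_after_miss = bkt_update(p, is_correct=False)
# a single wrong answer pulls the estimate back below the mastery threshold
```

Note how quickly the estimate crosses the 0.95 mastery threshold with these defaults; this is exactly the behavior the quarterly BKT parameter review (P(T), P(G) tuning) is meant to sanity-check against real data.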

Clever SSO and Rostering Integration

Type: integration

Handles OAuth 2.0 authentication with Clever for student and teacher SSO, and syncs roster data (students, teachers, sections, enrollments) from Clever into the local database. Implements webhook listeners for real-time roster updates when students are added, removed, or transferred between classes in the SIS.

Implementation:

app/integrations/clever.py
python
# app/integrations/clever.py
import os
import httpx
from fastapi import APIRouter, Request, HTTPException
from fastapi.responses import RedirectResponse
from sqlalchemy import select
from app.database import async_session, Student

router = APIRouter(prefix='/auth/clever')

CLEVER_CLIENT_ID = os.environ['CLEVER_CLIENT_ID']
CLEVER_CLIENT_SECRET = os.environ['CLEVER_CLIENT_SECRET']
CLEVER_REDIRECT_URI = os.environ.get(
    'CLEVER_REDIRECT_URI',
    'https://adaptive-practice-api.azurewebsites.net/auth/clever/callback'
)
CLEVER_AUTH_URL = 'https://clever.com/oauth/authorize'
CLEVER_TOKEN_URL = 'https://clever.com/oauth/tokens'
CLEVER_API_BASE = 'https://api.clever.com/v3.0'


@router.get('/login')
async def clever_login():
    """Redirect to Clever OAuth login page."""
    auth_url = (
        f'{CLEVER_AUTH_URL}?'
        f'response_type=code&'
        f'redirect_uri={CLEVER_REDIRECT_URI}&'
        f'client_id={CLEVER_CLIENT_ID}&'
        f'scope=read:user_id%20read:sis'  # the space between scopes must be URL-encoded
    )
    return RedirectResponse(url=auth_url)


@router.get('/callback')
async def clever_callback(request: Request, code: str):
    """Handle Clever OAuth callback, exchange code for token, fetch user info."""
    async with httpx.AsyncClient() as client:
        # Exchange authorization code for access token
        token_response = await client.post(
            CLEVER_TOKEN_URL,
            data={
                'code': code,
                'grant_type': 'authorization_code',
                'redirect_uri': CLEVER_REDIRECT_URI,
            },
            auth=(CLEVER_CLIENT_ID, CLEVER_CLIENT_SECRET)
        )
        
        if token_response.status_code != 200:
            raise HTTPException(status_code=401, detail='Failed to authenticate with Clever')
        
        token_data = token_response.json()
        access_token = token_data['access_token']
        
        # Fetch user identity
        me_response = await client.get(
            f'{CLEVER_API_BASE}/me',
            headers={'Authorization': f'Bearer {access_token}'}
        )
        me_data = me_response.json()['data']
        user_type = me_data['type']  # 'student', 'teacher', 'district_admin'
        user_id = me_data['id']
        
        # Fetch full user profile
        user_response = await client.get(
            f'{CLEVER_API_BASE}/{user_type}s/{user_id}',
            headers={'Authorization': f'Bearer {access_token}'}
        )
        user_data = user_response.json()['data']
        
        # Upsert user in local database
        async with async_session() as session:
            existing = await session.execute(
                select(Student).where(Student.clever_id == user_id)
            )
            student = existing.scalar_one_or_none()
            
            if not student:
                student = Student(
                    clever_id=user_id,
                    first_name=user_data.get('name', {}).get('first', ''),
                    last_name=user_data.get('name', {}).get('last', ''),
                    # Clever grades can be non-numeric ('Kindergarten'), so
                    # only cast when the value is purely digits
                    grade_level=int(user_data['grade']) if str(user_data.get('grade', '')).isdigit() else None,
                )
                session.add(student)
            else:
                student.first_name = user_data.get('name', {}).get('first', student.first_name)
                student.last_name = user_data.get('name', {}).get('last', student.last_name)
                student.grade_level = int(user_data['grade']) if str(user_data.get('grade', '')).isdigit() else student.grade_level
            
            await session.commit()
        
        # Create session token and redirect to frontend
        from app.auth import create_session_token
        session_token = create_session_token(
            user_id=str(student.id),
            user_type=user_type,
            clever_token=access_token
        )
        
        return RedirectResponse(
            url=f'https://practice.yourdomain.com/session?token={session_token}'
        )


async def sync_roster(access_token: str):
    """Full roster sync from Clever API. Run nightly via scheduled job."""
    async with httpx.AsyncClient() as client:
        headers = {'Authorization': f'Bearer {access_token}'}
        
        # Fetch all students (Clever paginates; follow the response's
        # pagination links if the roster exceeds the page limit)
        students_response = await client.get(
            f'{CLEVER_API_BASE}/students',
            headers=headers,
            params={'limit': 1000}
        )
        students_data = students_response.json().get('data', [])
        
        async with async_session() as session:
            for s in students_data:
                s_data = s['data']
                existing = await session.execute(
                    select(Student).where(Student.clever_id == s_data['id'])
                )
                student = existing.scalar_one_or_none()
                
                if not student:
                    student = Student(
                        clever_id=s_data['id'],
                        first_name=s_data.get('name', {}).get('first', ''),
                        last_name=s_data.get('name', {}).get('last', ''),
                        grade_level=int(s_data['grade']) if str(s_data.get('grade', '')).isdigit() else None,  # grades may be non-numeric ('Kindergarten')
                    )
                    session.add(student)
                else:
                    student.first_name = s_data.get('name', {}).get('first', student.first_name)
                    student.last_name = s_data.get('name', {}).get('last', student.last_name)
                
            await session.commit()
    
    return {'synced': len(students_data)}
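The webhook-driven updates mentioned above ultimately reduce to reconciling Clever's roster against local records. A transport-agnostic sketch of that diff (the `diff_roster` helper and IDs are illustrative; database and webhook plumbing are omitted):

```python
# Sketch of roster reconciliation for the webhook/nightly sync: given the
# Clever IDs currently in the local DB and the set returned by the Clever
# API, compute additions, deactivations, and carry-overs.
def diff_roster(local_ids: set[str], clever_ids: set[str]) -> dict[str, set[str]]:
    return {
        'to_add': clever_ids - local_ids,         # new enrollments
        'to_deactivate': local_ids - clever_ids,  # withdrawn/transferred
        'unchanged': local_ids & clever_ids,
    }

delta = diff_roster({'stu_1', 'stu_2'}, {'stu_2', 'stu_3'})
# stu_3 is added; stu_1 is deactivated
```

Deactivation should be a soft delete, since FERPA retention rules require keeping the student's interaction history for the mandated period.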

Student Input Safety Filter

Type: skill A pre-processing filter that screens all student text inputs before they reach the LLM agents. Detects and blocks attempts at prompt injection, off-topic queries, inappropriate content, and PII leakage. Critical for COPPA compliance and safe deployment in K-12 environments.

Implementation:

app/agents/safety_filter.py
python
# app/agents/safety_filter.py
import re
from dataclasses import dataclass
from enum import Enum

class FilterResult(Enum):
    SAFE = 'safe'
    BLOCKED_INJECTION = 'blocked_prompt_injection'
    BLOCKED_INAPPROPRIATE = 'blocked_inappropriate'
    BLOCKED_PII = 'blocked_pii'
    BLOCKED_OFF_TOPIC = 'blocked_off_topic'

@dataclass
class FilterResponse:
    result: FilterResult
    original_input: str
    sanitized_input: str | None
    reason: str | None


class StudentInputSafetyFilter:
    """Filter student inputs for safety before passing to LLM agents."""
    
    # Prompt injection patterns
    INJECTION_PATTERNS = [
        r'ignore\s+(previous|above|all)\s+(instructions?|prompts?|rules?)',
        r'you\s+are\s+now\s+',
        r'forget\s+(everything|all|your)',
        r'system\s*prompt',
        r'(tell|give|show|reveal)\s+me\s+the\s+answer',
        r'what\s+is\s+the\s+(correct|right)\s+answer',
        r'pretend\s+(you|to\s+be)',
        r'act\s+as\s+(if|a)',
        r'jailbreak',
        r'DAN\s+mode',
        r'\[\s*SYSTEM\s*\]',
        r'<\s*system\s*>',
    ]
    
    # PII patterns (US-focused for student data)
    PII_PATTERNS = [
        r'\b\d{3}-\d{2}-\d{4}\b',           # SSN
        r'\b\d{3}\s\d{2}\s\d{4}\b',          # SSN with spaces
        r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b',  # Email
        r'\b\d{10}\b',                         # Phone (10 digits)
        r'\b\(\d{3}\)\s?\d{3}-\d{4}\b',      # Phone formatted
        r'\b\d{1,5}\s+[A-Za-z]+(\s+[A-Za-z]+)*\s+(St|Ave|Blvd|Dr|Rd|Ln|Way|Ct)\b',  # Street address
    ]
    
    # Inappropriate content keywords (age-appropriate)
    INAPPROPRIATE_KEYWORDS = [
        'kill', 'suicide', 'self-harm', 'cutting', 'die',
        'weapon', 'gun', 'bomb', 'drug', 'alcohol',
        'sex', 'porn', 'nude', 'naked',
        # Add more based on school policy
    ]
    
    def __init__(self):
        self.injection_regex = [re.compile(p, re.IGNORECASE) for p in self.INJECTION_PATTERNS]
        self.pii_regex = [re.compile(p) for p in self.PII_PATTERNS]
    
    def filter_input(self, student_input: str, context: str = 'practice') -> FilterResponse:
        """Filter student input for safety. Returns sanitized input or block reason."""
        
        if not student_input or not student_input.strip():
            return FilterResponse(
                result=FilterResult.SAFE,
                original_input=student_input,
                sanitized_input='',
                reason=None
            )
        
        text = student_input.strip()
        
        # Check for prompt injection
        for pattern in self.injection_regex:
            if pattern.search(text):
                return FilterResponse(
                    result=FilterResult.BLOCKED_INJECTION,
                    original_input=text,
                    sanitized_input=None,
                    reason='Input matched a prompt injection pattern'
                )
        
        # Check for PII
        for pattern in self.pii_regex:
            if pattern.search(text):
                # Redact PII rather than blocking entirely
                sanitized = pattern.sub('[REDACTED]', text)
                return FilterResponse(
                    result=FilterResult.BLOCKED_PII,
                    original_input=text,
                    sanitized_input=sanitized,
                    reason='PII detected and redacted'
                )
        
        # Check for inappropriate content.
        # Use word-boundary matching so substrings don't trigger false
        # positives ('kill' in 'skills', 'die' in 'studied', etc.).
        for keyword in self.INAPPROPRIATE_KEYWORDS:
            if re.search(r'\b' + re.escape(keyword) + r'\b', text, re.IGNORECASE):
                # Flag for counselor review if self-harm related
                if keyword in ('suicide', 'self-harm', 'cutting', 'die'):
                    # Trigger alert (handled by caller)
                    pass
                return FilterResponse(
                    result=FilterResult.BLOCKED_INAPPROPRIATE,
                    original_input=text,
                    sanitized_input=None,
                    reason='Inappropriate content detected'
                )
        
        # Check if response is plausibly on-topic (very basic heuristic)
        # In practice mode, responses should be short answers, numbers, or letters
        if context == 'practice' and len(text) > 500:
            return FilterResponse(
                result=FilterResult.BLOCKED_OFF_TOPIC,
                original_input=text,
                sanitized_input=text[:500],
                reason='Response exceeds expected length for practice mode'
            )
        
        return FilterResponse(
            result=FilterResult.SAFE,
            original_input=text,
            sanitized_input=text,
            reason=None
        )


# Usage in the main API endpoint:
# safety_filter = StudentInputSafetyFilter()
# filter_result = safety_filter.filter_input(student_response, context='practice')
# if filter_result.result == FilterResult.SAFE:
#     # Proceed with agent processing
# elif filter_result.result == FilterResult.BLOCKED_PII:
#     # Use sanitized input, log PII attempt
# else:
#     # Return friendly message to student, log the block

Testing & Validation

  • NETWORK TEST: From a student Chromebook connected to the 'StudentLearn' SSID, run a speed test confirming at least 100 Mbps download and 50 Mbps upload. Verify latency to api.openai.com and generativelanguage.googleapis.com is under 50ms using: ping -c 10 api.openai.com
  • SSO TEST: Log in as a test student via Clever Instant Login on a Chromebook. Verify the student is redirected to the adaptive practice frontend at practice.yourdomain.com with their name and grade displayed correctly. Repeat ...

Client Handoff

The client handoff meeting should be a 2-hour session with the school's leadership team, IT coordinator, and lead teachers. Cover the following topics:

1. System Overview (15 min): Walk through the architecture at a high level — how student inputs flow from the Chromebook to the AI agents and back, where data is stored, and what privacy protections are in place. Use a simple diagram showing: Student Device → Frontend → API → AI Agents → LMS.
2. Teacher Training (45 min): Demonstrate th...

Maintenance

Weekly Tasks

  • Review LangSmith dashboard for agent performance metrics: response latency, error rates, token usage, and any flagged interactions. Target: <1% error rate, <3s median latency.
  • Check Azure Monitor for infrastructure health: CPU/memory utilization, HTTP 5xx error count, database connection pool usage. Investigate any alerts triggered.
  • Review GoGuardian Smart Alerts for any student safety flags that require follow-up with school counselor.
  • Verify Clever roster sync completed successfully (check last_sync timestamp in the admin panel). Investigate any sync failures.

Monthly Tasks

  • Review and optimize LLM API costs. Analyze token usage by agent (Assessment vs. Tutoring vs. Curriculum) in LangSmith. If costs exceed budget, consider: shifting more Assessment Agent calls to direct pattern matching instead of LLM, reducing max_tokens on feedback generation, or switching more Tutoring Agent calls from GPT-4.1 to Gemini Flash.
  • Update curriculum content: work with teachers to add new practice items for recently taught skills. Re-run the embedding pipeline for new items.
  • Review AI interaction quality: randomly sample 20 student interactions per month and grade them for appropriateness, accuracy, and pedagogical quality. Flag any prompts that need adjustment.
  • Apply security patches to Azure App Service runtime, PostgreSQL, and all Python dependencies (pip audit and dependabot alerts).
  • Run a FERPA compliance spot-check: verify audit logs are being written, data retention jobs ran successfully, and no unauthorized access occurred.

Quarterly Tasks

  • Conduct a full BKT parameter review: analyze aggregate learning curves across all students. If mastery rates are too slow (students taking >30 items to master simple skills), adjust P(T) transit probability upward. If mastery is too easy (students mastering after 3-4 items), investigate P(G) guess parameter.
  • Teacher satisfaction survey and feedback session. Collect feature requests and prioritize for next quarter.
  • Update LLM model versions: evaluate new model releases (e.g., Gemini 2.1, GPT-4.2, Claude updates) by running the evaluation test suite against the new model. Only switch if quality improves without regression.
  • Review and update content filtering rules and whitelist/blocklist as new domains are added to the platform ecosystem.
  • Generate a quarterly analytics report for school leadership showing: student engagement metrics, mastery progression by grade/subject, AI cost per student, and comparison to baseline assessments (MAP Growth, iReady, or similar).

Annual Tasks

  • Full FERPA/COPPA compliance audit: verify all DPAs are current, review vendor privacy policies for changes, update parental consent forms if scope has changed, and run the data retention cleanup job.
  • Chromebook fleet assessment: identify devices needing replacement (broken screens, battery degradation, past their Auto Update Expiration date). Budget for a 10-15% annual replacement rate.
  • Network infrastructure review: assess bandwidth adequacy as student count grows, review Wi-Fi AP placement, and upgrade if needed.
  • Renew all software licenses: GoGuardian, IXL (if used), Khanmigo (if used), Pinecone, LangSmith, Azure reserved instances.
  • Strategic review with school leadership: assess whether to expand to additional subjects/grade levels, increase AI capabilities, or adjust the implementation approach.

SLA Commitments

  • Platform availability: 99.5% during school hours (M-F 7am-5pm local time)
  • Critical issue response: 4 hours during business hours
  • Standard issue response: 24 hours
  • AI quality issue response: 48 hours (may require prompt engineering)
  • Planned maintenance window: Saturdays 10pm-6am

Escalation Path

  • L1 (MSP Helpdesk): Device issues, login problems, basic troubleshooting → resolve within 4 hours
  • L2 (MSP Engineer): Platform bugs, performance degradation, integration failures → resolve within 24 hours
  • L3 (MSP AI Specialist): Agent behavior issues, prompt engineering, BKT parameter tuning, compliance concerns → resolve within 48 hours
  • L4 (Vendor Escalation): LLM API outages (OpenAI/Google status pages), Clever sync failures, Azure infrastructure issues → follow vendor SLAs

Alternatives

Turnkey Platform Deployment (Khanmigo + IXL)

Instead of building a custom AI agent, deploy two proven turnkey adaptive learning platforms: Khan Academy's Khanmigo ($4/student/month) for AI-powered Socratic tutoring, and IXL ($369/year per 25-student classroom) for adaptive skill practice with built-in difficulty adjustment. Both integrate with Google Classroom via their respective APIs and support Clever SSO. No custom development required.

SchoolAI Managed Spaces

Deploy SchoolAI's platform which provides teacher-controlled AI 'Spaces' where educators can create custom AI tutoring experiences with guardrails. SchoolAI handles FERPA/COPPA compliance (SOC 2 certified), offers a freemium model, and provides real-time teacher visibility into student-AI interactions. Teachers create 'Spaces' with custom prompts and curriculum alignment without needing any code.

Open-Source Self-Hosted Stack (Moodle + OpenAI API)

Deploy Moodle as the LMS (free, open-source) with the Moodle AI Subsystem plugin, combined with direct OpenAI API integration for adaptive tutoring. Host everything on a single Azure VM or on-premises server. Use Moodle's built-in quiz engine with adaptive mode (questions that provide immediate feedback and allow retries with penalty) supplemented by AI-generated hints.

Note

BEST FOR: Schools already using Moodle, organizations requiring full data sovereignty (on-premises hosting), or international deployments where Clever/Google Classroom are not standard.

Microsoft 365 + Azure AI (Copilot-Based)

Leverage Microsoft 365 Education (A3/A5) with Microsoft Copilot for Education as the AI tutor, integrated with Teams for Education as the learning environment. Use Azure AI Services (Azure OpenAI) instead of direct OpenAI API for enterprise-grade compliance and data residency. Build custom adaptive logic using Power Automate flows and Azure Functions rather than LangGraph.

Hybrid Approach: Turnkey Platform Now + Custom Agent Later

Deploy IXL or Khanmigo immediately (2-4 weeks) to give students adaptive practice right away, while simultaneously beginning development of the custom LangGraph-based agent system (3-6 months). Once the custom system is validated through pilot testing, gradually migrate students from the turnkey platform to the custom system subject-by-subject. Maintain the turnkey platform as a fallback and for subjects not yet covered by custom content.
