
Implementation Guide: Research property permit history, HOA rules, and zoning, and compile a buyer briefing

Step-by-step implementation guide for deploying AI to research property permit history, HOA rules, and zoning, and to compile a buyer briefing for real estate clients.

Hardware Procurement

Cloud Application Server


Amazon Web Services (AWS) | EC2 t3.medium (2 vCPU, 4GB RAM, 50GB EBS gp3) | Qty: 1

$30–$45/month MSP cost / $75–$120/month suggested resale (bundled into managed service)

Primary application server hosting the CrewAI agent orchestration runtime, n8n workflow automation, Redis task queue, and PostgreSQL database. The t3.medium provides sufficient compute for concurrent agent executions handling up to 50 briefings per day.

On-Premises Server (Optional — for clients requiring data sovereignty)


Dell Technologies | PowerEdge T360 (Intel Xeon E-2400 series, 32GB DDR5 ECC, 2x 1TB NVMe SSD RAID-1) | Qty: 1

$2,674–$3,200 MSP cost / $3,600–$4,300 suggested resale

Optional on-premises deployment for brokerages with strict data residency requirements or those processing high volumes (100+ briefings/day). Hosts all application services via Docker Compose. Only recommend if the client explicitly requires on-prem or processes extremely high volumes.

Network-Attached Storage for Document Archive

Synology | DS224+ (2-bay, 2x 4TB Seagate IronWolf NAS drives) | Qty: 1

$450–$550 MSP cost / $700–$850 suggested resale

Optional local archive for generated briefing PDFs, cached HOA documents, and permit records. Provides redundant storage with Synology Hyper Backup to S3. Only needed for on-prem deployments or clients who want a local document cache alongside cloud storage.

Software Procurement

CrewAI (Open Source + Cloud)

CrewAI Inc. | SaaS (free tier or paid plans)

$0/month (open-source self-hosted) or $99/month (Starter plan with 500 executions) or $199/month (Pro plan with 5,000 executions)

Multi-agent orchestration framework that coordinates the Permit Research Agent, Zoning Analysis Agent, HOA Research Agent, and Briefing Compiler Agent. The open-source version is recommended for MSP-managed deployments; the cloud plan adds observability dashboards and managed hosting.

n8n Workflow Automation

n8n GmbH | SaaS or self-hosted (free)

$0/month (self-hosted) or €20/month (Starter SaaS with 2,500 executions)

Visual workflow automation platform that handles CRM webhook triggers, file generation, PDF delivery, email notifications, and Zapier-like integrations. MSP technicians can modify workflows without Python expertise. Self-hosted on the application server.

OpenAI API (GPT-4.1)

OpenAI | GPT-4.1 | Qty: usage-based API

$2.00/M input tokens + $8.00/M output tokens; estimated $80–$200/month for 200 briefings/month

Primary LLM for agent reasoning, document analysis, HOA rule extraction, and briefing text generation. GPT-4.1's 1M token context window is critical for processing lengthy HOA CC&R documents and zoning codes in a single pass.

OpenAI API (GPT-4o mini)

OpenAI | GPT-4o mini

$0.15/M input tokens + $0.60/M output tokens; estimated $5–$15/month for 200 briefings/month

Cost-efficient model for simpler sub-tasks: data formatting, address normalization, preliminary filtering, and structured data extraction from API responses. Used by worker agents to reduce overall LLM spend.
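The routing decision that keeps LLM spend down can be as simple as a task-type lookup. A minimal sketch (the model identifiers and task names are illustrative placeholders, not part of any vendor API):

```python
# Illustrative model router: heavyweight reasoning goes to the primary model,
# cheap structured sub-tasks go to the mini model. Task names are examples.
PRIMARY_MODEL = "gpt-4.1"      # placeholder identifier for the primary model
MINI_MODEL = "gpt-4o-mini"     # placeholder identifier for the mini model

CHEAP_TASKS = {
    "address_normalization",
    "data_formatting",
    "preliminary_filtering",
    "structured_extraction",
}

def pick_model(task_type: str) -> str:
    """Return the model name a worker agent should use for a given sub-task."""
    return MINI_MODEL if task_type in CHEAP_TASKS else PRIMARY_MODEL
```

Worker agents call `pick_model()` per sub-task, so HOA rule extraction and briefing composition stay on the primary model while bulk formatting runs at mini-model rates.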

Shovels Permit Data API

Shovels Inc. | API subscription | Qty: 500–1,000 lookups

Contact sales for pricing; estimated $200–$500/month for 500–1,000 lookups

Primary data source for building permit history across 2,000+ US jurisdictions. Returns permit type, issue date, contractor info, inspection status, and estimated project cost. Also provides Decisions data from city council/planning board meetings.

Zoneomics Zoning API

Zoneomics | API (per-request) | Qty: 500 lookups

Estimated $150–$400/month for 500 lookups based on per-request pricing

Returns zoning classification, allowed uses, density limits, setback requirements, height restrictions, and overlay districts for any US parcel. Provides Zoning Briefs for quick summaries and Full Reports for detailed analysis.

ATTOM Property Data API

ATTOM Data Solutions | API subscription

$300–$700/month depending on endpoints and volume tier

Comprehensive property data including ownership history, tax assessments, transaction history, building characteristics, school district profiles, FEMA flood zone designation, and neighborhood demographics. Serves as the foundational data layer.
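All three data APIs are metered and occasionally return transient errors, so failed calls should be retried with backoff rather than surfaced as failed briefings. The production stack pins the tenacity library for this; the behavior can be sketched in plain Python (the function names here are illustrative):

```python
import time

def fetch_with_retry(fetch, max_attempts: int = 4, base_delay: float = 1.0):
    """Call fetch(); on exception, retry with exponential backoff.

    Mirrors the retry behavior tenacity provides in the production stack:
    delays of base_delay, 2x, 4x, ... between attempts, re-raising after
    the final attempt fails.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fetch()
        except Exception:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))
```

Wrapping each Shovels/Zoneomics/ATTOM call this way keeps a single flaky response from failing an entire briefing run.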

Follow Up Boss CRM Integration

Follow Up Boss | Per-seat SaaS (client pays directly) | Qty: per user

$58–$139/user/month (client's existing subscription); API access included

Target CRM for briefing delivery. The system pushes completed briefing PDFs and summary notes into the contact/deal record via Follow Up Boss Open API. Webhook triggers initiate briefing generation when a buyer is tagged on a property.
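Pushing the completed briefing into the contact record is a single authenticated POST. A standard-library sketch that builds the request (Follow Up Boss uses HTTP Basic auth with the API key as the username; verify the endpoint and field names against the client's API documentation before go-live):

```python
import base64
import json
import urllib.request

def build_fub_note_request(api_key: str, person_id: int, body: str,
                           base_url: str = "https://api.followupboss.com/v1"):
    """Build (but do not send) a request attaching a briefing note to a contact.

    Assumes the Follow Up Boss notes endpoint accepts personId/subject/body;
    confirm against the client's account before relying on this shape.
    """
    payload = json.dumps({
        "personId": person_id,
        "subject": "Buyer Property Briefing",
        "body": body,
    }).encode()
    # Basic auth: API key as username, empty password
    token = base64.b64encode(f"{api_key}:".encode()).decode()
    return urllib.request.Request(
        f"{base_url}/notes", data=payload, method="POST",
        headers={"Authorization": f"Basic {token}",
                 "Content-Type": "application/json"})
```

The Celery task can send the built request with `urllib.request.urlopen(req)` (or the project's httpx client) once the PDF URL is ready.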

FairSentry Compliance Screening

FairSentry | SaaS subscription

Estimated $100–$300/month depending on volume

Automated Fair Housing Act compliance scanner that reviews generated briefing content for language that could constitute steering or discrimination based on protected characteristics. Integrated as a final-pass quality gate before briefing delivery.
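FairSentry remains the authoritative gate; its API is vendor-specific, so the sketch below shows only an illustrative local pre-filter the pipeline could run before the paid call. The flagged-phrase list is a placeholder for demonstration, not legal guidance:

```python
# Illustrative pre-screen for steering language; the real compliance check
# is the FairSentry pass. Phrases below are placeholder examples only.
FLAGGED_PHRASES = [
    "perfect for families",   # familial status
    "safe neighborhood",      # commonly cited coded steering language
    "exclusive community",
]

def prescreen_briefing(text: str) -> list[str]:
    """Return flagged phrases found in the briefing text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in FLAGGED_PHRASES if phrase in lowered]
```

A non-empty result can short-circuit the pipeline into `review` status before any FairSentry credits are spent.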

LangSmith Observability

LangChain Inc. | SaaS (free tier or $39/user/month Plus)

$0/month (Developer: 5K traces/month) or $39/user/month (Plus: 10K traces)

LLM observability platform for tracing agent executions, debugging failed briefings, monitoring token usage, and tracking latency. Essential for MSP support and optimization.

WeasyPrint PDF Generator

CourtBouillon | Open source (BSD)

$0

Generates branded PDF briefing documents from HTML/CSS templates. Runs server-side with no external API dependency. Supports custom headers, footers, logos, and professional formatting.

Redis

Open Source / Redis Ltd. | Open source (BSD-3)

$0 (self-hosted on application server)

In-memory task queue and caching layer. Queues briefing generation jobs, caches frequently-accessed API responses (e.g., zoning data for recently-queried parcels), and manages agent state.

PostgreSQL

Open Source / PostgreSQL Global Development Group | Open source (PostgreSQL License)

$0 (self-hosted on application server)

Primary database storing briefing history, client/property records, API response caches, audit logs, and compliance screening results.

Prerequisites

  • Client must have an active CRM subscription with API access enabled (Follow Up Boss Pro plan at minimum for API/webhook access, or kvCORE/BoldTrail with API credentials)
  • Client must provide a list of target geographic markets (counties/municipalities) to configure permit and zoning API coverage
  • Client must provide brokerage branding assets: logo (SVG/PNG, min 300px wide), brand color hex codes, and preferred briefing header text for PDF template customization
  • MSP must have an AWS account (or Azure/GCP equivalent) with billing configured and IAM permissions to create EC2 instances, S3 buckets, and security groups
  • MSP must obtain API keys for: OpenAI Platform, Shovels, Zoneomics, and ATTOM Data — each requires separate vendor registration and may take 1–5 business days for approval
  • Python 3.11+ development environment with pip/poetry available on the MSP technician's workstation for initial development and testing
  • Docker Engine 24.0+ and Docker Compose v2 installed on the deployment target (EC2 instance or on-prem server)
  • A registered domain name or subdomain for the application (e.g., briefings.clientbrokerage.com) with DNS control for SSL certificate provisioning
  • SMTP credentials or SendGrid/Mailgun API key for email delivery of completed briefings
  • Client must designate a compliance officer or broker-of-record who will review and approve the briefing template and Fair Housing compliance guardrails before go-live
  • SSH key pair generated for secure access to the application server; MSP retains administrative access
  • Client must provide 5–10 sample properties with known permit history and zoning for validation testing during UAT phase

Installation Steps

Step 1: Provision Cloud Infrastructure

Create the application server on AWS EC2 with appropriate security groups, storage, and networking. This server will host all application components via Docker Compose.

bash
# Create a security group allowing SSH (22), HTTPS (443), and n8n UI (5678) from the MSP IP only
aws ec2 create-security-group --group-name briefing-agent-sg --description "Buyer Briefing Agent Security Group" --vpc-id <VPC_ID>
aws ec2 authorize-security-group-ingress --group-id <SG_ID> --protocol tcp --port 22 --cidr <MSP_OFFIC...

Step 2: Configure Server Environment

SSH into the provisioned server; install Docker, Docker Compose, and base system dependencies. Configure the firewall, swap space, and automatic security updates.

bash
ssh -i briefing-agent-key.pem ubuntu@<ELASTIC_IP>
# Update system packages
sudo apt update && sudo apt upgrade -y
# Install Docker
curl -fsSL https://get.docker.com | sudo sh
sudo usermod -aG docker ubuntu
# Install Docker Compose v2
sudo apt install docker-compose-plugin -y
# Verify installations
docker --version
docker compose versi...

Step 3: Set Up Project Directory Structure and Environment Variables

Create the application directory structure, clone the project repository (or initialize from template), and configure all environment variables with API keys and credentials.

1. Create the project directory structure
2. Navigate into the project directory
3. Create the environment file (NEVER commit this to version control)

Create directory structure and populate the .env configuration file
bash
mkdir -p ~/buyer-briefing-agent/{agents,tools,templates,configs,output,logs}
cd ~/buyer-briefing-agent
cat > .env << 'EOF'
# LLM API Keys
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxx

# Data API Keys
SHOVELS_API_KEY=shv_xxxxxxxxxxxxxxxxxxxx
ZONEOMICS_API_KEY=zm_xxxxxxxxxxxxxxxxxxxx
ATTOM_API_KEY=attom_xxxxxxxxxxxxxxxxxxxx

# CRM Integration
FUB_API_KEY=fub_xxxxxxxxxxxxxxxxxxxx
FUB_BASE_URL=https://api.followupboss.com/v1

# Compliance
FAIRSENTRY_API_KEY=fs_xxxxxxxxxxxxxxxxxxxx

# Observability
LANGSMITH_API_KEY=ls_xxxxxxxxxxxxxxxxxxxx
LANGSMITH_PROJECT=buyer-briefing-agent
LANGCHAIN_TRACING_V2=true

# Database
POSTGRES_USER=briefing_app
POSTGRES_PASSWORD=<GENERATE_STRONG_PASSWORD>
POSTGRES_DB=buyer_briefings
DATABASE_URL=postgresql://briefing_app:<PASSWORD>@postgres:5432/buyer_briefings

# Redis
REDIS_URL=redis://redis:6379/0

# S3
AWS_S3_BUCKET=<CLIENT_NAME>-buyer-briefings
AWS_DEFAULT_REGION=us-east-1

# Application
APP_ENV=production
APP_SECRET_KEY=<GENERATE_STRONG_SECRET>
BRIEFING_OUTPUT_DIR=/app/output
LOG_LEVEL=INFO

# n8n / domain (referenced by docker-compose.yml variable substitution)
DOMAIN=<CLIENT_DOMAIN>
N8N_ADMIN_PASSWORD=<GENERATE_STRONG_PASSWORD>
EOF
chmod 600 .env
Note

Generate strong passwords using: openssl rand -hex 32. Each API key must be obtained from the respective vendor portal. Shovels and ATTOM may require 2–5 business day approval. Store a copy of the .env file in the MSP's encrypted password vault (e.g., IT Glue, Hudu). NEVER commit .env to git.

Step 4: Create Docker Compose Stack

Define the complete application stack including the CrewAI agent service, n8n workflow automation, PostgreSQL database, Redis cache, and Nginx reverse proxy.

docker-compose.yml — full application stack definition
yaml
cat > ~/buyer-briefing-agent/docker-compose.yml << 'YAML'
version: "3.8"

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: briefing-agent
    restart: unless-stopped
    env_file: .env
    volumes:
      - ./output:/app/output
      - ./templates:/app/templates
      - ./logs:/app/logs
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    ports:
      - "8000:8000"
    networks:
      - briefing-net

  worker:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: briefing-worker
    restart: unless-stopped
    command: celery -A app.celery_app worker --loglevel=info --concurrency=4
    env_file: .env
    volumes:
      - ./output:/app/output
      - ./templates:/app/templates
      - ./logs:/app/logs
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    networks:
      - briefing-net

  n8n:
    image: n8nio/n8n:latest
    container_name: briefing-n8n
    restart: unless-stopped
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=admin
      - N8N_BASIC_AUTH_PASSWORD=${N8N_ADMIN_PASSWORD:-changeme}
      - N8N_HOST=n8n.${DOMAIN}
      - N8N_PORT=5678
      - WEBHOOK_URL=https://n8n.${DOMAIN}/
      - N8N_ENCRYPTION_KEY=${APP_SECRET_KEY}
    volumes:
      - n8n_data:/home/node/.n8n
    ports:
      - "5678:5678"
    networks:
      - briefing-net

  postgres:
    image: postgres:16-alpine
    container_name: briefing-postgres
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres_data:/var/lib/postgresql/data
      - ./configs/init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - "127.0.0.1:5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - briefing-net

  redis:
    image: redis:7-alpine
    container_name: briefing-redis
    restart: unless-stopped
    command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data
    ports:
      - "127.0.0.1:6379:6379"
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - briefing-net

volumes:
  postgres_data:
  redis_data:
  n8n_data:

networks:
  briefing-net:
    driver: bridge
YAML
Note

The worker service runs Celery for async briefing generation so the API server remains responsive. Concurrency of 4 means 4 briefings can be generated simultaneously. Increase to 8 on larger instances. n8n is exposed on port 5678 — restrict to MSP IP in security group.

Step 5: Create Application Dockerfile and Python Dependencies

Build the Python application container with all required dependencies for CrewAI agent orchestration, API clients, PDF generation, and database connectivity.

Dockerfile for the buyer briefing agent Python application
bash
cat > ~/buyer-briefing-agent/Dockerfile << 'DOCKERFILE'
FROM python:3.11-slim

WORKDIR /app

# Install system dependencies for WeasyPrint PDF generation
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    libpango-1.0-0 \
    libpangocairo-1.0-0 \
    libgdk-pixbuf2.0-0 \
    libffi-dev \
    shared-mime-info \
    && rm -rf /var/lib/apt/lists/*

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

EXPOSE 8000

CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000"]
DOCKERFILE
Python requirements.txt with pinned dependency versions
bash
cat > ~/buyer-briefing-agent/requirements.txt << 'EOF'
# Agent Framework
crewai==0.80.0
crewai-tools==0.14.0

# LLM Providers
openai==1.52.0
anthropic==0.39.0
langchain==0.3.7
langchain-openai==0.2.9
langchain-community==0.3.7
langsmith==0.1.147

# Web Framework & Async
fastapi==0.115.0
uvicorn[standard]==0.32.0
celery[redis]==5.4.0

# Database
sqlalchemy==2.0.36
alembic==1.14.0
psycopg2-binary==2.9.10
asyncpg==0.30.0

# Data APIs
httpx==0.27.2
aiohttp==3.11.0
tenacity==9.0.0

# PDF Generation
weasyprint==62.3
jinja2==3.1.4
markdown==3.7

# AWS
boto3==1.35.0

# Utilities
pydantic==2.10.0
pydantic-settings==2.6.0
python-dotenv==1.0.1
structlog==24.4.0
redis==5.2.0
EOF
Note

Pin all dependency versions to ensure reproducible builds. WeasyPrint requires system-level libraries (Pango, GDK-Pixbuf) for PDF rendering — these are installed via apt in the Dockerfile. Test the build locally with 'docker build -t briefing-agent .' before deploying.

Step 6: Initialize Database Schema

Create the PostgreSQL initialization script that sets up tables for briefing records, property data cache, audit logs, and compliance screening results.

bash
cat > ~/buyer-briefing-agent/configs/init.sql << 'SQL'
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE properties (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    address TEXT NOT NULL,
    city TEXT,
    state TEXT,
    zip_code TEXT,
    county TEXT,
    parcel_id TEXT,
    latitude DOUBLE PRECISION,
    longitude DOUBLE PRECISION,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE briefings (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    property_id UUID REFERENCES properties(id),
    requested_by TEXT,
    crm_contact_id TEXT,
    status TEXT DEFAULT 'pending' CHECK (status IN ('pending', 'processing', 'review', 'completed', 'failed')),
    permit_data JSONB,
    zoning_data JSONB,
    hoa_data JSONB,
    property_data JSONB,
    compiled_briefing TEXT,
    pdf_url TEXT,
    compliance_status TEXT DEFAULT 'pending',
    compliance_notes TEXT,
    error_message TEXT,
    processing_time_seconds FLOAT,
    total_tokens_used INTEGER,
    total_api_cost_cents INTEGER,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
    completed_at TIMESTAMP WITH TIME ZONE
);

CREATE TABLE api_cache (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    cache_key TEXT UNIQUE NOT NULL,
    api_source TEXT NOT NULL,
    response_data JSONB NOT NULL,
    expires_at TIMESTAMP WITH TIME ZONE NOT NULL,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE TABLE audit_log (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    briefing_id UUID REFERENCES briefings(id),
    event_type TEXT NOT NULL,
    agent_name TEXT,
    details JSONB,
    created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);

CREATE INDEX idx_properties_address ON properties(address);
CREATE INDEX idx_briefings_status ON briefings(status);
CREATE INDEX idx_briefings_property ON briefings(property_id);
CREATE INDEX idx_api_cache_key ON api_cache(cache_key);
CREATE INDEX idx_api_cache_expires ON api_cache(expires_at);
CREATE INDEX idx_audit_log_briefing ON audit_log(briefing_id);
SQL
Note

The api_cache table with TTL-based expiration prevents redundant API calls for recently-queried properties, saving significant API costs. Default cache TTL should be 7 days for permit data, 30 days for zoning data, and 24 hours for HOA documents.

Step 7: Deploy Core Agent Application Code

Create the main FastAPI application with Celery task integration, CrewAI agent definitions, and API endpoint for triggering briefing generation. This is the core application that the custom AI components (defined later) plug into.

1. Create the main application module directory

bash
mkdir -p ~/buyer-briefing-agent/app
2. Create empty __init__.py to mark app as a Python package

bash
cat > ~/buyer-briefing-agent/app/__init__.py << 'EOF'
EOF
3. Create the main FastAPI application file (app/main.py)

bash
# FastAPI app with briefing request/response models and endpoints

cat > ~/buyer-briefing-agent/app/main.py << 'PYTHON'
from fastapi import FastAPI, HTTPException, BackgroundTasks
from pydantic import BaseModel
import uuid
from datetime import datetime
from app.celery_app import celery
from app.tasks import generate_briefing_task
import structlog

logger = structlog.get_logger()

app = FastAPI(
    title="Buyer Briefing Agent API",
    version="1.0.0",
    description="Autonomous AI agent for real estate buyer briefing generation"
)

class BriefingRequest(BaseModel):
    address: str
    city: str
    state: str
    zip_code: str
    crm_contact_id: str | None = None
    requested_by: str | None = None
    include_hoa: bool = True
    include_permits: bool = True
    include_zoning: bool = True

class BriefingResponse(BaseModel):
    briefing_id: str
    status: str
    message: str

@app.post("/api/v1/briefings", response_model=BriefingResponse)
async def create_briefing(request: BriefingRequest):
    briefing_id = str(uuid.uuid4())
    logger.info("briefing_requested", briefing_id=briefing_id, address=request.address)
    
    # Queue the briefing generation task
    generate_briefing_task.delay(
        briefing_id=briefing_id,
        address=request.address,
        city=request.city,
        state=request.state,
        zip_code=request.zip_code,
        crm_contact_id=request.crm_contact_id,
        requested_by=request.requested_by,
        include_hoa=request.include_hoa,
        include_permits=request.include_permits,
        include_zoning=request.include_zoning,
    )
    
    return BriefingResponse(
        briefing_id=briefing_id,
        status="queued",
        message="Briefing generation has been queued. Check status at /api/v1/briefings/{briefing_id}"
    )

@app.get("/api/v1/briefings/{briefing_id}")
async def get_briefing_status(briefing_id: str):
    from app.database import get_briefing
    briefing = await get_briefing(briefing_id)
    if not briefing:
        raise HTTPException(status_code=404, detail="Briefing not found")
    return briefing

@app.get("/health")
async def health_check():
    return {"status": "healthy", "timestamp": datetime.utcnow().isoformat()}
PYTHON
4. Create the Celery application configuration file (app/celery_app.py)

bash
# Celery instance configured with Redis broker and task limits

cat > ~/buyer-briefing-agent/app/celery_app.py << 'PYTHON'
from celery import Celery
import os

celery = Celery(
    "buyer_briefing",
    broker=os.getenv("REDIS_URL", "redis://redis:6379/0"),
    backend=os.getenv("REDIS_URL", "redis://redis:6379/0"),
)

celery.conf.update(
    task_serializer="json",
    accept_content=["json"],
    result_serializer="json",
    timezone="UTC",
    enable_utc=True,
    task_track_started=True,
    task_time_limit=600,  # 10 minute hard limit per briefing
    task_soft_time_limit=480,  # 8 minute soft limit
    worker_prefetch_multiplier=1,
    task_acks_late=True,
)
PYTHON
Note

The FastAPI endpoint receives briefing requests (from CRM webhooks or n8n workflows) and queues them to Celery. This decoupled architecture ensures the API responds instantly while briefings generate asynchronously. The 10-minute hard limit prevents runaway tasks from consuming resources.
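For end-to-end testing of the queue path, the caller (n8n or a test script) just POSTs JSON matching the BriefingRequest model. A standard-library sketch that builds the request without sending it (the domain is a placeholder):

```python
import json
import urllib.request

def build_briefing_request(base_url: str, address: str, city: str,
                           state: str, zip_code: str) -> urllib.request.Request:
    """Build a POST whose body mirrors the BriefingRequest model in app/main.py."""
    payload = json.dumps({
        "address": address,
        "city": city,
        "state": state,
        "zip_code": zip_code,
        "include_hoa": True,
        "include_permits": True,
        "include_zoning": True,
    }).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/briefings", data=payload, method="POST",
        headers={"Content-Type": "application/json"})
```

Sending it with `urllib.request.urlopen(req)` should return a `queued` status, after which `GET /api/v1/briefings/{briefing_id}` reports progress through the pending/processing/review/completed states.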

Step 8: Configure Nginx Reverse Proxy with SSL

Set up Nginx as a reverse proxy for the FastAPI application and n8n UI, with Let's Encrypt SSL certificates for secure HTTPS access.

1. Create the Nginx config
2. Obtain SSL certificates (ensure DNS A records point to the Elastic IP first)
3. Set up auto-renewal
Create Nginx reverse proxy config and enable site
bash
sudo tee /etc/nginx/sites-available/briefing-agent > /dev/null << 'NGINX'
server {
    listen 80;
    server_name briefings.<CLIENT_DOMAIN> n8n.<CLIENT_DOMAIN>;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name briefings.<CLIENT_DOMAIN>;

    ssl_certificate /etc/letsencrypt/live/briefings.<CLIENT_DOMAIN>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/briefings.<CLIENT_DOMAIN>/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 443 ssl http2;
    server_name n8n.<CLIENT_DOMAIN>;

    ssl_certificate /etc/letsencrypt/live/n8n.<CLIENT_DOMAIN>/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.<CLIENT_DOMAIN>/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
NGINX
sudo ln -s /etc/nginx/sites-available/briefing-agent /etc/nginx/sites-enabled/
sudo rm -f /etc/nginx/sites-enabled/default
Obtain SSL certificates and reload Nginx
bash
sudo certbot --nginx -d briefings.<CLIENT_DOMAIN> -d n8n.<CLIENT_DOMAIN> --non-interactive --agree-tos -m admin@<MSP_DOMAIN>
sudo nginx -t && sudo systemctl reload nginx
Enable automatic SSL certificate renewal
bash
sudo systemctl enable certbot.timer
Note

Ensure DNS A records for briefings.<CLIENT_DOMAIN> and n8n.<CLIENT_DOMAIN> point to the Elastic IP BEFORE running certbot. Because the config references certificate paths that do not exist until certbot has run, the first `sudo nginx -t` will fail; either comment out the two SSL server blocks until certificates are issued, or obtain them first with `sudo certbot certonly --standalone` (with Nginx temporarily stopped) and then enable the site. The websocket upgrade headers for n8n are essential for the workflow editor UI to function. Consider adding IP whitelisting via Nginx allow/deny directives for the n8n admin interface.
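The IP whitelisting suggested above is two directives inside the n8n server block; a sketch, with a placeholder CIDR (the RFC 5737 example range) standing in for the MSP office network:

```nginx
# Inside the n8n server block of /etc/nginx/sites-available/briefing-agent
location / {
    allow 203.0.113.0/24;   # MSP office range (placeholder)
    deny  all;

    proxy_pass http://127.0.0.1:5678;
    # keep the existing proxy_set_header and websocket upgrade directives here
}
```

Requests from any other source then receive 403 at the proxy, independent of the n8n basic-auth layer.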

Step 9: Build and Launch the Docker Compose Stack

Build the application container and start all services. Verify that all containers are running healthy and can communicate with each other.

1. Build and start all services
2. Verify all containers are running
3. Check logs for any startup errors
4. Verify the database is initialized
5. Verify Redis is responsive
6. Test the health endpoint
7. Verify n8n is accessible
bash
cd ~/buyer-briefing-agent
docker compose build --no-cache
docker compose up -d
docker compose ps
docker compose logs -f --tail=50
docker compose exec postgres psql -U briefing_app -d buyer_briefings -c '\dt'
docker compose exec redis redis-cli ping
curl -s https://briefings.<CLIENT_DOMAIN>/health | python3 -m json.tool
curl -s -o /dev/null -w '%{http_code}' https://n8n.<CLIENT_DOMAIN>
Note

All 5 containers should show 'Up (healthy)': app, worker, n8n, postgres, redis. If any container fails, check logs with 'docker compose logs <service_name>'. The database should show 4 tables (properties, briefings, api_cache, audit_log). The health endpoint should return {"status": "healthy"}.

Step 10: Configure n8n Workflows for CRM Integration

Set up n8n workflows that connect Follow Up Boss CRM events to the briefing agent API. Create the primary trigger workflow that initiates briefing generation when a buyer is tagged on a property in the CRM.

1. Access n8n at https://n8n.<CLIENT_DOMAIN> and log in with the credentials set in docker-compose.yml
2. Create Workflow 1: CRM Trigger -> Briefing Request
   - Node 1: Webhook node (receives POST from Follow Up Boss)
   - Node 2: Function node (extract address and contact from payload)
   - Node 3: HTTP Request node (POST to https://briefings.<CLIENT_DOMAIN>/api/v1/briefings)
   - Node 4: IF node (check for success response)
   - Node 5: Slack/Email notification on failure
3. Create Workflow 2: Briefing Complete -> CRM Update
   - Node 1: Webhook node (receives callback from briefing agent on completion)
   - Node 2: HTTP Request node (POST note to Follow Up Boss contact)
   - Node 3: HTTP Request node (send email with PDF link to agent)
4. Configure the Follow Up Boss webhook:
   - Go to Follow Up Boss > Admin > API > Webhooks
   - Add webhook URL: https://n8n.<CLIENT_DOMAIN>/webhook/fub-property-tag
   - Event: Note Created (filter for notes containing property addresses)
   - Or use Follow Up Boss Action Plans to trigger via API call
Confirm n8n workflow setup via web UI
bash
echo 'n8n workflow configuration is done via the web UI - see notes for detailed workflow JSON export'
Note

Follow Up Boss does not have a native 'property assigned to buyer' webhook event. The recommended pattern is: (1) Agent creates a deal/note with the property address in Follow Up Boss, (2) Follow Up Boss fires a webhook to n8n, (3) n8n extracts the address and calls the briefing API. Alternatively, create a custom Follow Up Boss Action Plan that calls the n8n webhook when a specific tag (e.g., 'generate-briefing') is applied to a contact. Export the completed n8n workflows as JSON and store in the project's configs/ directory for backup.

Step 11: Deploy PDF Template with Client Branding

Create the Jinja2 HTML template for the buyer briefing PDF, incorporating the client's brokerage branding (logo, colors, contact information).

1. Create the templates/assets directory
2. Copy the client logo to templates/assets/logo.png
3. Write the briefing HTML template file
Create the templates directory structure and write the Jinja2 HTML briefing template
bash
mkdir -p ~/buyer-briefing-agent/templates/assets
cat > ~/buyer-briefing-agent/templates/briefing_template.html << 'HTML'
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<style>
  @page {
    size: letter;
    margin: 0.75in;
    @top-center {
      content: "{{ brokerage_name }} — Buyer Property Briefing";
      font-size: 9pt;
      color: #888;
    }
    @bottom-center {
      content: "Page " counter(page) " of " counter(pages);
      font-size: 9pt;
      color: #888;
    }
  }
  body { font-family: 'Helvetica Neue', Arial, sans-serif; color: #333; line-height: 1.6; }
  .header { display: flex; align-items: center; border-bottom: 3px solid {{ brand_color | default('#1a5276') }}; padding-bottom: 15px; margin-bottom: 25px; }
  .header img { height: 60px; margin-right: 20px; }
  .header h1 { color: {{ brand_color | default('#1a5276') }}; font-size: 22pt; margin: 0; }
  .header .subtitle { color: #666; font-size: 11pt; }
  .section { margin-bottom: 25px; page-break-inside: avoid; }
  .section h2 { color: {{ brand_color | default('#1a5276') }}; border-bottom: 1px solid #ddd; padding-bottom: 5px; font-size: 14pt; }
  .section h3 { color: #555; font-size: 11pt; margin-bottom: 5px; }
  .property-details { background: #f8f9fa; border-radius: 8px; padding: 15px; margin-bottom: 20px; }
  .property-details table { width: 100%; border-collapse: collapse; }
  .property-details td { padding: 6px 10px; font-size: 10pt; }
  .property-details td:first-child { font-weight: bold; width: 35%; color: #555; }
  .permit-table { width: 100%; border-collapse: collapse; font-size: 9pt; }
  .permit-table th { background: {{ brand_color | default('#1a5276') }}; color: white; padding: 8px; text-align: left; }
  .permit-table td { border-bottom: 1px solid #eee; padding: 6px 8px; }
  .alert { background: #fff3cd; border-left: 4px solid #ffc107; padding: 10px 15px; margin: 10px 0; border-radius: 4px; }
  .alert.danger { background: #f8d7da; border-left-color: #dc3545; }
  .alert.success { background: #d4edda; border-left-color: #28a745; }
  .zoning-badge { display: inline-block; background: {{ brand_color | default('#1a5276') }}; color: white; padding: 3px 10px; border-radius: 12px; font-size: 10pt; font-weight: bold; }
  .footer { margin-top: 30px; padding-top: 15px; border-top: 2px solid {{ brand_color | default('#1a5276') }}; font-size: 9pt; color: #888; }
  .disclaimer { font-size: 8pt; color: #aaa; margin-top: 20px; font-style: italic; }
</style>
</head>
<body>
<div class="header">
  <img src="file:///app/templates/assets/logo.png" alt="{{ brokerage_name }}">
  <div>
    <h1>Buyer Property Briefing</h1>
    <div class="subtitle">{{ property_address }} — Prepared {{ generated_date }}</div>
  </div>
</div>

<div class="property-details">
  <table>
    <tr><td>Property Address</td><td>{{ property_address }}</td></tr>
    <tr><td>Parcel ID</td><td>{{ parcel_id | default('N/A') }}</td></tr>
    <tr><td>Property Type</td><td>{{ property_type | default('N/A') }}</td></tr>
    <tr><td>Year Built</td><td>{{ year_built | default('N/A') }}</td></tr>
    <tr><td>Lot Size</td><td>{{ lot_size | default('N/A') }}</td></tr>
    <tr><td>Living Area</td><td>{{ living_area | default('N/A') }} sq ft</td></tr>
    <tr><td>Flood Zone</td><td>{{ flood_zone | default('N/A') }}</td></tr>
  </table>
</div>

{% if permits %}
<div class="section">
  <h2>🔨 Permit History</h2>
  <p>{{ permit_summary }}</p>
  <table class="permit-table">
    <thead><tr><th>Date</th><th>Type</th><th>Description</th><th>Status</th><th>Est. Cost</th></tr></thead>
    <tbody>
    {% for permit in permits %}
      <tr>
        <td>{{ permit.date }}</td>
        <td>{{ permit.type }}</td>
        <td>{{ permit.description }}</td>
        <td>{{ permit.status }}</td>
        <td>{{ permit.cost | default('N/A') }}</td>
      </tr>
    {% endfor %}
    </tbody>
  </table>
  {% if permit_alerts %}
  {% for alert in permit_alerts %}
  <div class="alert {{ alert.severity }}">⚠️ {{ alert.message }}</div>
  {% endfor %}
  {% endif %}
</div>
{% endif %}

{% if zoning %}
<div class="section">
  <h2>🏗️ Zoning & Land Use</h2>
  <p>Zoning Classification: <span class="zoning-badge">{{ zoning.classification }}</span></p>
  <p>{{ zoning.summary }}</p>
  <h3>Allowed Uses</h3>
  <ul>{% for use in zoning.allowed_uses %}<li>{{ use }}</li>{% endfor %}</ul>
  <h3>Key Restrictions</h3>
  <table class="permit-table">
    <tr><td><strong>Max Height</strong></td><td>{{ zoning.max_height | default('N/A') }}</td></tr>
    <tr><td><strong>Setbacks (Front/Side/Rear)</strong></td><td>{{ zoning.setbacks | default('N/A') }}</td></tr>
    <tr><td><strong>Max Lot Coverage</strong></td><td>{{ zoning.max_coverage | default('N/A') }}</td></tr>
    <tr><td><strong>Density</strong></td><td>{{ zoning.density | default('N/A') }}</td></tr>
  </table>
</div>
{% endif %}

{% if hoa %}
<div class="section">
  <h2>🏘️ HOA Information</h2>
  <p>{{ hoa.summary }}</p>
  <h3>Key Rules & Restrictions</h3>
  <ul>{% for rule in hoa.key_rules %}<li>{{ rule }}</li>{% endfor %}</ul>
  <h3>Financial Summary</h3>
  <table class="permit-table">
    <tr><td><strong>Monthly Dues</strong></td><td>{{ hoa.monthly_dues | default('N/A') }}</td></tr>
    <tr><td><strong>Special Assessments</strong></td><td>{{ hoa.special_assessments | default('None known') }}</td></tr>
    <tr><td><strong>Transfer Fee</strong></td><td>{{ hoa.transfer_fee | default('N/A') }}</td></tr>
    <tr><td><strong>Reserve Fund Status</strong></td><td>{{ hoa.reserve_status | default('N/A') }}</td></tr>
  </table>
  {% if hoa.alerts %}
  {% for alert in hoa.alerts %}
  <div class="alert {{ alert.severity }}">⚠️ {{ alert.message }}</div>
  {% endfor %}
  {% endif %}
</div>
{% endif %}

<div class="section">
  <h2>📋 Key Findings & Recommendations</h2>
  {{ key_findings | safe }}
</div>

<div class="footer">
  <p>Prepared by {{ brokerage_name }} | {{ agent_name }} | {{ agent_phone }} | {{ agent_email }}</p>
</div>
<div class="disclaimer">
  This briefing is generated using public records and available data sources. It is intended for informational purposes only and does not constitute legal, financial, or professional real estate advice. Buyers should independently verify all information and conduct their own due diligence. {{ brokerage_name }} makes no guarantees regarding the accuracy or completeness of this data.
</div>
</body>
</html>
HTML
Note

Place the client's logo file at templates/assets/logo.png. Update the brand_color variable in the .env file or pass it via the template context. The disclaimer is critical for liability protection. Have the client's broker-of-record review and approve the disclaimer language before go-live.
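When passing brand_color through the template context, it helps to apply the same fallback the template itself uses, so the PDF renders consistently whether or not the .env value is set. A minimal sketch, assuming a helper function of our own naming (`briefing_context`) and an optional `BRAND_COLOR`/`BROKERAGE_NAME` pair of environment variables:

```python
import os

# Hypothetical helper: merge briefing data with branding defaults.
# The '#1a5276' fallback mirrors the `default(...)` filter used in the
# template, so an unset BRAND_COLOR still produces a consistent PDF.
def briefing_context(data: dict) -> dict:
    ctx = dict(data)
    ctx.setdefault("brand_color", os.getenv("BRAND_COLOR", "#1a5276"))
    ctx.setdefault("brokerage_name", os.getenv("BROKERAGE_NAME", ""))
    return ctx
```

Values already present in the context (e.g., supplied per-client by the API) always win over the environment fallback.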

Step 12: Run Initial Build, Database Migration, and Smoke Test

Rebuild the application with all components deployed, run database migrations, and execute a smoke test to verify end-to-end functionality.

1. Rebuild with all new code
2. Wait for all services to be healthy
3. Verify database tables
4. Test the API endpoint with a sample address
5. Monitor the worker logs to see agent execution
6. Check briefing status (use the briefing_id from the POST response)
Rebuild and restart all services
bash
cd ~/buyer-briefing-agent
docker compose down
docker compose build --no-cache
docker compose up -d
Wait for all services to be healthy
bash
sleep 15
docker compose ps
Verify database tables
bash
docker compose exec postgres psql -U briefing_app -d buyer_briefings -c '\dt'
Test the API endpoint with a sample address
bash
curl -X POST https://briefings.<CLIENT_DOMAIN>/api/v1/briefings \
  -H 'Content-Type: application/json' \
  -d '{
    "address": "1600 Pennsylvania Avenue NW",
    "city": "Washington",
    "state": "DC",
    "zip_code": "20500",
    "requested_by": "msp-smoke-test"
  }'
Monitor worker logs to see agent execution
bash
docker compose logs -f worker
Check briefing status (use the briefing_id from the POST response)
bash
curl -s https://briefings.<CLIENT_DOMAIN>/api/v1/briefings/<BRIEFING_ID> | python3 -m json.tool
Note

The smoke test should complete within 3–5 minutes. Check the worker logs for each agent's progress: Permit Research Agent -> Zoning Analysis Agent -> HOA Research Agent -> Briefing Compiler Agent. If any agent fails, check the error in LangSmith traces. Common issues: expired API keys, rate limits on first call, or incorrect address formatting.
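The manual status check above can be scripted into a polling loop for repeatable smoke tests. A sketch under stated assumptions: the `fetch_status` callable wraps the GET request shown above, and `'completed'`/`'failed'` are assumed terminal status values for the briefing record.

```python
import time

def wait_for_briefing(fetch_status, timeout_s: int = 600, interval_s: int = 15) -> dict:
    """Poll a status-fetching callable until the briefing reaches a
    terminal state or the timeout elapses.

    fetch_status() should return the parsed JSON from
    GET /api/v1/briefings/<BRIEFING_ID>; terminal statuses are assumed
    to be 'completed' or 'failed'.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status.get("status") in ("completed", "failed"):
            return status
        time.sleep(interval_s)
    raise TimeoutError(f"Briefing did not finish within {timeout_s}s")
```

With the 3–5 minute expected runtime, a 600-second timeout leaves headroom for first-call rate limits without masking a hung worker.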

Custom AI Components

Permit Research Agent

Type: agent

Autonomous agent that queries the Shovels API and ATTOM Property API to retrieve the complete building permit history for a target property. It identifies permit types (renovation, addition, electrical, plumbing, roofing, demolition), flags open/expired permits, notes code violations, and generates a narrative summary highlighting items of concern for a buyer (e.g., unpermitted work, failed inspections, recent major renovations).

Implementation:

agents/permit_research_agent.py
python
# agents/permit_research_agent.py
from crewai import Agent, Task
from crewai_tools import tool
import httpx
import os
import json
from tenacity import retry, stop_after_attempt, wait_exponential
import structlog

logger = structlog.get_logger()

SHOVELS_API_KEY = os.getenv('SHOVELS_API_KEY')
SHOVELS_BASE_URL = 'https://api.shovels.ai/v2'
ATTOM_API_KEY = os.getenv('ATTOM_API_KEY')
ATTOM_BASE_URL = 'https://api.gateway.attomdata.com/propertyapi/v1.0.0'


@tool('search_permits_shovels')
def search_permits_shovels(address: str, city: str, state: str, zip_code: str) -> str:
    """Search for building permits using the Shovels API.
    Returns permit history including type, date, status, contractor, and estimated cost.
    """
    try:
        headers = {'Authorization': f'Bearer {SHOVELS_API_KEY}', 'Content-Type': 'application/json'}
        params = {'address': f'{address}, {city}, {state} {zip_code}'}
        
        with httpx.Client(timeout=30) as client:
            response = client.get(f'{SHOVELS_BASE_URL}/permits', headers=headers, params=params)
            response.raise_for_status()
            data = response.json()
        
        if not data.get('permits'):
            return json.dumps({'permits': [], 'message': 'No permits found in Shovels database for this address.'})
        
        permits = []
        for p in data['permits']:
            permits.append({
                'date': p.get('issue_date', 'Unknown'),
                'type': p.get('permit_type', 'General'),
                'description': p.get('description', 'No description'),
                'status': p.get('status', 'Unknown'),
                'cost': p.get('estimated_cost', 'N/A'),
                'contractor': p.get('contractor_name', 'N/A'),
                'inspection_status': p.get('inspection_status', 'N/A'),
                'source': 'Shovels'
            })
        
        return json.dumps({'permits': permits, 'total_count': len(permits)})
    except httpx.HTTPStatusError as e:
        logger.error('shovels_api_error', status=e.response.status_code, detail=str(e))
        return json.dumps({'error': f'Shovels API error: {e.response.status_code}', 'permits': []})
    except Exception as e:
        logger.error('shovels_unexpected_error', error=str(e))
        return json.dumps({'error': str(e), 'permits': []})


@tool('search_permits_attom')
def search_permits_attom(address: str, city: str, state: str, zip_code: str) -> str:
    """Search for building permits using the ATTOM Property API as a supplementary source.
    Returns permit records with type, date, and status.
    """
    try:
        headers = {'apikey': ATTOM_API_KEY, 'Accept': 'application/json'}
        params = {
            'address1': address,
            'address2': f'{city}, {state} {zip_code}'
        }
        
        with httpx.Client(timeout=30) as client:
            response = client.get(f'{ATTOM_BASE_URL}/property/buildingpermits', headers=headers, params=params)
            response.raise_for_status()
            data = response.json()
        
        permits = []
        for prop in data.get('property', []):
            for bp in prop.get('building_permits', []):
                permits.append({
                    'date': bp.get('effectiveDate', 'Unknown'),
                    'type': bp.get('type', 'General'),
                    'description': bp.get('description', 'No description'),
                    'status': bp.get('status', 'Unknown'),
                    'cost': bp.get('totalProjectValuation', 'N/A'),
                    'source': 'ATTOM'
                })
        
        return json.dumps({'permits': permits, 'total_count': len(permits)})
    except Exception as e:
        logger.error('attom_permits_error', error=str(e))
        return json.dumps({'error': str(e), 'permits': []})


def create_permit_research_agent():
    return Agent(
        role='Senior Property Permit Research Analyst',
        goal='Thoroughly research and document the complete building permit history for the target property, identifying any red flags or items of concern for a prospective buyer.',
        backstory="""You are an experienced real estate due diligence researcher specializing in 
        building permit analysis. You have 15 years of experience reviewing permit records for 
        residential and commercial properties. You know that unpermitted work, open permits, 
        failed inspections, and code violations are critical findings that buyers need to know about. 
        You always cross-reference multiple data sources and clearly distinguish between confirmed 
        facts and items that need further verification.""",
        tools=[search_permits_shovels, search_permits_attom],
        llm='gpt-4.1',
        verbose=True,
        memory=True,
        max_iter=5,
        allow_delegation=False
    )


def create_permit_research_task(agent, address: str, city: str, state: str, zip_code: str):
    return Task(
        description=f"""Research the complete building permit history for the property at:
        {address}, {city}, {state} {zip_code}
        
        Steps:
        1. Query the Shovels API for permit records
        2. Query the ATTOM API for supplementary permit data
        3. Deduplicate and merge results from both sources
        4. Sort permits chronologically (newest first)
        5. Flag any concerning findings:
           - Open or expired permits (work started but not finalized)
           - Failed inspections
           - Code violations
           - Permits suggesting major structural work (foundation, roof, load-bearing walls)
           - Permits that may indicate previous damage (fire, flood, mold remediation)
           - Any work done without permits that other records suggest occurred
        6. Generate a narrative summary suitable for a buyer briefing
        
        IMPORTANT: Do NOT include any language that could be interpreted as steering based on 
        race, religion, national origin, sex, familial status, or disability. Focus strictly 
        on the physical property and its documented history.""",
        expected_output="""A JSON object with the following structure:
        {
            "permits": [
                {
                    "date": "YYYY-MM-DD",
                    "type": "permit type",
                    "description": "description of work",
                    "status": "Closed/Open/Expired/Failed",
                    "cost": "estimated cost or N/A",
                    "contractor": "contractor name or N/A"
                }
            ],
            "permit_summary": "Narrative paragraph summarizing permit history for buyer",
            "permit_alerts": [
                {
                    "severity": "danger|warning|success",
                    "message": "Description of the alert"
                }
            ],
            "total_permits_found": integer,
            "data_sources_queried": ["Shovels", "ATTOM"],
            "coverage_notes": "Any notes about data gaps or limitations"
        }""",
        agent=agent
    )

Zoning Analysis Agent

Type: agent

Autonomous agent that queries the Zoneomics API and supplements with ATTOM parcel data to determine the current zoning classification, allowed uses, development restrictions, setbacks, height limits, lot coverage maximums, density regulations, overlay districts, and any pending zoning changes. Produces a buyer-friendly summary explaining what the zoning means in practical terms.

Implementation:

agents/zoning_analysis_agent.py
python
# agents/zoning_analysis_agent.py
from crewai import Agent, Task
from crewai_tools import tool
import httpx
import os
import json
import structlog

logger = structlog.get_logger()

ZONEOMICS_API_KEY = os.getenv('ZONEOMICS_API_KEY')
ZONEOMICS_BASE_URL = 'https://api.zoneomics.com/v2'
ATTOM_API_KEY = os.getenv('ATTOM_API_KEY')
ATTOM_BASE_URL = 'https://api.gateway.attomdata.com/propertyapi/v1.0.0'


@tool('get_zoning_data')
def get_zoning_data(address: str, city: str, state: str, zip_code: str) -> str:
    """Query Zoneomics API for comprehensive zoning data including classification, 
    allowed uses, setbacks, height limits, and overlay districts."""
    try:
        headers = {'Authorization': f'Bearer {ZONEOMICS_API_KEY}', 'Content-Type': 'application/json'}
        params = {'address': f'{address}, {city}, {state} {zip_code}', 'report_type': 'full'}
        
        with httpx.Client(timeout=45) as client:
            response = client.get(f'{ZONEOMICS_BASE_URL}/zoning', headers=headers, params=params)
            response.raise_for_status()
            data = response.json()
        
        zoning = {
            'classification': data.get('zoning_code', 'Unknown'),
            'classification_name': data.get('zoning_name', 'Unknown'),
            'municipality': data.get('municipality', 'Unknown'),
            'allowed_uses': data.get('permitted_uses', []),
            'conditional_uses': data.get('conditional_uses', []),
            'prohibited_uses': data.get('prohibited_uses', []),
            'max_height': data.get('max_height', 'N/A'),
            'setbacks': {
                'front': data.get('front_setback', 'N/A'),
                'side': data.get('side_setback', 'N/A'),
                'rear': data.get('rear_setback', 'N/A')
            },
            'max_coverage': data.get('max_lot_coverage', 'N/A'),
            'density': data.get('density', 'N/A'),
            'min_lot_size': data.get('min_lot_size', 'N/A'),
            'parking_requirements': data.get('parking', 'N/A'),
            'overlay_districts': data.get('overlay_districts', []),
            'special_districts': data.get('special_districts', []),
            'source': 'Zoneomics'
        }
        return json.dumps(zoning)
    except httpx.HTTPStatusError as e:
        logger.error('zoneomics_api_error', status=e.response.status_code)
        return json.dumps({'error': f'Zoneomics API error: {e.response.status_code}'})
    except Exception as e:
        logger.error('zoneomics_unexpected_error', error=str(e))
        return json.dumps({'error': str(e)})


@tool('get_parcel_details')
def get_parcel_details(address: str, city: str, state: str, zip_code: str) -> str:
    """Get parcel-level details from ATTOM including flood zone, lot dimensions, 
    and property characteristics."""
    try:
        headers = {'apikey': ATTOM_API_KEY, 'Accept': 'application/json'}
        params = {'address1': address, 'address2': f'{city}, {state} {zip_code}'}
        
        with httpx.Client(timeout=30) as client:
            response = client.get(f'{ATTOM_BASE_URL}/property/detail', headers=headers, params=params)
            response.raise_for_status()
            data = response.json()
        
        prop = data.get('property', [{}])[0] if data.get('property') else {}
        lot = prop.get('lot', {})
        building = prop.get('building', {})
        
        details = {
            'parcel_id': prop.get('identifier', {}).get('apn', 'N/A'),
            'property_type': prop.get('summary', {}).get('proptype', 'N/A'),
            'year_built': building.get('summary', {}).get('yearbuilt', 'N/A'),
            'lot_size_acres': lot.get('lotsize1', 'N/A'),
            'lot_size_sqft': lot.get('lotsize2', 'N/A'),
            'living_area_sqft': building.get('size', {}).get('livingsize', 'N/A'),
            'flood_zone': prop.get('area', {}).get('floodZone', 'N/A'),
            'flood_zone_description': prop.get('area', {}).get('floodZoneDescription', 'N/A'),
            'school_district': prop.get('area', {}).get('schoolDistrictName', 'N/A'),
            'latitude': prop.get('location', {}).get('latitude', None),
            'longitude': prop.get('location', {}).get('longitude', None),
            'source': 'ATTOM'
        }
        return json.dumps(details)
    except Exception as e:
        logger.error('attom_parcel_error', error=str(e))
        return json.dumps({'error': str(e)})


def create_zoning_analysis_agent():
    return Agent(
        role='Municipal Zoning & Land Use Analyst',
        goal='Analyze the complete zoning profile for the target property and translate complex zoning codes into clear, buyer-friendly language explaining what can and cannot be done with the property.',
        backstory="""You are an urban planning specialist with deep expertise in municipal zoning 
        codes across the United States. You can interpret any zoning classification and explain 
        its practical implications for homebuyers — such as whether they can build an ADU, 
        run a home business, add a second story, or subdivide the lot. You always note flood 
        zone designations and their insurance implications. You flag any zoning issues that 
        could affect property value or intended use.""",
        tools=[get_zoning_data, get_parcel_details],
        llm='gpt-4.1',
        verbose=True,
        memory=True,
        max_iter=5,
        allow_delegation=False
    )


def create_zoning_analysis_task(agent, address: str, city: str, state: str, zip_code: str):
    return Task(
        description=f"""Analyze the zoning and parcel information for the property at:
        {address}, {city}, {state} {zip_code}
        
        Steps:
        1. Query Zoneomics for the full zoning profile
        2. Query ATTOM for parcel details including flood zone
        3. Interpret the zoning classification in practical buyer terms
        4. Identify key restrictions and opportunities:
           - Can the buyer add an ADU/in-law suite?
           - Can they operate a home-based business?
           - Can they add stories or expand the building footprint?
           - Are there short-term rental (Airbnb) restrictions?
           - Are there any overlay districts that add extra requirements?
        5. Note flood zone status and insurance implications
        6. Flag any zoning issues that could affect the buyer's plans
        
        IMPORTANT: Do NOT reference neighborhood demographics, school quality rankings, 
        crime statistics, or any characteristic of the area's residents. Focus strictly 
        on land use regulations and physical property characteristics.""",
        expected_output="""A JSON object with:
        {
            "zoning": {
                "classification": "R-1 Single Family Residential",
                "summary": "Plain-English paragraph explaining what this zoning means for the buyer",
                "allowed_uses": ["list of allowed uses"],
                "conditional_uses": ["uses requiring special approval"],
                "max_height": "35 feet / 2.5 stories",
                "setbacks": "Front: 25ft, Side: 5ft, Rear: 20ft",
                "max_coverage": "40% of lot area",
                "density": "1 unit per lot",
                "adu_allowed": true/false,
                "home_business_allowed": true/false,
                "short_term_rental": "Allowed/Prohibited/Conditional",
                "overlay_districts": ["any overlay district names"],
                "opportunities": "What the buyer CAN do",
                "restrictions": "Key limitations to be aware of"
            },
            "parcel": {
                "parcel_id": "APN",
                "property_type": "type",
                "year_built": "year",
                "lot_size": "size with units",
                "living_area": "sqft",
                "flood_zone": "zone code",
                "flood_zone_summary": "Plain English flood risk explanation and insurance note"
            },
            "zoning_alerts": [{"severity": "warning", "message": "alert text"}]
        }""",
        agent=agent
    )
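The task's expected output includes a plain-English `flood_zone_summary`. A small lookup table can seed that text deterministically when the zone code is known. The groupings below are a simplified sketch of common FEMA designations — verify wording against current FEMA flood map definitions before client use:

```python
# Simplified FEMA flood zone groupings -- a starting point only;
# always verify against the current FEMA Flood Map Service Center.
FLOOD_ZONE_NOTES = {
    "A":  "High-risk zone (1% annual flood chance); lenders typically require flood insurance.",
    "AE": "High-risk zone with base flood elevations; flood insurance typically required.",
    "V":  "Coastal high-risk zone with wave hazards; flood insurance typically required.",
    "VE": "Coastal high-risk zone with base flood elevations; flood insurance typically required.",
    "X":  "Moderate-to-minimal risk; flood insurance usually optional but may be advisable.",
    "D":  "Undetermined risk; flooding is possible but the area is unstudied.",
}

def flood_zone_summary(zone: str) -> str:
    """Map a FEMA zone code to buyer-friendly wording, with a safe fallback."""
    zone = (zone or "").strip().upper()
    return FLOOD_ZONE_NOTES.get(zone, f"Zone '{zone}': consult FEMA flood maps for details.")
```

Codes outside the table (or a missing value) fall back to a referral rather than a guess, consistent with the agent's instruction to distinguish confirmed facts from items needing verification.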

HOA Research Agent

Type: agent

Autonomous agent that researches Homeowners Association (HOA) information for the target property. It queries ATTOM for HOA existence confirmation, then uses web search and document analysis to find CC&Rs, bylaws, fee schedules, meeting minutes, and special assessments. Extracts key rules affecting buyers (rental restrictions, pet policies, exterior modification rules, parking regulations, architectural review requirements).

Implementation:

agents/hoa_research_agent.py
python
# agents/hoa_research_agent.py
from crewai import Agent, Task
from crewai_tools import tool, ScrapeWebsiteTool, SerperDevTool
import httpx
import os
import json
import structlog

logger = structlog.get_logger()

ATTOM_API_KEY = os.getenv('ATTOM_API_KEY')
ATTOM_BASE_URL = 'https://api.gateway.attomdata.com/propertyapi/v1.0.0'

web_search = SerperDevTool()
web_scraper = ScrapeWebsiteTool()


@tool('check_hoa_existence')
def check_hoa_existence(address: str, city: str, state: str, zip_code: str) -> str:
    """Check ATTOM property data to determine if the property is in an HOA 
    and retrieve basic HOA information including name and fee amount."""
    try:
        headers = {'apikey': ATTOM_API_KEY, 'Accept': 'application/json'}
        params = {'address1': address, 'address2': f'{city}, {state} {zip_code}'}
        
        with httpx.Client(timeout=30) as client:
            response = client.get(f'{ATTOM_BASE_URL}/property/detail', headers=headers, params=params)
            response.raise_for_status()
            data = response.json()
        
        prop = data.get('property', [{}])[0] if data.get('property') else {}
        hoa_info = prop.get('hoa', {})
        
        result = {
            'has_hoa': bool(hoa_info.get('hoaFee') or hoa_info.get('hoaName')),
            'hoa_name': hoa_info.get('hoaName', 'Unknown'),
            'monthly_fee': hoa_info.get('hoaFee', 'Unknown'),
            'fee_frequency': hoa_info.get('hoaFeeFrequency', 'Monthly'),
            'source': 'ATTOM'
        }
        return json.dumps(result)
    except Exception as e:
        logger.error('attom_hoa_error', error=str(e))
        return json.dumps({'error': str(e), 'has_hoa': None})


@tool('search_hoa_documents')
def search_hoa_documents(hoa_name: str, city: str, state: str) -> str:
    """Search the web for HOA documents including CC&Rs, bylaws, meeting minutes, 
    and financial reports for the specified HOA."""
    try:
        search_queries = [
            f'{hoa_name} {city} {state} CC&R covenants',
            f'{hoa_name} {city} {state} HOA bylaws rules',
            f'{hoa_name} {city} {state} HOA fees assessment',
            f'{hoa_name} {city} {state} HOA meeting minutes'
        ]
        
        all_results = []
        for query in search_queries:
            results = web_search._run(query)
            all_results.append({'query': query, 'results': results})
        
        return json.dumps(all_results)
    except Exception as e:
        logger.error('hoa_search_error', error=str(e))
        return json.dumps({'error': str(e)})


@tool('extract_hoa_rules_from_url')
def extract_hoa_rules_from_url(url: str) -> str:
    """Scrape and extract HOA rules, fees, and restrictions from a web page. 
    Use this on HOA management company sites or document hosting pages."""
    try:
        content = web_scraper._run(url)
        # Return first 15000 chars to stay within context limits
        return content[:15000] if content else 'No content extracted'
    except Exception as e:
        logger.error('hoa_scrape_error', url=url, error=str(e))
        return f'Error scraping {url}: {str(e)}'


def create_hoa_research_agent():
    return Agent(
        role='HOA Due Diligence Researcher',
        goal='Thoroughly research the HOA governing the target property, extracting all rules, fees, restrictions, and financial health indicators that a buyer needs to know before purchasing.',
        backstory="""You are a real estate due diligence specialist who has reviewed hundreds of 
        HOA packages for buyers. You know that HOA rules can significantly impact a buyer's 
        experience — from pet weight limits to exterior paint color restrictions to rental 
        caps. You always look for red flags like pending litigation, underfunded reserves, 
        special assessments, and restrictive rental policies. When HOA documents are not 
        publicly available, you clearly note what needs to be requested from the seller 
        or listing agent.""",
        tools=[check_hoa_existence, search_hoa_documents, extract_hoa_rules_from_url],
        llm='gpt-4.1',
        verbose=True,
        memory=True,
        max_iter=8,
        allow_delegation=False
    )


def create_hoa_research_task(agent, address: str, city: str, state: str, zip_code: str):
    return Task(
        description=f"""Research HOA information for the property at:
        {address}, {city}, {state} {zip_code}
        
        Steps:
        1. Check ATTOM data to confirm if property is in an HOA and get the HOA name
        2. If no HOA exists, report that clearly and skip remaining steps
        3. If HOA exists, search for:
           a. CC&Rs (Covenants, Conditions & Restrictions)
           b. Bylaws and rules & regulations
           c. Fee schedules and special assessments
           d. Recent meeting minutes (look for litigation, major repairs, assessment votes)
           e. HOA management company contact information
        4. Extract key rules that affect daily living:
           - Rental restrictions (minimum lease term, rental caps, Airbnb policy)
           - Pet policies (breed, weight, number restrictions)
           - Exterior modification/architectural review requirements
           - Parking regulations (number of vehicles, commercial vehicles, RV/boat storage)
           - Landscaping requirements
           - Noise and quiet hours
           - Business/home office restrictions
        5. Assess financial health if data is available:
           - Reserve fund adequacy
           - Pending or recent special assessments
           - Pending litigation
           - Insurance coverage
        6. Clearly note any information that could NOT be found and recommend what the buyer 
           should request from the seller/listing agent
        
        IMPORTANT: Do NOT make assumptions about the type of residents or community character. 
        Focus only on documented rules and financial facts. Do NOT reference community 
        demographics in any way.""",
        expected_output="""A JSON object with:
        {
            "hoa": {
                "exists": true/false,
                "name": "HOA name",
                "management_company": "company name and contact if found",
                "summary": "1-2 paragraph overview for the buyer",
                "monthly_dues": "$XXX/month",
                "special_assessments": "description of any current or pending",
                "transfer_fee": "fee amount if found",
                "reserve_status": "assessment of reserve fund health if available",
                "key_rules": [
                    "Rule 1: description",
                    "Rule 2: description"
                ],
                "rental_restrictions": "detailed rental policy summary",
                "pet_policy": "detailed pet policy summary",
                "architectural_review": "what requires approval",
                "parking_rules": "parking restrictions summary",
                "alerts": [
                    {"severity": "danger|warning", "message": "alert text"}
                ],
                "documents_found": ["list of document types found"],
                "documents_not_found": ["list of documents buyer should request"],
                "information_gaps": "what could not be verified and needs seller confirmation"
            }
        }""",
        agent=agent
    )
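Each agent's `expected_output` above is a JSON object; before the briefing template is rendered, the compiler step needs them merged into a single context dict. A sketch of that merge — the key names follow the template and task schemas above, and degrading a malformed agent output to an empty section (rather than failing the whole briefing) is an assumed design choice:

```python
import json

def build_briefing_context(permit_json: str, zoning_json: str, hoa_json: str) -> dict:
    """Merge the three agents' JSON outputs into one template context.

    A malformed output from any single agent degrades to an empty
    section instead of failing the entire briefing.
    """
    def parse(raw: str) -> dict:
        try:
            return json.loads(raw)
        except (json.JSONDecodeError, TypeError):
            return {}

    permit, zoning, hoa = parse(permit_json), parse(zoning_json), parse(hoa_json)
    ctx = {
        "permits": permit.get("permits", []),
        "permit_summary": permit.get("permit_summary", ""),
        "permit_alerts": permit.get("permit_alerts", []),
        "zoning": zoning.get("zoning"),
        "hoa": hoa.get("hoa"),
    }
    # Flatten parcel details (parcel_id, flood_zone, ...) for the
    # property-details table at the top of the template.
    ctx.update(zoning.get("parcel", {}))
    return ctx
```

Because the template guards each section with `{% if permits %}` / `{% if zoning %}` / `{% if hoa %}`, empty or missing values here simply suppress the corresponding section in the PDF.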

Fair Housing Compliance Guardrail

Type: skill

A compliance checking module that scans all agent outputs and the final compiled briefing for language that could violate the Fair Housing Act. It checks for references to protected characteristics (race, color, religion, national origin, sex, familial status, disability), neighborhood demographic descriptions, school quality characterizations that serve as proxies for racial composition, and any language that could be construed as steering. Integrated as a mandatory pass before any briefing is delivered.

Implementation:

tools/fair_housing_guardrail.py
python
# tools/fair_housing_guardrail.py
import os
import json
import re
import httpx
from openai import OpenAI
import structlog

logger = structlog.get_logger()

client = OpenAI(api_key=os.getenv('OPENAI_API_KEY'))
FAIRSENTRY_API_KEY = os.getenv('FAIRSENTRY_API_KEY')

# Known problematic phrases and patterns
PROHIBITED_PATTERNS = [
    r'\b(family[- ]friendly|family[- ]oriented)\b',
    r'\b(ideal for (singles|couples|families|retirees|young professionals))\b',
    r'\b(ethnic|racial|religious)\s+(community|neighborhood|enclave|area)\b',
    r'\b(safe|dangerous|sketchy|rough|up-and-coming|gentrifying)\s+(neighborhood|area|community)\b',
    r'\b(church(es)?|mosque|synagogue|temple)\s+(nearby|close|walking distance)\b',
    r'\b(good|great|best|top)\s+school(s)?\b',
    r'\b(walking|wheelchair)\s+(un)?friendly\b',
    r'\b(handicap|disabled|disability)\s+(accessible|access)\b',
    r'\b(no children|adults only|senior(s)? only|55\+|over 55)\b',
    r'\b(exclusive|prestigious|affluent|upscale)\s+(community|neighborhood|area)\b',
    r'\b(diverse|diversity|multicultural|minority)\b',
    r'\b(crime|criminal|safety)\s+(rate|statistics|stats|data|index)\b',
]


def check_prohibited_patterns(text: str) -> list:
    """Rule-based check for known Fair Housing red flag patterns."""
    violations = []
    for pattern in PROHIBITED_PATTERNS:
        matches = re.findall(pattern, text, re.IGNORECASE)
        if matches:
            violations.append({
                'type': 'pattern_match',
                'pattern': pattern,
                'matches': [str(m) for m in matches],
                'severity': 'warning'
            })
    return violations


def check_with_llm(text: str) -> dict:
    """Use GPT-5.4 mini for nuanced Fair Housing compliance analysis."""
    response = client.chat.completions.create(
        model='gpt-5.4-mini',
        temperature=0,
        messages=[
            {
                'role': 'system',
                'content': """You are a Fair Housing Act compliance reviewer for real estate documents. 
                Analyze the provided text for potential Fair Housing violations.
                
                The Fair Housing Act prohibits discrimination based on:
                - Race or color
                - Religion
                - National origin
                - Sex (including gender identity and sexual orientation)
                - Familial status (presence of children under 18)
                - Disability
                
                Flag ANY language that:
                1. Describes neighborhood demographics or composition
                2. Makes quality judgments about schools (often a proxy for racial composition)
                3. Suggests the property is better suited for certain types of people
                4. References crime statistics or safety ratings for the area
                5. Describes the 'character' or 'feel' of the neighborhood in ways that could imply demographic characteristics
                6. Uses coded language that could be interpreted as steering
                
                Respond ONLY with valid JSON:
                {
                    "compliant": true/false,
                    "violations": [
                        {
                            "text": "exact problematic text",
                            "reason": "why this is a potential violation",
                            "severity": "high/medium/low",
                            "suggestion": "recommended replacement text or removal"
                        }
                    ],
                    "overall_assessment": "brief summary"
                }"""
            },
            {
                'role': 'user',
                'content': f'Review this real estate buyer briefing for Fair Housing compliance:\n\n{text}'
            }
        ]
    )
    
    try:
        return json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        return {'compliant': False, 'violations': [], 'overall_assessment': 'Unable to parse compliance check result — manual review required', 'parse_error': True}


def check_with_fairsentry(text: str) -> dict:
    """Optional: Send to FairSentry API for third-party compliance validation."""
    if not FAIRSENTRY_API_KEY:
        return {'skipped': True, 'reason': 'FairSentry API key not configured'}
    
    try:
        headers = {'Authorization': f'Bearer {FAIRSENTRY_API_KEY}', 'Content-Type': 'application/json'}
        with httpx.Client(timeout=30) as http_client:
            response = http_client.post(
                'https://api.fairsentry.com/v1/scan',
                headers=headers,
                json={'content': text, 'content_type': 'property_description'}
            )
            response.raise_for_status()
            return response.json()
    except Exception as e:
        logger.error('fairsentry_error', error=str(e))
        return {'error': str(e)}


def run_compliance_check(briefing_text: str) -> dict:
    """Run the full compliance pipeline: pattern matching, LLM review, and optional FairSentry.
    Returns a compliance report with pass/fail status and any required remediations."""
    logger.info('compliance_check_started')
    
    # Layer 1: Pattern matching
    pattern_violations = check_prohibited_patterns(briefing_text)
    
    # Layer 2: LLM-based analysis
    llm_result = check_with_llm(briefing_text)
    
    # Layer 3: FairSentry (if configured)
    fairsentry_result = check_with_fairsentry(briefing_text)
    
    # Determine overall compliance status. Pattern matches alone should never
    # yield APPROVED — they downgrade the recommendation to MANUAL_REVIEW,
    # consistent with the requires_manual_review flag below.
    has_high_severity = any(
        v.get('severity') == 'high' for v in llm_result.get('violations', [])
    )
    has_pattern_matches = len(pattern_violations) > 0
    
    overall_compliant = (
        llm_result.get('compliant', False)
        and not has_high_severity
        and not has_pattern_matches
    )
    
    result = {
        'compliant': overall_compliant,
        'requires_manual_review': has_pattern_matches and not has_high_severity,
        'pattern_check': {'violations': pattern_violations, 'count': len(pattern_violations)},
        'llm_check': llm_result,
        'fairsentry_check': fairsentry_result,
        'recommendation': 'APPROVED' if overall_compliant else ('BLOCKED' if has_high_severity else 'MANUAL_REVIEW')
    }
    
    logger.info('compliance_check_completed', compliant=overall_compliant, recommendation=result['recommendation'])
    return result


def remediate_text(text: str, violations: list) -> str:
    """Automatically remove or replace problematic text based on identified violations."""
    remediated = text
    for violation in violations:
        problematic_text = violation.get('text', '')
        suggestion = violation.get('suggestion', '')
        if problematic_text and suggestion:
            remediated = remediated.replace(problematic_text, suggestion)
        elif problematic_text:
            remediated = remediated.replace(problematic_text, '[REMOVED - compliance review]')
    return remediated
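Because the pattern layer is pure regex, it can be smoke-tested in isolation with no API keys. A minimal sketch — the two patterns are copied from PROHIBITED_PATTERNS above, and `find_red_flags` is a stand-in for `check_prohibited_patterns`:

```python
import re

# Two representative patterns from the PROHIBITED_PATTERNS list above
SAMPLE_PATTERNS = [
    r'\b(good|great|best|top)\s+school(s)?\b',
    r'\b(safe|dangerous|sketchy|rough|up-and-coming|gentrifying)\s+(neighborhood|area|community)\b',
]

def find_red_flags(text: str) -> list:
    """Return the patterns that match, case-insensitively."""
    return [p for p in SAMPLE_PATTERNS if re.search(p, text, re.IGNORECASE)]

flagged = find_red_flags('Charming home near great schools in a safe neighborhood.')
print(len(flagged))  # both sample patterns match
```

Keeping the patterns as data rather than code makes this layer trivial to extend per jurisdiction without touching the LLM or FairSentry layers.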

Briefing Compiler Agent

Type: agent

The final synthesis agent that takes the outputs from the Permit Research Agent, Zoning Analysis Agent, and HOA Research Agent and compiles them into a cohesive, professionally written buyer briefing. It generates the Key Findings section, prioritizes alerts by severity, and produces the final HTML that is rendered to PDF via WeasyPrint.

Implementation:

agents/briefing_compiler_agent.py
python
# agents/briefing_compiler_agent.py
from crewai import Agent, Task
from crewai_tools import tool
import json
import structlog

logger = structlog.get_logger()


@tool('compile_briefing_sections')
def compile_briefing_sections(permit_data_json: str, zoning_data_json: str, hoa_data_json: str) -> str:
    """Merge and structure the research outputs from all three specialized agents 
    into a unified briefing data structure ready for template rendering."""
    try:
        permits = json.loads(permit_data_json) if permit_data_json else {}
        zoning = json.loads(zoning_data_json) if zoning_data_json else {}
        hoa = json.loads(hoa_data_json) if hoa_data_json else {}
        
        compiled = {
            'permits': permits.get('permits', []),
            'permit_summary': permits.get('permit_summary', 'No permit data available.'),
            'permit_alerts': permits.get('permit_alerts', []),
            'zoning': zoning.get('zoning', {}),
            'parcel': zoning.get('parcel', {}),
            'zoning_alerts': zoning.get('zoning_alerts', []),
            'hoa': hoa.get('hoa', {}),
            'all_alerts': (
                permits.get('permit_alerts', []) + 
                zoning.get('zoning_alerts', []) + 
                hoa.get('hoa', {}).get('alerts', [])
            ),
            'data_sources': list(set(
                permits.get('data_sources_queried', []) + 
                ['Zoneomics', 'ATTOM']
            )),
            'sections_available': {
                'permits': bool(permits.get('permits')),
                'zoning': bool(zoning.get('zoning')),
                'hoa': bool(hoa.get('hoa', {}).get('exists'))
            }
        }
        return json.dumps(compiled)
    except Exception as e:
        logger.error('compilation_error', error=str(e))
        return json.dumps({'error': str(e)})


def create_briefing_compiler_agent():
    return Agent(
        role='Senior Real Estate Briefing Writer & Editor',
        goal='Synthesize all research findings into a clear, professional, and actionable buyer briefing that helps the buyer make an informed purchasing decision.',
        backstory="""You are an experienced real estate analyst and writer who has prepared 
        thousands of property briefings for buyers. You excel at taking complex data — permits, 
        zoning codes, HOA rules — and translating it into clear, actionable insights. You know 
        how to prioritize information: red flags first, then opportunities, then standard details. 
        Your writing is professional but accessible, avoiding jargon while being precise. You 
        always end with a clear 'Key Findings & Recommendations' section that highlights the 
        most important items for the buyer to consider or investigate further. You NEVER include 
        language that could violate the Fair Housing Act.""",
        tools=[compile_briefing_sections],
        llm='gpt-4.1',
        verbose=True,
        memory=True,
        max_iter=5,
        allow_delegation=False
    )


def create_briefing_compiler_task(agent, permit_output: str, zoning_output: str, hoa_output: str, property_address: str):
    return Task(
        description=f"""Compile the final buyer briefing for {property_address} using the research 
        from the three specialized agents.
        
        PERMIT DATA:\n{permit_output}\n\n
        ZONING DATA:\n{zoning_output}\n\n
        HOA DATA:\n{hoa_output}\n\n
        
        Steps:
        1. Use the compile_briefing_sections tool to merge all data
        2. Review all alerts and prioritize by severity (danger > warning > info)
        3. Write the Key Findings & Recommendations section:
           - Start with any RED FLAGS (open permits, code violations, flood zone, litigation)
           - Then IMPORTANT NOTES (zoning restrictions, HOA rules that affect daily life)
           - Then OPPORTUNITIES (ADU potential, expansion capacity, home business allowed)
           - End with RECOMMENDED NEXT STEPS (what buyer should request, inspect, or verify)
        4. Ensure all sections flow logically and cross-reference where appropriate
           (e.g., if a permit shows a deck addition, note whether the HOA allows decks)
        5. Format the Key Findings as clean HTML with bullet points and bold headers
        
        CRITICAL COMPLIANCE RULES:
        - Do NOT mention neighborhood demographics, safety ratings, or school rankings
        - Do NOT suggest the property is suitable for any specific type of person
        - Do NOT use language that implies neighborhood character based on residents
        - Focus ONLY on the physical property, its legal status, and documented rules
        - Include the standard disclaimer about independent verification""",
        expected_output="""A JSON object with:
        {
            "compiled_data": { ... merged data from compile_briefing_sections ... },
            "key_findings_html": "<div>...formatted HTML for the Key Findings section...</div>",
            "executive_summary": "2-3 sentence overview of the most important findings",
            "alert_count": {"danger": 0, "warning": 0, "info": 0},
            "recommended_next_steps": ["step 1", "step 2"],
            "briefing_quality_score": "high/medium/low based on data completeness"
        }""",
        agent=agent
    )
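The merging performed by `compile_briefing_sections` can be previewed with stub payloads — no CrewAI runtime needed. The sample records below are hypothetical; the merge expression mirrors the one inside the tool:

```python
# Stub payloads shaped like the three research agents' outputs (sample data,
# not real records) to exercise the alert-merging step of compile_briefing_sections
permits = {'permits': [{'type': 'deck addition', 'status': 'open'}],
           'permit_alerts': [{'severity': 'warning', 'message': 'Open deck permit'}]}
zoning = {'zoning': {'code': 'R-1'}, 'zoning_alerts': []}
hoa = {'hoa': {'exists': True, 'alerts': [{'severity': 'info', 'message': 'Rental cap: 20%'}]}}

# Same merge performed inside compile_briefing_sections
all_alerts = (permits.get('permit_alerts', [])
              + zoning.get('zoning_alerts', [])
              + hoa.get('hoa', {}).get('alerts', []))

sections_available = {
    'permits': bool(permits.get('permits')),
    'zoning': bool(zoning.get('zoning')),
    'hoa': bool(hoa.get('hoa', {}).get('exists')),
}
print(len(all_alerts), sections_available)
```

Note that `.get()` with defaults everywhere means a missing or empty agent output degrades to an empty section rather than a crash, which is what lets the compiler still produce a briefing when one data source is unavailable.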

CrewAI Orchestration Workflow

Type: workflow

The master workflow that orchestrates all four agents in sequence, manages error handling and retries, runs the Fair Housing compliance check, generates the PDF, uploads to S3, and delivers the briefing to the CRM. This is the Celery task that is triggered by the API endpoint.

Implementation:

app/tasks.py
python
# app/tasks.py
import json
import time
import os
from datetime import datetime
from crewai import Crew, Process
from jinja2 import Environment, FileSystemLoader
from weasyprint import HTML
import boto3
import httpx
import structlog

from agents.permit_research_agent import create_permit_research_agent, create_permit_research_task
from agents.zoning_analysis_agent import create_zoning_analysis_agent, create_zoning_analysis_task
from agents.hoa_research_agent import create_hoa_research_agent, create_hoa_research_task
from agents.briefing_compiler_agent import create_briefing_compiler_agent, create_briefing_compiler_task
from tools.fair_housing_guardrail import run_compliance_check, remediate_text
from app.celery_app import celery

logger = structlog.get_logger()

FUB_API_KEY = os.getenv('FUB_API_KEY')
FUB_BASE_URL = os.getenv('FUB_BASE_URL', 'https://api.followupboss.com/v1')
S3_BUCKET = os.getenv('AWS_S3_BUCKET')

# Jinja2 template environment
template_env = Environment(loader=FileSystemLoader('/app/templates'))


@celery.task(bind=True, max_retries=2, default_retry_delay=60)
def generate_briefing_task(
    self,
    briefing_id: str,
    address: str,
    city: str,
    state: str,
    zip_code: str,
    crm_contact_id: str = None,
    requested_by: str = None,
    include_hoa: bool = True,
    include_permits: bool = True,
    include_zoning: bool = True,
):
    start_time = time.time()
    property_address = f'{address}, {city}, {state} {zip_code}'
    logger.info('briefing_generation_started', briefing_id=briefing_id, address=property_address)
    
    try:
        # === Phase 1: Create Agents ===
        agents = []
        tasks = []
        
        if include_permits:
            permit_agent = create_permit_research_agent()
            permit_task = create_permit_research_task(permit_agent, address, city, state, zip_code)
            agents.append(permit_agent)
            tasks.append(permit_task)
        
        if include_zoning:
            zoning_agent = create_zoning_analysis_agent()
            zoning_task = create_zoning_analysis_task(zoning_agent, address, city, state, zip_code)
            agents.append(zoning_agent)
            tasks.append(zoning_task)
        
        if include_hoa:
            hoa_agent = create_hoa_research_agent()
            hoa_task = create_hoa_research_task(hoa_agent, address, city, state, zip_code)
            agents.append(hoa_agent)
            tasks.append(hoa_task)
        
        # === Phase 2: Execute Research Agents ===
        research_crew = Crew(
            agents=agents,
            tasks=tasks,
            process=Process.sequential,  # Sequential to manage API rate limits
            verbose=True,
            memory=True,
            max_rpm=30,  # Rate limit to avoid API throttling
        )
        
        research_results = research_crew.kickoff()
        logger.info('research_phase_completed', briefing_id=briefing_id)
        
        # Extract individual task outputs. Reference the task objects directly so
        # the mapping stays correct when a section is disabled (positional
        # indexing into `tasks` would shift if, e.g., permits were skipped).
        permit_output = permit_task.output.raw if include_permits else '{}'
        zoning_output = zoning_task.output.raw if include_zoning else '{}'
        hoa_output = hoa_task.output.raw if include_hoa else '{}'
        
        # === Phase 3: Compile Briefing ===
        compiler_agent = create_briefing_compiler_agent()
        compiler_task = create_briefing_compiler_task(
            compiler_agent, permit_output, zoning_output, hoa_output, property_address
        )
        
        compiler_crew = Crew(
            agents=[compiler_agent],
            tasks=[compiler_task],
            process=Process.sequential,
            verbose=True,
        )
        
        compiled_result = compiler_crew.kickoff()
        compiled_data = json.loads(compiler_task.output.raw)
        logger.info('compilation_completed', briefing_id=briefing_id)
        
        # === Phase 4: Fair Housing Compliance Check ===
        briefing_text = compiled_data.get('key_findings_html', '') + ' ' + compiled_data.get('executive_summary', '')
        compliance_result = run_compliance_check(briefing_text)
        
        if compliance_result['recommendation'] == 'BLOCKED':
            logger.warning('briefing_blocked_compliance', briefing_id=briefing_id)
            # Auto-remediate each field separately, then re-check — writing the
            # combined text back into key_findings_html would duplicate the summary
            violations = compliance_result['llm_check'].get('violations', [])
            remediated_findings = remediate_text(compiled_data.get('key_findings_html', ''), violations)
            remediated_summary = remediate_text(compiled_data.get('executive_summary', ''), violations)
            re_check = run_compliance_check(remediated_findings + ' ' + remediated_summary)
            if re_check['recommendation'] == 'BLOCKED':
                # Escalate to manual review
                logger.error('briefing_requires_manual_review', briefing_id=briefing_id)
                # TODO: update database status to 'review' and notify MSP
                return {'briefing_id': briefing_id, 'status': 'review', 'reason': 'compliance_block'}
            compiled_data['key_findings_html'] = remediated_findings
            compiled_data['executive_summary'] = remediated_summary
        
        logger.info('compliance_check_passed', briefing_id=briefing_id, result=compliance_result['recommendation'])
        
        # === Phase 5: Generate PDF ===
        template = template_env.get_template('briefing_template.html')
        
        # Build template context
        permit_data = json.loads(permit_output) if include_permits else {}
        zoning_data = json.loads(zoning_output) if include_zoning else {}
        hoa_data = json.loads(hoa_output) if include_hoa else {}
        parcel = zoning_data.get('parcel', {})
        
        template_context = {
            'brokerage_name': os.getenv('BROKERAGE_NAME', 'Brokerage'),
            'brand_color': os.getenv('BRAND_COLOR', '#1a5276'),
            'property_address': property_address,
            'generated_date': datetime.now().strftime('%B %d, %Y'),
            'parcel_id': parcel.get('parcel_id', 'N/A'),
            'property_type': parcel.get('property_type', 'N/A'),
            'year_built': parcel.get('year_built', 'N/A'),
            'lot_size': parcel.get('lot_size', 'N/A'),
            'living_area': parcel.get('living_area', 'N/A'),
            'flood_zone': parcel.get('flood_zone', 'N/A'),
            'permits': permit_data.get('permits', []),
            'permit_summary': permit_data.get('permit_summary', ''),
            'permit_alerts': permit_data.get('permit_alerts', []),
            'zoning': zoning_data.get('zoning', {}),
            'hoa': hoa_data.get('hoa', {}),
            'key_findings': compiled_data.get('key_findings_html', ''),
            'agent_name': os.getenv('AGENT_NAME', ''),
            'agent_phone': os.getenv('AGENT_PHONE', ''),
            'agent_email': os.getenv('AGENT_EMAIL', ''),
        }
        
        html_content = template.render(**template_context)
        
        # Generate PDF
        pdf_filename = f'briefing_{briefing_id}.pdf'
        pdf_path = f'/app/output/{pdf_filename}'
        HTML(string=html_content, base_url='/app/templates/').write_pdf(pdf_path)
        logger.info('pdf_generated', briefing_id=briefing_id, path=pdf_path)
        
        # === Phase 6: Upload to S3 ===
        s3_client = boto3.client('s3')
        s3_key = f'briefings/{datetime.now().strftime("%Y/%m")}/{pdf_filename}'
        s3_client.upload_file(
            pdf_path, S3_BUCKET, s3_key,
            ExtraArgs={'ContentType': 'application/pdf'}
        )
        
        # Generate presigned URL (valid 7 days)
        pdf_url = s3_client.generate_presigned_url(
            'get_object',
            Params={'Bucket': S3_BUCKET, 'Key': s3_key},
            ExpiresIn=604800
        )
        logger.info('pdf_uploaded_s3', briefing_id=briefing_id, s3_key=s3_key)
        
        # === Phase 7: Deliver to CRM ===
        if crm_contact_id and FUB_API_KEY:
            note_body = f"""🏠 AI Buyer Briefing Generated\n\nProperty: {property_address}\n\n{compiled_data.get('executive_summary', '')}\n\n📄 View Full Briefing: {pdf_url}\n\nAlerts: {compiled_data.get('alert_count', {})}\n\nGenerated on {datetime.now().strftime('%B %d, %Y at %I:%M %p')}"""
            
            with httpx.Client(timeout=15) as http_client:
                fub_response = http_client.post(
                    f'{FUB_BASE_URL}/notes',
                    auth=(FUB_API_KEY, ''),
                    json={
                        'personId': int(crm_contact_id),
                        'body': note_body,
                        'subject': f'Buyer Briefing: {property_address}'
                    }
                )
                fub_response.raise_for_status()
                logger.info('crm_note_created', briefing_id=briefing_id, contact_id=crm_contact_id)
        
        # === Phase 8: Record Results ===
        processing_time = time.time() - start_time
        logger.info(
            'briefing_completed',
            briefing_id=briefing_id,
            processing_time_seconds=round(processing_time, 1),
            pdf_url=pdf_url
        )
        
        # TODO: Update database record with results
        # db.update_briefing(briefing_id, status='completed', pdf_url=pdf_url, ...)
        
        return {
            'briefing_id': briefing_id,
            'status': 'completed',
            'pdf_url': pdf_url,
            'processing_time_seconds': round(processing_time, 1),
            'compliance_status': compliance_result['recommendation'],
            'alert_count': compiled_data.get('alert_count', {})
        }
        
    except Exception as e:
        logger.error('briefing_generation_failed', briefing_id=briefing_id, error=str(e))
        # Retry on transient errors
        if 'rate_limit' in str(e).lower() or 'timeout' in str(e).lower():
            raise self.retry(exc=e)
        # TODO: Update database record with error
        return {'briefing_id': briefing_id, 'status': 'failed', 'error': str(e)}
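The retry branch in the except handler above matches substrings of the exception message. If that list grows, a small helper keeps the policy in one place; this is a sketch, and the marker list is an assumption to extend for the APIs actually in use:

```python
# Keywords that suggest a retryable failure (an assumption — extend for the
# error strings the OpenAI, Shovels, Zoneomics, and ATTOM clients actually raise)
TRANSIENT_MARKERS = ('rate_limit', 'rate limit', 'timeout', 'timed out',
                     'temporarily unavailable', '429', '503')

def is_transient(exc: Exception) -> bool:
    """True if the error message suggests a retryable, transient failure."""
    message = str(exc).lower()
    return any(marker in message for marker in TRANSIENT_MARKERS)

# In the task's except block this would replace the inline substring checks:
#     if is_transient(e):
#         raise self.retry(exc=e)

print(is_transient(TimeoutError('request timed out after 30s')))  # True
print(is_transient(ValueError('invalid address')))                # False
```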

CRM Webhook Integration

Type: integration

n8n workflow that receives webhooks from Follow Up Boss when a property tag is applied to a contact, extracts the property address, and triggers the briefing generation API. Also handles the completion callback to deliver the PDF link back to the CRM contact record and notify the agent via email.

Implementation:

n8n Workflow: FUB Property Tag → Briefing Generation & Completion Delivery
json
{
  "n8n_workflow_trigger": {
    "name": "FUB Property Tag -> Briefing Generation",
    "description": "Triggered when Follow Up Boss sends a webhook for a deal/note creation containing a property address with the 'generate-briefing' tag.",
    "nodes": [
      {
        "node_type": "Webhook",
        "config": {
          "path": "/webhook/fub-property-tag",
          "method": "POST",
          "authentication": "headerAuth",
          "header_name": "X-Webhook-Secret",
          "header_value": "{{$env.WEBHOOK_SECRET}}"
        }
      },
      {
        "node_type": "Function",
        "name": "Extract Property Data",
        "code": "const payload = $input.first().json;\n\n// Extract from FUB webhook payload\nconst dealData = payload.deal || payload.note || {};\nconst personId = payload.personId || dealData.personId;\n\n// Parse address from deal custom fields or note body\nlet address = dealData.customFields?.propertyAddress || '';\nlet city = dealData.customFields?.propertyCity || '';\nlet state = dealData.customFields?.propertyState || '';\nlet zipCode = dealData.customFields?.propertyZip || '';\n\n// If address not in custom fields, try parsing from note body\nif (!address && dealData.body) {\n  const addressMatch = dealData.body.match(/(?:address|property)[:\\s]+(.+)/i);\n  if (addressMatch) {\n    address = addressMatch[1].trim();\n  }\n}\n\nreturn [{\n  json: {\n    address: address,\n    city: city,\n    state: state,\n    zip_code: zipCode,\n    crm_contact_id: String(personId),\n    requested_by: payload.agentName || 'CRM Webhook'\n  }\n}];"
      },
      {
        "node_type": "IF",
        "name": "Address Valid?",
        "condition": "{{$json.address !== '' && $json.address !== undefined}}"
      },
      {
        "node_type": "HTTP Request",
        "name": "Trigger Briefing API",
        "config": {
          "method": "POST",
          "url": "https://briefings.{{$env.CLIENT_DOMAIN}}/api/v1/briefings",
          "headers": {
            "Content-Type": "application/json"
          },
          "body": {
            "address": "={{$json.address}}",
            "city": "={{$json.city}}",
            "state": "={{$json.state}}",
            "zip_code": "={{$json.zip_code}}",
            "crm_contact_id": "={{$json.crm_contact_id}}",
            "requested_by": "={{$json.requested_by}}"
          }
        }
      },
      {
        "node_type": "Send Email",
        "name": "Notify Agent - Briefing Queued",
        "config": {
          "to": "={{$json.requested_by_email}}",
          "subject": "Buyer Briefing Queued: {{$json.address}}",
          "body": "Your buyer briefing for {{$json.address}} has been queued. You will receive the completed briefing in your CRM within 5 minutes."
        }
      }
    ],
    "error_handling": {
      "on_address_invalid": "Send Slack/email notification to MSP that webhook received but address could not be parsed",
      "on_api_failure": "Retry once after 30 seconds, then notify MSP support channel"
    }
  },
  "n8n_workflow_completion": {
    "name": "Briefing Complete -> Email Delivery",
    "description": "Polls for completed briefings and sends email notification with PDF link to the requesting agent.",
    "trigger": "Cron: every 2 minutes",
    "nodes": [
      {
        "node_type": "Cron",
        "config": {"interval": "*/2 * * * *"}
      },
      {
        "node_type": "HTTP Request",
        "name": "Check Pending Briefings",
        "config": {
          "method": "GET",
          "url": "https://briefings.{{$env.CLIENT_DOMAIN}}/api/v1/briefings?status=completed&notified=false"
        }
      },
      {
        "node_type": "SplitInBatches",
        "name": "Process Each Completed Briefing"
      },
      {
        "node_type": "Send Email",
        "name": "Send Briefing to Agent",
        "config": {
          "to": "={{$json.requested_by_email}}",
          "subject": "✅ Buyer Briefing Ready: {{$json.property_address}}",
          "body": "Your AI-generated buyer briefing for {{$json.property_address}} is ready.\n\nKey Findings: {{$json.executive_summary}}\n\nView Full Briefing: {{$json.pdf_url}}\n\nThis briefing has been automatically added to the contact's record in Follow Up Boss."
        }
      }
    ]
  },
  "follow_up_boss_setup_instructions": {
    "step_1": "Login to Follow Up Boss as Admin",
    "step_2": "Go to Admin > API & Integrations > Webhooks",
    "step_3": "Click 'Add Webhook'",
    "step_4": "Set URL to: https://n8n.<CLIENT_DOMAIN>/webhook/fub-property-tag",
    "step_5": "Select events: 'Deal Created', 'Deal Updated', 'Note Created'",
    "step_6": "Add custom header: X-Webhook-Secret = <your_webhook_secret>",
    "step_7": "Test the webhook by creating a test deal with a property address",
    "alternative": "If Follow Up Boss webhooks are insufficient, create an Action Plan that fires an API call when tag 'generate-briefing' is applied to a person"
  }
}
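n8n validates the X-Webhook-Secret header on the inbound webhook. If the briefing API ever needs to perform the same check itself, the secret should be compared in constant time to avoid timing side channels. A standard-library sketch (the header semantics and WEBHOOK_SECRET variable name follow the n8n config above):

```python
import hmac
import os

def verify_webhook_secret(received):
    """Constant-time comparison of the X-Webhook-Secret header value."""
    expected = os.getenv('WEBHOOK_SECRET', '')
    if not expected or not received:
        return False
    return hmac.compare_digest(received, expected)

os.environ['WEBHOOK_SECRET'] = 's3cret'  # demo only; set via the environment in production
print(verify_webhook_secret('s3cret'), verify_webhook_secret('wrong'))  # True False
```

Rejecting requests when the expected secret is unset (rather than accepting everything) fails closed, which is the safer default for a webhook that triggers paid API calls.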

Testing & Validation

  • SMOKE TEST - API Health: Run the health check curl command and verify response contains {"status": "healthy"}. All 5 Docker containers should show 'Up (healthy)' in 'docker compose ps'.
  • SMOKE TEST - Database: Connect to PostgreSQL and run '\dt' to confirm all 4 tables exist (properties, briefings, api_cache, audit_log).
  • UNIT TEST - Shovels API: Call the Shovels permit search endpoint directly with a known address. Verify JSON response with permit records.
  • UNIT TEST - Zoneomics API: Call the Zoneomics API with a known address and verify zoning classification is returned. Confirm classification, allowed uses, and setback data.
  • UNIT TEST - ATTOM API: Call ATTOM property detail endpoint and verify property characteristics are returned including flood zone and HOA information.
  • INTEGRATION TEST - Full Briefing Generation: Submit a briefing request via the API for one of the 5–10 sample properties provided by the client. Monitor worker logs and verify completion within 5 minutes.
  • INTEGRATION TEST - PDF Quality: Download the generated PDF from S3 and verify: (1) client logo renders correctly, (2) all sections are present (permits, zoning, HOA), (3) data is accurate by cross-referencing with manually researched data, (4) formatting is professional with no layout breaks, (5) disclaimer is present.
  • COMPLIANCE TEST - Fair Housing: Submit a briefing request for a property in a diverse neighborhood. Review the generated briefing text and verify: (1) No references to neighborhood demographics, (2) No school quality rankings, (3) No safety/crime language, (4) No steering language suggesting property is 'ideal for' any group. Run the compliance guardrail manually and confirm it passes.
  • COMPLIANCE TEST - Intentional Violation: Temporarily modify the compiler agent's output to include a prohibited phrase (e.g., 'great schools nearby') and verify the Fair Housing guardrail catches it, flags it as a violation, and either auto-remediates or blocks delivery.
  • INTEGRATION TEST - CRM Delivery: Trigger a briefing for a test contact in Follow Up Boss and verify: (1) the n8n webhook receives the event, (2) the briefing API is called, (3) upon completion a note is created on the FUB contact with the PDF link, (4) the agent receives an email notification with the PDF link.
  • LOAD TEST - Concurrent Briefings: Submit 5 briefing requests simultaneously and verify all complete successfully within 15 minutes without errors. Monitor Docker container resource usage to ensure no OOM kills.
  • REGRESSION TEST - Error Handling: Submit a briefing with an invalid address (e.g., '999 Nonexistent Street, Nowhere, XX 00000') and verify: (1) the system handles the empty API responses gracefully, (2) a briefing is still generated noting data was unavailable, (3) no unhandled exceptions in the worker logs.
  • UAT - Client Validation: Have 3 real estate agents from the client brokerage each submit 2 real property addresses they have manually researched before. Compare the AI briefing output against their manual research. Success criteria: 90%+ accuracy on factual data, all critical findings identified, and agents rate the briefing as 'useful' or 'very useful'.
SMOKE TEST - API Health check
bash
curl -s https://briefings.<CLIENT_DOMAIN>/health
Verify all 5 Docker containers show 'Up (healthy)'
bash
docker compose ps
SMOKE TEST - Connect to PostgreSQL
bash
docker compose exec postgres psql -U briefing_app -d buyer_briefings
Confirm all 4 tables exist (properties, briefings, api_cache, audit_log)
sql
\dt
UNIT TEST
bash
# Shovels permit search endpoint with known address

curl -H "Authorization: Bearer $SHOVELS_API_KEY" "https://api.shovels.ai/v2/permits?address=123+Main+St,+Anytown,+CA+90210"
UNIT TEST - Zoneomics API with known address
bash
curl -H "Authorization: Bearer $ZONEOMICS_API_KEY" "https://api.zoneomics.com/v2/zoning?address=123+Main+St"
INTEGRATION TEST
bash
# Full briefing generation for a sample property

curl -X POST https://briefings.<CLIENT_DOMAIN>/api/v1/briefings \
  -H "Content-Type: application/json" \
  -d "{\"address\":\"<SAMPLE_ADDRESS>\",\"city\":\"<CITY>\",\"state\":\"<STATE>\",\"zip_code\":\"<ZIP>\"}"

Client Handoff

Client Handoff Checklist

...

Training Session (2 hours, in-person or video)

1
System Overview (30 min): Walk through the architecture at a high level — agents search permits, zoning, and HOA data, then compile a PDF briefing. Show the flow from CRM trigger to delivered briefing.
2
How to Request a Briefing (30 min): Demonstrate three methods: (a) Tag a contact in Follow Up Boss with 'generate-briefing' and add a property deal, (b) Use the n8n dashboard to manually trigger, (c) Direct API call for power users. Practice with 2–3 live addresses.
3
Reading the Briefing (20 min): Walk through a sample PDF section by section. Explain what each alert severity means. Show how to use the briefing during a buyer consultation.
4
Limitations & Best Practices (20 min): Explain data coverage gaps (some jurisdictions have limited permit data), typical processing time (3–5 minutes), and that all information should be independently verified. Emphasize the briefing is a starting point, not a substitute for professional inspections.
5
Fair Housing Compliance (10 min): Explain the compliance guardrails and why certain language is excluded. Remind agents not to add discriminatory annotations when sharing briefings.
6
Troubleshooting & Support (10 min): Show how to check briefing status via the API, what to do if a briefing fails (re-submit or contact MSP), and escalation contacts.

Documentation to Leave Behind

  • User Quick Start Guide (1-page PDF): Step-by-step for requesting a briefing from Follow Up Boss
  • Sample Briefing (annotated): A marked-up example explaining each section
  • FAQ Document: Covering common questions (processing time, data sources, accuracy, coverage areas)
  • Support Contact Card: MSP support email, phone, portal URL, and SLA response times
  • Admin Guide (for broker/office manager): How to add/remove agents, change branding, and view usage reports

Success Criteria Review

Maintenance

Ongoing Maintenance Responsibilities

Weekly (15 minutes)

  • Review LangSmith dashboard for failed agent executions and high-latency traces
  • Check Docker container health: docker compose ps — all containers should show 'Up (healthy)'
  • Review Celery task queue for stuck or failed tasks: docker compose exec worker celery inspect active
  • Check disk usage: df -h — alert if any volume exceeds 80%
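The weekly checklist above can be collected into one script so the L1 technician runs a single command. This is a sketch: the `worker` and compose service names follow the stack described in this guide, and the 80% threshold mirrors the checklist.

```shell
#!/usr/bin/env bash
# Weekly health check: container status, Celery activity, disk headroom.

# Print any mounted volume whose use% exceeds the threshold, reading
# `df -h`-style output on stdin (header line is skipped).
disk_over_threshold() {
  local threshold="$1"
  awk -v t="$threshold" 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > t) print $6 " at " $5 "%" }'
}

weekly_check() {
  docker compose ps                                   # all should be 'Up (healthy)'
  docker compose exec worker celery inspect active    # stuck/failed tasks
  df -h | disk_over_threshold 80                      # flag volumes over 80%
}
```

Anything printed by the disk check (or any container not 'Up') should be investigated before closing the ticket.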

Monthly (1 hour)

  • Review API usage and costs across all providers (OpenAI, Shovels, Zoneomics, ATTOM)
  • Compare actual costs to client billing to ensure margins are maintained
  • Review and rotate API keys if any are approaching expiration
  • Review FairSentry/compliance scan results for any recurring pattern issues

Run the following command to report briefing volume to client:

Report briefing volume for the past 30 days
bash
docker compose exec postgres psql -U briefing_app -d buyer_briefings -c "SELECT COUNT(*) FROM briefings WHERE status = 'completed' AND created_at > NOW() - INTERVAL '30 days';"

Clean expired API cache entries:

Remove expired API cache entries
sql
DELETE FROM api_cache WHERE expires_at < NOW();
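Rather than running this by hand each month, the cleanup can be scheduled. The cron entry below is a sketch: the install path `/opt/briefings` and the Sunday 03:00 schedule are assumptions to adjust for the deployment.

```shell
# Example crontab entry automating the cache cleanup above.
# m h dom mon dow  command
0 3 * * 0  cd /opt/briefings && docker compose exec -T postgres psql -U briefing_app -d buyer_briefings -c "DELETE FROM api_cache WHERE expires_at < NOW();"
```

Note the `-T` flag on `docker compose exec`, which disables TTY allocation so the command runs cleanly from cron.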

Update Docker images for Redis, PostgreSQL, and n8n:

Pull latest Docker images and restart services
bash
docker compose pull && docker compose up -d

Quarterly (2 hours)

  • Update Python dependencies: review changelogs for crewai, openai, langchain for breaking changes, then pip install --upgrade and test in staging
  • Review LLM model performance: check if newer models (e.g., GPT-4.1-mini) could reduce costs without quality loss
  • Review briefing quality with 3–5 random samples — verify data accuracy against manual checks
  • Update PDF template if client has rebranded or changed contact information
  • Test disaster recovery: restore from backup on a fresh instance

Backup PostgreSQL database to S3:

PostgreSQL backup to S3
bash
docker compose exec -T postgres pg_dump -U briefing_app buyer_briefings | gzip > backup_$(date +%Y%m%d).sql.gz \
  && aws s3 cp backup_$(date +%Y%m%d).sql.gz s3://<BACKUP_BUCKET>/backups/
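The quarterly disaster-recovery test needs the restore half of this backup. A sketch follows; the bucket name is a placeholder, the service and database names follow the stack above, and the restore should always target a fresh staging instance, never production.

```shell
#!/usr/bin/env bash
# Disaster-recovery restore sketch for the buyer_briefings database.

# Sanity-check a backup archive before attempting a restore.
backup_ok() {
  gzip -t "$1" 2>/dev/null
}

restore_briefings_db() {
  local backup="$1"
  backup_ok "$backup" || { echo "corrupt or missing backup: $backup" >&2; return 1; }
  gunzip -c "$backup" | docker compose exec -T postgres psql -U briefing_app -d buyer_briefings
}

# Usage (after `aws s3 cp s3://<BACKUP_BUCKET>/backups/backup_YYYYMMDD.sql.gz .`):
#   restore_briefings_db backup_YYYYMMDD.sql.gz
```

Running `backup_ok` against each month's archive is a cheap way to catch truncated uploads long before a real recovery is needed.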

As-Needed

  • API Provider Changes: Shovels, Zoneomics, and ATTOM may update their API schemas. Subscribe to their developer newsletters and update client code within 2 weeks of breaking change announcements.
  • OpenAI Model Deprecation: OpenAI typically gives 6+ months notice before deprecating models. Plan migration to successor model within 1 month of announcement.
  • Fair Housing Regulation Updates: Monitor HUD.gov for new guidance on AI in real estate. Update compliance guardrails within 30 days of new guidance.
  • Client Market Expansion: If client expands to new geographic markets, verify Shovels and Zoneomics coverage for new jurisdictions and configure additional API access if needed.

SLA Considerations

  • Uptime Target: 99.5% (allows ~3.6 hours downtime/month for maintenance)
  • Briefing Generation SLA: 95% of briefings completed within 5 minutes; 99% within 10 minutes
  • Support Response Time: P1 (system down): 1 hour. P2 (briefing failures): 4 hours. P3 (quality issues): 1 business day
  • Escalation Path: L1 (MSP help desk) -> L2 (MSP senior technician with server access) -> L3 (development team for code changes)

Monitoring & Alerting

  • Set up UptimeRobot or AWS CloudWatch to monitor the /health endpoint every 5 minutes
  • Configure alerts for: container restarts, disk >80%, Celery queue depth >20, API error rate >5%
  • Set up monthly automated reports emailed to client: briefings generated, average processing time, API costs
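The numeric thresholds above can be expressed as a single evaluation helper, useful when wiring a custom alert script instead of CloudWatch alarms. This is a sketch: the metric values would come from `df`, the Celery queue, and the app's error metrics, and the function covers three of the four alert conditions (container restarts are best detected by the monitor itself).

```shell
#!/usr/bin/env bash
# Evaluate the alerting thresholds: disk >80%, queue depth >20,
# API error rate >5%. Prints the first breached threshold, or
# returns 1 when everything is healthy.
needs_alert() {
  local disk_pct="$1" queue_depth="$2" error_rate_pct="$3"
  if [ "$disk_pct" -gt 80 ]; then
    echo "disk ${disk_pct}% exceeds 80%"
  elif [ "$queue_depth" -gt 20 ]; then
    echo "Celery queue depth ${queue_depth} exceeds 20"
  elif awk -v e="$error_rate_pct" 'BEGIN { exit !(e > 5) }'; then
    echo "API error rate ${error_rate_pct}% exceeds 5%"
  else
    return 1
  fi
}
```

Wire the non-empty output into the MSP's paging channel; a silent run means all three thresholds are within bounds.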

Alternatives

No-Code Approach with n8n + OpenAI Only

Replace the CrewAI multi-agent architecture with a series of n8n workflows that directly call OpenAI's API with structured prompts for each research step. Each workflow node handles a specific API call (Shovels, Zoneomics, ATTOM) followed by an OpenAI node that processes the results. No Python development required — the entire solution is built visually in n8n.

LangGraph + LangSmith Full Stack

Replace CrewAI with LangChain's LangGraph framework for agent orchestration, with LangSmith for full observability. LangGraph provides more granular control over agent state machines, conditional routing, and human-in-the-loop approval steps. All agents are implemented as LangGraph nodes with explicit state transitions.

OpenAI Agents SDK (Lightweight)

Use OpenAI's native Agents SDK instead of CrewAI for agent orchestration. The Agents SDK provides a minimalist framework with Agents, Handoffs, and Guardrails as core primitives. Each research function is a tool, and agent handoffs manage the research-to-compilation flow.

Pre-Built SaaS: RealReports or Similar

Instead of building a custom system, subscribe to an existing AI-powered property research SaaS product and integrate it with the client's CRM. Products like RealReports, Restb.ai, or HouseCanary offer property intelligence APIs that can be embedded into workflows.

On-Premises Self-Hosted LLM (Ollama + Llama 3)

Replace OpenAI API calls with a self-hosted open-source LLM (e.g., Meta Llama 3) served via Ollama on the on-premises Dell PowerEdge T360 fitted with a discrete GPU. Eliminates all LLM API costs and keeps data fully on-premises. Size the GPU to the model: a 16GB card such as the NVIDIA RTX A4000 comfortably serves Llama 3 8B, while the 70B variant needs roughly 40GB+ of VRAM even at 4-bit quantization, so budget for substantially more GPU memory if 70B-class quality is required.

Want early access to the full toolkit?