Intelligence & insights · 53 min read

Implementation Guide: Synthesize audience research and competitor messaging into strategic positioning recommendations

Step-by-step implementation guide for deploying AI to synthesize audience research and competitor messaging into strategic positioning recommendations for Marketing & Creative Agencies clients.

Hardware Procurement

Analyst Workstation

Dell Latitude 5550 (i7-1365U, 16GB RAM, 512GB SSD) | Qty: 2

$1,150 per unit MSP cost / $1,500 suggested resale

Primary workstations for agency strategists running multiple browser tabs with dashboards, competitive analysis tools, and AI-generated reports. 16GB RAM minimum required for smooth operation of Semrush, SparkToro, Notion, and Looker Studio simultaneously.

External Monitor - Dual Setup

Dell U2723QE UltraSharp 27" 4K USB-C Hub Monitor | Qty: 4

$480 per unit MSP cost / $620 suggested resale

Dual-monitor setup per analyst workstation enables side-by-side comparison of competitor messaging, audience data visualization, and AI-generated positioning recommendations. USB-C hub reduces cable clutter and provides laptop charging.

Network Attached Storage (Optional)

Synology DS423+ with Seagate IronWolf Drives

Synology DS423+ (diskless) with 2x Seagate IronWolf 4TB (ST4000VN006) | Qty: 1

$620 unit + $180 drives = $800 MSP cost / $1,100 suggested resale

Centralized on-premises archive for competitive intelligence reports, client deliverables, historical positioning documents, and exported data snapshots. Provides automated backup and RAID 1 redundancy. Optional—cloud storage (Google Drive/SharePoint) can substitute.

Software Procurement

Semrush Guru Plan

Semrush | Guru Plan

$249.95/month ($208.33/month billed annually at $2,499.96/year)

Core competitive intelligence engine: keyword research, competitor SEO/PPC analysis, content gap analysis, market explorer, position tracking. Guru tier required for historical data, content marketing toolkit, and multi-location tracking. Provides the foundational competitive data that feeds into LLM synthesis.

SparkToro Standard Plan

SparkToro | Standard Plan

$112/month ($84/month billed annually)

Audience intelligence platform that identifies where target audiences gather online, what content they consume, which social accounts they follow, and what language they use. Provides the audience research half of the positioning equation. Essential for understanding audience psychographics without expensive primary research.

Competely Pro Plan

Competely | Pro Plan

$79/month

AI-powered competitive analysis tool that generates instant analysis across 100+ data points including positioning, messaging, pricing, features, audience sentiment, and market perception. Produces structured competitive profiles that serve as direct input to the LLM synthesis pipeline.

OpenAI API (GPT-5.4 mini + GPT-4.1)

OpenAI | GPT-5.4 mini + GPT-4.1

$75–$150/month estimated (GPT-5.4 mini: $0.15/$0.60 per MTok in/out; GPT-4.1: $3/$12 per MTok in/out)

Primary LLM engine for synthesizing audience and competitive data into strategic positioning recommendations. GPT-5.4 mini handles high-volume summarization and data extraction. GPT-4.1 handles deep strategic synthesis, positioning framework generation, and nuanced recommendation writing.

Anthropic Claude API

Anthropic | Claude Sonnet 4

$30–$75/month estimated ($3/$15 per MTok in/out)

Secondary LLM provider for redundancy and for tasks requiring different analytical perspective. Claude excels at nuanced strategic writing and handling long competitive analysis documents. Used as fallback when OpenAI has outages and for A/B testing positioning recommendation quality.

Zapier Professional Plan

Zapier | Professional Plan | Qty: 1

$69/month billed annually (750 tasks/month)

Workflow automation backbone connecting all data sources to LLM APIs and delivery channels. Triggers on competitor alerts, schedules recurring analysis runs, routes outputs to Notion, Slack, email, and client dashboards. Professional tier required for multi-step Zaps and custom logic paths.

Notion Business Plan

Notion | Per-seat SaaS, monthly subscription | Qty: 5 users

$20/user/month (estimate 5 users = $100/month)

Central knowledge base and client delivery platform. Houses competitive intelligence wikis, positioning strategy documents, historical trend analysis, and AI-generated reports. Business plan includes built-in AI features (GPT-4.1 and Claude Sonnet 4.6) for on-the-fly document enhancement. Client-facing pages provide transparent strategy documentation.

Google Workspace Business Starter

Google | Per-seat SaaS, monthly subscription

$7/user/month

Collaborative document creation (Google Docs/Sheets) for strategy deliverables, Looker Studio for competitive dashboards, Google Drive for file storage and sharing. SSO provider for all other SaaS tools via Google OAuth.

Budget Semrush Alternative (Optional)

$52/month

Budget alternative to Semrush for smaller agencies. Tracks up to 750 keywords across 10 projects. Provides core competitive SEO data at approximately one-fourth the cost of Semrush Guru. Recommended only for agencies with fewer than 5 clients.

Prerequisites

  • Reliable business internet connection with 50+ Mbps download speed (all tools are cloud SaaS)
  • Google Workspace or Microsoft 365 tenant configured with SSO for user authentication across all platforms
  • Active business credit card or payment method for SaaS subscriptions and API billing
  • Designated project lead at the agency (typically a senior strategist or account director) who will own the positioning output process
  • List of 3–5 primary competitors per client account (minimum 3 clients to justify the investment)
  • Existing brand positioning documents, messaging guidelines, and target audience personas for each client (even if informal)
  • Chrome or Edge browser (latest version) installed on all analyst workstations
  • Admin access to the agency's existing project management tool (Asana, Monday.com, Teamwork, or equivalent) for integration
  • Slack workspace or Microsoft Teams environment for real-time alert delivery
  • Written approval from agency leadership for AI-assisted strategic work, including acknowledgment of AI usage disclosure requirements
  • Data Processing Agreement (DPA) templates prepared for all SaaS vendor sign-ups to ensure GDPR/CCPA compliance
  • Python 3.10+ installed on at least one workstation (for advanced custom scripting in later phases; not required for basic deployment)
  • Git client installed and GitHub/GitLab account provisioned (for version-controlling prompt templates and automation configurations)

Installation Steps

...

Step 1: Provision Core SaaS Accounts and Configure SSO

Create accounts for all core platforms using the agency's Google Workspace SSO. This ensures centralized authentication, simplified offboarding, and audit trail compliance. Set up each platform with the agency's billing information and assign admin roles to the MSP service account and the agency's designated project lead.

1. Navigate to https://www.semrush.com and create a Guru account using Google SSO
2. Navigate to https://sparktoro.com and create a Standard account using Google SSO
3. Navigate to https://competely.ai and create a Pro account
4. Navigate to https://platform.openai.com/signup and create an Organization account
5. Navigate to https://console.anthropic.com and create a team account
6. Navigate to https://zapier.com and create a Professional account using Google SSO
7. Navigate to https://www.notion.so and create a Business workspace using Google SSO
8. For OpenAI: set up billing at https://platform.openai.com/account/billing
9. For Anthropic: set up billing at https://console.anthropic.com/settings/billing
10. Set monthly spending limits: OpenAI $200/month hard cap, Anthropic $100/month hard cap
Note

Always use the agency's Google Workspace domain for SSO where supported. This creates a unified identity layer. For OpenAI and Anthropic, set conservative spending limits initially—you can increase later. Record all account credentials in the MSP's password manager (e.g., IT Glue, Hudu, or 1Password Business). Ensure the MSP has a separate admin account distinct from agency user accounts for ongoing management.

Step 2: Configure Semrush Guru — Competitive Landscape Setup

Set up Semrush as the primary competitive SEO and digital marketing intelligence source. Create a project for each client account, configure competitor tracking, set up position tracking for target keywords, and configure automated competitive reports. This provides the foundational competitive data layer.

1. Create New Project → enter client's root domain
2. Position Tracking → add target keywords (50–100 per client); set target location (country/city as appropriate); add 5 competitor domains for tracking
3. Organic Research → enter each competitor domain → export top keywords CSV
4. Market Explorer → enter client's domain → identify market segment
5. Content Gap → input client domain vs. 4 competitors → export gaps CSV
6. Set up automated PDF report (Settings → My Reports → Schedule): weekly cadence, delivered to project lead email + MSP admin
7. Configure Brand Monitoring → add client brand name + competitor brand names
Optional API call to export competitor keyword data for automation
bash
curl -s 'https://api.semrush.com/?type=domain_organic&key=YOUR_API_KEY&domain=competitor.com&database=us&display_limit=100&export_columns=Ph,Po,Nq,Cp,Ur' > competitor_keywords.csv
Note

Semrush Guru plan allows up to 15 projects — allocate one per client. The API key is found under Semrush → Subscription Info → API tab. API calls are limited based on your plan (Guru: 5,000 results per report, 30 reports/day via API). Schedule exports during off-peak hours. The Content Gap report is the single most valuable output for positioning analysis — always prioritize this.
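The raw API response can be parsed before it feeds the LLM pipeline. A minimal sketch, assuming Semrush's semicolon-delimited export format with a header row (verify the delimiter and column names against your actual export):

```python
import csv
import io

def parse_semrush_export(raw_text):
    """Parse a Semrush analytics export (assumed semicolon-delimited,
    header row first) into a list of dicts keyed by column name."""
    reader = csv.DictReader(io.StringIO(raw_text), delimiter=';')
    return list(reader)

# Example mirroring the Ph,Po,Nq,Cp,Ur columns requested in the curl call above
sample = (
    "Keyword;Position;Search Volume;CPC;Url\n"
    "crm software;3;74000;12.50;https://competitor.com/crm\n"
    "sales pipeline tool;7;5400;8.10;https://competitor.com/pipeline\n"
)
rows = parse_semrush_export(sample)
print(rows[0]['Keyword'], rows[0]['Position'])  # → crm software 3
```

The resulting list of dicts can be serialized straight into the LLM prompt or written to Google Drive alongside the manual exports.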

Step 3: Configure SparkToro — Audience Intelligence Setup

Set up SparkToro audience research for each client's target market. Create saved searches for each client's audience segments, export audience profiles, and establish a regular research cadence. SparkToro provides the 'who is the audience and what do they care about' layer.

1. Search by 'My audience frequently uses the hashtag: [client industry hashtag]'
2. Search by 'My audience frequently talks about: [client industry topic]'
3. Search by 'My audience follows the social account: [competitor social handle]'
4. For each search, review and export: Demographics tab → export CSV; social accounts they follow → export top 50; websites they visit → export top 50; podcasts they listen to → export top 20; YouTube channels they watch → export top 20; subreddits they engage with → export top 20; hidden gems (low-follower, high-engagement accounts) → export
5. Save each search as a named Audience (e.g., 'ClientX - Primary Audience')
6. Repeat for secondary audiences / competitor audiences
7. Export all data to Google Drive folder: /Clients/[ClientName]/Audience_Research/
Note

SparkToro Standard plan allows 150 searches/month — budget approximately 10 searches per client per month for ongoing monitoring. The most powerful feature is comparing your client's audience to a competitor's audience — this reveals positioning gaps. Always run searches from both the client perspective AND each major competitor's audience perspective. Export data to a structured Google Drive folder for LLM ingestion later.
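The client-vs-competitor audience comparison described above can also be run offline against the exported follow lists. A sketch (the function and field names are illustrative, not SparkToro's schema):

```python
def audience_overlap(client_accounts, competitor_accounts):
    """Compare two exported follow lists (e.g. 'social accounts they
    follow' CSV columns) and split them into shared vs. unique accounts.
    Accounts unique to a competitor's audience often signal positioning gaps."""
    client_set = {a.strip().lower() for a in client_accounts}
    comp_set = {a.strip().lower() for a in competitor_accounts}
    return {
        'shared': sorted(client_set & comp_set),
        'client_only': sorted(client_set - comp_set),
        'competitor_only': sorted(comp_set - client_set),
    }

result = audience_overlap(
    ['@hubspot', '@semrush', '@randfish'],
    ['@semrush', '@gartner', '@randfish'],
)
print(result['competitor_only'])  # → ['@gartner']
```

The `competitor_only` list is the interesting one for positioning: it names voices the competitor's audience trusts that the client's audience does not yet follow.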

Step 4: Configure Competely — AI Competitive Analysis

Set up Competely for rapid AI-powered competitive analysis for each client. Generate comprehensive competitive profiles across positioning, messaging, pricing, features, and market perception. These structured outputs become the primary input for the LLM synthesis pipeline.

1. All steps are browser-based at https://competely.ai
2. Per client account: enter the client company URL as 'Your Company'
3. Enter the primary competitor URL (repeat for each competitor, up to 5)
4. Generate the full competitive analysis → review all sections: Marketing & Positioning, Pricing Comparison, Product Comparison, Customer Sentiment, Online Presence, Technical Comparison
5. Export each analysis as PDF and structured text
6. Save to Google Drive: /Clients/[ClientName]/Competitive_Intel/Competely/
7. Set a calendar reminder to regenerate analyses monthly
8. Competely does not have a public API; exports are manual or can be automated via browser automation (Puppeteer/Playwright) if needed
Note

Competely generates AI analysis using its own models — the output quality is good for initial competitive snapshots but should be validated by agency strategists. The key value is the structured format: it provides clean, categorized competitive data that feeds directly into our custom LLM prompts. Generate fresh analyses monthly or whenever a significant competitor change is detected. Competely's pro plan limits should cover 3–5 full analyses per day.

Step 5: Set Up OpenAI and Anthropic API Keys and Test Connectivity

Configure API access for both LLM providers. Generate API keys, set up organization-level billing and rate limits, test basic connectivity, and verify model access. Both providers are used: OpenAI for primary synthesis, Anthropic as backup and for specific analytical tasks.

bash
# Install Python dependencies on the admin workstation
pip install openai anthropic python-dotenv requests

# Create project directory
mkdir -p ~/positioning-intelligence/{config,prompts,scripts,outputs}
cd ~/positioning-intelligence

# Create .env file with API keys
cat > .env << 'EOF'
OPENAI_API_KEY=sk-proj-xxxxxxxxxxxxxxxxxxxx
OPENAI_ORG_ID=org-xxxxxxxxxxxxxxxxxxxx
ANTHROPIC_API_KEY=sk-ant-xxxxxxxxxxxxxxxxxxxx
EOF

# Test OpenAI connectivity
python3 -c "
import openai, os
from dotenv import load_dotenv
load_dotenv()
client = openai.OpenAI()
response = client.chat.completions.create(
    model='gpt-5.4-mini',
    messages=[{'role':'user','content':'Respond with OK if you can read this.'}],
    max_tokens=10
)

print(f'OpenAI OK: {response.choices[0].message.content}')
print(f'Model: {response.model}')
print(f'Tokens used: {response.usage.total_tokens}')
"

# Test Anthropic connectivity
python3 -c "
import anthropic, os
from dotenv import load_dotenv
load_dotenv()
client = anthropic.Anthropic()
response = client.messages.create(
    model='claude-sonnet-4-20250514',
    max_tokens=10,
    messages=[{'role':'user','content':'Respond with OK if you can read this.'}]
)

print(f'Anthropic OK: {response.content[0].text}')
print(f'Model: {response.model}')
print(f'Input tokens: {response.usage.input_tokens}')
"
Critical

Never commit .env files to version control. Add .env to .gitignore immediately. Store a backup of API keys in the MSP's password manager (IT Glue/Hudu). Set monthly budget limits in both OpenAI ($200) and Anthropic ($100) dashboards. The organization ID for OpenAI is found at https://platform.openai.com/account/organization. Monitor usage weekly during the first month to calibrate actual costs per client analysis run.

Step 6: Build Notion Knowledge Base Structure

Create the Notion workspace structure that serves as both the internal knowledge base and the client-facing delivery platform. This is where all competitive intelligence, audience research, and AI-generated positioning recommendations are organized, reviewed, and shared.

  • All browser-based in Notion
  • ROOT: [Agency Name] Positioning Intelligence
  • ├── 📋 Dashboard (linked database views, status overview)
  • ├── 📁 Clients/
  • │ ├── 📁 [Client A]/
  • │ │ ├── 🎯 Brand Overview (current positioning, personas, brand voice)
  • │ │ ├── 👥 Audience Intelligence (SparkToro data, demographics, psychographics)
  • │ │ ├── 🏢 Competitive Landscape/
  • │ │ │ ├── Competitor 1 Profile
  • │ │ │ ├── Competitor 2 Profile
  • │ │ │ ├── ... (up to 5 competitors)
  • │ │ │ └── Competitive Matrix (comparison table)
  • │ │ ├── 📊 Positioning Recommendations/
  • │ │ │ ├── [Date] Weekly Positioning Brief
  • │ │ │ ├── [Date] Monthly Strategy Report
  • │ │ │ └── Positioning Evolution Timeline
  • │ │ ├── 📈 SEO & Content Gaps (Semrush data)
  • │ │ └── 📝 Action Items & Briefs
  • │ ├── 📁 [Client B]/
  • │ │ └── (same structure)
  • │ └── ...
  • ├── 📁 Templates/
  • │ ├── Competitor Profile Template
  • │ ├── Positioning Brief Template
  • │ ├── Monthly Strategy Report Template
  • │ └── Audience Research Summary Template
  • ├── 📁 Prompt Library/
  • │ ├── Audience Synthesis Prompt
  • │ ├── Competitor Messaging Extraction Prompt
  • │ ├── Positioning Gap Analysis Prompt
  • │ └── Strategic Recommendation Prompt
  • └── 📁 Process Documentation/
  •     ├── Workflow Runbook
  •     ├── Tool Access & Credentials (linked to password manager)
  •     └── Compliance & Data Handling Policy
  • Configure database properties for Positioning Recommendations:
  • - Status: Draft | In Review | Approved | Delivered
  • - Client: relation to Clients database
  • - Analysis Date: date
  • - Competitors Analyzed: multi-select
  • - Confidence Score: number (1-10)
  • - Strategist Review: person
  • - Tags: multi-select (messaging, pricing, audience, SEO, social)
Note

Use Notion templates extensively — create a Competitor Profile template and a Positioning Brief template that auto-populate with the required sections. Enable Notion AI on the Business plan for quick on-the-fly editing of AI-generated content. Share client-specific subpages (not the whole workspace) with external client contacts using Notion's guest permissions. Set up a database view filtered by 'Status = Delivered' as the client-facing portal.
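Beyond Zapier, briefs can be pushed into the Positioning Recommendations database directly through the Notion API's pages endpoint. A sketch; the property names ('Name', 'Status', 'Analysis Date') and the env var names are assumptions that must match the database schema you configured above:

```python
import os
import requests

NOTION_VERSION = '2022-06-28'

def build_brief_payload(database_id, title, body_text, analysis_date):
    """Assemble a pages.create payload for a Draft positioning brief.
    Property names are assumed; match them to your actual database."""
    return {
        'parent': {'database_id': database_id},
        'properties': {
            'Name': {'title': [{'text': {'content': title}}]},
            'Status': {'select': {'name': 'Draft'}},
            'Analysis Date': {'date': {'start': analysis_date}},
        },
        'children': [{
            'object': 'block',
            'type': 'paragraph',
            # Notion caps rich_text content at 2000 chars per block
            'paragraph': {'rich_text': [{'text': {'content': body_text[:2000]}}]},
        }],
    }

def create_positioning_brief(title, body_text, analysis_date):
    payload = build_brief_payload(
        os.environ['NOTION_RECS_DB_ID'], title, body_text, analysis_date)
    resp = requests.post(
        'https://api.notion.com/v1/pages',
        headers={
            'Authorization': f"Bearer {os.environ['NOTION_API_KEY']}",
            'Notion-Version': NOTION_VERSION,
            'Content-Type': 'application/json',
        },
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()['url']
```

Note that if you model Status as Notion's dedicated status property type rather than a select, the payload key changes accordingly.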

Step 7: Configure Zapier Automation Workflows

Build the automation workflows that connect data sources to LLM processing and deliver outputs to the Notion knowledge base and Slack channels. This is the integration backbone of the entire system. We create three core Zaps: (1) Scheduled Competitive Analysis Pipeline, (2) Real-Time Competitor Alert Router, and (3) Audience Research Sync.

  • All configured in Zapier web UI at https://zapier.com/app/editor
  • === ZAP 1: Weekly Competitive Analysis Pipeline ===
  • Step 1 (Trigger): Schedule by Zapier → every Monday at 7:00 AM
  • Step 2: Webhooks by Zapier → GET Semrush API — URL: https://api.semrush.com/?type=domain_organic&key={{api_key}}&domain={{competitor_domain}}&database=us&display_limit=50
  • Step 3: Code by Zapier (Python) → Parse and format Semrush CSV response
  • Step 4: Webhooks by Zapier → POST to OpenAI API — URL: https://api.openai.com/v1/chat/completions — Headers: Authorization: Bearer {{openai_key}}, Content-Type: application/json — Body: {"model":"gpt-4.1","messages":[{"role":"system","content":"{{system_prompt}}"},{"role":"user","content":"{{formatted_data}}"}],"max_tokens":4000}
  • Step 5: Notion → Create Database Item in Positioning Recommendations — Map: Title, Content (from OpenAI response), Status='Draft', Date=today
  • Step 6: Slack → Send Channel Message → #competitive-intel — Message: 'New positioning analysis for [Client] is ready for review in Notion'
  • === ZAP 2: Competitor Alert Router ===
  • Step 1 (Trigger): Email by Zapier → monitor the MSP admin inbox for Semrush Brand Monitoring alerts
  • Step 2: Filter → Only continue if email contains competitor brand names
  • Step 3: Webhooks → POST to OpenAI API (gpt-5.4-mini for speed) — Prompt: 'Summarize this competitive alert and assess positioning impact: {{email_body}}'
  • Step 4: Slack → Send Channel Message → #competitor-alerts
  • Step 5: Notion → Update Competitor Profile page with latest intel
  • === ZAP 3: Monthly Audience Research Sync ===
  • Step 1 (Trigger): Schedule → 1st of each month at 9:00 AM
  • Step 2: Google Drive → Find File → latest SparkToro export CSV
  • Step 3: Code by Zapier → Parse CSV and format for LLM
  • Step 4: Webhooks → POST to OpenAI API (gpt-4.1) — Prompt: audience synthesis prompt from prompt library
  • Step 5: Notion → Update Audience Intelligence page for each client
  • Step 6: Slack → notify → #strategy-updates
Note

Zapier Professional allows 750 tasks/month. Budget roughly: Zap 1 uses ~32 tasks per weekly run (4 billable steps × ~8 clients), or ~130 tasks/month; Zap 2 ~40 tasks/month (variable); Zap 3 ~50 tasks/month. Total ~220 tasks/month, well within limits. If you exceed limits, upgrade to Zapier Team ($99/month for 2,000 tasks) or migrate high-volume workflows to Make.com or n8n. Test each Zap individually before enabling automation. Use Zapier's built-in error handling to send failure notifications to the MSP admin Slack channel.
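Zap 1's Step 3 ('Code by Zapier') follows Zapier's Python-step convention: the fields mapped in the editor arrive as an `input_data` dict of strings, and the step must set an `output` dict for downstream steps. A sketch, with `csv_body` as an assumed field name mapped from the Semrush webhook step:

```python
# Code by Zapier (Python) step. Zapier injects `input_data` at runtime;
# it is stubbed here so the snippet runs standalone.
input_data = {
    'csv_body': 'Keyword;Position;Search Volume\nai agency;4;2900\nbrand strategy;9;1600'
}

lines = input_data['csv_body'].strip().splitlines()
header = lines[0].split(';')
rows = [dict(zip(header, line.split(';'))) for line in lines[1:]]

# Compact, LLM-friendly formatting for the OpenAI webhook step that follows
formatted = '\n'.join(
    f"{r['Keyword']} | pos {r['Position']} | vol {r['Search Volume']}" for r in rows
)
output = {'formatted_data': formatted, 'row_count': str(len(rows))}
```

In the Zap editor, `formatted_data` then maps into the `{{formatted_data}}` placeholder of the OpenAI request body shown in Step 4.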

Step 8: Deploy Custom LLM Prompt Library

Create and test the four core prompts that power the positioning intelligence system. These prompts are the intellectual property of the solution — they transform raw competitive and audience data into strategic positioning recommendations. Store all prompts in both the Notion Prompt Library and in a Git repository for version control.

bash
# Create prompt files in the project directory
cd ~/positioning-intelligence/prompts

# Create the four core prompt files (content provided in custom_ai_components)
touch competitor_messaging_extraction.md
touch audience_synthesis.md
touch positioning_gap_analysis.md
touch strategic_recommendation.md

# Initialize git repo for version control
cd ~/positioning-intelligence
git init
echo '.env' > .gitignore
echo '__pycache__/' >> .gitignore
echo 'outputs/' >> .gitignore
git add .
git commit -m 'Initial prompt library and project structure'

# Create a prompt testing script
cat > scripts/test_prompts.py << 'PYEOF'
import openai, json, os, sys
from dotenv import load_dotenv
load_dotenv()

client = openai.OpenAI()

def test_prompt(prompt_file, test_data_file, model='gpt-4.1'):
    with open(f'prompts/{prompt_file}', 'r') as f:
        system_prompt = f.read()
    with open(f'test_data/{test_data_file}', 'r') as f:
        user_data = f.read()
    
    response = client.chat.completions.create(
        model=model,
        messages=[
            {'role': 'system', 'content': system_prompt},
            {'role': 'user', 'content': user_data}
        ],
        max_tokens=4000,
        temperature=0.3
    )
    
    result = response.choices[0].message.content
    tokens = response.usage.total_tokens
    # pricing here assumes GPT-4.1 rates ($3 in / $12 out per MTok); adjust when testing other models
    cost_est = (response.usage.prompt_tokens * 3 + response.usage.completion_tokens * 12) / 1_000_000
    
    print(f'Model: {model}')
    print(f'Tokens: {tokens}')
    print(f'Est. cost: ${cost_est:.4f}')
    print(f'Output length: {len(result)} chars')
    print('---OUTPUT---')
    print(result)
    return result


if __name__ == '__main__':
    prompt = sys.argv[1] if len(sys.argv) > 1 else 'competitor_messaging_extraction.md'
    data = sys.argv[2] if len(sys.argv) > 2 else 'sample_competitor_data.txt'
    test_prompt(prompt, data)
PYEOF

# Create test data directory
mkdir -p test_data
echo 'Place sample Semrush exports, SparkToro exports, and Competely reports here' > test_data/README.md
Note

Prompt engineering is the highest-impact activity in this entire implementation. Budget 8–12 hours for initial prompt development and testing. Use temperature=0.3 for analytical/strategic outputs (lower creativity, higher consistency). Always version-control prompts with meaningful commit messages (e.g., 'v2.1: Added SWOT framework to positioning gap prompt'). Test each prompt with data from at least 3 different clients/industries before deploying to production. The prompts in the custom_ai_components section below contain the complete text.
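As an illustration of the expected shape (not the production prompt text, which lives in the custom_ai_components section), a positioning-gap system prompt skeleton might look like:

```markdown
# Positioning Gap Analysis — System Prompt (illustrative skeleton)

You are a senior brand strategist at a marketing agency.

## Inputs you will receive
- Competitor messaging profiles (extracted themes, claims, tone)
- Audience synthesis (demographics, psychographics, channels, language)
- SEO/content gap data (topics competitors rank for that the client does not)

## Your task
1. Map each competitor's core positioning claim.
2. Identify claims the audience cares about that no competitor owns.
3. Rank the top 3 positioning gaps by audience relevance and defensibility.

## Output format
Return markdown with sections: Competitive Map, Unowned Territory, and
Ranked Gaps (with a 1-10 confidence score per gap, matching the
Confidence Score property in the Notion database).
```

Keeping the output format explicit makes the downstream Notion mapping in Zapier deterministic.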

Step 9: Build the End-to-End Analysis Pipeline Script

Create a Python orchestration script that runs the complete positioning analysis pipeline: gathers data from exports, processes through the LLM prompt chain, and outputs structured positioning recommendations. This script can be triggered manually, via cron, or via Zapier webhook.

End-to-end positioning analysis pipeline script with OpenAI/Anthropic fallback
bash
cat > scripts/run_positioning_analysis.py << 'PYEOF'
import openai
import anthropic
import json
import os
import csv
import sys
from datetime import datetime
from dotenv import load_dotenv
from pathlib import Path

load_dotenv()

oai_client = openai.OpenAI()
ant_client = anthropic.Anthropic()

PROMPT_DIR = Path('prompts')
OUTPUT_DIR = Path('outputs')
OUTPUT_DIR.mkdir(exist_ok=True)

def load_prompt(name):
    return (PROMPT_DIR / name).read_text()

def call_openai(system_prompt, user_content, model='gpt-4.1', max_tokens=4000):
    try:
        response = oai_client.chat.completions.create(
            model=model,
            messages=[
                {'role': 'system', 'content': system_prompt},
                {'role': 'user', 'content': user_content}
            ],
            max_tokens=max_tokens,
            temperature=0.3
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f'OpenAI error: {e}, falling back to Anthropic')
        return call_anthropic(system_prompt, user_content)

def call_anthropic(system_prompt, user_content, model='claude-sonnet-4-20250514', max_tokens=4000):
    response = ant_client.messages.create(
        model=model,
        max_tokens=max_tokens,
        system=system_prompt,
        messages=[{'role': 'user', 'content': user_content}]
    )
    return response.content[0].text

def run_analysis(client_name, data_dir):
    print(f'Running positioning analysis for {client_name}...')
    data_path = Path(data_dir)
    
    # Step 1: Load all available data
    all_data = {}
    for f in data_path.glob('*.csv'):
        all_data[f.stem] = f.read_text()
    for f in data_path.glob('*.txt'):
        all_data[f.stem] = f.read_text()
    for f in data_path.glob('*.json'):
        all_data[f.stem] = f.read_text()
    
    # Step 2: Extract competitor messaging (GPT-5.4 mini for speed)
    print('  Step 2: Extracting competitor messaging...')
    competitor_data = '\n\n'.join([f'=== {k} ===\n{v}' for k, v in all_data.items() if 'competitor' in k.lower() or 'competely' in k.lower()])
    messaging_extraction = call_openai(
        load_prompt('competitor_messaging_extraction.md'),
        competitor_data,
        model='gpt-5.4-mini'
    )
    
    # Step 3: Synthesize audience insights (GPT-5.4 mini)
    print('  Step 3: Synthesizing audience insights...')
    audience_data = '\n\n'.join([f'=== {k} ===\n{v}' for k, v in all_data.items() if 'audience' in k.lower() or 'sparktoro' in k.lower()])
    audience_synthesis = call_openai(
        load_prompt('audience_synthesis.md'),
        audience_data,
        model='gpt-5.4-mini'
    )
    
    # Step 4: Positioning gap analysis (GPT-4.1 for depth)
    print('  Step 4: Analyzing positioning gaps...')
    gap_input = f'COMPETITOR MESSAGING:\n{messaging_extraction}\n\nAUDIENCE INSIGHTS:\n{audience_synthesis}\n\nSEO/CONTENT DATA:\n{all_data.get("semrush_content_gap", "No content gap data available")}'
    gap_analysis = call_openai(
        load_prompt('positioning_gap_analysis.md'),
        gap_input,
        model='gpt-4.1'
    )
    
    # Step 5: Generate strategic recommendations (GPT-4.1)
    print('  Step 5: Generating strategic recommendations...')
    strategy_input = f'CLIENT: {client_name}\n\nPOSITIONING GAP ANALYSIS:\n{gap_analysis}\n\nAUDIENCE SYNTHESIS:\n{audience_synthesis}\n\nCOMPETITOR MESSAGING PROFILES:\n{messaging_extraction}'
    recommendations = call_openai(
        load_prompt('strategic_recommendation.md'),
        strategy_input,
        model='gpt-4.1',
        max_tokens=6000
    )
    
    # Step 6: Save outputs
    timestamp = datetime.now().strftime('%Y%m%d_%H%M')
    output_file = OUTPUT_DIR / f'{client_name}_{timestamp}_positioning_brief.md'
    
    full_report = f"""# Strategic Positioning Brief: {client_name}
## Generated: {datetime.now().strftime('%B %d, %Y')}

---

## 1. Competitor Messaging Analysis
{messaging_extraction}

---

## 2. Audience Intelligence Synthesis  
{audience_synthesis}

---

## 3. Positioning Gap Analysis
{gap_analysis}

---

## 4. Strategic Positioning Recommendations
{recommendations}

---
*Generated by Positioning Intelligence System | Review required before client delivery*
"""
    
    output_file.write_text(full_report)
    print(f'Report saved to: {output_file}')
    return str(output_file)

if __name__ == '__main__':
    client = sys.argv[1] if len(sys.argv) > 1 else 'TestClient'
    data = sys.argv[2] if len(sys.argv) > 2 else f'data/{client}'
    run_analysis(client, data)
PYEOF

# Make executable
chmod +x scripts/run_positioning_analysis.py

# Create data directory structure
mkdir -p data/{ClientA,ClientB,ClientC}
Note

This script implements a 4-stage LLM chain: extraction → synthesis → gap analysis → recommendations. Each stage builds on the previous output. Using GPT-5.4 mini for the first two stages (high-volume, lower complexity) and GPT-4.1 for the final two (deeper reasoning required) optimizes cost while maintaining quality. The automatic Anthropic fallback ensures continuity during OpenAI outages. Typical runtime: 2–4 minutes per client. Estimated cost: $0.15–$0.50 per full analysis run.
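The per-run cost claim can be sanity-checked from the per-MTok prices quoted in the Software Procurement section; the token counts per stage below are illustrative assumptions, not measurements:

```python
# Per-MTok prices as quoted in the procurement section: (input $, output $)
PRICES = {
    'gpt-5.4-mini': (0.15, 0.60),
    'gpt-4.1': (3.00, 12.00),
}

def stage_cost(model, tokens_in, tokens_out):
    p_in, p_out = PRICES[model]
    return (tokens_in * p_in + tokens_out * p_out) / 1_000_000

# Assumed token volumes per pipeline stage
run_cost = (
    stage_cost('gpt-5.4-mini', 20_000, 3_000)    # Step 2: messaging extraction
    + stage_cost('gpt-5.4-mini', 20_000, 3_000)  # Step 3: audience synthesis
    + stage_cost('gpt-4.1', 10_000, 4_000)       # Step 4: gap analysis
    + stage_cost('gpt-4.1', 12_000, 6_000)       # Step 5: recommendations
)
print(f'Estimated cost per run: ${run_cost:.3f}')  # → Estimated cost per run: $0.196
```

Even with generous input volumes, the GPT-4.1 stages dominate the bill, which is why the cheap mini model handles the high-volume extraction stages.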

Step 10: Configure Slack Integration for Real-Time Alerts

Set up Slack channels and integrate alert delivery so that competitive intelligence updates, positioning analysis completions, and system notifications are delivered in real-time to the appropriate team members.

  • Create Slack channels (via Slack admin or API):
  • #competitive-intel — positioning analysis outputs and weekly briefs
  • #competitor-alerts — real-time competitor change notifications
  • #strategy-updates — monthly audience research updates
  • #ci-system-admin — system errors, API failures, workflow issues (MSP only)
  • Install a Slack webhook for the positioning analysis script: go to https://api.slack.com/apps → Create New App → From Scratch
  • Name the app: 'Positioning Intelligence Bot'
  • Enable Incoming Webhooks → add a webhook to #competitive-intel
  • Copy the webhook URL
Add Slack notification to the analysis pipeline
bash
cat > scripts/notify_slack.py << 'PYEOF'
import requests
import os
from dotenv import load_dotenv
load_dotenv()

SLACK_WEBHOOK = os.getenv('SLACK_WEBHOOK_URL')

def send_slack_notification(channel_webhook, message, blocks=None):
    payload = {'text': message}
    if blocks:
        payload['blocks'] = blocks
    response = requests.post(channel_webhook, json=payload)
    return response.status_code == 200

def notify_analysis_complete(client_name, notion_url):
    blocks = [
        {'type': 'header', 'text': {'type': 'plain_text', 'text': f'📊 New Positioning Brief: {client_name}'}},
        {'type': 'section', 'text': {'type': 'mrkdwn', 'text': f'A new strategic positioning analysis has been generated and is ready for strategist review.\n\n*Client:* {client_name}\n*Status:* Draft — Awaiting Review'}},
        {'type': 'actions', 'elements': [{'type': 'button', 'text': {'type': 'plain_text', 'text': '📄 View in Notion'}, 'url': notion_url, 'style': 'primary'}]}
    ]
    return send_slack_notification(SLACK_WEBHOOK, f'New positioning brief for {client_name}', blocks)
PYEOF
Add SLACK_WEBHOOK_URL to .env file
bash
echo 'SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX' >> .env
Note

Create separate webhook URLs for each Slack channel if you want different alert types routed to different channels. The MSP-only #ci-system-admin channel should receive all error notifications and API budget alerts. Consider using Slack's scheduled messages feature for weekly digest summaries instead of individual notifications for each client.
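If you do create one webhook per channel as suggested, a small routing table keeps the pipeline code channel-agnostic; the env var names here are assumptions for illustration:

```python
import os

# Hypothetical per-channel webhook routing: one env var per Slack channel,
# with unknown alert types falling back to the MSP admin channel so
# nothing is silently dropped.
CHANNEL_WEBHOOKS = {
    'analysis':         'SLACK_WEBHOOK_COMPETITIVE_INTEL',   # #competitive-intel
    'competitor_alert': 'SLACK_WEBHOOK_COMPETITOR_ALERTS',   # #competitor-alerts
    'audience':         'SLACK_WEBHOOK_STRATEGY_UPDATES',    # #strategy-updates
    'error':            'SLACK_WEBHOOK_CI_SYSTEM_ADMIN',     # #ci-system-admin
}

def webhook_env_for(alert_type):
    """Name of the env var holding the webhook URL for this alert type."""
    return CHANNEL_WEBHOOKS.get(alert_type, 'SLACK_WEBHOOK_CI_SYSTEM_ADMIN')

def webhook_url_for(alert_type):
    return os.getenv(webhook_env_for(alert_type))
```

`send_slack_notification` from `notify_slack.py` can then take `webhook_url_for('error')` instead of a single hard-coded webhook.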

Step 11: Set Up Google Looker Studio Dashboard for Client Reporting

Build a competitive intelligence dashboard in Google Looker Studio that visualizes positioning data, competitive metrics, and trend analysis. This serves as both an internal monitoring tool and a client-facing reporting layer that can be embedded in Notion or shared via URL.

1. All browser-based at https://lookerstudio.google.com
2. Create New Report → 'Positioning Intelligence Dashboard'
3. Add Data Source → Google Sheets connector. Create a Google Sheet 'CI_Data_[AgencyName]' with three tabs: 'Competitive_Metrics' (columns: Date, Client, Competitor, Domain_Authority, Organic_Traffic_Est, Top_Keywords_Count, Content_Gap_Score, Backlink_Count); 'Positioning_Scores' (columns: Date, Client, Category, Score_1to10, Trend_Direction, Notes); 'Audience_Metrics' (columns: Date, Client, Audience_Segment, Engagement_Rate, Top_Channel, Sentiment_Score)
4. Build dashboard pages:
   - Page 1: Executive Overview — Scorecard: overall positioning strength (1-10); Time series: positioning score trend over 12 months; Table: top competitive gaps identified this month
   - Page 2: Competitive Landscape — Bar chart: domain authority comparison (client vs. competitors); Stacked bar: content gap analysis by topic cluster; Table: competitor messaging theme comparison
   - Page 3: Audience Intelligence — Pie chart: audience channel distribution; Table: top audience interests and content preferences; Time series: audience sentiment trend
   - Page 4: Per-Client Detail (parameterized by Client filter)
5. Apply agency branding (logo, colors, fonts)
6. Set refresh schedule: every 12 hours
7. Share: set to 'Anyone with the link can view' for client access
8. Automate data population with a Google Apps Script: in Google Sheets → Extensions → Apps Script, paste the script that pulls from the Semrush API and formats data for the dashboard
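Whichever automation populates the sheet, the rows must match the tab's header order exactly or the Looker Studio charts will misread fields. A minimal Python sketch of that formatting step (the `build_metrics_row` helper and field names are ours, not part of Looker Studio or Semrush):

```python
from datetime import date

# Column order must match the 'Competitive_Metrics' tab header exactly:
# Date, Client, Competitor, Domain_Authority, Organic_Traffic_Est,
# Top_Keywords_Count, Content_Gap_Score, Backlink_Count
METRICS_COLUMNS = [
    "Date", "Client", "Competitor", "Domain_Authority",
    "Organic_Traffic_Est", "Top_Keywords_Count",
    "Content_Gap_Score", "Backlink_Count",
]

def build_metrics_row(client: str, competitor: str, metrics: dict) -> list:
    """Return one sheet row in the dashboard's expected column order."""
    return [
        date.today().isoformat(),
        client,
        competitor,
        metrics.get("domain_authority", ""),
        metrics.get("organic_traffic_est", ""),
        metrics.get("top_keywords_count", ""),
        metrics.get("content_gap_score", ""),
        metrics.get("backlink_count", ""),
    ]

row = build_metrics_row("ClientA", "competitor1.com",
                        {"domain_authority": 54, "organic_traffic_est": 12000})
```

Missing metrics are written as empty strings rather than omitted, so every row stays the same width as the header.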
Note

Looker Studio is free and integrates natively with Google Sheets, making it the most cost-effective dashboarding option. The Google Sheet serves as an intermediary data store — populate it via Zapier (Semrush → Google Sheets step) or via the Apps Script on a schedule. For more sophisticated dashboards, consider Databox ($59/month) or AgencyAnalytics ($79/month) which have native Semrush integrations. Embed individual dashboard pages in Notion using the /embed block.

Step 12: Compliance Configuration and Documentation

Implement data handling policies, obtain Data Processing Agreements from all vendors, configure data retention settings, and create the compliance documentation required for GDPR/CCPA adherence. This step is critical for agencies handling data related to EU or California audiences.

  • Create compliance directory: mkdir -p ~/positioning-intelligence/compliance
  • Data Processing Agreements — collect from each vendor:
    - OpenAI DPA: https://openai.com/policies/data-processing-addendum (sign online)
    - Anthropic DPA: https://www.anthropic.com/policies/data-processing-addendum
    - Semrush DPA: Available in account settings under Privacy
    - SparkToro: Review privacy policy at https://sparktoro.com/privacy
    - Zapier DPA: https://zapier.com/legal/data-processing-addendum
    - Notion DPA: https://www.notion.so/Data-Processing-Addendum-361b1b2034014fba86cf92a5e3f0c8f7
    - Google DPA: https://workspace.google.com/terms/dpa_terms.html
  • Configure OpenAI data settings: Navigate to https://platform.openai.com/account/organization → Under 'Data controls': Disable 'Improve our models' (prevents training on your data) and review data retention policy (API data retained 30 days for abuse monitoring)
Create AI Usage Register document
bash
cat > compliance/ai_usage_register.md << 'EOF'
# AI Usage Register
## Last Updated: [DATE]

| Tool | AI Purpose | Data Types Processed | Data Residency | Retention | DPA Signed |
|-|-|-|-|-|-|
| OpenAI GPT-4.1 | Strategic synthesis | Competitive data, audience demographics | US (Azure) | 30 days | Yes |
| OpenAI GPT-4.1 mini | Data extraction & summarization | SEO metrics, web content | US (Azure) | 30 days | Yes |
| Anthropic Claude | Backup synthesis | Same as OpenAI | US | 30 days | Yes |
| Semrush | SEO/competitive data collection | Public web data | EU/US | Per subscription | Yes |
| SparkToro | Audience intelligence | Public social profiles | US | Per subscription | Yes |
| Competely | AI competitive analysis | Public company data | US | Per subscription | Yes |
| Notion AI | Document enhancement | Strategy documents | US | Per subscription | Yes |
EOF
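Because the register is plain markdown, the quarterly review can be partially automated. A sketch (the `unsigned_dpas` helper is our own, assuming the table layout created above) that flags any row whose 'DPA Signed' cell is not 'Yes':

```python
def unsigned_dpas(register_md: str) -> list:
    """Return tool names from the AI Usage Register whose 'DPA Signed' cell is not 'Yes'."""
    flagged = []
    for line in register_md.splitlines():
        if not line.startswith("|") or line.startswith("|-"):
            continue  # skip non-table lines and the |-|-| separator row
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) < 6 or cells[0] == "Tool":
            continue  # skip the header row and malformed rows
        if cells[5] != "Yes":
            flagged.append(cells[0])
    return flagged

sample = """| Tool | AI Purpose | Data Types Processed | Data Residency | Retention | DPA Signed |
|-|-|-|-|-|-|
| OpenAI GPT-4.1 | Synthesis | Competitive data | US | 30 days | Yes |
| NewTool | Testing | Public data | US | 30 days | No |"""
```

Run it against `compliance/ai_usage_register.md` whenever a new tool is added; a non-empty result is the to-do list for the compliance contact.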
Create data retention policy document
bash
cat > compliance/data_retention_policy.md << 'EOF'
# Data Retention Policy - Positioning Intelligence System

| Data Type | Retention Period | Deletion Method |
|-|-|-|
| Raw competitive data exports | 24 months | Automated Google Drive cleanup |
| AI-generated positioning reports | 36 months | Manual archive review |
| Audience research data | 12 months (refresh monthly) | Overwrite with new data |
| API logs and usage data | 6 months | Platform auto-purge |
| Client deliverables (final) | Duration of client contract + 12 months | Manual deletion on offboarding |
EOF
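The retention periods in the policy table can be enforced mechanically rather than by memory. A minimal sketch, assuming artifacts carry a creation date (the `RETENTION_MONTHS` keys and `is_expired` helper are ours, mirroring the table above):

```python
from datetime import date

# Months of retention per the data retention policy table above
RETENTION_MONTHS = {
    "raw_competitive_export": 24,
    "positioning_report": 36,
    "audience_research": 12,
    "api_logs": 6,
}

def is_expired(data_type: str, created: date, today: date) -> bool:
    """True if the artifact is older than its retention window."""
    months = RETENTION_MONTHS[data_type]
    age_months = (today.year - created.year) * 12 + (today.month - created.month)
    return age_months > months

expired = is_expired("api_logs", date(2024, 1, 15), date(2024, 9, 20))
```

A scheduled job can walk the archive directories, call this per file, and either delete or flag for manual review, matching the 'Deletion Method' column.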
Note

DPA collection is often the most time-consuming compliance task — start this in Phase 1. OpenAI's zero-data-retention (ZDR) option is available for enterprise accounts; for standard API accounts, data is retained for 30 days for abuse monitoring but NOT used for training. Anthropic similarly does not train on API data. Document everything in the AI Usage Register and update it whenever a new tool is added. Review quarterly with the agency's compliance contact.

Step 13: End-to-End System Test and Validation

Run a complete end-to-end test using one real client's data to validate the entire pipeline: data collection → processing → LLM synthesis → output delivery → notification. Document results and identify any issues before scaling to all clients.

Full end-to-end pipeline test: data prep, analysis run, output verification, Slack notification, and test logging
bash
cd ~/positioning-intelligence

# 1. Prepare test data for one client
mkdir -p data/TestClient

# Export from Semrush: Organic Research → Competitors tab → Export CSV
# Save as: data/TestClient/semrush_competitor_keywords.csv

# Export from Semrush: Content Gap → Export CSV
# Save as: data/TestClient/semrush_content_gap.csv

# Export from SparkToro: Audience search → Export all tabs
# Save as: data/TestClient/sparktoro_audience_demographics.csv
# Save as: data/TestClient/sparktoro_audience_sources.csv

# Export from Competely: Full analysis → Copy text
# Save as: data/TestClient/competely_competitor_analysis.txt

# 2. Run the full analysis pipeline
python3 scripts/run_positioning_analysis.py TestClient data/TestClient

# 3. Verify output
ls -la outputs/
cat outputs/TestClient_*_positioning_brief.md | head -100

# 4. Check output quality criteria:
# - Does the competitor messaging section identify distinct positioning themes?
# - Does the audience synthesis reveal actionable audience preferences?
# - Does the gap analysis identify specific unoccupied positioning territories?
# - Do the recommendations include concrete messaging suggestions?
# - Is the output length sufficient (aim for 2,000-4,000 words total)?

# 5. Test Slack notification
python3 -c "
from scripts.notify_slack import notify_analysis_complete
notify_analysis_complete('TestClient', 'https://notion.so/test-page-url')
"

# 6. Test Zapier workflow (trigger manually from Zapier dashboard)
# Verify the Notion page is created with correct properties
# Verify the Slack message arrives in #competitive-intel

# 7. Document test results
echo 'Test completed: [DATE] - [PASS/FAIL] - [NOTES]' >> compliance/test_log.md
Note

The first full pipeline run will likely reveal prompt refinement needs — this is expected. Budget 2-3 iterations of prompt adjustment. Common issues: LLM outputs too generic (fix: add more specific examples in prompts), data formatting errors from CSV exports (fix: add CSV parsing logic), Zapier step timeouts for large data payloads (fix: truncate input or split into smaller chunks). Take screenshots of successful outputs for the client handoff documentation.
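Two of the common issues above (CSV formatting errors and oversized payloads) can be handled with one defensive loading step. A sketch (the `load_csv_for_llm` helper is hypothetical, not part of the pipeline scripts) that drops malformed rows and caps the output size before it reaches the LLM:

```python
import csv
import io

def load_csv_for_llm(csv_text: str, max_chars: int = 8000) -> str:
    """Parse a CSV export, drop malformed rows, and return a truncated plain-text table."""
    reader = csv.reader(io.StringIO(csv_text))
    rows = list(reader)
    if not rows:
        return ""
    width = len(rows[0])  # the header row defines the expected column count
    clean = [r for r in rows if len(r) == width]
    text = "\n".join(", ".join(r) for r in clean)
    return text[:max_chars]  # hard cap to stay within token limits

sample = "Keyword,Position,Volume\nai tools,3,5400\nbroken,row\nseo audit,7,1900"
```

Rows that don't match the header width (a frequent artifact of hand-edited exports) are silently dropped; log them instead if you need an audit trail.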

Custom AI Components

Competitor Messaging Extraction Prompt

Type: prompt

Extracts and categorizes competitor messaging, positioning themes, value propositions, and tone of voice from raw competitive data. This is the first stage of the LLM chain. It processes Semrush keyword data, Competely analysis output, and any scraped competitor website copy to produce a structured competitor messaging profile for each competitor.

Implementation:

System Prompt: Competitor Messaging Extraction

You are a senior competitive intelligence analyst specializing in brand positioning and messaging strategy for marketing agencies. Your task is to analyze raw competitive data and extract structured messaging profiles.

## Instructions
Analyze the provided competitive data (SEO keywords, competitor analysis reports, website content) and produce a structured messaging profile for EACH competitor identified.

## Output Format
For each competitor, produce the following sections:

### [Competitor Name]

**1. Core Positioning Statement**
Synthesize their primary market positioning in 1-2 sentences. What do they claim to be? Who do they claim to serve?

**2. Primary Value Propositions** (list 3-5)
What specific benefits do they emphasize most prominently? Rank by prominence.

**3. Messaging Themes** (list 3-7)
Recurring themes, phrases, and concepts across their content. Include exact phrases where available.

**4. Target Audience Signals**
Who is their messaging targeting? What pain points do they address? What aspirational outcomes do they promise?

**5. Tone & Voice Characteristics**
Describe their communication style: formal/casual, technical/accessible, authoritative/approachable, etc. Provide specific examples.

**6. Differentiators Claimed**
What do they explicitly or implicitly claim makes them different from alternatives?

**7. Content Strategy Signals**
Based on SEO data: what topics do they invest in? What keywords do they prioritize? What content formats dominate?

**8. Weaknesses & Gaps Observed**
Based on the data, what positioning weaknesses are apparent? What topics or audiences are they ignoring?

## Analysis Rules
- Be specific and evidence-based. Cite actual keywords, phrases, or data points.
- Distinguish between what competitors SAY (messaging) vs. what the DATA shows (actual performance).
- If data is insufficient for a section, state "Insufficient data" rather than speculating.
- Maintain analytical objectivity — do not editorialize or recommend actions (that comes in a later stage).
- Process ALL competitors found in the data, even if some have limited information.
- For SEO keyword data, focus on branded terms, top-ranking non-branded terms, and content themes rather than listing individual keywords.

Audience Intelligence Synthesis Prompt

Type: prompt

Synthesizes audience research data from SparkToro exports and other audience intelligence sources into a comprehensive audience profile. This produces the audience understanding layer that informs positioning recommendations. It identifies audience behaviors, preferences, media consumption patterns, and psychographic signals.

Implementation:

Audience Intelligence Synthesis

# System Prompt: Audience Intelligence Synthesis

You are a senior audience research strategist specializing in psychographic analysis and media behavior mapping. Your task is to synthesize raw audience data into actionable audience intelligence profiles.

## Instructions
Analyze the provided audience research data (SparkToro exports, demographic data, social media audience data) and produce a comprehensive audience intelligence synthesis.

## Output Format

### Audience Profile Summary
A 2-3 paragraph narrative description of the target audience — who they are, what motivates them, and how they make decisions.

### 1. Demographic Snapshot
- Primary age ranges and gender distribution
- Geographic concentrations
- Professional roles and seniority levels (if B2B)
- Notable demographic patterns or surprises

### 2. Psychographic Profile
- Core values and beliefs that drive decisions
- Aspirations and desired outcomes
- Pain points and frustrations
- Risk tolerance and decision-making style
- Brand affinity patterns (what types of brands do they gravitate toward?)

### 3. Media Consumption Map
Organize by channel type:
- **Social Platforms:** Which platforms, how they use them, engagement patterns
- **Content Sources:** Websites, blogs, newsletters they read regularly
- **Podcasts:** Shows they listen to and what this reveals about their interests
- **YouTube/Video:** Channels they watch and content themes
- **Communities:** Subreddits, forums, Slack/Discord groups
- **Influencers/Thought Leaders:** Accounts they follow and trust

### 4. Content Preferences
- Formats that resonate (long-form, short-form, video, audio, visual)
- Topics that generate highest engagement
- Language and terminology they use (important for messaging alignment)
- Content consumption timing patterns (if available)

### 5. Purchase Behavior Signals
- How they research purchases in this category
- What sources they trust for recommendations
- Decision-making influencers (peers, experts, reviews)
- Price sensitivity indicators

### 6. Audience Segments
If the data reveals distinct sub-segments, identify them:
- Segment name and size estimate
- Key differentiators from other segments
- Channel preferences specific to this segment
- Messaging implications for this segment

### 7. Strategic Implications
- Top 3 channels for reaching this audience
- Top 3 content themes that would resonate
- Language/tone recommendations based on audience voice
- Messaging traps to avoid (what would alienate this audience)

## Analysis Rules
- Prioritize behavioral data (what they DO) over stated preferences (what they SAY)
- Note sample sizes and confidence levels where available
- Distinguish between primary audience (high volume) and influential audience (high impact)
- Flag any data gaps that would benefit from additional research
- Include specific account names, publication names, and community names — specificity is actionable
- If SparkToro data shows 'Hidden Gems' (low-follower, high-engagement accounts), highlight these prominently as they often represent underserved niches

Positioning Gap Analysis Prompt

Type: prompt

The third stage of the LLM chain. Takes the outputs of competitor messaging extraction and audience synthesis, plus SEO content gap data, and identifies unoccupied or underserved positioning territories. This is the analytical core that finds where the client can differentiate.

Implementation:

System Prompt: Positioning Gap Analysis

You are a senior brand strategist specializing in competitive positioning and market opportunity identification. Your task is to analyze the intersection of competitor positioning, audience needs, and content gaps to identify strategic positioning opportunities.

## Inputs You Will Receive
1. **Competitor Messaging Profiles** — structured analysis of how each competitor positions themselves
2. **Audience Intelligence Synthesis** — comprehensive understanding of the target audience
3. **SEO/Content Gap Data** — keyword and content opportunities not covered by competitors

## Output Format

### Executive Summary
2-3 sentences describing the overall positioning landscape and the single biggest opportunity identified.

### 1. Positioning Landscape Map
Create a conceptual map of the competitive positioning space:
- **Axis 1:** [Identify the primary dimension competitors differentiate on, e.g., Price vs. Premium, Simple vs. Comprehensive]
- **Axis 2:** [Identify the secondary dimension, e.g., Technical vs. Accessible, Niche vs. Broad]
- Plot each competitor on this map with a brief explanation
- Identify EMPTY QUADRANTS or SPARSE AREAS — these are positioning opportunities

### 2. Messaging Saturation Analysis
For each major messaging theme used by competitors:

| Theme | # Competitors Using It | Saturation Level | Audience Resonance | Opportunity? |
|-------|------------------------|------------------|--------------------|--------------|

- **High saturation + High resonance** = table stakes (must address but can't differentiate)
- **High saturation + Low resonance** = overplayed (opportunity to counter-position)
- **Low saturation + High resonance** = prime opportunity (differentiation with demand)
- **Low saturation + Low resonance** = niche play (only if audience segment justifies it)

### 3. Audience-Competitor Alignment Gaps
Where do competitors FAIL to address what the audience WANTS?
- List specific audience needs/preferences from the audience synthesis
- Map each to competitor coverage (who addresses it, who doesn't)
- Highlight unmet needs with the highest potential impact

### 4. Content & SEO Positioning Gaps
Based on the content gap data:
- Topic clusters where no competitor has strong authority
- High-search-volume keywords with low competition
- Content format gaps (e.g., competitors all write blogs but audience prefers video)
- Semantic territories ripe for ownership

### 5. Positioning Opportunity Matrix
Rank the top 5-7 positioning opportunities by:

| Opportunity | Differentiation Potential (1-10) | Audience Demand (1-10) | Competitive Defensibility (1-10) | Implementation Ease (1-10) | TOTAL |
|-------------|----------------------------------|------------------------|----------------------------------|----------------------------|-------|

### 6. Positioning Risks
- Positioning territories to AVOID and why
- Competitor responses that could undermine each opportunity
- Market trends that could shift the landscape in 6-12 months

## Analysis Rules
- Every opportunity must be grounded in BOTH competitive data AND audience data
- Do not recommend positioning that the audience doesn't care about, regardless of how differentiated it is
- Do not recommend positioning that is already saturated, regardless of how much the audience wants it
- Be explicit about the evidence for each finding
- Consider both rational positioning (features, benefits) and emotional positioning (identity, values, aspiration)
- Think in terms of Category Design when possible — can the client create or own a new sub-category?

Strategic Positioning Recommendation Prompt

Type: prompt

The final stage of the LLM chain. Takes all previous outputs and generates specific, actionable strategic positioning recommendations including positioning statements, messaging frameworks, channel strategy, and implementation priorities. This is what gets delivered to the agency strategist for client application.

Implementation:

System Prompt: Strategic Positioning Recommendations

You are a senior brand strategist at a top-tier marketing agency, delivering positioning strategy recommendations to a client team. Your recommendations must be specific, actionable, and immediately applicable to messaging, content, and campaign development.

## Inputs You Will Receive
1. **Client Name** and context
2. **Positioning Gap Analysis** — identified opportunities and risks
3. **Audience Intelligence Synthesis** — target audience understanding
4. **Competitor Messaging Profiles** — competitive landscape

## Output Format

### Strategic Positioning Brief

#### Recommended Primary Positioning
**Positioning Statement (Internal):** [Fill in this framework]
For [target audience], [Client] is the [category/frame of reference] that [key differentiator] because [reasons to believe].

**Elevator Pitch (External, 30 seconds):**
[Write a natural, conversational version of the positioning for external use]

**Positioning Rationale:**
3-4 sentences explaining WHY this positioning was selected, referencing specific competitive gaps and audience needs.

#### Messaging Architecture
**Brand Promise:** [One sentence — the core promise]

**Pillar Messages:** (3-4 supporting messages)

| Pillar | Message | Proof Points | Competitor Vulnerability |
|--------|---------|--------------|--------------------------|
| 1. [Name] | [Key message] | [Evidence/features that support it] | [Which competitor weakness this exploits] |
| 2. [Name] | | | |
| 3. [Name] | | | |

**Tone of Voice Recommendation:**
- Primary voice attributes (3-4 adjectives with descriptions)
- Voice do's and don'ts with examples
- How this voice differentiates from competitor voices

#### Channel & Content Strategy Recommendations
Based on the audience media consumption map:

**Priority Channels (Ranked):**
1. [Channel] — Why, what content type, posting cadence
2. [Channel] — Why, what content type, posting cadence
3. [Channel] — Why, what content type, posting cadence

**Content Themes to Own:**
- Theme 1: [Topic cluster] — addresses [audience need], exploits [competitive gap]
- Theme 2: [Topic cluster] — addresses [audience need], exploits [competitive gap]
- Theme 3: [Topic cluster] — addresses [audience need], exploits [competitive gap]

**Content Formats to Prioritize:**
Based on audience preferences, recommend specific formats with rationale.

**SEO Quick Wins:**
- 3-5 specific keyword clusters or content pieces to create immediately based on content gap data

#### Implementation Roadmap
**Immediate (Weeks 1-2):**
- Update website hero messaging to reflect new positioning
- Revise boilerplate and elevator pitch across all channels
- Brief the creative team on new messaging architecture

**Short-term (Weeks 3-8):**
- Launch first content pieces targeting owned themes
- Update social media bios and pinned content
- Develop 2-3 thought leadership pieces for priority channels

**Medium-term (Months 3-6):**
- Full content calendar aligned to positioning themes
- Paid media messaging tests using pillar messages
- Measure positioning effectiveness via brand tracking survey

#### Competitive Response Playbook
For each major competitor:
- **If [Competitor A] does [likely response]:** Recommended counter-move
- **If [Competitor B] does [likely response]:** Recommended counter-move

#### Success Metrics

| Metric | Current Baseline | 90-Day Target | Measurement Method |
|--------|------------------|---------------|--------------------|
| Share of voice (target keywords) | | | Semrush Position Tracking |
| Brand mention sentiment | | | SparkToro/Brandwatch |
| Content gap closure rate | | | Semrush Content Gap |
| Audience engagement rate | | | Platform analytics |

#### Alternative Positioning (Backup)
If the primary positioning proves unviable, here is a backup positioning approach:
- **Alternative positioning statement**
- **Key differences from primary**
- **When to pivot:** specific trigger conditions

## Writing Rules
- Write in a confident, strategic voice appropriate for a senior agency audience
- Be specific — no filler like 'leverage synergies' or 'think outside the box'
- Every recommendation must trace back to data from the analysis
- Include rationale for WHY, not just WHAT
- Highlight quick wins that demonstrate immediate value
- Acknowledge limitations and areas requiring further research
- Use the client's industry terminology naturally
- Formatting must be clean and presentation-ready (this goes into a Notion page or strategy deck)

Zapier Competitive Analysis Automation

Type: workflow

A Zapier multi-step workflow (Zap) that automates the weekly competitive analysis pipeline. It triggers every Monday, pulls Semrush data via API, processes it through OpenAI for analysis, creates a Notion page with the output, and notifies the team via Slack. This is the primary recurring automation that generates ongoing value.

Implementation:

Zap Configuration

Step 1: Trigger - Schedule by Zapier

  • Trigger Event: Every Week
  • Day of Week: Monday
  • Time of Day: 7:00 AM (client's timezone)
  • Output: Timestamp

Step 2: Looping by Zapier - Loop Over Clients

  • Input: A Notion database query or a Google Sheet with client names and competitor domains
  • Loop Values: client_name, client_domain, competitor_domains (comma-separated)

Step 3: Webhooks by Zapier - GET Semrush Data

  • Event: GET
  • URL: https://api.semrush.com/
  • Query Parameters: type: domain_organic, key: {{semrush_api_key}}, domain: {{loop_competitor_domain}}, database: us, display_limit: 50, export_columns: Ph,Po,Nq,Cp,Ur,Tc
  • Headers: None required (key in URL)

Step 4: Code by Zapier (Python) - Format Data for LLM

Zapier Code step
python
# Formats raw Semrush CSV and metadata into a structured string for the LLM
# Input: raw Semrush CSV from Step 3, Competely report text (stored in Google Drive)
import json

semrush_data = input_data.get('semrush_response', '')
client_name = input_data.get('client_name', 'Unknown')
competitor_domains = input_data.get('competitor_domains', '')

# Truncate Semrush data to fit token limits. Done here rather than inside the
# f-string, where an inline '#' comment would become literal prompt text.
semrush_data = semrush_data[:8000]

# Format the data for LLM processing
formatted = f"""## Competitive Data for {client_name}

### Semrush Organic Keywords Data
{semrush_data}

### Competitors Analyzed
{competitor_domains}

### Analysis Date
{input_data.get('timestamp', 'N/A')}
"""

output = [{'formatted_data': formatted, 'client_name': client_name}]

Step 5: Webhooks by Zapier - POST to OpenAI API

  • Event: POST
  • URL: https://api.openai.com/v1/chat/completions
  • Headers: Authorization: Bearer {{openai_api_key}}, Content-Type: application/json
OpenAI API request payload (sent via Webhooks by Zapier in Step 5 — JSON does not allow comments, so notes belong here in the caption)
json

{
  "model": "gpt-4.1",
  "messages": [
    {
      "role": "system",
      "content": "You are a competitive intelligence analyst. Analyze the provided competitive data and produce a weekly positioning update. Focus on: 1) Notable changes in competitor keyword rankings, 2) New content themes competitors are investing in, 3) Shifts in competitor messaging or positioning, 4) Opportunities for the client. Keep the analysis concise (500-800 words) and actionable."
    },
    {
      "role": "user",
      "content": "{{formatted_data_from_step4}}"
    }
  ],
  "max_tokens": 2000,
  "temperature": 0.3
}
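Step 6 maps `{{openai_response.choices[0].message.content}}` into the Notion page. Outside Zapier, the same extraction from the standard Chat Completions response shape looks like this (the defensive fallback is our own addition):

```python
def extract_completion_text(response: dict) -> str:
    """Pull the assistant message out of a Chat Completions API response dict."""
    try:
        return response["choices"][0]["message"]["content"]
    except (KeyError, IndexError):
        return ""  # malformed or empty response — let the caller decide how to alert

# Shape mirrors the API response: a top-level 'choices' list, each with a 'message'
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "Weekly positioning update..."},
         "finish_reason": "stop"}
    ]
}
```

Returning an empty string on malformed responses keeps the pipeline from crashing mid-run; pair it with the error-handling step so silent failures still get reported.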


Step 6: Notion - Create Database Item

  • Connection: Notion integration (must be installed in workspace)
  • Database: Positioning Recommendations database
  • Title: Weekly CI Update: {{client_name}} - {{date}}
  • Content: {{openai_response.choices[0].message.content}}
  • Status: Draft
  • Client: {{client_name}}
  • Analysis Date: {{current_date}}
  • Tags: weekly-update, competitive-intel

Step 7: Slack - Send Channel Message

  • Channel: #competitive-intel
  • Message Text: 📊 Weekly CI update for *{{client_name}}* is ready for review. {{notion_page_url}}
  • Bot Name: CI Bot
  • Bot Icon: :chart_with_upwards_trend:

Error Handling

  • Enable Zapier's built-in error handler after Step 5
  • On error: Send message to #ci-system-admin with error details
  • Common errors: OpenAI rate limit (retry after 60s), Semrush API limit (reduce concurrent calls)
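Outside Zapier (for example, in the monthly report generator), the same retry-after-60s behavior can be a small wrapper. A sketch with hypothetical names (`call_with_retry` is ours), using a fixed wait as the error notes above suggest:

```python
import time

def call_with_retry(fn, retries: int = 3, wait_seconds: float = 60.0):
    """Call fn(); on exception, wait and retry up to `retries` attempts, then re-raise."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of attempts — surface the error to the caller
            time.sleep(wait_seconds)

# Demo: a function that fails once, then succeeds on the second attempt
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 2:
        raise RuntimeError("rate limit")
    return "ok"

result = call_with_retry(flaky, retries=3, wait_seconds=0.01)
```

In production, catch the specific rate-limit exception of your OpenAI client library rather than bare `Exception`, so genuine bugs still fail fast.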

Estimated Task Usage

  • 7 steps × 8 clients = 56 tasks per weekly run
  • Monthly: ~224 tasks (well within Professional plan's 750 limit)
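The estimate scales linearly with client count, so it is worth recomputing before onboarding new clients. The arithmetic above as a one-line helper (ours, assuming four runs per month):

```python
def monthly_zapier_tasks(steps_per_run: int, clients: int, runs_per_month: int = 4) -> int:
    """Estimate monthly Zapier task consumption for the weekly CI Zap."""
    return steps_per_run * clients * runs_per_month

# 7 steps × 8 clients × 4 weekly runs = 224 tasks/month
tasks = monthly_zapier_tasks(steps_per_run=7, clients=8)
```

At 12 clients the same Zap would consume 336 tasks per month, still comfortably under the plan limit.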

Monthly Positioning Report Generator

Type: agent

An orchestration agent that runs the full 4-stage LLM analysis pipeline on a monthly cadence for all active clients. It collects the latest data exports, runs all four prompts in sequence, compiles a comprehensive monthly positioning report, and delivers it to Notion with appropriate notifications. This is the heavyweight analysis that supplements the lighter weekly updates.

Implementation

Monthly Positioning Report Generator Agent — runs on the 1st of each month via cron or Zapier Schedule trigger
bash
#!/bin/bash
# monthly_positioning_report.sh
# Cron entry: 0 8 1 * * /home/msp/positioning-intelligence/scripts/monthly_positioning_report.sh

cd /home/msp/positioning-intelligence
source .env

# Read active clients from config
CLIENTS_FILE="config/active_clients.json"

# Example active_clients.json:
# [
#   {"name": "ClientA", "data_dir": "data/ClientA", "notion_page_id": "abc123"},
#   {"name": "ClientB", "data_dir": "data/ClientB", "notion_page_id": "def456"}
# ]

# Step 1: Refresh data exports (manual step - remind via Slack)
python3 scripts/notify_slack.py reminder "🔔 Monthly positioning reports running today. Ensure latest Semrush, SparkToro, and Competely exports are in the client data folders."

# Step 2: Wait 2 hours for data refresh, then run analysis
sleep 7200

# Step 3: Run full analysis for each client
python3 << 'PYEOF'
import json
import subprocess
from pathlib import Path
from datetime import datetime

with open('config/active_clients.json') as f:
    clients = json.load(f)

# Import the analysis pipeline once, outside the loop
from scripts.run_positioning_analysis import run_analysis

results = []
for client in clients:
    print(f"Processing {client['name']}...")
    try:
        output_file = run_analysis(client['name'], client['data_dir'])
        results.append({'client': client['name'], 'status': 'success', 'file': output_file})
    except Exception as e:
        results.append({'client': client['name'], 'status': 'error', 'error': str(e)})
        print(f"Error processing {client['name']}: {e}")

# Save results summary
with open(f"outputs/monthly_run_{datetime.now().strftime('%Y%m')}.json", 'w') as f:
    json.dump(results, f, indent=2)

# Notify completion
from scripts.notify_slack import send_slack_notification
import os
successes = sum(1 for r in results if r['status'] == 'success')
failures = sum(1 for r in results if r['status'] == 'error')
send_slack_notification(
    os.getenv('SLACK_WEBHOOK_URL'),
    f"📊 Monthly positioning reports complete: {successes} succeeded, {failures} failed. Check Notion for results."
)
PYEOF
Config File: active_clients.json
json
[
  {
    "name": "ClientA",
    "data_dir": "data/ClientA",
    "notion_page_id": "your-notion-page-id-here",
    "competitors": ["competitor1.com", "competitor2.com", "competitor3.com"],
    "audience_queries": ["talks about: SaaS marketing", "follows: @competitor1"]
  }
]
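A malformed active_clients.json silently skips clients, so validating it before the monthly run is cheap insurance. A sketch (the field list and `validate_clients_config` helper are ours, matching the example config above):

```python
import json

# Minimum fields each client entry needs, per the example config above
REQUIRED_FIELDS = {"name", "data_dir", "notion_page_id"}

def validate_clients_config(raw_json: str) -> list:
    """Parse active_clients.json text and return error strings for any invalid entries."""
    clients = json.loads(raw_json)
    if not isinstance(clients, list):
        return ["top-level value must be a JSON array"]
    errors = []
    for i, client in enumerate(clients):
        missing = REQUIRED_FIELDS - set(client)
        if missing:
            errors.append(f"client #{i}: missing fields {sorted(missing)}")
    return errors

good = '[{"name": "ClientA", "data_dir": "data/ClientA", "notion_page_id": "abc123"}]'
bad = '[{"name": "ClientB"}]'
```

Call it at the top of monthly_positioning_report.sh's Python stage and abort (with a Slack alert) if any errors come back.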
Cron Setup
bash
# Add to crontab (crontab -e)
# Run monthly report on 1st of each month at 8 AM
0 8 1 * * /home/msp/positioning-intelligence/scripts/monthly_positioning_report.sh >> /var/log/positioning-intel.log 2>&1
  • The 2-hour delay allows MSP staff to verify data exports are current
  • Each full client analysis takes 2-4 minutes and costs $0.15-$0.50 in API fees
  • For 10 clients, expect ~30-40 minutes total runtime and $1.50-$5.00 in API costs
  • Failed analyses are logged and can be re-run individually
  • The agent uses the same run_positioning_analysis.py script from Step 9
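The run summary JSON written by the agent makes individual re-runs straightforward. A sketch (the `clients_to_retry` helper is ours) that reads the `monthly_run_YYYYMM.json` summary and returns the client names to retry:

```python
import json

def clients_to_retry(summary_json: str) -> list:
    """Return client names whose monthly analysis ended in an error."""
    results = json.loads(summary_json)
    return [r["client"] for r in results if r.get("status") == "error"]

# Example summary in the same shape the monthly agent writes
summary = json.dumps([
    {"client": "ClientA", "status": "success", "file": "outputs/ClientA.md"},
    {"client": "ClientB", "status": "error", "error": "missing CSV export"},
])
retry = clients_to_retry(summary)
```

Feed each returned name back into `run_positioning_analysis.py` manually, or loop over the list in a small re-run script.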

Notion Integration Script

Type: integration

Python utility that programmatically creates and updates Notion pages with positioning analysis outputs. Handles formatting, database item creation, property setting, and page content population. Used by both the Zapier workflow and the monthly report generator.

Implementation:

notion_integration.py
python
# notion_integration.py
# Utility for creating and updating Notion pages with positioning intelligence outputs
# Requires: pip install notion-client python-dotenv

import os
from notion_client import Client
from datetime import datetime
from dotenv import load_dotenv

load_dotenv()

notion = Client(auth=os.getenv('NOTION_API_KEY'))

# Configuration
POSITIONING_DB_ID = os.getenv('NOTION_POSITIONING_DB_ID')  # Database ID for Positioning Recommendations

def create_positioning_brief(client_name: str, report_content: str, report_type: str = 'monthly') -> str:
    """
    Creates a new page in the Positioning Recommendations database.
    Returns the Notion page URL.
    """
    # Parse the report content into sections
    sections = parse_report_sections(report_content)
    
    # Build Notion blocks from the report
    children_blocks = []
    
    # Add header
    children_blocks.append({
        'object': 'block',
        'type': 'callout',
        'callout': {
            'rich_text': [{'type': 'text', 'text': {'content': f'AI-generated positioning analysis. Review required before client delivery.'}}],
            'icon': {'type': 'emoji', 'emoji': '🤖'},
            'color': 'yellow_background'
        }
    })
    
    # Add each section
    for section_title, section_content in sections.items():
        children_blocks.append({
            'object': 'block',
            'type': 'heading_2',
            'heading_2': {
                'rich_text': [{'type': 'text', 'text': {'content': section_title}}]
            }
        })
        
        # Split content into paragraphs (Notion has 2000 char limit per block)
        paragraphs = section_content.split('\n\n')
        for para in paragraphs:
            if para.strip():
                # Split into 2000-character chunks (Notion's per-block limit)
                for chunk in [para[i:i+2000] for i in range(0, len(para), 2000)]:
                    children_blocks.append({
                        'object': 'block',
                        'type': 'paragraph',
                        'paragraph': {
                            'rich_text': [{'type': 'text', 'text': {'content': chunk.strip()}}]
                        }
                    })
    
    # Add divider and metadata
    children_blocks.append({'object': 'block', 'type': 'divider', 'divider': {}})
    children_blocks.append({
        'object': 'block',
        'type': 'paragraph',
        'paragraph': {
            'rich_text': [{
                'type': 'text',
                'text': {'content': f'Generated: {datetime.now().strftime("%B %d, %Y at %I:%M %p")} | Type: {report_type} | System: Positioning Intelligence v1.0'},
                'annotations': {'italic': True, 'color': 'gray'}
            }]
        }
    })
    
    # Create the page
    title = f"{'📊' if report_type == 'monthly' else '📈'} {report_type.title()} Positioning Brief: {client_name} - {datetime.now().strftime('%B %Y')}"
    
    new_page = notion.pages.create(
        parent={'database_id': POSITIONING_DB_ID},
        properties={
            'Name': {'title': [{'text': {'content': title}}]},
            'Status': {'select': {'name': 'Draft'}},
            'Client': {'select': {'name': client_name}},
            'Analysis Date': {'date': {'start': datetime.now().isoformat()[:10]}},
            'Report Type': {'select': {'name': report_type}},
            'Tags': {'multi_select': [{'name': 'ai-generated'}, {'name': 'positioning'}, {'name': report_type}]}
        },
        children=children_blocks[:100]  # Notion API limit: 100 blocks per request
    )
    
    page_url = new_page['url']
    print(f'Created Notion page: {page_url}')
    return page_url

def parse_report_sections(content: str) -> dict:
    """Parse markdown-formatted report into sections."""
    sections = {}
    current_section = 'Overview'
    current_content = []
    
    for line in content.split('\n'):
        if line.startswith('## '):
            if current_content:
                sections[current_section] = '\n'.join(current_content)
            current_section = line.replace('## ', '').replace('#', '').strip()
            current_content = []
        else:
            current_content.append(line)
    
    if current_content:
        sections[current_section] = '\n'.join(current_content)
    
    return sections

def update_competitor_profile(page_id: str, new_intel: str) -> None:
    """Append new competitive intelligence to an existing competitor profile page."""
    notion.blocks.children.append(
        block_id=page_id,
        children=[
            {'object': 'block', 'type': 'divider', 'divider': {}},
            {
                'object': 'block',
                'type': 'heading_3',
                'heading_3': {
                    'rich_text': [{'type': 'text', 'text': {'content': f'Update: {datetime.now().strftime("%B %d, %Y")}'}}]
                }
            },
            {
                'object': 'block',
                'type': 'paragraph',
                'paragraph': {
                    'rich_text': [{'type': 'text', 'text': {'content': new_intel[:2000]}}]  # Notion caps a rich_text block at 2000 chars
                }
            }
        ]
    )

# Setup instructions:
# 1. Create a Notion integration at https://www.notion.so/my-integrations
# 2. Copy the Internal Integration Token → save as NOTION_API_KEY in .env
# 3. Share the Positioning Recommendations database with the integration
# 4. Copy the database ID from the URL → save as NOTION_POSITIONING_DB_ID in .env
# Database URL format: https://notion.so/{workspace}/{database_id}?v={view_id}
# The database_id is the 32-character hex string before the ?v= parameter

if __name__ == '__main__':
    # Test with sample data
    sample_report = """## Competitor Messaging Analysis
Competitor A positions as the premium solution for enterprise teams.

## Audience Intelligence
The target audience primarily engages on LinkedIn and industry podcasts.

## Positioning Gaps
No competitor effectively addresses the mid-market segment seeking enterprise features at SMB pricing.

## Strategic Recommendations
Position as the 'enterprise-grade solution built for growing teams' to capture the underserved mid-market."""
    
    url = create_positioning_brief('TestClient', sample_report, 'monthly')
    print(f'Test page created: {url}')
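The setup comments above reference a `.env` file; a minimal example with placeholder values might look like this (never commit this file to version control):

```shell
# .env - placeholder values only
NOTION_API_KEY=secret_xxxxxxxxxxxxxxxxxxxx
NOTION_POSITIONING_DB_ID=0123456789abcdef0123456789abcdef
```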

Testing & Validation

  • CONNECTIVITY TEST: Run the OpenAI and Anthropic API connectivity tests from Step 5. Verify both return 'OK' responses. If either fails, check API key validity, billing configuration, and network firewall rules (ensure HTTPS outbound to api.openai.com and api.anthropic.com is permitted).
  • SEMRUSH API TEST: Execute a manual Semrush API call using curl: curl 'https://api.semrush.com/?type=domain_organic&key=YOUR_KEY&domain=hubspot.com&database=us&display_limit=5' — verify CSV response with keyword data. If empty or error, check API key permissions and plan tier.
  • SPARKTORO DATA EXPORT TEST: Log into SparkToro, run a search for a known audience (e.g., 'frequently talks about: content marketing'), export demographics CSV, and verify the file contains structured data with columns for demographics, social accounts, and websites.
  • COMPETELY ANALYSIS TEST: Generate a competitive analysis in Competely for two well-known competitors (e.g., Mailchimp vs. ConvertKit). Verify the output includes all six analysis sections (Marketing & Positioning, Pricing, Product, Customer Sentiment, Online Presence, Technical). Export and save as text file.
  • FULL PIPELINE TEST: Run python3 scripts/run_positioning_analysis.py TestClient data/TestClient with sample data from the above exports. Verify the output file is created in the outputs/ directory, contains all four sections (Competitor Messaging, Audience Synthesis, Gap Analysis, Recommendations), and is 2,000-4,000 words in length.
  • LLM OUTPUT QUALITY TEST: Have the agency's senior strategist review one complete AI-generated positioning brief. Score on a 1-10 scale for: accuracy (do competitor profiles match reality?), insight depth (beyond obvious observations?), actionability (can a strategist directly use these recommendations?), and writing quality. Target: minimum 7/10 on all criteria.
  • ZAPIER WORKFLOW TEST: Manually trigger each of the three Zaps (Weekly CI Pipeline, Competitor Alert Router, Monthly Audience Sync). Verify that: (a) data flows correctly between steps, (b) OpenAI API call succeeds, (c) Notion page is created with correct properties and content, (d) Slack notification arrives in the correct channel with working links.
  • NOTION INTEGRATION TEST: Run python3 scripts/notion_integration.py to create a test page. Verify the page appears in the Positioning Recommendations database with correct title, status (Draft), client name, date, and formatted content. Test the page URL is accessible to agency team members.
  • SLACK NOTIFICATION TEST: Trigger a test notification via python3 -c "from scripts.notify_slack import notify_analysis_complete; notify_analysis_complete('TestClient', 'https://notion.so/test')". Verify the message appears in #competitive-intel with formatted blocks and a working Notion link button.
  • COST TRACKING TEST: After running 3 full pipeline analyses, check OpenAI usage dashboard (https://platform.openai.com/usage) and Anthropic usage dashboard. Verify actual costs align with estimates ($0.15-$0.50 per analysis). If costs exceed $1.00 per analysis, review prompts for token efficiency.
  • LOOKER STUDIO DASHBOARD TEST: Populate the Google Sheet data source with sample competitive metrics for 2 clients and 3 competitors each. Verify the Looker Studio dashboard correctly renders all charts, filters work (client selector), and the data refresh schedule is active.
  • COMPLIANCE DOCUMENTATION TEST: Verify all vendor DPAs are signed and filed in the compliance/ directory. Confirm OpenAI 'Improve our models' setting is disabled. Verify the AI Usage Register is complete with all tools listed. Confirm data retention policy document exists and is shared with agency leadership.
  • FAILOVER TEST: Temporarily invalidate the OpenAI API key in .env (add a character). Run the pipeline and verify it automatically falls back to Anthropic Claude. Restore the key and verify normal operation resumes. This confirms the redundancy mechanism works.
  • END-TO-END TIMING TEST: Time a complete analysis run for one client from data collection to Notion page creation. Target: under 5 minutes. If exceeding 10 minutes, investigate bottlenecks (likely in LLM API response time or network latency). Document baseline timing for SLA purposes.
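The fallback behavior exercised in the failover test can be implemented with a simple provider-ordering wrapper; a minimal sketch (the provider names and callables are illustrative, not the actual pipeline API — wire in the real SDK calls):

```python
# llm_fallback.py - try LLM providers in priority order, fall back on failure.
def complete_with_fallback(prompt: str, providers: list) -> tuple:
    """providers is a list of (name, callable) pairs tried in order.
    Returns (provider_name, response_text) from the first success;
    raises RuntimeError only if every provider fails."""
    errors = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception as exc:  # record the error and try the next provider
            errors.append(f'{name}: {exc}')
    raise RuntimeError('All LLM providers failed: ' + '; '.join(errors))

# Usage sketch (hypothetical wrappers around the real SDKs):
# providers = [('openai', call_openai), ('anthropic', call_anthropic)]
# provider, text = complete_with_fallback(prompt, providers)
```

With this shape, invalidating the OpenAI key simply makes the first callable raise, and the run proceeds on Anthropic — exactly what the failover test verifies.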

Client Handoff

Client Handoff Plan

Training Sessions (2 sessions, 90 minutes each)

Session 1: System Overview & Daily Usage (for all agency strategists)

  • How the positioning intelligence system works (high-level architecture diagram)
  • Navigating the Notion knowledge base: finding client pages, reading positioning briefs, understanding the status workflow (Draft → In Review → Approved → Delivered)
  • Interpreting AI-generated positioning recommendations: what to trust, what to verify, how to add human strategic judgment
  • Understanding the Slack notification channels and what each alert type means
  • How to request an ad-hoc analysis run (contact MSP or trigger via defined process)
  • Reviewing the Looker Studio competitive dashboard: filters, date ranges, export options

Session 2: Administration & Data Management (for project lead + 1 backup)

  • How to add a new client to the system (update active_clients.json, create Notion structure, configure Semrush project)
  • How to add or remove competitors for an existing client
  • How to update SparkToro audience queries when client targeting shifts
  • How to run the manual analysis pipeline if needed (basic command-line usage)
  • Understanding API cost monitoring: where to check usage, what spending alerts mean
  • How to update prompt templates for specific client needs (with MSP support)
  • Escalation procedures: when to contact the MSP vs. handle internally
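The "add a new client" procedure starts with `active_clients.json`; the exact schema is agency-specific, but a hedged sketch of the add step (the `name`/`domain`/`competitors` fields are assumptions — match them to the file your pipeline actually uses) might look like:

```python
# add_client.py - append a client entry to active_clients.json.
# Field names below are an assumed schema, not the pipeline's definitive one.
import json
from pathlib import Path

def add_client(path: str, name: str, domain: str, competitors: list) -> dict:
    """Add one client entry, refusing duplicates; returns the new entry."""
    p = Path(path)
    clients = json.loads(p.read_text()) if p.exists() else []
    if any(c.get('name') == name for c in clients):
        raise ValueError(f'{name} already exists in {path}')
    entry = {'name': name, 'domain': domain, 'competitors': competitors}
    clients.append(entry)
    p.write_text(json.dumps(clients, indent=2))
    return entry
```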

Documentation Package Left Behind

1. System Runbook (Notion page): Step-by-step procedures for all routine tasks
2. Prompt Library (Notion + Git repo): All four core prompts with version history and modification guidelines
3. Architecture Diagram: Visual showing all tools, data flows, and integration points
4. Vendor Account Summary: List of all SaaS accounts, billing owners, renewal dates, and support contacts (credentials in password manager only)
5. Compliance Package: AI Usage Register, Data Retention Policy, signed DPAs, GDPR/CCPA guidelines
6. Troubleshooting Guide: Common issues and resolutions (API errors, Zapier failures, data quality issues)
7. Cost Management Guide: Expected monthly costs by tier, how to monitor, budget alert thresholds

Success Criteria Review (90-day checkpoint)

Maintenance

Ongoing Maintenance Plan

Weekly Tasks (1-2 hours/week MSP effort)

  • Monday: Verify weekly Zapier CI pipeline ran successfully; review Slack #ci-system-admin for any error notifications; spot-check one client's weekly output for quality
  • Friday: Review OpenAI and Anthropic API usage dashboards; check Zapier task consumption against monthly quota; review Semrush Brand Monitoring alerts queue

Monthly Tasks (4-6 hours/month MSP effort)

  • 1st of month: Verify monthly positioning report generator ran for all clients; review outputs for quality; investigate any failed runs
  • 5th of month: Review and reconcile all SaaS billing (Semrush, SparkToro, Competely, Zapier, Notion, API costs); compare to budget projections; flag anomalies
  • 15th of month: Prompt library review — check if any prompts need updating based on output quality feedback from agency strategists; commit updates to Git
  • 20th of month: Data freshness audit — verify SparkToro exports are current, Semrush projects are tracking correctly, Competely analyses have been regenerated

Quarterly Tasks (8-12 hours/quarter)

  • Prompt optimization sprint: Review all four core prompts with the agency's senior strategist; identify areas for improvement; test revised prompts with historical data; deploy updates
  • Compliance review: Update AI Usage Register; verify DPAs are current; check for new regulatory requirements (EU AI Act updates, FTC guidance changes); update data retention policy if needed
  • Tool evaluation: Assess whether current tool tier is appropriate — upgrade or downgrade based on actual usage patterns; evaluate new competitive intelligence tools that may have launched
  • Dashboard refresh: Update Looker Studio dashboard with new metrics or visualizations based on user feedback; add new clients if onboarded
  • Cost optimization: Analyze LLM token usage patterns; identify prompts that can be made more efficient; consider model swaps (e.g., newer cheaper models as they release); review Zapier task usage for optimization
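Token-level cost analysis during the optimization sprint can start from simple arithmetic; a sketch using illustrative per-million-token rates (check current vendor pricing before relying on these numbers):

```python
# cost_estimate.py - back-of-envelope LLM cost per analysis from token counts.
def analysis_cost(input_tokens: int, output_tokens: int,
                  in_rate: float, out_rate: float) -> float:
    """Rates are USD per 1M tokens; returns the per-analysis cost in USD."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Example with assumed rates of $2.50/$10.00 per 1M input/output tokens:
# 60k input + 8k output tokens lands within the $0.15-$0.50 range cited
# in the cost tracking test.
cost = analysis_cost(60_000, 8_000, 2.50, 10.00)
```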

Annual Tasks

  • Full system review: Architecture assessment, vendor contract renewals, pricing renegotiation
  • Security audit: Review all API keys (rotate if needed), audit user access across all platforms, verify SSO configurations
  • Strategy alignment: Meet with agency leadership to ensure the system is aligned with evolving business needs; plan feature additions for the coming year

SLA Considerations

  • System Availability: Target 99% uptime for automated workflows (dependent on Zapier and LLM API availability)
  • Data Freshness: Competitive data updated weekly; audience data updated monthly; positioning briefs generated within 24 hours of data refresh
  • Response Time: MSP responds to system errors within 4 business hours; non-critical issues within 1 business day
  • Output Quality: Agency strategist quality scores maintained at 7+/10 average; prompt revisions deployed within 5 business days of quality feedback

Escalation Path

1. Level 1 (Agency Self-Service): Consult runbook and troubleshooting guide in Notion
2. Level 2 (MSP Standard): Email/ticket to MSP support — response within 4 business hours
3. Level 3 (MSP Urgent): Phone/Slack DM to MSP account manager — for system-down scenarios or data breach concerns
4. Level 4 (Vendor Escalation): MSP contacts SaaS vendor support directly for platform-specific issues

Model Update Triggers

  • When OpenAI or Anthropic release new model versions, evaluate within 2 weeks: test new model with existing prompts, compare output quality and cost, migrate if improvement is confirmed
  • When any SaaS tool releases a major update or API change, review integration points within 1 week and update workflows as needed
  • When a client's competitive landscape changes significantly (new major competitor, acquisition, market shift), trigger an immediate full re-analysis and prompt review

Alternatives

Enterprise-Grade Stack with Crayon + Klue + Brandwatch

Replaces Competely and SparkToro with enterprise competitive intelligence platforms (Crayon for CI, Klue for sales enablement battlecards, Brandwatch for social listening and audience intelligence). Adds significantly more automated data collection, real-time monitoring, AI-powered alerting, and deeper analytics. Integrates natively with Salesforce and HubSpot CRM.

Budget Stack with SE Ranking + Competely + Open-Source LLMs

Replaces Semrush with SE Ranking ($52/month), keeps Competely ($39–$79/month), drops SparkToro entirely (uses free social media research manually), and replaces OpenAI/Anthropic APIs with self-hosted open-source LLMs (Llama 3 or Mistral via Ollama on a local workstation). Uses n8n (self-hosted, free) instead of Zapier. Total tool cost: under $150/month.

Warning

Tradeoffs: Dramatically lower cost ($150/month vs. $500–$650/month) but significant quality and convenience tradeoffs. SE Ranking has less competitive data depth than Semrush (fewer keyword databases, less accurate traffic estimates). Losing SparkToro means manual audience research (2–4 extra hours per client per month). Self-hosted LLMs produce lower-quality strategic writing than GPT-4.1 or Claude Sonnet 4, requiring more human editing (estimated 30–50% more strategist time reviewing outputs). n8n requires Docker deployment and ongoing server maintenance. Recommend this only for solo consultants or agencies with 1–3 clients who are highly cost-sensitive and have strong technical capabilities.

No-Code Platform Approach with Relevance AI or Stack AI

Instead of building custom Python scripts and Zapier workflows, use a no-code AI workflow platform (Relevance AI or Stack AI) to build the entire analysis pipeline visually. These platforms provide drag-and-drop LLM chain builders, built-in web scraping, API connectors, and output formatting — all without writing code. The MSP builds the workflow visually and deploys it as an agent.

HubSpot-Centric Approach with Breeze AI

For agencies already deep in the HubSpot ecosystem (Marketing Hub Professional or Enterprise), leverage HubSpot's built-in Breeze AI suite for competitive analysis, content strategy, and audience insights. Supplement with Semrush integration (native HubSpot connector) and use HubSpot's AI content assistant for positioning document generation. Eliminates the need for separate Notion, SparkToro, and possibly Competely.

SEO-Focused Alternative with Ahrefs

Replaces Semrush with Ahrefs Standard ($249/month) as the primary competitive intelligence tool. Ahrefs provides superior backlink analysis, a content explorer for competitor content discovery, and keyword research. Pairs with SparkToro for audience intelligence and the same LLM pipeline for synthesis. Better suited for agencies where SEO and content marketing are the primary service offerings.
