
Implementation Guide: Synthesize guest review sentiment to identify top and bottom experience elements
Step-by-step implementation guide for deploying an AI pipeline that synthesizes guest review sentiment and surfaces the top and bottom experience elements for hospitality clients.
Hardware Procurement
Workflow Automation Server
$699 per unit (MSP cost) / $999 suggested resale installed
Self-hosted n8n workflow server (e.g., a Dell OptiPlex Micro, referenced in the installation steps below) that orchestrates review collection, sentiment analysis API calls, and dashboard data updates. Installed on-premises at the hotel or at the MSP's data center; it can also serve as a local data cache for compliance. Alternatively, deploy on a cloud VM if the client prefers.
Back-Office Sentiment Dashboard Display
$420 per unit (MSP cost) / $699 suggested resale installed
Wall-mounted display in the hotel back-office or front desk manager area showing the real-time sentiment dashboard, top/bottom experience element rankings, and alert notifications. Driven by a dedicated Chromebox/mini PC (the media player below) running the Looker Studio dashboard in kiosk mode.
Dashboard Display Media Player
$290 per unit (MSP cost) / $449 suggested resale installed
Compact media player connected to the Samsung display via HDMI. Runs Chrome OS in kiosk mode displaying the Google Looker Studio sentiment dashboard. Low maintenance, auto-updates, and supports managed Chrome Enterprise enrollment.
Display Wall Mount
$25 per unit (MSP cost) / $75 suggested resale installed
VESA-compatible wall mount for the Samsung display in the back-office area.
Software Procurement
n8n Self-Hosted (Community Edition)
$0/month (self-hosted) or $24/month for n8n Cloud Starter if cloud-hosted preferred
Workflow automation platform that orchestrates the entire sentiment pipeline: scheduled review collection from APIs, OpenAI sentiment analysis calls, data transformation, database writes, alerting, and dashboard data feeds. Pre-built hospitality templates available.
OpenAI API (GPT-5.4 mini)
$0.15 per 1M input tokens / $0.60 per 1M output tokens. Estimated $2–$15/month per property processing 200–500 reviews
Usage-based API
Primary LLM for aspect-level sentiment analysis. Extracts sentiment scores and verbatim quotes for each hospitality experience element (cleanliness, staff, F&B, room comfort, check-in, location, value, amenities, noise, Wi-Fi). 128K context window enables batch processing of reviews.
Google Looker Studio
$0/month
Dashboard and reporting layer. Connects to Google Sheets or BigQuery as data source to display sentiment trends, aspect rankings, alert history, and competitive benchmarking. Shareable with hotel management via link.
Google Sheets API
$0/month (included in Google Workspace)
Lightweight data store for sentiment analysis results. n8n writes structured sentiment data to Google Sheets, which serves as the data source for Looker Studio dashboards. Suitable for up to ~50,000 rows (approx. 3–4 years of reviews for a mid-size hotel).
Google Business Profile API
$0/month
Automated retrieval of Google Reviews for the hotel property. Requires verified Google Business Profile ownership.
TripAdvisor Content API
Free for certified partners; application required
Automated retrieval of TripAdvisor reviews. Requires partnership application and approval. Alternative: use ReviewPro or TrustYou aggregation if direct API access is unavailable.
PostgreSQL Database
$0/month (self-hosted on the Dell OptiPlex) or $0–$25/month on Supabase cloud
Persistent relational database for storing all review data, sentiment scores, aspect breakdowns, and historical trends. Enables complex queries and serves as the canonical data store. Google Sheets is used as a lightweight mirror for Looker Studio.
Slack or Microsoft Teams
$0 incremental (uses existing client subscription)
Real-time alert delivery channel for negative sentiment spikes, service recovery triggers, and weekly sentiment summary notifications to hotel management.
Amazon Comprehend (Optional - PII Detection)
Free tier: 50K units/month. Beyond: $0.0001 per unit. Estimated $1–$3/month
Optional PII detection and redaction layer. Scans review text before sending to OpenAI to strip guest names, email addresses, phone numbers, and room numbers for GDPR/CCPA compliance.
Google Workspace Business Starter
$7.20/user/month (likely already in MSP's stack)
Provides Google Sheets, Google Drive, and Gmail for the solution infrastructure. Most MSPs already have this. Required for Sheets API and Looker Studio access.
Prerequisites
- Client must have a claimed and verified Google Business Profile for each hotel property
- Client must have admin access to their TripAdvisor Management Center (tripadvisor.com/Owners) for each property
- Client must have admin access to their Booking.com Extranet for each property
- Client must have admin access to their Expedia Partner Central account (if applicable)
- Reliable business internet connection (minimum 25 Mbps) at the hotel or MSP data center hosting the n8n server
- Property Management System (PMS) admin credentials and API documentation — common PMS platforms include Oracle OPERA, Cloudbeds, Mews, Hotelogix, or StayNTouch. PMS integration is optional but recommended for guest profile enrichment
- Existing Slack workspace or Microsoft Teams tenant for alert delivery (or client willingness to adopt one)
- MSP must have an active OpenAI API account with billing enabled (platform.openai.com)
- MSP must have an active AWS account for Amazon Comprehend PII detection (optional but strongly recommended for GDPR compliance)
- Client must designate a Hotel Operations Manager or General Manager as the primary stakeholder and dashboard consumer
- Static IP address or Dynamic DNS configured if hosting n8n server on-premises at the hotel (alternatively, host at MSP data center or on a cloud VM)
- Google Workspace or standalone Google account for the MSP service account (used for Sheets API and Looker Studio)
- Written data processing agreement (DPA) template prepared for GDPR compliance, to be signed between MSP and hotel client
- Inventory of all review sources: list of all OTA profiles, social media accounts, and direct survey tools currently in use
Installation Steps
Step 1: Environment Setup - Provision n8n Workflow Server
Install and configure the n8n workflow automation server on the workflow server hardware (e.g., a Dell OptiPlex Micro) or on a cloud VM. This server will run 24/7 and orchestrate all data collection, sentiment analysis, and dashboard updates. Docker is used for clean deployment and easy updates.
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose git curl
sudo systemctl enable docker && sudo systemctl start docker
sudo usermod -aG docker $USER
mkdir -p /opt/n8n-sentiment && cd /opt/n8n-sentiment
Create docker-compose.yml in this directory with the following contents:
version: '3.8'
services:
n8n:
image: n8nio/n8n:latest
restart: always
ports:
- '5678:5678'
environment:
- N8N_BASIC_AUTH_ACTIVE=true
- N8N_BASIC_AUTH_USER=mspadmin
- N8N_BASIC_AUTH_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
- N8N_HOST=n8n.yourdomain.com
- N8N_PROTOCOL=https
- WEBHOOK_URL=https://n8n.yourdomain.com/
- GENERIC_TIMEZONE=America/New_York
- N8N_DEFAULT_BINARY_DATA_MODE=filesystem
volumes:
- n8n_data:/home/node/.n8n
- ./local-files:/files
postgres:
image: postgres:16
restart: always
environment:
POSTGRES_DB: sentiment_db
POSTGRES_USER: sentiment_user
POSTGRES_PASSWORD: CHANGE_THIS_DB_PASSWORD
ports:
- '5432:5432'
volumes:
- postgres_data:/var/lib/postgresql/data
volumes:
n8n_data:
postgres_data:
docker-compose up -d
docker-compose ps
Change N8N_BASIC_AUTH_PASSWORD and POSTGRES_PASSWORD to strong, unique values. If hosting on a cloud VM (e.g., AWS t3.medium at ~$30/month or a DigitalOcean Droplet at $24/month), adjust N8N_HOST to the VM's public hostname. For production, place n8n behind an Nginx reverse proxy with Let's Encrypt SSL. Store all credentials in the MSP's password vault (e.g., IT Glue, Hudu).
Step 2: Database Schema Setup - Create Sentiment Data Tables
Initialize the PostgreSQL database with tables to store raw reviews, sentiment analysis results, and aggregated scores. This schema supports multi-property deployments and historical trend analysis.
docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db
CREATE TABLE properties (
id SERIAL PRIMARY KEY,
name VARCHAR(255) NOT NULL,
address TEXT,
city VARCHAR(100),
country VARCHAR(100),
pms_property_id VARCHAR(100),
created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE reviews (
id SERIAL PRIMARY KEY,
property_id INTEGER REFERENCES properties(id),
source VARCHAR(50) NOT NULL, -- google, tripadvisor, booking, expedia, direct_survey
source_review_id VARCHAR(255),
reviewer_name VARCHAR(255),
reviewer_name_anonymized BOOLEAN DEFAULT false,
review_date DATE,
rating_overall DECIMAL(2,1),
review_title TEXT,
review_text TEXT NOT NULL,
language VARCHAR(10) DEFAULT 'en',
collected_at TIMESTAMP DEFAULT NOW(),
analyzed_at TIMESTAMP,
UNIQUE(source, source_review_id)
);
CREATE TABLE sentiment_scores (
id SERIAL PRIMARY KEY,
review_id INTEGER REFERENCES reviews(id) ON DELETE CASCADE,
aspect VARCHAR(50) NOT NULL, -- cleanliness, staff, fnb, room_comfort, checkin, location, value, amenities, noise, wifi, parking, pool_spa
sentiment VARCHAR(10) NOT NULL, -- positive, negative, neutral, mixed
score DECIMAL(3,2), -- -1.00 to 1.00
verbatim TEXT, -- exact quote from the review
analyzed_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE sentiment_aggregates (
id SERIAL PRIMARY KEY,
property_id INTEGER REFERENCES properties(id),
period_type VARCHAR(10) NOT NULL, -- daily, weekly, monthly
period_start DATE NOT NULL,
aspect VARCHAR(50) NOT NULL,
avg_score DECIMAL(4,3),
positive_count INTEGER DEFAULT 0,
negative_count INTEGER DEFAULT 0,
neutral_count INTEGER DEFAULT 0,
total_mentions INTEGER DEFAULT 0,
top_positive_verbatim TEXT,
top_negative_verbatim TEXT,
updated_at TIMESTAMP DEFAULT NOW(),
UNIQUE(property_id, period_type, period_start, aspect)
);
CREATE TABLE alerts (
id SERIAL PRIMARY KEY,
property_id INTEGER REFERENCES properties(id),
review_id INTEGER REFERENCES reviews(id),
alert_type VARCHAR(50) NOT NULL, -- negative_spike, service_recovery, trend_change
severity VARCHAR(10) NOT NULL, -- low, medium, high, critical
message TEXT NOT NULL,
delivered BOOLEAN DEFAULT false,
delivered_at TIMESTAMP,
created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_reviews_property_date ON reviews(property_id, review_date DESC);
CREATE INDEX idx_sentiment_review ON sentiment_scores(review_id);
CREATE INDEX idx_aggregates_property_period ON sentiment_aggregates(property_id, period_type, period_start DESC);
INSERT INTO properties (name, address, city, country) VALUES ('Hotel Client Name', '123 Main St', 'City', 'US');
\q
Adjust the property INSERT statement with the actual hotel client details. For multi-property clients, insert one row per property. The schema supports up to 12 hospitality-specific aspect categories. The UNIQUE constraint on reviews prevents duplicate processing.
Step 3: Configure n8n Credentials and API Connections
Set up all API credentials in n8n's credential store. This includes OpenAI for sentiment analysis, Google Sheets for dashboard data, and optionally AWS Comprehend for PII detection.
- Access n8n web UI at https://your-n8n-host:5678
- Navigate to Settings > Credentials > Add Credential
Create a dedicated OpenAI API key for this project with usage limits set to $25/month as a safety cap (Settings > Limits on platform.openai.com). For the Google Sheets credential, create a Google Cloud project specifically for this client, enable the Sheets API, and create OAuth2 credentials. Store all API keys in the MSP's credential vault alongside the n8n credentials.
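When recording credential metadata in the vault or in ticket notes, avoid pasting full API keys. A minimal sketch of a masking helper for an n8n Function node (the "first 6 / last 4" format is an assumption, not an n8n or OpenAI convention):

```javascript
// Sketch: mask an API key before writing it to logs or documentation.
// Keeps enough of the key to identify it in the provider dashboard
// without exposing the secret itself.
function maskKey(key) {
  if (typeof key !== 'string' || key.length < 12) {
    throw new Error('API key looks malformed');
  }
  return key.slice(0, 6) + '...' + key.slice(-4);
}
```

The full key should live only in the n8n credential store and the MSP's vault; everything else (tickets, runbooks, Sheets) gets the masked form.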
Step 4: Create Google Sheets Dashboard Data Store
Create a Google Sheet that will serve as the data bridge between n8n and Google Looker Studio. This sheet will contain multiple tabs for different dashboard views and will be automatically updated by n8n workflows.
# Tab 6: 'Config' - Key/Value pairs:
property_name | Hotel Client Name
analysis_start_date | 2025-01-01
alert_threshold_negative | -0.5
alert_threshold_spike | 3
The Google Sheet acts as a lightweight, zero-cost data layer that Looker Studio can connect to natively. For higher-volume properties (500+ reviews/month) or multi-property deployments, consider using Google BigQuery instead ($0 for the first 1TB/month of queries). The Config tab allows adjusting alert thresholds without modifying n8n workflows.
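Workflows that consume the Config tab need to coerce the sheet's string values into usable types. A minimal sketch for an n8n Function node, assuming the Google Sheets node returns one { key, value } object per Config row (field names are illustrative; match them to your sheet's headers):

```javascript
// Sketch: fold Config-tab rows into a single config object.
// Numeric-looking values (thresholds) become numbers; the rest stay strings.
function parseConfig(rows) {
  const cfg = {};
  for (const { key, value } of rows) {
    cfg[key] = value === '' || isNaN(Number(value)) ? value : Number(value);
  }
  return cfg;
}
```

This keeps threshold changes in the sheet effective on the next workflow run without touching n8n.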
Step 5: Build the Core Review Collection Workflow in n8n
Create the n8n workflow that collects reviews from the Google Business Profile API and processes Booking.com/TripAdvisor reviews via manual CSV upload or API integration. The workflow runs on a daily schedule and stores new reviews in PostgreSQL.
Node 1: Schedule Trigger
- Trigger every day at 6:00 AM hotel local time
0 6 * * *
Node 2: HTTP Request - Google Reviews
- Method: GET
- URL: https://mybusiness.googleapis.com/v4/accounts/{accountId}/locations/{locationId}/reviews
- Authentication: Google OAuth2 (configured in Step 3)
- Query Params: pageSize=50, orderBy=updateTime desc
Node 3: Function - Normalize Google Reviews
Transform Google API response to standard format:
// maps API response to standard review format
items.map(item => ({
source: 'google',
source_review_id: item.json.reviewId || item.json.name,
reviewer_name: item.json.reviewer?.displayName || 'Anonymous',
review_date: item.json.createTime?.split('T')[0],
rating_overall: {'ONE':1.0,'TWO':2.0,'THREE':3.0,'FOUR':4.0,'FIVE':5.0}[item.json.starRating] || 1.0,
review_title: '',
review_text: item.json.comment || '',
language: 'en'
}))
Node 4: Postgres - Check for Duplicates
- Operation: SELECT
SELECT source_review_id FROM reviews WHERE source = 'google' AND source_review_id = ANY($1::text[])
Node 5: Function - Filter New Reviews Only
- Compare collected reviews against existing IDs
- Pass through only new reviews
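Node 5's filtering step can be sketched as a small pure function, assuming Node 4 returns the already-stored IDs as an array of strings:

```javascript
// Sketch of Node 5: keep only reviews whose source_review_id is not
// already present in PostgreSQL. `existingIds` comes from Node 4's SELECT.
function filterNewReviews(collected, existingIds) {
  const seen = new Set(existingIds);
  return collected.filter(r => !seen.has(r.source_review_id));
}
```

Using a Set keeps the check O(1) per review, which matters once the property has thousands of stored reviews.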
Node 6: Postgres - Insert New Reviews
- Operation: INSERT
- Table: reviews
- Map fields from normalized format
Node 7: Set - Flag for Sentiment Analysis
- Add flag: needs_analysis = true
- Pass review IDs to sentiment analysis workflow (via webhook or direct call)
Google Business Profile API requires a verified GBP listing and an approved Google Cloud project. For Booking.com and Expedia reviews, these platforms do not offer public review APIs — use one of these alternatives: (1) Manual CSV export from the Booking.com Extranet uploaded to n8n via a watched folder, (2) Use a reputation aggregation service like ReviewPro's API as a data source, (3) Set up email forwarding from OTA review notification emails to n8n's email trigger node and parse the review content. TripAdvisor Content API requires a partnership application at developer-tripadvisor.com. Plan 2–4 weeks for approval.
Step 6: Build the Sentiment Analysis Workflow in n8n
Create the core AI workflow that takes unprocessed reviews from PostgreSQL, optionally strips PII via AWS Comprehend, sends them to OpenAI GPT-5.4 mini for aspect-level sentiment analysis, and stores structured results back in the database and Google Sheets.
The batch processing approach (10 reviews per API call) reduces API costs by ~80% compared to individual calls. GPT-5.4 mini handles this reliably with JSON mode. Set a monthly budget cap of $25 on the OpenAI account as a safety measure. The optional AWS Comprehend PII step adds ~$1–3/month but is strongly recommended for GDPR compliance when processing EU guest reviews. If the hotel serves primarily domestic US guests with no EU exposure, this step can be deferred.
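The batching step itself is simple; a minimal sketch of how an n8n Function node might group pending reviews into batches of 10 before the OpenAI call:

```javascript
// Sketch: split unprocessed reviews into fixed-size batches so each
// OpenAI request analyzes up to 10 reviews in one call.
function batchReviews(reviews, size = 10) {
  const batches = [];
  for (let i = 0; i < reviews.length; i += size) {
    batches.push(reviews.slice(i, i + size));
  }
  return batches;
}
```

Each batch then becomes one item feeding the OpenAI node, with the user prompt built from the batch's reviews.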
Step 7: Build the Aggregation and Reporting Workflow
Create a weekly workflow that aggregates daily sentiment scores into weekly and monthly summaries, ranks aspects from best to worst, identifies trends, and updates the Google Sheets dashboard data. This workflow also generates the 'Top 5 and Bottom 5 Experience Elements' ranking.
SELECT
aspect,
AVG(score) as avg_score,
COUNT(*) FILTER (WHERE sentiment = 'positive') as positive_count,
COUNT(*) FILTER (WHERE sentiment = 'negative') as negative_count,
COUNT(*) FILTER (WHERE sentiment = 'neutral') as neutral_count,
COUNT(*) as total_mentions
FROM sentiment_scores ss
JOIN reviews r ON ss.review_id = r.id
WHERE r.property_id = 1
AND r.review_date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY aspect
ORDER BY avg_score DESC
SELECT aspect, verbatim, score FROM sentiment_scores ss
JOIN reviews r ON ss.review_id = r.id
WHERE r.property_id = 1 AND r.review_date >= CURRENT_DATE - INTERVAL '7 days'
AND aspect = $1 ORDER BY score DESC LIMIT 1
Summary prompt for the narrative node: Generate a 3-paragraph executive summary of this hotel's guest sentiment for the week. Include top 3 strengths, bottom 3 weaknesses, and one actionable recommendation.
The weekly aggregation is the core deliverable for hotel management. The narrative summary generated by GPT-5.4 mini converts raw data into actionable insights without requiring the hotel GM to interpret charts. Consider also scheduling a monthly version with deeper trend analysis on the 1st of each month.
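The 'Top 5 and Bottom 5' ranking can be computed in an n8n Function node from the aggregation query's rows; a minimal sketch, assuming each row carries { aspect, avg_score }:

```javascript
// Sketch: rank aspects by average sentiment score and return the
// top N and bottom N experience elements.
function rankAspects(rows, n = 5) {
  const sorted = [...rows].sort((a, b) => b.avg_score - a.avg_score);
  return {
    top: sorted.slice(0, n),              // best-scoring aspects
    bottom: sorted.slice(-n).reverse()    // worst-scoring aspects, worst first
  };
}
```

The two lists are what the workflow writes to the Google Sheets tabs that feed the Looker Studio bar chart.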
Step 8: Configure Google Looker Studio Dashboard
Build the interactive dashboard that hotel management will use daily. Connect Looker Studio to the Google Sheets data store and create visualizations for sentiment trends, aspect rankings, and alert history.
Dashboard Page 1: Executive Overview
- Scorecard: Overall Sentiment Score (avg of all aspects, current month)
- Scorecard: Total Reviews Analyzed (current month)
- Scorecard: Positive/Negative Split (% pie chart)
- Bar Chart: 'Top & Bottom Experience Elements' (horizontal bar, sorted by avg_score) — Data Source: Monthly_Summary, Dimension: aspect, Metric: avg_score, Sort: avg_score descending, Conditional formatting: green for score > 0.3, red for score < -0.3
- Time Series: 'Sentiment Trend Over Time' (line chart, weekly avg scores) — One line per aspect, filterable
Dashboard Page 2: Aspect Deep Dive
- Dropdown filter: Select Aspect
- Time series: Selected aspect score over time
- Table: Recent verbatim quotes (positive and negative)
- Scorecard: Week-over-week change
Dashboard Page 3: Alert History
- Table: All alerts with timestamp, severity, message
- Filter by date range and severity
Dashboard Page 4: Review Source Breakdown
- Pie chart: Reviews by source (Google, TripAdvisor, Booking, etc.)
- Bar chart: Average sentiment by source
- Helps identify if sentiment differs by booking channel
Looker Studio is free and supports scheduled email delivery of PDF snapshots. For the wall-mounted display, set up the Chromebox to auto-launch Chrome in kiosk mode with the Looker Studio URL. Use the Looker Studio embed URL with auto-refresh parameter: append '&refreshInterval=300' for 5-minute auto-refresh. The dashboard should be the primary client-facing deliverable — invest time making it visually polished.
Step 9: Install and Configure Dashboard Display Hardware
Mount the Samsung display in the hotel back-office area, connect the ASUS Chromebox, and configure Chrome OS kiosk mode to show the Looker Studio dashboard continuously.
Chrome OS Kiosk Configuration — Option A: Managed Chrome Enterprise (recommended)
Chrome OS Kiosk Configuration — Option B: Manual Kiosk Setup (if no Chrome Enterprise license)
# navigate to this URL in Chrome and set to Enable
chrome://flags/#ash-enable-auto-startDisplay Settings
The dashboard display is an optional but high-impact deliverable that reinforces the value of the sentiment analysis system. Hotel staff walking past will naturally absorb the sentiment scores, creating a feedback-aware culture. The hardware cost is modest ($735 MSP cost for display + Chromebox + mount) and can be resold at $1,200+ including installation. If the client declines the display, the dashboard is still accessible via any web browser.
Step 10: PMS Integration for Guest Profile Enrichment (Optional)
Connect the sentiment analysis pipeline to the hotel's Property Management System to associate review sentiment with guest stay records. This enables the hotel to see sentiment data alongside guest profiles and identify patterns by room type, length of stay, or booking source.
MEWS PMS Integration
# Search guest and update customer profile with sentiment tags. Full API
# docs: https://mews-systems.gitbook.io/connector-api/
POST https://api.mews.com/api/connector/v1/customers/search
Headers: { "Content-Type": "application/json" }
Body: { "ClientToken": "xxx", "AccessToken": "xxx", "Name": "guest_name_from_review" }
POST https://api.mews.com/api/connector/v1/customers/update
CLOUDBEDS PMS Integration
# Search guest and post a guest note. Full API docs:
# https://hotels.cloudbeds.com/api/docs
GET /api/v1.2/getGuests?name=xxx
POST /api/v1.2/postGuestNote
ORACLE OPERA Cloud Integration
Requires OPN membership and OPERA Cloud API access.
# Search guest profile and post sentiment comments.
GET /gst/v1/guests?givenName=xxx&surname=xxx
POST /gst/v1/guests/{guestId}/comments
PMS Platforms Without API Access
Use n8n to generate a daily CSV report of sentiment data. Hotel staff can manually cross-reference in their PMS.
PMS integration is the most complex and variable part of this deployment. It is optional for Phase 1 and can be added in Phase 2 after the core sentiment pipeline is validated. Guest name matching between reviews and PMS records is inherently imprecise (reviewers may use nicknames or first names only). Implement fuzzy matching with a confidence threshold and only auto-tag matches above 85% confidence. Always have a human review questionable matches. Some PMS vendors charge for API access — verify costs before committing.
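The fuzzy match with an 85% confidence threshold can be sketched as below. This uses a plain Levenshtein-based similarity; a production deployment might prefer a maintained library (e.g., fast-levenshtein or fuse.js), and the function names here are illustrative:

```javascript
// Sketch: Levenshtein edit distance via dynamic programming.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                  // deletion
        dp[i][j - 1] + 1,                                  // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Similarity in [0, 1]: 1 means identical after normalization.
function nameMatchConfidence(reviewName, pmsName) {
  const a = reviewName.trim().toLowerCase();
  const b = pmsName.trim().toLowerCase();
  return 1 - levenshtein(a, b) / Math.max(a.length, b.length);
}

const AUTO_TAG_THRESHOLD = 0.85; // only auto-tag above this confidence
```

Matches below the threshold go to the human review queue rather than being auto-tagged against the PMS profile.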
Step 11: Configure GDPR/CCPA Compliance Controls
Implement data protection measures including PII redaction, data retention policies, deletion workflows, and required documentation. This is critical for any hotel serving EU guests or California residents.
docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db
-- Create automated data retention (delete reviews older than 2 years)
CREATE OR REPLACE FUNCTION cleanup_old_data() RETURNS void AS $$
BEGIN
DELETE FROM sentiment_scores WHERE review_id IN (SELECT id FROM reviews WHERE review_date < CURRENT_DATE - INTERVAL '730 days');
DELETE FROM reviews WHERE review_date < CURRENT_DATE - INTERVAL '730 days';
DELETE FROM sentiment_aggregates WHERE period_start < CURRENT_DATE - INTERVAL '730 days';
DELETE FROM alerts WHERE created_at < CURRENT_DATE - INTERVAL '365 days';
END;
$$ LANGUAGE plpgsql;
-- Schedule via pg_cron or n8n weekly trigger
\q
GDPR compliance is non-negotiable for hotels with EU guests. Key points: (1) Public OTA reviews are generally processed under the 'legitimate interest' legal basis, but direct survey responses require explicit consent. (2) The DPA must specify that OpenAI is a sub-processor and data is sent to their API — check OpenAI's data usage policy (as of 2025, API data is not used for training). (3) AWS Comprehend PII detection costs approximately $1–3/month and is a worthwhile compliance investment. (4) Keep the data retention period aligned with the hotel's existing data retention policy — 2 years is a reasonable default.
Step 12: End-to-End Testing and Go-Live
Perform comprehensive testing of the entire pipeline from review collection through dashboard display. Validate sentiment accuracy against manually labeled samples, verify alert delivery, and confirm dashboard data freshness.
-- Connect to the database
-- docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db
INSERT INTO reviews (property_id, source, source_review_id, review_date, rating_overall, review_text, language) VALUES
(1, 'manual_test', 'test_001', CURRENT_DATE, 5.0, 'The room was spotless and the bed was incredibly comfortable. Staff at the front desk were so helpful with restaurant recommendations. Breakfast buffet was outstanding.', 'en'),
(1, 'manual_test', 'test_002', CURRENT_DATE, 2.0, 'Very noisy room next to the elevator. The Wi-Fi kept disconnecting and the bathroom was not clean. Staff seemed indifferent when I complained.', 'en'),
(1, 'manual_test', 'test_003', CURRENT_DATE, 4.0, 'Great location, walking distance to everything. Room was a bit small but clean. The pool area was nice but the gym equipment was outdated.', 'en');
-- Add 17 more varied reviews covering all aspects
SELECT r.source_review_id, ss.aspect, ss.sentiment, ss.score, ss.verbatim
FROM sentiment_scores ss JOIN reviews r ON ss.review_id = r.id
WHERE r.source = 'manual_test' ORDER BY r.source_review_id, ss.aspect;
Allow 1 week of parallel operation (automated + manual spot-checking) before fully trusting the system. During this week, have the hotel GM or operations manager review the daily sentiment output and flag any inaccuracies. Fine-tune the OpenAI prompt based on feedback (see the Custom AI Components section for the full prompt with adjustment guidance). Delete all test data before go-live: DELETE FROM reviews WHERE source = 'manual_test';
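The manual spot-check can be quantified: export the model's aspect labels alongside the human labels for the test reviews and compute an agreement rate. A minimal sketch (row shape is an assumption for illustration):

```javascript
// Sketch: fraction of aspect-level labels where the model agrees with
// the human reviewer. Input rows: { aspect, model_sentiment, human_sentiment }.
function agreementRate(rows) {
  if (rows.length === 0) return 0;
  const hits = rows.filter(r => r.model_sentiment === r.human_sentiment).length;
  return hits / rows.length;
}
```

A low rate on any particular aspect points to prompt tuning (see the Tuning Guidance items) rather than wholesale distrust of the pipeline.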
Custom AI Components
Hospitality Aspect-Level Sentiment Analyzer
Type: prompt
The core OpenAI GPT-5.4 mini prompt that performs aspect-level sentiment analysis on hotel guest reviews. It identifies which hospitality experience elements are mentioned, classifies sentiment for each, assigns a score from -1.0 to 1.0, and extracts the relevant verbatim quote. This prompt is designed for batch processing (up to 10 reviews per call) to minimize API costs.
Implementation
System Prompt
User Prompt Template
n8n Implementation
- Model: gpt-5.4-mini
- Temperature: 0.1
- Max Tokens: 4000
- Response Format: JSON (enable 'JSON Mode' in the node settings)
- System Message: [paste system prompt above]
- User Message: Use an n8n Expression to construct the user prompt from the batched reviews
// User Message construction from batched reviews
{{ $json.batchedReviews.map((r, i) => `Review ${i+1} (Source: ${r.source}, Rating: ${r.rating_overall}/5, Date: ${r.review_date}):\n"${r.review_text}"`).join('\n\n') }}
Tuning Guidance
- If scores are too generous (all reviews getting 0.5+), add to system prompt: 'Be more critical in scoring. A review that says something is "ok" or "fine" should score 0.0 to 0.1, not 0.3+.'
- If aspects are being hallucinated (aspects mentioned that weren't in the review), add: 'CRITICAL: Only extract aspects with explicit textual evidence. If in doubt, skip the aspect.'
- If multilingual reviews are poorly analyzed, add: 'For non-English reviews, first internally translate to English for analysis, then provide the verbatim in the original language.'
Cost Estimate
- Average review: ~150 words = ~200 tokens input
- Batch of 10 reviews: ~2,000 input tokens + ~500 system prompt tokens = ~2,500 input tokens
- Output per batch: ~1,500 tokens
- Cost per batch: (2,500 × $0.15/1M) + (1,500 × $0.60/1M) = $0.000375 + $0.0009 = $0.001275
- 300 reviews/month = 30 batches = $0.038/month
- Even at 1,000 reviews/month: ~$0.13/month
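The arithmetic above can be reproduced as a helper, useful for quoting clients at different review volumes (token counts per batch are the estimates from this section):

```javascript
// Sketch: GPT-5.4 mini cost model from the estimates above
// ($0.15 per 1M input tokens, $0.60 per 1M output tokens).
function batchCost(inputTokens, outputTokens) {
  return (inputTokens * 0.15 + outputTokens * 0.60) / 1e6;
}

// ~2,500 input + ~1,500 output tokens per 10-review batch.
function monthlyCost(reviewsPerMonth, batchSize = 10) {
  const batches = Math.ceil(reviewsPerMonth / batchSize);
  return batches * batchCost(2500, 1500);
}
```

At 300 reviews/month this yields about $0.038, matching the estimate above; the LLM cost is negligible next to the hardware and labor line items.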
Weekly Sentiment Executive Summary Generator
Type: prompt
An OpenAI GPT-5.4 mini prompt that converts raw weekly sentiment aggregation data into a natural-language executive summary suitable for hotel management. Delivered via Slack/Teams and optionally email. Highlights top 3 strengths, bottom 3 weaknesses, week-over-week trends, and one specific actionable recommendation.
Implementation
System Prompt
User Prompt Template
n8n Implementation
- Model: gpt-5.4-mini
- Temperature: 0.4 (slightly higher for more natural writing)
- Max Tokens: 1000
- System Message: [paste system prompt above]
- User Message: Construct from PostgreSQL aggregation query results using n8n expressions
- Used within the Weekly Aggregation workflow, Node 7
Cost: ~$0.001 per weekly summary = ~$0.004/month
Real-Time Service Recovery Alert Agent
Type: workflow
An n8n workflow component that monitors incoming sentiment analysis results in real-time and triggers immediate alerts when it detects: (1) any review with 2+ aspects scoring below -0.5, (2) 3+ negative mentions of the same aspect within 24 hours (spike detection), or (3) a review mentioning a specific guest by name who is still checked in (requires PMS integration). Alerts are routed to the appropriate hotel department via Slack/Teams with urgency levels.
Implementation
Implement as a sub-workflow called from the Sentiment Analysis workflow. The workflow is triggered by an array of sentiment_scores just inserted for a batch of reviews.
NODE 1: Function — Classify Alert Severity
// JavaScript: Classify alert severity. Applies Rule 1 (2+ aspects below
// -0.5) and Rule 2 (any single aspect below -0.8), avoiding double-alerting.
const results = $input.all();
const alerts = [];
for (const item of results) {
const scores = item.json.aspects || [];
const reviewId = item.json.review_id;
const reviewSource = item.json.source;
const reviewDate = item.json.review_date;
// Rule 1: Multi-aspect negative review (2+ aspects below -0.5)
const severeNegatives = scores.filter(s => s.score < -0.5);
if (severeNegatives.length >= 2) {
alerts.push({
review_id: reviewId,
alert_type: 'service_recovery',
severity: severeNegatives.length >= 3 ? 'critical' : 'high',
message: `🚨 Service Recovery Needed: Review from ${reviewSource} on ${reviewDate} has ${severeNegatives.length} severely negative aspects: ${severeNegatives.map(s => `${s.aspect} (${s.score})`).join(', ')}`,
aspects: severeNegatives.map(s => s.aspect),
verbatims: severeNegatives.map(s => `${s.aspect}: "${s.verbatim}"`).join('\n')
});
}
// Rule 2: Single critical aspect (score below -0.8)
const criticalAspects = scores.filter(s => s.score < -0.8);
for (const ca of criticalAspects) {
if (!(severeNegatives.length >= 2 && severeNegatives.includes(ca))) { // skip only if Rule 1 already alerted on this aspect
alerts.push({
review_id: reviewId,
alert_type: 'negative_spike',
severity: 'high',
message: `⚠️ Critical Negative Sentiment: ${ca.aspect} scored ${ca.score} - "${ca.verbatim}"`,
aspects: [ca.aspect],
verbatims: `${ca.aspect}: "${ca.verbatim}"`
});
}
}
}
return alerts.map(a => ({ json: a }));
NODE 2: Postgres — Check for Spike Pattern
-- Postgres query: Detect negative aspect spikes within the last 24 hours.
-- Any aspect with 3+ negative mentions triggers a trend_change alert.
SELECT aspect, COUNT(*) as neg_count
FROM sentiment_scores ss
JOIN reviews r ON ss.review_id = r.id
WHERE r.property_id = 1
AND ss.sentiment = 'negative'
AND ss.analyzed_at >= NOW() - INTERVAL '24 hours'
GROUP BY aspect
HAVING COUNT(*) >= 3;
If results are returned, create an additional spike alert with the following structure:
# Spike alert payload constructed when the spike query returns results.
{
"alert_type": "trend_change",
"severity": "high",
"message": "📊 Negative Spike Detected: {aspect} has {neg_count} negative mentions in the last 24 hours. Investigate immediately."
}
NODE 3: Postgres — Insert Alerts
-- Postgres: Persist each generated alert to the alerts table.
INSERT INTO alerts (property_id, review_id, alert_type, severity, message)
VALUES (1, $1, $2, $3, $4);
NODE 4: IF — Route by Severity
- Critical / High → Slack DM to GM + channel post
- Medium → Channel post only
- Low → Log only (weekly summary)
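NODE 4's routing rules can be expressed as a pure function feeding the IF node's branches (the destination field names are illustrative, not an n8n schema):

```javascript
// Sketch: map alert severity to Slack delivery destinations,
// mirroring the routing rules listed above.
function routeAlert(severity) {
  switch (severity) {
    case 'critical':
    case 'high':
      return { dmGeneralManager: true, channelPost: true, logOnly: false };
    case 'medium':
      return { dmGeneralManager: false, channelPost: true, logOnly: false };
    default: // 'low'
      return { dmGeneralManager: false, channelPost: false, logOnly: true };
  }
}
```

Keeping the routing in one function makes it easy to adjust per client (e.g., some GMs want medium alerts DMed too) without rewiring nodes.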
NODE 5: Slack — Send Alert
# Slack message template for #hotel-sentiment-alerts channel.
Channel: #hotel-sentiment-alerts
🏨 *{alert_type}* | Severity: *{severity}*
{message}
📝 Guest Quotes:
{verbatims}
🔗 <{dashboard_url}|View Full Dashboard>
_Recommended: Review and respond to this feedback within 2 hours._
NODE 6: Google Sheets — Log Alert
Append to the Alerts_Log tab with the following fields: timestamp, alert type, severity, and message.
PMS integration is required to support Rule 3 (alerting on reviews that mention a currently checked-in guest by name). This rule is not implemented in the nodes above and must be added as an additional lookup step once PMS connectivity is available.
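Once PMS connectivity exists, Rule 3 could be sketched as below. This is a hypothetical illustration only: `checkedInGuests` would come from a PMS API call (e.g., the Mews customers/search endpoint shown in Step 10), and a real implementation should use the fuzzy matching with confidence threshold described there rather than this exact surname comparison:

```javascript
// Hypothetical Rule 3 sketch: does the reviewer's name match any
// currently checked-in guest? Exact surname match, for illustration.
function matchesCheckedInGuest(reviewerName, checkedInGuests) {
  const surname = reviewerName.trim().toLowerCase().split(/\s+/).pop();
  return checkedInGuests.some(
    g => g.surname && g.surname.toLowerCase() === surname
  );
}
```

A positive match should raise the alert severity to critical, since the hotel can still recover the stay in person.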
Review Source Aggregation Connector
Type: integration
A multi-source review collection integration that handles the complexity of pulling reviews from various OTAs and direct feedback channels. Since not all platforms offer APIs, this component implements multiple collection strategies including API polling, email parsing, and manual CSV import processing.
Implementation
N8N WORKFLOW: 'Multi-Source Review Collector' — This workflow handles six collection channels: five automated sources plus a manual CSV fallback.
Channel 1: Google Reviews (API)
- Node type: HTTP Request
- Method: GET
- URL: https://mybusiness.googleapis.com/v4/accounts/{{$env.GOOGLE_ACCOUNT_ID}}/locations/{{$env.GOOGLE_LOCATION_ID}}/reviews
- Auth: Google OAuth2
- Pagination: Follow nextPageToken
- Schedule: Daily at 6:00 AM
return items.map(item => ({
json: {
source: 'google',
source_review_id: item.json.reviewId,
reviewer_name: item.json.reviewer?.displayName || 'Anonymous',
review_date: item.json.createTime?.split('T')[0],
rating_overall: {'ONE':1,'TWO':2,'THREE':3,'FOUR':4,'FIVE':5}[item.json.starRating] || 0,
review_title: '',
review_text: item.json.comment || '',
language: 'en' // the reviews API response does not reliably indicate review language; default assumed
}
}));
Channel 2: Booking.com (Email Parsing)
Strategy: Booking.com sends review notification emails to the hotel. Forward these to a dedicated email address monitored by n8n.
- Node type: Email Trigger
- Protocol: IMAP
- Host: imap.gmail.com (or hotel's mail server)
- Mailbox: reviews@hotelname.com
- Filter: From contains 'booking.com' AND Subject contains 'New Guest Review'
- Node type: HTML Parse
- Extract guest name via CSS selector for reviewer name
- Extract numeric rating
- Extract review body text
- Extract date from email received date
return [{
json: {
source: 'booking',
source_review_id: `booking_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`,
reviewer_name: $json.parsed_guest_name,
review_date: new Date().toISOString().split('T')[0], // fallback: processing date; prefer the email's received date when parsed
rating_overall: parseFloat($json.parsed_rating),
review_title: $json.parsed_title || '',
review_text: $json.parsed_review_body,
language: 'en'
}
}];
Channel 3: TripAdvisor (Email Parsing / API)
- Same email-parsing strategy as Booking.com
- Alternatively, use TripAdvisor Content API if partnership is approved
- Email filter: From contains 'tripadvisor' AND Subject contains 'review'
Channel 4: Expedia (Email Parsing)
- Same email-parsing strategy as Booking.com
- Email filter: From contains 'expedia' AND Subject contains 'review'
Channel 5: Direct Guest Surveys (Webhook)
If the hotel uses a survey tool (SurveyMonkey, Typeform, Google Forms):
- Node type: Webhook Trigger
- Path: /webhook/survey-response
- Method: POST
- Auth: Header token validation
return [{
json: {
source: 'direct_survey',
source_review_id: `survey_${$json.response_id}`,
reviewer_name: $json.guest_name || 'Survey Respondent',
review_date: new Date().toISOString().split('T')[0],
rating_overall: $json.overall_rating || null,
review_title: '',
review_text: $json.open_feedback || $json.comments || '',
language: 'en'
}
}];
Channel 6: Manual CSV Upload (Fallback)
For any source without API or email integration, accept a recurring manual CSV export from hotel staff; n8n parses the uploaded file into the same shared review schema.
For the email parsing strategy, set up a dedicated email account (e.g., reviews@hotelclient.com) and configure forwarding rules in the hotel's Booking.com Extranet, TripAdvisor Management Center, and Expedia Partner Central to CC this address on all review notifications. This is the most reliable method for OTAs that lack public APIs.
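For the CSV fallback, a minimal parsing sketch in an n8n Code node, assuming a hypothetical column layout of `source,source_review_id,reviewer_name,review_date,rating,review_text` (real OTA exports will need per-source column mapping, and a proper CSV library is advisable for quoted fields):

```javascript
// Illustrative CSV-to-review-schema parser for Channel 6.
// Assumes the first line is a header row; column order is an assumption.
function parseReviewCsv(csvText) {
  const rows = csvText.trim().split('\n').slice(1); // skip header row
  return rows.map(line => {
    const [source, id, name, date, rating, ...text] = line.split(',');
    return {
      json: {
        source: source || 'csv_import',
        source_review_id: id,
        reviewer_name: name || 'Anonymous',
        review_date: date,
        rating_overall: parseFloat(rating) || null,
        review_title: '',
        review_text: text.join(','), // re-join in case the review text contained commas
        language: 'en'
      }
    };
  });
}
```

The output shape matches the other channels, so the downstream dedup insert and sentiment workflow need no changes.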
Competitive Benchmarking Analyzer
Type: prompt
An optional OpenAI prompt that analyzes publicly available competitor review data to benchmark the client hotel's sentiment against 3-5 local competitors. Runs monthly and identifies competitive advantages and gaps by aspect category.
Implementation
System Prompt
User Prompt Template
- To get competitor review data, use the Google Places API (Text Search + Place Details) to fetch recent reviews for 3-5 competing hotels:
GET https://maps.googleapis.com/maps/api/place/details/json?place_id={COMPETITOR_PLACE_ID}&fields=reviews&key={API_KEY}
- Google Places API returns up to 5 most recent reviews per call. For more comprehensive data, collect weekly and accumulate over time.
- Run the competitor sentiment through the same Hospitality Aspect-Level Sentiment Analyzer prompt.
- Store competitor scores in a separate PostgreSQL table: competitor_sentiment_aggregates
- Cost: ~$0.05/month for API calls + ~$0.01/month for OpenAI analysis of competitor reviews
- Schedule: Monthly, 1st of each month
- This is a premium add-on feature — include in the Professional and Premium MSP packages
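Once competitor reviews have been run through the same aspect-level analyzer, the monthly comparison step reduces to ranking per-aspect deltas. A sketch, assuming both inputs are maps of aspect name to average score (the field names here are assumptions, not the actual table schema):

```javascript
// Hedged sketch of the benchmarking comparison: positive delta means the
// client outperforms the competitor on that aspect; negative means a gap.
function benchmarkAspects(clientScores, competitorScores) {
  const aspects = Object.keys(clientScores).filter(a => a in competitorScores);
  return aspects
    .map(aspect => ({
      aspect,
      delta: +(clientScores[aspect] - competitorScores[aspect]).toFixed(2)
    }))
    .sort((x, y) => y.delta - x.delta); // advantages first, gaps last
}
```

The top of the sorted list feeds the "competitive advantages" section of the monthly report, the bottom the "gaps" section.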
Testing & Validation
- PIPELINE TEST: Insert 5 test reviews with known sentiment (2 strongly positive, 2 strongly negative, 1 mixed) into the reviews table. Run the Sentiment Analysis workflow manually. Verify that the strongly positive reviews receive average aspect scores above 0.5 and strongly negative reviews receive scores below -0.5. The mixed review should yield both positive and negative aspects.
- ASPECT ACCURACY TEST: Using the 20 seed test reviews from Step 12, manually label which aspects are mentioned in each review. Compare against the AI-extracted aspects. Target: 90%+ precision (AI-identified aspects actually exist in the review) and 80%+ recall (aspects mentioned in the review are caught by AI).
- ALERT DELIVERY TEST: Insert a test review with 3 aspects scoring below -0.7 (e.g., 'Terrible experience. Room was filthy, staff was rude, and the food made me sick.'). Verify that (1) the Service Recovery Alert workflow triggers, (2) severity is classified as 'critical', (3) the Slack/Teams message is delivered within 5 minutes, and (4) the alert is logged in both the PostgreSQL alerts table and the Google Sheets Alerts_Log tab.
- SPIKE DETECTION TEST: Insert 4 reviews within a 1-hour window, all with negative 'cleanliness' aspect scores. Run the sentiment analysis and alert workflows. Verify that the spike detection query fires and delivers a trend_change alert for the cleanliness aspect.
- DASHBOARD DATA FLOW TEST: After running the Sentiment Analysis workflow on test data, open the Google Sheet and verify that (1) the Aspect_Scores tab has new rows with correct data, (2) the review_id, aspect, sentiment, score, and verbatim columns are all populated. Then open Looker Studio and verify the dashboard charts reflect the new data within 5 minutes.
- PII REDACTION TEST: Insert a test review containing a guest name, email, and phone number: 'I am John Smith (john.smith@email.com, 555-123-4567) and my room 304 was terrible.' Run the pipeline and verify that the verbatim field in sentiment_scores does NOT contain the name, email, phone, or room number. They should be replaced with [REDACTED] or stripped entirely.
- WEEKLY SUMMARY TEST: After accumulating at least 10 analyzed reviews, manually trigger the Weekly Aggregation workflow. Verify that (1) the Monthly_Summary tab in Google Sheets is updated with ranked aspects, (2) the OpenAI-generated narrative summary is delivered to the Slack/Teams channel, and (3) the summary correctly identifies the top 3 and bottom 3 aspects.
- DASHBOARD DISPLAY TEST: On the wall-mounted Samsung display with ASUS Chromebox, verify that (1) Chrome launches in kiosk mode on boot, (2) the Looker Studio dashboard loads fully, (3) the dashboard auto-refreshes (data updates visible within 10 minutes), and (4) the display is readable from 6 feet away in typical back-office lighting.
- DEDUPLICATION TEST: Run the Review Collection workflow twice in succession. Verify that the same reviews are NOT inserted twice by checking the UNIQUE constraint on (source, source_review_id) in the reviews table. The second run should process 0 new reviews.
- MULTI-LANGUAGE TEST: Insert a review in Spanish, French, or German (common for hotels with international guests). Run sentiment analysis and verify that aspects are correctly identified and the verbatim is preserved in the original language. Example: 'Das Zimmer war sehr sauber aber das WLAN war schrecklich langsam.' (German: Room was very clean but WiFi was terribly slow.) Should produce: cleanliness=positive, wifi=negative.
- FULL END-TO-END LATENCY TEST: Post a new review on the hotel's Google Business Profile at a known time. Wait for the next scheduled collection run (6 AM). Verify the review appears in the database, is analyzed by 7 AM, and is visible in the Looker Studio dashboard by 7:30 AM. Total latency target: under 24 hours from review posting to dashboard visibility.
- LOAD TEST: Insert 500 reviews in bulk to simulate a backlog scenario (e.g., initial historical import). Run the Sentiment Analysis workflow and verify it processes all 500 reviews in batches of 10 without errors, timeouts, or API rate limiting. Expected completion time: under 30 minutes for 50 API calls.
Client Handoff
TRAINING SESSION (90 minutes, on-site or video call with hotel GM, operations manager, and front desk manager):
Maintenance
ONGOING MSP RESPONSIBILITIES:
Daily (Automated)
- Monitor n8n workflow execution logs for failed runs (set up n8n error workflow that emails MSP on any failure)
- Verify daily review collection and sentiment analysis workflows completed successfully
- Check n8n server uptime (integrate with MSP's RMM tool — e.g., Datto RMM, ConnectWise Automate agent on the Dell OptiPlex)
Weekly
- Review the weekly sentiment briefing output for quality (spot-check 2-3 sentiment scores against actual reviews)
- Check Google Sheets row count — monitor for approaching 50,000 row limit (archive old data annually)
- Verify Looker Studio dashboard loads correctly and data is current
- Review OpenAI API usage dashboard for unexpected cost spikes
Monthly
- Generate MSP monthly report for the client: system uptime, reviews processed, API costs, notable sentiment trends
- Review and tune alert thresholds based on false positive/negative rates (adjust the -0.5 threshold in the Config sheet if needed)
- Verify all review source connections are active (Google API token refresh, email parsing still working for Booking.com/TripAdvisor)
- Apply n8n Docker image updates — test in staging first
- Check PostgreSQL database size and vacuum
docker-compose pull && docker-compose up -d
docker exec postgres psql -U sentiment_user -d sentiment_db -c 'VACUUM ANALYZE;'
Quarterly
- Sentiment accuracy audit: randomly sample 25 reviews, manually label sentiment, compare to AI output. If accuracy drops below 85%, adjust the OpenAI prompt.
- Review and update the hospitality aspect categories if the hotel has added new amenities or services
- Conduct a 30-minute client check-in call to review sentiment trends and gather feedback on system utility
- Review GDPR/CCPA compliance: verify data retention cleanup ran, check for any deletion requests, update DPA if vendor terms changed
- Review OpenAI model availability — if a newer/cheaper model is released (e.g., GPT-5.4 mini successor), evaluate and migrate
- Update Looker Studio dashboard with any new visualizations requested by the client
Annually
- Full system review and optimization session with the client
- Data retention cleanup: archive reviews older than 2 years, verify automated cleanup function ran
- Renew or renegotiate any paid software licenses
- Security audit: rotate all API keys, update n8n credentials, review server patches
- GDPR/CCPA compliance review: update privacy impact assessment, verify DPA with all sub-processors (OpenAI, AWS, Google)
- Evaluate if the hotel has outgrown the current solution and should upgrade to a Tier 1 SaaS platform (ReviewPro, Revinate)
SLA Considerations
- System availability target: 99% uptime (allows ~7 hours downtime/month for maintenance)
- Alert delivery SLA: within 15 minutes of sentiment analysis completion
- Dashboard data freshness: within 24 hours of review publication
- MSP response time: 4 hours for critical issues (system down, data breach), 24 hours for non-critical issues
- Escalation path: L1 (automated monitoring) → L2 (MSP technician: restart services, check API keys) → L3 (MSP developer: workflow debugging, prompt tuning, database issues)
Alternatives
Turnkey SaaS Platform: Shiji ReviewPro
Deploy Shiji ReviewPro as a fully managed SaaS platform instead of the custom n8n pipeline. ReviewPro aggregates reviews from 175+ sources, provides semantic analysis across 500 hospitality-specific concepts, and includes a pre-built dashboard with the Global Review Index™ score. Setup involves vendor onboarding, connecting OTA accounts, and configuring PMS integration through ReviewPro's partner ecosystem. No custom development required.
Tradeoffs
- COST: $150–$500/month per property (vs. $5–$20/month for the custom build), significantly reducing MSP margin to 30-40% when resold at $299–$699/month.
- COMPLEXITY: Much lower — 2-week deployment vs. 4-6 weeks, no coding required, L2 technician can handle.
- CAPABILITY: Superior out-of-the-box — 500 semantic concepts vs. our 12, 175+ sources vs. manual integration, multilingual support built in, competitive benchmarking included.
- CONTROL: Less customizable — dashboard layout, alert rules, and AI model behavior are controlled by ReviewPro.
- BEST WHEN: The client is a mid-to-large hotel (100+ rooms) with budget for premium tools, the MSP wants minimal development overhead, or the property needs immediate deployment with enterprise-grade analytics.
Turnkey SaaS Platform: TrustYou CXP
Deploy TrustYou's Customer Experience Platform (CXP) for smaller properties. TrustYou launched CXP at ITB Berlin 2024, unifying surveys, reviews, and messaging into a single platform with built-in AI sentiment analysis. More affordable than ReviewPro for independent and boutique hotels.
- COST: Estimated $100–$400/month per property, lower than ReviewPro. MSP margin is moderate at 40-50% when resold.
- COMPLEXITY: Very low — guided SaaS onboarding, typically 1-2 weeks.
- CAPABILITY: Strong for independents — unified survey + review analysis, AI-powered response suggestions, good multilingual support. Less depth than ReviewPro's 500 semantic concepts.
- CONTROL: Limited customization but sufficient for most independent hotels.
- BEST WHEN: Client is an independent or boutique hotel (25-100 rooms), budget is constrained, and the property values simplicity over depth. Also a good choice when the hotel wants integrated survey capabilities alongside review analysis.
White-Label Resale via Vendasta
Use Vendasta's white-label reputation management platform to offer sentiment analysis under the MSP's own brand. Vendasta monitors reviews across 90+ directories and provides sentiment reporting. The MSP sets their own pricing and branding, making it invisible to the client that a third party is involved.
Tradeoffs
- COST: Vendasta plans from $99–$999/month for the MSP platform + wholesale product costs per client. Resale at $299–$599/month per property yields 40-60% margin.
- COMPLEXITY: Low — Vendasta provides the technology, MSP provides the branded wrapper.
- CAPABILITY: Moderate — general reputation management with sentiment, but not hospitality-specific. Lacks the deep aspect-level analysis of ReviewPro or our custom solution. Good enough for basic sentiment trends.
- CONTROL: High branding control (white-label), but limited technical customization.
- BEST WHEN: The MSP wants to scale across multiple hospitality clients quickly with a consistent branded offering, or the MSP serves clients across multiple verticals and wants one platform for all reputation management.
Hybrid: Birdeye + Custom Enrichment
Use Birdeye ($299–$449/month per location) for review aggregation and basic sentiment analysis, supplemented with a lightweight n8n workflow that sends Birdeye's data through OpenAI for deeper hospitality-specific aspect-level analysis. This gets the best of both worlds: reliable review collection from Birdeye plus custom AI analysis depth.
- COST: $299–$449/month for Birdeye + $5–$15/month for OpenAI API = $304–$464/month. Resale at $599–$899/month for 40-50% margin.
- COMPLEXITY: Medium — Birdeye handles the hard part (review aggregation from all sources), MSP builds only the custom analysis layer. 2-3 week deployment.
- CAPABILITY: Strong — Birdeye's proven review collection infrastructure plus custom hospitality-specific deep analysis. Gets aspect-level insights without building review collection from scratch.
- CONTROL: Good balance — Birdeye handles data collection reliably, MSP controls the AI analysis and dashboard.
- BEST WHEN: The MSP wants reliable multi-source review collection without building email parsers and API integrations, but still wants custom AI analysis depth and higher margins than pure Birdeye resale.
Full Open-Source Self-Hosted (Maximum Control, Zero SaaS Cost)
Replace OpenAI GPT-5.4 mini with a self-hosted open-source model like siebert/sentiment-roberta-large-english from Hugging Face Transformers, running on a GPU-enabled server or cloud VM. Combined with the n8n pipeline and PostgreSQL stack, this eliminates all recurring API costs for sentiment analysis.
Tradeoffs
- COST: $0/month for sentiment API (vs. $5–$20/month for OpenAI). However, requires a GPU VM (~$50–$150/month for an AWS g4dn.xlarge or equivalent) or a local GPU workstation ($1,500–$3,000 one-time). Net savings only at scale (10+ properties).
- COMPLEXITY: Very high — requires ML engineering experience, model fine-tuning for hospitality domain, and GPU infrastructure management. 8–16 weeks implementation.
- CAPABILITY: Lower out-of-the-box — RoBERTa provides sentence-level sentiment (positive/negative/neutral) but lacks GPT-5.4 mini's ability to extract aspects and verbatims in a single call. Would need a multi-model pipeline (NER for aspect extraction + sentiment classification).
- CONTROL: Maximum — no third-party API dependencies, all data stays on-premises.
- BEST WHEN: The client has strict data sovereignty requirements (cannot send data to OpenAI/cloud APIs), the MSP has ML engineering capabilities, or the MSP plans to deploy across 20+ properties and wants to eliminate per-property API costs.
Want early access to the full toolkit?