
Implementation Guide: Synthesize guest review sentiment to identify top and bottom experience elements

A step-by-step implementation guide for MSPs deploying AI that synthesizes guest review sentiment and surfaces the top and bottom experience elements for hospitality clients.

Hardware Procurement

Workflow Automation Server

Dell OptiPlex 7010 Micro (i5-13500T, 16GB RAM, 256GB SSD) | Qty: 1

$699 per unit (MSP cost) / $999 suggested resale installed

Self-hosted n8n workflow server that orchestrates review collection, sentiment analysis API calls, and dashboard data updates. Installed on-premises at the hotel or at the MSP's data center. Can also serve as local data cache for compliance. Alternatively, deploy on a cloud VM if the client prefers.

Back-Office Sentiment Dashboard Display

Samsung Business Display BE43T-H (43-inch, 4K UHD) | Qty: 1

$420 per unit (MSP cost) / $699 suggested resale installed

Wall-mounted display in the hotel back-office or front desk manager area showing real-time sentiment dashboard, top/bottom experience element rankings, and alert notifications. Connected to the n8n server or a dedicated Chromebit/mini PC running the Looker Studio dashboard in kiosk mode.

Dashboard Display Media Player

ASUS Chromebox 5 (i3-1215U, 8GB RAM, 128GB SSD) | Qty: 1

$290 per unit (MSP cost) / $449 suggested resale installed

Compact media player connected to the Samsung display via HDMI. Runs Chrome OS in kiosk mode displaying the Google Looker Studio sentiment dashboard. Low maintenance, auto-updates, and supports managed Chrome Enterprise enrollment.

Display Wall Mount

Mounting Dream MD2380 (32–55 inch tilt mount) | Qty: 1

$25 per unit (MSP cost) / $75 suggested resale installed

VESA-compatible wall mount for the Samsung display in the back-office area.

Software Procurement

n8n Self-Hosted (Community Edition)

n8n GmbH | Sustainable Use License (free for self-hosted, fair use)

$0/month (self-hosted) or $24/month for n8n Cloud Starter if cloud-hosted preferred

Workflow automation platform that orchestrates the entire sentiment pipeline: scheduled review collection from APIs, OpenAI sentiment analysis calls, data transformation, database writes, alerting, and dashboard data feeds. Pre-built hospitality templates available.

OpenAI API (GPT-5.4 mini)

OpenAI | GPT-5.4 mini

$0.15 per 1M input tokens / $0.60 per 1M output tokens. Estimated $2–$15/month per property processing 200–500 reviews per month

Usage-based API

Primary LLM for aspect-level sentiment analysis. Extracts sentiment scores and verbatim quotes for each hospitality experience element (cleanliness, staff, F&B, room comfort, check-in, location, value, amenities, noise, Wi-Fi). 128K context window enables batch processing of reviews.

Google Looker Studio

Google | Free (Google account required)

$0/month

Dashboard and reporting layer. Connects to Google Sheets or BigQuery as data source to display sentiment trends, aspect rankings, alert history, and competitive benchmarking. Shareable with hotel management via link.

Google Sheets API

Google | Free (within Google Workspace quotas) | Qty: 1

$0/month (included in Google Workspace)

Lightweight data store for sentiment analysis results. n8n writes structured sentiment data to Google Sheets, which serves as the data source for Looker Studio dashboards. Suitable for up to ~50,000 rows (approx. 3–4 years of reviews for a mid-size hotel).

Google Business Profile API

Google | Free (requires a verified listing and an approved Google Cloud project)

$0/month

Automated retrieval of Google Reviews for the hotel property. Requires verified Google Business Profile ownership.

TripAdvisor Content API

TripAdvisor (Tripadvisor LLC) | Partnership/API access

Free for certified partners; application required

Automated retrieval of TripAdvisor reviews. Requires partnership application and approval. Alternative: use ReviewPro or TrustYou aggregation if direct API access is unavailable.

PostgreSQL Database

PostgreSQL Global Development Group / Supabase | Open source (PostgreSQL License) / Free tier on Supabase

$0/month (self-hosted on the Dell OptiPlex) or $0–$25/month on Supabase cloud

Persistent relational database for storing all review data, sentiment scores, aspect breakdowns, and historical trends. Enables complex queries and serves as the canonical data store. Google Sheets is used as a lightweight mirror for Looker Studio.

Slack or Microsoft Teams

Slack Technologies / Microsoft | Per-seat SaaS (likely already in client environment)

$0 incremental (uses existing client subscription)

Real-time alert delivery channel for negative sentiment spikes, service recovery triggers, and weekly sentiment summary notifications to hotel management.

Amazon Comprehend (Optional - PII Detection)

Amazon Web Services | Usage-based API

Free tier: 50K units/month. Beyond: $0.0001 per unit. Estimated $1–$3/month

Optional PII detection and redaction layer. Scans review text before sending to OpenAI to strip guest names, email addresses, phone numbers, and room numbers for GDPR/CCPA compliance.

Google Workspace

Google | Per-seat SaaS

$7.20/user/month (likely already in MSP's stack)

Provides Google Sheets, Google Drive, and Gmail for the solution infrastructure. Most MSPs already have this. Required for Sheets API and Looker Studio access.

Prerequisites

  • Client must have a claimed and verified Google Business Profile for each hotel property
  • Client must have admin access to their TripAdvisor Management Center (tripadvisor.com/Owners) for each property
  • Client must have admin access to their Booking.com Extranet for each property
  • Client must have admin access to their Expedia Partner Central account (if applicable)
  • Reliable business internet connection (minimum 25 Mbps) at the hotel or MSP data center hosting the n8n server
  • Property Management System (PMS) admin credentials and API documentation — common PMS platforms include Oracle OPERA, Cloudbeds, Mews, Hotelogix, or StayNTouch. PMS integration is optional but recommended for guest profile enrichment
  • Existing Slack workspace or Microsoft Teams tenant for alert delivery (or client willingness to adopt one)
  • MSP must have an active OpenAI API account with billing enabled (platform.openai.com)
  • MSP must have an active AWS account for Amazon Comprehend PII detection (optional but strongly recommended for GDPR compliance)
  • Client must designate a Hotel Operations Manager or General Manager as the primary stakeholder and dashboard consumer
  • Static IP address or Dynamic DNS configured if hosting n8n server on-premises at the hotel (alternatively, host at MSP data center or on a cloud VM)
  • Google Workspace or standalone Google account for the MSP service account (used for Sheets API and Looker Studio)
  • Written data processing agreement (DPA) template prepared for GDPR compliance, to be signed between MSP and hotel client
  • Inventory of all review sources: list of all OTA profiles, social media accounts, and direct survey tools currently in use

Installation Steps

Step 1: Environment Setup - Provision n8n Workflow Server

Install and configure the n8n workflow automation server on the Dell OptiPlex Micro (or cloud VM). This server will run 24/7 and orchestrate all data collection, sentiment analysis, and dashboard updates. We use Docker for clean deployment and easy updates.

1. SSH into the server or open a terminal locally
2. Install Docker and dependencies
3. Create the n8n directory structure: /opt/n8n-sentiment
4. Create docker-compose.yml
5. Launch the stack and verify both containers are running
Install Docker and dependencies
bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose git curl
sudo systemctl enable docker && sudo systemctl start docker
sudo usermod -aG docker $USER
Create n8n directory structure
bash
mkdir -p /opt/n8n-sentiment && cd /opt/n8n-sentiment
docker-compose.yml — n8n and PostgreSQL stack
yaml
version: '3.8'
services:
  n8n:
    image: n8nio/n8n:latest
    restart: always
    ports:
      - '5678:5678'
    environment:
      - N8N_BASIC_AUTH_ACTIVE=true
      - N8N_BASIC_AUTH_USER=mspadmin
      - N8N_BASIC_AUTH_PASSWORD=CHANGE_THIS_STRONG_PASSWORD
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - GENERIC_TIMEZONE=America/New_York
      - N8N_DEFAULT_BINARY_DATA_MODE=filesystem
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/files
  postgres:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_DB: sentiment_db
      POSTGRES_USER: sentiment_user
      POSTGRES_PASSWORD: CHANGE_THIS_DB_PASSWORD
    ports:
      - '5432:5432'
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  n8n_data:
  postgres_data:
Launch the stack and verify containers are running
bash
docker-compose up -d
docker-compose ps
Note

Change N8N_BASIC_AUTH_PASSWORD and POSTGRES_PASSWORD to strong unique values. If hosting on a cloud VM (e.g., AWS t3.medium at ~$30/month or DigitalOcean Droplet at $24/month), adjust the N8N_HOST to the VM's public hostname. For production, place behind an Nginx reverse proxy with Let's Encrypt SSL. Store all credentials in the MSP's password vault (e.g., IT Glue, Hudu).

Step 2: Database Schema Setup - Create Sentiment Data Tables

Initialize the PostgreSQL database with tables to store raw reviews, sentiment analysis results, and aggregated scores. This schema supports multi-property deployments and historical trend analysis.

Connect to PostgreSQL
bash
docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db
Execute SQL: Create all sentiment schema tables, indexes, and insert initial property record
sql
CREATE TABLE properties (
  id SERIAL PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  address TEXT,
  city VARCHAR(100),
  country VARCHAR(100),
  pms_property_id VARCHAR(100),
  created_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE reviews (
  id SERIAL PRIMARY KEY,
  property_id INTEGER REFERENCES properties(id),
  source VARCHAR(50) NOT NULL, -- google, tripadvisor, booking, expedia, direct_survey
  source_review_id VARCHAR(255),
  reviewer_name VARCHAR(255),
  reviewer_name_anonymized BOOLEAN DEFAULT false,
  review_date DATE,
  rating_overall DECIMAL(2,1),
  review_title TEXT,
  review_text TEXT NOT NULL,
  language VARCHAR(10) DEFAULT 'en',
  collected_at TIMESTAMP DEFAULT NOW(),
  analyzed_at TIMESTAMP,
  UNIQUE(source, source_review_id)
);
CREATE TABLE sentiment_scores (
  id SERIAL PRIMARY KEY,
  review_id INTEGER REFERENCES reviews(id) ON DELETE CASCADE,
  aspect VARCHAR(50) NOT NULL, -- cleanliness, staff, fnb, room_comfort, checkin, location, value, amenities, noise, wifi, parking, pool_spa
  sentiment VARCHAR(10) NOT NULL, -- positive, negative, neutral, mixed
  score DECIMAL(3,2), -- -1.00 to 1.00
  verbatim TEXT, -- exact quote from the review
  analyzed_at TIMESTAMP DEFAULT NOW()
);
CREATE TABLE sentiment_aggregates (
  id SERIAL PRIMARY KEY,
  property_id INTEGER REFERENCES properties(id),
  period_type VARCHAR(10) NOT NULL, -- daily, weekly, monthly
  period_start DATE NOT NULL,
  aspect VARCHAR(50) NOT NULL,
  avg_score DECIMAL(4,3),
  positive_count INTEGER DEFAULT 0,
  negative_count INTEGER DEFAULT 0,
  neutral_count INTEGER DEFAULT 0,
  total_mentions INTEGER DEFAULT 0,
  top_positive_verbatim TEXT,
  top_negative_verbatim TEXT,
  updated_at TIMESTAMP DEFAULT NOW(),
  UNIQUE(property_id, period_type, period_start, aspect)
);
CREATE TABLE alerts (
  id SERIAL PRIMARY KEY,
  property_id INTEGER REFERENCES properties(id),
  review_id INTEGER REFERENCES reviews(id),
  alert_type VARCHAR(50) NOT NULL, -- negative_spike, service_recovery, trend_change
  severity VARCHAR(10) NOT NULL, -- low, medium, high, critical
  message TEXT NOT NULL,
  delivered BOOLEAN DEFAULT false,
  delivered_at TIMESTAMP,
  created_at TIMESTAMP DEFAULT NOW()
);
CREATE INDEX idx_reviews_property_date ON reviews(property_id, review_date DESC);
CREATE INDEX idx_sentiment_review ON sentiment_scores(review_id);
CREATE INDEX idx_aggregates_property_period ON sentiment_aggregates(property_id, period_type, period_start DESC);
INSERT INTO properties (name, address, city, country) VALUES ('Hotel Client Name', '123 Main St', 'City', 'US');
\q
Note

Adjust the property INSERT statement with the actual hotel client details. For multi-property clients, insert one row per property. The schema supports up to 12 hospitality-specific aspect categories. The UNIQUE constraint on reviews prevents duplicate processing.

Step 3: Configure n8n Credentials and API Connections

Set up all API credentials in n8n's credential store. This includes OpenAI for sentiment analysis, Google Sheets for dashboard data, and optionally AWS Comprehend for PII detection.

  • Access n8n web UI at https://your-n8n-host:5678
  • Navigate to Settings > Credentials > Add Credential
1. OpenAI API Credential — Type: OpenAI | API Key: sk-proj-XXXXXXXX (from platform.openai.com > API Keys) | Organization ID: org-XXXXXXXX (optional)
2. Google Sheets OAuth2 Credential — Type: Google Sheets OAuth2 | Client ID: from Google Cloud Console > APIs & Services > Credentials | Client Secret: from the same location | Scopes: https://www.googleapis.com/auth/spreadsheets | Follow the OAuth2 flow to authorize
3. PostgreSQL Credential — Type: Postgres | Host: postgres (Docker service name) or localhost if external | Port: 5432 | Database: sentiment_db | User: sentiment_user | Password: (the password set in docker-compose.yml)
4. Slack Credential (if using Slack for alerts) — Type: Slack OAuth2 | Create a Slack App at api.slack.com/apps | Bot Token Scopes: chat:write, channels:read | Install to the workspace and copy the Bot User OAuth Token
5. Microsoft Teams Credential (if using Teams for alerts) — Type: Microsoft Teams | Follow n8n's Microsoft OAuth2 setup guide
6. AWS Credential (optional, for Comprehend PII detection) — Type: AWS | Access Key ID: from an IAM user with the ComprehendFullAccess policy | Secret Access Key: the corresponding secret
Note

Create a dedicated OpenAI API key for this project with usage limits set to $25/month as a safety cap (Settings > Limits on platform.openai.com). For the Google Sheets credential, create a Google Cloud project specifically for this client, enable the Sheets API, and create OAuth2 credentials. Store all API keys in the MSP's credential vault alongside the n8n credentials.

Step 4: Create Google Sheets Dashboard Data Store

Create a Google Sheet that will serve as the data bridge between n8n and Google Looker Studio. This sheet will contain multiple tabs for different dashboard views and will be automatically updated by n8n workflows.

1. In Google Drive, create a new Google Sheet named: '[Hotel Name] - Sentiment Analysis Dashboard Data'
2. Create Tab 1: 'Reviews' — Columns: review_id | source | review_date | rating_overall | review_text_snippet | language
3. Create Tab 2: 'Aspect_Scores' — Columns: review_id | review_date | aspect | sentiment | score | verbatim
4. Create Tab 3: 'Daily_Aggregates' — Columns: date | aspect | avg_score | positive_count | negative_count | neutral_count | total_mentions
5. Create Tab 4: 'Monthly_Summary' — Columns: month | aspect | avg_score | rank_positive | rank_negative | total_mentions | top_positive_quote | top_negative_quote
6. Create Tab 5: 'Alerts_Log' — Columns: timestamp | alert_type | severity | aspect | message | review_source
7. Create Tab 6: 'Config' — Columns: key | value
8. Share the sheet with the MSP service account email (the Google account used for n8n OAuth2 credentials) and grant Editor access
Config tab default values for the Google Sheet
text
# Tab 6: 'Config' - Key/Value pairs:
property_name           | Hotel Client Name
analysis_start_date     | 2025-01-01
alert_threshold_negative | -0.5
alert_threshold_spike   | 3
Note

The Google Sheet acts as a lightweight, zero-cost data layer that Looker Studio can connect to natively. For higher-volume properties (500+ reviews/month) or multi-property deployments, consider using Google BigQuery instead ($0 for first 1TB/month of queries). The Config tab allows adjusting alert thresholds without modifying n8n workflows.
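The Config-tab thresholds feed the alert logic built later in Step 6. As a minimal sketch of how an n8n Function node might apply them (the function name, row shape, and alert objects here are illustrative, not part of n8n; the alert types mirror the Step 2 schema):

```javascript
// Apply the Config-tab thresholds to one day's aspect scores.
// scores: rows like { aspect: 'wifi', score: -0.7 }
// config: { alert_threshold_negative: -0.5, alert_threshold_spike: 3 }
function checkAlerts(scores, config) {
  const alerts = [];
  const negativesByAspect = {};
  for (const s of scores) {
    if (s.score < config.alert_threshold_negative) {
      // Single strongly-negative mention: service recovery candidate
      alerts.push({ type: 'service_recovery', aspect: s.aspect, score: s.score });
      negativesByAspect[s.aspect] = (negativesByAspect[s.aspect] || 0) + 1;
    }
  }
  for (const [aspect, count] of Object.entries(negativesByAspect)) {
    // Several negatives on the same aspect in one day: spike alert
    if (count >= config.alert_threshold_spike) {
      alerts.push({ type: 'negative_spike', aspect, count });
    }
  }
  return alerts;
}
```

Because the thresholds come in as data, tuning them is a Sheet edit rather than a workflow change.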

Step 5: Build the Core Review Collection Workflow in n8n

Create the n8n workflow that collects reviews from Google Business Profile, and processes Booking.com/TripAdvisor reviews via manual CSV upload or API integration. This workflow runs on a daily schedule and stores new reviews in PostgreSQL.

1. In n8n UI, create a new workflow: 'Review Collection - [Hotel Name]'
2. Build the workflow from the nodes described below (or assemble equivalent workflow JSON and load it via n8n > Workflows > Import from JSON)

Node 1: Schedule Trigger

  • Trigger every day at 6:00 AM hotel local time
Schedule Trigger CRON expression
cron
0 6 * * *

Node 2: HTTP Request - Google Reviews

  • Method: GET
  • URL: https://mybusinessreviews.googleapis.com/v1/accounts/{accountId}/locations/{locationId}/reviews
  • Authentication: Google OAuth2 (configured in Step 3)
  • Query Params: pageSize=50, orderBy=updateTime desc

Node 3: Function - Normalize Google Reviews

Transform Google API response to standard format:

Normalize Google Reviews
javascript
// n8n Function node: map the Google Business Profile API response
// to the pipeline's standard review format. Note the n8n item shape:
// each returned element must wrap its fields in a `json` property.
const starMap = { FIVE: 5.0, FOUR: 4.0, THREE: 3.0, TWO: 2.0, ONE: 1.0 };
return items.map(item => ({
  json: {
    source: 'google',
    source_review_id: item.json.reviewId || item.json.name,
    reviewer_name: item.json.reviewer?.displayName || 'Anonymous',
    review_date: item.json.createTime?.split('T')[0],
    rating_overall: starMap[item.json.starRating] || 1.0,
    review_title: '',
    review_text: item.json.comment || '',
    language: 'en'
  }
}));

Node 4: Postgres - Check for Duplicates

  • Operation: SELECT
Postgres duplicate check query
sql
SELECT source_review_id FROM reviews WHERE source = 'google' AND source_review_id = ANY($1::text[])

Node 5: Function - Filter New Reviews Only

  • Compare collected reviews against existing IDs
  • Pass through only new reviews
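A minimal sketch of this filter (the helper name is ours; existingIds comes from the Node 4 duplicate-check query, and field names follow the Step 2 schema):

```javascript
// Node 5 sketch: keep only reviews whose source_review_id is not
// already in the database.
function filterNewReviews(collected, existingIds) {
  const seen = new Set(existingIds);
  return collected.filter(r => !seen.has(r.source_review_id));
}
```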

Node 6: Postgres - Insert New Reviews

  • Operation: INSERT
  • Table: reviews
  • Map fields from normalized format

Node 7: Set - Flag for Sentiment Analysis

  • Add flag: needs_analysis = true
  • Pass review IDs to sentiment analysis workflow (via webhook or direct call)
Note

Google Business Profile API requires a verified GBP listing and an approved Google Cloud project. For Booking.com and Expedia reviews, these platforms do not offer public review APIs — use one of these alternatives: (1) Manual CSV export from the Booking.com Extranet uploaded to n8n via a watched folder, (2) Use a reputation aggregation service like ReviewPro's API as a data source, (3) Set up email forwarding from OTA review notification emails to n8n's email trigger node and parse the review content. TripAdvisor Content API requires a partnership application at developer-tripadvisor.com. Plan 2–4 weeks for approval.

Step 6: Build the Sentiment Analysis Workflow in n8n

Create the core AI workflow that takes unprocessed reviews from PostgreSQL, optionally strips PII via AWS Comprehend, sends them to OpenAI GPT-5.4 mini for aspect-level sentiment analysis, and stores structured results back in the database and Google Sheets.

1. In n8n UI, create a new workflow: 'Sentiment Analysis - [Hotel Name]'
2. Node 1: Schedule Trigger — Trigger every day at 7:00 AM (1 hour after collection). CRON: 0 7 * * *
3. Node 2: Postgres - Fetch Unanalyzed Reviews — Operation: SELECT. Query: SELECT id, review_text, source, review_date, rating_overall FROM reviews WHERE property_id = 1 AND analyzed_at IS NULL ORDER BY review_date DESC LIMIT 50
4. Node 3: (Optional) AWS Comprehend - PII Detection — For each review, call DetectPiiEntities. Redact NAME, EMAIL, PHONE, ADDRESS entities. Replace with [REDACTED]
5. Node 4: Function - Batch Reviews for OpenAI — Group reviews into batches of 10 for efficient API usage. Format as a numbered list for the prompt
6. Node 5: OpenAI - Aspect Sentiment Analysis — Model: gpt-5.4-mini. Temperature: 0.1 (low for consistency). System Prompt: (see custom_ai_components for the full prompt). User Message: batch of 10 reviews. Response Format: JSON mode enabled
7. Node 6: Function - Parse OpenAI Response — Parse the JSON response into individual aspect scores. Map back to review IDs
8. Node 7: Postgres - Insert Sentiment Scores — Operation: INSERT. Table: sentiment_scores. Bulk insert all aspect scores
9. Node 8: Postgres - Mark Reviews as Analyzed — Operation: UPDATE. Query: UPDATE reviews SET analyzed_at = NOW() WHERE id = ANY($1::int[])
10. Node 9: Google Sheets - Append to Aspect_Scores Tab — Append new rows to the Aspect_Scores sheet. Map: review_id, review_date, aspect, sentiment, score, verbatim
11. Node 10: Function - Check for Negative Alerts — If any aspect score < -0.5, trigger the alert flow. If there are 3+ negative mentions of the same aspect in one day, trigger a spike alert
12. Node 11: IF - Alert Needed? — Branch to alert delivery or end
13. Node 12: Slack/Teams - Send Alert — Channel: #hotel-sentiment-alerts. Message format: 🚨 *Negative Sentiment Alert* ...
Note

The batch processing approach (10 reviews per API call) reduces API costs by ~80% compared to individual calls. GPT-5.4 mini handles this reliably with JSON mode. Set a monthly budget cap of $25 on the OpenAI account as a safety measure. The optional AWS Comprehend PII step adds ~$1–3/month but is strongly recommended for GDPR compliance when processing EU guest reviews. If the hotel serves primarily domestic US guests with no EU exposure, this step can be deferred.
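The Node 4 batching step can be sketched as follows; the { id, review_text } row shape matches the Node 2 query, while the function name and the [id=N] prompt convention are assumptions of this example rather than anything n8n or OpenAI requires:

```javascript
// Group unanalyzed reviews into batches of 10 and format each batch
// as a numbered list for the sentiment prompt (Node 4).
function batchReviews(reviews, batchSize = 10) {
  const batches = [];
  for (let i = 0; i < reviews.length; i += batchSize) {
    const batch = reviews.slice(i, i + batchSize);
    batches.push({
      reviewIds: batch.map(r => r.id), // lets Node 6 map results back to rows
      prompt: batch
        .map((r, n) => `${n + 1}. [id=${r.id}] ${r.review_text}`)
        .join('\n')
    });
  }
  return batches;
}
```

Each element's prompt becomes one OpenAI call, and reviewIds lets Node 6 map the parsed JSON response back to database rows.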

Step 7: Build the Aggregation and Reporting Workflow

Create a weekly workflow that aggregates daily sentiment scores into weekly and monthly summaries, ranks aspects from best to worst, identifies trends, and updates the Google Sheets dashboard data. This workflow also generates the 'Top 5 and Bottom 5 Experience Elements' ranking.

1. In n8n UI, create a new workflow: 'Weekly Aggregation - [Hotel Name]'
2. Node 1: Schedule Trigger — Every Monday at 5:00 AM
3. Node 2: Postgres - Aggregate Last 7 Days
4. Node 3: Postgres - Get Top Verbatims — For each aspect, get the most positive and most negative verbatim
5. Node 4: Postgres - Upsert Weekly Aggregates — INSERT INTO sentiment_aggregates ... ON CONFLICT UPDATE
6. Node 5: Google Sheets - Clear and Update Daily_Aggregates Tab — Clear existing data in Daily_Aggregates, then write fresh aggregated data
7. Node 6: Google Sheets - Update Monthly_Summary Tab — Append or update the current month's summary row per aspect
8. Node 7: OpenAI - Generate Weekly Narrative Summary — Model: gpt-5.4-mini. Include top 3 strengths, bottom 3 weaknesses, and one actionable recommendation
9. Node 8: Slack/Teams - Deliver Weekly Report — Post the narrative summary to the management channel and include a link to the full Looker Studio dashboard
Node 2: Postgres — Aggregate Last 7 Days
sql
SELECT 
  aspect,
  AVG(score) as avg_score,
  COUNT(*) FILTER (WHERE sentiment = 'positive') as positive_count,
  COUNT(*) FILTER (WHERE sentiment = 'negative') as negative_count,
  COUNT(*) FILTER (WHERE sentiment = 'neutral') as neutral_count,
  COUNT(*) as total_mentions
FROM sentiment_scores ss
JOIN reviews r ON ss.review_id = r.id
WHERE r.property_id = 1
AND r.review_date >= CURRENT_DATE - INTERVAL '7 days'
GROUP BY aspect
ORDER BY avg_score DESC
Node 3: Postgres — Get Top Verbatims per Aspect
sql
-- Most positive verbatim for a given aspect; run a second pass with
-- ORDER BY score ASC for the most negative verbatim
SELECT aspect, verbatim, score FROM sentiment_scores ss
JOIN reviews r ON ss.review_id = r.id
WHERE r.property_id = 1 AND r.review_date >= CURRENT_DATE - INTERVAL '7 days'
AND aspect = $1
ORDER BY score DESC
LIMIT 1
Node 7: OpenAI — Weekly Narrative Prompt
text
Generate a 3-paragraph executive summary of this hotel's guest sentiment for the week. Include top 3 strengths, bottom 3 weaknesses, and one actionable recommendation.
Note

The weekly aggregation is the core deliverable for hotel management. The narrative summary generated by GPT-5.4 mini converts raw data into actionable insights without requiring the hotel GM to interpret charts. Consider also scheduling a monthly version with deeper trend analysis on the 1st of each month.
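The 'Top 5 and Bottom 5' ranking itself is simple once the aggregates exist. As a sketch, assuming rows shaped like the Node 2 query output (the function name and the minMentions guard, which keeps thinly-mentioned aspects out of the rankings, are our additions):

```javascript
// Rank aspects by average sentiment and return the best and worst five.
function rankAspects(aggregates, topN = 5, minMentions = 3) {
  const ranked = aggregates
    .filter(a => a.total_mentions >= minMentions) // drop low-sample aspects
    .slice()
    .sort((x, y) => y.avg_score - x.avg_score);
  return {
    top: ranked.slice(0, topN),
    bottom: ranked.slice(-topN).reverse() // worst aspect first
  };
}
```

Note that with fewer than 2 x topN qualifying aspects the two lists will overlap, which is harmless for display purposes.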

Step 8: Configure Google Looker Studio Dashboard

Build the interactive dashboard that hotel management will use daily. Connect Looker Studio to the Google Sheets data store and create visualizations for sentiment trends, aspect rankings, and alert history.

1. Go to lookerstudio.google.com
2. Click 'Create' > 'Report'
3. Add Data Source: Google Sheets > select the sentiment dashboard sheet
4. Add separate data sources for each tab: Aspect_Scores, Daily_Aggregates, Monthly_Summary, Alerts_Log

Dashboard Page 1: Executive Overview

  • Scorecard: Overall Sentiment Score (avg of all aspects, current month)
  • Scorecard: Total Reviews Analyzed (current month)
  • Scorecard: Positive/Negative Split (% pie chart)
  • Bar Chart: 'Top & Bottom Experience Elements' (horizontal bar, sorted by avg_score) — Data Source: Monthly_Summary, Dimension: aspect, Metric: avg_score, Sort: avg_score descending, Conditional formatting: green for score > 0.3, red for score < -0.3
  • Time Series: 'Sentiment Trend Over Time' (line chart, weekly avg scores) — One line per aspect, filterable

Dashboard Page 2: Aspect Deep Dive

  • Dropdown filter: Select Aspect
  • Time series: Selected aspect score over time
  • Table: Recent verbatim quotes (positive and negative)
  • Scorecard: Week-over-week change

Dashboard Page 3: Alert History

  • Table: All alerts with timestamp, severity, message
  • Filter by date range and severity

Dashboard Page 4: Review Source Breakdown

  • Pie chart: Reviews by source (Google, TripAdvisor, Booking, etc.)
  • Bar chart: Average sentiment by source
  • Helps identify if sentiment differs by booking channel
1. Apply hotel branding: Theme > Custom — use the hotel's brand colors and add the hotel logo in the header
2. Set sharing: share with hotel management email addresses (view-only)
3. Set up scheduled email delivery: Looker Studio > Schedule email > weekly PDF to the GM
Note

Looker Studio is free and supports scheduled email delivery of PDF snapshots. For the wall-mounted display, set up the Chromebox to auto-launch Chrome in kiosk mode with the Looker Studio URL. Use the Looker Studio embed URL with auto-refresh parameter: append '&refreshInterval=300' for 5-minute auto-refresh. The dashboard should be the primary client-facing deliverable — invest time making it visually polished.

Step 9: Install and Configure Dashboard Display Hardware

Mount the Samsung display in the hotel back-office area, connect the ASUS Chromebox, and configure Chrome OS kiosk mode to show the Looker Studio dashboard continuously.

1. Physical installation: mount the Mounting Dream MD2380 bracket on the back-office wall
2. Attach the Samsung BE43T-H display to the bracket
3. Connect the ASUS Chromebox 5 to the display via HDMI
4. Connect the Chromebox to the hotel network via Ethernet (preferred) or Wi-Fi
5. Power on both devices

Chrome OS Kiosk Configuration — Option A: Managed Chrome Enterprise (recommended)

1. Enroll the Chromebox in the Google Admin Console
2. Navigate to Devices > Chrome > Settings > Device > Kiosk Settings
3. Set Public Session Kiosk Mode
4. Add the Looker Studio dashboard URL as the single-app kiosk URL
5. Configure: auto-launch on boot, disable sleep, disable screen saver

Chrome OS Kiosk Configuration — Option B: Manual Kiosk Setup (if no Chrome Enterprise license)

1. Sign in to the Chromebox with a dedicated Google account (e.g., dashboard@hotelname.com)
2. Open Chrome and navigate to the Looker Studio dashboard URL
3. Press F11 for fullscreen
4. Install a Chrome kiosk extension (e.g., 'Kiosk' or 'Fullscreen Kiosk')
5. Configure the extension to auto-launch the dashboard URL on startup
Set Chrome flag to enable auto-start
text
# navigate to this URL in Chrome and set to Enable

chrome://flags/#ash-enable-auto-start

Display Settings

1. On the Samsung display remote, set HDMI-CEC to ON
2. Set Auto Source Switching to ON
3. Adjust brightness for ambient back-office lighting (typically 60–70%)
4. Set the sleep timer to match hotel operating hours (e.g., on at 6 AM, off at 11 PM)
Note

The dashboard display is an optional but high-impact deliverable that reinforces the value of the sentiment analysis system. Hotel staff walking past will naturally absorb the sentiment scores, creating a feedback-aware culture. The hardware cost is modest ($735 MSP cost for display + Chromebox + mount) and can be resold at $1,200+ including installation. If the client declines the display, the dashboard is still accessible via any web browser.

Step 10: PMS Integration for Guest Profile Enrichment (Optional)

Connect the sentiment analysis pipeline to the hotel's Property Management System to associate review sentiment with guest stay records. This enables the hotel to see sentiment data alongside guest profiles and identify patterns by room type, length of stay, or booking source.

MEWS PMS Integration

1
Obtain API credentials from Mews > Settings > Integrations > API
2
In n8n, add an HTTP Request node after sentiment analysis
3
If match found, update customer profile with sentiment tags: 'Positive Reviewer' or 'At Risk - Service Recovery'
Mews API
http
# Search guest and update customer profile with sentiment tags. Full API
# docs: https://mews-systems.gitbook.io/connector-api/

POST https://api.mews.com/api/connector/v1/customers/search
Headers: { "Content-Type": "application/json" }
Body: { "ClientToken": "xxx", "AccessToken": "xxx", "Name": "guest_name_from_review" }

POST https://api.mews.com/api/connector/v1/customers/update

CLOUDBEDS PMS Integration

1. Register as a Cloudbeds API partner
2. Obtain OAuth2 credentials
3. Search for the guest by name
4. Add a note to the guest record
Cloudbeds API
http
# Search guest and post a guest note. Full API docs:
# https://hotels.cloudbeds.com/api/docs

GET /api/v1.2/getGuests?name=xxx
POST /api/v1.2/postGuestNote

ORACLE OPERA Cloud Integration

Note

Requires OPN membership and OPERA Cloud API access.

1. Obtain OPERA Cloud REST API credentials
2. Search for the guest by given name and surname
3. Update the guest profile with sentiment notes via the comments endpoint
OPERA Cloud API
http
# Search guest profile and post sentiment comments.

GET /gst/v1/guests?givenName=xxx&surname=xxx
POST /gst/v1/guests/{guestId}/comments

PMS Platforms Without API Access

Use n8n to generate a daily CSV report of sentiment data. Hotel staff can manually cross-reference in their PMS.

Note

PMS integration is the most complex and variable part of this deployment. It is optional for Phase 1 and can be added in Phase 2 after the core sentiment pipeline is validated. Guest name matching between reviews and PMS records is inherently imprecise (reviewers may use nicknames or first names only). Implement fuzzy matching with a confidence threshold and only auto-tag matches above 85% confidence. Always have a human review questionable matches. Some PMS vendors charge for API access — verify costs before committing.
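The fuzzy match can be as simple as a normalized edit distance against the 0.85 threshold above. A sketch (the helper names are ours, and production matching should also handle first-name-only reviewers and nicknames):

```javascript
// Classic dynamic-programming Levenshtein edit distance.
function levenshtein(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) => [i]);
  for (let j = 1; j <= b.length; j++) dp[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1,                                    // deletion
        dp[i][j - 1] + 1,                                    // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)   // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

// Confidence in [0, 1]: 1 means identical after normalization.
function nameMatchConfidence(reviewerName, pmsName) {
  const norm = s => s.trim().toLowerCase();
  const a = norm(reviewerName), b = norm(pmsName);
  if (!a || !b) return 0;
  return 1 - levenshtein(a, b) / Math.max(a.length, b.length);
}
```

Only auto-tag when nameMatchConfidence returns 0.85 or higher; route anything lower to human review.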

Step 11: Configure GDPR/CCPA Compliance Controls

Implement data protection measures including PII redaction, data retention policies, deletion workflows, and required documentation. This is critical for any hotel serving EU guests or California residents.

1. Enable PII detection in the Sentiment Analysis workflow (Step 6, Node 3): ensure AWS Comprehend PII detection is active, and verify redaction works by checking that no guest names appear in sentiment_scores.verbatim
2. Set up a data retention policy in PostgreSQL
3. Create a Right-to-Deletion workflow in n8n: webhook trigger POST /delete-guest-data — input: guest name or email — action: search the reviews table, delete matching reviews and all associated sentiment_scores — log the deletion in an audit table — respond with confirmation
4. Prepare GDPR documentation: a Data Processing Agreement (DPA) between MSP and hotel client — a Record of Processing Activities (ROPA) entry for this system — a Privacy Impact Assessment (PIA) documenting data flows — an update to the hotel's guest privacy notice to mention review analysis
Connect to PostgreSQL container
bash
docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db
Create automated data retention function in PostgreSQL
sql
-- Create automated data retention (delete reviews older than 2 years)
CREATE OR REPLACE FUNCTION cleanup_old_data() RETURNS void AS $$
BEGIN
  DELETE FROM sentiment_scores WHERE review_id IN (SELECT id FROM reviews WHERE review_date < CURRENT_DATE - INTERVAL '730 days');
  DELETE FROM reviews WHERE review_date < CURRENT_DATE - INTERVAL '730 days';
  DELETE FROM sentiment_aggregates WHERE period_start < CURRENT_DATE - INTERVAL '730 days';
  DELETE FROM alerts WHERE created_at < CURRENT_DATE - INTERVAL '365 days';
END;
$$ LANGUAGE plpgsql;

-- Schedule via pg_cron or n8n weekly trigger
\q
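Step 3's right-to-deletion webhook can be sketched as an n8n Function node that validates the request and emits the parameterized statements for a downstream Postgres node. This is a hedged sketch: `buildDeletionPlan` and the `deletion_audit` table name are hypothetical stand-ins for the audit table described in step 3; the reviews/sentiment_scores tables follow the schema used elsewhere in this guide.

```javascript
// Builds the deletion plan for POST /delete-guest-data.
// Hypothetical helper; deletion_audit is an assumed audit-table name.
function buildDeletionPlan(body) {
  const ident = (body.guest_name || body.guest_email || '').trim();
  if (!ident) throw new Error('guest name or email required');
  return {
    statements: [
      // Delete child rows first so the foreign key on sentiment_scores is satisfied.
      { sql: 'DELETE FROM sentiment_scores WHERE review_id IN (SELECT id FROM reviews WHERE reviewer_name ILIKE $1)', params: [ident] },
      { sql: 'DELETE FROM reviews WHERE reviewer_name ILIKE $1', params: [ident] },
      // Log the deletion for the audit trail required by step 3.
      { sql: 'INSERT INTO deletion_audit (identifier, deleted_at) VALUES ($1, NOW())', params: [ident] }
    ],
    confirmation: `Deletion request for "${ident}" processed and logged.`
  };
}
```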
Note

GDPR compliance is non-negotiable for hotels with EU guests. Key points: (1) Public OTA reviews are generally processed under 'legitimate interest' legal basis, but direct survey responses require explicit consent. (2) The DPA must specify that OpenAI is a sub-processor and data is sent to their API — check OpenAI's data usage policy (as of 2025, API data is not used for training). (3) AWS Comprehend PII detection costs approximately $1–3/month and is a worthwhile compliance investment. (4) Keep the data retention period aligned with the hotel's existing data retention policy — 2 years is a reasonable default.

Step 12: End-to-End Testing and Go-Live

Perform comprehensive testing of the entire pipeline from review collection through dashboard display. Validate sentiment accuracy against manually labeled samples, verify alert delivery, and confirm dashboard data freshness.

1
Seed test data - Insert 20 sample reviews manually
2
Manually trigger the Sentiment Analysis workflow in n8n: Open n8n > Workflows > Sentiment Analysis > Click 'Execute Workflow' > Monitor execution log for errors
3
Verify results
4
Validate accuracy: Compare AI sentiment labels to your manual assessment. Target: 85%+ accuracy on sentiment polarity (positive/negative/neutral). Target: 90%+ accuracy on aspect identification
5
Test alert delivery: The test_002 review should trigger a negative alert. Verify Slack/Teams message received
6
Verify Google Sheets data: Open the dashboard Google Sheet. Confirm Aspect_Scores tab has new rows
7
Verify Looker Studio dashboard: Open dashboard URL. Confirm charts populate with test data. Test all filters and drill-downs
8
Go-live: Enable scheduled triggers on all three workflows — Review Collection: daily 6:00 AM, Sentiment Analysis: daily 7:00 AM, Weekly Aggregation: Monday 5:00 AM
Step 1 — Seed test reviews into the database
sql
-- Connect to the database
-- docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db

INSERT INTO reviews (property_id, source, source_review_id, review_date, rating_overall, review_text, language) VALUES
(1, 'manual_test', 'test_001', CURRENT_DATE, 5.0, 'The room was spotless and the bed was incredibly comfortable. Staff at the front desk were so helpful with restaurant recommendations. Breakfast buffet was outstanding.', 'en'),
(1, 'manual_test', 'test_002', CURRENT_DATE, 2.0, 'Very noisy room next to the elevator. The Wi-Fi kept disconnecting and the bathroom was not clean. Staff seemed indifferent when I complained.', 'en'),
(1, 'manual_test', 'test_003', CURRENT_DATE, 4.0, 'Great location, walking distance to everything. Room was a bit small but clean. The pool area was nice but the gym equipment was outdated.', 'en');
-- Add 17 more varied reviews covering all aspects
Step 3 — Verify sentiment analysis results for test reviews
sql
SELECT r.source_review_id, ss.aspect, ss.sentiment, ss.score, ss.verbatim
FROM sentiment_scores ss JOIN reviews r ON ss.review_id = r.id
WHERE r.source = 'manual_test' ORDER BY r.source_review_id, ss.aspect;
Connect to PostgreSQL via Docker
bash
docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db
Note

Allow 1 week of parallel operation (automated + manual spot-checking) before fully trusting the system. During this week, have the hotel GM or operations manager review the daily sentiment output and flag any inaccuracies. Fine-tune the OpenAI prompt based on feedback (see custom_ai_components for the full prompt with adjustment guidance). Delete all test data before go-live: DELETE FROM reviews WHERE source = 'manual_test';
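The accuracy validation in item 4 reduces to a simple hit rate over manually labeled pairs; a minimal sketch (the helper name is hypothetical):

```javascript
// Fraction of reviews where the AI polarity label matches the manual label.
// Compare the result against the 85% target from item 4.
function polarityAccuracy(pairs) {
  if (pairs.length === 0) return 0;
  const hits = pairs.filter(p => p.ai === p.manual).length;
  return hits / pairs.length;
}
```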

Custom AI Components

Hospitality Aspect-Level Sentiment Analyzer

Type: prompt

The core OpenAI GPT-5.4 mini prompt that performs aspect-level sentiment analysis on hotel guest reviews. It identifies which hospitality experience elements are mentioned, classifies sentiment for each, assigns a score from -1.0 to 1.0, and extracts the relevant verbatim quote. This prompt is designed for batch processing (up to 10 reviews per call) to minimize API costs.

Implementation

System Prompt

SYSTEM PROMPT: --- You are a hospitality sentiment analysis expert. You analyze hotel guest reviews and extract aspect-level sentiment for specific experience elements. For each review, identify ALL mentioned aspects from this list: - cleanliness: Room cleanliness, bathroom cleanliness, public area cleanliness - staff: Staff friendliness, helpfulness, professionalism, responsiveness - fnb: Food & beverage quality, breakfast, restaurant, room service, bar - room_comfort: Bed comfort, room size, temperature, furnishings, decor - checkin: Check-in/check-out process, speed, efficiency, welcome experience - location: Property location, proximity to attractions, transportation access - value: Value for money, pricing fairness, worth the cost - amenities: Pool, spa, gym, business center, parking, elevator - noise: Noise levels, soundproofing, quiet environment - wifi: Internet quality, Wi-Fi speed, connectivity reliability - parking: Parking availability, cost, convenience, valet - pool_spa: Pool condition, spa services, sauna, hot tub For each mentioned aspect, provide: 1. sentiment: 'positive', 'negative', 'neutral', or 'mixed' 2. score: Float from -1.0 (extremely negative) to 1.0 (extremely positive). Use 0.0 for neutral. - Score guide: -1.0 to -0.7 = strongly negative, -0.7 to -0.3 = moderately negative, -0.3 to 0.0 = slightly negative, 0.0 = neutral, 0.0 to 0.3 = slightly positive, 0.3 to 0.7 = moderately positive, 0.7 to 1.0 = strongly positive 3. verbatim: The EXACT quote from the review (max 100 characters) that supports this sentiment. If the aspect is implied but not directly quoted, paraphrase briefly. IMPORTANT RULES: - Only extract aspects that are actually mentioned or clearly implied in the review. - Do NOT invent aspects that aren't discussed. - If a review mentions an aspect with mixed feelings, use sentiment 'mixed' and a score reflecting the balance. 
- Handle multiple languages: if the review is not in English, still analyze it and provide the verbatim in the original language. - Be precise with scores: a '4/5 stars' review mentioning staff was 'fine' is staff: neutral (0.1), not positive (0.7). Respond in JSON format with this exact structure: { "reviews": [ { "review_index": 1, "aspects": [ { "aspect": "cleanliness", "sentiment": "positive", "score": 0.8, "verbatim": "The room was spotless" } ] } ] } ---

User Prompt Template

USER PROMPT TEMPLATE: --- Analyze the following {N} hotel guest reviews. For each review, extract all mentioned hospitality aspects with sentiment scores. Review 1 (Source: {source}, Rating: {rating}/5, Date: {date}): "{review_text_1}" Review 2 (Source: {source}, Rating: {rating}/5, Date: {date}): "{review_text_2}" [...up to Review 10...] ---

n8n Implementation

  • Model: gpt-5.4-mini
  • Temperature: 0.1
  • Max Tokens: 4000
  • Response Format: JSON (enable 'JSON Mode' in the node settings)
  • System Message: [paste system prompt above]
  • User Message: Use an n8n Expression to construct the user prompt from the batched reviews
n8n Expression
javascript
// User Message construction from batched reviews

{{ $json.batchedReviews.map((r, i) => `Review ${i+1} (Source: ${r.source}, Rating: ${r.rating_overall}/5, Date: ${r.review_date}):\n"${r.review_text}"`).join('\n\n') }}

Tuning Guidance

  • If scores are too generous (all reviews getting 0.5+), add to system prompt: 'Be more critical in scoring. A review that says something is "ok" or "fine" should score 0.0 to 0.1, not 0.3+.'
  • If aspects are being hallucinated (aspects mentioned that weren't in the review), add: 'CRITICAL: Only extract aspects with explicit textual evidence. If in doubt, skip the aspect.'
  • If multilingual reviews are poorly analyzed, add: 'For non-English reviews, first internally translate to English for analysis, then provide the verbatim in the original language.'

Cost Estimate

  • Average review: ~150 words = ~200 tokens input
  • Batch of 10 reviews: ~2,000 input tokens + ~500 system prompt tokens = ~2,500 input tokens
  • Output per batch: ~1,500 tokens
  • Cost per batch: (2,500 × $0.15/1M) + (1,500 × $0.60/1M) = $0.000375 + $0.0009 = $0.001275
  • 300 reviews/month = 30 batches = $0.038/month
  • Even at 1,000 reviews/month: ~$0.13/month
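The arithmetic above can be reproduced with a small helper (the per-1M-token rates are the assumed gpt-5.4-mini prices used in this estimate):

```javascript
// Cost of one batched API call, given token counts and per-1M-token rates.
function batchCost(inputTokens, outputTokens, inRatePerM = 0.15, outRatePerM = 0.60) {
  return (inputTokens * inRatePerM + outputTokens * outRatePerM) / 1e6;
}

const perBatch = batchCost(2500, 1500); // $0.001275 per batch of 10 reviews
const monthly = 30 * perBatch;          // ~$0.038 for 300 reviews/month
```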

Weekly Sentiment Executive Summary Generator

Type: prompt

An OpenAI GPT-5.4 mini prompt that converts raw weekly sentiment aggregation data into a natural-language executive summary suitable for hotel management. Delivered via Slack/Teams and optionally email. Highlights top 3 strengths, bottom 3 weaknesses, week-over-week trends, and one specific actionable recommendation.

Implementation

System Prompt

SYSTEM PROMPT: --- You are a hospitality consultant writing a weekly guest experience briefing for a hotel General Manager. Your tone is professional, concise, and action-oriented. You always lead with the most important insight. Format your response as follows: 📊 **Weekly Guest Sentiment Briefing - [Hotel Name]** **Period:** [start_date] to [end_date] | **Reviews Analyzed:** [count] **Overall Sentiment Score:** [score] (↑/↓ [change] vs. last week) 🌟 **Top 3 Strengths This Week:** 1. **[Aspect]** (Score: [X.XX]) - "[top positive verbatim]" 2. **[Aspect]** (Score: [X.XX]) - "[top positive verbatim]" 3. **[Aspect]** (Score: [X.XX]) - "[top positive verbatim]" ⚠️ **Bottom 3 Areas for Improvement:** 1. **[Aspect]** (Score: [X.XX]) - "[top negative verbatim]" 2. **[Aspect]** (Score: [X.XX]) - "[top negative verbatim]" 3. **[Aspect]** (Score: [X.XX]) - "[top negative verbatim]" 📈 **Notable Trends:** [1-2 sentences on significant week-over-week changes] 💡 **Recommended Action:** [One specific, actionable recommendation based on the data. Be concrete — e.g., 'Schedule a deep-clean of rooms on floors 3-5 this week' not 'improve cleanliness'.] Keep the entire briefing under 250 words. ---

User Prompt Template

USER PROMPT TEMPLATE: --- Generate the weekly sentiment briefing for {hotel_name}. Period: {start_date} to {end_date} Total reviews analyzed: {review_count} This week's aspect scores (sorted best to worst): {aspect_data_json} Last week's aspect scores for comparison: {last_week_data_json} Top positive verbatims by aspect: {positive_verbatims_json} Top negative verbatims by aspect: {negative_verbatims_json} ---

n8n Implementation

  • Model: gpt-5.4-mini
  • Temperature: 0.4 (slightly higher for more natural writing)
  • Max Tokens: 1000
  • System Message: [paste system prompt above]
  • User Message: Construct from PostgreSQL aggregation query results using n8n expressions
  • Used within the Weekly Aggregation workflow, Node 7

Cost: ~$0.001 per weekly summary = ~$0.004/month
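Constructing the user message from the aggregation query results can be sketched as below. The context field names (`this_week`, `positive_verbatims`, etc.) are hypothetical; map them to whatever the PostgreSQL aggregation node actually outputs.

```javascript
// Assembles the weekly-briefing user prompt from aggregated sentiment data,
// mirroring the placeholders in the user prompt template above.
function buildWeeklyPrompt(ctx) {
  return [
    `Generate the weekly sentiment briefing for ${ctx.hotel_name}.`,
    `Period: ${ctx.start_date} to ${ctx.end_date}`,
    `Total reviews analyzed: ${ctx.review_count}`,
    `This week's aspect scores (sorted best to worst): ${JSON.stringify(ctx.this_week)}`,
    `Last week's aspect scores for comparison: ${JSON.stringify(ctx.last_week)}`,
    `Top positive verbatims by aspect: ${JSON.stringify(ctx.positive_verbatims)}`,
    `Top negative verbatims by aspect: ${JSON.stringify(ctx.negative_verbatims)}`
  ].join('\n');
}
```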

Real-Time Service Recovery Alert Agent

Type: workflow

An n8n workflow component that monitors incoming sentiment analysis results in real-time and triggers immediate alerts when it detects: (1) any review with 2+ aspects scoring below -0.5, (2) 3+ negative mentions of the same aspect within 24 hours (spike detection), or (3) a review mentioning a specific guest by name who is still checked in (requires PMS integration). Alerts are routed to the appropriate hotel department via Slack/Teams with urgency levels.

Implementation

Implement as a sub-workflow called from the Sentiment Analysis workflow. The workflow is triggered by an array of sentiment_scores just inserted for a batch of reviews.

NODE 1: Function — Classify Alert Severity

NODE 1
javascript
// JavaScript: Classify alert severity. Applies Rule 1 (2+ aspects below
// -0.5) and Rule 2 (any single aspect below -0.8), avoiding double-alerting.

const results = $input.all();
const alerts = [];

for (const item of results) {
  const scores = item.json.aspects || [];
  const reviewId = item.json.review_id;
  const reviewSource = item.json.source;
  const reviewDate = item.json.review_date;
  
  // Rule 1: Multi-aspect negative review (2+ aspects below -0.5)
  const severeNegatives = scores.filter(s => s.score < -0.5);
  if (severeNegatives.length >= 2) {
    alerts.push({
      review_id: reviewId,
      alert_type: 'service_recovery',
      severity: severeNegatives.length >= 3 ? 'critical' : 'high',
      message: `🚨 Service Recovery Needed: Review from ${reviewSource} on ${reviewDate} has ${severeNegatives.length} severely negative aspects: ${severeNegatives.map(s => `${s.aspect} (${s.score})`).join(', ')}`,
      aspects: severeNegatives.map(s => s.aspect),
      verbatims: severeNegatives.map(s => `${s.aspect}: "${s.verbatim}"`).join('\n')
    });
  }
  
  // Rule 2: Single critical aspect (score below -0.8).
  // Every aspect below -0.8 is also below -0.5, so criticalAspects is always
  // a subset of severeNegatives; gate on whether Rule 1 already fired for
  // this review (2+ severe negatives) to avoid double-alerting.
  if (severeNegatives.length < 2) {
    const criticalAspects = scores.filter(s => s.score < -0.8);
    for (const ca of criticalAspects) {
      alerts.push({
        review_id: reviewId,
        alert_type: 'negative_spike',
        severity: 'high',
        message: `⚠️ Critical Negative Sentiment: ${ca.aspect} scored ${ca.score} - "${ca.verbatim}"`,
        aspects: [ca.aspect],
        verbatims: `${ca.aspect}: "${ca.verbatim}"`
      });
    }
  }
}

return alerts.map(a => ({ json: a }));

NODE 2: Postgres — Check for Spike Pattern

NODE 2
sql
-- Postgres query: Detect negative aspect spikes within the last 24 hours.
-- Any aspect with 3+ negative mentions triggers a trend_change alert.

SELECT aspect, COUNT(*) as neg_count
FROM sentiment_scores ss
JOIN reviews r ON ss.review_id = r.id
WHERE r.property_id = 1
AND ss.sentiment = 'negative'
AND ss.analyzed_at >= NOW() - INTERVAL '24 hours'
GROUP BY aspect
HAVING COUNT(*) >= 3;

If results are returned, create an additional spike alert with the following structure:

NODE 2
json
# Spike alert payload constructed when the spike query returns results.

{
  "alert_type": "trend_change",
  "severity": "high",
  "message": "📊 Negative Spike Detected: {aspect} has {neg_count} negative mentions in the last 24 hours. Investigate immediately."
}
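The Function node that turns spike-query rows into alert payloads matching the structure above can be sketched as follows (hypothetical helper; `aspect` and `neg_count` are the columns returned by the spike query):

```javascript
// Maps each spike-query row to a trend_change alert payload.
function spikeAlerts(rows) {
  return rows.map(r => ({
    alert_type: 'trend_change',
    severity: 'high',
    message: `📊 Negative Spike Detected: ${r.aspect} has ${r.neg_count} negative mentions in the last 24 hours. Investigate immediately.`
  }));
}
```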

NODE 3: Postgres — Insert Alerts

NODE 3
sql
-- Postgres: Persist each generated alert to the alerts table.

INSERT INTO alerts (property_id, review_id, alert_type, severity, message)
VALUES (1, $1, $2, $3, $4);
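Mapping each alert object from Node 1 to the `$1..$4` bind order of the insert above (spike alerts from Node 2 carry no review_id, so `$1` binds null for them):

```javascript
// Parameter array for the alerts INSERT: $1=review_id, $2=alert_type,
// $3=severity, $4=message.
function alertParams(alert) {
  return [alert.review_id ?? null, alert.alert_type, alert.severity, alert.message];
}
```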

NODE 4: IF — Route by Severity

  • Critical / High → Slack DM to GM + channel post
  • Medium → Channel post only
  • Low → Log only (weekly summary)
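The routing rule above expressed as a helper (the target names are illustrative labels, not real Slack identifiers):

```javascript
// Returns the delivery targets for a given severity, per the Node 4 rules.
function routeAlert(severity) {
  if (severity === 'critical' || severity === 'high') return ['gm_dm', 'channel'];
  if (severity === 'medium') return ['channel'];
  return ['log']; // low severity: rolled into the weekly summary only
}
```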

NODE 5: Slack — Send Alert

NODE 5
text
# Slack message template for #hotel-sentiment-alerts channel.

Channel: #hotel-sentiment-alerts

🏨 *{alert_type}* | Severity: *{severity}*

{message}

📝 Guest Quotes:
{verbatims}

🔗 <{dashboard_url}|View Full Dashboard>

_Recommended: Review and respond to this feedback within 2 hours._

NODE 6: Google Sheets — Log Alert

Append to the Alerts_Log tab with the following fields: timestamp, alert type, severity, and message.

Note

PMS integration is required to support Rule 3 (alerting on reviews that mention a currently checked-in guest by name). This rule is not implemented in the nodes above and must be added as an additional lookup step once PMS connectivity is available.

Review Source Aggregation Connector

Type: integration

A multi-source review collection integration that handles the complexity of pulling reviews from various OTAs and direct feedback channels. Since not all platforms offer APIs, this component implements multiple collection strategies including API polling, email parsing, and manual CSV import processing.

Implementation

N8N WORKFLOW: 'Multi-Source Review Collector' — This workflow handles 5 review collection channels.

Channel 1: Google Reviews (API)

  • Node type: HTTP Request
  • Method: GET
  • URL: https://mybusiness.googleapis.com/v4/accounts/{{$env.GOOGLE_ACCOUNT_ID}}/locations/{{$env.GOOGLE_LOCATION_ID}}/reviews
  • Auth: Google OAuth2
  • Pagination: Follow nextPageToken
  • Schedule: Daily at 6:00 AM
Google Reviews — Normalization Function
javascript
return items.map(item => ({
  json: {
    source: 'google',
    source_review_id: item.json.reviewId,
    reviewer_name: item.json.reviewer?.displayName || 'Anonymous',
    review_date: item.json.createTime?.split('T')[0],
    rating_overall: {'ONE':1,'TWO':2,'THREE':3,'FOUR':4,'FIVE':5}[item.json.starRating] || 0,
    review_title: '',
    review_text: item.json.comment || '',
    language: 'en' // the Google reviews payload does not expose a language field; default to English
  }
}));

Channel 2: Booking.com (Email Parsing)

Strategy: Booking.com sends review notification emails to the hotel. Forward these to a dedicated email address monitored by n8n.

  • Node type: Email Trigger
  • Protocol: IMAP
  • Host: imap.gmail.com (or hotel's mail server)
  • Mailbox: reviews@hotelname.com
  • Filter: From contains 'booking.com' AND Subject contains 'New Guest Review'
  • Node type: HTML Parse
  • Extract guest name via CSS selector for reviewer name
  • Extract numeric rating
  • Extract review body text
  • Extract date from email received date
Booking.com — Normalization Function
javascript
// NOTE: the UNIQUE (source, source_review_id) constraint handles dedup, so the
// ID must be stable across runs. A Date.now()/random ID would re-insert the
// same review if the email were ever reprocessed; hash the parsed content
// instead. Requires NODE_FUNCTION_ALLOW_BUILTIN=crypto in the n8n environment.
const crypto = require('crypto');
const stableId = crypto.createHash('sha256')
  .update(`${$json.parsed_guest_name}|${$json.parsed_review_body}`)
  .digest('hex')
  .slice(0, 16);

return [{
  json: {
    source: 'booking',
    source_review_id: `booking_${stableId}`,
    reviewer_name: $json.parsed_guest_name,
    review_date: new Date().toISOString().split('T')[0],
    rating_overall: parseFloat($json.parsed_rating),
    review_title: $json.parsed_title || '',
    review_text: $json.parsed_review_body,
    language: 'en'
  }
}];

Channel 3: TripAdvisor (Email Parsing / API)

  • Same email-parsing strategy as Booking.com
  • Alternatively, use TripAdvisor Content API if partnership is approved
  • Email filter: From contains 'tripadvisor' AND Subject contains 'review'

Channel 4: Expedia (Email Parsing)

  • Same email-parsing strategy as Booking.com
  • Email filter: From contains 'expedia' AND Subject contains 'review'

Channel 5: Direct Guest Surveys (Webhook)

If the hotel uses a survey tool (SurveyMonkey, Typeform, Google Forms):

  • Node type: Webhook Trigger
  • Path: /webhook/survey-response
  • Method: POST
  • Auth: Header token validation
Direct Guest Surveys — Normalization Function
javascript
return [{
  json: {
    source: 'direct_survey',
    source_review_id: `survey_${$json.response_id}`,
    reviewer_name: $json.guest_name || 'Survey Respondent',
    review_date: new Date().toISOString().split('T')[0],
    rating_overall: $json.overall_rating || null,
    review_title: '',
    review_text: $json.open_feedback || $json.comments || '',
    language: 'en'
  }
}];

Channel 6: Manual CSV Upload (Fallback)

For any source without API or email integration:

1
Watch Folder node monitoring /opt/n8n-sentiment/local-files/csv-imports/
2
Read CSV node parsing the file
3
Expected CSV columns: source, review_date, rating, reviewer_name, review_text
4
Normalization and deduplication as above
5
Move processed file to /opt/n8n-sentiment/local-files/csv-imports/processed/
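The normalization step for a parsed CSV row can be sketched as below (hypothetical helper; column names match the expected layout in item 3, and the ID is built deterministically so re-importing the same file deduplicates against the UNIQUE constraint):

```javascript
// Normalizes one parsed CSV row into the shared reviews schema.
function normalizeCsvRow(row) {
  // Deterministic ID: re-importing the same file produces the same
  // (source, source_review_id) pair and is rejected by the UNIQUE constraint.
  const id = `csv_${row.source}_${row.review_date}_${row.reviewer_name || 'anon'}`
    .toLowerCase()
    .replace(/\s+/g, '_');
  return {
    source: row.source,
    source_review_id: id,
    reviewer_name: row.reviewer_name || 'Anonymous',
    review_date: row.review_date,
    rating_overall: parseFloat(row.rating) || null,
    review_title: '',
    review_text: row.review_text || '',
    language: 'en'
  };
}
```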
Note

For the email parsing strategy, set up a dedicated email account (e.g., reviews@hotelclient.com) and configure forwarding rules in the hotel's Booking.com Extranet, TripAdvisor Management Center, and Expedia Partner Central to CC this address on all review notifications. This is the most reliable method for OTAs that lack public APIs.

Competitive Benchmarking Analyzer

Type: prompt

An optional OpenAI prompt that analyzes publicly available competitor review data to benchmark the client hotel's sentiment against 3-5 local competitors. Runs monthly and identifies competitive advantages and gaps by aspect category.

Implementation

System Prompt

SYSTEM PROMPT: --- You are a hospitality competitive intelligence analyst. Given sentiment data for a target hotel and its competitors, produce a competitive benchmarking analysis. Format: 📊 **Monthly Competitive Sentiment Benchmark** **Property:** [Hotel Name] | **Period:** [Month Year] **Competitors:** [list of competitor names] **Aspect-by-Aspect Comparison:** | Aspect | [Hotel] | [Comp 1] | [Comp 2] | [Comp 3] | Rank | |--------|---------|----------|----------|----------|------| | Staff | 0.72 | 0.65 | 0.58 | 0.70 | 1st | [...] 🏆 **Competitive Advantages** (aspects where you rank 1st or 2nd): [list with brief explanation] ⚡ **Competitive Gaps** (aspects where you rank last or second-to-last): [list with brief explanation and specific recommendation] 📌 **Strategic Recommendation:** [One paragraph: which aspect to prioritize improving for maximum competitive differentiation] ---

User Prompt Template

USER PROMPT TEMPLATE: --- Generate the monthly competitive benchmark for {hotel_name}. {hotel_name} aspect scores this month: {hotel_scores_json} Competitor data (sourced from public Google reviews): {competitor_scores_json} Note: Competitor scores are approximate, based on analysis of their most recent 50 public Google reviews. ---
Google Places API — Fetch Competitor Reviews
http
GET https://maps.googleapis.com/maps/api/place/details/json?place_id={COMPETITOR_PLACE_ID}&fields=reviews&key={API_KEY}
  • To get competitor review data, use the Google Places API (Text Search + Place Details) to fetch recent reviews for 3-5 competing hotels
  • Google Places API returns up to 5 most recent reviews per call. For more comprehensive data, collect weekly and accumulate over time.
  • Run the competitor sentiment through the same Hospitality Aspect-Level Sentiment Analyzer prompt.
  • Store competitor scores in a separate PostgreSQL table: competitor_sentiment_aggregates
  • Cost: ~$0.05/month for API calls + ~$0.01/month for OpenAI analysis of competitor reviews
  • Schedule: Monthly, 1st of each month
  • This is a premium add-on feature — include in the Professional and Premium MSP packages

Testing & Validation

  • PIPELINE TEST: Insert 5 test reviews with known sentiment (2 strongly positive, 2 strongly negative, 1 mixed) into the reviews table. Run the Sentiment Analysis workflow manually. Verify that the strongly positive reviews receive average aspect scores above 0.5 and strongly negative reviews receive scores below -0.5. Mixed review should have some positive and some negative aspects.
  • ASPECT ACCURACY TEST: Using the 20 seed test reviews from Step 12, manually label which aspects are mentioned in each review. Compare against the AI-extracted aspects. Target: 90%+ precision (AI-identified aspects actually exist in the review) and 80%+ recall (aspects mentioned in the review are caught by AI).
  • ALERT DELIVERY TEST: Insert a test review with 3 aspects scoring below -0.7 (e.g., 'Terrible experience. Room was filthy, staff was rude, and the food made me sick.'). Verify that (1) the Service Recovery Alert workflow triggers, (2) severity is classified as 'critical', (3) the Slack/Teams message is delivered within 5 minutes, and (4) the alert is logged in both the PostgreSQL alerts table and the Google Sheets Alerts_Log tab.
  • SPIKE DETECTION TEST: Insert 4 reviews within a 1-hour window, all with negative 'cleanliness' aspect scores. Run the sentiment analysis and alert workflows. Verify that the spike detection query fires and delivers a trend_change alert for the cleanliness aspect.
  • DASHBOARD DATA FLOW TEST: After running the Sentiment Analysis workflow on test data, open the Google Sheet and verify that (1) the Aspect_Scores tab has new rows with correct data, (2) the review_id, aspect, sentiment, score, and verbatim columns are all populated. Then open Looker Studio and verify the dashboard charts reflect the new data within 5 minutes.
  • PII REDACTION TEST: Insert a test review containing a guest name, email, and phone number: 'I am John Smith (john.smith@email.com, 555-123-4567) and my room 304 was terrible.' Run the pipeline and verify that the verbatim field in sentiment_scores does NOT contain the name, email, phone, or room number. They should be replaced with [REDACTED] or stripped entirely.
  • WEEKLY SUMMARY TEST: After accumulating at least 10 analyzed reviews, manually trigger the Weekly Aggregation workflow. Verify that (1) the Monthly_Summary tab in Google Sheets is updated with ranked aspects, (2) the OpenAI-generated narrative summary is delivered to the Slack/Teams channel, and (3) the summary correctly identifies the top 3 and bottom 3 aspects.
  • DASHBOARD DISPLAY TEST: On the wall-mounted Samsung display with ASUS Chromebox, verify that (1) Chrome launches in kiosk mode on boot, (2) the Looker Studio dashboard loads fully, (3) the dashboard auto-refreshes (data updates visible within 10 minutes), and (4) the display is readable from 6 feet away in typical back-office lighting.
  • DEDUPLICATION TEST: Run the Review Collection workflow twice in succession. Verify that the same reviews are NOT inserted twice by checking the UNIQUE constraint on (source, source_review_id) in the reviews table. The second run should process 0 new reviews.
  • MULTI-LANGUAGE TEST: Insert a review in Spanish, French, or German (common for hotels with international guests). Run sentiment analysis and verify that aspects are correctly identified and the verbatim is preserved in the original language. Example: 'Das Zimmer war sehr sauber aber das WLAN war schrecklich langsam.' (German: Room was very clean but WiFi was terribly slow.) Should produce: cleanliness=positive, wifi=negative.
  • FULL END-TO-END LATENCY TEST: Post a new review on the hotel's Google Business Profile at a known time. Wait for the next scheduled collection run (6 AM). Verify the review appears in the database, is analyzed by 7 AM, and is visible in the Looker Studio dashboard by 7:30 AM. Total latency target: under 24 hours from review posting to dashboard visibility.
  • LOAD TEST: Insert 500 reviews in bulk to simulate a backlog scenario (e.g., initial historical import). Run the Sentiment Analysis workflow and verify it processes all 500 reviews in batches of 10 without errors, timeouts, or API rate limiting. Expected completion time: under 30 minutes for 50 API calls.

Client Handoff

TRAINING SESSION (90 minutes, on-site or video call with hotel GM, operations manager, and front desk manager):

1
DASHBOARD WALKTHROUGH (30 min): Walk through each Looker Studio dashboard page — Executive Overview, Aspect Deep Dive, Alert History, Review Source Breakdown. Show how to use filters (date range, aspect, source). Explain what each metric means in operational terms. Demonstrate the wall-mounted display and explain the auto-refresh behavior.
2
INTERPRETING SENTIMENT SCORES (15 min): Explain the -1.0 to 1.0 scoring scale with real examples from their own reviews. Discuss the difference between aspect sentiment and overall rating. Show how a 4-star review can still contain negative aspects. Teach the team to focus on the 'Bottom 3 Aspects' as priority action items.
3
ALERT RESPONSE PROTOCOL (15 min): Review the alert types (service_recovery, negative_spike, trend_change) and severity levels. Establish response time SLAs: Critical alerts = respond within 2 hours, High = within 24 hours. Define who is responsible for each alert type (e.g., cleanliness → Housekeeping Manager, staff → Front Office Manager, F&B → Restaurant Manager).
4
WEEKLY BRIEFING REVIEW (15 min): Walk through a sample weekly sentiment briefing. Show how to use the 'Recommended Action' section in staff meetings. Suggest incorporating the weekly briefing into Monday morning management huddles.
5
DOCUMENTATION HANDOFF: Leave behind (digitally, in a shared Google Drive folder): Dashboard access URLs and login credentials; Alert response protocol one-pager (laminated for front desk); Aspect category definitions and examples; FAQ document covering common questions; MSP support contact information and escalation process; Data privacy notice (what data is collected, where it's stored, retention period).
6
SUCCESS CRITERIA REVIEW: Review together after 30 days.

Maintenance

ONGOING MSP RESPONSIBILITIES:

Daily (Automated)

  • Monitor n8n workflow execution logs for failed runs (set up n8n error workflow that emails MSP on any failure)
  • Verify daily review collection and sentiment analysis workflows completed successfully
  • Check n8n server uptime (integrate with MSP's RMM tool — e.g., Datto RMM, ConnectWise Automate agent on the Dell OptiPlex)

Weekly

  • Review the weekly sentiment briefing output for quality (spot-check 2-3 sentiment scores against actual reviews)
  • Check Google Sheets row count — monitor for approaching 50,000 row limit (archive old data annually)
  • Verify Looker Studio dashboard loads correctly and data is current
  • Review OpenAI API usage dashboard for unexpected cost spikes

Monthly

  • Generate MSP monthly report for the client: system uptime, reviews processed, API costs, notable sentiment trends
  • Review and tune alert thresholds based on false positive/negative rates (adjust the -0.5 threshold in the Config sheet if needed)
  • Verify all review source connections are active (Google API token refresh, email parsing still working for Booking.com/TripAdvisor)
  • Apply n8n Docker image updates — test in staging first
  • Check PostgreSQL database size and vacuum
Pull and restart n8n Docker image update
bash
docker-compose pull && docker-compose up -d
Check PostgreSQL database size and run vacuum analyze
bash
docker exec -it $(docker ps -q -f name=postgres) psql -U sentiment_user -d sentiment_db -c 'VACUUM ANALYZE;'

Quarterly

  • Sentiment accuracy audit: randomly sample 25 reviews, manually label sentiment, compare to AI output. If accuracy drops below 85%, adjust the OpenAI prompt.
  • Review and update the hospitality aspect categories if the hotel has added new amenities or services
  • Conduct a 30-minute client check-in call to review sentiment trends and gather feedback on system utility
  • Review GDPR/CCPA compliance: verify data retention cleanup ran, check for any deletion requests, update DPA if vendor terms changed
  • Review OpenAI model availability — if a newer/cheaper model is released (e.g., GPT-5.4 mini successor), evaluate and migrate
  • Update Looker Studio dashboard with any new visualizations requested by the client

Annually

  • Full system review and optimization session with the client
  • Data retention cleanup: archive reviews older than 2 years, verify automated cleanup function ran
  • Renew or renegotiate any paid software licenses
  • Security audit: rotate all API keys, update n8n credentials, review server patches
  • GDPR/CCPA compliance review: update privacy impact assessment, verify DPA with all sub-processors (OpenAI, AWS, Google)
  • Evaluate if the hotel has outgrown the current solution and should upgrade to a Tier 1 SaaS platform (ReviewPro, Revinate)

SLA Considerations

  • System availability target: 99% uptime (allows ~7 hours downtime/month for maintenance)
  • Alert delivery SLA: within 15 minutes of sentiment analysis completion
  • Dashboard data freshness: within 24 hours of review publication
  • MSP response time: 4 hours for critical issues (system down, data breach), 24 hours for non-critical issues
  • Escalation path: L1 (automated monitoring) → L2 (MSP technician: restart services, check API keys) → L3 (MSP developer: workflow debugging, prompt tuning, database issues)

Alternatives

Turnkey SaaS Platform: Shiji ReviewPro

Deploy Shiji ReviewPro as a fully managed SaaS platform instead of the custom n8n pipeline. ReviewPro aggregates reviews from 175+ sources, provides semantic analysis across 500 hospitality-specific concepts, and includes a pre-built dashboard with the Global Review Index™ score. Setup involves vendor onboarding, connecting OTA accounts, and configuring PMS integration through ReviewPro's partner ecosystem. No custom development required.

Tradeoffs

  • COST: $150–$500/month per property (vs. $5–$20/month for the custom build), significantly reducing MSP margin to 30-40% when resold at $299–$699/month.
  • COMPLEXITY: Much lower — 2-week deployment vs. 4-6 weeks, no coding required, L2 technician can handle.
  • CAPABILITY: Superior out-of-the-box — 500 semantic concepts vs. our 12, 175+ sources vs. manual integration, multilingual support built in, competitive benchmarking included.
  • CONTROL: Less customizable — dashboard layout, alert rules, and AI model behavior are controlled by ReviewPro.
  • BEST WHEN: The client is a mid-to-large hotel (100+ rooms) with budget for premium tools, the MSP wants minimal development overhead, or the property needs immediate deployment with enterprise-grade analytics.

Turnkey SaaS Platform: TrustYou CXP

Deploy TrustYou's Customer Experience Platform (CXP) for smaller properties. TrustYou launched CXP at ITB Berlin 2024, unifying surveys, reviews, and messaging into a single platform with built-in AI sentiment analysis. More affordable than ReviewPro for independent and boutique hotels.

Tradeoffs

  • COST: Estimated $100–$400/month per property, lower than ReviewPro. MSP margin is moderate at 40-50% when resold.
  • COMPLEXITY: Very low — guided SaaS onboarding, typically 1-2 weeks.
  • CAPABILITY: Strong for independents — unified survey + review analysis, AI-powered response suggestions, good multilingual support. Less depth than ReviewPro's 500 semantic concepts.
  • CONTROL: Limited customization but sufficient for most independent hotels.
  • BEST WHEN: Client is an independent or boutique hotel (25-100 rooms), budget is constrained, and the property values simplicity over depth. Also a good choice when the hotel wants integrated survey capabilities alongside review analysis.

White-Label Resale via Vendasta

Use Vendasta's white-label reputation management platform to offer sentiment analysis under the MSP's own brand. Vendasta monitors reviews across 90+ directories and provides sentiment reporting. The MSP sets their own pricing and branding, making it invisible to the client that a third party is involved.

Tradeoffs

  • COST: Vendasta plans from $99–$999/month for the MSP platform + wholesale product costs per client. Resale at $299–$599/month per property yields 40-60% margin.
  • COMPLEXITY: Low — Vendasta provides the technology, MSP provides the branded wrapper.
  • CAPABILITY: Moderate — general reputation management with sentiment, but not hospitality-specific. Lacks the deep aspect-level analysis of ReviewPro or our custom solution. Good enough for basic sentiment trends.
  • CONTROL: High branding control (white-label), but limited technical customization.
  • BEST WHEN: The MSP wants to scale across multiple hospitality clients quickly with a consistent branded offering, or the MSP serves clients across multiple verticals and wants one platform for all reputation management.

Hybrid: Birdeye + Custom Enrichment

Use Birdeye ($299–$449/month per location) for review aggregation and basic sentiment analysis, supplemented with a lightweight n8n workflow that sends Birdeye's data through OpenAI for deeper hospitality-specific aspect-level analysis. This gets the best of both worlds: reliable review collection from Birdeye plus custom AI analysis depth.

Tradeoffs

  • COST: $299–$449/month for Birdeye + $5–$15/month for OpenAI API = $304–$464/month. Resale at $599–$899/month for 40-50% margin.
  • COMPLEXITY: Medium — Birdeye handles the hard part (review aggregation from all sources), MSP builds only the custom analysis layer. 2-3 week deployment.
  • CAPABILITY: Strong — Birdeye's proven review collection infrastructure plus custom hospitality-specific deep analysis. Gets aspect-level insights without building review collection from scratch.
  • CONTROL: Good balance — Birdeye handles data collection reliably, MSP controls the AI analysis and dashboard.
  • BEST WHEN: The MSP wants reliable multi-source review collection without building email parsers and API integrations, but still wants custom AI analysis depth and higher margins than pure Birdeye resale.
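The custom enrichment layer can be as small as a function that wraps a Birdeye-collected review in an aspect-analysis prompt. The aspect list and model string below are illustrative assumptions, and no API call is made:

```python
# Build a chat-completion request asking for aspect-level sentiment on a
# review already aggregated by Birdeye. Names here are placeholders.
ASPECTS = ["room cleanliness", "front desk", "breakfast", "wifi",
           "noise", "check-in", "housekeeping", "value"]

def build_enrichment_request(review_text, model="gpt-5.4-mini"):
    prompt = (
        "For the hotel review below, rate sentiment (positive/negative/neutral) "
        f"for each aspect it mentions from: {', '.join(ASPECTS)}. "
        "Reply as JSON mapping aspect -> {sentiment, verbatim}.\n\n"
        f"Review: {review_text}"
    )
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_enrichment_request("Great breakfast, but the wifi kept dropping.")
print(req["messages"][0]["content"][:80])
```

In the n8n workflow this dict would be posted to OpenAI and the JSON reply written to PostgreSQL alongside the Birdeye record.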

Full Open-Source Self-Hosted (Maximum Control, Zero SaaS Cost)

Replace OpenAI GPT-5.4 mini with a self-hosted open-source model like siebert/sentiment-roberta-large-english from Hugging Face Transformers, running on a GPU-enabled server or cloud VM. Combined with the n8n pipeline and PostgreSQL stack, this eliminates all recurring API costs for sentiment analysis.

Tradeoffs

  • COST: $0/month for sentiment API (vs. $5–$20/month for OpenAI). However, requires a GPU VM (~$50–$150/month for an AWS g4dn.xlarge or equivalent) or a local GPU workstation ($1,500–$3,000 one-time). Net savings only at scale (10+ properties).
  • COMPLEXITY: Very high — requires ML engineering experience, model fine-tuning for hospitality domain, and GPU infrastructure management. 8–16 weeks implementation.
  • CAPABILITY: Lower out-of-the-box — RoBERTa provides sentence-level sentiment (positive/negative/neutral) but lacks GPT-5.4 mini's ability to extract aspects and verbatims in a single call. Would need a multi-model pipeline (NER for aspect extraction + sentiment classification).
  • CONTROL: Maximum — no third-party API dependencies, all data stays on-premises.
  • BEST WHEN: The client has strict data sovereignty requirements (cannot send data to OpenAI/cloud APIs), the MSP has ML engineering capabilities, or the MSP plans to deploy across 20+ properties and wants to eliminate per-property API costs.
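The multi-model pipeline noted under CAPABILITY can be sketched structurally. The keyword matcher and word-polarity stub below stand in for a real NER aspect extractor and the RoBERTa classifier; the aspect keywords are invented for illustration:

```python
# Sketch: aspect extraction followed by per-aspect sentiment. In a real
# deployment the stubs would be Hugging Face pipelines (e.g. a NER model
# plus siebert/sentiment-roberta-large-english), with the review split
# into sentences so each aspect is scored on its own sentence.
ASPECT_KEYWORDS = {
    "breakfast": ["breakfast", "buffet"],
    "room": ["room", "bed", "shower"],
    "staff": ["staff", "front desk", "reception"],
}

def extract_aspects(text):
    """Placeholder aspect extractor (keyword match instead of NER)."""
    lowered = text.lower()
    return [a for a, kws in ASPECT_KEYWORDS.items()
            if any(k in lowered for k in kws)]

def classify_sentiment(text):
    """Stub for the RoBERTa classifier -- naive polarity word count."""
    lowered = text.lower()
    pos = sum(w in lowered for w in ["great", "clean", "friendly"])
    neg = sum(w in lowered for w in ["dirty", "rude", "broken"])
    return "POSITIVE" if pos >= neg else "NEGATIVE"

def analyze(review):
    return {aspect: classify_sentiment(review)
            for aspect in extract_aspects(review)}

print(analyze("Great breakfast and friendly staff."))
```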

Want early access to the full toolkit?