Introduction
Outreach that you can’t measure is outreach you can’t improve. Teams often get a sending workflow running in n8n and stop there—emails go out, replies trickle in, and the source of truth becomes a messy spreadsheet plus a few Slack threads. The result? Duplicate sends, unreliable metrics, and a long detective story every time someone asks, “Did we send that?” or “Why did bounce rates spike yesterday?”
This guide shows how to make outreach observable end‑to‑end: write back the right fields in n8n, keep idempotency with a simple dedupe strategy, export events to a warehouse (BigQuery or Airtable) for durable analytics, build a Looker Studio dashboard that tracks reply rate and time‑to‑first‑reply, and wire up alerts so you hear about anomalies before your customers do.
1) Why Observability Beats Guesswork (execution IDs, run summaries)
Outreach is a pipeline, not a single action. Messages are generated, transformed, sent, and (hopefully) replied to. Without observability, you’re left with gut feel. With observability, you get:
- Traceability: Which run sent which emails? Which node failed? What inputs caused it?
- Idempotency: Assurance that retries won't double-send.
- Faster incident response: Surface spikes in bounces or unsubscribes immediately.
- Operational insight: Measure real outcomes—positive-reply rate and time-to-first-reply—instead of vanity opens.
Execution ID as a first‑class citizen: Every n8n execution has an ID. Include it in your logs and write‑backs. It becomes the thread that ties items to runs, runs to alerts, and alerts to dashboards.
Run summaries: After each run, emit a compact summary: counts by status (sent, failed), labels (interested, OOO, unsubscribe), and timing. Post it to Slack and persist it to a runs table. You'll thank yourself during audits and postmortems.
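For example, a minimal Code-node sketch that stamps every item with the current execution ID before any write-back (n8n exposes $execution.id inside Code nodes; everything else here is illustrative):

```javascript
// Stamp each outgoing item with the current n8n execution ID so every
// Sheets row and warehouse event can be joined back to this run.
return items.map(item => ({
  json: { ...item.json, execution_id: $execution.id }
}));
```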
2) Event Model & Architecture Overview
Think in events. Outreach produces a small, stable set of event types that you can stream to Sheets and a warehouse: send_success, send_error, skipped_already_sent, reply_received, and suppressed (unsubscribes and bounces).
High-level architecture
n8n generates and sends messages via Gmail, writes state back to Sheets, and appends a normalized event for every action. A scheduled export moves events into BigQuery (or Airtable); Looker Studio reads the warehouse for dashboards, and Slack receives per-run digests and anomaly alerts.
3) Writing Back to Sheets (sent_at, messageId, status, error_reason)
Sheets is practical as an operational surface: it’s visible, editable, and familiar. Treat it like a source of truth for the last known state and a staging area for export.
Contacts sheet (operational)
```
{
  rowId, email, firstName, company, role, campaign,
  threadId, messageId, token, sent_at, last_sent_at, sends,
  status, owner, locale, time_zone
}
```
- status: new | ready | sent | replied | suppressed | invalid | failed
- sends: count of initial + follow-ups
Writes during send
- After Gmail success: set messageId, threadId, sent_at = now(), status = sent; increment sends; set last_sent_at = now(); and append a send_success event to the log.
- On error: set status = failed, record error_reason, and append a send_error event. Optionally back off and retry.
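A sketch of the success-path write-back, assuming a Code node placed right after the Gmail node and that the contact's rowId and sends counter were passed through or merged alongside the Gmail output (the Gmail send response includes the message id and threadId):

```javascript
// Code node after Gmail: shape the fields for the Sheets "Update" node
const now = new Date().toISOString();
return [{
  json: {
    rowId: $json.rowId,                    // contact key (assumed passed through)
    messageId: $json.id,                   // Gmail message id from the send response
    threadId: $json.threadId,
    sent_at: now,
    last_sent_at: now,
    status: 'sent',
    sends: (Number($json.sends) || 0) + 1  // initial + follow-ups counter
  }
}];
```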
Replies log (append‑only)
```
{
  rowId, email, reply_at, freshReply, label, confidence, summary,
  messageId, threadId, execution_id
}
```
Suppression list
```
{
  email, reason, source, added_at
}
```
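Before any send, the suppression list should win. A minimal Code-node sketch, assuming a prior node named "Suppression List" that read the sheet (the node name is illustrative):

```javascript
// Drop contacts whose email appears on the suppression list
const suppressed = new Set(
  $('Suppression List').all().map(i => String(i.json.email).toLowerCase())
);
return items.filter(i => !suppressed.has(String(i.json.email).toLowerCase()));
```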
n8n write‑back node patterns
- Use Google Sheets → Update for Contacts by rowId or email.
- Use Google Sheets → Append for Replies and Runs logs (append-only).
- Always include execution_id so you can join rows back to runs.
Error hygiene: Never overwrite messageId once set. Prefer new rows in logs for changes; update only the operational status fields.
4) Dedupe Strategy (status=sent, messageId presence)
Double‑sends erode trust. Dedupe with two guardrails:
- Pre-send check: Only send when status in {new, ready} and messageId IS NULL.
- Atomic write-back: Immediately after a successful send, set messageId, threadId, status=sent, last_sent_at, and increment sends before the next item is processed.
In n8n
Add an IF node before Gmail:
```
// Condition (n8n expression)
{{ ["new","ready"].includes($json.status) && !$json.messageId }}
```
False branch: skip send, log a skipped_already_sent event.
Retry safety
When retrying a failed run, filter out rows where messageId exists. If Gmail returns a transient error, use exponential backoff and a retry_count field.
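The same guard works as a Code-node filter when re-running a batch; a sketch:

```javascript
// Keep only rows that are still safe to send:
// never sent before (no messageId) and still in a sendable status.
return items.filter(i =>
  !i.json.messageId && ['new', 'ready'].includes(i.json.status)
);
```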
Thread continuity
For follow-ups, require threadId to be present; send in the same thread to avoid splitting conversations.
5) Exporting to BigQuery (or Airtable) for Long‑Term Analytics
Sheets is fine for day‑to‑day operations. For trend analysis, move events to a warehouse.
Why BigQuery (or Airtable)
- Durability & scale: Millions of events; no row limits.
- SQL analytics: Easy to compute rolling reply rates, cohorting by campaign, and anomaly baselines.
- BI integration: Looker Studio connects natively.
Event table schema (outreach_events)
```sql
event_id STRING,
event_type STRING,   -- send_success, send_error, reply_received, etc.
row_id STRING,       -- contact rowId
email STRING,
campaign STRING,
message_id STRING,
thread_id STRING,
label STRING,        -- for replies
confidence FLOAT64,
summary STRING,
error_reason STRING,
created_at TIMESTAMP,
execution_id STRING
```
n8n export pattern
- After each write-back, append a normalized event to a staging Sheet or directly to BigQuery via an HTTP or BigQuery node.
- If Sheets is the staging layer, run a scheduled "ingest to BigQuery" workflow that reads new rows and inserts them.
Sample BigQuery INSERT (via HTTP node / Cloud Function)
```sql
INSERT INTO dataset.outreach_events
  (event_id, event_type, row_id, email, campaign, message_id, thread_id,
   label, confidence, summary, error_reason, created_at, execution_id)
VALUES
  (@event_id, @event_type, @row_id, @email, @campaign, @message_id, @thread_id,
   @label, @confidence, @summary, @error_reason, CURRENT_TIMESTAMP(), @execution_id);
```
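If you go through the HTTP node instead, BigQuery's streaming endpoint (tabledata.insertAll, POST https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/datasets/DATASET/tables/outreach_events/insertAll) accepts a body like the sketch below; insertId gives best-effort dedupe on the BigQuery side. Project and dataset IDs, and the field expressions, are yours to fill in:

```json
{
  "rows": [
    {
      "insertId": "{{ $json.event_id }}",
      "json": {
        "event_id": "{{ $json.event_id }}",
        "event_type": "send_success",
        "email": "{{ $json.email }}",
        "campaign": "{{ $json.campaign }}",
        "message_id": "{{ $json.messageId }}",
        "created_at": "{{ $json.created_at }}",
        "execution_id": "{{ $json.execution_id }}"
      }
    }
  ]
}
```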
Backfill strategy
If you start mid‑campaign, export historical sends and replies by iterating through your Sheets and reconstructing events.
Airtable alternative
Use a base with an Events table and the same columns. It's friendlier for non-SQL teams, but you'll hit scale limits earlier.
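Appending the same event to Airtable is one HTTP call: POST https://api.airtable.com/v0/YOUR_BASE_ID/Events (base ID and table name are yours) with a body like this sketch, where the field values are illustrative:

```json
{
  "records": [
    {
      "fields": {
        "event_type": "reply_received",
        "email": "jane@example.com",
        "campaign": "spring_launch",
        "label": "interested",
        "created_at": "2025-01-15T10:00:00Z",
        "execution_id": "12345"
      }
    }
  ]
}
```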
6) Looker Studio Dashboard: Reply Rate, Positive‑Reply Rate, Time‑to‑First‑Reply
Dashboards answer three questions: What shipped? What failed? What worked? Build a top‑level view for leadership and a detail view for operators.
Suggested pages & visuals
- Overview
  - KPI cards: Sends, Replies, Positive Replies, Reply Rate, Positive-Reply Rate, Bounces, Unsubs
  - Time series: Daily sends vs replies (stacked), bounce/unsub overlay
  - Histogram: Time-to-first-reply (bucketed in hours)
- By Campaign / Owner
  - Table: Campaign, Sends, Replies, Positive Replies, Reply Rate, Time-to-First-Reply median, Unsubs, Bounces
  - Filter controls: campaign, owner, date range
- Quality
  - Pie/Bar: Label distribution (interested, scheduling, not_now, ooo, unsubscribe, bounce)
  - Table: Top subjects vs reply quality (if you log subject lines)
Core metrics (SQL on outreach_events)
Reply rate
```sql
SELECT SAFE_DIVIDE(
  COUNTIF(event_type = 'reply_received'),
  COUNTIF(event_type = 'send_success')) AS reply_rate
FROM dataset.outreach_events
WHERE created_at BETWEEN @start AND @end;
```
Positive-reply rate
```sql
SELECT SAFE_DIVIDE(
  COUNTIF(event_type = 'reply_received' AND label IN ('interested','scheduling')),
  COUNTIF(event_type = 'send_success')) AS positive_reply_rate
FROM dataset.outreach_events
WHERE created_at BETWEEN @start AND @end;
```
Time-to-first-reply (minutes)
```sql
WITH first_send AS (
  SELECT email, campaign, MIN(created_at) AS first_send_at
  FROM dataset.outreach_events
  WHERE event_type = 'send_success'
  GROUP BY email, campaign
), first_reply AS (
  SELECT email, campaign, MIN(created_at) AS first_reply_at
  FROM dataset.outreach_events
  WHERE event_type = 'reply_received'
  GROUP BY email, campaign
)
SELECT TIMESTAMP_DIFF(r.first_reply_at, s.first_send_at, MINUTE) AS ttr_minutes
FROM first_send s
JOIN first_reply r USING (email, campaign);
```
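For the per-campaign median that the dashboard table uses, the same CTEs can feed APPROX_QUANTILES; a sketch reusing first_send/first_reply from above:

```sql
SELECT s.campaign,
  APPROX_QUANTILES(
    TIMESTAMP_DIFF(r.first_reply_at, s.first_send_at, MINUTE), 100
  )[OFFSET(50)] AS ttr_median_minutes
FROM first_send s
JOIN first_reply r USING (email, campaign)
GROUP BY s.campaign;
```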
Run reliability (per execution)
```sql
SELECT execution_id,
  COUNTIF(event_type='send_success') AS sent,
  COUNTIF(event_type='send_error') AS send_errors,
  COUNTIF(event_type='reply_received') AS replies,
  COUNTIF(event_type='suppressed') AS suppressed
FROM dataset.outreach_events
WHERE created_at BETWEEN @start AND @end
GROUP BY execution_id
ORDER BY MAX(created_at) DESC;
```
Looker Studio setup
- Data source: connect to BigQuery outreach_events (or Sheets if you must).
- Create calculated fields for rates: Replies / Sends, PositiveReplies / Sends.
- Use date controls and filters for campaign/owner.
- Build two pages (Overview, Campaign Detail) and save a view-only link for stakeholders.
Design tips
- Keep units consistent (percentages, minutes).
- Use clear labels (“Positive‑reply rate”, not “PRR”).
- Add small tooltips describing each metric.
- Lock charts and constrain filters so casual viewers can’t break the layout.
7) Alerting Patterns (Slack Digest per Run, Anomaly Detection Basics)
Dashboards tell you what happened; alerts tell you when to look.
Slack digest (per execution or daily)
- execution_id
- sent, send_errors
- replies (and of those, interested + scheduling)
- bounces, unsubscribes
- any error_reason samples
Example Slack message (JSON)
“text”: “Outreach run {{execution_id}} summary”,
“attachments”: [
{ “title”: “Counts”, “text”: “sent: {{sent}}\nerrors: {{send_errors}}\nreplies: {{replies}} (pos:
{{positive}})\nbounces: {{bounces}}\nunsubs: {{unsubs}}” },
{ “title”: “Notes”, “text”: “{{notes}}” }
]
}
Anomaly detection (lightweight)
- Maintain 7‑day rolling baselines for bounce rate and unsubscribe rate by sender domain.
- Alert if today’s value is > 2× the baseline (or above a hard threshold like 5%).
- Alert if send_errors per run exceed a small absolute threshold (e.g., 3) or more than 5% of attempted sends.
n8n implementation
After each run, a Code node pulls baseline metrics from the warehouse, compares current values, and either posts a normal digest or a warning with red flags and suggested actions (pause a campaign, review DNS, adjust batching).
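A sketch of that comparison, assuming the two warehouse queries arrive as the first two input items (field names are illustrative):

```javascript
// items[0] = 7-day baseline rates, items[1] = today's rates (from SQL upstream)
const baseline = items[0].json;  // e.g. { bounce_rate: 0.01, unsub_rate: 0.002 }
const today = items[1].json;     // e.g. { bounce_rate: 0.03, send_errors: 4, attempted: 120 }
const flags = [];
for (const metric of ['bounce_rate', 'unsub_rate']) {
  // alert when today's value doubles the baseline or crosses the hard 5% ceiling
  if (today[metric] > 2 * baseline[metric] || today[metric] > 0.05) {
    flags.push(`${metric}: ${today[metric]} (baseline ${baseline[metric]})`);
  }
}
if (today.send_errors > 3 || today.send_errors > 0.05 * today.attempted) {
  flags.push(`send_errors: ${today.send_errors} of ${today.attempted} attempts`);
}
return [{ json: { alert: flags.length > 0, flags } }];
```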
8) Governance: Roles, Access, and Data Retention
Analytics creates responsibility. Set guardrails early: decide who can edit the operational sheets versus who can only view dashboards, keep the Replies and Events logs append-only, and agree on how long you retain contact and reply data.
9) Workflow Blueprints
A) Send flow (write-backs + events): dedupe IF guard → Gmail send → Sheets update (messageId, sent_at, status) → append send_success / send_error event → Slack digest.
B) Replies flow (label + routing): Gmail trigger → label the reply (interested, scheduling, OOO, unsubscribe) → append to the Replies log → update contact status and notify the owner.
10) Code Snippets
Code: append event row (simplified)
```javascript
// Helper for an n8n Code node. The wrapper name and arguments are
// illustrative; only the function body survives from the original.
function eventRow(type, contact, extra = {}) {
  return {
    json: {
      event_id: Date.now().toString(), // replace with UUID if available
      event_type: type,
      row_id: contact.rowId,
      email: contact.email,
      campaign: contact.campaign,
      message_id: contact.messageId || '',
      thread_id: contact.threadId || '',
      label: extra.label || '',
      confidence: extra.confidence ?? null,
      summary: extra.summary || '',
      error_reason: extra.error_reason || '',
      created_at: new Date().toISOString(),
      execution_id: $execution.id
    }
  };
}
```
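Called from the same node, usage looks like this (continuing the sketch above):

```javascript
// e.g. right after a successful send, with the contact on the current item:
return [eventRow('send_success', $json)];
```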
Code: daily digest (pseudo)
```javascript
// Counts over today's event rows (items = events read upstream)
const sent = items.filter(i => i.json.event_type === 'send_success').length;
const errors = items.filter(i => i.json.event_type === 'send_error').length;
const replies = items.filter(i => i.json.event_type === 'reply_received').length;
const positive = items.filter(i => ['interested','scheduling'].includes(i.json.label)).length;
return [{ json: { sent, errors, replies, positive } }];
```
11) Troubleshooting & Edge Cases
“Why are reply counts off by one?”
Check timezone handling. Use UTC timestamps in events and convert in the dashboard layer.
“My reply parsing sometimes eats real content.”
Relax the quote‑splitting regex and sample outputs regularly; maintain a small corpus for regression checks.
“Sends are duplicated after a retry.”
Ensure the IF guard before Gmail checks both status and messageId. Perform the write-back atomically.
“Dashboard looks fine but Slack shows spikes.”
Your digest might be per‑execution while the dashboard is daily. Align the aggregation windows and filters.
“BigQuery cost?”
Partition by created_at and cluster on campaign, event_type. Use scheduled queries, not per-view full scans.
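A DDL sketch of that layout (columns trimmed; names as in this guide):

```sql
CREATE TABLE dataset.outreach_events (
  event_id STRING,
  event_type STRING,
  email STRING,
  campaign STRING,
  label STRING,
  created_at TIMESTAMP,
  execution_id STRING
)
PARTITION BY DATE(created_at)   -- keeps daily scans cheap
CLUSTER BY campaign, event_type; -- prunes the common filter columns
```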
Need help designing the schema, exporting to a warehouse, or building the dashboard? LogicalStreet Technology Pvt. Ltd. can partner with your team to instrument the right metrics and safeguards for your environment.
Let’s Get Started
12) Appendix: Schemas, SQL, and a Dashboard Checklist
12.1 Schemas (copy‑paste)
Contacts (Sheets)
```
rowId, email, firstName, company, role, campaign,
threadId, messageId, token, sent_at, last_sent_at, sends,
status, owner, locale, time_zone
```
Replies (Sheets, append-only)
```
rowId, email, reply_at, freshReply, label, confidence, summary,
messageId, threadId, execution_id
```
Events (BigQuery/Airtable)
```
event_id, event_type, row_id, email, campaign, message_id, thread_id, label,
confidence, summary, error_reason, created_at, execution_id
```
Runs (BigQuery/Airtable)
```
execution_id, sent, send_errors, replies, suppressed, created_at
```
12.2 SQL Snippets
Daily aggregates by campaign
```sql
SELECT DATE(created_at) AS d, campaign,
  COUNTIF(event_type='send_success') AS sends,
  COUNTIF(event_type='reply_received') AS replies,
  COUNTIF(event_type='reply_received' AND label IN ('interested','scheduling')) AS positive,
  COUNTIF(event_type='send_error') AS send_errors,
  COUNTIF(event_type='suppressed') AS unsubs
FROM dataset.outreach_events
WHERE created_at BETWEEN @start AND @end
GROUP BY d, campaign
ORDER BY d DESC;
```
Bounce/unsub rates (rolling 7-day)
```sql
WITH win AS (
  SELECT campaign, event_type, created_at
  FROM dataset.outreach_events
  WHERE created_at BETWEEN TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 14 DAY)
                       AND CURRENT_TIMESTAMP()
)
SELECT campaign,
  SAFE_DIVIDE(
    COUNTIF(event_type='send_error'),
    NULLIF(COUNTIF(event_type='send_success'), 0)
  ) AS error_rate
FROM win
GROUP BY campaign;
```