Observational analysis only. Not a fact-check. Outputs may vary between systems. Sources and context remain the reference surface.

Speech You Can Measure.

Observational analysis only. Not a fact-check.

Baseline performs observational analysis of public speech using three independent AI systems. Inputs are identical. Outputs are displayed side-by-side with sources and context.

Same input  •  Three independent systems  •  Side-by-side outputs + source context
View pricing  •  View methodology
Framing Radar measurement surface

How it works

Methodology

Baseline displays independent outputs side-by-side, then computes a separate consensus layer. Sources and context are always shown. Observational analysis only. Not a fact-check.

Three independent systems

Identical input is processed independently by three AI systems. Outputs are displayed as returned, without editorial rewriting.

The Receipt™

A compact readout of recurring language patterns over time, with match counts shown by tier.

Framing Radar™ (5 axes)

A measurement surface for rhetorical structure across five framing dimensions.

The Lens Lab™

Side-by-side lens outputs, plus a separate consensus layer for shared patterns and variance.

Pipeline diagram
Measurement Stack

What We Measure

Every statement is processed through multiple measurement layers. Each metric is computed independently — no metric influences another.

Signal Metrics

Four independent measurements computed per statement by each AI model, displayed on a normalized 0–1 scale with no thresholds or labels. A minimal schema sketch follows the four metrics below.

Repetition  0.73

How closely this statement’s language mirrors the figure’s prior statements on the same topic.

Novelty  0.41

How much new language or framing this statement introduces compared to prior patterns.

Affect  0.58

The rate of emotionally charged language — intensity markers, urgency signals, sentiment-loaded phrasing.

Entropy  0.29

Topical spread of the statement. Higher values indicate multiple subjects; lower values indicate tight focus.
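A minimal sketch of how a per-statement signal record might be represented, assuming hypothetical type and field names; the copy above does not specify Baseline's actual schema.

  // Hypothetical shape of one model's signal metrics for a single statement.
  // All four values are normalized to the 0–1 range described above.
  interface SignalMetrics {
    repetition: number; // similarity to the figure's prior statements on the topic
    novelty: number;    // share of new language or framing vs. prior patterns
    affect: number;     // rate of emotionally charged language
    entropy: number;    // topical spread: higher = more subjects, lower = tight focus
  }

  // Example values matching the sample readout above.
  const sample: SignalMetrics = {
    repetition: 0.73,
    novelty: 0.41,
    affect: 0.58,
    entropy: 0.29,
  };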

2/3
Lens Convergence
Consensus Badge

When models converge on similar measurements, the consensus count is displayed. When they diverge, variance is flagged.

Each model processes the statement separately — none can see the others' results. Convergence is computed after all three models have returned their independent outputs.
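A rough sketch of how a consensus count could be derived once all three outputs are in, assuming a hypothetical agreement tolerance; the production convergence rule is not described here.

  // Hypothetical convergence check: count how many of the three models land
  // within a fixed tolerance of the median value for a given metric.
  function consensusCount(values: [number, number, number], tolerance = 0.1): number {
    const median = [...values].sort((a, b) => a - b)[1];
    return values.filter((v) => Math.abs(v - median) <= tolerance).length;
  }

  // Two of three models agree closely, so the badge would read "2/3".
  consensusCount([0.82, 0.76, 0.41]); // 2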

Variance Detection
When models disagree
Variance Detected

When models produce significantly different framing classifications for the same statement, a variance banner appears. This is not an error — it reflects genuine measurement divergence.

GP  Economic  0.82
CL  Security  0.76
GR  Economic  0.79
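A simplified illustration of the variance check, assuming each model returns a categorical framing label as in the sample rows above; the real classifier and banner logic may differ.

  // Hypothetical variance check for framing classifications: if the three
  // models do not all return the same label, the statement gets a banner.
  type FramingOutput = { model: string; label: string; score: number };

  function varianceDetected(outputs: FramingOutput[]): boolean {
    return new Set(outputs.map((o) => o.label)).size > 1;
  }

  // Sample rows above: two "Economic", one "Security", so variance is flagged.
  varianceDetected([
    { model: "GP", label: "Economic", score: 0.82 },
    { model: "CL", label: "Security", score: 0.76 },
    { model: "GR", label: "Economic", score: 0.79 },
  ]); // true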

Congressional Vote Record

For congressional figures, voting records are tracked separately from speech metrics. Votes are displayed as “recorded” or “not recorded” — never color-coded by approval.

Total: 47  •  YEA: 32  •  NAY: 12  •  NV: 3
H.R. 1234  YEA  •  Passed 218-212
S. 567  NAY  •  Failed 47-53
H.R. 890  PRESENT  •  Passed 410-3

Teal = vote recorded  •  Gray = not recorded  •  Never red/green
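A small sketch of how a vote entry could be modeled so the display color depends only on whether a vote was recorded, never on which way it went; the field names here are illustrative, not Baseline's actual data model.

  // Hypothetical vote record entry for a congressional figure.
  interface VoteRecord {
    bill: string;                               // e.g. "H.R. 1234"
    position: "YEA" | "NAY" | "PRESENT" | "NV"; // as recorded by the chamber
    result: string;                             // e.g. "Passed 218-212"
  }

  // Teal when a vote was recorded, gray when it was not. Never red/green.
  function displayColor(vote: VoteRecord): "teal" | "gray" {
    return vote.position === "NV" ? "gray" : "teal";
  }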

Three-Model Pipeline

Every statement flows through three independent AI models. Each model receives identical input and returns its own measurements. No model can see another's output.

GP
GP analysis

Independent measurement from the first AI system.

CL
CL analysis

Independent measurement from the second AI system.

GR
GR analysis

Independent measurement from the third AI system.
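A condensed sketch of the fan-out described above, assuming a generic per-model analyze function; the point is that the three calls share the same input but never see each other's output.

  // Hypothetical pipeline: the same statement goes to three models in parallel.
  // No model's call receives another model's result as input.
  type Lens = (statement: string) => Promise<Record<string, number>>;

  async function runPipeline(statement: string, lenses: Lens[]) {
    const outputs = await Promise.all(lenses.map((lens) => lens(statement)));
    // Consensus and variance are computed only after all outputs have returned.
    return outputs;
  }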

Product surfaces

Representative measurement surfaces from the app.

Framing Radar
Pipeline diagram