Distinctiveness
A cosine-distance score between the persona's voice fingerprint and a baseline (default `chatgpt-default`). Below the floor (`min_score`), the persona reads as generic; above it, the persona reads as itself.
Voice fingerprint says "how close is this reply to this persona." Distinctiveness says "how far is this persona from generic." The two are different questions; both matter.
How it works
At generation time (and on every refine), Moonborn embeds the persona's Mask plus signature phrases and computes the cosine distance against a baseline embedding. The default baseline is `chatgpt-default`, a neutral, helpful-assistant voice. Other built-ins: `claude-default` and `gemini-default`. Teams can register a custom baseline persona (useful when the question is "does this brand voice variant still feel like our brand?").
```json
{
  "distinctiveness": 0.62,
  "baseline": "chatgpt-default",
  "minScore": 0.40,
  "verdict": "pass"
}
```
Config
- `consistency.distinctiveness.enabled` (default on at Pro+)
- `consistency.distinctiveness.baseline` (default `chatgpt-default`)
- `consistency.distinctiveness.min_score` (default `0.40`)
- `consistency.distinctiveness.metric` (default `cosine`)
- `consistency.distinctiveness.action_on_low_score` (default `warn`)
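Assuming the dotted keys above map into a nested JSON config file (the nesting is inferred from the key paths; the file shape itself is not documented here), the defaults would look like:

```json
{
  "consistency": {
    "distinctiveness": {
      "enabled": true,
      "baseline": "chatgpt-default",
      "min_score": 0.40,
      "metric": "cosine",
      "action_on_low_score": "warn"
    }
  }
}
```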
Per-org comparisons
Team workspaces gain a second query: compare against every persona already in the org. The output is the closest match plus its distance. Useful for catching accidental clones — a brand team forking variants shouldn't end up with two personas that score < 0.15 against each other.
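Conceptually the org comparison is a nearest-neighbor search over persona fingerprints. A sketch under the assumption that each persona exposes an embedding vector (the `PersonaFingerprint` shape and function names are illustrative, not SDK types):

```typescript
interface PersonaFingerprint {
  id: string;
  embedding: number[];
}

// Cosine distance: 1 - cosine similarity.
function cosineDistance(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return 1 - dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Find the closest existing persona and flag it if it falls under the threshold.
function closestMatch(
  candidate: PersonaFingerprint,
  orgPersonas: PersonaFingerprint[],
  threshold = 0.30,
): { id: string; distance: number; tooClose: boolean } | null {
  let best: { id: string; distance: number } | null = null;
  for (const p of orgPersonas) {
    if (p.id === candidate.id) continue; // skip self-comparison
    const d = cosineDistance(candidate.embedding, p.embedding);
    if (!best || d < best.distance) best = { id: p.id, distance: d };
  }
  return best ? { ...best, tooClose: best.distance < threshold } : null;
}
```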
```ts
await client.consistency.compareWithOrgPersonas(personaId, {
  threshold: 0.30,
});
```
API
- `GET /v1/personas/{id}/distinctiveness`: current score.
- `POST /v1/personas/{id}/distinctiveness/recompute`: re-run the score.
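A hedged sketch of calling the recompute endpoint with `fetch`; only the paths above come from the docs, while the base URL and bearer-token auth are assumptions:

```typescript
// Build the documented path; personaId is URL-encoded defensively.
function distinctivenessUrl(personaId: string): string {
  return `/v1/personas/${encodeURIComponent(personaId)}/distinctiveness`;
}

// Assumed usage: base URL and auth header are illustrative, not documented.
async function recompute(personaId: string, apiKey: string): Promise<unknown> {
  const res = await fetch(
    `https://api.example.com${distinctivenessUrl(personaId)}/recompute`,
    { method: "POST", headers: { Authorization: `Bearer ${apiKey}` } },
  );
  return res.json();
}
```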
Tier
Free has a fixed baseline (`chatgpt-default`) and read-only access. Pro+ can swap baselines and set thresholds. Team+ can run the cross-persona comparison.
Honest scope
Distinctiveness is a shape comparison. A persona can score 0.8 (very distinctive) and still be a bad persona — it can sound nothing like a generic assistant and also nothing like a real character. Pair it with the audit (Audit + provocation tests) to catch both failure modes.