Moonborn — Developers

Synthetic user research

Run qualitative research panels at the speed of code. Five-persona panels, structured interviews, longitudinal consistency — without the recruiting overhead.

UX researchers and PMs reach for synthetic panels when the cost of recruiting real participants is too high, the timeline is too short, or the question is still too half-formed to put in front of real users. The trade-off is honest: synthetic panels surface hypotheses faster, but the validation step still needs real humans.

When this fits

  • Discovery research. You're exploring a problem space and want five distinct perspectives before deciding what to formally study.
  • Concept testing. A landing page draft, a pricing tier, a feature pitch — get five opinions in ten minutes.
  • Question refinement. Before booking real participants, find out which questions actually surface useful answers.
  • Longitudinal consistency. The same panel can be re-interviewed weeks later; voice fingerprints persist.

When this does NOT fit

  • Purchase decisions. Synthetic personas don't have credit cards to spend or budgets to defend.
  • Edge-case emotional response. Real grief, real frustration, real delight don't map cleanly onto LLM completions.
  • Statistical significance. Five (or fifty) synthetic personas do not replace n=200 with confidence intervals.

Treat synthetic research as a way to sharpen the questions, not a substitute for the answers.

What Moonborn provides

  • Distinct personas with the four-layer model — five characters that genuinely differ on values, archetype, and voice.
  • Drift detection per reply, so a persona stays in character across a long interview.
  • Distinctiveness as a metric — pairwise comparison flags panels where two personas collapsed into the same average.
  • Long-term memory, so a panel revisited weeks later remembers the prior interview.
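To make the distinctiveness metric concrete, here is a minimal sketch of a pairwise audit. The scoring function is a toy stand-in (vocabulary overlap between briefs), not Moonborn's actual metric; the function names and the `0.30` floor mirror the target mentioned in this page, but the API is assumed, not real.

```python
from itertools import combinations

def distinctiveness(brief_a: str, brief_b: str) -> float:
    """Toy pairwise distinctiveness: 1 minus Jaccard overlap of brief
    vocabulary. A stand-in for whatever metric the real panel uses."""
    a, b = set(brief_a.lower().split()), set(brief_b.lower().split())
    return 1 - len(a & b) / len(a | b)

def audit_panel(briefs: dict[str, str], floor: float = 0.30) -> list[tuple]:
    """Flag persona pairs whose distinctiveness falls below the floor --
    i.e. two personas that collapsed into the same average."""
    flagged = []
    for (name_a, a), (name_b, b) in combinations(briefs.items(), 2):
        score = distinctiveness(a, b)
        if score < floor:
            flagged.append((name_a, name_b, round(score, 2)))
    return flagged
```

Whatever the underlying metric, the shape is the same: every pair gets compared, and any pair under the floor means the panel needs a sharper brief before the interview starts.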

How a panel run looks

The Synthetic user panel tutorial walks through the code. The shape:

  1. Sketch the demographic + psychographic spread.
  2. Generate five personas with sharp briefs.
  3. Audit pairwise distinctiveness (target ≥ 0.30).
  4. Script open-ended questions up front.
  5. Run each question against each persona in a fresh session.
  6. Aggregate, code, and theme in your own qualitative tool.
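Steps 4–6 reduce to a simple nested loop: scripted questions on the outside, personas on the inside, one fresh session per pair, one row per answer. The sketch below assumes a hypothetical `Persona` object and a `fresh_session_ask` stub standing in for Moonborn's SDK; only the loop structure is the point.

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """Hypothetical stand-in for an SDK persona object."""
    name: str
    brief: str

def fresh_session_ask(persona: Persona, question: str) -> str:
    """Stub for 'run the question in a fresh session'. A real run would
    open a new conversation so earlier answers can't leak context."""
    return f"[{persona.name}] answer to: {question}"

def run_panel(personas: list[Persona], questions: list[str]) -> list[dict]:
    """One row per (persona, question) pair, ready for qualitative coding."""
    return [
        {"persona": p.name, "question": q, "answer": fresh_session_ask(p, q)}
        for q in questions   # scripted up front, same order for everyone
        for p in personas    # fresh session per persona per question
    ]
```

The flat row-per-answer output is deliberate: it drops straight into whatever qualitative tool you use for coding and theming in step 6.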

Quality controls

  • Audit floor. Every persona scores ≥ 4.0/5 on the LLM-as-judge pass before joining the panel.
  • Provocation tests. The 33-test catalog catches role-breaking and jailbreak susceptibility — useful when your interview questions get uncomfortable.
  • Drift alerts. Logged per reply; a drifted answer is excluded from analysis or re-elicited.
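The exclude-or-re-elicit policy for drifted replies can be sketched as a small filter. The reply shape (a `drifted` flag per logged reply) and the `re_elicit` callback are assumptions for illustration, not Moonborn's actual interface.

```python
def clean_transcript(replies: list[dict], re_elicit=None) -> tuple[list, list]:
    """Split logged replies into (kept, excluded) before analysis.

    Each reply dict carries a boolean 'drifted' flag, as a per-reply
    drift log might provide. If a `re_elicit` callback is given, a
    drifted reply is re-asked once (e.g. in a fresh session) and kept
    only if the retry stays in character; otherwise it is excluded.
    """
    kept, excluded = [], []
    for reply in replies:
        if not reply["drifted"]:
            kept.append(reply)
        elif re_elicit is not None:
            retry = re_elicit(reply)
            (kept if not retry["drifted"] else excluded).append(retry)
        else:
            excluded.append(reply)
    return kept, excluded
```

Either branch keeps the analysis honest: a drifted answer never silently contaminates the themes you code from the transcript.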

Tier

Pro and up (for distinctiveness comparison + persistent fingerprints).

Honest scope

You are still doing research. Synthetic users are a generative instrument, not an evaluative one. Use the panel to widen the hypothesis space; then put the sharpened questions in front of real humans for the answer that ships.
