Moonborn — Developers

Synthetic user panel

Build a research panel of diverse personas, run a structured interview against the panel, and aggregate the results without your personas converging.

UX research with synthetic users is one of Moonborn's natural fits: deterministic personas, drift detection, and ensemble distinctiveness keep a panel from collapsing into a single average voice. This tutorial walks through building a 5-persona panel, running a structured interview, and saving the results.

1. Define your panel

Sketch the demographic + psychographic spread you want. For a SaaS product evaluation, that might be:

  • A senior IC engineer who's skeptical of AI tools.
  • A mid-level manager who buys based on social proof.
  • A founder shopping on price.
  • A research lead who tests everything.
  • A new hire who follows team consensus.

Five personas capture most of the diversity you need; ten is the practical upper bound before management overhead overtakes signal.

2. Generate the personas

const briefs = [
  'A 38-year-old senior backend engineer at a mid-size SaaS. Skeptical of AI tooling. Has been burned by vendor lock-in twice.',
  'A 34-year-old engineering manager. Buys tools based on team sentiment in Slack. Reads HN comments before signing contracts.',
  // ... three more
];
 
const panel = await Promise.all(
  briefs.map((intent) =>
    client.personas.createPersona({ intent, workspaceId: 'ws_...' }),
  ),
);
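
Firing all five createPersona calls in parallel can trip rate limits on some plans. A small generic retry-with-backoff wrapper keeps the panel build resilient; this is a sketch, not part of the Moonborn SDK, and the attempt count and delays are illustrative defaults:

```typescript
// Retry any async call with exponential backoff.
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3,
  baseDelayMs = 500,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < attempts - 1) {
        // Exponential backoff: 500 ms, 1 s, 2 s, ...
        await new Promise((resolve) =>
          setTimeout(resolve, baseDelayMs * 2 ** attempt),
        );
      }
    }
  }
  throw lastError;
}
```

Wrap each call in the map, e.g. `withRetry(() => client.personas.createPersona({ intent, workspaceId: 'ws_...' }))`, and the panel build survives transient 429s.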

3. Audit the spread

Before running the interview, check that the personas are genuinely distinct from each other:

// Pairwise check: every persona against every other (n·(n-1)/2 comparisons).
for (let i = 0; i < panel.length; i++) {
  for (let j = i + 1; j < panel.length; j++) {
    const cmp = await client.consistency.compare({
      fromPersonaId: panel[i].id,
      toPersonaId: panel[j].id,
    });
    // A low distinctiveness score means the two personas answer alike.
    if (cmp.value < 0.30) {
      console.warn(`Personas ${i} and ${j} too similar — regenerate`);
    }
  }
}

A panel where two personas score < 0.30 against each other will produce duplicate-feeling responses. Refine or regenerate the near-duplicates with sharper briefs.
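
If you collect the pairwise scores first, a small pure helper can pick out exactly which briefs to rework. `personasToRegenerate` below is an illustrative helper, not an SDK call; it consumes scores in the shape the loop above produces:

```typescript
interface PairScore {
  i: number;     // index of the first persona in the panel array
  j: number;     // index of the second persona
  value: number; // distinctiveness score from consistency.compare
}

// Return the panel indices involved in any pair scoring below the
// threshold — these are the briefs worth sharpening and regenerating.
function personasToRegenerate(pairs: PairScore[], threshold = 0.3): number[] {
  const flagged = new Set<number>();
  for (const { i, j, value } of pairs) {
    if (value < threshold) {
      flagged.add(i);
      flagged.add(j);
    }
  }
  return [...flagged].sort((a, b) => a - b);
}
```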

4. Script the interview

Define the question set up front. Open-ended questions outperform multiple-choice for qualitative work:

const questions = [
  'When you evaluate a new developer tool, what makes you trust it?',
  'Walk me through the last time you abandoned a tool you were paying for. What made you leave?',
  'What does "good documentation" look like to you? Be specific.',
];

5. Run the interview against each persona

const transcripts: Record<string, string[]> = {};

for (const persona of panel) {
  // Fresh session per persona so answers can't leak between interviews.
  const session = await client.chat.createSession({ personaId: persona.id });
  transcripts[persona.id] = [];

  for (const question of questions) {
    const reply = await client.chat.sendMessage({
      sessionId: session.id,
      content: question,
    });
    transcripts[persona.id].push(reply.content);

    // Log drift so out-of-character answers can be discounted later.
    if (reply.driftAlert) {
      console.warn(`Drift on ${persona.id}: ${reply.driftScore}`);
    }
  }

  await client.chat.endSession({ sessionId: session.id });
}

Each persona gets a fresh session — no cross-talk. Drift scores are worth logging; a drifted reply means the persona slipped out of character, which can bias the qualitative read.
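
If you would rather exclude drifted answers than merely log them, a small filter over the collected replies works. The reply shape below mirrors the fields used in the interview loop (content, driftAlert, driftScore); keeping the question index alongside each surviving answer lets the aggregation step still line answers up with questions. This is a sketch, not SDK behavior:

```typescript
interface InterviewReply {
  content: string;
  driftAlert?: boolean;
  driftScore?: number;
}

// Keep only the answers where the persona stayed in character,
// paired with the index of the question they answered.
function usableAnswers(
  replies: InterviewReply[],
): { questionIndex: number; content: string }[] {
  return replies
    .map((reply, questionIndex) => ({ questionIndex, reply }))
    .filter(({ reply }) => !reply.driftAlert)
    .map(({ questionIndex, reply }) => ({
      questionIndex,
      content: reply.content,
    }));
}
```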

6. Aggregate and analyze

for (const [personaId, answers] of Object.entries(transcripts)) {
  console.log(`\n=== ${personaId} ===`);
  questions.forEach((q, i) => {
    console.log(`Q: ${q}`);
    console.log(`A: ${answers[i]}\n`);
  });
}

For thematic analysis, pipe the transcripts into your own coding tool (Dovetail, Reframer, a notebook). Moonborn's job ends at producing the responses; clustering and theming are downstream.
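
Most coding tools ingest flat CSV, so a small exporter that joins questions to answers is a natural handoff point. This is plain TypeScript over the `transcripts` and `questions` shapes above, with naive quoting; a real export pipeline should use a proper CSV library:

```typescript
// Flatten { personaId -> answers[] } plus the question list into one
// CSV row per (persona, question) pair for import into a coding tool.
function transcriptsToCsv(
  transcripts: Record<string, string[]>,
  questions: string[],
): string {
  const quote = (s: string) => `"${s.replace(/"/g, '""')}"`;
  const rows = ['persona_id,question,answer'];
  for (const [personaId, answers] of Object.entries(transcripts)) {
    questions.forEach((question, i) => {
      rows.push([personaId, question, answers[i] ?? ''].map(quote).join(','));
    });
  }
  return rows.join('\n');
}
```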

7. Preserve the panel for re-runs

Save the persona IDs as a "panel" object in your research tool. The same panel can be re-run against new questions weeks later — the personas + their voice fingerprints persist, so longitudinal consistency is real.
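
A minimal shape for that saved panel object, sketched here as plain JSON on disk; the file path and field names are illustrative conventions, not a Moonborn format:

```typescript
import { readFileSync, writeFileSync } from 'node:fs';

interface SavedPanel {
  name: string;
  createdAt: string;    // ISO timestamp
  personaIds: string[]; // the IDs returned by createPersona
}

function savePanel(path: string, name: string, personaIds: string[]): void {
  const panel: SavedPanel = {
    name,
    createdAt: new Date().toISOString(),
    personaIds,
  };
  writeFileSync(path, JSON.stringify(panel, null, 2));
}

function loadPanel(path: string): SavedPanel {
  return JSON.parse(readFileSync(path, 'utf8')) as SavedPanel;
}
```

Weeks later, mapping `loadPanel(path).personaIds` through `client.chat.createSession` re-runs the same panel against a fresh question set.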

Honest scope

Synthetic users complement real-user research; they do not replace it. The panel surfaces hypotheses faster and cheaper, but actual purchasing decisions, emotional responses, and edge-case behaviors still need real participants. Treat synthetic panels as a way to sharpen real research questions, not as a substitute.

Tier

Pro and up (for distinctiveness comparison + persistent fingerprints).
