
Save time. Cut costs. Get insights at scale. That’s the promise behind a new wave of AI-powered research tools shaking up how we understand customers.
At ustwo, we believe that to build breakthrough products, you need to deeply understand your customer through research. So when we started seeing these claims, we had questions:
- Can these tools deliver the same quality as talking to people face-to-face?
- How much time and money do they actually save?
- Are they here to help us or replace us?
To shed light on these questions, we rolled up our sleeves and tested the tools ourselves.
What we tested and why
The latest emerging AI research platforms fall into two camps:
- Synthetic Users: Built from large online datasets, these platforms simulate research participants and their needs.
- AI Moderators: With instructions from you, these platforms recruit, moderate, and summarise real human interviews.
After exploring Synthetic User platforms through demos and research, we had our reservations. Trained on vast amounts of online data, their outputs offer a quick way to gauge how a user might think, but they are still approximations that lack real-life stories and experiences. We saw greater potential for accuracy if we created our own Synthetic Users from our own human interview data, rather than using a platform.
AI Moderator tools stood out as they promised speed and scale without losing the human voice on the other end.
We tested both in two live client projects: 1) shaping a US founder’s category-defining product vision; and 2) launching a personalised meal-planning service for influencer Soph’s Plant Kitchen.
In the first, we ran two rounds of AI- and human-led research to define the value proposition, identify a ‘first best user’ and refine a product experience for them. In the second we used an AI Moderator to validate an MVP feature set and pricing model, then built a Synthetic Persona in ChatGPT (grounded in the interviews) to inform lower-stake design decisions during build.
Together, these experiments gave us a clear perspective on where AI research tools can accelerate the work, and where only humans can go deep enough to uncover real insight.
What we discovered - the good and the challenging
Using AI Moderator platforms:
1. Fast interviews, slower insight
No chasing participants. No recruitment back and forth. No scheduling. No interviewing, full stop. On one project, once we’d set up the questions, stimuli, and recruitment criteria, the AI Moderator ran 82 interviews over the weekend - speed and scale no human could match.
Overall, the time savings are significant, but there is still work to be done at the start and again at the end. In setup, the platforms can auto-generate a discussion guide, but you have to refine it carefully; there’s no researcher to read between the lines. And when the insights come back, it’s not as simple as taking them at face value and moving on. In one project, for example, we wanted to identify the first best user, but the tool surfaced which features people preferred rather than who that user was or why they cared. To answer those big strategic questions, we still had to engage deeply with the quant data and the individual interviews.
2. AI cuts bias, but also buy-in
One strength of AI Moderators is objectivity: what’s said is what’s reported, free from cognitive bias or groupthink. This proved especially useful on one of the projects where we had a small team and didn’t have the luxury of multiple perspectives to challenge each other.
That said, synthesis isn’t just about reporting back. At ustwo, we see huge value in collaborative synthesis because it builds shared understanding. For example, while shaping the product vision for our US founder, only team discussion revealed the importance of a second user group that the primary user would be using the app on behalf of. Relying solely on AI-moderated insights would have missed this. For now, AI can surface patterns but can’t yet challenge or expand thinking the way a team can.
3. Cost savings mean shallower insights
Although you still pay recruitment fees and incentives with AI Moderators, the savings come from paying an AI rather than a human researcher. Pricing typically includes an implementation fee of a few thousand dollars, then either ~$20K+ annually or pay-as-you-go. That may seem like a lot, but compared with a Researcher’s salary, and factoring in the speed, scale, and flexibility of an AI Moderator, it’s pretty cost-effective.
Those savings can come at the cost of rich insight, though. A great researcher doesn’t just follow the script; they crack a joke, relate to what’s said, notice subtle cues, and adapt their questioning to dig deeper. That empathy builds trust and draws out stories full of nuance and emotion, which were central to our product strategy for the US founder. AI Moderators can probe and follow up, but without that human touch, interviews felt more transactional, with less flow and fewer meaningful stories.
4. Linking the what and the why - faster
AI Moderators don’t just surface what the themes are, but also how common they are. This unique blend of quant and qual from the same sample makes it easy to understand what’s most important and the why behind the what. For example, in one project, older participants found an app useful but weren’t willing to pay. By drilling into their interviews, we quickly uncovered why - security and privacy concerns over personal data. Confirming this with separate interviews and a survey would not have been so fast.
Please note that these platforms are still qualitative-first. They do quantify themes, but if your priority is statistical significance through a large sample size and statistical tests, conducting a standalone survey remains a faster, more robust and cost-effective route.
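To illustrate the kind of sense-check worth running on quantified themes from a qualitative-sized sample, here is a minimal sketch using a Wilson score interval. The counts are hypothetical placeholders, not our actual project data; the point is simply that a theme raised by a third of 82 interviews still carries a wide uncertainty range.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion - more reliable than the
    plain normal approximation at the small sample sizes typical of qual work."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - margin, centre + margin)

# Hypothetical tagging: 27 of 82 interviews raised data-privacy concerns.
low, high = wilson_interval(27, 82)
print(f"Theme prevalence: {27/82:.0%} (95% CI {low:.0%}-{high:.0%})")
```

Even a theme tagged in a third of interviews spans roughly 24–44% at this sample size, which is why we treat these counts as directional rather than statistically conclusive.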
5. A trade-off between candour and credibility
We were surprised to find that people rated features and concepts lower in AI-moderated interviews than in human-moderated ones. We can’t say for certain why, but it suggests people may be more honest when they feel less pressure to be nice or to avoid judgement. That openness makes AI Moderators valuable for gathering feedback on specific features, flows, designs, or even sensitive topics.
It’s worth noting that while participants may be more candid with AI Moderators, they aren’t always honest about who they are. In our AI-moderated research, one interviewee claimed to be a company director at a major energy firm but looked and sounded more like a teenager. Fraud is a challenge across many types of research recruitment, and detection is improving, but the risk is higher here.
Creating your own synthetic user:
6. Not for outsourcing decisions, but good for sense-checking
In one project, we had a Designer whose strengths were more in UI than UX. They found the Synthetic Persona — built in ChatGPT from insights gathered through the AI Moderator — invaluable as a quick second opinion on copy, visual hierarchy, and layout, where they’d normally lean on others for support. Aware of the risks, they used it carefully. LLMs can hallucinate, pull in data beyond customer interviews, or simply be too agreeable. By probing its reasoning and anchoring responses in the interview data we provided, the output felt more contextual and credible, giving quick confidence on lower-stakes design decisions.
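For readers curious what "grounded in the interviews" can look like in practice, here is a minimal sketch of assembling a persona system prompt from interview excerpts before sending it to an LLM. The persona name, profile, and quotes are invented placeholders, not our client’s data; the key idea is the explicit rule telling the model to stay inside the supplied evidence rather than improvise.

```python
def build_persona_prompt(name: str, summary: str, excerpts: list[str]) -> str:
    """Assemble a system prompt for a Synthetic Persona, anchored to
    verbatim interview excerpts so the LLM has evidence to stay within."""
    quoted = "\n".join(f'- "{e}"' for e in excerpts)
    return (
        f"You are {name}, a synthetic research persona.\n"
        f"Profile (from interview synthesis): {summary}\n"
        "Verbatim interview excerpts you must stay consistent with:\n"
        f"{quoted}\n"
        "Rules: answer only from the profile and excerpts above. "
        "If the interviews don't cover a question, say so instead of guessing."
    )

# Hypothetical example persona and excerpts:
prompt = build_persona_prompt(
    "Priya",
    "time-poor home cook, price-sensitive, plans meals weekly",
    ["I plan meals on Sunday evenings",
     "I won't pay unless my data stays private"],
)
print(prompt)
```

The resulting string would be used as the system message in a chat session; the "say so instead of guessing" instruction is our guard against the agreeableness and hallucination risks mentioned above, though it reduces rather than removes them.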
So what are we doing about it?
In short, AI research tools save time and money, but can reduce the depth and quality of insight. That’s a real drawback in early product development, when you’re trying to uncover the target customer and their core needs. At this stage, nothing beats being in the room yourself, building a rapport, picking up on nuance and extracting the unique emotional stories that AI Moderators simply can’t replicate (at least not yet).
But we really do see strong opportunities for AI customer research tools to augment us, and in some cases even replace us:
- AI Moderators for understanding which concepts or features matter most and why
- AI Moderators to scale human-led research when defining the target customers and their core needs
- AI Moderators when time, budget, or team capacity would otherwise stop research happening
- Synthetic Personas (built from your interview data) for a quick second opinion on lower-stakes decisions
This is only the beginning. The tools are advancing fast and already making interviews easier to set up, improving fraud detection, and helping teams connect and synthesise insights across projects. The potential is big, and we’re excited to see what’s next. If you’re exploring them too, we’d love to hear what you’re learning.
