This document has been created to support customers and projects that leverage AI capabilities in research and data collection. This includes:
Customers who need human qualitative data at scale.
Brands, political pollers, media, or research agencies running AI-Moderated Interviews (AIMI) studies via video, audio, or text.
AI/ML companies training and validating models or improving conversational or behavioral datasets.
Projects that require higher respondent effort and variability than traditional quantitative surveys.
Projects where traditional survey benchmarks, e.g., Length of Interview (LOI), Incidence Rate (IR), and Cost per Interview (CPI), may not directly apply.
When to use this best practices guide
We encourage customers launching projects that collect data for AI-Moderated Interviews or for AI/ML model building, training, and validation to reference this guide to ensure their projects can be fielded successfully.
Examples of projects that would benefit from the outlined best practices include:
Conversational research: AI-Moderated Interviews, where an AI agent uses text, audio, or video to probe the respondent for deeper insights based on the provided answers.
Data collection for AI/ML model building, training, and validation: Behavioral, linguistic, or multimodal datasets that can be applied to a broad range of use cases.
How to successfully set up your project
Before launching your project, please make sure you have considered the following:
The study flow and respondent journey
Realistic expectations for LOI and IR
Actual, not perceived, respondent effort and time
Considering these points helps you avoid a common pitfall: designing projects around the intended respondent experience rather than actual respondent behavior.
Without this consideration, projects risk high drop-off rates, low survey conversion rates, and inconsistent or low-quality data, all of which extend fielding time.
Learn more about finding the appropriate LOI and IR in our FAQs.
Designing for respondent effort
The most important concept in AIMI projects is that effort, not length of interview, drives performance and data quality.
Effort increases significantly across different modalities:
Text: lowest friction, highest number of completes
Audio: moderate effort, lower conversion
Video: highest effort, highest drop-off rate
This applies to all customers leveraging AI capabilities for research, albeit differently:
Brands and research agencies: Impact on feasibility and cost
AI/ML companies: Impact on the trade-off between data richness and scale
Curating a rewarding respondent experience
In some AIMI projects, respondents may be asked to:
Open a new tab or window to record audio/video
Use an external tool outside the survey
Leave the survey and return later to complete it
While these setups can work, they often introduce friction into the experience. Each additional step, especially leaving the survey, reduces the volume and quality of responses received. For example:
An interruption to the respondent’s flow can cause a loss of focus
Unclear next steps can create confusion and lead to drop-off
This is particularly important for tasks like audio and video, where effort is already higher. Wherever possible, keeping the entire experience within a single, continuous flow (in the same window) tends to result in a smoother experience and stronger completion rates.
A simpler, more seamless experience helps to ensure that:
Respondents remain engaged
Complexity is reduced, lowering drop-offs
Data quality and overall project performance improve
Setting appropriate incentives
Traditional survey pricing models rely heavily on LOI to determine a fair and appropriate reward for the respondent. For AIMI projects, this is often insufficient: a short video task can feel more demanding than a long survey, and audio and video introduce psychological and technical friction.
We recommend that Project Managers:
Align CPI with perceived respondent effort, not just time
Use Boost CPI when:
Media (audio/video) is involved
Targeting is niche
Early pacing is slow
Boosting CPI improves traffic to a survey that requires higher effort, but it does not fix structural issues. If your project continues to experience issues, ensure it is set up correctly and that respondents can successfully complete the survey.
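To make the effort-based approach concrete, below is a minimal sketch of how a team might derive a starting CPI from expected effort rather than LOI alone. The base rate, modality multipliers, and boost factor are illustrative assumptions, not Cint pricing guidance:

```python
# Hypothetical effort-adjusted CPI sketch. The base rate and modality
# multipliers below are illustrative assumptions, not Cint pricing guidance.

BASE_RATE_PER_MINUTE = 0.15  # assumed baseline payout per minute of effort

# Assumed effort multipliers reflecting the friction ladder described above:
# text < audio < video.
EFFORT_MULTIPLIERS = {"text": 1.0, "audio": 1.5, "video": 2.25}

def suggested_cpi(expected_minutes: float, modality: str,
                  niche_targeting: bool = False) -> float:
    """Return a starting CPI that scales with perceived effort, not just time."""
    cpi = expected_minutes * BASE_RATE_PER_MINUTE * EFFORT_MULTIPLIERS[modality]
    if niche_targeting:
        cpi *= 1.3  # assumed boost for hard-to-reach audiences
    return round(cpi, 2)

# A 10-minute video AIMI task warrants a noticeably higher CPI than a
# 10-minute text survey, even though the LOI is identical.
print(suggested_cpi(10, "text"))                         # 1.5
print(suggested_cpi(10, "video", niche_targeting=True))  # 4.39
```

The exact numbers matter less than the structure: the same 10 minutes of LOI prices very differently once modality friction and targeting difficulty are accounted for.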
Learn more about boosting CPI in our FAQs.
Ensuring data quality in AIMI studies
In AIMI studies, responses are often open-ended and vary in depth, style, and level of engagement. As a result, quality management needs to be adapted rather than reusing the approach relied on for traditional surveys.
Compared to traditional surveys:
Responses may vary more in length, depth, and style
Repetitive or templated answers may occur more frequently
Respondent engagement will not be uniform
Natural variability is expected for open-ended questions
As such, where possible, quality checks should be built directly into the survey flow, e.g., screening or termination criteria.
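As a hypothetical illustration of such an in-flow check, the sketch below screens open-ended answers for minimum length and verbatim repetition before the respondent proceeds. The thresholds and rules are assumptions to calibrate per study:

```python
# Minimal sketch of an in-survey quality gate for open-ended answers.
# Thresholds and rules are illustrative assumptions; calibrate them to your
# study, and remember that some variability is expected and healthy.

def passes_quality_gate(answer: str, previous_answers: list[str],
                        min_words: int = 5) -> bool:
    """Flag answers that are too short or repeat an earlier answer verbatim."""
    if len(answer.split()) < min_words:
        return False  # likely low effort; route to a soft re-prompt
    if answer.strip().lower() in (a.strip().lower() for a in previous_answers):
        return False  # templated/repeated answer, more common in AIMI studies
    return True

# Example: re-prompt or terminate within the survey flow rather than
# rejecting the response after fielding.
history = ["I like the taste and the price."]
print(passes_quality_gate("good", history))                                     # False
print(passes_quality_gate("The checkout flow felt slow on mobile.", history))   # True
```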
It is important to have cross-functional alignment on what constitutes a high-quality response, and to make sure the built-in checks support that definition.
Different stakeholders may define “quality” differently. Here are some examples to consider:
Research teams may focus on clarity and insight
AI/ML teams may focus on structure, variability, or training usefulness
Global panel partners typically focus on fair evaluation and consistent criteria
Balancing the quality of responses and feasibility
It is important to ensure that your study design and respondent experience make it possible for respondents to participate and successfully submit a response. Best practices include:
Aligning on quality criteria upfront
Using in-survey checks where possible
Reviewing data early and calibrating project settings
Maintaining a balance between quality standards and achievable performance
This helps you field on time, manage your budget effectively, and access all available supply for your project’s requirements.
Learn more about feasibility in the Cint Exchange in our FAQs.
Submitting correct reconciliations
If responses do not meet your requirements, they can be rejected within the survey (preferred) or during survey review.
Please note that sharing clear examples or data points explaining why a response needs to be reconciled helps improve the quality of future responses.
Please keep in mind that not all variation in responses indicates poor quality and that some level of variability is expected in open-ended formats.
Learn more about the Cint Exchange reconciliation policy and how to submit reconciliations in our FAQs.
Iterative project performance
Unlike traditional surveys, AIMI studies are rarely optimal at launch.
They require:
Early testing to catch and resolve project errors
Monitoring of real-world performance
Iterative adjustments to improve performance
We recommend that Project Managers closely track the following (see the calculation sketch below):
Drop-off points (especially media steps)
Conversion rates
Actual completion time vs expected effort
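As a rough illustration of these metrics, the sketch below computes stage-by-stage conversion from hypothetical session records. The stage names and data shape are assumptions, not a platform export format:

```python
# Illustrative funnel sketch: compute drop-off and conversion from respondent
# events. Stage names and record shape are hypothetical; adapt them to however
# your platform exports session data.

from collections import Counter

STAGES = ["entered", "screened_in", "started_media", "completed"]

def funnel_report(sessions: list[dict]) -> None:
    """Print stage-by-stage conversion so media-step drop-off is visible."""
    reached = Counter()
    for s in sessions:
        # Count every stage up to and including the furthest one reached.
        for stage in STAGES[: STAGES.index(s["last_stage"]) + 1]:
            reached[stage] += 1
    for prev, cur in zip(STAGES, STAGES[1:]):
        rate = reached[cur] / (reached[prev] or 1)
        print(f"{prev} -> {cur}: {rate:.0%} ({reached[cur]}/{reached[prev]})")
    print(f"Overall conversion: {reached['completed'] / (reached['entered'] or 1):.0%}")

sessions = [
    {"last_stage": "completed"},
    {"last_stage": "started_media"},   # dropped at the media step
    {"last_stage": "screened_in"},
    {"last_stage": "completed"},
]
funnel_report(sessions)
```

A report like this makes it easy to see whether drop-off concentrates at the media step, which is the most common failure point in audio and video AIMI studies.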
In the Cint Exchange, tracking performance data is made simple by leveraging the target group’s performance tab.
Learn more about tracking target group performance in our FAQs.
Platform setup and technical validation
Before launching a study, we recommend that Project Managers ensure the following (see the validation sketch below):
All redirects function correctly (complete, terminate, overquota)
Respondent IDs (RIDs) pass accurately
The end-to-end journey is tested, and a respondent can successfully submit a response
This is especially critical when:
External tools are used in the respondent experience
Custom logic or media capture is involved
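As an illustration of this kind of pre-launch validation, the sketch below hits each redirect endpoint with a test RID and verifies the ID is passed through. The URLs, the "rid" parameter name, and the endpoint behavior are placeholders, not Cint's actual endpoints:

```python
# Hypothetical pre-launch smoke test for redirects and RID pass-through.
# Replace the placeholder URLs and the "rid" parameter name with the actual
# values from your project setup; everything below is an assumption.

from urllib.parse import urlencode, urlparse, parse_qs
import urllib.error
import urllib.request

TEST_RID = "TEST-RID-12345"
REDIRECTS = {
    "complete":  "https://example.com/exit/complete",
    "terminate": "https://example.com/exit/terminate",
    "overquota": "https://example.com/exit/overquota",
}

def check_redirect(status: str, base_url: str) -> None:
    url = f"{base_url}?{urlencode({'rid': TEST_RID})}"
    # Confirm the RID survives URL construction...
    assert parse_qs(urlparse(url).query).get("rid") == [TEST_RID]
    # ...then confirm the endpoint responds at all.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{status}: HTTP {resp.status} - OK")
    except urllib.error.URLError as err:
        print(f"{status}: FAILED ({err})")

for status, base_url in REDIRECTS.items():
    check_redirect(status, base_url)
```

A quick pass like this, followed by one real end-to-end test submission, catches most redirect and ID pass-through issues before respondents ever see the survey.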
Privacy considerations
Please ensure clear and explicit respondent consent is obtained:
Provide directions for taking the survey to set clear expectations for respondents
Explain, if possible, how the data will be used