Yield management is the strategic process of carefully selecting and prioritizing viable survey opportunities to ensure that the highest-performing studies receive the most traffic, while surveys falling below desired metric thresholds are either deprioritized or not considered at all.
To help our supply partners build an effective yield management system, we recommend applying the following approaches, amongst a few other considerations. This content is not relevant for our Match Integrated partners.
Creating suitable survey filter rules
When starting out your integration, it is important to monitor whether any initial survey filtering rules applied to your Feed Opportunities subscription, or within your own system, are heavily restricting the survey inventory available. Having a significantly reduced pool of surveys to match users with can impact potential earnings.
A recommended best practice is to keep your survey filtering criteria broader during the soft launch phase. As you gain a better understanding of how your respondents perform against different survey types, length of interview (LOI), buyer preferences, etc., you can gradually refine and test your survey filtering rules to optimize performance. See the Building a Testing Framework section below for more information on creating a testing framework.
Feed opportunities allows you to easily adjust the criteria for surveys you receive notifications about, simplifying the process of gradually filtering out surveys that are not relevant to your specific business model. By adopting this approach, you can adapt and tailor your survey selection process to align with your users' preferences and maximize your survey-based revenue.
Example Survey Filtering
Property | Value | Description |
|---|---|---|
country_language | eng_gb, eng_us | English-language surveys from the United Kingdom and the United States |
length_of_interview | <=20 | Surveys equal to or less than 20 minutes |
conversion | >=5 | Surveys equal to or greater than 5% conversion |
revenue_per_interview | >=0.50 | Surveys equal to or greater than $0.50 |
study_type | adhoc, recontact | Adhoc and recontact surveys |
Example Opportunities Subscription Request
{
"callback" : {{callback}},
"opportunities": [{
"country_language" : {"in": ["eng_gb", "eng_us"]},
"length_of_interview": { "lte" : 20 },
"conversion": { "gte" : 0.05 },
"revenue_per_interview": { "gte" : 0.50 },
"study_type": {"in": ["adhoc", "recontact"]}
}]
}

Identifying viable surveys
Before presenting the best survey opportunities to your respondents, it's crucial to ensure that the following criteria are met:
The survey is still live
The total quota is available
Any subquota(s) matching the respondent's profile are available
Data profiled on a respondent meets the qualification criteria
Respondent has not attempted this survey before
Respondent has not attempted a survey in this survey group before (if applicable)
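The checks above can be sketched as a single gate function. This is a minimal illustration, not Cint's implementation: the `Survey` fields, segment names, and the `meets_qualification` flag are all hypothetical stand-ins for data your system would hold.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class Survey:
    # Field names are illustrative, not Cint API property names.
    live: bool
    quota_remaining: int
    subquotas_remaining: Dict[str, int]          # e.g. {"age_18_24": 3}
    attempted_by: Set[str] = field(default_factory=set)
    group_id: Optional[str] = None

def is_viable(survey: Survey, respondent_id: str,
              respondent_segments: List[str], attempted_groups: Set[str],
              meets_qualification: bool = True) -> bool:
    """Run the viability checks above before offering a survey to a respondent."""
    if not survey.live:
        return False                             # survey is no longer live
    if survey.quota_remaining <= 0:
        return False                             # total quota exhausted
    for seg in respondent_segments:              # matching subquotas must have room
        if survey.subquotas_remaining.get(seg, 1) <= 0:
            return False
    if not meets_qualification:
        return False                             # profile fails qualification criteria
    if respondent_id in survey.attempted_by:
        return False                             # already attempted this survey
    if survey.group_id and survey.group_id in attempted_groups:
        return False                             # already attempted this survey group
    return True
```

A respondent whose matching subquota is full, or who has already attempted the survey or its group, is filtered out before any ranking takes place.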
Ranking eligible surveys
Cint provides a comprehensive set of data through our APIs and redirect information, empowering suppliers to target the most suitable and high-performing surveys. To assist you in effectively utilizing this data, we've compiled a list of what we consider to be crucial data points when creating a survey ranking algorithm.
Please note that some of these metrics should be used as a starting point for assessing the overall health of surveys from a global perspective; it's essential to also track your internal performance as you direct respondents to surveys.
Conversion: this is one of the most important metrics in understanding the overall health of a survey. A good conversion rate improves the respondent experience, engagement, and loyalty, whilst helping maximize earning potential.
Survey mobile conversion: this data is particularly useful for understanding how a survey is performing for respondents accessing it from a mobile device. If most of your respondent traffic is via a mobile device, identifying good mobile opportunities will improve user experience and reduce drop rates.
Revenue per click (RPC): an important objective is to earn the most revenue with the least number of clicks. It is worth noting that a high RPC can be the consequence of either a well-converting survey or a niche-targeted, low-IR survey with a high revenue per interview (RPI). Therefore, it is advised to track RPI alongside RPC.
Revenue per click per minute (RPCM): this data represents the value being placed on a respondent's time in a survey. RPCM is not included in our API data but can be calculated via the following formula: [Revenue Per Click (RPC)]/[Average LOI]
Length of interview (LOI): this data should be used to track and analyze how your respondents engage with varying survey lengths, across different devices. Tailor your offerwall experience to show surveys that your respondents prefer, but also guarantees a good ROI.
Termination length of interview (TLOI): this is a good indication as to whether there are any late termination rules embedded within a survey. Penalizing surveys with late terminations can help improve respondent satisfaction.
Buyer name: track and analyze buyer performance. If certain buyers are delivering poor performance, consider applying rules that limit which of their surveys you allow respondents to participate in. This may mean only allowing respondents to participate in a poorly performing buyer's survey if it shows a conversion rate above a certain threshold, or an LOI below a limit that may be more restrictive than the one applied to better-performing buyers. Continue to monitor global data in case buyer trends change.
Property name | Calculation | Where you can find this data |
|---|---|---|
system conversion | The number of respondents that completed the survey, divided by the number of respondents that entered the Cint Exchange (formerly known as Lucid Marketplace), from any device. This value is calculated after one complete and rounded to the nearest whole number. | These statistics can be obtained at the supplier or global level, either trailing 12 hours or for the whole survey lifetime, using the Statistics endpoints. |
conversion | The number of respondents that completed the survey, divided by the number of respondents that qualified for the survey (made it to the client-side survey), from any device. This value is calculated after one complete and rounded to the nearest whole number. | These statistics are global lifetime data obtained from an Opportunities subscription. |
mobile_conversion | Percentage of mobile respondents who complete the survey after qualifying. This value is calculated after one mobile complete and rounded to the nearest whole number. | |
revenue_per_click | (RPI * number of completes) / number of system entrants | |
length_of_interview | The median time for a respondent to complete the survey, excluding the Cint Exchange prescreener, in minutes. This value will be zero until a complete is achieved. | |
termination_length_of_interview | The median time for a respondent to be termed, in minutes. This value is calculated after six survey entrants and rounded to the nearest whole number. Until six survey entrants are achieved the value will be zero. | |
account_name | No calculation; this property is the buyer's name. | |
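The calculations in the table above can be expressed directly from raw counts. This is a sketch under the stated definitions; the function names are my own, and rounding follows the "nearest whole number" rule described above.

```python
def system_conversion(completes: int, system_entrants: int) -> int:
    """Completes / Exchange entrants, as a percentage rounded to a whole number."""
    return round(100 * completes / system_entrants) if system_entrants else 0

def conversion(completes: int, qualified_entrants: int) -> int:
    """Completes / respondents who reached the client-side survey, rounded %."""
    return round(100 * completes / qualified_entrants) if qualified_entrants else 0

def revenue_per_click(rpi: float, completes: int, system_entrants: int) -> float:
    """(RPI * number of completes) / number of system entrants."""
    return rpi * completes / system_entrants if system_entrants else 0.0

def rpcm(rpc: float, average_loi_minutes: float):
    """Revenue per click per minute: RPC / average LOI (None while LOI is unknown)."""
    return rpc / average_loi_minutes if average_loi_minutes else None
```

Note that RPCM is not returned by the API and must be derived, as above, from RPC and average LOI.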
Pacing Survey Entrants
When sending respondents to a survey, we recommend initially pacing the volume of users sent in, testing with a small group of 20-30 respondents to understand performance. This practice is particularly important for those opportunities that are brand new to Cint and have no global data. If your system's internal metrics surpass the desired performance benchmarks, start to open the survey to a larger audience, continuing to monitor the performance on a rolling survey entrant basis, altering survey rankings based on the latest data.
For surveys that fail to achieve completion or fall short of performance thresholds, refer to the guidelines provided in the Shutting down underperforming surveys section.
You should also ensure that you are effectively pacing respondents into surveys based upon the total number of completes still available. An elevated overquota rate may occur if a survey is being offered to an excessive number of respondents. A good rule of thumb for how many respondents should enter a survey at any given moment is to not exceed the number arrived at using this formula:
Quota remaining / System conversion rate
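The rule of thumb above can be coded as a simple cap. A minimal sketch: the conversion rate is expressed as a fraction, and the fallback behaviour for surveys with no conversion data yet is an assumption of mine, not Cint guidance.

```python
def max_concurrent_entrants(quota_remaining: int, system_conversion_rate: float) -> int:
    """Cap on respondents in a survey at once: quota remaining / conversion rate.

    system_conversion_rate is a fraction (0.25 == 25%). When no conversion
    data exists yet, fall back to the remaining quota itself as a
    conservative cap (assumption, not from the source).
    """
    if system_conversion_rate <= 0:
        return quota_remaining
    return int(quota_remaining / system_conversion_rate)
```

For example, with 5 completes remaining and a 25% system conversion rate, no more than 20 respondents should be in the survey at once; sending more than that risks an elevated overquota rate.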
Tracking Survey Performance
Before sending respondents to surveys, you should utilize the data retrieved through our APIs to understand the global performance for each survey. As the performance of surveys evolves in-field, it's crucial to continuously collect and update the performance information in your database.
Once you start sending users to a survey, it becomes important to track, record and use your own internal performance metrics to determine whether to continue sending respondents to that survey or not. While global data offers a valuable indication of survey performance, it's important to note that the performance may vary significantly when it comes to your own users.
To make informed survey ranking decisions, we recommend employing the following logic for storing and utilizing performance data:
Global Performance: Use the global performance metrics (e.g. conversion and RPC) returned from the APIs prior to respondents attempting a survey.
In cases where the performance metrics are zero, consider testing and storing an average value based on prior performance data. Ideally, adjust this value based on historic buyer and country-specific performance.
Internal Performance: Once respondents begin their participation in a survey, start calculating internal performance metrics, using this information as the primary metric for ranking decisions. We recommend using the internal performance data after 20 sessions have finished and the session outcomes have been collected.
When calculating internal performance metrics, we recommend excluding Cint Survey Protection Services session outcomes. These terminations are based on user quality and are not an indication of the performance of a survey. Additionally, if a respondent hasn't returned to your platform after 2x the survey's median LOI (or if LOI isn't available, use BidLOI), we advise classifying them as a "drop". Update your database with the returned session outcome data if they later return.
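The exclusion and drop rules above can be sketched as a session classifier. The status strings here are illustrative placeholders, not Cint response codes, and the `2x` multiplier and BidLOI fallback follow the guidance in the preceding paragraph.

```python
def classify_session(outcome, minutes_elapsed, median_loi, bid_loi):
    """Classify a session for internal performance metrics.

    outcome is the returned session status, or None if the respondent has
    not yet come back to the platform. Survey Protection Services
    terminations are excluded from performance metrics entirely, since
    they reflect user quality rather than survey performance.
    """
    if outcome == "security_termination":        # placeholder status name
        return "excluded"
    if outcome is not None:
        return outcome                           # complete / term / overquota, etc.
    loi = median_loi or bid_loi                  # fall back to BidLOI when LOI is 0
    if minutes_elapsed > 2 * loi:
        return "drop"                            # assume abandoned; update if they return
    return "pending"
```

A session classified as "drop" should later be overwritten with the real outcome if the respondent eventually returns, as noted above.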
To aid with calculating your internal performance, there are two methods for collecting session outcomes.
Redirects: To collect the Cint Exchange (Marketplace) prescreener and client response codes for each respondent session, ensure the parameters below are appended to all of your redirects.
InitialStatus=[%InitialStatus%]
ClientStatus=[%ClientStatus%]
If these are missing, speak to your Account Team to have them included. Our collecting data from redirects resource provides more information on what parameters can be appended and the formatting options available.
Feed Respondent Outcomes: Create an outcomes subscription to receive notifications to your webhook on respondent session outcomes. Our recommendation is to receive events for all available response codes, but if you want to limit these to specific response codes, you can achieve this through the filtering options available. See our Feed Opportunities and Respondent Outcomes FAQs for some example subscriptions.
Shutting Down Underperforming Surveys
An essential component of effective yield management involves establishing minimum survey performance thresholds within your system and deactivating surveys that fall below the desired criteria. Continuously directing a large volume of respondents to surveys that perform poorly not only adversely affects metrics such as conversion rates and RPC but also impacts respondent satisfaction. Ideally, these performance thresholds should be configured at the country level, as survey performance can vary across different countries.
As mentioned in the previous section Tracking Survey Performance, it is recommended to initially utilize the global data provided through our APIs to determine the performance of surveys. This will help identify surveys that fall below any performance thresholds and should consequently be marked as 'inactive' in your database and not considered in your survey ranking decisions. If the global survey performance improves over time, you should update your system with the latest information and set the survey status as 'active'.
As you begin directing users to survey opportunities, make use of your internal performance metrics to determine whether to increase or halt traffic to specific surveys. Surveys that do not result in a complete or fall below your performance threshold should be marked as inactive. Where you increase send, continue to monitor the performance on a rolling survey entrant basis, altering survey status/rankings based on the latest data.
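The active/inactive logic described above can be sketched as follows. This is a simplified illustration, assuming internal conversion takes over from the global figure once 20 session outcomes have been collected; the threshold value is yours to tune, ideally per country.

```python
def update_survey_status(internal_conversion: float, internal_sessions: int,
                         global_conversion: float, threshold: float,
                         min_sessions: int = 20) -> str:
    """Mark a survey 'active' or 'inactive' against a conversion threshold (%).

    Uses internal conversion once enough sessions have finished; otherwise
    falls back to the global figure from the API. A survey marked inactive
    can return to active if the relevant figure later improves.
    """
    effective = (internal_conversion if internal_sessions >= min_sessions
                 else global_conversion)
    return "active" if effective >= threshold else "inactive"
```

Re-running this check as fresh global and internal data arrives gives you the "reactivate if performance improves" behaviour described above.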
Building a Testing Framework
Establishing the right yield management algorithm typically involves an iterative process, where different strategies are tested to understand the best approach. Therefore, we recommend developing a system framework that facilitates split testing with different survey ranking variations and enables the seamless implementation of successful strategies. Testing incremental changes in survey targeting and prioritization is a low-risk approach for acquiring valuable insights into effective methods without majorly affecting overall performance. This approach allows you to adapt to evolving trends and remain competitive in the Cint Exchange.
After soft launch is often a great time to start testing, as you will have accumulated an initial dataset on survey integration performance. Utilize this data to plan and implement future changes to be tested. Based on your data-driven hypotheses, introduce these modifications in the form of test variations and closely monitor the performance.
For those surveys you have tested and made inactive due to poor performance, we suggest keeping an eye on the global data to observe any improvements made by buyers in-field that have enhanced the survey's performance. It can be beneficial to re-test a survey to assess whether any changes made by the buyer result in better outcomes for your respondents.
Additional Considerations
Respondent Segmentation
Having a diverse pool of respondents means that applying a single, catch-all survey ranking algorithm can restrict the success of your integration. Segmenting your panel allows you to target these respondents with opportunities that will improve performance and customer satisfaction, and reduce your risk of reconciliations. We have outlined below a few options to consider including in your yield management algorithm:
Respondent Age: For new users, prioritizing surveys with a shorter LOI and/or a higher conversion rate is a good way to ensure they have a favorable experience. The more opportunities they are able to complete and earn rewards from, the less likely they are to churn.
Respondent Quality: If a respondent has a low quality score, whether from failing quality checks or because they are a brand-new user, regulate the types of inventory presented; for example, exclude them from higher revenue-per-interview surveys.
Predicting Survey Performance
In the Pacing Survey Entrants section, we spoke about the importance of testing new opportunities with a small pool of respondents to understand performance and adjusting sample send based on your survey metrics. We are going to take this concept one step further and focus on how you can optimize your send by prioritizing the best new survey opportunities, particularly in our high-demand markets, where a large volume of new surveys can launch at any given time.
Typically, survey rankings are formulated based on in-field metrics such as conversion, RPC and RPCM. So how can you rank new opportunities when this data is not available? We recommend applying a machine learning approach: train your system on the main characteristics and data patterns (outside of field data) behind well-converting surveys, per buyer.
Once your system has identified the different data patterns attributed to survey performance, assign a prediction score against the criteria based on the confidence levels of achieving the desired outcome.
This scoring system should then be applied to your overall survey ranking algorithm. In larger demand markets you may want to apply a conservative approach and only present the best-predicted surveys to users once you have exhausted all surveys with in-field metrics above your desired thresholds. Alternatively, you may want to weight new opportunities where you are confident in the performance outcome on a similar scale to surveys where you have in-field data.
Irrespective of the option you apply, we recommend that when sending to new surveys you continue to track the first 20 entrants and adjust rankings based upon your own internal data.
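One way to combine a model's prediction with accumulating in-field data is a confidence-weighted blend. This is purely an illustrative weighting of my own, not a Cint formula: the prediction is discounted by model confidence, and observed internal conversion takes over as sessions accumulate towards the 20-session mark.

```python
def blended_score(predicted_conversion: float, confidence: float,
                  internal_conversion: float, internal_sessions: int,
                  min_sessions: int = 20) -> float:
    """Blend predicted and observed conversion for ranking new surveys.

    With no in-field data, the score is the prediction discounted by model
    confidence (0..1); as finished sessions approach min_sessions, the
    observed internal conversion fully takes over. Illustrative only.
    """
    weight = min(internal_sessions / min_sessions, 1.0)
    prior = predicted_conversion * confidence
    return (1 - weight) * prior + weight * internal_conversion
```

A brand-new survey with a 20% predicted conversion at 0.5 confidence would score 10; after 20 sessions, only the observed figure matters.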
Reporting
Utilize the Supplier Portal reports below to gain insights into the performance of your API integration. After logging into the Portal, click on 'My Reports' to access this valuable information.
Supplier Cint Exchange Feed Dashboard:
This report offers a comprehensive overview of your integration, encompassing key metrics such as conversion and GMV. Additionally, it provides detailed insights into why survey respondents are terminating, whether it's in the prescreener or client survey. You can toggle the data between the past 6 months and the past 14 days.
Supplier Yield Management Dashboard:
This report examines the performance of your integration at the survey level over the past 3 days. It serves as a valuable resource for identifying instances where you might be directing excessive traffic to underperforming surveys and not terminating poorly performing surveys promptly.
Survey ranking example
Having shared our recommendations for building an effective Yield Management system, we have concluded this article with a simple example of how you can rank surveys within your system.
Objective: Achieve a high user retention rate and maximize earning potential by filtering out longer LOI surveys and prioritizing the higher-converting studies.
Steps
Create entry links for eligible surveys based on set filtering rules. In this example we have used the following rules:
Filter Rules
Property | Value |
|---|---|
collects_pii | false |
length_of_interview | <=15 |
termination_length_of_interview | <8 |
conversion | >0 |
revenue_per_interview | >0 |
Create a table that stores data collected from Feed Opportunities for each eligible survey. These data points will be used as inputs for the columns added in the next step. Survey ID, Global Conversion %, and LOI will all be stored in this example.
Survey ID | The primary key column should be Survey ID. The value in this column will represent the unique survey_id for which an entry link has been created. |
Global Conversion % | The source of the values in this column is the 'conversion' property delivered by Feed Opportunities. |
LOI | LOI can be obtained from Feed Opportunities via property 'length_of_interview'. This will be shown as '0' until 6 completes are achieved. Until the data is present, store the estimated length of interview via property 'bid_length_of_interview'. |
Example Table
Survey ID | LOI | Global Conversion % |
|---|---|---|
1 | 4 | 20 |
2 | 15 | 8 |
3 | 8 | 20 |
4 | 6 | 15 |
5 | 12 | 2 |
6 | 15 | 8 |
7 | 5 | 11 |
8 | 15 | 3 |
Add calculated columns to the table. These data points will be used as inputs for the columns added in the next step. In this step, the columns added will be Internal Conversion % and Functional Conversion %.
Internal Conversion % | This value is calculated based upon the number of respondents who completed the survey after qualifying. The data to make this calculation is sourced from the ‘client_status’ property information delivered by Feed Outcomes for each session. To be populated as NULL until 20 session outcomes have been returned for a survey. |
Functional Conversion % | The value of this column should be populated based on the following scenarios: while Internal Conversion % is NULL (fewer than 20 session outcomes collected), use Global Conversion %; once Internal Conversion % is populated, use it instead. |
Example Table:
Survey ID | LOI | Global Conversion % | Internal Conversion % | Functional Conversion % |
|---|---|---|---|---|
1 | 4 | 20 | NULL | 20 |
2 | 15 | 8 | NULL | 8 |
3 | 8 | 20 | 25 | 25 |
4 | 6 | 15 | NULL | 15 |
5 | 12 | 2 | 0 | 0 |
6 | 15 | 8 | 15 | 15 |
7 | 5 | 11 | 9 | 9 |
8 | 15 | 3 | 2 | 2 |
Add survey rank and status columns.
Survey Rank | The survey rank is calculated by sorting all surveys with a status of "active" in descending order of functional conversion. The survey with the highest functional conversion gets the lowest (best) rank score. Where functional conversion is tied, the survey with the lowest LOI is prioritized. |
Status | Any surveys that have a functional conversion below 5% are marked as "inactive". |
Example Table
Survey ID | LOI | Global Conversion % | Internal Conversion % | Functional Conversion % | Survey Rank | Status |
|---|---|---|---|---|---|---|
1 | 4 | 20 | NULL | 20 | 2 | Active |
2 | 15 | 8 | NULL | 8 | 6 | Active |
3 | 8 | 20 | 25 | 25 | 1 | Active |
4 | 6 | 15 | NULL | 15 | 3 | Active |
5 | 12 | 2 | 0 | 0 | - | Inactive |
6 | 15 | 8 | 15 | 15 | 4 | Active |
7 | 5 | 11 | 9 | 9 | 5 | Active |
8 | 15 | 3 | 2 | 2 | - | Inactive |
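The ranking logic in this example can be sketched as follows. Survey records are plain dicts mirroring the example tables above; the field names are illustrative, and the 5% inactivity threshold and 20-session rule come from the steps described in this example.

```python
def rank_surveys(surveys, inactive_below=5):
    """Rank surveys as in the example above.

    Each survey dict has 'survey_id', 'loi', 'global_conversion' and
    'internal_conversion' (None until 20 session outcomes are collected).
    Functional conversion prefers internal data; surveys below the
    threshold go inactive, and the rest are ranked by functional
    conversion (descending), breaking ties on LOI (ascending).
    """
    for s in surveys:
        internal = s["internal_conversion"]
        s["functional_conversion"] = (
            internal if internal is not None else s["global_conversion"])
        s["status"] = ("Active" if s["functional_conversion"] >= inactive_below
                       else "Inactive")
        s["rank"] = None
    active = [s for s in surveys if s["status"] == "Active"]
    active.sort(key=lambda s: (-s["functional_conversion"], s["loi"]))
    for i, s in enumerate(active, start=1):
        s["rank"] = i
    return surveys

# Data from the example tables above.
surveys = [
    {"survey_id": 1, "loi": 4,  "global_conversion": 20, "internal_conversion": None},
    {"survey_id": 2, "loi": 15, "global_conversion": 8,  "internal_conversion": None},
    {"survey_id": 3, "loi": 8,  "global_conversion": 20, "internal_conversion": 25},
    {"survey_id": 4, "loi": 6,  "global_conversion": 15, "internal_conversion": None},
    {"survey_id": 5, "loi": 12, "global_conversion": 2,  "internal_conversion": 0},
    {"survey_id": 6, "loi": 15, "global_conversion": 8,  "internal_conversion": 15},
    {"survey_id": 7, "loi": 5,  "global_conversion": 11, "internal_conversion": 9},
    {"survey_id": 8, "loi": 15, "global_conversion": 3,  "internal_conversion": 2},
]
ranked = rank_surveys(surveys)
```

Running this on the example data reproduces the final table: survey 3 ranks first on its internal conversion of 25%, surveys 1 and 4 follow on global data, and surveys 5 and 8 are marked inactive for falling below the 5% threshold.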