Partner security: a guide to best practices


Version 1.0

The market research industry, just like all industries that create revenue, is vulnerable to bad actors. The more transparent we are with one another about the threats we face and the more best practices that we share with one another to prevent, stop, and mitigate fraud, the stronger we’ll collectively be as an industry.

This guide documents the best practices we’ve developed, through hundreds of technical integrations with partners over the years, related to security, fraud prevention, and respondent management. We expect this to be a living document. Neither we nor anyone else in our industry will ever completely solve the problem of poor-quality or fraudulent user behavior; we will always discover new threats and better practices to combat them. Our sincere hope is that all readers of this document work with us to improve it over time.

Your Lucid account team can assist with identifying which of these practices are best for you.

Section 1: Link Security

Hashing
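
Later sections of this guide reference Lucid’s SHA-1 link hashing as the mechanism for detecting link manipulation. As an illustration of the general technique, the sketch below assumes a scheme in which a URL-safe base64 HMAC-SHA-1 of the full URL, computed with a shared secret key, is appended as a `hash` query parameter; confirm the exact algorithm, encoding, and parameter name against Lucid’s hashing documentation before implementing.

```python
import base64
import hashlib
import hmac

def sign_url(url: str, secret_key: str) -> str:
    """Append a URL-safe base64 HMAC-SHA-1 of the full URL as a hash parameter."""
    digest = hmac.new(secret_key.encode(), url.encode(), hashlib.sha1).digest()
    signature = base64.urlsafe_b64encode(digest).decode().rstrip("=")
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}hash={signature}"

def verify_url(signed_url: str, secret_key: str) -> bool:
    """Re-sign the URL minus its hash parameter and compare in constant time."""
    base, sep, _ = signed_url.rpartition("hash=")
    if not sep or not base:
        return False
    # base still ends with the '?' or '&' that preceded the hash parameter
    return hmac.compare_digest(sign_url(base[:-1], secret_key), signed_url)
```

Any alteration of the URL (for example, tampering with a respondent ID or status parameter) invalidates the signature, which is what the SHA-1 Link Manipulation Report described in Section 4 surfaces.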

SSL/TLS (HTTPS) Encryption

  • Ensure your site uses HTTPS on all Monetization Link referring URLs and redirect URLs.

Section 2: New User Registration

Authentication

  • Implement multi-factor authentication (email, SMS, etc.)

  • Use OAuth to determine the age of social media accounts. Treat recently created accounts as suspect, as this can indicate they were created for the purpose of fraudulent membership sign-up.

New Member Evaluation

  • Expose new members to a series of quality questions intended to evaluate the member’s quality and expose fraudulent activity. Include measures such as security checkpoints, repeated questions to judge consistency over time, and open-end questions that can be analyzed for gibberish, bot-generated text, matches to past responses, and identical responses to different questions.
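
The open-end checks above can be automated with simple heuristics before any manual review. This is a minimal sketch; the vowel-ratio thresholds and flag names are illustrative assumptions, and production systems would add language detection and bot-text classifiers.

```python
import re

def looks_gibberish(text: str) -> bool:
    """Crude gibberish heuristic: keyboard mashing has an abnormal vowel ratio."""
    letters = re.findall(r"[a-zA-Z]", text)
    if len(letters) < 3:
        return True
    vowel_ratio = sum(c.lower() in "aeiou" for c in letters) / len(letters)
    # Natural English prose is roughly 35-45% vowels; mashed keys are not.
    return vowel_ratio < 0.2 or vowel_ratio > 0.7

def flag_open_ends(answers: list[str], past_answers: set[str]) -> list[str]:
    """Return the reasons a member's open-end answers should be flagged."""
    flags = []
    normalized = [a.strip().lower() for a in answers]
    if any(looks_gibberish(a) for a in answers):
        flags.append("gibberish")
    if len(set(normalized)) < len(normalized):
        flags.append("repeated answer across questions")
    if any(a in past_answers for a in normalized):
        flags.append("duplicate of past response")
    return flags
```
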

Activity Thresholds

  • Ensure that new member activity is limited immediately following registration until their quality can be proven.
    Consider the following limitations:

    • Restrict the number of completes allowed per day

    • Restrict the number of survey entries allowed per day

    • Delay rewards payout until after the Lucid reconciliation window.
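
The activity-threshold rules above reduce to a gate that is checked before each survey entry plus a payout hold. The limits and the 35-day hold below are illustrative assumptions (the guide only requires that payout wait until after the Lucid reconciliation window), so tune them to your panel.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative limits, not Lucid-mandated values.
NEW_MEMBER_DAYS = 30          # probation period after registration
MAX_ENTRIES_PER_DAY = 10      # survey entries allowed while unproven
MAX_COMPLETES_PER_DAY = 3     # completes allowed while unproven
RECONCILIATION_HOLD = timedelta(days=35)  # hold payouts past the reconciliation window

@dataclass
class Member:
    joined: date
    entries_today: int = 0
    completes_today: int = 0

def may_enter_survey(m: Member, today: date) -> bool:
    """Apply tighter entry/complete caps while a member is still unproven."""
    if (today - m.joined).days >= NEW_MEMBER_DAYS:
        return True  # established member: normal limits apply elsewhere
    return (m.entries_today < MAX_ENTRIES_PER_DAY
            and m.completes_today < MAX_COMPLETES_PER_DAY)

def payout_date(complete_date: date) -> date:
    """Delay reward payout until after the reconciliation window closes."""
    return complete_date + RECONCILIATION_HOLD
```
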

Section 3: User Engagement and Management

Incentive Restrictions

  • Set CPI minimums as close to $0.00 as possible. Survey offers with higher CPI minimums, which reward at a higher incentive rate, are more attractive targets for fraud.

  • Establish a daily cap on the number of survey completes for any respondent. The cap value could be a function of the respondent’s quality score: the lower the score, the lower the daily completes cap.

  • Require a period of delay in the payout of rewards for all new and/or low-quality members.

  • Avoid cash rewards. Alternative types of rewards such as access to digital content, access to digital experiences, or other e-gifts, may be less attractive to fraudulent actors.
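
Tying the daily completes cap to the respondent quality score (as suggested above) can be as simple as a tiered lookup. The tier boundaries and cap values below are illustrative assumptions; the guide only requires that lower scores map to lower caps.

```python
def daily_completes_cap(quality_score: float) -> int:
    """Map a 0-100 respondent quality score to a daily completes cap.

    Tier boundaries are illustrative placeholders, not prescribed values.
    """
    if quality_score >= 80:
        return 20
    if quality_score >= 50:
        return 10
    if quality_score >= 25:
        return 3
    return 0  # lowest tier: no completes until quality is re-proven
```
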

User-Facing Language

  • Use respondent-facing language that encourages honest and thoughtful survey-taking, such as:

    • A message prior to entering a survey explaining that researchers grade responses for their honesty, consistency, and thoughtfulness. Example: “Please take your time to answer thoughtfully and truthfully to ensure the most rewarding survey experience. Answers that are not honest or gibberish may result in the suspension of access to survey opportunities.”

User Speeding

  • Establish a workflow by which user sessions resulting in an instantaneous or near-instantaneous survey completion are immediately flagged, and any users who exhibit such behavior are banned. Such a session is, in nearly all cases, an indication of link manipulation: an intentional, nefarious attempt to gain the incentive without participating in the survey.

  • Users with a survey session length below 30% of the median LOI for a specific survey opportunity are highly likely to be reconciled; users who complete a survey in less than 30% of the median LOI are generally considered to be speeding through the survey and unlikely to provide quality, thoughtful answers. Partners utilizing our API product can monitor individual sessions against the survey’s median LOI (returned in our API) and flag users whose session length falls below 30% of the survey’s median LOI.

    NOTE: This strategy is not applicable to suppliers integrated via Link.

  • An abnormally short session length resulting in a successful complete can be an indication of fraud. Establish norms of expected behavior around average survey speed and average user speed within reasonable ratios. Flag users outside of the established normal bounds.

    • The industry standard for identifying speeding (for data-cleaning purposes) is a session length below 30% of the median LOI. A session with a length of interview below 30% of the median LOI is highly likely to be negatively adjusted by the buyer.
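
The 30%-of-median-LOI rule above is straightforward to apply per session. In this sketch, `median_loi_seconds` is assumed to come from the survey’s median LOI as returned by the API (per the note above, this does not apply to Link integrations, which do not receive that value).

```python
def is_speeder(session_seconds: float, median_loi_seconds: float,
               threshold: float = 0.30) -> bool:
    """Flag sessions shorter than 30% of the survey's median LOI.

    The 0.30 threshold is the industry standard for speeding cited in
    this guide; median_loi_seconds is assumed to come from the API.
    """
    if median_loi_seconds <= 0:
        return False  # no baseline yet; cannot judge this session
    return session_seconds < threshold * median_loi_seconds
```
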

Respondent Quality Scores

  • Establish a respondent quality scoring system that calculates a score for each respondent based on their activity. Expose survey inventory according to the respondent’s score, and expose only respondents with certain minimum quality scores to high-CPI surveys. Quality score variables could include SHA-1 hashing failure rate, third-party security service failure rate, speeding trap failure rate, reconciliation rate, and quality survey scores.

Automated Suspension or Disabling

  • Identify and stop fraud in real-time by programmatically suspending or disabling user accounts based on certain factors. Factors may include the respondent’s quality score falling below a minimum, a high number of SHA-1 hash failures, a high number of completes reconciled to a termination, respondent’s overall conversion rate surpassing normal levels, the frequency with which a respondent’s daily completes limit is reached, results from quality surveys, speeding trap failures, etc.
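
The suspension factors listed above can be expressed as a rule table evaluated in real time on each respondent’s rolling statistics. The thresholds below are illustrative placeholders, not Lucid-prescribed values, and the stat names are assumptions about your own data model.

```python
def suspension_reasons(stats: dict) -> list[str]:
    """Evaluate a respondent's rolling stats against suspension rules.

    Returns the list of triggered rules (empty list = keep active).
    All thresholds are illustrative placeholders.
    """
    rules = {
        "quality score below minimum": stats["quality_score"] < 30,
        "excessive SHA-1 hash failures": stats["hash_failures"] > 5,
        "high reconciliation rate": stats["reconciliation_rate"] > 0.10,
        "abnormal conversion rate": stats["conversion_rate"] > 0.60,
        "repeated speeding trap failures": stats["speeding_failures"] > 3,
    }
    return [name for name, triggered in rules.items() if triggered]
```

Returning the triggered reasons, rather than a bare boolean, makes the automated action auditable when a suspended member appeals.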

Section 4: Additional Considerations

Third Party Security Services

  • Utilize third-party services such as White Ops, Forensiq, CAPTCHA, MaxMind, etc. to prevent bad users from entering the system on the front end.

Lucid Report Subscriptions

  • Request that your Lucid Account Team set up a subscription for your system to receive a daily SHA-1 Link Manipulation Report. This report will provide you with a list of respondent sessions for which an invalid hash was detected when the respondent was returned to the Lucid Marketplace from a buyer’s survey. In most cases, this is an indication of attempted fraud. We recommend that you build an automated process that stores the data from this report and restricts a respondent’s participation in Lucid Marketplace surveys if their session data indicates a higher-than-normal rate of SHA-1 link manipulation. Consider putting restricted respondents through additional security checks to prove quality before allowing them to begin attempting surveys again.

  • Monitor the data from the ID Reconciliation Report (also available via subscription). We recommend that you build an automated process that stores the data from this report and restricts a respondent’s participation in Lucid Marketplace surveys if their session data indicates a higher-than-normal rate of reconciliation. Consider putting restricted respondents through additional security checks to prove quality before allowing them to begin attempting surveys again.

  • We highly recommend that suppliers use the feedback contained in these reports, in conjunction with independent respondent monitoring, to make informed, conservative decisions about user access to inventory.
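
The automated ingestion recommended above amounts to aggregating per-respondent flag rates from each daily report and restricting the outliers. This sketch assumes a simplified CSV with `respondent_id` and `flagged` (1 = invalid hash or reconciled session) columns; the real report layouts may differ, and the 5% "higher than normal" threshold is an illustrative assumption.

```python
import csv
import io

def respondents_to_restrict(report_csv: str, rate_threshold: float = 0.05) -> set[str]:
    """Aggregate a daily report and return respondent IDs to restrict.

    Assumes columns `respondent_id` and `flagged` (1 = invalid hash or
    reconciled session); the actual report schemas may differ.
    """
    totals: dict[str, list[int]] = {}
    for row in csv.DictReader(io.StringIO(report_csv)):
        sessions, flagged = totals.get(row["respondent_id"], (0, 0))
        totals[row["respondent_id"]] = [sessions + 1, flagged + int(row["flagged"])]
    return {rid for rid, (n, bad) in totals.items() if bad / n > rate_threshold}
```
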

Filtering and Flagging

  • Periodically analyze trends among users flagged as poor quality. These trends may expose common attributes across users that could be used to build additional quality flags.

Affiliate Best Practices

  • Ensure upstream user recruitment sources, such as publishers, are vetted for quality before sending in high volumes of entrants from that source.

  • Maintain the ability to trace users back to specific publishers or referral sources so that the quality of a recruitment source can be continuously monitored.

  • Attempt to identify users as unique across publishers and follow recommended user-level tracking measures.
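
Continuous monitoring of recruitment-source quality, as the bullets above recommend, only requires that each registration carry a publisher identifier (for example, a sub-ID on the referral link) and that quality flags roll up to it. A minimal sketch, with assumed metric names:

```python
from collections import defaultdict

class PublisherQuality:
    """Roll per-member quality flags up to the recruiting publisher.

    Assumes each registration carries a publisher_id (e.g. a sub-ID on
    the referral link); metric names here are illustrative.
    """
    def __init__(self) -> None:
        self.stats = defaultdict(lambda: {"members": 0, "flagged": 0})

    def record(self, publisher_id: str, flagged: bool) -> None:
        s = self.stats[publisher_id]
        s["members"] += 1
        s["flagged"] += flagged

    def flag_rate(self, publisher_id: str) -> float:
        s = self.stats[publisher_id]
        return s["flagged"] / s["members"] if s["members"] else 0.0
```

A publisher whose flag rate climbs above your norm can then be throttled or cut off at the source, rather than banning its members one at a time.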