Navigating the Road to High-Quality Data: Screen

September 26, 2017

Previously, we touched on the importance of designing surveys that engage participants and support a good survey-taking experience – and, as a result, yield high-quality data. In part two of our four-part series, we discuss screening. To design an effective screener, the researcher must answer this simple question: Who do we want to reach? Then, screening questions are placed at the beginning of a survey to determine who is – and isn’t – the right fit for a particular study.

Imagine for a moment you’re cruising the open road when a sudden downpour of rain impedes your vision. If you allowed every raindrop to stay on your windshield, your view would be obstructed. Instead, you turn on your windshield wipers and navigate with clear vision.

Similarly, correctly screening out undesirable survey participants “wipes away” unwanted noise that can cloud data, leaving behind a sharper path to data-driven insights. The importance of the market research screener cannot be overstated.

Effective screening asks the right questions, allowing people to disclose information that identifies them as outside your intended universe. Poor screening may result in a mid-survey realization that the participant simply isn’t qualified to answer a particular set of questions. Some may abandon the survey; others may try their best to finish, leaving both you and the participant wondering how they got into the survey to begin with. Poorly screened survey participants are like backseat drivers: wholly unwelcome, and sometimes distracting, passengers.

To ask the right type of screening questions, follow these four best practices:

  1. Begin broadly
  2. Don’t direct
  3. Allow accuracy
  4. Check knowledge

Begin broadly

In consumer (B2C) surveys, begin broadly with category-level questions first, followed by brand or product-level questions. Include the target answer among other dummy options to conceal the survey topic.

For example, if participants need to have purchased a specific tax software product, don’t begin by asking:

Which of these tax software products have you purchased?

Instead, begin at the category level:

Which of these have you purchased in the last 12 months?

  • Tax software
  • Homeowners insurance
  • Computer / laptop
  • Car or truck
  • None of the above

Then, understand which specific products they are familiar with or have considered. Finally, ask about purchase behavior.

Business (B2B) surveys should take the same approach, starting with broad firmographic questions – such as employment status and industry type – before digging into more specific role or decision-making questions.

Any business attribute is subject to change at any point, so confirm it in the screener. Begin by confirming participants have the correct employment status, industry, and department to narrow the audience, and ask only those qualified participants about more specific attributes.

Beginning broadly serves two purposes. First, it guides participants through the survey, moving them from easier, general questions to more detailed, specific questions once they are engaged. This approach establishes a logical survey flow. Second, beginning broadly narrows the audience so that only the appropriate segment answers the more specific screening questions. Confirm screening criteria regardless of what targeting or panel profiling may be available. A flow from broad to specific also supports another key practice: it avoids giving the impression of a “right” answer.
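
For teams who script their own survey logic, a minimal sketch of this broad-to-specific flow might look like the following. The question text mirrors the example above; the routing labels and code are purely illustrative, not taken from any particular survey platform.

```python
# Illustrative broad-to-specific screener flow. The target category is hidden
# among dummy options so the survey topic is not revealed, and only the
# qualified segment goes on to the more specific brand-level questions.

CATEGORY_QUESTION = {
    "text": "Which of these have you purchased in the last 12 months?",
    "options": [
        "Tax software",          # target category, concealed among dummies
        "Homeowners insurance",  # dummy
        "Computer / laptop",     # dummy
        "Car or truck",          # dummy
        "None of the above",
    ],
}

def next_step(category_answers):
    """Route a participant based on the broad category-level question."""
    if "Tax software" in category_answers:
        # Only this narrower segment sees brand- and purchase-level questions.
        return "ask_brand_and_purchase_questions"
    # Everyone else ends in the screener instead of guessing later in the survey.
    return "terminate"

print(next_step({"Tax software", "Car or truck"}))  # -> ask_brand_and_purchase_questions
print(next_step({"Homeowners insurance"}))          # -> terminate
```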

Don’t direct

Discussions of quality screener design often focus on avoiding bad actors, but the true challenge is ensuring that participants answering in good faith are not unintentionally steered toward the “right” answer by the structure of the question.

Consider this research question: By how much can Company A raise the price of their consumer tax software without negatively impacting purchases? To answer this, Company A will need to understand not only price sensitivity, but also what key purchase criteria go into the decision when a consumer buys household tax software. This means they will need to speak with decision makers. A poorly designed screening question asks directly, with no opportunity for nuance: Are you involved in making decisions for tax software in your household? A respondent who has negligible involvement may have this internal debate:

Since I say, “that sounds fine,” when my spouse presents the decision to me, I guess I am involved!

This question format creates two issues:

  1. This implies the correct answer is “yes.”
  2. This tells the participant the precise topic of study.

When the participant reaches the core survey questions, they will understand the key purchasing criteria minimally at best – and may not truly understand pricing tradeoffs, either. As a result, their responses will simply be best guesses.

An easy redesign of this screening question both masks the survey topic and allows participants to accurately report their role in making decisions. For example, participants could instead answer a small, manageable grid that includes a few dummy options.

The question becomes: What role do you play in making decisions to purchase the following items in your household?

Given our previous emphasis on mobile-friendly design in this series, it’s important to highlight that the simplicity of this grid – with no more than four columns and succinct column text – should ensure it is mobile friendly.

If the technology used to host the programmed survey cannot display a grid in a device-agnostic manner (for example, on a mobile phone the full grid is not visible on screen), another option is to ask the question more neutrally as a single-select question with multiple levels of decision making – for example, response options ranging from “I make this decision on my own” to “I am not involved in this decision.”

Allow accuracy

Include an option for ‘none of the above’ or ‘I don’t know’ so participants are not forced to make an inaccurate selection. For example, a question that asks which tax software vendor was used most recently may provide a list of vendors, along with both ‘none of the above’ and ‘I don’t know.’ The ‘none of the above’ accounts for the possibility that a list is not comprehensive. ‘I don’t know’ should be included when it is plausible a respondent may not have the answer.

Next, determine if either of those answers should preclude someone from taking the survey. If so, apply a terminate point to end their survey in the screener. For example, if participants don’t use any vendors in the competitive set, you may not be interested in their responses. Or, if participants aren’t aware of who that vendor was, you may not feel confident in their true involvement in the decision.
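
For illustration, a terminate point on these answers might be expressed in code roughly like this. The vendor names and the rule itself are hypothetical and should reflect your own study objectives.

```python
# Illustrative terminate-point logic for a "most recent vendor" screener question.

COMPETITIVE_SET = {"Vendor A", "Vendor B", "Vendor C"}  # placeholder vendor names

def screener_disposition(most_recent_vendor):
    """Decide whether a participant continues past the screener."""
    if most_recent_vendor == "None of the above":
        # Doesn't use any vendor in the competitive set: likely out of scope.
        return "terminate"
    if most_recent_vendor == "I don't know":
        # Unsure who the vendor was: casts doubt on real involvement in the decision.
        return "terminate"
    if most_recent_vendor in COMPETITIVE_SET:
        return "continue"
    # Any other value points to a programming or data problem worth reviewing.
    return "flag_for_review"

print(screener_disposition("Vendor B"))      # -> continue
print(screener_disposition("I don't know"))  # -> terminate
```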

Check knowledge

Business (B2B) survey screeners should identify people who possess the basic knowledge expected of an individual in their position or industry space, and then confirm across multiple questions that they meet the target criteria.

A tax software survey aimed at professional accountants may look to high-level decision-makers for overall industry trends, but may also aim to understand more functional vendor decision-making at an influencer level. Participants should be screened for knowledge in the space overall, and then assessed for the role they play in decision making.

A granular picture of decision-making roles can also drive routing – the process that assigns individual participants to appropriate, relevant survey questions – within the survey. The ability to skip questions based on the level of decision-making ensures a survey can capture detailed responses only from knowledgeable participants.
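
As a rough sketch, routing on the decision-making role captured in the screener could look something like this. The role labels and survey module names are illustrative only.

```python
# Illustrative routing map keyed on the decision-making role from the screener.

ROUTING = {
    "final decision maker": ["industry_trends", "vendor_selection", "pricing_detail"],
    "influencer":           ["vendor_selection", "pricing_detail"],
    "aware, not involved":  ["industry_trends"],
}

def modules_for(role):
    """Return the survey sections a participant should see; anything else is skipped."""
    return ROUTING.get(role, [])  # unknown or unqualified roles see nothing

print(modules_for("influencer"))  # -> ['vendor_selection', 'pricing_detail']
```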

On the consumer side, the depth of knowledge required will depend on the objectives of the research. For example, it may be of interest to understand general impressions from participants for brand or product awareness, even if they have not purchased or used it. However, asking a participant detailed questions about satisfaction or specific product attributes for something they have never purchased or used will result in speculative data. Avoid the opportunity for dicey responses by skipping questions that are inappropriate for participants who are aware of a product, but have not actually used or purchased it.

While checking applicable knowledge is a key component of screening for the right participant, avoid the temptation to build a check so elaborate that it takes several minutes of a participant’s time – or worse, one so complex only a select few can move beyond it. That moves from confirming topic knowledge to requiring expertise, and narrows your data set to a single, highly homogeneous pool. To return to our windshield wipers, consider the balance the driver strikes between clearing rain from the windshield and obscuring the view if the wipers themselves become overactive. Either extreme distracts from proper visibility.

Although a screener may represent a small portion of the questionnaire, it plays a substantial role in the research’s success. Apply these best practices, or dig deeper with these 10 Best Practices for Survey Screening, to ensure this pivotal questionnaire component will keep your vision clear for the real destination: high-quality data.

Stay tuned for the next installment of our high-quality data series: evaluating the data.

This blog is part 2 of a 4-part blog series by Research Now. To check out parts 1, 3 and 4, click the hyperlinks.

Free Guide: Navigating the Road to High-Quality Data

At Research Now, we strive to follow our Research Quality GPS to ensure our clients, our research team, and our participants all have a smooth journey, collect the highest quality data possible, and avoid any bumps in the road. Navigating the three main points in your research journey – designing, screening, and evaluating data – equips research projects to efficiently obtain the highest quality data.

To learn best practices in each of the three data collection areas, download your free guide by clicking below.

Download Guide
