5 Common Survey Question Mistakes That Invalidate Your Research (And How to Fix Them)
The hidden flaws that turn well-intentioned research into misleading data
Have you ever completed a survey and found yourself thinking, "That's not a fair question" or "None of these options really fits my situation"? I have seen firsthand how easily survey questions can go wrong, and how these flaws can completely undermine research findings.
The truth is that poorly designed questions don't just reduce response quality; they can actively mislead you, leading to decisions based on fundamentally flawed data.
In this article, I'll share five common survey question mistakes that I see all the time, why they are problematic and, most importantly, how to fix them.
Mistake #1: Overwhelming Cognitive Load
Perhaps the most common mistake in survey design is creating questions that place too much mental burden on respondents. When your brain is overloaded, it takes shortcuts, and these lead to less accurate responses.
The Problem
Before (Problematic Question):
Please describe all the factors that influenced your decision to purchase an iPad, including brand reputation, technical specifications, price considerations and environmental sustainability concerns.
This question asks respondents to:
Recall and evaluate far too many different decision factors simultaneously
Oversimplify a potentially complex decision-making process
Consider factors that might not have been relevant to their decision
When faced with this cognitive overload, respondents typically focus on just one or two factors and ignore the rest. This means they will provide a superficial response - if they provide one at all. The greatest risk of cognitive overload is that participants simply abandon the survey altogether!
The Solution
After (Improved Question):
What were the two most important factors that influenced your decision to purchase an iPad?
This question:
Focuses on a manageable number of factors
Acknowledges that decision-making typically prioritises a few key elements
Reduces mental effort required for a quality response
Note that this question could be closed-ended (e.g. a list of pre-defined options from which the respondent would need to pick up to two) or open-ended (meaning a fully open question). Either option could work, depending on the specifics of the survey.
Mistake #2: Double-Barrelled Questions
Double-barrelled questions contain multiple separate issues but allow for only one response. They force respondents to provide a compromised answer that may not accurately reflect their views on either component.
The Problem
Before (Problematic Question):
How helpful and timely was the technical support you received?
This question combines two distinct elements:
Helpfulness of support (Did it solve the problem?)
Timeliness of support (Was it provided quickly?)
A respondent might have received extremely helpful support that took far too long to arrive. Expressing this nuance with a single response is possible, but tricky, and not all respondents would spot that multiple aspects are being investigated. Additionally, you (as the data analyst) will have to disaggregate responses based on what element(s) they cover. Definitely a bad idea!
The Solution
After (Improved Questions):
How helpful was the technical support in resolving your issue?
How satisfied were you with the response time of the technical support team?
By separating these distinct concepts, you allow respondents to answer each element on its own, which captures accurate, nuanced data and streamlines the analysis.
Mistake #3: Social Desirability Bias
People naturally want to present themselves positively, even in anonymous surveys. Questions that trigger this tendency lead to responses that reflect social norms or expectations rather than actual behaviours or attitudes.
The Problem
Before (Problematic Question):
Do you recycle all recyclable materials from your household?
This question:
Uses the absolute term "all," which implies a clear social expectation
Creates pressure to provide the environmentally conscious answer
Triggers guilt or defensiveness
Results in overreporting of socially desirable behaviour
The Solution
After (Improved Question):
In the past month, approximately what proportion of your household's recyclable materials did you place in recycling bins?
🔘 0-25%
🔘 26-50%
🔘 51-75%
🔘 76-100%
🔘 I'm not sure
🔘 Recycling is not available in my area
This redesigned question uses a specific, recent timeframe to improve recall accuracy and then offers a range of non-judgmental response options. The percentage ranges normalise variation in behaviour, and there are also additional responses that acknowledge external factors. Overall, it will result in more honest reporting and better data.
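When you come to analyse a banded question like this, each percentage range can be mapped to an ordinal code (or a midpoint estimate), while the "not sure" and "not available" options are kept off the scale and reported separately rather than averaged in. Here is a minimal sketch in Python; the response strings and codes mirror the example above, but the exact labels in your survey tool may differ:

```python
# Map banded recycling responses to ordinal codes for analysis.
# Labels must match the survey's response options exactly.
RECYCLING_CODES = {
    "0-25%": 1,
    "26-50%": 2,
    "51-75%": 3,
    "76-100%": 4,
}

def code_response(response: str):
    """Return an ordinal code for a percentage band, or None for
    the non-scale options ("I'm not sure", "Recycling is not
    available in my area"), which should be tallied separately."""
    return RECYCLING_CODES.get(response)

responses = ["76-100%", "I'm not sure", "26-50%"]
coded = [code_response(r) for r in responses]
# coded -> [4, None, 2]
```

Keeping the non-scale options as `None` (rather than forcing them into the 1-4 range) preserves the distinction between "low recycling" and "couldn't answer", which matters when you interpret the results.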
Mistake #4: Inadequate Response Options
Even perfectly worded questions fail when the response options you provide don't allow respondents to accurately express their reality. This forces people to select options that don't represent their true situations, creating fundamentally flawed data - and you can’t even tell once you’ve captured it!
The Problem
Before (Problematic Question):
What is your primary method of commuting to work?
🔘 Personal car
🔘 Public transportation
🔘 Bicycle
🔘 Walking
These options:
Don't include important alternatives like carpooling, ride-sharing or motorcycles
Fail to account for remote workers who don't commute
Force respondents into categories that may not match their actual behaviour
The Solution
After (Improved Question):
What is your primary method of commuting to work?
🔘 Drive alone in personal vehicle
🔘 Carpool/ride-share
🔘 Public transportation (bus, train, subway, etc.)
🔘 Motorcycle or scooter
🔘 Bicycle
🔘 Walking
🔘 I primarily work remotely
🔘 Other (Please specify): _______________
This revised version provides more comprehensive options that reflect common commuting methods, includes an option for remote workers and offers an “other” option for those who don't fit any of the listed categories. This ensures that all participants can provide an accurate response and leaves you with far more reliable data. Here’s another tip: generative AI is a great tool for checking that you have covered all common options in this type of question - just ask your preferred chatbot!
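The “other” option also gives you a built-in quality check: if a large share of respondents pick it, your closed list is probably missing a common category. A small Python sketch of that audit is below; the threshold and the shape of the response data (choice plus free text) are illustrative assumptions, not a fixed rule:

```python
from collections import Counter

def audit_other_responses(answers, other_label="Other", threshold=0.05):
    """Flag when too many respondents chose the "Other" option,
    a sign that a common category is missing from the closed list.
    `answers` is a list of (choice, free_text) pairs; the 5%
    threshold is an illustrative default, not a standard."""
    total = len(answers)
    counts = Counter(choice for choice, _ in answers)
    other_share = counts[other_label] / total if total else 0.0
    free_text = [text for choice, text in answers if choice == other_label]
    return other_share > threshold, free_text

answers = [
    ("Bicycle", ""),
    ("Other", "Electric skateboard"),
    ("Walking", ""),
    ("Other", "Ferry"),
]
flag, texts = audit_other_responses(answers)
# flag -> True: 2 of 4 responses are "Other", well above the threshold
```

Reading the collected free-text answers then tells you which new category to add in the next iteration of the survey.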
Mistake #5: Vague or Unspecific Questions
Vague questions create a situation where each respondent interprets what you're asking differently, making the resulting data essentially meaningless for comparison or analysis.
The Problem
Before (Problematic Question):
Is the app easy to use?
This question is problematic because:
"Easy to use" is subjective and undefined, so different respondents are likely to focus on completely different elements
The question doesn't specify which tasks or features to evaluate
There's no context for comparison (easier than what?)
The Solution
After (Improved Question):
Please rate how easy or difficult it was to complete the following tasks in the app, using a scale from 1 (very difficult) to 5 (very easy).
Creating a new account
Uploading a profile photo
Finding and connecting with friends
Posting a new message
Adjusting privacy settings
This version specifies particular tasks to evaluate and provides a clear rating scale with defined endpoints, which could be implemented as a radio button grid. In practice, it breaks down the abstract idea of “ease of use” into concrete actions, which will help ensure that all respondents evaluate the same aspects.
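A grid like this is also straightforward to summarise: one mean (or median) per task, sorted so the hardest tasks surface first. A minimal Python sketch follows; the task names come from the example above, while the sample ratings are invented for illustration:

```python
# Summarise a 1-5 ease-of-use grid: mean rating per task,
# lowest first, so problem tasks surface at the top.
# Sample ratings are illustrative, not real data.
ratings = {
    "Creating a new account": [5, 4, 5, 4],
    "Uploading a profile photo": [3, 2, 4, 3],
    "Finding and connecting with friends": [4, 4, 3, 5],
}

def task_means(grid):
    """Return (task, mean rating) pairs sorted from hardest to easiest."""
    means = {task: sum(vals) / len(vals) for task, vals in grid.items()}
    return sorted(means.items(), key=lambda kv: kv[1])

for task, mean in task_means(ratings):
    print(f"{task}: {mean:.2f}")
```

Because every respondent rated the same concrete tasks on the same scale, these per-task averages are directly comparable - exactly what the vague "Is the app easy to use?" question could never give you.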
The Compound Effect of Question Flaws
While each of these mistakes is problematic individually, they can also appear in combination, multiplying their negative impact. Consider this real-world example:
Problematic Survey Question:
Would you agree that our new online shopping platform is user-friendly and offers better product selection than other websites you've used?
This single question manages to include:
Double-barrelled structure (user-friendliness AND product selection)
Leading language ("would you agree")
Vague terminology ("user-friendly")
Comparison without context (which "other websites"?)
Social desirability bias (pressure to be positive about the new platform)
It’s no wonder survey data sometimes leads organisations astray! The consequences of these mistakes extend far beyond methodological concerns. Organisations usually launch surveys to address specific issues or challenges, investing time and money in their design and deployment. If a survey delivers flawed data, not only is that investment largely wasted, but organisations can also find themselves making strategic decisions based on inaccurate information. Meanwhile, the real issues (the ones you would have identified with properly worded questions) remain unidentified and unaddressed.
Moving Forward
While these five mistakes are among the most common, they represent just a fraction of the question design principles that separate excellent research from misleading data.
Effective survey design is both an art and a science: it requires technical knowledge of methodological principles alongside a good understanding of human psychology and behaviour. When you get it right, the result is reliable data that leads to confident decisions and meaningful improvements.
In my experience, investing time in question design and iterative improvement (including through feedback from your colleagues) consistently delivers a clear return on investment. The extra hour spent refining your questions can save weeks of grappling with ambiguous or misleading data later.
The next time you're designing a survey, remember these five common mistakes and how to avoid them. Your respondents will thank you, and, more importantly, you will get the quality data you need to make informed decisions.
What survey question mistakes have you encountered in your work or as a survey respondent? Share your experiences in the comments!
I'm Andrea, a management consultant with over a decade of experience across industry and academia. I work with commercial, non-profit, academic and government organisations worldwide, helping them capture meaningful insights through mixed methods research.
I write about practical frameworks to help you discover what others miss. My main goal is to translate complex concepts into techniques that readers can use immediately.