Why Users Lie (or Don't Tell the Whole Truth) in Customer Interviews
People often provide answers that make them look good (social desirability bias). Learn how to frame questions to get real, actionable insights.
Conducting customer interviews is a critical step in user-centered design and product development, aimed at building a thorough understanding of potential users' work and needs. It is also a delicate process, akin to excavating a fragile archaeological site where the truth is easily shattered by blunt instruments, or in this case, poorly framed questions. Many founders and design teams walk away from customer conversations with misleading information: false positives that convince them they're on the right path and lead them to over-invest time and resources. This often happens because users, consciously or unconsciously, provide untruthful data.
Several factors contribute to users providing information that is not entirely accurate or truthful:
Social Desirability Bias: People are often conscious of how they are perceived and may withhold information or present themselves and their behaviors in a flattering light. As a result, they might tell you what they think you want to hear rather than the unvarnished truth.
Fear of Judgment or Negative Reflection: Users might avoid discussing aspects that could reflect poorly on them, even if the interviewer isn't looking to judge. This can lead to them omitting crucial details or providing vague answers.
Protecting Privacy: Falsification is a common strategy people use to protect their personal data, especially when they perceive the requested information as sensitive or irrelevant to the conversation's context. They may, for example, provide invalid information, data that is completely untrue but validly formatted, or information that is only partially true (e.g., a city name instead of a full address).
Lack of Trust and Anticipation of Functionality: If users don't trust the interviewer or the system being discussed, they might be hesitant to reveal their true needs. They could even be "frightened by the possibility of ‘biased’ search results when confronted directly" with concepts like adaptive systems, leading them to hold back or give guarded responses.
"Translation Competence" and Tacit Knowledge: Users, particularly experts in their domain, might simplify their complex knowledge into terms they believe the interviewer will understand, rather than articulating the full, precise truth. Additionally, much of an expert's problem-solving knowledge becomes automatic or tacit through extensive use, making it difficult for them to articulate, even if they want to.
Desire for Approval ("The Pathos Problem"): If interviewers explicitly seek approval or expose their ego, participants may feel compelled to offer compliments or "fluffy mis-truths" to be supportive or to end the conversation. This also ties into a general "polite response bias," where people respond politely even to computer surveys.
Overly Optimistic Future Projections: When asked about hypothetical future actions or purchases (e.g., "Would you buy X?" or "How much would you pay for X?"), people tend to be wildly optimistic, leading to worthless "yes" answers and inflated price expectations. "Anything involving the future is an over-optimistic lie".
Unclear Relevance/Context: Participants are more likely to falsify information if they don't perceive the requested data as relevant to the scenario or context.
How to Get the Truth: Strategies and Techniques for Real Insights
Given these challenges, eliciting genuine, actionable insights requires a deliberate and strategic approach:
1. Embrace "The Mom Test" for Question Framing
The core principle is that you shouldn't ask anyone whether your business idea is good. Instead, focus on gathering concrete facts about their lives and worldviews. The "Mom Test" provides three simple rules for crafting questions that even those closest to you can't lie about:
Talk about their life instead of your idea. Avoid mentioning your product or solution too early, as this can bias the conversation.
Ask about specifics in the past instead of generics or opinions about the future. Learn about their actual behaviors and past experiences, as these are harder to lie about. For example, instead of "Would you buy X?", ask "How did you handle X the last time it came up?".
Talk less and listen more. A successful interview means the participant does most of the talking (e.g., 80-90% of the time). Interrupting or dominating the conversation prevents you from gaining valuable insights into their mental model.
2. Detect and Deflect Bad Data
Be vigilant against common forms of untruthful data and guide the conversation back to valuable information:
Deflect compliments: Phrases like "That's really cool. I love it!" are "fool's gold" and provide zero data. Instead of accepting them, deflect by apologizing for "pitch mode" and redirecting to questions about their current situation or problems.
Anchor fluff: Generic claims ("I usually do X") or future promises ("I would definitely buy that") are unreliable. Immediately follow up with questions like "When's the last time that happened?" to get specific, verifiable instances.
Dig beneath ideas, requests, and emotions: Don't just collect feature requests; understand the motivations or "why" behind them. Similarly, if a user expresses strong emotion (e.g., "That's the worst part of my day"), dig deeper to understand the root cause and implications.
Identify if the problem truly matters: Ask about the implications of the problem to determine if it's a minor annoyance or something they would pay to solve. Also, ask "What else have you tried?" or "How are you dealing with it now?" to gauge if they've actively sought solutions. If they haven't tried to solve it, they likely won't buy your solution.
Avoid "premature zooming": Don't dive into the details of a specific problem before confirming that the user considers it a high priority or "must-solve" problem. Start with broader questions about their goals and challenges to understand their overall priorities.
3. Optimize Your Interview Environment and Conduct
The setting and your approach significantly impact the quality of insights:
Interview in the user's natural environment: This provides invaluable contextual cues and allows you to observe unstated behaviors, workarounds, and artifacts (e.g., sticky notes, cable organization) that provide a richer understanding of their world.
Build rapport: Make participants feel comfortable by starting with easy, non-threatening questions, maintaining eye contact, nodding, and acknowledging their responses without judgment. Avoid interrupting or rushing them.
Adopt an "Advisory Flip" mindset: Approach the conversation not as a sales pitch, but as an opportunity to find industry or customer advisors. This shifts the power dynamic, putting you in control and encouraging more objective insights.
Show genuine naïveté: Be open to learning and allow participants to teach you. If you're asking questions that might seem "stupid" from their expert perspective, you're likely on the right track to uncovering their deep knowledge.
Adapt your language: Incorporate terminology and phrases that the user naturally uses to enhance credibility and build rapport, but ensure you understand what new terms mean before using them.
Use probing questions: Have a list of versatile probes like "Tell me more about that," "Can you expand on that?", or "Why is that important to you?" to uncover motivations, mental models, and deeper perceptions.
Leverage silence: Don't rush to fill pauses. An uncomfortable silence will often prompt the participant to offer more information.
Ask for "hidden gems" at the end: Conclude with open-ended questions like "Is there anything else I should have asked?" or "Is there anything we didn’t cover that you expected us to?" These often yield surprising and valuable insights after the formal questions are done.
4. Implement a Structured Process for Consistency and Collaboration
To ensure reliable and actionable data, integrate interviews into a broader, team-oriented research process:
Define clear research goals: Before any interview, specify exactly what you aim to learn. Vague goals lead to irrelevant data.
Prepare and pilot an interview guide: Develop a flexible guide with topics, questions, and probes. Pilot it with colleagues or target users to refine questions and flow.
Interview in teams and debrief: Ideally, have two researchers (one to ask questions, one to take notes). After each interview, debrief with your team to consolidate different impressions, discuss commonalities and contrasts, and identify critical factors. This prevents learning bottlenecks where insights remain siloed in one person's head.
Pre-plan your "3 big questions": Before each set of interviews, determine the three most important, and potentially "scary," questions you need answered—those that could completely change or disprove your business idea. This ensures focus and courage.
Document thoroughly and review: Take good notes, ideally capturing exact quotes, and use shorthand symbols for quick reference (e.g., marks for specifics, feature requests, money, people, and follow-up tasks). Review notes with your team promptly to disseminate learning and update collective beliefs and plans.
By adopting these strategies, you can navigate the inherent biases and complexities of customer interviews. The goal isn't to validate your existing ideas, but to uncover the truth of your users' world—even if that means disproving your initial assumptions. As Rob Fitzpatrick suggests, there's more reliable information in a "meh" response than a "Wow!", and learning that your beliefs are wrong is actually progress toward finding a real problem and a good market.