Strategic User Interview Guide

What if those feature requests are solving the wrong problems?

The Problem

Your users say: "We need a dashboard." What they're actually trying to do: Check progress quickly and prep for meetings.

Your users say: "Add more customization." What they're actually trying to do: Make it work for their specific workflow without breaking everything.

Your users say: "Integrate with Google Classroom." What they're actually trying to do: Stop entering data twice and sync grades automatically.

When you build the solution users ask for instead of digging into the problem, you waste development time, disappoint users, and miss the real opportunity hiding behind their request.

The Solution: Strategic User Interviewing

Learn to dig past what users ask for to find what they actually need—so you build the right things, not just the requested things.

Who is this for?

Product managers trying to prioritize a roadmap full of conflicting requests

UX researchers running discovery interviews

Customer success teams drowning in feature requests

Designers who need to understand the "why" behind feedback

Founders talking to users about product-market fit

THE STRATEGIC USER INTERVIEW GUIDE

From Feature Requests to Real Problems

Stop building what users ask for. Start solving what they actually need.

A practical guide for product managers, UX researchers, customer success teams, and anyone who talks to users about what they need.

How to Use This Guide

Read this guide once to get the concepts. Then use specific sections as you need them:

  • The 8 Principles → Core philosophy for approaching user conversations

  • Interview Script Template → Fill-in-the-blank guide for your next interview

  • Red Flags Decoder → Quick recognition of when you're off track

  • Practice Scenarios → Work through real examples to build your skills

  • Synthesis Worksheet → Translate requests into needs systematically

  • Quick Reference → One-page reminder to keep at your desk

Share the templates with your team. Everyone who talks to users should know how to dig below feature requests—product managers, designers, customer success, support, sales.

Practice with the scenarios. They're starting points. Add your own as you encounter them.

Why This Matters

One Feature Request. Seven Different Needs.

Here's what happens:

A teacher emails support: "We need to create assignments to override the suggestions."

A principal tells your sales rep: "We need to override the suggested learning path with manual assignments."

A district admin asks in a demo: "Can we manually assign different skills?"

Every channel says the same thing: Build the override feature. Now.

So you do. You spend weeks rebuilding what existed in the old platform. You ship it.

And then only a fraction of users actually use it. The complaints don't stop. New feature requests come in. You're confused—you gave them exactly what they asked for.

Here's what actually happened: You took feature requests at face value.

The Real Story

I watched this unfold at an EdTech company rebuilding a math diagnostic platform. Teachers, principals, and admins all wanted the same feature from the old product—the ability to "override" the algorithm that suggested which skills students should work on next.

Sales wanted it. Support wanted it. Users wanted it.

When we dug deeper, we discovered something critical: that single feature request was masking seven completely different underlying needs:

  1. The skills being suggested to students seem off to me; I don't trust the algorithm → Need: Better diagnostic accuracy and a stronger suggestion algorithm

  2. I want to use the app for "exit slips" based on the lesson I taught today → Need: Calendar-based exit slip system (skips the "lesson" and goes straight to assessment, with a remediation lesson if necessary)

  3. I want everyone working on the same lesson at the same time → Need: Calendar-based curriculum-alignment tool where students are held to a gated learning progression for a specific skill

  4. I don't want everyone working individually. I want three or four small groups of students with similar needs → Need: Small-group feature that suggests groups based on teacher preferences for group size and what students need to work on

  5. I want students to work on filling in the gaps related to what I'm teaching in class right now → Need: Calendar-based system where the teacher selects the skill being worked on in class and the algorithm suggests supporting skills for that target skill

  6. I want students to fill in gaps for the unit we're teaching in class → Need: Calendar-based system where teachers select the unit being worked on in class and the algorithm suggests supporting skills for that unit

  7. I want students to work on skills related to specific standards → Need: Standards-based system where the algorithm suggests skills for that standard

Building the "override" feature would have solved exactly one of those seven needs (#3).

Without understanding what users actually needed, the team would have spent weeks building something manually intensive that still left six out of seven user needs unmet.

Here's what we did instead. Once we understood those seven different needs, we prioritized by:

  • Which were most common

  • Whether one solution could serve multiple needs

  • Business impact and technical feasibility

The result? A more targeted solution that actually solved the problems users had—not just the feature they requested.
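If it helps to make that prioritization concrete, here is a toy weighted-scoring sketch in Python, loosely following the three criteria above. Every need name, weight, and score is invented for illustration; treat it as a starting point to tune, not a formula.

    # Toy prioritization of underlying needs. All names, weights, and
    # scores are invented; adjust them to your own context.
    needs = {
        # need: (how_common 1-5, needs served by one solution, impact 1-5, feasibility 1-5)
        "calendar-based skill targeting": (5, 3, 4, 3),
        "small-group suggestions":        (3, 1, 4, 4),
        "standards-based suggestions":    (2, 1, 3, 5),
    }

    def score(common, served, impact, feasibility):
        # Weight frequency and multi-need coverage more heavily.
        return common * 2 + served * 3 + impact + feasibility

    for need, vals in sorted(needs.items(), key=lambda kv: score(*kv[1]), reverse=True):
        print(f"{score(*vals):>3}  {need}")

The exact weights matter less than the exercise: once needs (not features) are what you score, solutions that serve several needs at once rise to the top.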

The Cost of Taking Feature Requests at Face Value

Here's what happens when you build what users ask for without digging deeper:

❌ You waste development time on features that don't solve the actual problem

❌ You create feature bloat—multiple band-aid solutions instead of one fix for the root cause

❌ You miss chances to solve problems users couldn't articulate

❌ You disappoint users who still struggle after you "gave them what they wanted"

❌ You build technical debt by shipping the first solution that came to mind

❌ You lose competitive advantage by copying features instead of understanding needs

Why Humans Are Terrible at Communicating Their Needs

Here's what I've learned after 15 years of talking to users in educational settings:

The easiest problem to articulate is often not the most important one.

What comes to mind first is rarely what matters most. Users mention the surface-level annoyance because it's fresh, not because it's the core issue.

Users base feature requests on things they've seen before.

"I want it to work like [other product]" assumes that other product's feature was designed for the same problem AND that it was the right solution. Neither is guaranteed.

Users know the symptoms, not the diagnosis.

They feel the pain. They experience the frustration. But they haven't had time to analyze the root cause—they're too busy working around it.

First-thought solutions beat best solutions.

When asked "what would help?", people suggest the first workaround that comes to mind, not the result of careful analysis of all possible approaches.

The questions we ask shape the answers we get.

"Would you use feature X?" gets very different responses than "Tell me about the last time you struggled with Y." Both are trying to understand needs, but one leads to actual insights.

What This Guide Will Help You Do

This is a practical playbook. I built it from hundreds of user interviews where I had to turn vague requests into product decisions.

Here's what you'll learn:

Ground interviews in past behavior instead of hypothetical preferences

Dig beneath surface requests to discover actual needs

Ask follow-up questions that get past rehearsed answers

Recognize red flags when you're getting solutions instead of problems

Translate user requests into actionable insights

Avoid interviewing mistakes that bias your research

Synthesize findings into needs you can prioritize

Who This Guide Is For

Product Managers needing to validate roadmap priorities with real user needs

UX Researchers conducting discovery interviews and usability studies

Customer Success Teams fielding feature requests and needing to dig deeper

Designers trying to understand the "why" behind user feedback

Founders talking directly to users about product-market fit

Anyone responsible for understanding what users actually need (not just what they say they want)

A Note on "User Research"

You might think user research is something that only happens at specific project phases, or only by dedicated research teams.

That's not how it works in reality.

User research happens every time someone on your team talks to a user:

  • Support calls

  • Sales demos

  • Onboarding sessions

  • Feedback surveys

  • Feature request emails

  • Conference conversations

  • Twitter DMs

Every user-facing person should be equipped to be a user researcher—to dig below the surface of feature requests and understand actual needs.

That's what democratized user research means. And that's what this guide enables.

What Makes This Different

Most user research guides focus on formal research methods: recruiting, protocols, analysis techniques.

This guide focuses on the conversation itself—the questions you ask, the follow-ups you make, the red flags you notice, the patterns you recognize.

Because you can have the perfect research protocol, but if you don't know how to dig below "I need a dashboard" to discover what decisions they're actually trying to make, your research won't lead to the right product decisions.

This is about the skill of strategic questioning.

And it's a skill you can develop with practice.

Part 1: The Problem

Why Users Can’t Tell You What They Need

Here's the uncomfortable truth: humans are terrible at communicating what we actually need.

Not because we're difficult. Not because we're lying. It's cognitive science—fundamental limitations in how memory, attention, and communication work.

You can be completely user-centered, ask users what they want, build exactly what they said, and still end up with a product that doesn't work.

Here's why.

Understanding these limitations is the first step to asking better questions.

1. The easiest problem to articulate is often not the most important

What happens:

When you ask "What problems are you experiencing?", users tell you what's fresh in their mind, not necessarily what matters most.

Why it happens:

We have recency bias. We remember what happened today better than what happened last month. The small annoyance from an hour ago feels more urgent than the systemic inefficiency we've learned to work around.

Real example:

A teacher complains: "The print button doesn't work in Safari."

Seems straightforward. Fix the button, right?

But when you dig deeper: "Tell me about the last time you tried to print something. What were you working on?"

Turns out they don't actually need to print—they need to send student progress reports to parents. They're trying to print because it's the only way to export data from the system. The real need is parent communication functionality, not a print button.

The button was easy to articulate. The underlying need required deeper conversation.

2. We base requests on things we've seen before

What happens:

Users say "I want it to work like [other product]" or "Add the feature that [competitor] has."

Why it happens:

Humans pattern-match. When we encounter a problem, we recall similar situations and apply those solutions. But we're making two huge assumptions:

  1. That the other product's feature was designed to solve the same problem we have

  2. That it was the right solution to that problem

Neither assumption is guaranteed to be true.

Real example:

District administrator requests: "We need a dashboard like the one in [Competitor Product]."

When you ask: "What do you like about how their dashboard works?"

Response: "Well, I've never actually used it, but I saw it in a demo and it looked professional."

They don't need that specific dashboard. They need to appear competent and data-driven to their superintendent. The real need might be better met by automated reports, executive summaries, or presentation-ready visualizations—not necessarily a real-time dashboard.

3. We know the symptoms, not the diagnosis

What happens:

Users experience pain or frustration but haven't had time to analyze root causes. They're too busy dealing with the symptoms.

Why it happens:

Root cause analysis takes time and distance from the problem. Users are in the middle of their work, hitting obstacles, and creating workarounds. They feel the friction, but they haven't stepped back to diagnose why it exists.

Real example:

Principal says: "Teachers are asking for more training on the platform."

Surface interpretation: Schedule training sessions.

But when you ask: "Tell me about a specific time a teacher came to you confused. What were they trying to do?"

Turns out: Teachers aren't confused about how to use the platform—they're confused about when to use it. The platform doesn't integrate with their existing workflow. They need it to fit into their daily routine, not more training on isolated features.

The symptom was "need training." The diagnosis was "workflow integration gap."

4. First-thought solutions beat best solutions

What happens:

When asked "What would make this easier?", users suggest the first workaround that comes to mind, not the result of careful analysis.

Why it happens:

Generating and evaluating multiple solutions takes cognitive effort. In the moment, we suggest whatever jumps to mind first. It's not necessarily wrong—but it's definitely not comprehensive.

Real example:

Teacher requests: "Add a 'save for later' feature for assignments."

First-thought solution from their perspective: Create a holding area for incomplete work.

But when you dig into their workflow: "Walk me through the last time you started an assignment but didn't finish it."

You discover they're interrupted constantly. Phone calls from parents. Students arriving with questions. Fire drills. The real need is resumability—being able to pick up exactly where they left off across devices, potentially days later, without losing context.

"Save for later" is one solution. But auto-save with cross-device sync, breadcrumb trails showing where they were, and smart defaults that remember their preferences might be better solutions to the underlying need.

5. How we ask questions shapes the answers we get

What happens:

"Would you use feature X?" gets very different responses than "Tell me about the last time you struggled with Y."

Why it happens:

Leading questions bias responses. Hypothetical questions trigger aspirational thinking rather than grounded reality. The way we frame questions activates different cognitive processes.

Real example:

❌ Biased question: "Would you use an AI-powered grading assistant?"
Response: "Oh yes, that would be amazing!" (They're imagining an ideal future, not evaluating against current reality)

✅ Strategic question: "Tell me about the last time you graded a set of essays. Walk me through your process."
Response: "Well, I need to give feedback that's personalized to each student's specific misconceptions. I can't use generic comments. And I need to track common errors across the class to plan my next lesson..."
(Now you understand the constraints and actual workflow that any AI tool would need to fit into)

Same topic. Completely different quality of insight.

The Result: Good Intentions, Wrong Solutions

Users aren't bad at giving feedback. They're trying to help.

But asking "what do you want?" is asking them to do your job.

Here's the thing: you have different jobs.

Your job:

  • Understand their context and constraints

  • Recognize their actual goals

  • Discover root causes of their pain

  • Generate and evaluate multiple solutions

  • Balance user needs with business constraints

Their job:

  • Tell you about their experiences

  • Describe their processes

  • Share their frustrations

  • Explain their workarounds

When you ask users to design solutions, you end up building the first thing they thought of—not the best solution to their actual need.

What This Means for Your Interviews

You can't just ask "What features do you want?" and expect to build the right product.

You need to:

✅ Ask about specific past experiences instead of hypotheticals

✅ Dig multiple layers deep to get past symptoms

✅ Understand their full context and workflow

✅ Recognize when you're getting solutions instead of problems

✅ Frame questions that uncover actual behavior instead of aspirations

The rest of this guide teaches you exactly how to do that.

The Flip Side: Same Need, Different Requests

The math platform example showed one feature request masking seven different needs.

The inverse happens just as often: the same underlying need generates many different feature requests.

This is why counting votes is misleading.

You see:

  • 5 users asking for feature A

  • 3 users asking for feature B

  • 7 users asking for feature C

You prioritize C because it has the most votes.

But what if A, B, and C are all different solutions to the same underlying need?

Now you're not looking at 5, 3, and 7 users. You're looking at 15 users with the same core problem—all suggesting different solutions based on what they've seen before.
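A minimal sketch of what that re-count looks like in practice, assuming you've already tagged each request with an underlying need during synthesis. All user IDs, feature names, and needs below are invented:

    from collections import Counter

    # (user, requested feature, underlying need from your synthesis)
    requests = [
        ("u01", "feature_A", "progress visibility"),
        ("u02", "feature_A", "progress visibility"),
        ("u03", "feature_B", "progress visibility"),
        ("u04", "feature_C", "progress visibility"),
        ("u05", "feature_C", "offline access"),
    ]

    votes_by_feature = Counter(feature for _, feature, _ in requests)
    votes_by_need = Counter(need for _, _, need in requests)

    print(votes_by_feature)  # features A and C look tied at 2 votes each...
    print(votes_by_need)     # ...but one need actually holds 4 of the 5 votes

Counting by feature and counting by need produce different priorities from the same data. The tagging step is the human judgment this guide teaches; the counting is trivial once it's done.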

Real Example: The Differentiation Need

Five different teachers, five different feature requests:

Former Montessori teacher

"I need students to be able to work at their own pace. Add a setting to let them skip ahead or repeat content."
(Solution framed around individual choice)

Teacher from traditional textbook teaching

"I need the ability to assign different levels to different students. Give me manual grouping controls."
(Solution framed around teacher control and explicit groups)

Tech-savvy teacher, uses lots of apps

"I want it to work like [adaptive learning platform]—automatically adjust difficulty based on performance."
(Solution framed around automation and algorithms)

Special education teacher

"I need custom learning paths for my IEP students. Let me create individualized sequences."
(Solution framed around compliance and personalization)

Teacher using learning centers

"I want to set up stations with different content for different readiness levels."
(Solution framed around physical classroom organization)

The Underlying Need

All five teachers have the same core need: differentiation for mixed-ability classrooms.

But because they each have different past experiences with how differentiation "should" work, they've each suggested a different feature.

If you count feature requests, you see five 1-vote items.

If you dig to underlying needs, you see one high-priority need affecting at least five users (probably more who haven't spoken up).

If you understand the need deeply, you can design a single flexible solution that accommodates multiple teaching approaches—rather than building five separate features that each only serve one use case.

Why This Happens

Users solve problems within the frameworks they know.

  • The Montessori teacher frames differentiation as student choice because that's her pedagogical background

  • The textbook teacher frames it as grouping because that's how her curriculum guides organized instruction

  • The tech-savvy teacher references other apps because those are her mental models

  • The special ed teacher frames it around IEPs because that's her compliance context

  • The learning center teacher frames it spatially because that's her classroom management approach

None of them are wrong. They're all describing the same need through different lenses.

Your job is to recognize the common need beneath the different solution requests.

How to Recognize This Pattern

Red flag phrases that suggest you're getting solutions, not needs:

  • "I want it to work like [other product]"

  • "Add [specific feature]"

  • "Give me the ability to [specific action]"

  • "Just copy what [competitor] does"

Questions that reveal the underlying need:

  • "Tell me about a time you needed this. What were you trying to accomplish?"

  • "What problem does [other product]'s feature solve for you?"

  • "Walk me through your current process. Where does it break down?"

  • "What would change for you if you had this?"

The Strategic Advantage

Understanding that different requests can stem from the same need gives you:

  • Better prioritization: Cluster requests by underlying need, not surface feature. Suddenly that "5 votes" item might really be "25 votes" when you group related requests.

  • More elegant solutions: Design flexible features that accommodate multiple workflows rather than building specific point solutions for each request.

  • Deeper user understanding: See patterns across user segments. Maybe elementary teachers request A, middle school teachers request B, but they're solving the same problem differently due to developmental differences in their students.

  • Stronger product strategy: Articulate your roadmap in terms of needs you're addressing, not features you're building. "We're solving differentiation" is a clearer strategy than "We're building manual groups, adaptive algorithms, and custom paths."

The Takeaway

Don't ask for solutions. Ask about experiences.

❌ "What features would help you?"

✅ "Tell me about a time you struggled with [task]. What were you trying to accomplish?"

Don't count feature requests. Dig for underlying needs.

❌ "12 users asked for dashboards, 8 asked for reports."

✅ "20 users need better visibility into student progress."

Don't build the first solution users mention. Understand the need and evaluate multiple approaches.

❌ "Users want X, so we'll build X."

✅ "Users need Y. X is one solution, but let's explore if A, B, or C might better serve that need."

This is what strategic user interviews do: They get you past the surface request to the underlying need, so you can build solutions that actually work.

Part 2: Core Principles

The 8 Principles of Strategic User Interviews

These eight principles form the foundation of effective user conversations. Master these, and you'll consistently uncover real needs instead of surface requests.

Principle 1: Ground Users in Past Experiences, Not Hypotheticals

Why This Matters

Humans are terrible at predicting future behavior or accurately summarizing past behavior.

We tend to answer with ideal-state thinking:

  • How we wish we behaved

  • How we think we should behave

  • How we imagine we'd behave in the future

Not how we actually behave.

Asking people to recall a specific situation (experiencing something, completing a specific task, making a decision, being in a particular mindset) produces far more accurate answers.

The Psychology

When you ask hypothetical questions ("Would you use...?", "What would you do if...?", etc.), users activate aspirational thinking. They imagine their best selves, their most organized moments, their ideal workflows.

Actual behavior is messier:

  • We forget things

  • We take shortcuts

  • We work around problems

  • We settle for "good enough"

  • We're influenced by context and constraints

Grounding questions in specific past experiences surfaces actual behavior, complete with all the messy reality that informs good product decisions.

How to Apply This Principle

When they start with hypotheticals, redirect:

User: "I would probably use this feature weekly."

You: "Let's focus on what actually happens now. Tell me about last week. How many times did this situation come up?"

Use memory anchors:

  • "Think about yesterday..."

  • "The last time you..."

  • "This morning when you..."

  • "Last week..."

Get vivid details:

  • "What time of day was it?"

  • "Where were you?"

  • "What happened right before this?"

  • "Who else was involved?"

Vivid details signal they're recalling actual experience, not inventing scenarios.

Examples

❌ DON'T ASK

"What would be your thoughts if you were offered a warranty on a refrigerator for this price?"

"Would you use a feature that lets you group assignments?"

"How often do you typically check student progress?"

"Would you find notifications useful?"

✅ DO ASK

"Think of the last time you were offered a warranty on a purchase. Tell me about that situation. Did you purchase it? Why or why not?"

"Tell me about the last time you wanted to organize assignments in some way. What were you trying to accomplish?"

"Think about yesterday. When did you check student progress? What prompted you to look? What did you do with the information?"

"Tell me about the last time you forgot to do something important. What happened? How did you eventually remember?"

Principle 2: Don't Bias Users with Your Questions

Why This Matters

The way you frame questions dramatically impacts the answers you get.

Humans want to please. We want to answer your questions, even if there's no real answer that's true and valid. The "query effect" means people will make up an opinion about anything if they're asked.

This isn't malicious or nefarious—it's how our brains work.

Leading questions tell users what you want to hear. They'll give it to you, even if it's not true to their experience.

The Psychology

When you ask "What did you like?", you're:

  1. Assuming they liked something

  2. Forcing them to find something positive

  3. Missing what they actually thought

What happens:

  • If there's nothing they genuinely liked, they'll pick the least awful thing and tell you about it

  • You'll think they liked it

  • You'll prioritize the wrong things

How to Apply This Principle

Remove positive/negative framing:

  • Not: "What was good/bad?"

  • Ask: "What stands out?" "What comes to mind?"

Ask for description before evaluation:

  • "Tell me about your experience with X"

  • Then: "What did you think about that?"

Use neutral language:

  • "How did that go?"

  • "What happened?"

  • "Walk me through it"

Let them introduce the sentiment:

You: "What's the first thing that comes to mind about the new dashboard?"

User: "Honestly, it's confusing. I can never find what I need."

Now you have their authentic reaction, not what they thought you wanted to hear.

Examples

❌ DON'T ASK

"What did you like about this feature?"

"How helpful was this?"

"Wasn't that frustrating?"

"Did you find this useful?"

✅ DO ASK

"What's the first thing that comes to mind when you think about this feature?"

"How did this fit into your workflow?"

"How did you feel about that experience?"

"On a scale of 1-5, how often do you use this? Why that rating?"

Principle 3: Understand Triggers, Tasks, and Goals—Not Just Solutions

Why This Matters

Solutions offered can be:

  • First-thought, not best-thought

  • Not feasible

  • Not effective or efficient

  • Based on limited context

Focus on the pain point or problem, not the solution.

When users jump straight to solutions, they're doing your job for you—but they don't have all the information you have about technical constraints, strategic direction, or other user needs.

Your job: Understand what they're trying to accomplish and why the current state isn't working.

Dig Into the Complete Context

Understand their triggers:

  • What causes them to need this?

  • What happens right before they start this task?

  • What external factors are in play?

Understand their tasks:

  • What are they actually trying to do?

  • What are all the steps involved?

  • What do they do before/after using your product?

Understand their goals:

  • What's the desired end state?

  • How do they know when they're successful?

  • What would "good" look like?

How to Apply This Principle

When they offer solutions, acknowledge and redirect:

User: "You should add a bulk edit feature."

You: "That's interesting. Tell me about a time you wanted to edit multiple things at once. What were you working on? What made it frustrating to do one at a time?"

Focus on the "why" behind the request:

  • Not: "What do you want?"

  • Ask: "What problem are you trying to solve?"

Understand the broader workflow:

  • "What were you doing right before this?"

  • "What do you need to do after this?"

  • "Who else is involved in this process?"

Examples

❌ DON'T ASK

"How could this product be better for you?"

"What features would you add?"

"How should we improve this?"

✅ DO ASK

"Tell me about a time you were frustrated with the product. What were you trying to do? What happened? How was that different than what you expected?"

"What takes longer than it should?" "What do you wish you didn't have to do manually?"

"Walk me through your workflow. Where does it break down? What gets in your way?"

Principle 4: Ask Variations of "Why" Several Times to Get to Core Needs

Why This Matters

What we say at first often isn't the whole story. We process as we talk.

The "5 Whys" technique helps you dig down to root causes. By asking "why" up to 5 times (with variation so you don't sound like a toddler), you get past symptoms to actual needs.

The Classic Example

Surface answer: "I was late"

Why? "I ran out of gas"

Why? "I didn't have time to stop this morning and thought I could make it"

Why? "I realized I needed gas last night, but forgot by morning, so I didn't plan time for it"

Why didn't you get gas last night? "I was tired and said I'd get it in the morning but then forgot"

Root cause: Needs a reminder system for things thought of at night that need to be done in the morning

Possible solution: An easy reminder system, set the night before, that notifies you when you wake up

How to Apply This Principle

Vary your "why" phrasing:

  • "Why do you think that was frustrating?"

  • "What caused that problem?"

  • "What led to that situation?"

  • "Why do you think X happened?"

  • "Why do you think that is?"

  • "Can you tell me more about what you mean by that?"

Keep it conversational, not interrogative:

Good rhythm:

  1. "Why did that happen?" (gentle, curious)

  2. "What caused that?" (exploring)

  3. "Tell me more about that" (inviting)

  4. "Why do you think it happens that way?" (analytical)

  5. "What would need to be different?" (solution-oriented)

Stop when you reach:

  • A need you can design for

  • External constraints you can't change

  • Emotional or psychological factors

  • Strategic business requirements

Examples

❌ DON'T

User: "The algorithm is wrong." 

You: "Okay, we'll check the algorithm."

✅ DO

User: "The algorithm is wrong." 

You: "What makes you think that?" 

User: "It's putting students on the wrong skills." 

You: "What do you mean by 'wrong skills'?" 

User: "They're not ready for grade-level content yet." 

You: "What would ready look like?" 

User: "They need to master fractions first." 

You: "Why fractions specifically?" 

User: "State test is in 3 weeks and fractions are 30% of the test." 

ACTUAL NEED: Test prep mode, not fixing the algorithm

Principle 5: Ask Follow-Up Questions (People Process as They Talk)

Why This Matters

What people say at first often isn't the whole story. We process as we talk.

Initial responses are usually:

  • Top-of-mind thoughts

  • Rehearsed explanations

  • Surface-level understanding

Follow-up questions unlock:

  • Deeper insights

  • Contradictions to explore

  • Details that matter

  • Things they didn't realize were relevant

Power Follow-Ups

General probes:

  • "Tell me more about that."

  • "Can you expand on what you meant by [phrase they used]?"

  • "You mentioned [thing]. Can you describe that in more detail?"

Clarifying questions:

  • "What do you mean by [word they used]?"

  • "Can you give me an example?"

  • "What does that look like in practice?"

Feeling questions:

  • "How do you feel about that?"

  • "What do you think about that?"

Importance questions:

  • "Why is that important to you?"

  • "What would happen if you couldn't do that?"

Listen for Throwaway Comments

Users often mention critical things casually:

User: "We use the dashboard to prepare for intervention meetings. I wish I could export it in a format our team could actually use, but whatever, we just screenshot it."

Don't skip over "I wish" and "whatever"—those are gold.

You: "You mentioned wishing you could export it differently. Tell me more about that. What format does your team need?"

How to Apply This Principle

Train yourself to pause after they answer:

  • Don't rush to your next prepared question

  • Let 2-3 seconds of silence happen

  • Often they'll add more

Listen for:

  • Contradictions ("I use it daily... well, maybe weekly")

  • Hedging ("sort of," "kind of," "maybe")

  • Strong emotions (frustration, excitement, resignation)

  • Workarounds ("we just...," "so we...")

  • Throwaway comments ("I wish...," "if only...")

Make follow-ups feel natural:

  • Nod, acknowledge, then ask

  • "That's interesting. Can you tell me more?"

  • Reference their words: "You said [X]. What did you mean by that?"

Examples

❌ DON'T

You: "How do you use this feature?" 

User: "We use it to track student progress." 

You: [moves to next question]

✅ DO

You: "How do you use this feature?" 

User: "We use it to track student progress." 

You: "Tell me more about that. What does 'tracking progress' mean in practice?" 

User: "Well, I look at it before parent conferences..." 

You: "What are you looking for when you review it?"

Principle 6: Triangulate Data (Memories Are Unreliable)

Why This Matters

Our memories aren't great. We want to please. We give ideal-state answers.

People will tell you:

  • They use a feature frequently (check analytics: they don't)

  • They prefer option A (watch them use option B)

  • They want complex features (observe them using simple ones)

This isn't lying—it's human nature.

Always triangulate data from multiple sources to validate what users tell you.

Multiple Data Sources

Combine:

  1. What users say (interviews, surveys)

  2. What users do (analytics, observation)

  3. What the data shows (completion rates, time on task, error rates)

When all three align: Strong signal

When they diverge: Dig deeper to understand why
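A minimal sketch of that say/do comparison, assuming you have interview notes and a usage-analytics export. All user IDs, claims, and session counts are invented:

    # Compare stated frequency against logged sessions over 4 weeks.
    stated = {"u01": "daily", "u02": "weekly"}       # from interview notes
    sessions_4wk = {"u01": 3, "u02": 5}              # from analytics export
    expected_per_week = {"daily": 5, "weekly": 1}

    for user, claim in stated.items():
        actual = sessions_4wk.get(user, 0) / 4
        if actual < 0.5 * expected_per_week[claim]:
            print(f"{user}: said '{claim}', logs show ~{actual:.1f}/week -> dig deeper")

A flagged user isn't a liar; the gap is the interesting finding. It tells you where to ask more specific questions about recent, concrete instances.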

How to Apply This Principle

In the interview:

  • "You mentioned you check this daily. How many times would you say you checked it this week?"

  • "Walk me through the last time you did this. I want to understand your exact steps."

After the interview:

  • Check product analytics

  • Observe them actually using the product (if possible)

  • Talk to other similar users

  • Review support tickets

  • Check completion rates

When you find contradictions:

  • Don't call users liars

  • Assume aspirational thinking or memory limitations

  • Ask more specific questions about recent, concrete instances

Examples

When users say... → Triangulate by...

  • "We use this feature all the time" → Check analytics: how many times per week/month?

  • "Teachers prefer the detailed view" → Run a usability test: which view leads to faster task completion?

  • "Students are engaged with the content" → Observe a class: are students actually engaged, or just compliant?

  • "This saves us so much time" → Run a time study: compare before and after implementation

  • "Everyone wants this feature" → Survey or interview: how many actually mention it unprompted?

Principle 7: Ask About Workarounds and External Processes

Why This Matters

What users do outside your product is often more revealing than what they do inside it.

Workarounds reveal:

  • Where your product falls short

  • How important the need really is (big workaround = big pain)

  • Creative solutions you might not have considered

  • The broader workflow context

What to Listen For

"We just..." = Workaround alert

  • "We just copy it into a spreadsheet"

  • "We just send screenshots"

  • "We just do it manually"

Manual processes indicate pain:

  • Copying/pasting between systems

  • Reformatting data

  • Creating summary documents manually

  • Using external tools to fill gaps

Multiple tools = integration opportunity:

  • "We use your product plus Excel plus Slack"

  • "I pull data from three different places"

How to Apply This Principle

Ask about the complete workflow:

  • "What do you do before you use our product?"

  • "What happens after you're done in our product?"

  • "What other tools are involved in this process?"

Ask about workarounds explicitly:

  • "Have you created any shortcuts or workarounds?"

  • "What manual steps are involved?"

  • "How do you accomplish things our product doesn't do?"

Observe if possible:

  • "Can you show me your process?"

  • "Walk me through it while I watch"

Surface integration needs:

  • "What systems need to talk to each other?"

  • "What data do you move between tools?"

Examples

❌ DON'T ASK

"How do you use the reporting feature?"

"Do you use our collaboration features?"

✅ DO ASK

"How do you create reports for your administration? Walk me through the whole process—what tools do you use, what steps are involved?"

"How do you collaborate with your team on lesson planning? What tools or systems are involved?"

Principle 8: Avoid Leading Questions (Ask One Question at a Time)

Why This Matters

You'll likely only get an answer to the first or last question if you ask multiple questions at once.

Example of asking too many questions: "What's the biggest problem you have that's not being met right now? How do you currently solve it? What other products or tools do you use to try to meet that need?"

What happens: They answer one question, usually the easiest or most recent one in the string.

The Psychology

Cognitive load:

  • Multiple questions overwhelm

  • People can't hold all of them in working memory

  • They answer the simplest one

Lost nuance:

  • Following up on one answer leads to insights

  • Rushing through multiple questions skips depth

How to Apply This Principle

Ask questions one at a time:

✅ Good flow:

  • "What's the biggest problem you have that's not being met right now?"

  • [Wait for answer, listen, follow up]

  • "How do you currently solve it?"

  • [Wait for answer, listen, follow up]

  • "What other products or tools do you use?"

Pace yourself:

  • Resist the urge to ask everything at once

  • Let them fully answer one question

  • Follow up on that answer

  • Then move to the next question

Use silence productively:

  • 2-3 seconds of silence after they answer

  • Often they'll add more

  • Rushing fills the space with your next question, cutting off their processing

Putting It All Together

These Principles Work in Combination

You don't follow them sequentially—you weave them throughout the conversation:

  1. Ground in past experience: "Tell me about the last time..."

  2. Avoid bias: Use neutral language

  3. Understand context: Ask about triggers, tasks, goals

  4. Dig deep: Ask "why" variations 3-5 times

  5. Follow up: Don't skip throwaway comments

  6. Plan triangulation: Note what to verify with other data

  7. Explore workflows: Ask about workarounds and external tools

  8. One question at a time: Pace yourself, let them think

Practice These Principles

In your next user conversation:

  1. Pick 2-3 principles to focus on

  2. Print the Quick Reference Card

  3. Review it before the call

  4. After the call, evaluate yourself:

    • Which principles did you apply well?

    • Which did you forget?

    • What will you do differently next time?

Don't try to be perfect on your first interview. These are skills that develop with practice.

The goal isn't perfection. The goal is getting slightly better at understanding real needs with each conversation.

When NOT to Use These Techniques

When These Techniques Don't Apply:

  • Crisis situations requiring immediate action

  • Simple bug reports with clear reproduction steps

  • Compliance or legal requirements (still understand context, but the requirement is the requirement)

  • Routine maintenance or technical debt

Focus your strategic interviewing energy on new features, product direction, and unclear requests.

Part 3: Warning Signs You're Getting Feature Requests, Not Insights

Use this checklist when talking to users. If you hear these phrases, you're getting solutions instead of problems. Stop and redirect.

When users say these things, they're jumping straight to solutions. Your job is to pause and dig deeper before accepting their request at face value.

Each red flag includes:

⚠️ The warning phrase to listen for

🚨 Why it's a problem

✅ What to ask instead

When You Hear Red Flags

🛑 Stop - Pause the conversation. Don't move forward with the request as stated.

🎯 Redirect - Use the "What to ask instead" questions to dig deeper.

🔍 Discover - Uncover the real need beneath the solution request.

✅ Validate - Confirm your understanding before committing to building anything.

Red Flag #1: "We need a dashboard"

Warning Phrase

  • "We need a dashboard that shows..."

  • "Can you build us a dashboard for..."

  • "We'd like to see a dashboard with..."

Why It's a Problem

"Dashboard" is a solution, not a need. Dashboards are expensive to build and maintain, yet they're often unused because they don't actually help people make decisions.

Users say "dashboard" when they mean:

  • "I need visibility into what's happening"

  • "I need to make better decisions"

  • "I need to report to my boss"

  • "I need to feel in control"

What to Ask Instead

Dig into the decision they're trying to make:

  • "What decisions would this dashboard help you make?"

  • "Tell me about the last time you needed this information. What did you do with it?"

  • "Who else needs to see this information? What do they do with it?"

  • "How often do you need to look at this?"

Understand their current process:

  • "How do you currently get this information?"

  • "What format do you share it in now?"

  • "Walk me through the last time you needed these insights."

💡 COACHING TIP: Often they need a report, an alert, or a single metric—not a full dashboard. Dig into the specific decision or action the information enables.

Red Flag #2: "Users want X feature"

Warning Phrase

  • "Users are asking for..."

  • "Everyone wants..."

  • "Teachers are requesting..."

  • "Our customers need..."

Why It's a Problem

This is second-hand information filtered through someone else's interpretation. You're hearing a summary, not the actual user voice.

What the sales rep, support agent, or manager heard might be:

  • Their interpretation of the request

  • Multiple different requests grouped as one

  • One vocal user amplified as "everyone"

  • A complaint reframed as a feature request

What to Ask Instead

Get to the primary source:

  • "Tell me about the last specific conversation where someone mentioned this. What were they working on?"

  • "Can you connect me with 2-3 users who've asked for this?"

  • "What exactly did they say? Do you remember their words?"

  • "What problem were they trying to solve when they mentioned this?"

Understand the context:

  • "How many users have mentioned this? Over what time period?"

  • "Are these all similar users, or different types?"

  • "Did they all use the same words, or describe it differently?"

💡 COACHING TIP: Insist on talking to actual users, not intermediaries. The person relaying the request has already done interpretation—you need the raw data.

Red Flag #3: "It would be nice if..."

Warning Phrase

  • "It would be nice if we could..."

  • "Wouldn't it be great if..."

  • "It would be helpful if..."

  • "I'd love it if..."

Why It's a Problem

"Would be nice" signals hypothetical thinking, not real pain. Nice-to-haves are wishes, not needs. They haven't experienced the problem recently or urgently enough for it to be a priority.

This language means:

  • They're thinking aspirationally, not experientially

  • It's probably low-priority

  • They haven't felt enough pain to demand it

  • They may not actually use it if you build it

What to Ask Instead

Ground them in reality:

  • "When was the last time you needed this?"

  • "Tell me about a specific time when not having this caused a problem."

  • "How often does this situation come up?"

  • "What do you currently do when this happens?"

Test the priority:

  • "If you had to choose between [this] and [other pain point we discussed], which would make a bigger difference?"

  • "What's the impact when you can't do this?"

💡 COACHING TIP: If they can't recall a specific recent instance, it's probably not a real pain point. Politely note it and move on to more pressing issues.

Red Flag #4: "Can you add [feature from competitor]?"

Warning Phrase

  • "Competitor X has this feature..."

  • "Can you make it work like [other product]?"

  • "Why don't we have [feature] like they do?"

  • "We're losing deals because we don't have..."

Why It's a Problem

This assumes the competitor's feature was:

  1. Designed for the same problem you're trying to solve

  2. The right solution to that problem

  3. Used successfully by their users

None of these assumptions are guaranteed. Competitor features might be:

  • Feature bloat they added under similar pressure

  • Solving a different problem for a different user type

  • Barely used by their actual customers

  • Part of their positioning, not actual user value

What to Ask Instead

Understand the underlying need:

  • "What problem are you trying to solve?"

  • "Tell me about a time you needed this. What happened?"

  • "What do you like about how [competitor] does it?"

  • "Do you actually use [competitor product], or did you see it in a demo?"

If they use the competitor:

  • "Walk me through the last time you used that feature. What were you trying to accomplish?"

  • "What works well about it? What doesn't?"

  • "If you could change one thing about how they do it, what would it be?"

💡 COACHING TIP: Competitors are useful for understanding what problems users are trying to solve, not for what features to copy. Dig into the job-to-be-done, not the implementation.

Red Flag #5: "Everyone is asking for this"

Warning Phrase

  • "Everyone wants..."

  • "All our users are saying..."

  • "This is the #1 request..."

  • "We keep hearing about..."

Why It's a Problem

"Everyone" is vague and unverified. It usually means:

  • One loud user mentioned it repeatedly

  • 3-4 people mentioned it (feels like "everyone")

  • One influential person said it, so it's assumed to be universal

  • Different people mentioned different things now being grouped as one request

What to Ask Instead

Quantify it:

  • "How many users have mentioned this? Can you name 5?"

  • "Over what time period did you hear about this?"

  • "Tell me about 2-3 specific conversations. What were those users working on?"

Look for patterns:

  • "Did they all describe it the same way, or differently?"

  • "Are these similar types of users, or different segments?"

  • "What contexts were they in when they mentioned this?"

Test if it's truly the same request:

  • "When User A mentioned this, what problem were they trying to solve?"

  • "And when User B mentioned it, what was their context?"

  • "Are these the same need or different needs?"

💡 COACHING TIP: Often "everyone" wants the same outcome but for totally different reasons. Five users might say "I need to group students" but mean: ability grouping, station rotations, IEP management, class periods, or collaborative learning groups. Don't build one feature—understand the diverse needs first.

Red Flag #6: "This would save us time"

Warning Phrase

  • "This would be so much faster..."

  • "It would save us hours..."

  • "We could be more efficient if..."

  • "This would streamline our process..."

Why It's a Problem

Generic time-saving claims are not actionable. You don't know:

  • How much time it currently takes

  • Which part of the process is slow

  • Why it's slow (manual steps? Waiting? Confusion?)

  • How often they do this task

  • What "fast enough" looks like

"Save time" is a hopeful outcome, not a specific problem.

What to Ask Instead

Get specific about the current process:

  • "Walk me through your current process step-by-step."

  • "Where exactly does it slow down?"

  • "How long does this take you now?"

  • "How often do you need to do this?"

Understand the bottleneck:

  • "What's the most time-consuming part?"

  • "What are you waiting for?"

  • "What do you have to do manually?"

  • "Where do you get stuck or confused?"

Calculate the actual impact (a quick worked example follows this list):

  • "If this task took half as long, how would that change your day?"

  • "How much time would you save per day/week/month?"
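A back-of-the-envelope check keeps time-saving claims honest. A minimal sketch with invented numbers:

    minutes_per_report = 15      # invented estimate from the interview
    reports_per_week = 4         # invented estimate
    school_weeks = 36
    hours_per_year = minutes_per_report * reports_per_week * school_weeks / 60
    print(hours_per_year)        # 36.0 hours per year

A task that genuinely eats 36 hours a year justifies real investment; one that comes up twice a semester probably doesn't.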

💡 COACHING TIP: "Save time" often masks other needs like reducing cognitive load, eliminating frustration, or gaining confidence. Dig into the actual pain point, not just duration.

Red Flag #7: "Make it easier to use"

Warning Phrase

  • "This is too complicated..."

  • "Make it more intuitive..."

  • "Simplify the interface..."

  • "It's too hard to..."

Why It's a Problem

"Easier" is too vague to act on. Different people find different things difficult for different reasons:

  • Unfamiliar mental model

  • Too many steps

  • Unclear labeling

  • Missing feedback

  • Wrong default state

  • Doesn't match their workflow

  • Poor error messages

Generic "make it easier" doesn't tell you which problem to solve.

What to Ask Instead

Get specific about the struggle:

  • "Tell me about the last time you found this difficult. What were you trying to do?"

  • "Where did you get stuck?"

  • "What did you expect to happen? What actually happened?"

  • "What did you try that didn't work?"

Observe if possible:

  • "Can you show me? Walk me through it while I watch."

  • "Where did you click first? What were you looking for?"

Understand the mental model:

  • "How do you think about this task? What would the steps be in your head?"

  • "What would you call this feature/action?"

💡 COACHING TIP: "Too hard" is almost never about the interface—it's about a mismatch between user expectations and how the product works. Fix the conceptual model, not just the buttons.

Red Flag #8: "Add more customization options"

Warning Phrase

  • "Let us configure..."

  • "Give us more control over..."

  • "Add settings for..."

  • "Make it customizable..."

Why It's a Problem

Customization is often a band-aid for poor defaults or inflexible core functionality. Users ask for customization when:

  • Current solution doesn't fit their use case

  • Defaults are wrong for their context

  • Feature is too rigid

  • They feel lack of control

But customization has costs:

  • Complexity for all users

  • Maintenance burden

  • Support overhead

  • Decision fatigue

What to Ask Instead

Understand what's not working:

  • "What can't you do right now that you need to do?"

  • "Walk me through a specific example where the current setup doesn't work."

  • "How would you set it up if you could?"

Test if better defaults would solve it:

  • "If it automatically did [X] instead of [Y], would that work?"

  • "What should the default be?"

  • "How many users would want the same setting?"

Look for the edge case:

  • "How often does this situation come up?"

  • "Are you unique in needing this, or do other users have the same need?"

💡 COACHING TIP: Often they need flexible workflows, not customizable settings. Instead of 20 settings, consider: smart defaults, better automation, or alternative pathways for different use cases.

Red Flag #9: "It needs to work like [other tool]"

Warning Phrase

  • "In Excel we could..."

  • "Google Docs lets you..."

  • "I'm used to [tool] where..."

  • "Make it work like I'm used to..."

Why It's a Problem

Familiarity ≠ Best Solution. Just because users know how another tool works doesn't mean:

  • That's the best way to solve their problem

  • That interaction model fits your product context

  • That feature even works well in the other tool

  • They fully understand why it works that way

They might be:

  • Defaulting to what they know

  • Frustrated with learning curve, not actual functionality

  • Assuming their workflow has to stay the same

What to Ask Instead

Understand the goal, not the tool:

  • "What are you trying to accomplish?"

  • "Tell me about the last time you did this in [other tool]. Walk me through it."

  • "What do you like about how [other tool] does it?"

  • "What doesn't work as well in [other tool]?"

Test if familiarity is the only driver:

  • "If you could design this from scratch, what would it look like?"

  • "What would be the ideal outcome, regardless of how it's done?"

Understand the workflow differences:

  • "How is your workflow in [other tool] different from here?"

  • "What happens before and after you use [other tool]?"

💡 COACHING TIP: Sometimes users just need better onboarding or clearer mental models. Sometimes your approach is genuinely better but different. Don't assume you need to copy—understand if your approach already solves their need more effectively.

Red Flag #10: "This is a must-have for launch"

Warning Phrase

  • "We can't launch without..."

  • "This is critical for..."

  • "This is blocking us from..."

  • "We need this by [date]..."

Why It's a Problem

Urgency without understanding creates poor prioritization. Declaring something "must-have" doesn't make it actually necessary—it often means:

  • Someone important said so (political pressure)

  • Competitor has it (perception of parity)

  • It's assumed necessary without validation

  • Fear of user reaction if missing

  • Unclear what would actually happen without it

What to Ask Instead

Test the urgency:

  • "What happens if we launch without this?"

  • "What would users do as a workaround?"

  • "Have users actually said they need this, or are we assuming?"

  • "What's driving the deadline?"

Understand the real requirement:

  • "Tell me about the workflow you're concerned about. Walk me through it."

  • "What would break if this wasn't there?"

  • "Is this truly a blocker, or is it important but not blocking?"

Look for the source of pressure:

  • "Who says this is must-have? What's their concern?"

  • "Is this a requirement from users, or from internal stakeholders?"

  • "Can we talk to actual users about this?"

💡 COACHING TIP: True must-haves pass this test: "If we launch without it, the product literally won't work for its core use case." Everything else is negotiable priority. Dig into whether this is political pressure, perceived parity, or an actual blocking issue.

Part 4: Results Synthesis

From Feature Requests to Real Needs

Use this worksheet to decode what users are really asking for when they make feature requests.

Tips for Effective Synthesis

✅ Do:

  • Start broad: List 5-7 possible needs before narrowing

  • Think about context: Who is this person? What pressures do they face?

  • Consider non-obvious needs: Emotional needs (control, competence), social needs (status, compliance), political needs (cover for boss)

  • Look for patterns: Similar requests from different users might indicate same underlying need

  • Validate before building: Always test your hypotheses before designing solutions

  • Involve your team: Different perspectives spot different possibilities

❌ Don't:

  • Jump to solutions: Resist designing features until you understand needs

  • Assume one need: There are usually multiple needs behind a request

  • Take requests literally: "I want X" often means "I need Y and X is the first solution I thought of"

  • Ignore context: Same request from different users might stem from different needs

  • Skip validation: Your hypotheses are guesses until you test them

  • Work alone: Synthesis is better with diverse perspectives

Common Request Patterns

Learn to recognize these patterns—they indicate you need to dig deeper:

They say... → Might actually need... → Questions to ask

  • "Add a dashboard" → Visibility into a process, decision-making support, status reporting, a feeling of control → "What decisions would this dashboard help you make?"

  • "Make it like [competitor]" → Familiarity, a specific feature they noticed, a perceived market expectation → "What does [competitor] do that you wish we did?"

  • "Add customization" → Flexibility for edge cases, a less rigid current solution, a way to serve multiple audiences → "What can't you do right now that you need to do?"

  • "Send notifications" → Help remembering tasks, awareness of changes, coordination with others → "What do you forget to do or miss?"

  • "Improve reporting" → Communicating value to stakeholders, proving effectiveness, compliance requirements → "Who needs these reports and what do they do with them?"

Tracking Requests Over Time

Create a simple log to spot patterns (a minimal sketch follows this list):

  • Date

  • User

  • Request

  • Actual Need

  • Status

    • Investigating

    • Validated

    • Building

    • Not a real need
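One minimal way to keep that log, sketched in Python as a CSV appender. The file name, field names, and example row are invented; the fields mirror the list above:

    import csv
    import os
    from datetime import date

    LOG_FILE = "request_log.csv"
    FIELDS = ["date", "user", "request", "actual_need", "status"]

    def log_request(user, request, actual_need, status="Investigating"):
        # Append one row, writing the header only when the file is new.
        new_file = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({"date": date.today().isoformat(), "user": user,
                             "request": request, "actual_need": actual_need,
                             "status": status})

    log_request("teacher_04", "Add a dashboard", "Prep for intervention meetings")

A spreadsheet works just as well; the point is capturing the request and the validated need side by side, so monthly reviews can compare them.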

Monthly review questions:

  • What themes are emerging?

  • Are different users requesting same thing for different reasons?

  • Are we building solutions or understanding needs?

  • Which requests were NOT real needs once we validated?

Share Your Findings

After synthesis, share with your team:

Summary format:

USER REQUEST: [What they said]

CONTEXT: [Who they are, what they do]

ACTUAL NEED: [What we discovered through follow-up]

RECOMMENDATION: [Next steps—validate further, prototype, decline]

This process helps you:

✅ Avoid building the wrong thing

✅ Identify patterns across users

✅ Make data-informed prioritization decisions

✅ Communicate findings clearly to your team

✅ Build solutions that address root causes

More Resources

Get the guide via email with more resources included:

  • Practice scenarios

  • Research Synthesis Worksheet

  • Strategic User Interview Quick Reference

This is part of a 4-part Strategic UX Toolkit series:

  • The Context Mapping Workbook

  • The UX Tool Selection Matrix (Coming soon!)

  • The Strategic User Interview Guide (This resource!)

  • The Workflow Design Guide (Coming soon!)