Context Mapping Workbook

What if your UX framework doesn't account for EdTech's reality?

The Problem

Your team adopted Design Thinking because everyone uses it. What you actually need: A method that accounts for EdTech's three-user dynamics and locked roadmaps.

You're running weekly discovery interviews like Continuous Discovery recommends. What you actually need: Quick strategic interviews that fit your solo-designer reality, limited teacher access, and nonexistent research budget.

You spent 3 months researching new features. What you actually need: Research on features already being built, since your roadmap is locked for 6 months.

When you apply frameworks without understanding your context first, you waste research time on the wrong questions, frustrate stakeholders with insights they can't use, and miss the real constraints that should be shaping your approach.

The Solution: Context Mapping

Figure out which research methods actually make sense for your context—your team's maturity, your constraints, EdTech's three-user dynamics, and the roadmap reality you're working within.

Who is this for?

Product managers trying to prioritize research with zero bandwidth

UX researchers building research practices from scratch (with no playbook)

Founding designers wearing five hats with a team of one

Customer success teams fielding requests without clear direction on what matters

Anyone drowning in feature requests and unclear which research method to even start with

THE CONTEXT MAPPING WORKBOOK

Stop following frameworks blindly. Start researching strategically.

You can't choose the right research methods until you understand YOUR context. This workbook helps you map what matters.

A practical workbook that helps teams map their specific context so they can choose the right research methods.

How to Use This Workbook

Work through this solo or (better yet) with your product trio to get multiple perspectives.

  1. Read "Why Context Matters" to understand why this step comes first

  2. Work through the Five Context Areas—be honest, not aspirational

  3. Review Common Context Blind Spots to catch what you might be missing

  4. Complete Your Context Summary to pull it all together

Download the Fillable Template

Work directly in this interactive template:

Context Mapping Workbook Template

How to Get the Most Value

Be honest. If your UX maturity is "ad-hoc," that's information you can use, not a failure grade.

Think strategically. Don't just list what you do—figure out what would actually impact adoption and business goals.

Share with your team. Your context map helps align stakeholders on why you're choosing specific methods and what's realistically possible.

Revisit quarterly. Context changes as your product matures, team grows, and roadmap evolves. Update accordingly.

Ready? Let's map your context.

Why Context Matters

Design Thinking. Jobs-to-Be-Done. Lean UX. Continuous Discovery.

Every one of these frameworks is good. Really good.

The problem isn't the frameworks—it's teams applying them without understanding whether they fit.

The Framework Isn't the Problem—Applying It Blindly Is

Here's what frameworks can't tell you:

  • Which parts matter most for your situation right now

  • Which methods to prioritize when time and budget are limited

  • How to adapt when your roadmap is locked six months out

  • Where to start when you're a team of one

  • How to blend frameworks when different pieces work for different needs

They can't tell you these things because they don't know your context.

Every Company Has Unique Context

You might be a well-funded Series B with a research team and quarterly flexibility. Or you might be a founding designer at a pre-seed startup with no budget but total freedom to pivot.

You might have weekly access to users. Or you might only reach teachers during summer break.

You might have stakeholders who champion research. Or you might be fighting for buy-in one small win at a time.

The "right" research approach for one context is completely wrong for another.

And in EdTech, there's an additional factor most frameworks ignore: you have three user types, and the buyer isn't the gatekeeper. You can build the most engaging student experience in the world, but if teachers don't enable it, it doesn't matter. Admin dashboards close sales. Teacher interfaces determine whether your product actually gets used.

Map Your Context First

The mistake isn't choosing Design Thinking over Continuous Discovery. The mistake is jumping to methods without understanding:

  • Where you are: Product stage? UX maturity? Actual resources?

  • What constrains you: Time? Budget? User access? Tech debt? Stakeholder trust?

  • What matters most: Which user? What business goal?

This workbook maps those critical areas. Then when you choose methods, you're choosing ones that fit your reality.

The right method for a well-resourced team with flexible roadmaps isn't right for a solo designer at a scrappy startup. The framework designed for rapid iteration needs adaptation when your roadmap is committed quarters ahead.

Context first. Methods second.

That's how you make frameworks work—by understanding what matters in your specific situation, not just theory.

The Five Context Areas

This resource walks you through the five context areas with explanations, examples, and questions to think about. You can also work directly in a template (without all the extra content) so you can save and reference your answers:

Context Mapping Workbook Template

Context Area 1: Your Three-User Dynamics

Understanding Adoption Control

Adoption control refers to who influences whether your product actually gets used—regardless of who pays for it.

In EdTech, this doesn't follow purchasing power:

  • Admins make purchasing decisions and sign contracts

  • Teachers control whether the product gets used with students

  • Students use what teachers provide

The buyer (admin) isn't the gatekeeper (teacher), and neither is the end user (student).

What "Alignment" Actually Means

When we talk about alignment, we're not saying percentages need to match.

Alignment means your design investment reflects the strategic importance of each user type to actual adoption and success.

Good alignment: Students have 10% adoption control but get 40% design attention because an engaging student experience is your competitive differentiator—as long as you're also investing heavily in the teacher experience that enables access.

Poor alignment: Teachers have 80% adoption control but get 15% design attention because your team focuses on admin dashboards and student features.

The key question isn't "Do my percentages match?" It's "Am I investing design resources where they'll impact actual product adoption?"
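
To make that mismatch check concrete, here is a minimal sketch in Python. The percentages are hypothetical placeholders and the 30-point threshold is only an illustrative cutoff, not a rule from this workbook; swap in your own estimates from the worksheet below.

```python
# Hypothetical percentages; replace with your own estimates from the worksheet below.
adoption_control = {"admins": 10, "teachers": 80, "students": 10}  # % influence over whether the product gets used
design_attention = {"admins": 60, "teachers": 15, "students": 25}  # % of your design focus

THRESHOLD = 30  # illustrative cutoff for "a wide gap", not a rule

for user in adoption_control:
    gap = adoption_control[user] - design_attention[user]
    if gap >= THRESHOLD:
        print(f"{user}: under-invested by {gap} points relative to their adoption control")
    elif -gap >= THRESHOLD:
        print(f"{user}: attention exceeds adoption control by {-gap} points; make sure it's a deliberate bet")
```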

Map Your Three-User Dynamics

Fill in for each user type:

Admins (District administrators, school leaders, IT decision-makers)

What do they care about?

[Examples: ROI, compliance, data visibility, ease of rollout, support burden]

What's their biggest pain point?

[Examples: Proving impact to the board, managing multiple tools, budget cuts, demonstrating value]

How much design attention do they get? (% of your focus)

How much adoption control do they have? (% of influence)

Remember: Admins often have HIGH purchasing power but LOW adoption control—they can buy your product, but they can't force teachers to use it effectively.

Teachers (Classroom teachers, instructional coaches, department heads)

What do they care about?

[Examples: Saving time, student engagement, fitting their teaching style, ease of use, classroom management]

What's their biggest pain point?

[Examples: Too many tools, setup takes too long, doesn't fit workflow, one more thing to learn]

How much design attention do they get? (% of your focus)

How much adoption control do they have? (% of influence)

Remember: Teachers are almost always the gatekeepers—they control configuration, access, and whether students ever see your product. Even a "student-first" product fails if teachers don't enable it.

Students (K-12 students, higher ed learners, adult learners)

What do they care about?

[Examples: Understanding concepts, not feeling dumb, getting through homework quickly, earning grades]

What's their biggest pain point?

[Examples: Confusing interface, boring content, doesn't help them learn, feels like busywork]

How much design attention do they get? (% of your focus)

How much adoption control do they have? (% of influence)

Remember: Students typically have LOW adoption control—they use what their teachers provide. But that doesn't mean they deserve low design attention. A terrible student experience can lead to teacher abandonment when students complain or disengage.

Reflection Prompts

Do your percentages reveal any mismatches?

Common mismatches to watch for:

  • Teachers have 70% adoption control but get 20% design attention → You're designing around the gatekeeper, not for them

  • Students get 80% design attention but teachers get 10% → Your engaging student product won't get used if teachers can't set it up easily

  • Admins get 60% design attention but have 20% adoption control → You're designing to close sales, not drive actual usage

Is your design investment strategically aligned?

Ask yourself:

  • Which user type, if delighted, would most impact our product's success?

  • Which user type, if frustrated, would kill adoption fastest?

  • Are we investing in the teacher workflow that enables student access?

  • Are we designing admin dashboards that prove value but neglecting the experience that creates value?

What would strategic realignment look like for you?

Context Area 2: Your Business Goals & Constraints

Your Goals Determine What Research Questions to Ask

You need to know what you're trying to achieve. Not just "better UX"—what specific business outcome?

Different goals require different approaches:

  • Increasing adoption → understand why people aren't starting

  • Reducing churn → understand why people stop

  • Improving NPS → understand what frustrates existing users

  • Achieving product-market fit → validate you're solving a real problem

Without knowing your top goal, your research lacks focus. You'll gather interesting insights that go nowhere, or spend months on research your team can't act on.

Your Constraints Determine What's Realistic

Every team faces constraints. The key is knowing which constraint limits you most.

Time constraints mean quick-turn methods. Two weeks before a decision? Five strategic interviews might work. Extensive ethnography won't.

Budget constraints mean low-cost methods. Can't afford a research panel? Use your customer success team's contacts. Can't pay incentives? Offer early access or the satisfaction of improving a tool they use.

Access constraints are common in EdTech. Teachers are hard to reach during the school year. You might need analytics, support tickets, and async methods—or wait for summer.

Tech debt constraints mean legacy code limits what you can change. If 80% of your codebase is untouchable, don't research radical redesigns. Focus on what you can modify, or plan UX improvements alongside technical refactoring.

Strategic constraints mean leadership has defined focus areas. If your company prioritizes K-12 over higher ed, your research needs to align—even if you discover opportunities outside that scope.

Expertise constraints mean you're learning as you go. Start with simpler methods (strategic interviews) before complex studies (longitudinal ethnography).

Buy-in constraints mean stakeholders don't yet trust research. Start small. Run one quick project that demonstrates clear value, then build from there.

The Constraint-Goal Connection

Here's what teams miss: your constraint should inform which goal you prioritize.

Roadmap locked for six months + primary goal "add new features" = mismatch. You can't add features if the roadmap is frozen. Research identifying new opportunities will go nowhere.

Better alignment: If your roadmap is locked, focus on "improve existing features" or "reduce churn"—research that optimizes what you're already building.

Map Your Business Goals & Constraints

Current Business Goals

(Choose your top 3)

  • Increase adoption/active users

  • Reduce churn

  • Expand to new market segment

  • Add new features

  • Improve existing features

  • Improve NPS/satisfaction

  • Achieve product-market fit

  • Demonstrate ROI to stakeholders

  • Other: 

Primary Constraint

For each constraint, describe how it applies to your current work:

Time

Budget

Access

Tech Debt

Strategic

Expertise

Buy-in

Reflection Prompts

Do your goals and constraints align?

Warning signs of misalignment:

  • Goal: "Add new features" + Constraint: "Roadmap locked for 9 months" → Research won't matter

  • Goal: "Achieve product-market fit" + Constraint: "No access to target users" → Can't validate assumptions

  • Goal: "Demonstrate ROI" + Constraint: "No analytics infrastructure" → Can't measure impact

What research questions emerge from your #1 goal?

For example:

  • If your goal is "increase adoption" → Research question: "Why aren't teachers enabling our product after purchase?"

  • If your goal is "reduce churn" → Research question: "What causes teachers to stop using our product after 3 months?"

  • If your goal is "improve NPS" → Research question: "What are the top 3 frustrations for our current users?"

What research methods are realistic given your primary constraint?

For example:

  • If constrained by time → Quick methods: Analytics review, 5-10 strategic interviews, existing data analysis

  • If constrained by budget → Low-cost methods: Guerrilla testing, support ticket analysis, leveraging existing customer relationships

  • If constrained by access → Async methods: Surveys, diary studies, social listening, secondary research

Write your focused research question:

Context Area 3: Your Existing Research Sources

You're Already Sitting on Data

Most teams don't start from scratch. You're likely already collecting user data—you just might not be using it strategically.

The question isn't "Do we have research?" It's "What do we have, how reliable is it, and what are we missing?"

Different Sources Answer Different Questions

Not all research sources are created equal. Each tells you something different:

Quantitative sources (analytics, usage logs, surveys) tell you what is happening—which features get used, where users drop off, what they rate highly. But they can't tell you why.

Qualitative sources (interviews, support tickets, customer success insights) tell you why things are happening—what's frustrating users, what they're trying to accomplish, what's getting in their way.

Behavioral data (analytics, usage logs) shows you what users actually do—which is often different from what they say they do.

Self-reported data (surveys, interviews, reviews) shows you what users think and feel—their perceptions, frustrations, and desires.

The most reliable research triangulates multiple sources. If analytics show low adoption AND support tickets mention setup confusion AND interviews reveal teachers don't understand when to use the feature—you have a pattern.

Get Close to the Customer Voice

Here's a reliability hierarchy: the fewer layers between you and the actual customer voice, the more trustworthy the data.

  • Reading a feature request from a sales colleague < listening to the sales call recording

  • Reading a summary from customer success < reading the actual support ticket

  • Hearing a stakeholder's interpretation < hearing directly from users

Every layer adds translation, assumption, and filtering.

Example: A sales rep says "Teachers want bulk upload." Listen to the actual call and you might hear: "It takes me 30 minutes to set up my classroom at the start of the year, and I have to do it for three different tools."

The real problem isn't bulk upload—it's total setup time across multiple tools.

Get as close to the source as you can: read transcripts, listen to recordings, talk to users directly. When that's not possible, know you're working with filtered information.

The Gap Between "Most Reliable" and "Most Checked"

Many teams check analytics most often because they're easy to access.

But analytics alone can't tell you why something is happening or what to do about it.

If you're checking analytics daily but talking to users once a year, you're making decisions with incomplete information. You know what's broken. You don't know why or how to fix it.

Common Gaps in EdTech

The biggest gap: qualitative research with teachers and students. Teams have data on what's being used (analytics), what users think in aggregate (surveys), and what's breaking (support tickets). They rarely have recent, in-depth conversations about actual workflows, frustrations, and needs.

The second gap: understanding the "why" behind the numbers. Adoption is low—but why? Teachers aren't enabling a feature—what's stopping them? Analytics can't answer these.

Map Your Existing Research Sources

What research do you already have access to?

☐ Product analytics (e.g., Mixpanel, Amplitude)
☐ Usage logs
☐ Support tickets
☐ NPS/CSAT surveys
☐ User interviews
☐ Recordings/transcripts from sales, success, support, or training calls
☐ Sales team insights
☐ Customer success team insights
☐ Social media listening
☐ Online reviews
☐ None of the above

Analysis Questions

Which source is most reliable?

Think about: Which source gives you the most actionable, accurate information about user needs?

Which source do you check most often?

Be honest: What's the data you look at weekly or daily?

Biggest gap in current research?

Examples: "We have analytics but no idea why users do what they do" or "We haven't talked to teachers in 8 months" or "We have lots of feedback but can't identify patterns"

Reflection Prompts

Do you have the right balance of quantitative and qualitative sources?

Warning signs:

  • All quantitative, no qualitative → You know what but not why

  • All qualitative, no quantitative → You have opinions but can't validate at scale

  • Lots of aggregate data, no individual conversations → You're designing for averages, not real people

When was the last time you talked to a teacher about their actual workflow?

If it's been more than 3 months, your understanding is probably outdated. Teacher needs, school priorities, and classroom realities change quickly—especially in EdTech.

Are you triangulating multiple sources to validate insights?

Single sources can mislead:

  • Analytics show low adoption → Could mean bad UX, or could mean teachers don't understand the value

  • One teacher complains → Could be an edge case, or could signal a wider problem

  • Survey says feature is wanted → But do they want it enough to actually use it?

The strongest insights come from multiple sources pointing to the same pattern.

What one research source would fill your biggest gap?

Context Area 4: Your Tech Debt & Roadmap Reality

Technical Constraints Shape What's Possible

Tech debt isn't just an engineering problem—it's a UX research problem.

When a large chunk of your codebase can't be touched without major refactoring, that changes what research you should do. No point researching a complete interface redesign if the underlying code can't support it.

But tech debt doesn't mean you can't improve UX. You need to be strategic about when and what to research.

The Tech Debt-Research Connection

High tech debt, major refactoring planned: This is your opportunity. Research what UX improvements you can make while engineering is already touching that code. The effort to improve UX during a rebuild is much lower than doing it separately.

High tech debt, no refactoring planned: Focus research on parts you can change. Don't waste time identifying problems where engineering will say "we can't touch that." Find highest-impact improvements within technical constraints—better onboarding, clearer messaging, workflow optimizations that don't require backend changes.

Low tech debt, flexible architecture: You have more freedom. Research can explore bigger ideas, new features, innovative solutions.

Roadmap Reality Determines What Research Matters

Your roadmap tells you what research will actually get used.

Locked roadmap (planned 6+ months out, rarely changes): Don't research new features—they won't make it onto the roadmap in time to matter. Instead, research how to improve what's already being built. Talk to users about the features currently in development. Identify quick wins that can be incorporated into planned work.

Flexible roadmap (planned 1-3 months out, changes often): You have room for discovery research. User insights can actually influence what gets built next. Research can identify new opportunities because there's space to act on them.

Feature-heavy roadmap (mostly new features): You're in growth mode. Research should validate that new features solve real problems before you build them. Focus on generative research that uncovers unmet needs.

Improvement-heavy roadmap (mostly improvements to existing features): You're in optimization mode. Research should identify the highest-impact improvements to existing features. Focus on evaluative research that finds friction points.
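
As a rough illustration of how these patterns translate into a research focus, here is a minimal sketch. The function name, thresholds, and wording are hypothetical heuristics for illustration, not rules from this workbook.

```python
def suggest_research_focus(months_locked: int, percent_new_features: int) -> str:
    """Rough heuristic based on the roadmap patterns described above (illustrative only)."""
    if months_locked >= 6:
        # Locked roadmap: optimize what is already being built.
        return "Evaluative research on features currently in development; look for quick wins."
    if percent_new_features >= 60:
        # Feature-heavy, flexible roadmap: validate before building.
        return "Generative research to confirm new features solve real problems."
    # Improvement-heavy or mixed roadmap: optimization mode.
    return "Evaluative research to find the highest-impact improvements to existing features."

print(suggest_research_focus(months_locked=6, percent_new_features=70))
print(suggest_research_focus(months_locked=2, percent_new_features=70))
print(suggest_research_focus(months_locked=2, percent_new_features=20))
```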

The Mismatch That Wastes Time

Spending 3 months researching new features when your roadmap is locked for 6 months and focused on tech debt. By the time research is done, decisions have been made.

Always align your research timeline with your roadmap reality.

Map Your Tech Debt & Roadmap Reality

Tech Debt Assessment

How much of your codebase is "legacy" that's hard to change?

What's the #1 UX issue you CAN'T fix due to tech debt?

How often do you say "we can't do that because of technical constraints"?
☐ Never
☐ Rarely
☐ Often
☐ Always

Roadmap Reality

How far out is your roadmap planned?

How often does your roadmap change?
☐ Weekly
☐ Monthly
☐ Quarterly
☐ Rarely

Who has the most influence on roadmap priorities?
☐ Product
☐ Engineering
☐ Sales
☐ Executives
☐ UX
☐ Customer feedback
☐ Other: [__________]

What percentage of your roadmap is:

  • New features: [____%]

  • Improvements to existing features: [____%]

  • Tech debt/bug fixes: [____%]

  • UX improvements: [____%]

Reflection Prompts

Is your tech debt limiting what research you should do?

If you answered "Often" or "Always" to technical constraints:

  • What parts of the product can you change?

  • Is there planned refactoring work where UX improvements could piggyback?

  • Are you researching solutions that engineering can't implement?

Does your roadmap have room for research insights?

Warning signs of misalignment:

  • Roadmap is locked 6+ months out, but you're doing generative research for new features

  • Roadmap is 90% new features, but you're researching improvements to existing ones

  • Roadmap rarely includes UX improvements, but you keep identifying UX problems

What research would actually influence decisions?

Ask yourself:

  • What decisions are being made in the next 1-3 months that research could inform?

  • What's already being built that research could improve?

  • Where does the roadmap have flexibility for new insights?

If your roadmap is 80% locked for the next 6 months, don't do research that suggests new features. Do research that improves what you're already building.

Context Area 5: Your UX Maturity & Team Structure

Your UX Maturity Determines Where to Start

Not every team is ready for the same approach. UX maturity (how established and influential UX is in your organization) determines what's realistic and where to focus.

Ad-hoc: Build credibility before building comprehensive programs. Start small with quick, high-impact research.

Emerging: You have a foothold but are still reactive. Establish consistent practices and educate stakeholders on proactive research value.

Structured: You have established processes. Scale research practices, build insight repositories, ensure research consistently influences decisions.

Optimizing: You're strategic partners. Anticipate needs, drive product strategy through research, measure business impact.

Team Structure Affects What's Possible

Solo designers can't do everything. Be ruthlessly strategic about where to invest limited time. Focus on highest-impact methods (usually strategic interviews) and leverage existing data wherever possible.

Small teams (2-4 people) can establish basic cadences but still need to prioritize. Specialization helps, but everyone wears multiple hats.

Dedicated researchers change what's possible. You can establish sophisticated practices—research repositories, longitudinal studies, comprehensive programs. But you still need to align research with what the organization can actually act on.

Where UX Reports Matters

Product: Research usually aligns well with product strategy. Risk: UX becomes purely reactive to roadmap priorities.

Engineering: Research tends to focus on technical feasibility. Risk: Strategic design thinking gets deprioritized.

Design: Research maintains strong design influence. Risk: May be seen as less strategic or disconnected from business goals.

C-suite: Research has strategic influence. Risk: May be too removed from day-to-day product decisions.

No perfect structure. Knowing where you sit helps you understand your sphere of influence and where to build relationships.

UX Influence Reveals Organizational Readiness

No influence: You're not ready for extensive programs. Demonstrate value with small, quick projects. Build credibility before infrastructure.

Some influence: You're ready for regular practices. Focus on consistency—weekly customer conversations, monthly research reviews, quarterly strategic planning.

Significant influence: You're ready to be proactive. Research can identify opportunities before they become problems. You can invest in longer-term strategic research.

Drives strategy: You're in optimization mode. Measure impact, refine practices, ensure research continues to deliver business value at scale.

Map Your UX Maturity & Team Structure

UX Maturity

(Select the one that best describes your organization)

☐ Ad-hoc: "We do UX when we have time"
☐ Emerging: "We have a UX person/team but they're reactive"
☐ Structured: "We have UX processes and they inform decisions"
☐ Optimizing: "UX drives strategy and we're always improving"

Team Structure

How many people do UX work?

How many do user research specifically?

Where does UX report?
☐ Product
☐ Engineering
☐ Design
☐ C-suite
☐ Other: [__________]

How much does UX influence roadmap decisions?
☐ None
☐ Some
☐ Significant
☐ Drives

Your Biggest UX Challenge Right Now

[Describe the biggest challenge you're facing in your UX work, whether it's resource constraints, stakeholder buy-in, technical limitations, or something else entirely]

Reflection Prompts

Does your research ambition match your maturity level?

Warning signs of mismatch:

  • Ad-hoc maturity but planning comprehensive research programs → You'll get overwhelmed and lose credibility

  • Structured maturity but only doing reactive research → You're leaving strategic opportunities on the table

  • Optimizing maturity but not measuring research impact → You're at risk of losing your hard-won influence

Is your team structure set up for success?

Consider:

  • If you're a solo designer, are you trying to do too much? What could you stop doing?

  • If you have dedicated researchers, are they aligned with product priorities or working in isolation?

  • If UX has no influence, what's the smallest research project that could demonstrate value?

What would need to change to move up one maturity level?

Think about:

  • Ad-hoc → Emerging: Need to hire a dedicated UX person or establish consistent practices

  • Emerging → Structured: Need to establish research cadence and documentation systems

  • Structured → Optimizing: Need to demonstrate strategic impact and business value

  • If you're already Optimizing: How do you maintain this position and avoid backsliding?

Given your current maturity and structure, what's the most realistic research goal for the next 3 months?

Common Context Blind Spots

After building UX functions from scratch at three EdTech companies, I've seen the same patterns repeat.

These aren't failures—they're natural blind spots that emerge when teams focus on shipping product. Recognizing them early saves months of research effort that goes nowhere.

Blind Spot #1: Mismatch Between Design Attention and Adoption Control

What it looks like: Teams invest most design resources in the user type they think matters most—usually students, because "it's a learning product"—while giving minimal attention to the user who actually controls whether the product gets used.

Example: "We spend 60% of design time on student features, but teachers control 80% of adoption. We built an engaging student experience, but teachers never enable it because setup takes 30 minutes."

Why it happens:

  • Product demos focus on student experience (more exciting to show)

  • Stakeholders are passionate about learner impact

  • Teacher workflow feels like "just admin stuff"

  • No one explicitly maps adoption control vs. design investment

Check yours: Look at Context Area 1. Do your percentages reveal a mismatch? Are you designing around the gatekeeper instead of for them?

The fix: Invest strategically where it will impact actual adoption. Sometimes that means putting teacher experience first, even when it feels less exciting.

Blind Spot #2: Research That Doesn't Match Roadmap Reality

What it looks like: Teams conduct research that would be valuable if the organization could act on it. But the roadmap is locked, resources are committed, priorities are set. Research findings sit in a deck, acknowledged but never implemented.

Example: "We spent 3 months researching new features teachers need, but our roadmap is locked for 6 months and focused on tech debt. By the time we could build anything new, the insights were outdated."

Why it happens:

  • Research planned without checking roadmap flexibility

  • Teams assume research will "influence" decisions already made

  • Generative research done when evaluative would be more useful

  • No one asks "What decisions are being made in the next 3 months?"

Check yours: Look at Context Area 4. Is your roadmap flexible enough to act on research? What's actually being built next quarter that research could improve?

The fix: Align research timing with decision timing. If your roadmap is locked, research what you're already building.

Blind Spot #3: Choosing Methods Based on Popularity, Not Context Fit

What it looks like: Teams adopt frameworks they heard about at conferences—Design Thinking, Jobs-to-Be-Done, Continuous Discovery—without understanding whether those methods fit their specific constraints and context.

Example: "We tried running weekly discovery interviews like Continuous Discovery recommends, but we're a team of one with no research budget and no teacher access during the school year. After two months, we'd done three interviews and felt like we were failing."

Why it happens:

  • Methods taught as universal solutions

  • Teams want structure and turn to popular frameworks

  • No one talks about when frameworks don't work

  • Context assessment feels less urgent than "just start researching"

Check yours: Look at Context Area 2 constraints. Do you have time/budget/access for the methods you're trying? Are you attempting comprehensive research when quick strategic interviews would be more realistic?

The fix: Choose the parts of frameworks that work for your reality. You can do strategic interviews without adopting full Continuous Discovery. You can apply Jobs-to-Be-Done thinking without running a complete ODI process.

You're Not Alone

If you identified one or more of these blind spots, take a breath.

These patterns are incredibly common, especially at early-stage companies or where UX is still building credibility.

The fact that you're mapping your context means you're already ahead of most teams—you're thinking strategically before jumping to methods.

Your Context Summary

You've mapped your context. Now synthesize what you learned.

This summary captures the key elements from each context area. Keep it handy—it will guide your method selection and help you communicate your research approach to stakeholders.

My three-user priority: [Which user type should get the most design attention? Why?]

My #1 business goal: [What are you trying to achieve?]

My primary constraint: [What's limiting you most?]

My most reliable research source: [What data do you trust most?]

My roadmap reality: [How locked-in is your roadmap? How often does it change?]

My UX maturity: [Where are you on the maturity curve? Ad-hoc/Emerging/Structured/Optimizing]

My team structure: [How many people? Where does UX report? How much influence?]

The 2-3 research questions I need to answer:

This is part of a 4-part Strategic UX Toolkit series: