There's No Magic UX Framework (And That's Good News)
Strategic method selection beats rigid framework following
User Research
Product Design Strategy
UX Strategy
Jan 20, 2026
Let me guess how this happened at your company.
Someone—probably a director or VP—came back from a conference or finished a book and announced they'd found the solution to all your product problems. Design Thinking. Jobs-to-Be-Done. Continuous Discovery. Whatever framework just clicked for them.
Six months later, the enthusiasm faded. Research insights sat unused. The team moved on.
Sound familiar?
The frameworks aren't bad. The context mismatch is the problem.
The Real Issue: EdTech Breaks Framework Assumptions
Most UX frameworks—Design Thinking, Continuous Discovery, Lean UX, Jobs-to-Be-Done—assume a reality that doesn't exist in education.
They assume you can launch when you're ready. They assume users can switch products when they're unhappy. They assume you can talk to users whenever you need insights. They assume the buyer and user are the same person.
In EdTech? Usually not.
You have immovable back-to-school deadlines. Launch in October and teachers are already drowning, with zero bandwidth to learn your new interface or adapt to workflow changes. Miss the July-August window and you're essentially launching to an audience who can't engage.
Teachers are locked in for nine months once school starts. In consumer tech, "minimum viable product" works because unhappy users leave immediately. In EdTech, if teachers don't like your product in September, they spend nine months resenting it and planning their escape the moment June arrives. Mid-year iterations that would be "rapid learning" in other contexts become "permanent trust damage" when users can't leave.
User access is seasonal. Continuous Discovery assumes weekly user conversations. But September through May, teachers are slammed—teaching five classes, grading 150 assignments, managing student crises. Even with incentives (if you have the budget for them, which I often didn't), reaching out feels tone-deaf. You get 10-12 weeks in summer to gather insights that need to last an entire year.
You're designing for three different users with three different goals. I wrote about this three-user dynamic in detail in another post, but the short version: admins buy your product but rarely use it, teachers control whether students ever see your product, and students use what they're assigned whether they like it or not. Most frameworks assume one user type. EdTech requires balancing three.
These aren't minor differences. They're fundamental misalignments that break standard approaches.
Framework Misalignments
Run Design Thinking sprints in January. Teachers love the validated prototype by Friday. The problem? Engineering needs four months to build it. Launch in late May—after districts have finalized purchasing for next school year. The insights were solid; the timing wasn't. And if you wait for the next sales cycle, the research is over a year old before anyone can act on it.
I've also seen teams follow Lean UX principles and ship quickly—minimally viable but clunky, adding 10 minutes to teachers' daily workflows. In consumer tech, unhappy users leave immediately. In EdTech, teachers are locked in. Nine months of resentment. Teachers warning colleagues to avoid it. When the company iterated monthly based on usage data, each improvement paradoxically made things worse—teachers hated relearning workflows they'd just figured out. "I finally got this, and now you've changed it again." When June arrived, teachers switched to competitors immediately. First impression created permanent trust damage (read a post about this here).
The tragic part? These frameworks contain brilliant tools. Teresa Torres' emphasis on grounding interview questions in past experiences rather than hypotheticals is invaluable. Bob Moesta's focus on emotional and social dimensions of product adoption reveals insights purely functional analysis misses. IDEO's empathy-building exercises help teams understand users deeply.
The frameworks aren't the problem. Rigid application without context assessment is the problem.
The Strategic Shift: Building Your Own Toolkit
So if rigid framework adoption doesn't work, what does?
A fundamental shift in how you approach UX methods: Instead of asking "Which framework should we follow?", ask "Given our specific constraints, which tools will get us to confident decisions fastest?"
This means understanding your organizational context—UX maturity, roadmap flexibility, technical constraints, timeline pressure. It means understanding your EdTech-specific context—where you are in the sales cycle, what user access looks like, what lock-in periods mean for your risk tolerance, what regulatory requirements constrain your solutions.
Use the free Context Mapping Workbook to understand your unique context as the first step!
Then select tools strategically from multiple frameworks based on what your situation actually demands.
From Continuous Discovery: Interview techniques, outcome focus, opportunity mapping. But adapt "weekly" to your access windows. Batch intensive research when teachers are accessible and their memories are fresh. Use analytics and admin interviews during the school year.
From Design Thinking: Empathy-building observations, rapid prototyping, collaborative synthesis. But time your research to market windows and scope prototyping to buildable solutions given your technical constraints.
From Jobs-to-Be-Done: Emphasis on emotional and social dimensions, focus on motivations over demographics. But research the "jobs" of all three user types—what job is the admin hiring your product to do? What job is the teacher hiring it to do? And when adoption is involuntary, reframe: instead of "why did you hire this?", ask "what would make this less of a burden?"
From Lean UX: Bias toward action and learning from usage. But redefine "minimum viable" based on EdTech lock-in reality—users will live with your product for 9-12 months. Use rapid iterations with beta users or in usability testing, not in production with locked-in teachers.
This isn't framework chaos. It's strategic selection—taking the best from each framework based on what your situation demands.
What This Looks Like in Practice
At one company, we had 10 months to research, design, build, and launch a complete platform rebuild with an immovable August deadline and no budget for research incentives. Research couldn't consume six months—we'd miss the only launch window that mattered.
We used every existing research source: sales call recordings, support tickets, prior user observations. We did guerrilla usability testing with volunteers from throughout the organization—not ideal, but better than nothing when budget was zero. We did rolling synthesis after each session so patterns were visible early rather than waiting until we'd finished all research.
The result? 18% increase in student rostering completion, 41% increase in diagnostic completion, launched on time with strong adoption from day one.
What would have failed? Trying to do comprehensive research with weekly teacher interviews. Teachers weren't available during the school year (and we had no budget for incentives), and sequential research would have blown our timeline entirely.
The Principles That Guide Strategic Selection

When you're building your own toolkit rather than following a pre-built framework, you need decision-making principles. Here are the six that guide my work:
Confidence thresholds over perfect research. Not every decision requires the same level of confidence. Choosing a button color? 60% confidence is fine—ship it, see what happens. Redesigning your entire onboarding flow right before school starts? You need to be 90% confident. The strategic insight: determine how confident you need to be, then work backward to the minimum research that gets you to that threshold.
Fast-to-insight over comprehensive research. The best research process isn't the most thorough one—it's the one that gets you to confident decisions fastest. This means strategic sampling over volume, parallel research streams when access is limited, rolling synthesis instead of batch analysis.
Outcomes over features. When stakeholders say "we need bulk grading," they usually mean "I need to give feedback on 150 assignments in under 2 hours instead of spending my entire weekend grading." Your job isn't building the requested feature—it's understanding the outcome they need to achieve and designing the complete workflow that enables it.
Context-driven method selection. This is where everything comes together. Assess your organizational maturity, your EdTech constraints, your timeline reality, your user access patterns. Then select tools that fit that context rather than following frameworks regardless of fit.
Building stakeholder buy-in throughout. Research doesn't drive decisions by being brilliant. Research drives decisions when stakeholders feel ownership of the findings. Involve them in shaping research questions before you start, share emerging themes weekly during research, include them in synthesis sessions. By the time you formally present, they already own the findings because they helped generate them.
Show users, don't just quote them. One of the most powerful tools for building stakeholder buy-in is showing clips of actual users. Not quotes. Actual video or audio of real people struggling, succeeding, or reacting. When a stakeholder watches a teacher spend 8 minutes trying to differentiate an assignment and finally give up in frustration, that lands differently than reading about frustration in a research report.
Why This Matters for Your Career
This strategic, context-driven approach isn't just better for your products—it's better for your professional credibility.
When you push for frameworks your organization can't support, research doesn't drive decisions. You lose credibility. "We tried UX research and it didn't work" becomes organizational memory.
When you assess context first and select tools strategically, research actually influences product direction. You build a track record of generating insights that teams act on. That's how you prove ROI in constrained environments.
Demonstrating that you understand when to use Design Thinking versus Continuous Discovery versus custom approaches—that you're making strategic decisions about methods rather than dogmatically following one framework—that's what distinguishes senior UX practitioners from those still learning the craft.
You're not a one-trick-pony designer following a rigid process. You're a strategic thinker who selects the right tools for each situation.
The Real Good News
You're not stuck waiting for someone to invent the perfect EdTech UX methodology.
You can build your own toolkit starting today by understanding your constraints and selecting tools strategically.
You don't need permission to combine techniques from multiple frameworks. You don't need to convince leadership to adopt a complete methodology. You can start small—one research project where you assess context first and select methods that fit.
The frameworks contain valuable tools. Your job is understanding which tools fit your reality and how to adapt them when standard approaches don't work.
That's strategy. And that's what separates teams that struggle with UX research from teams that use it to drive real product improvements in constrained EdTech environments.
This post is adapted from Part 1 of my upcoming book "The Strategic UX Toolkit: Context-Driven Research and Design for EdTech." The book walks through the complete five-phase process for building your own strategic toolkit based on your specific constraints. Subscribe to get updates about when it's released!
Ready to start building your own strategic toolkit?
