Instructional Design Frameworks
A practical map of backward design, constructive alignment, Bloom's taxonomy, Diátaxis, and UDL—and how they fit together
Learning Objectives
By the end of this module you will be able to:
- Apply the three-stage backward design process to plan a learning module.
- Write observable learning objectives calibrated to the right cognitive level using Bloom's taxonomy verbs.
- Explain constructive alignment and identify misalignment between objectives, activities, and assessment in a sample course.
- Describe the Diátaxis framework and when to apply it versus a competency-based structure.
- Articulate UDL as a proactive design strategy, not a retrofit for individual edge cases.
Core Concepts
Backward Design: Starting from the End
Most people design courses the way they write code without tests: start with what they know, build what seems logical, and hope the output matches the need. Backward design inverts this.
Wiggins and McTighe's Understanding by Design defines a three-stage process:
- Identify desired results — What should learners know, understand, or be able to do by the end?
- Determine acceptable evidence — What assessments would prove those outcomes were achieved?
- Design learning experiences — Only then, design the activities and content that enable learners to perform.
The question is not "what will I teach?" but "what do I want learners to be able to do, and what would prove they can?"
The distinction from content-first planning matters more than it sounds. When designers start with their favorite topics or available materials, they optimize for coverage. When they start with outcomes and evidence, they optimize for performance. The sequence of decisions changes the entire structure of what gets built.
Some controlled studies show statistically significant performance gains when backward design is implemented rigorously—post-test experimental averages around 90% versus 75% in control groups. However, the empirical evidence remains mixed: a recent rapid review found no demonstrable efficacy in health professional education contexts, and implementation barriers are significant. Understanding the concept does not guarantee effective application; it requires deliberate practice and institutional support.
Constructive Alignment: The Coherence Principle
John Biggs's constructive alignment extends backward design into a systemic design principle: intended learning outcomes (ILOs), teaching and learning activities, and assessment tasks must be mutually aligned. Each component must address the same targets, so that what is taught, what is practiced, and what is assessed all reinforce the same outcomes.
The underlying premise is that learning is constructed by what activities learners undertake, not by what teachers do. The verbs in outcome statements should directly shape what students actually practice—and assessments should measure whether those specific outcomes were achieved.
Backwash is the mechanism that makes alignment so consequential. Biggs's term describes how assessment shapes student learning behavior: students learn what they perceive will be assessed, not what the syllabus says. When assessments clearly target stated outcomes, backwash is positive—student effort is directed at genuine learning. When assessments diverge from stated outcomes, backwash is negative—students rationally ignore your stated objectives and optimize for whatever they expect to be tested.
This is not a bug in student behavior. It is a rational response to misaligned course design.
Research confirms the downstream effects: students in aligned conditions apply significantly more elaboration strategies, connecting new content to prior knowledge. They achieve better conceptual understanding—not just surface recall—and report higher perceived competence. Alignment also enhances motivation and supports higher-order thinking.
Bloom's Taxonomy: The Vocabulary for Calibration
Backward design tells you the sequence. Constructive alignment tells you what must cohere. Bloom's taxonomy gives you the vocabulary to specify what level of cognitive work you're actually targeting.
The revised taxonomy (2001) defines six cognitive levels, each associated with specific action verbs:
| Level | Verbs (examples) | Cognitive demand |
|---|---|---|
| Remember | define, identify, recall | Lowest |
| Understand | explain, summarize, demonstrate | |
| Apply | apply, use, solve | |
| Analyze | distinguish, differentiate, organize | |
| Evaluate | judge, justify, select | |
| Create | design, construct, develop | Highest |
Using these verbs is not bureaucratic overhead. They force precision about what cognitive work you actually expect. Vague objectives like "students will understand networking" cannot anchor assessment design. "Students will distinguish between stateful and stateless protocols and justify the tradeoff in a given architecture scenario" can.
Quality course design requires that the assessment verb level matches the objective verb level. If your objective uses "design" (Create), a multiple-choice quiz testing recall fails the alignment test—not because quizzes are bad, but because the cognitive demand is mismatched.
Research consistently shows that instructors overestimate the cognitive demand of their assessment items. Studies document curricula where objectives target analysis and evaluation but test items cluster at understanding (56.5% of items) and application (72.5% after reclassification), with evaluating at just 8.5% and creating at 1.5%.
When you believe you are assessing higher-order thinking, you probably are not. Check the verbs in your test items—not just the verbs in your objectives.
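The verb-matching check above is mechanical enough to sketch in code. This is a minimal illustration, not an official mapping: the verb list mirrors the example table earlier in this section, and the `levels_match` helper is a name invented here for demonstration.

```python
# Illustrative verb-to-level lookup, mirroring the example table above.
# Real verb lists vary by source; treat this mapping as a sketch.
BLOOM_LEVELS = {
    # Remember (1)
    "define": 1, "identify": 1, "recall": 1,
    # Understand (2)
    "explain": 2, "summarize": 2, "demonstrate": 2,
    # Apply (3)
    "apply": 3, "use": 3, "solve": 3,
    # Analyze (4)
    "distinguish": 4, "differentiate": 4, "organize": 4,
    # Evaluate (5)
    "judge": 5, "justify": 5, "select": 5,
    # Create (6)
    "design": 6, "construct": 6, "develop": 6,
}

def levels_match(objective_verb: str, assessment_verb: str) -> bool:
    """True when the assessment demands the same cognitive level as the objective."""
    return BLOOM_LEVELS[objective_verb.lower()] == BLOOM_LEVELS[assessment_verb.lower()]
```

For example, `levels_match("design", "recall")` flags the Create-objective/recall-quiz mismatch described above, while `levels_match("distinguish", "differentiate")` passes because both verbs sit at the Analyze level.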
Outcome cascading is the structural application of Bloom's taxonomy to a full curriculum. Course-level outcomes must be built from lesson-level outcomes, with earlier lessons establishing foundational competencies that enable later, more complex work. Lessons should increase in cognitive demand and serve as prerequisites for subsequent lessons. This isn't about making a neat diagram—it's about verifying that learners have what they need before you require them to use it.
Diátaxis: A Content-Type Framework for Technical Contexts
The Diátaxis framework structures documentation—and by extension, learning content—into four types matched to learner needs:
| Type | Learner need | Form |
|---|---|---|
| Tutorial | Guided learning by doing (onboarding) | Step-by-step, hand-held |
| How-to Guide | Accomplish a specific goal (competent user) | Goal-oriented, assumes context |
| Reference | Accurate technical facts | Authoritative, lookup-optimized |
| Explanation | Conceptual understanding and context | Discursive, reasoning-focused |
The framework's core insight is that mixing content types creates confusion: embedding conceptual explanation inside a how-to guide slows down the competent user trying to complete a task, while a reference page stripped of explanation loses the beginner who needs context. Different learner states require different content shapes.
When to use Diátaxis versus a competency-based structure: Use Diátaxis when learners must navigate a body of content at different stages of expertise—especially in technical domains where some users want to look things up, others need to get something done, and others need conceptual grounding. Use a competency-based structure (backward design + constructive alignment) when you need to move a defined learner cohort through a defined outcome by a defined point. The two frameworks are not competitors; they operate at different levels of the design problem.
Universal Design for Learning: Proactive Accessibility
UDL is a paradigm shift from individual accommodations to systemic accessibility. Rather than designing a course for an imagined average learner and then retrofitting exceptions, UDL builds cognitive diversity into the design from the start.
The three UDL principles:
- Multiple means of representation — How content is presented
- Multiple means of action and expression — How learners demonstrate understanding
- Multiple means of engagement — How learners are motivated and supported
In practice, this means things like: flexible pacing (self-paced modules accommodate both hyperfocus and attention variation), scaffolded assignments that break complex tasks into explicit sub-steps, alternatives to timed high-stakes assessments, and choice in how learners demonstrate mastery.
UDL is about removing barriers that prevent learners from demonstrating competence they actually have. The learning outcome stays fixed. The pathway to demonstrating it becomes more flexible.
Retrofitting accommodations after the fact is more expensive, less effective, and signals to affected learners that they are an afterthought. Proactive UDL reduces that cost and reaches more learners.
The accommodation-to-accessibility shift matters for course economics: system-level redesign to accommodate cognitive diversity is more effective than individual accommodations alone. Designing for edge cases at the start often improves the experience for everyone.
LXD vs. ID: A Conceptual Distinction
Instructional Design (ID) is systematic and outcome-focused: it organizes learning toward measurable competency achievement. Learning Experience Design (LXD) is more user-centered and engagement-oriented: it prioritizes emotional connection, motivation, and moment-to-moment experience through storytelling and immersive scenarios.
In practice, both approaches overlap significantly and begin with learner analysis. The difference lies in emphasis, not method. No RCTs or systematic comparisons demonstrate that LXD produces superior learning outcomes compared to ID. The available guidance suggests integrating both traditions: use ID rigor for outcome architecture, use LXD sensibility for the learner's experience of moving through that architecture.
Step-by-Step Procedure
Running the Backward Design Process
This procedure applies at the module level. Run it once per defined learning unit before writing any content.
Stage 1: Define outcomes
- Write down what the learner should be able to do by the end—not what they should know, but what they should perform.
- Assign a Bloom's verb to each outcome. If you cannot, the outcome is not observable.
- Check that each outcome is achievable within this module's scope. If not, it belongs at the course level.
Stage 2: Design assessments
- For each outcome, ask: what would it look like if the learner achieved this?
- Design an assessment task that requires the learner to perform the target verb—not just report about it.
- Verify the assessment verb matches the outcome verb in cognitive level. Downgrade or upgrade the assessment, not the outcome.
- Decide the sequence: which assessments are formative checkpoints, which are summative? Early and frequent formative assessments support skill development and reflective practice; spaced assessment better supports retention and transfer.
Stage 3: Design learning activities
- List what prior knowledge learners need before attempting the assessments. If they do not have it, that becomes prerequisite content.
- Design activities that give learners practice at the cognitive level the assessment demands. Not one level lower.
- Apply progressive disclosure: reveal complexity gradually as learners develop competence, rather than presenting the full landscape upfront.
- Check outcome cascading: does this module's foundational content enable later modules' higher-order tasks?
Alignment audit
Before writing any actual content, fill in this matrix:
| Outcome | Bloom's level | Assessment type | Activity type | Match? |
|---|---|---|---|---|
| [Outcome 1] | [level] | [task] | [activity] | Y / N |
If any row shows a mismatch, resolve it before proceeding. Mismatches do not fix themselves during content production.
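The audit matrix lends itself to a simple programmatic pass. The sketch below assumes hypothetical outcomes and hand-assigned Bloom's levels; the row structure and field names are invented for illustration.

```python
# Hypothetical audit rows; outcomes and levels are illustrative only.
# Levels: 1=Remember ... 6=Create, assigned by hand from the verb used.
rows = [
    {"outcome": "Distinguish stateful vs. stateless protocols",
     "objective_level": 4, "assessment_level": 4},
    {"outcome": "Design a caching architecture",
     "objective_level": 6, "assessment_level": 1},  # recall quiz: mismatch
]

# Flag every row where the assessment's cognitive demand diverges
# from the objective's, before any content gets written.
mismatches = [r["outcome"] for r in rows
              if r["objective_level"] != r["assessment_level"]]

for outcome in mismatches:
    print(f"MISMATCH: {outcome}")
```

Running this over a real module's rows makes the misalignments explicit in one pass, which is exactly the point of the audit.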
Worked Example
Designing a Module on API Error Handling
Situation: You are building a module for software engineers on HTTP error handling. Your first draft of objectives reads: "Learners will understand 4xx and 5xx status codes."
Stage 1 — Outcome audit
"Understand" is a Bloom's Level 2 verb. Is that the real goal? Ask: what should engineers do with this knowledge? The answer is: diagnose and fix errors in their API integrations.
Revised outcome: "Given a set of HTTP error responses, distinguish between client-side and server-side error sources and select the appropriate remediation strategy."
That is Level 4 (Analyze) and Level 5 (Evaluate). The cognitive demand is now explicit.
Stage 2 — Assessment design
A multiple-choice quiz about status code definitions tests Level 1–2. It does not match the revised outcome.
Aligned assessment: A set of four realistic API error scenarios (JSON response bodies, status codes, headers). Learners must identify the error source, explain why, and select the correct fix from three plausible options with a written justification.
This requires distinguishing (Analyze) and justifying (Evaluate)—matching the outcome.
Stage 3 — Activity design
Before learners can analyze scenarios, they need:
- A reference of common status codes (Diátaxis: Reference content)
- Worked examples showing how to read error responses (Diátaxis: Tutorial)
- A how-to on querying logs to reproduce errors (Diátaxis: How-to Guide)
The Diátaxis types naturally fall out of what learners need to prepare for the assessment. The four-type framework is not imposed on top—it emerges from backward design.
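The Reference content in this example could be as small as a status-code lookup. A minimal sketch, with `classify_error_source` as a hypothetical helper name; note that this captures only the Remember-level facts, while the aligned assessment asks learners to go well beyond such a lookup:

```python
def classify_error_source(status: int) -> str:
    """Map an HTTP status code to its likely error source.

    4xx codes indicate client-side problems (bad request, auth, not found);
    5xx codes indicate server-side failures.
    """
    if 400 <= status <= 499:
        return "client"
    if 500 <= status <= 599:
        return "server"
    return "not an error"
```

For instance, `classify_error_source(404)` returns `"client"` and `classify_error_source(503)` returns `"server"`; diagnosing *why* a 503 occurred and selecting a remediation is the Analyze/Evaluate work the assessment targets.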
Backwash check: If the stated objective is "distinguish and select" but the only practice activity is reading a reference page, backwash will be negative. Learners will study definitions, not practice diagnosis. The activity must match the assessment cognitive demand.
Common Misconceptions
"Backward design means I write objectives last."
No. Backward design means you define outcomes before assessments and activities—but objectives are Stage 1, not an afterthought. The "backward" refers to inverting the content-first sequence, not to deferring outcome thinking.
"Constructive alignment is about writing objectives that sound sophisticated."
Alignment is not about the quality of the prose. A beautifully written objective that is never reflected in the assessment does nothing. The verb in the objective must appear in the cognitive demand of the assessment. That is the alignment that matters—not the elegance of the sentence.
"Bloom's taxonomy is a hierarchy of importance—higher is better."
Bloom's describes cognitive type, not value. A well-designed module on recalling emergency procedures should target Level 1 deliberately and accurately. Forcing every module toward "Create" produces inflated, misaligned designs. The goal is matching level to genuine need—not climbing the taxonomy for its own sake.
"UDL means designing separate versions for different learner types."
UDL is not differentiation by learner category. It is proactive design that builds flexibility in from the start—flexible pacing, scaffolded tasks, multiple expression options—so that the single course design works across a wider range of learners without retrofitting individual accommodations.
"Diátaxis replaces competency-based course design."
Diátaxis organizes content types within a documentation or knowledge system. It does not specify learning outcomes or sequence cognitive progression. It is a content architecture tool, not a learning design methodology. Use it alongside backward design, not instead of it.
"If I implement backward design, results will improve."
Evidence for backward design's efficacy is mixed. Implementation barriers—limited readiness, insufficient professional development, lack of institutional support—significantly constrain effectiveness even when the framework is well understood. Knowing backward design and practicing it with fidelity are different things.
Active Exercise
Alignment Audit on a Real Module
Choose a learning module you have recently designed, or one you are currently planning.
Step 1. List every stated learning objective. Next to each, write the Bloom's verb it uses and the cognitive level it implies.
Step 2. List every assessment task in the module. Next to each, write the cognitive level it actually demands of the learner.
Step 3. Fill in the alignment matrix:
| Objective | Obj. Bloom's Level | Assessment Task | Assess. Bloom's Level | Aligned? |
|---|---|---|---|---|
Step 4. For each row where "Aligned?" is No, make one decision:
- Raise the assessment to match the objective, or
- Revise the objective to match what you actually want to assess.
Step 5. Check for UDL gaps: which assessments have only one modality? Which activities assume a single pace? Identify one change that would make the module more accessible without changing the outcome.
Write up your audit findings in a short document (not for submission—for your own design record). The act of making the misalignments explicit, in writing, is the exercise.
Key Takeaways
- Backward design inverts the planning sequence: outcomes first, evidence second, activities third. This prevents content from driving design instead of need.
- Constructive alignment is a system property: ILOs, activities, and assessments must all point at the same cognitive target. Backwash means students learn what is assessed, not what is stated—so the assessment is the de facto curriculum.
- Bloom's taxonomy provides a calibration vocabulary: use verbs to specify cognitive level across objectives, activities, and assessments. The most common failure mode is cognitive level misalignment—objectives targeting higher-order thinking while assessments test recall.
- Diátaxis is a content-type framework for knowledge-rich domains: it separates tutorials, how-to guides, reference, and explanation to match content shape to learner state. It complements, not replaces, backward design.
- UDL is proactive, not remedial: build flexibility into course architecture from the start—pacing, scaffolding, expression options—rather than retrofitting accommodations for individuals after the fact.
Further Exploration
Backward Design
- Understanding by Design - Wiggins & McTighe (UBC resource) — The primary source for backward design. Read Chapter 1 before designing your next module.
Constructive Alignment
- Constructive Alignment - John Biggs — Biggs's own exposition of the framework, including the backwash mechanism.
- Revisiting the Relationship Between Constructive Alignment and Learning Approaches — Empirical study on how alignment affects elaboration strategies and learning outcomes.
Bloom's Taxonomy
- Using Bloom's Taxonomy to Write Effective Learning Objectives — Practical verb lists and examples for each cognitive level.
Diátaxis Framework
- Diátaxis: Start Here
- What is Diátaxis? - I'd Rather Be Writing — A practitioner's take on how Diátaxis works and where it fits.
Universal Design for Learning
- Building Neurodiversity-Inclusive Campuses — The research case for systemic UDL over individual accommodations.