The Empirical Designer

Synthesizing the curriculum into a living design practice — and understanding where your work fits in the larger system of organizational learning

Learning Objectives

By the end of this module you will be able to:

  • Apply backward design, cognitive load theory, retrieval science, and feedback principles to critique a real learning experience.
  • Articulate design decisions using specific empirical evidence rather than convention or intuition.
  • Explain organizational learning theory and situate individual course design within team and system-level learning.
  • Produce a design rationale document that connects instructional choices to evidence.
  • Identify at least three concrete areas where your current design practice conflicts with the evidence and formulate a change plan.

Key Principles

1. Seat time is not learning

The most persistent invisible assumption in curriculum design is that contact hours translate into capability. They do not. Research and policy have converged on a clear finding: time-in-seat does not guarantee mastery, and a passing grade built on attendance requirements may conceal significant gaps in understanding. All 50 U.S. states now permit competency-based learning models in some form — a policy shift that reflects what learning science has said for decades. Designing a curriculum means designing for demonstrable competency, not for coverage.

This principle has a direct implication for critique work: when you analyze a learning experience, one of the first questions to ask is whether progression is gated by time or by evidence of mastery.

2. Design decisions need rationale, not just logic

Instructional choices that feel coherent in the moment ("learners need a worked example here," "let's add a quiz at the end") are not the same as choices grounded in evidence. The difference between an intuitive practitioner and an evidence-informed designer is the ability to trace a design decision back to a specific, defensible empirical claim.

Nonaka and Takeuchi's SECI model offers a useful lens here: knowledge conversion fails not because people don't know things, but because tacit expertise never gets externalized into explicit rationale. The same failure occurs in instructional design: experienced designers carry implicit models that never get articulated, reviewed, or revised. A design rationale document is the externalization step — it turns intuition into a claim that can be examined, challenged, and improved.

3. Individual courses live inside larger learning systems

A single learning experience is not a self-contained artifact. It functions inside a team, a department, an organization — each of which has its own learning dynamics. Academic literature distinguishes between "organizational learning" (a process) and "learning organization" (a desired state). The process involves modifying mental models, rules, and knowledge at individual, group, and organizational levels. The desired state is where that process runs continuously and is structurally supported.

Designing a learning experience without attending to the system it enters is like writing software without understanding the architecture it runs on. Courses can be individually excellent and systemically useless if the environment doesn't support transfer, practice, or feedback.

4. Single-loop fixes and double-loop redesigns are both necessary

When a learning experience isn't working, two fundamentally different responses are available. Single-loop learning adjusts methods and processes to achieve existing goals more effectively — double-loop learning reevaluates the goals themselves. Most curriculum iteration is single-loop: add a quiz, restructure the sequence, tighten the examples. This is valuable. But it can also become a way of polishing a design that is solving the wrong problem.

Empirical practice requires the discipline to ask the double-loop question: not just "how do I make this work better?" but "is this the right thing to be building?" That question is uncomfortable. It requires examining assumptions about the learner, the outcome, and the organizational need — not just the instructional strategy.

5. Psychological safety is a design variable, not a soft concern

Edmondson's foundational research demonstrated that team psychological safety is associated with increased learning behavior, and that learning behavior mediates the relationship between psychological safety and team performance. This finding has been replicated across educational, healthcare, and organizational contexts, and it supports a clear pathway: safety → learning behavior → performance.

For the learning designer, this is not background context. It is a design constraint. If a learning environment does not make it safe to be wrong, to ask questions, or to surface confusion, then retrieval practice, feedback, and active exercises will underperform — because learners will disengage from the conditions under which those strategies actually work.

6. Cognitive offloading is a legitimate design tool

Meta-analytic evidence shows that cognitive offloading — externalizing memory demands through notes, written records, or external tools — reduces individual differences in memory task performance. This variance reduction suggests that strategic design of external scaffolds is not about "making things easier." It is about redirecting attention to what actually matters.

When designing complex procedural or analytical learning experiences, the question is not whether to allow offloading, but which cognitive operations should remain internal to the learner and which should be externalized. That is a design decision — not a policy default.

7. Dominant mental frames filter feedback before it lands

Organizations systematically distort or fail to process feedback through cognitive barriers rooted in dominant mental frames and institutionalized beliefs. Outdated strategic frames, accumulated from past successes, are reused automatically to interpret current information without critical evaluation, even when they are inadequate for novel situations.

The same mechanism operates in design teams and in individual designers. Feedback from learners, from assessment data, or from facilitators gets filtered through existing beliefs about what good learning looks like. Double-loop learning requires noticing when this filtering is happening — which is much harder than it sounds.


Annotated Case Study

A corporate onboarding program that was technically excellent and organizationally useless

A mid-sized technology company redesigned its technical onboarding program after struggling with new-hire ramp time. The previous program had been largely informal — shadowing, ad-hoc mentoring, and a document dump. The new program was built with care: structured modules, well-sequenced content, worked examples, a final assessment.

Six months after launch, ramp time had not improved. Exit interviews with underperforming new hires consistently cited a lack of confidence, uncertainty about who to ask for help, and a feeling that questions were unwelcome.

What happened at the design level. The program was strong at the module level. It was poorly designed at the system level. Three overlapping failures:

  1. The assessment was seat-time based. Progress was tracked by module completion, not by demonstrated capability. New hires who didn't understand the material could advance simply by finishing the modules. The design violated the mastery principle before learners ever reached real work.

  2. Psychological safety was not designed for. The program was entirely self-paced and asynchronous. There was no structured space for confusion — no cohort, no facilitated discussion, no mechanism for surfacing questions without judgment. Knowledge sharing requires perceived safety, and the program's structure actively suppressed it.

  3. Tacit knowledge was not addressed. The program documented what senior engineers knew. It did not transmit how they reasoned. Design documents at Google serve as a mechanism to scale the knowledge and expertise of senior engineers throughout the organization — precisely because they capture reasoning, trade-offs, and judgment, not just conclusions. The onboarding program captured conclusions. Newcomers arrived knowing the rules but not why the rules existed.

The double-loop question that wasn't asked. The original redesign assumed the problem was content quality and structure. It solved that problem. The actual problem was that the organization had not created the conditions for learning to occur — and no amount of instructional improvement could substitute for that.

What a revision grounded in evidence would look like. A revised program would:

  • Gate progression on demonstrated competency, not module completion.
  • Build structured cohort interactions with explicit psychological safety norms.
  • Include annotated examples of senior reasoning — not just procedures, but the thinking behind design decisions.
  • Position the onboarding program explicitly within a multi-level learning system: individual learning feeds team capability, which feeds organizational knowledge.

What this case illustrates about critique. A good design critique does not start with the modules. It starts with the system the modules live in. Only once you understand the organizational learning context (the incentives, the culture, the feedback mechanisms) can you accurately attribute effects to their causes.


Step-by-Step Procedure

Running a design critique grounded in evidence

This procedure is for critiquing an existing learning experience — one you designed, inherited, or encountered. The goal is a revision plan with explicit evidence-based rationale.

Step 1: Map the design against the evidence base.

List every significant design decision in the learning experience:

  • How are learning objectives written?
  • How is content sequenced?
  • What assessment mechanisms exist, and when do they occur?
  • How is feedback structured?
  • What practice opportunities are built in?
  • How is transfer supported?

For each decision, ask: what is the implicit theory of learning behind this choice? Write it down. You are looking for the assumptions, not the justifications.

Step 2: Identify the seat-time assumption.

Is progression gated by completion (time/coverage) or by demonstrated competency? Seat-time requirements conceal capability gaps that competency-based systems surface. Mark every place in the design where a learner can advance without evidence of understanding.
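
To make this audit mechanical, it can help to write the check down. A minimal sketch in Python, assuming a course is represented as a list of modules whose gate field records how progression is decided; the structure and field names are hypothetical, not any platform's real schema:

```python
# Hypothetical audit: flag every module a learner can pass without
# evidence of mastery. The "gate" values are illustrative labels.

course = [
    {"name": "Orientation",         "gate": "completion"},
    {"name": "Core concepts",       "gate": "mastery"},
    {"name": "Applied practice",    "gate": "completion"},
    {"name": "Capstone assessment", "gate": "mastery"},
]

def seat_time_gates(modules):
    """Return the modules where advancement requires no demonstrated mastery."""
    return [m["name"] for m in modules if m["gate"] != "mastery"]

for name in seat_time_gates(course):
    print(f"Seat-time gate: {name} (advancement requires no evidence of understanding)")
```

Every module this flags is a place where the seat-time assumption is embedded in the design.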

Step 3: Audit the psychological safety conditions.

Does the design create conditions where learners can surface confusion, attempt retrieval, and be wrong without social cost? Learning behavior is positively correlated with psychological safety. If the design is purely asynchronous and solo, ask whether the mechanisms for confusion-surfacing exist elsewhere in the system — and if they don't, that is a gap.

Step 4: Identify what tacit knowledge is unaddressed.

What do experienced practitioners know that is not in the material? Institutional memory is destroyed through unmanaged transitions precisely because tacit knowledge is not externalized into transmissible form. What reasoning, judgment, and contextual knowledge are missing from the design — and what would it take to include them?

Step 5: Apply the single-loop / double-loop test.

Make two lists:

  • Single-loop fixes: improvements to methods and strategies that serve the existing goals (better examples, tighter sequencing, more practice opportunities).
  • Double-loop questions: challenges to whether the goals themselves are correct, whether the organizational context supports the learning, whether the problem the course is solving is the actual problem.

You do not need to resolve the double-loop questions now. You need to have named them.

Step 6: Locate the design within the organizational learning system.

Learning organizations integrate learning across individual, group, and organizational levels. Where does this learning experience sit in that structure? Who learns individually from it? What team capability is supposed to result? What organizational knowledge is supposed to be created or preserved?

If there is no answer to those questions, the design is floating — disconnected from the system it is supposed to improve.

Step 7: Write the design rationale document.

For each significant design decision, write:

  • What the decision is.
  • What evidence supports it.
  • What the decision would look like if the evidence pointed the other way.
  • What the known limits of this approach are.

This document is the output of a critique, not a justification of what already exists. Decisions that cannot be grounded in evidence are candidates for revision.
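
Treating rationale entries as structured records rather than free prose makes ungrounded decisions easy to spot. A minimal sketch, assuming the four prompts above as fields; the class and field names are illustrative, not a standard template:

```python
# Hypothetical rationale record: one entry per significant design decision.
from dataclasses import dataclass, field

@dataclass
class RationaleEntry:
    decision: str                                       # what the decision is
    evidence: list[str] = field(default_factory=list)   # empirical claims that support it
    counterfactual: str = ""                            # the design if the evidence pointed the other way
    known_limits: str = ""                              # where this approach stops working

    def revision_candidate(self) -> bool:
        """A decision with no supporting evidence is a candidate for revision."""
        return not self.evidence

entry = RationaleEntry(decision="Quiz at the end of each module")
print(entry.revision_candidate())  # True: no evidence recorded, so flag it
```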

Step 8: Produce the revision plan.

Identify at least three areas where the current design conflicts with the evidence. For each:

  • Describe the conflict precisely.
  • State the specific evidence that grounds the revision.
  • Propose a concrete design change.
  • Identify any organizational conditions that would need to change for the revision to work (psychological safety, system-level support, assessment reform).
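
For teams that keep planning artifacts under version control, the same structure can be captured as one record per conflict. A minimal sketch following the four bullets above; all names are illustrative:

```python
# Hypothetical revision-plan item: one record per design-vs-evidence conflict.
from dataclasses import dataclass, field

@dataclass
class RevisionItem:
    conflict: str                   # precise description of the conflict
    evidence: str                   # the specific evidence grounding the revision
    proposed_change: str            # the concrete design change
    required_conditions: list[str] = field(default_factory=list)  # organizational changes needed

plan = [
    RevisionItem(
        conflict="Progression is gated on module completion, not demonstrated mastery",
        evidence="Seat-time requirements conceal capability gaps (Principle 1)",
        proposed_change="Gate each module on a demonstrated-competency check",
        required_conditions=["assessment reform", "manager buy-in for variable ramp time"],
    ),
    # ...the procedure asks for at least three such items
]
```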

Active Exercise

Critique and revision plan for a real learning experience

What you will do. Select a learning experience you have designed, facilitated, or participated in recently. It does not need to be one you built — an experience you encountered as a learner is equally valid.

Produce a document with the following structure:

Part 1: Design map (1–2 pages). Describe the significant design decisions in the experience. For each, write the implicit theory of learning behind it — not what the designer said, but what the choice assumes about how people learn.

Part 2: Critique against the evidence base (2–3 pages). Apply the procedure above. Where does the design conflict with the evidence? Be specific. Vague observations ("the assessment could be better") are not critique. "The assessment gates progression on completion rather than demonstrated mastery, which means learners can advance without understanding" is critique.

Part 3: Organizational learning context (1 page). Where does this experience sit in the larger learning system? What individual, team, and organizational learning is it supposed to support? What is missing at the system level that the course cannot compensate for?

Part 4: Revision plan (1–2 pages). At least three concrete, evidence-grounded revisions. For each: the conflict, the evidence, the proposed change, the organizational conditions required.

Calibration questions. As you work, use these to self-assess:

  • Am I tracing design decisions to specific evidence, or am I using evidence to justify decisions I already made?
  • Have I asked the double-loop question — not just how to improve this, but whether this is solving the right problem?
  • Have I identified what the design cannot do, as well as what it can?

Stretch Challenge

Designing a professional learning community for instructional designers

Communities of practice succeed when they have passionate leaders, clear topical focus, proper governance, open membership, supporting tools, and cross-site participation structures. They also require that learning be inseparable from identity development — members need to see themselves as practitioners within the community, not just consumers of its outputs.

Your challenge: design a professional learning community for instructional designers in a mid-sized organization. The community's purpose is to improve evidence-grounded design practice across the organization.

Your design must address:

  1. Knowledge conversion. Nonaka and Takeuchi's SECI model identifies four modes: Socialization, Externalization, Combination, and Internalization. How will your community support each? Where are the likely failure points?

  2. Psychological safety. Knowledge sharing requires perceived safety. How will you design the community's norms and structures so that members can surface failures, conflicts with evidence, and design mistakes without social penalty?

  3. Single-loop and double-loop learning. How will the community distinguish between refining existing practice (single-loop) and questioning whether existing practice is serving the right goals (double-loop)? What structures will you build to make the double-loop question safe to ask?

  4. Institutional memory. Effective preservation requires ongoing human-to-human knowledge transmission embedded in organizational processes — not documentation alone. How will the community transmit tacit knowledge across generations of practitioners?

  5. Identity and trajectory. Professional identity develops through becoming a practitioner within a community, not merely through skill acquisition. How will your community open trajectories that allow members to see themselves as recognized practitioners with legitimate roles?

Produce a 2–3 page design proposal. Include explicit design rationale for each element — connected to specific evidence.

Key Takeaways

  1. Seat time is not mastery. Time spent in a course does not guarantee capability. Competency-based assessment — requiring demonstrated proficiency before progression — is more reliable than completion-based tracking. Every design that gates advancement on coverage rather than evidence of understanding embeds this assumption invisibly.
  2. Design rationale is the work. The difference between intuitive design and evidence-informed design is the ability to trace each significant choice back to a specific, defensible empirical claim. A design rationale document is not a deliverable alongside the course; it is the core professional artifact that distinguishes the evidence-informed designer from the intuitive practitioner.
  3. Single-loop and double-loop learning serve different problems. Refining a flawed design is single-loop. Questioning whether the design is solving the right problem is double-loop. Both are necessary. Most instructional iteration stops at single-loop because double-loop requires examining assumptions that feel foundational.
  4. Psychological safety is a design variable. If the learning environment does not make it safe to be wrong, retrieval practice, feedback, and active exercises will underperform. Safety is not a soft concern or an organizational responsibility that sits outside the design. It is a condition that the design either cultivates or violates.
  5. Individual courses live inside organizational learning systems. A learning experience that is excellent at the module level can be systemically useless if it is disconnected from the team and organizational learning structures it is supposed to serve. Critique that stays at the course level is incomplete critique.

Further Exploration

  • Organizational learning and the learning organization
  • Knowledge conversion and institutional memory
  • Communities of practice
  • Psychological safety and learning
  • Cognitive design
  • Cognitive barriers to learning