Learning How to Teach
Every instructional decision — sequence, pacing, feedback, assessment — encodes a claim about how people learn. Many of those claims are wrong, including ones that feel obviously true to learners and teachers alike. This curriculum works through what the evidence actually shows: the gap between what feels like learning and what is, the cognitive load limits that constrain every design, the assessment mechanisms that quietly determine what gets learned, and the organizational conditions without which good pedagogy can't survive first contact with the institution. Teaching well isn't a gift — it's applied learning science.
Key Ideas
What feels like learning rarely is
The learner's internal signal of "I get this" is systematically miscalibrated. Massed practice and re-reading feel productive because they produce fluency in the moment; spaced retrieval, interleaving, and desirable difficulties feel harder and slower yet produce durable learning [02]. This isn't a quirk — it's the most load-bearing finding in learning science, because it means designers cannot trust learner satisfaction as evidence of effectiveness [08]. The same gap explains why neuromyths like learning styles persist despite repeated falsification [11]: they feel validating. Evidence-grounded design means shipping things that will, at first, feel worse.
Working memory is the budget everything else spends
Cognitive Load Theory is not one framework among many — it is the master constraint every other design decision operates under [01]. Load comes from how material is presented, not just how complex the material is; extraneous load is a design defect, not an inevitability. Scaffolding, once you see this, stops being pedagogical etiquette and becomes applied load management — holding complexity in reserve until the learner has the schemas to absorb it [04]. Flow states are the same phenomenon from the motivational side: challenge calibrated to current capacity [06]. If you ignore load, nothing else you design will work.
Assessment is the hidden curriculum
Students learn what they expect to be assessed on, not what you said was important — this is the backwash effect, and it overrides every stated objective [03]. Constructive alignment doesn't mean matching assessment to objectives as a bureaucratic exercise; it means recognizing that the assessment is the objective, from the learner's point of view. Feedback, similarly, is not a payload — it's a loop, and it only corrects behavior when it closes back to revision [05]. Grades without comments don't change behavior; comments without a revision opportunity barely do. Designing assessment honestly is designing the curriculum.
Expertise is restructured perception, not more facts
Experts don't just know more than novices — they see differently. Deliberate practice builds mental representations that let experts chunk patterns novices perceive as unrelated symptoms [07]. This reframe changes what instruction is for: the point isn't to transfer a list of facts but to rebuild the learner's perceptual apparatus around schemas and chunks [01]. It also explains why expert-written material often fails novices — experts have forgotten what it was like to see the domain as unparsed noise. Much of what experts know is tacit and resists articulation [13], which is the real ceiling on training programs and the real reason cognitive apprenticeship matters.
Difference is a design signal, not an edge case
Accommodations for neurodivergent learners are most often treated as retrofits — something bolted on after the "normal" design is done. The evidence points the other way: designs that work for neurodivergent learners work for everyone (the curb-cut effect), and designs that don't work for them are revealing a defect rather than hitting an edge case [10]. The same logic applies across cultures. Monocultural curricula don't fail cross-culturally because of "content gaps" — they fail because their defaults smuggle in cultural assumptions that were never universal to begin with [09]. Treating difference as signal rather than exception makes the design better for the notional majority too.
Engagement is a confound, not a synonym for learning
Engaging experiences can teach, and they can also teach nothing — the two are separable variables [11]. Game-based learning works when the underlying pedagogy is sound; when it isn't, adding game mechanics produces engagement without transfer [06]. The same confound appears in AI tutoring: the evidence shows that how the tool is used (Socratic questioning vs. answer generation) dominates which tool is used [12]. Every few years a new medium arrives and the question is always whether it's "effective" — which is the wrong framing. The right question is whether the pedagogy embedded in its typical use is sound.
Teaching is organizational, not personal
The romance of great teaching locates it in individual virtue — the gifted instructor who transforms lives. The evidence puts the load-bearing variables elsewhere [13]. Psychological safety is the precondition without which team learning doesn't happen at all; it is a designed organizational property, not a personality trait of the leader. Seat-time metrics persist because they're easy to measure, not because they measure anything that matters — they hide whether anyone learned. Design rationale is what survives staff turnover and keeps each new cohort from re-discovering the same failures. Most of what makes instruction work or fail is architectural, and most organizations never look at the architecture [05].
Memory and Retrieval
Why what feels like learning often is not, and what the evidence says instead
How this plan was made
Each plan on learnings is built by a hand-crafted agentic pipeline: research agents gather primary sources, a claim reviewer verifies facts against them, and a sequencer orders modules for how people actually learn. The curation — topic selection, framing, editorial standards — is Nicolas's. The research and writing are AI-assembled.