Philosophy for Software Engineers
Every engineering decision encodes philosophical commitments — about what exists, how we know things, what words mean, what we owe each other, and how wholes relate to parts. This curriculum makes those commitments visible, connecting major philosophical traditions to concrete engineering problems. Philosophy doesn't give you answers — it gives you better questions.
Key Ideas
Boundaries are decisions, not discoveries
There is no objectively correct place to draw a boundary between services, modules, or domains — only tradeoffs between different costs. Every decomposition [03] encodes assumptions about what changes together, who owns what, and what the team's cognitive limits are. Those assumptions expire. Treating boundaries as discoveries — as if the right answer were just waiting to be found — is what makes systems brittle [05]; treating them as contingent commitments that can be revisited is what makes them evolvable [14].
Technical debt is a knowledge problem
The standard framing — debt as shortcuts taken under pressure — misses the deeper issue. Most debt accumulates not from laziness but from the gap between what the team understood when the code was written and what they understand now. The code is a snapshot of past knowledge, and the world has moved on. This reframe matters because it changes the solution [06]: paying down debt isn't just about refactoring code, it's about redesigning the system that produces the code — the team's shared models, the feedback loops, the onboarding that transmits context.
Hallucination is architecture, not a bug
LLMs don't have beliefs that can be true or false; they generate statistically plausible continuations of text. Hallucination isn't a defect to be fixed in the next version — it's a structural feature of how the models work [13]. This means the right engineering response isn't more careful prompting. It's to design systems where the consequences of hallucination are bounded: retrieval grounding, output validation, human checkpoints, and graceful degradation when confidence is low.
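The "bounded consequences" idea can be sketched as a wrapper that treats model output as untrusted input. This is a minimal illustration, not a recommended library: `call_llm` is a hypothetical stand-in for a real model call, and the JSON-with-confidence response shape is an assumption made for the example.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call, for illustration only."""
    return '{"answer": "4", "confidence": 0.42}'

def bounded_answer(prompt: str, min_confidence: float = 0.7) -> str:
    """Never trust raw model output: validate its structure, then gate on confidence."""
    raw = call_llm(prompt)
    try:
        parsed = json.loads(raw)                 # output validation: must be well-formed JSON
        answer = str(parsed["answer"])
        confidence = float(parsed["confidence"])
    except (json.JSONDecodeError, KeyError, TypeError, ValueError):
        return "ESCALATE: malformed model output"         # graceful degradation
    if confidence < min_confidence:
        return "ESCALATE: low confidence, route to human" # human checkpoint
    return answer
```

The point is architectural: whatever the model emits, the blast radius of a hallucination is limited to the two escalation paths.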
Record what you didn't know, not just what you decided
Decision records typically document what was chosen and why. The more valuable — and more neglected — half is the epistemic state at decision time [08]: what assumptions were load-bearing, what alternatives were considered and on what evidence, what the team was uncertain about, what would have changed the call. Future engineers don't just need to know what you decided; they need to know when to revisit it. That requires knowing the conditions under which the decision was valid.
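One way to make the epistemic half of a decision record operational is to store the revisit conditions as data rather than prose. A hedged sketch — the field names here are illustrative, not a standard ADR schema:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionRecord:
    """A decision record that captures epistemic state, not just the outcome."""
    decision: str
    rationale: str
    load_bearing_assumptions: list[str] = field(default_factory=list)
    alternatives_considered: dict[str, str] = field(default_factory=dict)  # option -> why rejected
    open_uncertainties: list[str] = field(default_factory=list)
    revisit_when: list[str] = field(default_factory=list)  # conditions that invalidate the decision

    def should_revisit(self, observed_conditions: set[str]) -> bool:
        """The record acts as a tripwire: if any revisit condition now holds, re-open the call."""
        return any(cond in observed_conditions for cond in self.revisit_when)

record = DecisionRecord(
    decision="Single Postgres instance, no sharding",
    rationale="Team knows Postgres; current write volume is trivial",
    load_bearing_assumptions=["write volume stays under 1k/s"],
    revisit_when=["write volume exceeds 1k/s", "multi-region requirement appears"],
)
```

A future engineer who observes `"write volume exceeds 1k/s"` doesn't have to reverse-engineer whether the original decision still applies — the record says so.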
Naming is philosophy
When two engineers argue about whether something should be called an Event or a Command, a User or an Account, a Service or a Module — they are arguing about ontology [02]. They are disagreeing about what kinds of things exist in the domain and how those things relate. The argument feels petty but is genuinely hard because it encodes commitments that propagate through the entire codebase. Good naming isn't aesthetic taste; it's applied metaphysics.
Rules can't replace character
Engineering ethics is often treated as a compliance problem: establish the rules, follow the rules, enforce the rules. But Aristotle's insight — that virtue is a disposition developed through practice, not a procedure applied to situations — transfers directly [09]. Code review policies don't make engineers care about the next person who reads their code; they mechanize a behavior while leaving the underlying orientation untouched. Technical debt is a moral phenomenon: it is an obligation imposed on future engineers without their consent, by people who won't be around to pay for it.
Expertise is restructured perception
Expert engineers don't just have more facts available; they perceive situations differently. What a novice sees as a list of symptoms, an expert sees as a pattern with a name. This is Heidegger's ready-to-hand vs. present-at-hand [12]: tools in use disappear from consciousness; they only become visible as objects when they break. Debugging is the moment the system stops being ready-to-hand and becomes an object of investigation. Understanding this shift — and how to deliberately induce it — matters for learning, teaching, and building environments that accelerate expertise [06].
How this plan was made
Each plan on learnings is built by a hand-crafted agentic pipeline: research agents gather primary sources, a claim reviewer verifies facts against them, and a sequencer orders modules for how people actually learn. The curation — topic selection, framing, editorial standards — is Nicolas's. The research and writing are AI-assembled.