What Systems Are For
The philosophical foundations of rationale, intent, and purpose in software artifacts
Learning Objectives
By the end of this module you will be able to:
- Articulate what 'rationale' means in the context of software systems, distinguishing it from requirements, documentation, and implementation.
- Apply Aristotle's four causes to analyze a software artifact and identify its final cause (telos).
- Explain the dual nature of software artifacts as both physical objects and intentional objects.
- Use Chesterton's Fence reasoning to evaluate whether an existing constraint should be changed.
- Distinguish between encoding what a system does and encoding why it does it that way.
Core Concepts
The Missing Cause
When you read a codebase for the first time, you can usually reconstruct a great deal from the source alone. You can identify what it is made of — the language, the data structures, the dependencies. You can trace how it works — the algorithms, the call paths, the state transitions. You can observe its structure — the modules, the layers, the patterns.
What you almost never find encoded is why it exists, and more precisely, what it must not become.
Aristotle's framework of four causes names this gap directly. The four causes are:
- Material cause: what a thing is made of.
- Efficient cause: how it came to be / how it operates.
- Formal cause: its structure and organization.
- Final cause (telos): its purpose — what it is for.
Software systems routinely encode the first three. The final cause — the intended purpose, the use cases it was designed to serve, the uses it should not serve — is rarely encoded as a technical constraint. It lives in a Confluence page, in someone's memory, or nowhere at all.
Teleological system design extends Aristotle's framework into a methodological claim: once you adopt teleology as a design lens, inquiry runs backward from purpose. You ask "what is this for?" first, and that answer governs what counts as a valid design decision. Telos is not decoration — it is a generative constraint.
When a staff engineer says "this service should never be called synchronously from a user-facing request," they are expressing a final cause. The question is whether that constraint lives only in their head, or whether it is enforced by the architecture.
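One way to move such a constraint out of someone's head and into the architecture is an architectural fitness function that fails the build when a forbidden dependency appears. The sketch below is a minimal, hypothetical version: the module names, the `api`/`fraud` package split, and the rule table are all invented for illustration, and a real project would read source files from disk rather than inline strings.

```python
import ast

# Hypothetical module sources; in a real repo you would read these from disk.
MODULES = {
    "api.checkout_handler": """
import order_service            # user-facing request path
from fraud import fraud_client  # violation: synchronous fraud call
""",
    "workers.fraud_worker": """
from fraud import fraud_client  # allowed: asynchronous worker path
""",
}

# The encoded final cause: nothing in the user-facing `api` package
# may depend on the `fraud` package directly.
FORBIDDEN = {"api": "fraud"}

def violations(modules):
    """Return a list of dependency-rule violations across module sources."""
    found = []
    for name, source in modules.items():
        package = name.split(".")[0]
        banned = FORBIDDEN.get(package)
        if banned is None:
            continue
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.Import):
                targets = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                targets = [node.module or ""]
            else:
                continue
            for target in targets:
                if target == banned or target.startswith(banned + "."):
                    found.append(f"{name} imports {target}")
    return found

print(violations(MODULES))  # flags only the api handler's fraud import
```

Run in CI, a check like this turns "should never be called synchronously" from tribal knowledge into a constraint the build itself enforces.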
Artifacts Are Intentional Objects
Philosophy of technology has a precise term for what software artifacts actually are: objects with a dual nature. Every technical artifact is simultaneously:
- A physical object — analyzable through engineering, subject to physical laws, with real constraints on what it can do.
- An intentional object — designed to exhibit specific intended capacities, carrying the designer's purpose in its structure.
The physical and the intentional are not independent. The physical design constrains which intentions can be realized through the artifact. Architectural decisions create physical constraints that enforce or prevent certain uses. This is not a metaphor: when you add a type system boundary, an API gateway, or a permission check, you are encoding intentional constraints into physical structure.
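A small sketch of that coupling, using invented types: here the intent "never ship an unapproved order" is encoded in the type structure, because an `ApprovedOrder` can only be minted by the approval check. In Python the boundary is advisory unless a type checker such as mypy runs; in a statically typed language the same design is compile-enforced.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Order:
    order_id: str
    amount_cents: int

@dataclass(frozen=True)
class ApprovedOrder:
    """Possessing one of these *is* the proof of approval."""
    order: Order

def approve(order: Order) -> Optional[ApprovedOrder]:
    # Hypothetical policy: the permission check that gates the boundary.
    if order.amount_cents <= 0:
        return None
    return ApprovedOrder(order)

def ship(approved: ApprovedOrder) -> str:
    # Shipping accepts only ApprovedOrder, so the intentional constraint
    # ("unapproved orders must never ship") is carried by the type boundary.
    return f"shipped {approved.order.order_id}"
```

The design choice is the point: the intent is not a comment on `ship`, it is the shape of `ship`'s parameter.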
This dual nature is why ICE theory (Intention, Causal Role, Evolution), developed by Houkes and Vermaas, is useful for software engineers. ICE theory holds that an artifact's function is established by the designer's use plan — and that use plan must be justified by knowledge of how the artifact's physical structure will causally support that use. Physical design constrains which use plans are realizable. The corollary: if you want to encode an intent, you need to find the physical (structural, architectural) form that enforces it.
Proper Function vs. Accidental Use
Ruth Millikan's theory of proper functions introduces a distinction that translates cleanly into software: the difference between what a system was selected for (its proper function) and what it happens to be used for incidentally (its accidental function).
A caching layer that teams start using as a durable event bus is being used accidentally. A service designed to serve internal tooling that gradually becomes a customer-facing API has drifted from its proper function. These are not just engineering risks — they are the result of an undeclared telos.
Accidental use is not a failure of the user. It is a failure of the system to encode its proper function in a discoverable, enforceable way.
The parallel claim in artifact theory is that the proper function of an artifact is established by the designer's use plan and constrained by its physical structure. You cannot intend a use that the physical structure cannot support — and, conversely, if the physical structure can support an unintended use, nothing stops it from happening.
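The caching example above can be made concrete. The sketch below is a hypothetical cache whose API refuses the "durable event bus" use plan structurally: every entry requires a TTL, and the TTL ceiling is part of the declared design. The class name and the 300-second ceiling are assumptions for illustration.

```python
import time

class EphemeralCache:
    """Proper function: short-lived lookups. The mandatory, bounded TTL
    makes the accidental use 'durable storage' structurally impossible."""

    MAX_TTL_SECONDS = 300  # hypothetical ceiling declared by the designers

    def __init__(self):
        self._store = {}

    def put(self, key, value, ttl_seconds):
        if not 0 < ttl_seconds <= self.MAX_TTL_SECONDS:
            raise ValueError(
                f"ttl must be in (0, {self.MAX_TTL_SECONDS}] seconds: "
                "this cache is not durable storage"
            )
        self._store[key] = (value, time.monotonic() + ttl_seconds)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]
            return default
        return value
```

A team that wants to abuse this cache as an event bus gets a `ValueError` at the first oversized TTL, which is the proper function announcing itself.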
Intent is What, Not How
There is a precise technical definition worth anchoring here. RFC 9315 on Intent-Based Networking defines intent-based systems as declarative systems where users specify what the desired outcome should be — not how to achieve it. Intent captures operational goals and outcomes without specifying implementation mechanisms.
This distinction matters for rationale encoding. When you write # retry on transient errors, you are documenting how. When you write # this service must never see partial order state — orders are atomic from the consumer's perspective, you are encoding what. The second is rationale. It expresses a design constraint that emerges from purpose, and it is the kind of statement that constrains future decisions rather than just explaining past ones.
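The atomicity statement above can be lifted from a comment into a guard at the boundary. A minimal sketch, with invented field names: the comment version merely describes the intent, while the guard makes partial order state unobservable by the consumer.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OrderView:
    order_id: str
    items: Optional[list]         # None means "still being assembled"
    total_cents: Optional[int]

def publish_to_consumer(view: OrderView) -> OrderView:
    # Encoded 'what': consumers never observe partial order state.
    # A comment would describe this intent; the guard enforces it.
    if view.items is None or view.total_cents is None:
        raise ValueError(f"refusing to publish partial order {view.order_id}")
    return view
```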
The Teleosemantic Frame
Teleosemantic analysis offers a compact model for thinking about any information system: four components, four dependencies.
The model says: a system is designed by a producer (the engineering team), for a consumer (the user or dependent system), to achieve a purpose (the telos). The artifact is the physical realization. Validation and redesign should be grounded in all four dependencies, not just the producer-artifact link.
Most software teams optimize relentlessly on the producer-artifact relationship (build quality, test coverage, performance) while leaving the artifact-purpose relationship implicit. Rationale encoding is the practice of making that relationship explicit and enforceable.
Analogy Bridge
Consider a gate in a fence on a country road. You arrive and the gate is locked. You do not have a key, but you could climb over. Before you do: why is the gate locked?
This is Chesterton's Fence. The principle states: fully understand the purpose behind an existing state of affairs before attempting to change it. The fence was built for a reason. The lock was added for a reason. Removing or bypassing the constraint without understanding its purpose is not a neutral act — it is a gamble that the purpose no longer applies.
Applied to software: each architectural constraint was encoded for a reason. A timeout, a permission boundary, a data ownership rule, a module that refuses to import another module — these are not bureaucratic residue. They are locked gates. The question Chesterton's Fence asks is not "can I remove this?" but "do I understand why it is here?"
The principle applies especially to legacy systems where the original authors are unreachable. When intent was never encoded, recovering it requires forensic investigation: reading tests, reviewing git history, examining what invariants the code observes. This is expensive. It is the cost of rationale that was never captured.
The analogy maps cleanly: the gate's physical constraint (the lock) enforces an intentional constraint (access control). When the physical constraint and the intentional constraint are coupled — when the structure enforces the purpose — the system is self-documenting in the most durable way possible. When they drift apart, you get a locked gate with a missing key and no note.
Worked Example
Scenario: You are onboarding onto a payments service. You notice that the OrderService never calls the FraudCheckClient directly. Instead, it publishes an OrderSubmitted event, and a separate FraudCheckWorker consumes it asynchronously. A junior engineer asks you: "Why can't we just call the FraudCheckClient inline? It would be simpler and we'd get the result immediately."
Apply the four causes:
| Cause | What you find |
|---|---|
| Material | Kotlin service, Kafka for events, gRPC for the FraudCheckClient |
| Efficient | OrderService publishes to topic; FraudCheckWorker consumes and calls gRPC |
| Formal | Event-driven decoupling between order submission and fraud evaluation |
| Final (telos) | ? |
The final cause is not in the code. You investigate: you read the git log, find a 3-year-old ADR, and discover the constraint: fraud check latency is unbounded and must never block the order submission path; orders must be submittable even when the fraud service is degraded, with checks queued for later evaluation. The telos is fault isolation and latency decoupling.
Now apply Chesterton's Fence: the event-driven indirection is the locked gate. The junior engineer's question is valid — inline would be simpler. But removing the gate without understanding it would couple order submission latency to fraud check latency and tie order submission availability to the health of the fraud service.
The intent vs. implementation distinction: the intent is "fraud checks must never block order submission and must tolerate fraud service unavailability." The implementation is "async via Kafka." If you understand the intent, you can evaluate whether a different implementation (e.g., a circuit-breaker with a synchronous fallback) would also satisfy it. If you only see the implementation, you cannot reason about whether alternatives are valid.
The proper function: the OrderService's proper function is order submission. Processing fraud results is an accidental function it must never acquire. The architectural boundary enforces this.
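The worked example's structure can be sketched in miniature. This is a language-neutral toy (the actual scenario names Kotlin and Kafka; here an in-memory queue stands in for the topic, and a fraud client that is always down simulates degradation). The point it demonstrates is the telos: submission never touches the fraud client, so it succeeds even while the fraud service is failing, and the event is simply requeued.

```python
from collections import deque

class InMemoryTopic:
    """Stand-in for the Kafka topic: submission only appends events."""
    def __init__(self):
        self._events = deque()
    def publish(self, event):
        self._events.append(event)
    def poll(self):
        return self._events.popleft() if self._events else None

class FraudServiceDown(Exception):
    pass

def check_fraud(order_id):
    # Hypothetical degraded dependency: currently always failing.
    raise FraudServiceDown(order_id)

def submit_order(order_id, topic):
    # Telos encoded structurally: the submission path never calls
    # the fraud client, so its latency and availability are decoupled.
    topic.publish({"type": "OrderSubmitted", "order_id": order_id})
    return "accepted"

def fraud_worker_tick(topic, results):
    event = topic.poll()
    if event is None:
        return
    try:
        check_fraud(event["order_id"])
        results[event["order_id"]] = "clean"
    except FraudServiceDown:
        # Check is queued for later evaluation, as the ADR intended.
        topic.publish(event)

topic = InMemoryTopic()
print(submit_order("o-42", topic))  # accepted even though fraud is down
```

Notice that replacing `check_fraud` with a healthy client changes nothing about submission: that independence is the intent, and the structure carries it.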
Common Misconceptions
"Rationale is documentation — it's a comment or a README."
Documentation describes. Rationale constrains. A comment saying "we do this for performance reasons" describes a past decision. An architectural boundary that prevents a direct database call enforces the constraint that emerged from a performance concern. Rationale encoding is about making intent enforceable, not just readable. Documentation degrades; constraints do not.
"If the code is clean and readable, intent is self-evident."
Clean code expresses how with clarity. It rarely expresses why in a way that constrains future decisions. A well-named function tells you what it does. It does not tell you what it must never do, what domain invariants it assumes, or what the system would break if this behavior changed. Intent is not the same as implementation clarity.
"Encoding intent is for big systems. For small services, it's overhead."
The cost of undeclared intent scales with time and team size, not system size. A small service whose telos is unclear will accumulate accidental functions. The gate that seemed unnecessary to lock on day one becomes a liability when the road gets busy.
"We have tests. Tests capture intent."
Tests capture behavior at a point in time. They say: "given these inputs, the system did this." They do not say: "this behavior is load-bearing because of this invariant, and here is what breaks if you change it." Tests are evidence of how; rationale is the why that explains which behaviors must not change.
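A test can be made to carry rationale if it names the invariant and what breaks when the invariant does. A hypothetical sketch (the ADR number and system under test are invented): the docstring and assertion messages record why the behavior is load-bearing, not just that it held once.

```python
def submit(order, fraud_queue):
    # System under test: fraud check is deferred, never inline.
    fraud_queue.append(order)
    return {"status": "accepted"}

def test_submission_never_blocks_on_fraud():
    """Load-bearing invariant (hypothetical ADR-017): order submission
    must succeed even when the fraud service is unavailable. If this
    test fails, submission latency has been coupled to fraud check
    latency; revisit the telos before 'fixing' the test."""
    queue = []
    result = submit({"id": "o-1"}, queue)
    assert result["status"] == "accepted", (
        "submission must not depend on fraud service availability"
    )
    assert queue == [{"id": "o-1"}], "fraud check must be deferred, not skipped"

test_submission_never_blocks_on_fraud()
```

The behavior assertion is the same as any test; the added rationale is what tells a future reader which failures mean "bug" and which mean "you just removed a locked gate."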
Key Takeaways
- Software artifacts have a final cause (telos) — their intended purpose. This is the one cause that is almost never encoded as a technical constraint. The gap between the first three causes and the final cause is where rationale lives.
- Code structure is a carrier of intentional purpose. Every architectural boundary is simultaneously a physical constraint and an intentional constraint. When they are aligned, the system enforces its own purpose. When they drift, purpose becomes invisible.
- Proper function vs. accidental function is a useful diagnostic. When a system is used for something its designer did not intend, the cause is usually an undeclared or unenforced telos, not malicious misuse.
- Intent is what, not how. Rationale encoding means capturing the outcomes and constraints that matter — not the implementation steps that happened to achieve them. This is what makes rationale actionable across future design decisions.
- Chesterton's Fence is the practitioner's test for whether rationale has been encoded. Can you reconstruct the purpose of an existing constraint from the system itself, or only through forensic investigation? The harder the reconstruction, the more the system has paid in undeclared intent.
Further Exploration
Philosophy and Theory
- Aristotle on Causality — Stanford Encyclopedia of Philosophy — Section 4 on final causation is most relevant
- Artifact — Stanford Encyclopedia of Philosophy — Covers dual nature theory, proper function, and the relationship between physical structure and intentional design
- Philosophical Theories of Artifact Function — Overview of competing theories including ICE theory
System Design and Application
- A Teleological Approach to Information Systems Design — Applies teleosemantic analysis to information systems design and validation
- RFC 9315 — Intent-Based Networking: Concepts and Definitions — Formal definition of intent as declarative specification of outcomes
- Assessing Legacy Code Using Chesterton's Fence — Practical application to legacy systems and technical debt assessment