Mereology and Composition in Architecture
Why boundary debates resist technical resolution — and what philosophy says about it
Learning Objectives
By the end of this module you will be able to:
- Explain the Special Composition Question and apply its three major answers to service boundary decisions.
- Use mereological essentialism versus mereological contingency to frame debates about aggregate design and module ownership.
- Recognize the sorites problem in architectural debates (such as "how big should a microservice be?") and articulate why these debates are not purely technical.
- Apply the concept of ontic vagueness to boundary decisions and distinguish it from vagueness that better information could resolve.
Core Concepts
What Is Mereology?
Mereology is the philosophical study of part-whole relations. It asks formal questions: When do parts compose a whole? What makes something a part of something else rather than merely near it? How does identity persist when parts change?
Mereological theories commonly assume three axioms: everything is a part of itself (reflexivity), a part of a part of a whole is a part of that whole (transitivity), and two distinct entities cannot each be a part of the other (antisymmetry). These axioms appear dry, but they encode assumptions that show up directly in how software is structured. Every time you draw a service boundary or argue that a field "belongs" to one aggregate rather than another, you are making implicit mereological commitments.
The connection to software is direct enough that researchers have formally applied mereological axiom systems to model parts and parthood relations in software application domains.
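The three axioms can be checked mechanically against a concrete parthood relation. A minimal sketch, assuming a toy relation over hypothetical module names (nothing here comes from a real codebase):

```python
# Check reflexivity, transitivity, and antisymmetry for a "part of" relation
# represented as a set of (part, whole) pairs. Names are illustrative.

def is_reflexive(parts, rel):
    # Everything is a part of itself.
    return all((x, x) in rel for x in parts)

def is_transitive(rel):
    # A part of a part of a whole is a part of that whole.
    return all((a, c) in rel for (a, b) in rel for (b2, c) in rel if b == b2)

def is_antisymmetric(rel):
    # Two distinct entities cannot each be a part of the other.
    return all(a == b for (a, b) in rel if (b, a) in rel)

parts = {"LineItem", "Order", "Checkout"}
part_of = {(x, x) for x in parts} | {
    ("LineItem", "Order"),     # a line item is part of an order
    ("Order", "Checkout"),     # an order is part of the checkout flow
    ("LineItem", "Checkout"),  # required by transitivity
}

print(is_reflexive(parts, part_of), is_transitive(part_of), is_antisymmetric(part_of))
# True True True
```

Deleting the `("LineItem", "Checkout")` pair breaks transitivity, which is exactly the kind of implicit commitment the axioms make visible.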
The phrase "the whole is greater than the sum of the parts" is frequently attributed to Aristotle but misrepresents the actual text. The Metaphysics (Book VIII, 1045a.8–10) reads: "The totality is not, as it were, a mere heap, but the whole is something besides the parts; there is a cause." Aristotle's emphasis is on the causal relation between a whole and its parts — not on quantitative superiority. This matters: the Aristotelian tradition asks why parts cohere, while modern emergence theory asks what becomes irreducible. These are different questions, and architectural reasoning usually conflates them.
The Special Composition Question
Philosopher Peter van Inwagen formalized the Special Composition Question (SCQ): "When, if ever, do objects compose into a whole?" There are three principal answers:
Mereological universalism holds that composition always occurs — any collection of objects, however scattered or arbitrary, composes a whole. This avoids vagueness by accepting every possible fusion, but the price is ontological proliferation: there exists a "whole" composed of your laptop, a cloud over Shanghai, and the number three. Most architects reject this position implicitly by insisting that only meaningful groupings count.
Mereological nihilism holds that composition never occurs — only simples (fundamental, indivisible atoms) exist. Wholes are convenient fictions. This also avoids vagueness, but eliminates nearly all macroscopic objects from reality. Software nihilism would say: there is no such thing as a microservice, only bytes.
Restricted composition holds that composition occurs sometimes, under specific conditions — spatial contiguity, causal unity, functional coherence. This is the most intuitive position and the one architects implicitly adopt when they say "services should be organized around business capabilities." But here is the problem: those conditions are themselves vague or stipulative. "Functional coherence" is not a crisp predicate. No decomposition algorithm can evaluate it without importing the very judgment you were trying to formalize.
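The point about stipulation can be made concrete. A sketch of a naive "composition condition" for grouping modules into a service, where every name, the call-volume data, and the 0.7 threshold are hypothetical; the threshold encodes the very judgment the metric was supposed to replace:

```python
# A naive restricted-composition check: modules compose a service when the
# fraction of call traffic that stays inside the grouping is "high enough."

def cohesion(modules: set[str], calls: dict[tuple[str, str], int]) -> float:
    """Fraction of call volume internal to the candidate grouping."""
    internal = sum(n for (a, b), n in calls.items() if a in modules and b in modules)
    total = sum(n for (a, b), n in calls.items() if a in modules or b in modules)
    return internal / total if total else 0.0

def composes_a_service(modules, calls, threshold=0.7):
    # The threshold is a stipulation, not a discovered fact about the domain.
    return cohesion(modules, calls) >= threshold

calls = {("orders", "pricing"): 90, ("orders", "shipping"): 10, ("pricing", "tax"): 40}
print(round(cohesion({"orders", "pricing"}, calls), 3))   # 0.643
print(composes_a_service({"orders", "pricing"}, calls))   # False
```

Lower the threshold to 0.6 and the same modules compose a service; nothing in the domain changed, only the stipulation did.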
None of the three answers to the Special Composition Question escapes the tension between vagueness and determinate boundaries.
Mereological Essentialism vs. Contingency
Mereological essentialism is the thesis that an object's parts are essential to it: if you change any part, you no longer have the same object. Applied to aggregates, this produces a strong intuition: the Order aggregate must include LineItems, because without them, it is not an order. Engineers often argue this way — "payment data cannot leave the payment service" — as though the service's identity depends on retaining exactly those parts.
The philosophical problem is sharp: if an object's identity depends on having exactly its actual parts, there must be a determinate fact about which components compose which wholes. But mereological vagueness denies that any such determinate fact exists. Mereological essentialism and mereological vagueness cannot both be true without contradiction.
Mereological contingency, the alternative, understands composition as fundamentally contingent and subject to reorganization. What constitutes a service's boundaries is not fixed by essence but by the relational processes that constitute the service at any given moment. A payment service that absorbs fraud detection today is not a different service — it is the same service with a shifted composition. This is the implicit assumption behind team topologies, capability mapping, and iterative decomposition: services can be reshaped without loss of identity.
Vagueness: Semantic or Ontic?
A critical distinction divides the field. Semantic vagueness locates indeterminacy in language: terms like "large aggregate" or "cohesive service" are imprecise, but the world itself has sharp edges — we simply lack the vocabulary to name them precisely. On this view, better definitions or sharper domain modeling would, in principle, resolve every boundary dispute.
Ontic vagueness locates indeterminacy in reality itself. Elizabeth Barnes and others have developed theories of metaphysical indeterminacy: the world can be genuinely indeterminate about which parts compose which wholes. On this view, no amount of additional domain knowledge will yield a uniquely correct service boundary, because the domain itself does not carve at sharp joints.
This is not a purely theoretical question. If vagueness is semantic, architectural debates are in principle solvable — the right answer exists, you just need better modeling. If vagueness is ontic, architectural debates are not solvable, only negotiable. Different decompositions are not wrong or right, but more or less useful for given purposes.
The Problem of the Many
Peter Unger's problem of the many extends vagueness from predicates to objects themselves. Consider a cloud: for any droplet at the boundary, it is indeterminate whether that droplet belongs to "the cloud." If there are n such droplets, there are 2^n candidate objects, each with an equally good claim to being "the cloud." The problem of the many shows that any object with vague boundaries generates multiple, equally legitimate candidate objects.
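The combinatorics are easy to make vivid in code. A sketch with hypothetical component names, where a service has a determinate core and n components of indeterminate membership:

```python
from itertools import combinations

# Components whose membership in "the payments service" is determinate (core)
# versus indeterminate (boundary). All names are illustrative.
core = {"charge", "refund"}
boundary = ["fraud_rules", "risk_score", "chargeback"]

# Every subset of the boundary components yields an equally good candidate.
candidates = [
    core | set(subset)
    for r in range(len(boundary) + 1)
    for subset in combinations(boundary, r)
]
print(len(candidates))  # 8 candidates for n = 3, i.e. 2^n
```

Each candidate contains the core and some subset of the indeterminate components; no further fact singles one out as "the" service.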
Applied to software: if it is indeterminate whether the fraud-detection logic belongs to the payments service or the risk service, then both the version with fraud-detection and the version without are equally valid candidates for "the payments service." There is no third object that is more correctly "the" payments service than either candidate. This is not a modeling failure — it is a consequence of vague boundaries, which may be unavoidable.
Assemblage: A Third Model
Both essentialism and the substance view of composition assume parts are defined prior to the wholes they enter. Assemblage theory, developed by Deleuze and Guattari, proposes a third position: wholes emerge from contingent liaisons of heterogeneous components, without being organized by any transcendent principle or pre-existing essence. An assemblage is unified by what it does — by its capacity to affect and be affected — not by what it is in some fixed structural sense.
This reframes composition: rather than asking "which parts essentially belong to this whole?", you ask "which liaisons produce the emergent function we need?" The boundary follows from purpose and relational context, not from essence. And when purpose changes, so does the appropriate composition.
Annotated Case Study
The Order Aggregate and Its Expanding Gravity
Scenario: A mid-size e-commerce platform models orders using a single Order aggregate containing LineItem, ShippingAddress, PaymentInfo, and Discount objects. The team justifies this by arguing that "all of these must be consistent at the time of checkout" — a classic essentialist argument. The aggregate root enforces the invariant: a placed order must have a valid payment and at least one line item.
Initial DDD justification: An aggregate is a cluster of entities and value objects bound together by an aggregate root that enforces a consistency boundary. Objects within the aggregate maintain consistency at all times; a single transaction modifies only one aggregate. This is textbook — and for a simple checkout flow, it works.
The growth problem: The team adds a promotions engine. Discounts now depend on customer loyalty tier, product category, and active campaigns. These live in separate services. Should they move inside the Order aggregate? The loyalty team argues their data is essential to order pricing — mereological essentialism in practice: "an order without loyalty data isn't a real order." The payments team argues that payment info is similarly essential. Within two years, the Order aggregate has grown to include references to data from five upstream services.
What mereology surfaces: Teams working with DDD consistently over-size aggregates because they accept essentialist intuitions about what "must" cohere. Each addition feels justified — "this invariant requires it" — but the cumulative effect is an aggregate that has become a performance bottleneck and a deployment dependency hub.
The vagueness problem: When the team tries to split the aggregate, they encounter the sorites problem directly. If PaymentInfo is removed from the Order aggregate, is what remains still an Order? Aggregate boundary decisions involve significant uncertainty; there is little empirical guidance on what "correct" granularity even means. Different architects produce different decompositions from identical requirements, with no objective criterion for correctness.
The contingency resolution: The team adopts a contingency view. They define the Order aggregate by its current consistency invariant — the minimal set of data required to place an order without violating business rules at transaction time. Everything else (loyalty, promotions, shipping estimates) becomes an eventually-consistent input that is read by the aggregate but not owned by it. The boundary is set not by essence, but by the consistency obligation the aggregate is designed to enforce. This is a decision, not a discovery.
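The contingency resolution can be sketched in code. Assuming invented names throughout: the aggregate owns only what its consistency invariant requires at placement time, while loyalty and promotion data arrive as a read-only, eventually-consistent snapshot:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PricingInputs:
    # Eventually-consistent snapshot: read by the aggregate, not owned by it.
    loyalty_tier: str
    active_discount_pct: float

class Order:
    def __init__(self, subtotal: float):
        self.subtotal = subtotal
        self.total: float | None = None
        self.placed = False

    def place(self, inputs: PricingInputs) -> None:
        # The consistency obligation: the total must be fixed at transaction
        # time. The loyalty data itself stays in its upstream service.
        self.total = self.subtotal * (1 - inputs.active_discount_pct)
        self.placed = True

order = Order(subtotal=100.0)
order.place(PricingInputs(loyalty_tier="gold", active_discount_pct=0.1))
print(order.total)  # 90.0
```

The boundary is visible in the signature: `PricingInputs` crosses into the aggregate as a parameter, never as an owned field that upstream services must keep transactionally consistent.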
When a team argues that component X "must" be inside aggregate Y because "you can't have a valid Y without X," they are invoking mereological essentialism. This is sometimes true and useful. But applied uncritically, it produces aggregates that grow to encompass everything the business considers related — which is eventually everything. Recognize essentialist arguments as choices about composition, not facts about domain structure.
Bounded contexts compound the problem: The same concept — "discount" — means different things in different bounded contexts. In the pricing context, a discount is a rule applied at quote time. In the accounting context, it is a line item affecting revenue recognition. The Order aggregate's essentialist pull — "discount must be part of order" — papers over this polysemy by collapsing two distinct concepts into one object. The boundary debate is not just mereological; it is also semantic.
Compare & Contrast
The Monolith–Microservices Debate as a Mereological Dispute
As the "Mereology for Developers" piece at mutasim.top articulates: in monolith thinking, the application is a single unified whole with parts inside it — modules or layers — that are deeply connected. Microservices, by contrast, prioritize autonomy over integral composition.
Database sovereignty is where this plays out in practice. The microservices pattern of database-per-service treats each service as a pre-existing, independent entity whose identity is secured by keeping data private. Each microservice owns its domain data and logic; its persistent data is accessible only through its API. This is a substance-metaphysical commitment: services are conceived as self-contained entities that then connect via APIs, rather than as entities constituted by their relational interactions.
The irony is that this commitment produces the same essentialist problem at a coarser grain. Teams now argue that "payment data must stay in the payments service" with the same intensity they once argued that PaymentInfo must stay in the Order aggregate. The mereological dispute did not disappear — it moved up one level.
What actually differs between the two positions:
| | Monolith | Microservices |
|---|---|---|
| Composition model | Integral whole | Autonomous parts |
| Boundary type | Internal seams | External contracts |
| Identity basis | Role in the whole | Autonomous lifecycle |
| Consistency scope | Shared transaction | Service-local + eventual |
| Mereological stance | Essentialist (modules need the whole) | Contingent (services survive reorganization) |
Neither position is metaphysically correct. Both reflect commitments about what kind of composition produces better systems for a given context.
Boundary Conditions
When Does Mereological Reasoning Break Down?
When composition is forced by external constraints. Regulatory requirements, legacy contracts, or organizational structure may dictate where a boundary falls regardless of domain coherence. In these cases, the boundary is not mereological at all — it is political. No amount of domain analysis will surface a philosophically coherent decomposition when the constraint is external to the domain.
When the domain itself is genuinely new. Service boundaries are acknowledged to be "fuzzy," "contested," and to shift over time. In rapidly evolving domains, the relational patterns that would justify a boundary have not yet stabilized. Mereological reasoning presupposes a domain that is sufficiently understood to have coherent part-whole candidates. In truly greenfield work, you cannot identify the right wholes because the relational structure has not yet crystallized.
When scale changes the composition question. At high scale, what was a sensible unified whole becomes a performance bottleneck — not because the mereological reasoning was wrong, but because the operational context changed. Oversized aggregates create concurrency conflicts and reduce system performance. The "correct" composition at 10,000 requests/day may be incorrect at 10,000,000 requests/day. Composition is not just a domain question — it is also a runtime context question.
When the parts are themselves heterogeneous assemblages. Assemblage theory cautions against assuming that the components being composed are simple, stable entities. Assemblages are constitutively heterogeneous — bodies, actions, utterances, artifacts — that generate emergent functions. A "payments service" is not a single thing but an assemblage of code, configuration, data, third-party integrations, team practices, and operational runbooks. Asking where its boundary is presupposes a discreteness that the assemblage does not actually possess.
Architecture literature often speaks of "discovering" service boundaries, as if the correct decomposition is hidden in the domain waiting to be found. Mereological analysis challenges this: if composition conditions are themselves vague or stipulative, decomposition heuristics cannot discover correct boundaries — they can only stipulate conventional ones. "Discovery" is a rhetorical frame that obscures the decision being made.
Thought Experiment
The Fraud Service Absorption
Setup: Your organization runs a payments service and a fraud-detection service as separate microservices. The fraud service calls the payments service to check transaction context; the payments service calls the fraud service to get a risk score before processing. Over time, the two teams start sharing data models directly. A proposal arrives: merge them into a single "financial safety service."
The essentialist case: Fraud detection essentially requires payment context; payment processing essentially requires fraud risk. They should never have been separated. Merging them just acknowledges what was always true about the domain.
The contingency case: Both services have survived reorganization before — fraud detection was once inside the risk service, payments was once part of a monolith. Their current boundaries are artifacts of organizational history, not domain essence. Merging them is a legitimate choice, not a correction of a mistake.
Apply the SCQ: Under what conditions do the fraud service and the payments service compose into a single whole? Is it when they share more than 50% of their data models? When they call each other synchronously on more than 80% of transactions? When the same team owns both? None of these conditions are principled — they are stipulations that happen to favor one outcome or another.
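Putting the three candidate conditions side by side shows how stipulative they are. A sketch using invented observed values; the thresholds come from the thought experiment, not from any principled source:

```python
# Hypothetical observed facts about the fraud and payments services.
facts = {
    "shared_model_fraction": 0.55,  # fraction of data models shared
    "sync_call_fraction": 0.72,     # fraction of transactions with sync calls
    "same_team": False,             # does one team own both services?
}

# Three stipulated composition conditions, applied to the same facts.
conditions = {
    "shared models > 50%": facts["shared_model_fraction"] > 0.50,
    "sync calls > 80%": facts["sync_call_fraction"] > 0.80,
    "same team owns both": facts["same_team"],
}
for name, verdict in conditions.items():
    print(f"{name}: {'compose' if verdict else 'do not compose'}")
```

Identical facts, three verdicts. The choice of condition, not the domain, decides whether the services "really" compose one whole.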
The problem of the many: If it is indeterminate whether fraud logic belongs inside or outside the payments service, then both the merged and the separated versions are equally valid candidate "payment systems." There is no third option that is more correct than either. The choice is a decision about which version you want to maintain.
What should you conclude? Not that the merge is right or wrong, but that:
- The debate will not be resolved by better domain analysis alone.
- The "essentialist" arguments on both sides are claiming more epistemic authority than they possess.
- The decision should be made on pragmatic grounds (team autonomy, operational complexity, consistency requirements) with clear acknowledgment that it is a decision, not a discovery.
- The boundary should be treated as contingent — subject to revision if the operational context changes — rather than as a permanent fact about the domain.
Key Takeaways
- Mereology makes the implicit explicit. Every time you argue that a component "must" belong to a service or aggregate, you are invoking mereological essentialism. Naming this lets you examine whether the argument is principled or merely conventional.
- The Special Composition Question has no neutral answer. Universalism, nihilism, and restricted composition each trade off vagueness for other problems. Software architects implicitly operate in the restricted composition space, but the conditions for "valid" composition are always somewhat vague or stipulative. Acknowledging this does not make boundaries arbitrary — it makes the reasoning behind them transparent.
- Ontic vagueness is real and architectural. Some service boundary disputes resist resolution because the domain itself is indeterminate — not because we lack information, but because no fact of the matter exists. Recognizing ontic vagueness shifts the conversation from "find the right answer" to "negotiate a useful convention."
- Mereological contingency is a better default for living systems. Process ontology treats composition as fundamentally contingent and reorganizable. This matches how successful architectures actually evolve — through iterative reshaping of boundaries in response to changing operational context, not through one-time discovery of the correct structure.
- Boundaries are socially negotiated constructs. Different decomposition frameworks yield different boundary placements with no objective criterion for correctness. Service boundaries should be understood as decisions made explicit through team ownership, data contracts, and operational practice — not as facts discovered through domain analysis.
Further Exploration
On mereology and its foundations
- Mereology — Stanford Encyclopedia of Philosophy — The authoritative entry. Read the section on the Special Composition Question carefully.
- The Special Composition Question — Ruth D.M. — An accessible walkthrough of the three positions and what they cost.
- Mereological Nihilism — Wikipedia — Useful for understanding the full range of positions.
On vagueness
- Sorites Paradox — Stanford Encyclopedia of Philosophy — Full treatment of the paradox and its philosophical implications.
- Ontic Vagueness: A Guide for the Perplexed — Elizabeth Barnes — The key paper defending metaphysical indeterminacy. Demanding but worth it.
- The Problem of the Many — Stanford Encyclopedia of Philosophy — Unger's paradox and its implications for object identity.
On aggregates and service boundaries
- Effective Aggregate Design, Part I — Vaughn Vernon — The canonical practitioner text on aggregate sizing.
- Identify Microservice Boundaries — Azure Architecture Center — A pragmatic treatment that implicitly surfaces the vagueness problem.
- Mereology for Developers — Mutasim — A direct application of mereological vocabulary to software architecture debates.
On assemblage and process composition
- Deleuze — Stanford Encyclopedia of Philosophy — Background on assemblage theory and its critique of essentialism.
- Process Philosophy — Stanford Encyclopedia of Philosophy — The broader philosophical tradition from which mereological contingency draws.