Philosophy

Assemblage Theory and Complex Systems

Why your system's behavior is not in its parts — and what to do about it

Learning Objectives

By the end of this module you will be able to:

  • Explain DeLanda's assemblage theory and apply it to a distributed system: identify the components, relations, and emergent properties.
  • Distinguish weak emergence (in principle reducible) from strong emergence (not reducible) and explain why this distinction affects how you can reason about a distributed system's behavior.
  • Apply Cynefin's complexity domains to triage engineering problems: complicated (analyze with experts), complex (probe-sense-respond), chaotic (act first).
  • Use Nagarjuna's two-truths doctrine as an anti-reification heuristic: bounded contexts, microservices, and team structures are conventional designations, not natural facts.
  • Apply Simondon's individuation to explain how a microservice or a team crystallizes out of a pre-individual field of technical and organizational possibility.

Core Concepts

Assemblages: components, relations, emergent properties

DeLanda's assemblage theory, following Deleuze and Guattari, rests on a deceptively simple idea: in an assemblage, relations among components are relations of exteriority. This means that components retain a fundamental independence and autonomy outside the assemblage. Unlike organic wholes where parts are internally related and constitute each other, assemblage components can be extracted from one assemblage and inserted into another while retaining their essential properties, though their interactions and effects will differ. What makes something an assemblage—rather than a totality or a mere heap—is precisely this detachability.

This stands in direct contrast to Whitehead's account of internal relations, where each actual occasion is constituted by its relations to every other entity through prehension, making those relations constitutive rather than merely connective. The assemblage framework deliberately rejects this: a payment service has a determinate identity that is not wholly destroyed when you sever it from the order service. You can redeploy it, re-integrate it differently, fork it. The parts are real, detachable, and re-combinable.
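A minimal Python sketch of relations of exteriority, with entirely hypothetical names: the same component participates in two different assemblages without modification, and only the wiring differs.

```python
from dataclasses import dataclass

# A component with its own determinate behavior: it does not know
# which assemblage it is plugged into (relations of exteriority).
@dataclass
class PaymentService:
    currency: str = "USD"

    def charge(self, amount: float) -> str:
        return f"charged {amount:.2f} {self.currency}"

# Two different assemblages reuse the same component unchanged;
# only the configuration of relations differs between them.
class CheckoutFlow:
    def __init__(self, payments: PaymentService):
        self.payments = payments

    def complete(self, total: float) -> str:
        return f"checkout: {self.payments.charge(total)}"

class SubscriptionBilling:
    def __init__(self, payments: PaymentService):
        self.payments = payments

    def renew(self, fee: float) -> str:
        return f"renewal: {self.payments.charge(fee)}"

payments = PaymentService()
print(CheckoutFlow(payments).complete(19.99))   # same part, different whole
print(SubscriptionBilling(payments).renew(9.99))
```

The component's identity survives extraction and re-insertion; what changes across the two assemblages is the effect it produces in each configuration.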

But the properties of the assemblage are not in any single component. They emerge from the relations among components as those relations are actualized in a particular configuration.

The whole is not more than the sum of its parts because the parts are secretly "bigger." It is more because the relations among parts produce effects that no part produces in isolation.

An assemblage has three analytical axes:

  1. Components and their relations — what is in the assemblage and how the components connect (materially and expressively).
  2. Territorialization / deterritorialization — the degree to which the assemblage is stabilized (territorialized) or fluid and open to transformation (deterritorialized). These processes occur simultaneously and continuously: reterritorialization is not a return to a previous state but the creation of new stable configurations from flows that have been deterritorialized.
  3. Coding / decoding — the expressive, semiotic dimension: what meanings, norms, protocols, and contracts hold the assemblage together.

Every real system operates across all three axes at once. A microservice ecosystem is simultaneously a material arrangement (network topology, data stores, deployment units), a territorial arrangement (stable service contracts, team ownership boundaries, domain boundaries), and a coded arrangement (API schemas, event contracts, organizational policies).

Strata, smooth and striated space

Deleuze and Guattari describe stratification as the process of organizing matter through the imposition of structure. Strata represent relatively stable, organized arrangements. The abstract machine underlying strata is destratified and unorganized — a virtual field of possibility from which strata continuously emerge and into which they can dissolve.

Within and between strata, two organizational tendencies operate in tension: smooth and striated space. Smooth spaces are continuous, flowing, and non-hierarchical, allowing lateral connections and deterritorialized movements. Striated spaces are marked by codes, divisions, and hierarchical organization that order and constrain flows. Neither is a stable category — they are tendencies in dynamic tension within every assemblage.

For system architects, this maps clearly: a monolith with module boundaries enforced only by convention is closer to smooth space; a microservice ecosystem with strict service ownership, explicit API contracts, and team topology is heavily striated. The tension between the two is not a failure mode but a structural feature. Over-striation produces bureaucratic rigidity. Under-striation produces the distributed monolith.

Double articulation is the mechanism by which strata are produced. Every stratum is generated through two simultaneous operations: the first articulation selects and organizes matter (content); the second articulation gives form and code to that matter (expression). In a service, the first articulation is the data schema and operational logic; the second articulation is the API contract, the event schema, the SLA. Both are required for the service to function as a component in an assemblage — and both can be modified independently, though not without consequence.
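A compressed sketch of double articulation, using hypothetical data shapes: the internal schema and logic (content) and the published contract (expression) are distinct articulations that can evolve independently.

```python
# First articulation (content): the service's internal data shape and logic.
internal_order = {"id": 42, "lines": [("sku-1", 2)], "state": "PLACED"}

def order_total(order, prices):
    # Internal operational logic over the internal representation.
    return sum(prices[sku] * qty for sku, qty in order["lines"])

# Second articulation (expression): the published contract that other
# components rely on. Renaming internal fields need not change it,
# and versioning it need not change the internals.
def to_api_v1(order):
    return {"orderId": order["id"], "status": order["state"].lower()}

prices = {"sku-1": 5.0}
print(order_total(internal_order, prices))  # internal computation
print(to_api_v1(internal_order))            # external expression
```

Refactoring the internal schema while holding `to_api_v1` stable (or vice versa) is exactly the "modifiable independently, though not without consequence" point: each articulation constrains, but does not determine, the other.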

Rhizomes vs. trees: organizational structure as epistemic commitment

The rhizome, in Deleuze and Guattari's sense, is characterized by the absence of a central organizing point, root, or privileged entry. Unlike a tree-structure where all paths trace back to a trunk and roots, a rhizome allows entry at any point and maintains multiple, contingent connections without hierarchical ordering. Rhizomes can grow in all directions simultaneously, branch unexpectedly, and establish new connections dynamically.

This is not just a metaphor for network topology. It is a challenge to the assumption that systems require a center or a unified organizing principle. When a platform team treats itself as the root of a tree — the source of truth through which all other teams must flow — it has made an arborescent epistemic commitment. When the same platform team publishes capabilities as self-serve APIs that teams can compose without passing through a gatekeeper, it is operating rhizomatically.

Neither is inherently superior. Trees are efficient when the environment is stable and the center has reliable information. Rhizomes are more robust to environmental change precisely because there is no single point whose failure takes down the whole.

Weak and strong emergence: what kind of behavior are you dealing with?

Weak and strong emergence represent fundamentally different philosophical positions about system reducibility. Strong emergence describes properties that are irreducible in principle to lower-level components — where high-level truths are not conceptually or metaphysically necessitated by low-level truths. Weak emergence describes properties that are theoretically reducible but practically unpredictable — they can be discovered through computational simulation or post-hoc analysis but are unexpected given the low-level properties and principles.

Why the distinction matters in practice

Strong emergence means: no amount of additional instrumentation or analysis will give you a predictive model. You must design for recovery, not prediction.

Weak emergence means: you can in principle build a simulation or retrospective model that explains the behavior — but the prediction cost may be prohibitive in real time. Probe-sense-respond is more practical than exhaustive upfront analysis.

Most emergent behaviors in distributed software systems are weakly emergent: they are theoretically explicable through simulation and post-hoc analysis but practically unpredictable in advance. In distributed software systems, nonlinear feedback within networks produces global system dynamics that cannot be predicted from examining individual service properties in isolation. Cascading failures are the canonical example: a small issue in one service triggers disproportionately large impacts across the system due to nonlinear propagation through coupled dependencies.
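As a toy illustration of this nonlinear propagation (hypothetical service names), the sketch below models synchronous dependencies as edges and computes the transitive blast radius of a single failure: no individual service's properties predict the system-level outcome.

```python
# Toy model: edges map caller -> callees. A failed callee makes every
# synchronous caller fail; propagation is the emergent system-level behavior.
deps = {
    "checkout": ["inventory", "pricing"],
    "inventory": ["db"],
    "pricing": ["promotion"],
    "promotion": ["notification"],
}

def impacted(failed):
    """Return all services brought down by one failing service."""
    down = {failed}
    changed = True
    while changed:
        changed = False
        for svc, callees in deps.items():
            if svc not in down and any(c in down for c in callees):
                down.add(svc)
                changed = True
    return down

# A failure in a leaf service takes out the entire synchronous chain:
print(impacted("notification"))
```

Nothing in the notification service's own definition hints that checkout depends on it; the cascade exists only in the configuration of relations.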

This finding has a nuance. Wolfram's computational irreducibility principle holds that some computations cannot be shortened and can only be determined by performing or simulating them. But research by Israeli and Goldenfeld (2004) demonstrates a significant limitation: computationally irreducible systems can exhibit properties that are predictable at coarse-grained levels of abstraction. A system irreducible at the level of individual network packets may still be predictable at the level of service-level availability. The choice of abstraction level determines what is predictable. This is not a trivial observation — it means that adding more fine-grained observability may not help, while raising the level of abstraction (e.g., from request tracing to SLO-based monitoring) might.
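A minimal sketch of this level-dependence, with made-up numbers: per-request latencies drawn from a skewed distribution are noisy and effectively unpredictable individually, yet windowed availability against a latency budget is nearly constant.

```python
import random

random.seed(0)

# Fine-grained signal: individual request latencies (ms) are noisy and
# unpredictable request-to-request.
latencies = [random.expovariate(1 / 50) for _ in range(10_000)]

# Coarse-grained signal: windowed availability against a 500 ms budget
# is stable and predictable even though each request is not.
window = 1000
availability = [
    sum(x < 500 for x in latencies[i:i + window]) / window
    for i in range(0, len(latencies), window)
]
print([round(a, 3) for a in availability])  # nearly constant per window
```

The same underlying process is "irreducible" at the per-request level and boringly predictable at the SLO level; the abstraction choice, not the system, determines which you see.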

Simondon's individuation: how services and teams crystallize

Gilbert Simondon's theory of individuation challenges the assumption that services, teams, or bounded contexts exist as determinate wholes awaiting discovery or decomposition. In Simondon's account, the individual is never "given in advance" — it must be produced through an ongoing process of individuation emerging from preindividual potentiality. The preindividual is not an undifferentiated null state but a "supersaturated" condition — a state of radical potentiality, more than a unity, from which individuals emerge through resolution of tension and heterogeneity.

The preindividual field in software is the domain itself: overlapping patterns, conflicting requirements, varying stakeholder perspectives, ambiguous ownership boundaries. From this supersaturated field, service boundaries crystallize not because they were discovered as natural facts but because specific design decisions resolved specific tensions. An individual service is never final; it contains untapped potentials for further metamorphosis and individuation.

Technical objects achieve their identity through their relational engagement with their milieu. A microservice is not a pre-existing entity that then enters relations with other services. It comes into being through the relationality itself — through the decisions about what it must respond to, what it must ignore, what it owns, and what it delegates. Change those relations sufficiently and you have a different service, even if the code is largely unchanged.

Discrete software abstractions — entities, services, boundaries — arise as individuated forms that stabilize relational fields while leaving a virtual remainder for further transformation. This is Simondon's crucial insight: individuation never exhausts the preindividual reservoir. There is always a surplus that enables future individuations. This explains why every architectural decision is provisional: the domain has not been fully individuated; it will continue to press against whatever boundary you have drawn.

The mechanism Simondon calls transduction is directly applicable to architectural practice. Transduction describes how individuation proceeds through relational resolution of metastable tensions in pre-individual fields. Each act of formal specification — naming a bounded context, drawing a service boundary, writing an API contract — performs a transduction: it transforms the virtual, relational continuity of domain knowledge into actual, discrete structures, necessarily leaving a preindividual remainder. Software architecture, understood as transductive practice, neither discovers pre-existing entities nor imposes arbitrary cuts: it facilitates the individuation of meaningful forms from relational fields.

Barad's agential cuts: boundaries as productive interventions

Karen Barad's agential realism adds a critical complement to the assemblage and individuation frameworks. Boundary-making is a productive intervention that generates discrete phenomena rather than discovering pre-existing divisions. Through "agential cuts," boundaries are enacted as material-discursive practices that determine what is included, excluded, and meaningful.

The apparatus performing these cuts — in a software context — is not merely the type system or the compiler. It is the entire sociotechnical assemblage: design sessions, RFC processes, team structures, deployment pipelines, organizational incentives. Computational entities do not pre-exist but emerge through intra-action of code, hardware, organizational practices, and domain concepts.

This reframes the question an architect should ask. Not: "Does this abstraction correctly represent the domain?" but: "What does this boundary-making apparatus enable and foreclose?" A service boundary is not a representation to be judged for accuracy. It is a performative act that makes certain relations visible and operative while obscuring others. The right question is about consequences, not correspondence.

Nagarjuna's two truths: the anti-reification heuristic

The Buddhist philosopher Nagarjuna provides the capstone heuristic for working with all of the above. His doctrine of dependent origination (pratītyasamutpāda) establishes that phenomena have no intrinsic self-nature independent of their relational conditions. Things exist only through their dependence on causes, effects, parts, and conceptual designation. Emptiness (śūnyatā) is not a void behind phenomena but precisely their relational, contingent nature.

The two-truths doctrine distinguishes:

  • Conventional truth (samvṛti-satya): phenomena appear to have distinct identities and causal efficacy. This is pragmatically valid and functionally necessary.
  • Ultimate truth (paramārtha-satya): all phenomena are empty of intrinsic essence; their discrete boundaries are conceptual constructions, not discoveries.

Critically, these are not two separate dimensions of reality but two aspects of a single reality. Conventional designations remain causally efficacious and pragmatically valid even though they lack ultimate ontological status. The framework avoids both eternalism (treating conventional entities as ultimately real, independent facts) and nihilism (treating them as non-existent fictions to be abandoned).

Buddhist philosophy identifies reification as a fundamental cognitive and ontological error: treating any functioning phenomenon as if it were a stable, unchanging thing with an inherent essence, rather than recognizing it as an impermanent process. The Madhyamaka critique — that beneath conventional reality there is no "clear, unchanging, and ultimate" substance — applies with full force to architectural entities.

Prajñapti (conventional designation) is the mechanism: a bounded context, a microservice, a team is a prajñapti — a conventionally designated entity that exists entirely through mental labeling, cultural convention, and organizational practice. This does not make it illusory or non-functional. It makes it a pragmatically useful designation whose existence depends on ongoing conceptual and organizational maintenance.

The reification trap

When you say "the Order service owns this data," you are making a conventional designation that coordinates work and enforces clear responsibility. When you say "the Order service is a natural fact that carves the domain at its joints," you have reified a coordination tool into an ontological claim. The first is useful. The second causes architectural rigidity and cargo-cult boundary-drawing.

Cynefin: operationalizing complexity

The Cynefin framework, developed by Dave Snowden, provides the practical decision-making bridge. It distinguishes five domains:

Domain      | Cause-effect relationship    | Appropriate response
Clear       | Obvious, known               | Sense — categorize — respond (apply best practice)
Complicated | Knowable through analysis    | Sense — analyze — respond (call in experts)
Complex     | Only deducible in retrospect | Probe — sense — respond (safe-to-fail experiments)
Chaotic     | No discernible relationship  | Act — sense — respond (stabilize first)
Disorder    | Unknown which domain applies | Break the problem down

In the complex domain, cause and effect are only deducible in retrospect. The framework recommends safe-to-fail experiments that allow patterns to emerge. This is not a license for undisciplined experimentation — it is a structured acknowledgment that upfront analysis cannot substitute for feedback from the system itself.
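The table above can be mirrored as a small triage lookup. The domain names and responses follow Snowden's framework; the cause-effect labels and the `triage` function are our own illustrative shorthand.

```python
# Responses per Cynefin domain (per Snowden); strings are illustrative.
CYNEFIN = {
    "clear":       "sense - categorize - respond (apply best practice)",
    "complicated": "sense - analyze - respond (call in experts)",
    "complex":     "probe - sense - respond (safe-to-fail experiments)",
    "chaotic":     "act - sense - respond (stabilize first)",
}

def triage(cause_effect):
    """Map an assessment of cause-effect visibility to a Cynefin domain."""
    return {
        "obvious": "clear",
        "knowable-by-analysis": "complicated",
        "retrospective-only": "complex",
        "none-discernible": "chaotic",
    }.get(cause_effect, "disorder")

domain = triage("retrospective-only")
print(domain, "->", CYNEFIN.get(domain, "break the problem down"))
```

The useful discipline is the first line of the function, not the last: forcing an explicit assessment of how visible cause and effect actually are, before choosing a response mode.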

The Cynefin domain assignment is not fixed. A system behaving in the complex domain under high load may be in the complicated domain when properly instrumented at rest. And the same system may tip into chaotic during a novel failure mode. Understanding emergence therefore shifts the staff engineer's goal: from predicting and controlling all system behaviors to designing for responsiveness and recovery when unpredictable emergent behaviors inevitably occur.

Annotated Case Study

The distributed monolith: an assemblage that refused to become one

Consider a platform that was migrated from a monolith to microservices over 18 months. Each service was designed with clean domain responsibilities, separate deployments, and team ownership. On paper, it was a textbook decomposition. In production, a P0 incident in the notification service would bring down checkout.

What went wrong, through an assemblage lens:

The assemblage was territorialized correctly at the expressive level (API contracts, team ownership, domain names) but remained smooth — i.e., unconstrained — at the material level. Services called each other synchronously through long chains: checkout → inventory → pricing → promotion → notification. The notification service held a synchronous lock on the checkout path even though notifications are inherently asynchronous.

The emergent behavior arose from cumulative dependency structures at the system level, not from flaws in individual service design. The distributed monolith is a failed assemblage: it has the expressive form of a microservice ecosystem (team names, service names, API contracts) but the material coupling of a monolith. The relations of exteriority that assemblage theory promises — components that retain independence and can be recombined — had not actually been established.

Simondon's reading:

The preindividual field (the original domain) was individuated too quickly and too cleanly. The design decisions resolved some tensions (team ownership, deployment independence) but left others unresolved (operational coupling, data consistency). The "surplus" from incomplete individuation came back as the distributed monolith failure mode. The boundaries were drawn, but the domain had not finished pressing.

Cynefin's reading:

The incident response team initially treated the notification-checkout coupling as a complicated problem (if we trace the dependency chain, we can find and fix the root cause). After three incidents, they recognized it as a complex problem: the coupling patterns were not fully knowable in advance; only safe-to-fail experiments (circuit breakers with varying timeout thresholds, async-first refactoring of individual call sites) would reveal which interventions worked without introducing new failure modes.

The Nagarjunian lesson:

The team had reified "microservices architecture" — treating it as a natural fact with discoverable right answers — rather than treating it as a conventional designation to be pragmatically maintained. When the architecture stopped serving its purpose, they resisted changing service boundaries because "the domain model says the notification service is a separate thing." Buddhist reification critique would have allowed them to dissolve or merge the notification service earlier: it was a prajñapti, not a natural kind, and when the conventional designation stopped working, it was appropriate to redraw it.

The resolution:

The team introduced async messaging for notification paths (deterritorialization of the synchronous dependency), established explicit circuit breaker contracts (reterritorialization with new codes), and re-ran load tests in a safe-to-fail environment (complex domain probe-sense-respond). The "distributed monolith" was not fixed by redrawing service boundaries — it was fixed by changing the material coupling structure while keeping the expressive structure largely intact.
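The circuit-breaker piece of that resolution can be sketched minimally. This is illustrative only (real deployments would typically use a library or a service-mesh policy rather than hand-rolled code): the breaker opens after consecutive failures and fails fast, so a notification outage no longer holds the checkout path.

```python
import time

class CircuitBreaker:
    """Minimal sketch: open after `threshold` consecutive failures,
    allow a probe call after `reset_after` seconds. Not production-grade."""

    def __init__(self, threshold=3, reset_after=30.0):
        self.threshold = threshold
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker opened, or None

    def call(self, fn, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: permit one probe call
        try:
            result = fn(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success closes the breaker
        return result
```

Placed on the checkout → notification edge, the breaker converts a slow, thread-holding dependency failure into an immediate, handleable error: a material recoding of the coupling, with the expressive structure (service names, contracts) intact.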

Thought Experiment

The team that became a service

You are designing the organizational structure for a new platform: a developer portal that will aggregate documentation, deployment pipelines, and observability dashboards for 40 product teams.

A senior engineer proposes that the platform team should own all three capabilities as a single team: "They're all developer experience. They belong together."

A principal engineer proposes splitting into three sub-teams aligned with the three capabilities: "Each capability has its own roadmap, its own SLA, its own tech stack."

Apply the frameworks:

  1. Simondon: The organizational "preindividual field" is the set of developer needs, capability dependencies, and staffing constraints. What tensions exist in that field? Which tensions does the "single team" proposal resolve? Which does the "three sub-teams" proposal resolve? Which tensions remain unresolved by either proposal — and what future individuation pressure will they generate?

  2. Assemblage / relations of exteriority: In the three-sub-teams arrangement, are the sub-teams truly components with relations of exteriority — capable of being recombined independently — or are they so interdependent that the split is purely expressive (a naming convention) rather than material (actual operational independence)?

  3. Cynefin: Is the question of "which team structure is right" a complicated problem (experts can analyze dependencies and determine the optimal structure in advance) or a complex problem (the right structure will only become clear through feedback from the evolving domain)? What would a "safe-to-fail experiment" look like here?

  4. Nagarjuna: Both proposals name and designate conventional entities — "the developer experience team," "the observability sub-team." At what point does treating one of these designations as a natural fact (rather than a pragmatic coordination tool) become a source of architectural and organizational rigidity?

There is no single correct answer. The goal is to notice which framework reveals tensions that the others obscure.

Boundary Conditions

When assemblage theory is insufficient

1. When components are genuinely internally related.

Assemblage theory's strength — the exteriority of relations — is also a limitation. Some systems have components whose identity is constituted by their relations. A type system component that is extracted from its type-checking context is no longer a type system — it is a different thing. In these cases, Whitehead's internal relations framework may be more accurate, and the assemblage model can mislead by suggesting components are more modular than they are.

2. When the abstraction level is wrong.

Weak emergence and computational irreducibility are level-dependent. Systems may be irreducible at fine-grained levels while still permitting predictability at higher levels of abstraction. The assemblage framework does not specify which level to analyze. Getting the level wrong — analyzing at the service level when the relevant emergence is at the infrastructure level, or vice versa — produces correct philosophy and wrong conclusions.

3. When deterritorialization is not recoverable.

Assemblage theory treats territorialization and deterritorialization as continuous and reversible. In practice, some deterritorializations are irreversible under organizational constraints: once a team has been dissolved, once a shared database has been split, once a public API has been deprecated, reterritorialization requires resources and coordination that may not be available. The framework describes dynamics accurately but underestimates the asymmetry of organizational entropy.

4. When safe-to-fail is actually fail.

Cynefin's complex domain prescribes "probe-sense-respond" with safe-to-fail experiments. But in regulated environments (financial systems, healthcare, safety-critical infrastructure), there may be no such thing as a safe-to-fail experiment at the system level. In these contexts, probe-sense-respond still applies with full force at the architectural level during design; it is not available as an incident-response strategy in production.

5. When two-truths becomes epistemic evasion.

Nagarjuna's two-truths doctrine is an anti-reification heuristic, not a license for permanent ambiguity. "This is just a conventional designation" can become a defense against making hard architectural decisions. The doctrine says conventional designations are pragmatically valid; it does not say that all designations are equally valid. Some conventions serve coordination better than others. The two-truths framework tells you not to reify; it does not tell you which convention to adopt.

Stretch Challenge

Locate a post-mortem or architecture decision record from a system you have worked on (or a public one, such as the Slack Engineering blog or the Netflix Tech Blog).

Apply the full analytical stack from this module:

  1. Assemblage decomposition: What are the components? What are the relations among them (material and expressive)? Where are the relations of exteriority, and where are relations that look like exteriority but are actually internal? What is the degree of territorialization? What coding holds the assemblage together?

  2. Emergence classification: What emergent behaviors does the post-mortem describe? Are they weakly emergent (explicable post-hoc but unpredictable in advance) or strongly emergent (not reducible even in principle)? At what level of abstraction does the behavior become predictable?

  3. Simondon / Barad: What preindividual tensions were left unresolved by the initial architecture? What agential cuts were made — what did the boundary-making apparatus enable and foreclose? What would have been different if the architects had treated the boundaries as transductive rather than representational?

  4. Cynefin: Which domain (complicated, complex, chaotic) best describes the incident or decision? Did the team respond appropriately to that domain? If they treated a complex problem as complicated (or vice versa), what was the consequence?

  5. Two truths: Where in the post-mortem or ADR can you see the reification error? Where did treating a conventional designation as a natural fact create rigidity or delay the appropriate response?

Write up your analysis. The constraint: you must identify at least one place where two frameworks give conflicting readings, and explain which reading you find more useful and why.

Key Takeaways

  1. Relations, not parts, are the primary unit of architectural analysis. An assemblage's properties emerge from relations among components, not from the components themselves. Components in an assemblage retain relations of exteriority — they can in principle be recombined — but the configuration of relations determines the system's behavior.
  2. Weak emergence is the norm in distributed systems, and it is level-dependent. Most emergent behaviors in distributed systems are weakly emergent: theoretically explicable but practically unpredictable in advance. Computationally irreducible systems at fine-grained levels can be predictable at coarser levels of abstraction. The choice of abstraction level is a design decision with epistemic consequences.
  3. Service and team boundaries are not discovered — they are individuated and enacted. Simondon's individuation shows that boundaries crystallize from a preindividual field of tensions through design decisions that resolve those tensions incompletely. Barad's agential cuts show that boundary-making is a performative act that enables and forecloses — not a representation to be judged for accuracy.
  4. Cynefin operationalizes emergence into decision types. In clear and complicated domains, analysis before action is productive. In the complex domain, probe-sense-respond with safe-to-fail experiments is the correct mode. Treating complex problems as complicated — applying upfront analysis where only feedback from the system can reveal structure — is one of the most common failure modes in platform engineering.
  5. Architectural entities are conventional designations, not natural facts. Nagarjuna's two-truths doctrine is the anti-reification heuristic: a bounded context or microservice is a prajñapti — pragmatically valid and causally efficacious, but ultimately empty of intrinsic essence. Use the designation. Maintain it. Change it when it stops serving coordination. Do not mistake it for a feature of the domain itself.

Further Exploration

Assemblage theory and DeLanda

Emergence

Simondon

Barad

Nagarjuna and the two truths

Cynefin