Declarative Systems and Intent Preservation
How intent-based architectures close the gap between what you mean and what your system does
Learning Objectives
By the end of this module you will be able to:
- Explain the translation gap problem and why it erodes rationale in imperative systems.
- Describe how Intent-Based Networking's three-space model separates business intent from operational implementation.
- Evaluate OPA/Rego as a mechanism for encoding policy intent declaratively.
- Apply KAOS goal refinement to decompose a high-level intent into verifiable sub-goals.
- Identify where in a specification chain intentionality degrades, and how traceability mitigates it.
Core Concepts
The Translation Gap Problem
Every system is born from an intent—"we need to ensure only authenticated users access patient data"—and then slowly accumulates distance from that intent through layers of design decisions, implementation choices, and operational workarounds. The technical term for this distance is the semantic gap.
In intent-based systems specifically, translating natural-language or semi-structured intents into machine-readable configurations requires understanding underlying data models (such as YANG in networking), and each step in that translation widens the gap between what a user meant and what traffic-forwarding rules actually enforce. The security and privacy implications of this gap remain poorly understood in academic literature—which should give pause to anyone treating declarative intent as a solved problem.
The translation gap is not caused by careless engineers. It is an inherent consequence of moving intent across abstraction levels. Every translation step is a lossy compression.
The gap manifests in at least two ways:
- Meaning loss: the translated artifact no longer captures why a decision was made, only what it does.
- Drift: the implementation evolves but the intent artifact does not, or vice versa.
Declarative systems do not eliminate the gap—but they do give it a stable address.
The Chain of Intentionality
Requirements engineering, design, and implementation form a chain of intentionality that runs from high-level intentions down to low-level execution details. Formally, this chain separates:
- Specification — what the system must accomplish
- Realization — how it will be structured
- Implementation — when and where it executes
Each level of translation can introduce gaps where original intent becomes obscured or reinterpreted. Intent-based systems aim to preserve this chain by operating at multiple abstraction levels simultaneously, but this preservation depends on maintaining explicit, navigable mappings between those levels.
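An explicit, navigable mapping between levels can be made concrete with a small sketch. This is an illustrative Python model (all class and variable names are invented for this example, not from any standard): each artifact records which higher-level artifact it refines, so the original intent stays reachable from any implementation detail.

```python
from dataclasses import dataclass, field

@dataclass
class Artifact:
    """A node at one level of the chain: specification, realization, or implementation."""
    level: str                                  # "specification" | "realization" | "implementation"
    text: str
    refines: list["Artifact"] = field(default_factory=list)  # explicit link upward

def trace_to_intent(artifact: Artifact) -> list[str]:
    """Walk the backward links until the top-level specification is reached."""
    chain = [f"{artifact.level}: {artifact.text}"]
    for parent in artifact.refines:
        chain.extend(trace_to_intent(parent))
    return chain

# Illustrative chain for the patient-data intent from the opening example.
spec = Artifact("specification", "only authenticated users access patient data")
design = Artifact("realization", "gateway enforces OAuth2 on /patients/*", refines=[spec])
impl = Artifact("implementation", "auth check on the /patients route", refines=[design])
```

Starting from `impl`, `trace_to_intent` recovers the whole chain, ending at the business-level specification; without the `refines` links, that walk is exactly the archaeology the aphorism below warns about.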
Intent without traceability is archaeology waiting to happen.
Declarative vs. Imperative: What Changes for Intent
Declarative programming requires the programmer to specify "what the program must accomplish in terms of the problem domain" rather than "how to accomplish it as a sequence of language primitives." This distinction matters for intent preservation: higher-level specifications maintain a stronger connection to the user's original intent and reasoning.
But the advantage is conditional. The claim is not that declarative code is always better—it is that declarative code makes the what readable and the why recoverable. That recovery only works if:
- The declarative specification was written with intent in mind, not just as a translation artifact.
- The abstraction level of the language is high enough to express business-level concerns.
- Tooling can reason about the spec, not just execute it.
Imperative systems encode how. When circumstances change, the how becomes obsolete—but because the why was never captured, no one knows which parts of the code can be safely changed and which parts are load-bearing.
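The difference can be made concrete with a deliberately small sketch (the roles, paths, and rules here are invented for illustration). The imperative form buries the policy in control flow; the declarative form holds the same policy as inspectable data, so questions about it can be answered without tracing execution.

```python
# Imperative: the "how" is executable, but the policy is only recoverable
# by reading every branch of the control flow.
def can_access_imperative(role: str, path: str) -> bool:
    if role == "admin":
        return True
    if path.startswith("/public"):
        return True
    return False

# Declarative: the same policy expressed as data. The rules are a
# standalone, auditable artifact.
RULES = [
    {"allow": True, "role": "admin", "path_prefix": "/"},
    {"allow": True, "role": "*", "path_prefix": "/public"},
]

def can_access_declarative(role: str, path: str) -> bool:
    return any(
        r["allow"] and r["role"] in (role, "*") and path.startswith(r["path_prefix"])
        for r in RULES
    )
```

Both functions give identical answers, but only `RULES` can be diffed in version control, linked to a requirement, or queried ("which rules could ever grant access to /finance?") without running the program.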
IBN's Three-Space Model
Intent-Based Networking (IBN) provides a well-specified architectural vocabulary for thinking about how intent flows through a system. RFC 9315 defines three abstraction spaces:
- User Space: where intent is authored, in business or operational language ("no unauthenticated traffic between zone A and zone B").
- Intent-Based System (IBS) Space: where intent is translated into policies and courses of action, and where conflicts between policies are detected and resolved before deployment.
- Network Operations Space: where policies are activated across physical and virtual infrastructure, monitored continuously, and verified against outcomes.
The three spaces do not just label abstraction levels—they impose responsibilities. Translation happens in IBS Space, not Operations Space. Conflict resolution happens before activation, not after. This discipline is what makes the model useful for preserving intent.
IBS Closed-Loop Architecture
The IBN three-space model comes to life through a closed-loop operational architecture. RFC 9315 and related IETF work describe three functional stages:
- Translation — capture and translate intent into policies.
- Activation — install policies across infrastructure.
- Assurance — continuously monitor and verify that network behavior aligns with intended outcomes.
The feedback from Assurance back to Translation is what makes this a closed loop rather than a pipeline. If observed behavior diverges from intent, the system has a pathway to detect and surface that divergence—rather than silently drifting.
In a closed-loop system, the original intent remains a live artifact that gets compared against reality. In a one-shot imperative system, intent is consumed during development and rarely consulted again.
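One pass of that loop can be sketched in a few lines. This is a toy simulation, not an IBN implementation: the intent keys and the drifted state are invented, and "retranslation" is reduced to a callback.

```python
def assurance_loop(intended: dict, observe, retranslate) -> dict:
    """One Translation -> Activation -> Assurance pass: compare observed
    behavior against intended outcomes and feed divergence back."""
    observed = observe()
    divergence = {
        key: {"intended": want, "observed": observed.get(key)}
        for key, want in intended.items()
        if observed.get(key) != want
    }
    if divergence:
        retranslate(divergence)  # closed loop: divergence re-enters Translation
    return divergence

# Illustrative intent: traffic between two zones must be authenticated.
intended = {"zoneA->zoneB": "deny-unauthenticated"}
observed_state = {"zoneA->zoneB": "allow-all"}  # drift from a manual change

retranslations = []
divergence = assurance_loop(intended, lambda: observed_state, retranslations.append)
```

The point of the sketch is the wiring: the divergence is not merely logged, it is handed back to the translation stage, which is what distinguishes a closed loop from a pipeline.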
Conflict Detection in Goal Systems
When multiple intents coexist in a system, they will eventually conflict. Conflicting policies can cause destructive network configurations—firewall rules that cancel each other out, routing policies that create loops, access controls that simultaneously grant and deny the same permission.
Conflict detection is not optional in declarative intent systems: it is a critical functional component. Conflicts must be detected and resolved before policies are deployed, not discovered through incidents in production.
This is an area where declarative systems have a structural advantage. Because intent is expressed as explicit, queryable artifacts rather than embedded in procedural logic, it is possible to reason about whether two intents are consistent before either one is activated.
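That kind of pre-deployment reasoning can be simulated with a minimal sketch (policy shape and contents invented for illustration): because each policy is a data record, contradictory decisions for the same subject can be flagged before anything is activated.

```python
def detect_conflicts(policies: list[dict]) -> list[tuple[str, str]]:
    """Flag (role, resource) pairs that receive contradictory effects
    from different policies, before anything reaches activation."""
    decisions: dict[tuple[str, str], str] = {}
    conflicts = []
    for p in policies:
        key = (p["role"], p["resource"])
        if key in decisions and decisions[key] != p["effect"]:
            conflicts.append(key)
        decisions.setdefault(key, p["effect"])
    return conflicts

policies = [
    {"role": "contractor", "resource": "/finance", "effect": "deny"},
    {"role": "employee",   "resource": "/finance", "effect": "allow"},
    {"role": "contractor", "resource": "/finance", "effect": "allow"},  # contradicts the first
]
```

A check like this belongs in CI: the third policy is caught as a conflict at review time, not discovered as a firewall rule that cancels another one in production.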
KAOS Goal Refinement
KAOS (Knowledge Acquisition in automated Specification) is a goal-oriented requirements engineering methodology that applies directly to intent-based system design. KAOS supports formal specification and refinement of system goals at the declarative level, and includes formal obstacle analysis.
The core mechanism is goal refinement: high-level goals are decomposed into sub-goals until each sub-goal is assignable to a specific agent (human or automated) and verifiable. Each refinement step is explicit and traceable.
The formal obstacle analysis is what distinguishes KAOS from informal goal decomposition. An obstacle is a condition, consistent with the domain theory, whose satisfaction implies the negation of the goal. Rather than asking "will this work?", KAOS asks "what conditions, if true, would cause this goal to fail?", and then requires those obstacles to be mitigated before the goal is accepted.
Requirements Traceability
Requirements traceability is formally defined as "the ability to describe and follow the life of a requirement in both forwards and backwards directions through its origins, development, specification, deployment, use, and refinement."
Bidirectional traceability consists of:
- Forward traceability: mapping a requirement through design decisions and into implementation artifacts.
- Backward traceability: given an implementation artifact, finding the requirement (and ultimately the business intent) that motivated it.
Without backward traceability, engineers facing an unfamiliar piece of code cannot answer the question "is this still needed?" Without forward traceability, stakeholders cannot answer "was my requirement actually implemented?"
Traceability is the technical mechanism that makes the chain of intentionality navigable. It is also frequently neglected—because it is overhead that pays off later, not now.
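Mechanically, bidirectional traceability is just a maintained set of links over which both queries run. A minimal sketch (requirement IDs and artifact paths are invented):

```python
# Trace links as explicit (requirement, artifact) pairs. Forward and
# backward traceability are then symmetric lookups over the same data.
LINKS = [
    ("REQ-12 contractors restricted to non-sensitive resources", "policy/finance_deny.rego"),
    ("REQ-12 contractors restricted to non-sensitive resources", "policy/hr_deny.rego"),
    ("REQ-07 audit every access to patient data", "services/audit/middleware.py"),
]

def forward(requirement: str) -> list[str]:
    """Forward traceability: was my requirement implemented, and where?"""
    return [a for r, a in LINKS if r == requirement]

def backward(artifact: str) -> list[str]:
    """Backward traceability: is this artifact still needed, and why?"""
    return [r for r, a in LINKS if a == artifact]
```

The hard part is not the lookup, it is keeping `LINKS` current as policies and code evolve; an artifact whose `backward` result is empty is exactly the "is this still needed?" problem.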
OPA/Rego as Policy-as-Code
Open Policy Agent (OPA) with its Rego language is a concrete, production-grade example of declarative intent encoding. OPA allows policy authors to focus on "what queries should return rather than how queries should be executed."
Rego was designed to be declarative, inspired by Datalog. It includes formal semantics requiring every rule to be "safe"—meaning OPA can determine a finite list of possible values for every variable. This is not just a style constraint; it is a correctness guarantee. A Rego policy that is unsafe cannot be evaluated.
This formal foundation means that Rego policies can be:
- Audited without running them—the intent is in the text.
- Tested in isolation from the infrastructure they govern.
- Reasoned about formally: given these rules, can a user with role X ever access resource Y?
The contrast with imperative authorization code is stark. An authorization check embedded in application middleware cannot be audited without tracing execution paths. A Rego policy is a standalone declaration of governance intent.
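The auditor's question above, "can a user with role X ever access resource Y?", can be simulated in a few lines. This is a hedged Python sketch, not OPA's evaluation engine; the rule contents are invented, loosely mirroring the contractor rule discussed in the case study below.

```python
# Deny rules as plain data, in the spirit of a Rego deny rule set.
DENY_RULES = [
    {"role": "contractor", "path_prefix": "/finance"},
    {"role": "contractor", "path_prefix": "/hr"},
]

def is_denied(role: str, path: str) -> bool:
    """Request-time decision: does any deny rule match this request?"""
    return any(r["role"] == role and path.startswith(r["path_prefix"]) for r in DENY_RULES)

def audit_can_ever_access(role: str, path_prefix: str) -> bool:
    """Static audit question: is this (role, prefix) covered by any deny rule?
    Answered by inspecting the rules, without executing any request path."""
    return not any(
        r["role"] == role and path_prefix.startswith(r["path_prefix"])
        for r in DENY_RULES
    )
```

The same artifact answers both the runtime question and the audit question; with an imperative middleware check, only the first is available without tracing code paths.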
Formal Verification and Its Limits
Formal verification is increasingly applied to intent-based systems to ensure that intent specifications are correctly translated into deployable configurations. An intent translation engine must publish the intent semantics it supports, and hybrid approaches that combine natural language processing with symbolic reasoning can check both correctness and constraint satisfaction.
But the limits matter here. Formal verification can only verify that a translation conforms to specified constraints—it cannot guarantee that the translated configuration preserves the original intent. If the constraints are wrong, verification against them is worthless.
This is the hard boundary of formal methods in intent-based systems: they push the problem up the abstraction stack, but they do not eliminate it. Someone still has to verify that the formal specification captures the actual intent.
Annotated Case Study
OPA in a Multi-Team Platform: When Policy-as-Code Works and When It Doesn't
Setting: A platform team at a mid-size company adopts OPA to centralize authorization policy across microservices. Previously, each service implemented its own access control logic—in Java, Go, Python—with different assumptions and no shared audit trail.
What they did:
The team moved all authorization decisions into OPA. Services now send authorization queries to OPA at request time; OPA evaluates Rego policies and returns allow/deny. Business rules live in .rego files, version-controlled alongside infrastructure.
Where it worked:
The Rego policies are legible to non-engineers. When a security auditor asks "can a contractor access financial records?", an engineer can point to a specific policy file and walk through the rules. The intent—"contractors are restricted to non-sensitive resources"—is readable in the declaration. Forward traceability from business requirement to policy is explicit.
When the company acquires a subsidiary and needs to extend access rules, the platform team can reason about conflicts before deploying. Two policies that would contradict each other can be caught in CI, not in a production incident.
Where it struggled:
The original intent behind some rules was not captured in the policy files—only the rules themselves. When a rule that blocked a specific user role from an API endpoint turned out to block a legitimate use case, no one could find the ticket or conversation that motivated the rule. The policy preserved the what, but not the why.
Backward traceability from policy back to business decision had not been maintained. The Rego file said `deny { input.role == "contractor"; startswith(input.path, "/finance") }`. It did not say why contractors are blocked from /finance, or whether that reasoning still applies.
Annotation:
OPA solves the forward problem well: intent expressed in Rego survives implementation. It does not automatically solve the backward problem: knowing why a rule exists requires deliberate practice—comments, linked tickets, ADRs attached to policy files. The tool creates the opportunity for preservation; the team has to use it.
This maps directly to the closed-loop assurance model: OPA's enforcement layer is solid, but the translation and assurance layers—where business intent is captured and verified against outcomes—still require human discipline.
Compare & Contrast
Declarative vs. Imperative Authorization
| | Declarative (OPA/Rego) | Imperative (middleware code) |
|---|---|---|
| Intent visibility | Rules are explicit, auditable artifacts | Logic is embedded in execution paths |
| Conflict detection | Possible before deployment | Discovered in production |
| Reasoning | Static analysis on policy | Requires runtime tracing |
| Traceability | Easier—policies are version-controlled, linkable | Harder—changes are spread across codebases |
| Flexibility | Constrained by language semantics | Unconstrained—which is the risk |
| Overhead | OPA infrastructure, Rego learning curve | Lower initial cost, higher long-term maintenance |
KAOS Goal Refinement vs. User Stories
| | KAOS | User Stories |
|---|---|---|
| Intent level | System-level goals and sub-goals | Feature-level requirements |
| Formalism | Formal, supports reasoning and verification | Informal, supports discussion |
| Obstacle analysis | Explicit, formal | Ad-hoc (via acceptance criteria) |
| Traceability | Built into the methodology | Requires external tooling |
| Cost | High—requires RE expertise | Low—accessible to all roles |
| Best for | Safety-critical, high-stakes systems | Agile product development |
KAOS is not a replacement for user stories, and declarative policy is not a replacement for imperative code. The question is always: which tool preserves the intent that matters most for this context?
Boundary Conditions
When Declarative Systems Do Not Preserve Intent
The specification is wrong. A declarative system faithfully enforces what is written. If the Rego policy or KAOS goal model captures the wrong intent—because the requirements elicitation was shallow, or because the domain expert and the engineer were talking past each other—then declarative enforcement makes the wrong thing durable and harder to change.
The translation is lossy. Formal verification can only verify that a translation conforms to specified constraints, not that it preserves the original intent. The semantic gap is not closed by having a well-structured translation pipeline—it is managed. If the intent semantics are not published and agreed upon, the translation is opaque even if the output is formally valid.
Conflict detection is not implemented. The IBN model treats conflict detection as a required functional component. Skipping it means that overlapping or contradictory intents reach the activation layer, where they produce behaviors that no individual intent author would recognize as their own.
Traceability is not maintained. The chain of intentionality degrades silently. Policies accumulate. The reasons behind rules are lost. A declarative system without maintained traceability becomes an archaeology problem faster than an imperative one—because the rules are more legible, but the rationale is just as absent.
The domain is too dynamic. Declarative systems are strong when the space of valid states can be enumerated or bounded—which is why Rego requires rules to be "safe." In highly dynamic domains where the state space is open-ended or the intent itself evolves rapidly, the overhead of maintaining formal declarative specifications may outweigh the benefit.
Adopting a declarative architecture does not automatically preserve intent. It creates the structural conditions under which intent can be preserved—and then requires human discipline to follow through.
Key Takeaways
- The translation gap is structural. Every time intent crosses an abstraction boundary—from business language to policy, from policy to configuration, from configuration to execution—some meaning is at risk of being lost. Declarative systems reduce but do not eliminate this risk.
- The IBN three-space model names the responsibilities. User Space captures intent; IBS Space translates and detects conflicts; Operations Space activates, monitors, and verifies. Each space has a defined job, and the closed-loop assurance feedback is what keeps the system honest over time.
- OPA/Rego makes policy intent auditable. By expressing authorization rules as formal declarations rather than procedural code, Rego policies can be reasoned about, tested, and traced back to business requirements—but only if the team invests in linking the why to the what.
- KAOS goal refinement externalizes the chain of intentionality. Decomposing high-level goals into verifiable sub-goals, with formal obstacle analysis, makes the reasoning behind a system design traceable and reviewable. The cost is real; so is the benefit in high-stakes systems.
- Traceability is the mechanism, not a byproduct. Bidirectional requirements traceability—forward to implementation, backward to origin—is what makes intent recoverable. It requires deliberate investment; it does not emerge from good intentions alone.
Further Exploration
Foundational References
- RFC 9315 — Intent-Based Networking: Concepts and Definitions — The canonical IETF specification for IBN. Required reading for anyone designing intent-driven infrastructure.
- Goal-Oriented Requirements Engineering: A Guided Tour — Axel van Lamsweerde — The foundational paper on KAOS and GORE. Dense but precise.
Policy and Authorization
- OPA Policy Language documentation — Covers the formal semantics of Rego, including the safety requirement and the declarative evaluation model.
- Flexible access control policy specification with constraint logic programming — The formal foundation for constraint-based policy specification, complementary to Rego's Datalog lineage.
Intent-Based Networking in Depth
- Security Challenges of Intent-Based Networking — ACM CACM — A rigorous treatment of the semantic gap and its security implications.
- INTA: Intent-Based Translation for Network Configuration with LLM Agents — Current research example of hybrid NLP + formal verification approaches to the translation gap problem.
- Intent-Based Networking management with conflict detection and policy resolution — Concrete treatment of conflict detection as a required system component, with a proposed resolution framework.