Local-First Foundations

What it means for data to live on the client, and why sync becomes your new architecture

Learning Objectives

By the end of this module you will be able to:

  • Explain what local-first software is and how it differs from traditional client-server and offline-first approaches.
  • Describe the core promise of local-first: instant UI, offline capability, and sync as infrastructure.
  • Identify the primary trade-offs and adoption costs that local-first architectures introduce.
  • Recognize categories of applications where local-first is and is not a good fit.

Core Concepts

The Central Inversion

In a conventional client-server application, the server is the source of truth. The client displays what the server says, sends mutations upstream, and waits for confirmation. The network is in the critical path for every interaction. Performance optimizations like caching or optimistic updates are bolt-ons—tricks layered on top of an architecture that fundamentally requires connectivity to do anything meaningful.

Local-first inverts this. The client database becomes the primary source of truth for the user's session. Reads happen locally. Writes commit locally. The UI responds instantly because it never waits for a network round-trip. The server is still there, but its role changes: instead of being the gatekeeper of every operation, it becomes the synchronization hub that makes the local state durable and shared across devices or users.

In local-first, the network is not in the critical path. It is in the background.

This is a genuine architectural shift, not a performance trick.
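The inverted write path can be sketched in a few lines. This is an illustrative toy, not any particular library's API: `LocalStore`, `SyncQueue`, and `saveTodo` are hypothetical names, and the "server" is simply absent from the critical path.

```typescript
// A minimal sketch of the local-first write path: commit to a local store
// synchronously, queue the change for background sync. All names here are
// illustrative, not a real sync library's API.

type Todo = { id: string; title: string; done: boolean };

class LocalStore {
  private rows = new Map<string, Todo>();
  // Reads never touch the network: they hit the local store directly.
  get(id: string): Todo | undefined {
    return this.rows.get(id);
  }
  put(todo: Todo): void {
    this.rows.set(todo.id, todo);
  }
}

class SyncQueue {
  readonly pending: Todo[] = [];
  enqueue(todo: Todo): void {
    this.pending.push(todo);
  }
}

// The write commits locally first; sync happens later, off the critical path.
function saveTodo(store: LocalStore, queue: SyncQueue, todo: Todo): void {
  store.put(todo);     // instant: the UI can re-render immediately
  queue.enqueue(todo); // background: flushed to the server when convenient
}

const store = new LocalStore();
const queue = new SyncQueue();
saveTodo(store, queue, { id: "t1", title: "Write module", done: false });
console.log(store.get("t1")?.title); // served locally, no round-trip
console.log(queue.pending.length);   // one operation awaiting sync
```

Notice what is missing: no `await fetch(...)` between the user's action and the state change. The server learns about the write whenever the queue flushes.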

Offline-First vs. Local-First: A Necessary Distinction

These two terms are often used interchangeably, but they describe different problems.

Offline-first is primarily a resilience strategy. It asks: what happens when the network disappears? The goal is to degrade gracefully—show cached data, queue mutations, reconcile when connectivity returns. The mental model is still server-centric: the server owns the data, and the client approximates it during gaps.

Local-first is an architectural philosophy. It asks: what if the client were always authoritative? Offline capability is a consequence, not the goal. The goal is ownership: data lives with the user, not behind a service the vendor controls. Sync is not a recovery mechanism; it is the means by which multiple clients stay coherent.

The distinction matters in practice. An offline-first app can have awkward seams—you flush queued writes on reconnect and hope they reconcile cleanly. A true local-first app treats sync as a first-class infrastructure concern, designed from day one with conflict resolution, partial replication, and collaborative semantics in mind.

Sync as a First-Class Architectural Concern

Once data lives locally, synchronization cannot be an afterthought. The challenges are real:

  • When a user goes offline and performs mutations, both the local store and the server accumulate state changes independently. Neither is fully authoritative until they reconcile.
  • Reconnection triggers a merge problem. The application must determine what changed, in what order, with what semantics—using timestamps, change tokens, or version vectors to avoid data corruption. Partial sync failures, corrupted local storage, and conflicting timestamps are all predictable failure modes.
  • A mutation performed while offline cannot be cancelled retroactively: once it commits to the local queue, it will synchronize on reconnection. Mutation semantics must account for this.
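To make the merge problem concrete, here is a deliberately simplified last-write-wins reconciliation over per-key timestamps. Real sync engines use logical clocks, change tokens, or version vectors rather than raw wall-clock time, which skews; this sketch only shows the shape of the problem.

```typescript
// Deliberately simplified last-write-wins merge: each replica records a
// timestamp per key, and reconciliation keeps the newer value. Ties favor
// the remote side here; a real system needs a deterministic tie-break
// (e.g. a site identifier), and wall clocks are an unreliable ordering.

type Versioned = { value: string; ts: number };
type Replica = Map<string, Versioned>;

function merge(local: Replica, remote: Replica): Replica {
  const out: Replica = new Map(local);
  for (const [key, theirs] of remote) {
    const ours = out.get(key);
    if (!ours || theirs.ts >= ours.ts) out.set(key, theirs);
  }
  return out;
}

// Both sides accumulated changes independently while the device was offline:
const device: Replica = new Map([
  ["title", { value: "Trip notes (edited offline)", ts: 200 }],
]);
const server: Replica = new Map([
  ["title", { value: "Trip notes", ts: 100 }],
  ["owner", { value: "alice", ts: 150 }],
]);

const merged = merge(device, server);
console.log(merged.get("title")?.value); // offline edit wins: newer timestamp
console.log(merged.get("owner")?.value); // server-only key is preserved
```

Even this toy exposes the design questions the bullet list raises: what orders writes, what breaks ties, and what happens when a timestamp lies.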

This is why "just add offline support" is a deceptive request for an existing app. You are not adding a feature; you are adopting a whole class of distributed systems problems.

The Instant UI Promise and Its Conditions

Local-first architectures can deliver what is often called "instant UI": the interface responds immediately to every user action because all reads and writes go to the local database first. From the user's perspective, the app feels as fast as a native desktop application.

But the conditions that deliver this experience have trade-offs baked in. Consider how different sync tools make different bets:

Systems like PowerSync and ElectricSQL prioritize offline capability. They sync data to a local SQLite replica; the client reads and writes locally with full offline support. The trade-off is eventual consistency and potential authorization staleness—access control is evaluated at sync time, not query time.

Zero takes the opposite bet. It evaluates authorization on every query, requiring continuous server communication. This means permission changes apply immediately, with no stale-authorization window. But it also means that when the network is unavailable, Zero must either operate on cached policies (reduced freshness) or fail requests until reconnection (reduced offline capability). Zero prioritizes authorization freshness; offline durability is a secondary concern.
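The staleness window is easier to see in code. The sketch below is a toy: the in-memory ACL and `policy` function stand in for a real authorization system, and neither product's actual API is being reproduced.

```typescript
// Toy contrast of sync-time vs query-time authorization. The ACL is a
// mutable set so we can revoke access mid-example; all names are
// illustrative, not a real product's API.

type Row = { id: string; ownerId: string };

const acl = new Set<string>(["alice"]);
const policy = (userId: string, _row: Row) => acl.has(userId);

const serverRows: Row[] = [{ id: "doc1", ownerId: "alice" }];

// Sync-time model (PowerSync/ElectricSQL style): the policy is evaluated
// once, when the replica is built. Later local reads never consult it.
const localReplica = serverRows.filter((r) => policy("alice", r));

// Query-time model (Zero style): the policy runs on every read.
const queryTimeRead = () => serverRows.filter((r) => policy("alice", r));

console.log(localReplica.length);    // 1
console.log(queryTimeRead().length); // 1

acl.delete("alice"); // permission revoked on the server

console.log(localReplica.length);    // still 1: stale until the next sync
console.log(queryTimeRead().length); // 0: revocation applies immediately
```

The trade-off is exactly the one described above: the local replica keeps working offline but can serve rows the user no longer has rights to, while the query-time check is always fresh but needs the server reachable.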

Design question to internalize

Before choosing a local-first approach, ask: Does my application need offline writes, or just offline reads? The answer shapes which architectural trade-off you can accept.

The Ecosystem: No Single Standard Yet

The local-first Postgres space is active but not yet settled. PowerSync and ElectricSQL are the most production-ready options for Postgres-backed sync with SQLite on the client. Beyond them, the landscape includes ZenoDB (IndexedDB with WebSocket sync), SyncedDB (lightweight IndexedDB wrapper with multiple backend adaptors), and RxDB with the Supabase plugin (JavaScript-native replication). These span a range from production-grade to experimental. The absence of a dominant standard means tool selection is a real architectural decision, not a commodity choice.

Analogy Bridge

Think of how Git works for source code.

When you clone a repository, you get a full local copy. You commit locally, work offline, branch freely. The remote (origin) does not know or care what you are doing until you push. When you push, the remote may have diverged—someone else pushed first—and you have to pull and merge before your changes are accepted. If there are conflicts, you resolve them explicitly.

Local-first applications apply the same mental model to application data. The user's device is the working copy. The server is the remote. Sync is push/pull. Conflict resolution is the merge step.

The Git analogy breaks down in one important way: your users are not developers. They cannot be asked to resolve merge conflicts manually. This is why local-first architectures invest heavily in conflict-free data structures (CRDTs) and last-write-wins semantics—to make the merge step automatic and invisible.
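One of the simplest CRDTs illustrates what "automatic and invisible" merging means in practice. A grow-only counter (G-Counter) lets each replica increment only its own slot; merging takes the per-replica maximum, so merges commute and every sync order converges. This is a textbook construction, sketched here from scratch rather than taken from any library.

```typescript
// Grow-only counter (G-Counter): each replica increments only its own slot,
// and merge takes the per-replica maximum. Merges are commutative and
// idempotent, so replicas converge without user-visible conflicts.

type GCounter = Map<string, number>; // replica id -> that replica's count

function increment(c: GCounter, replicaId: string): void {
  c.set(replicaId, (c.get(replicaId) ?? 0) + 1);
}

function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = new Map(a);
  for (const [id, n] of b) out.set(id, Math.max(out.get(id) ?? 0, n));
  return out;
}

const value = (c: GCounter) => [...c.values()].reduce((s, n) => s + n, 0);

// Two devices count "likes" independently while offline:
const phone: GCounter = new Map();
const laptop: GCounter = new Map();
increment(phone, "phone");
increment(phone, "phone");
increment(laptop, "laptop");

// Merging in either order yields the same total -- this is the automatic,
// invisible merge step the prose describes.
console.log(value(merge(phone, laptop))); // 3
console.log(value(merge(laptop, phone))); // 3
```

A counter cannot express everything (it cannot decrement, for one), which is why real systems combine several CRDT types or fall back to last-write-wins where true merging is impossible.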

Annotated Case Study

Notion's Migration from IndexedDB to SQLite WASM

Notion is one of the highest-profile production validations of local-first client storage. Their application stores substantial amounts of structured data on the client to enable fast, responsive page navigation—the kind of experience users expect from a native app, delivered in a browser.

Initially, Notion used IndexedDB for client-side persistence, the storage mechanism browsers have natively exposed for years. Over time, IndexedDB proved limiting: browser-imposed storage quotas capped how much data could be cached locally, and query performance degraded as data volumes grew.

Notion migrated to SQLite WASM—a full SQLite database compiled to WebAssembly and running inside the browser tab. The result: page navigation times improved by 20%.

Why this matters architecturally:

  1. Performance is not just about sync speed. The bottleneck was local query performance, not network throughput. A relational engine with proper indexing outperforms IndexedDB's key-value semantics for structured queries.

  2. Storage constraints are a real deployment concern. Browser-managed storage (IndexedDB, localStorage) is subject to eviction and quota enforcement. SQLite WASM backed by the Origin Private File System (OPFS) provides more durable, higher-capacity storage.

  3. Production migrations are possible. Notion did not rebuild from scratch—they migrated. This suggests the local-first storage layer is more separable from application logic than it might seem.

Limits of this case study

The 20% navigation improvement is the headline metric, but Notion's sources do not break down which WASM SQLite library they used or detail the migration path. This case study validates the direction, not a specific implementation recipe.

The implicit lesson: choosing your client-side storage technology is a consequential decision. IndexedDB may be the path of least resistance to start, but applications with meaningful data volumes should plan for the constraints it imposes.

Common Misconceptions

"Local-first means offline-only." Local-first applications are fully network-capable; they just do not require the network to function. The local database handles reads and writes continuously. Sync happens in the background. Users connected to a fast network still experience instant UI because the local store responds before the sync layer confirms anything upstream.

"Adding a service worker gives you local-first." Service workers can intercept network requests and serve cached responses, which improves resilience. But this is not local-first. The application's data model still lives on the server. You are caching HTTP responses, not operating against a local database. The mutation path is still remote; you are just queuing the requests. Local-first requires the write path to commit locally first, which a service worker cannot provide on its own.

"Sync is the hard part." Sync is genuinely complex—but the harder shift is the mental model. Developers trained on request/response patterns have to unlearn the assumption that every user action should produce an immediate server-confirmed state. The most common early mistakes in local-first development come from trying to preserve server-authoritative semantics while bolting on a local cache, which produces the worst of both worlds.

"Local-first is only for apps that need offline support." Offline capability is a side effect, not the motivation. Applications with no offline requirement still benefit from local-first patterns: instant UI eliminates the latency tax of every interaction, and multi-device sync becomes a built-in property rather than a feature to engineer separately.

Key Takeaways

  1. Local-first inverts the authority model. The client database is the primary source of truth. The server is the sync hub, not the gatekeeper.
  2. Offline-first and local-first are different bets. Offline-first bolts resilience onto a server-centric model. Local-first treats sync as foundational infrastructure, with offline capability as a consequence.
  3. Sync is a distributed systems problem. Reconciling divergent local and server state requires explicit strategies for conflict resolution, versioning, and failure handling. This complexity does not disappear—it shifts into the sync layer.
  4. Different tools make different trade-offs. Sync-time authorization systems (PowerSync, ElectricSQL) prioritize offline durability at the cost of authorization freshness. Query-time systems (Zero) prioritize permission freshness at the cost of offline capability.
  5. The ecosystem is not yet settled. PowerSync and ElectricSQL are the most mature options for Postgres-backed local-first sync, but no single standard has emerged. Tool selection is an architectural decision.
