
Building Offline-First PWAs: Real-World Lessons

March 26, 2025 · 5 min read · Michael Ridland

"It works until they go to Blacktown."

That's what a client told us about their previous mobile app. Their field technicians cover Western Sydney, and once you get past Parramatta, mobile coverage gets patchy. The app would hang, data would disappear, and techs would scribble notes on paper to enter later.

We've now built three offline-first PWAs for field service companies. Here's what actually works.

Why PWA for Field Service?

The first question is always "why not a native app?" For field service specifically:

Distribution: No app store approval cycles. Push updates instantly. Workers install by visiting a URL.

Cross-platform: One codebase for iOS, Android, and desktop. Field supervisors often use tablets or laptops.

Cost: 30-40% cheaper to build and maintain than dual native apps.

Capability: Modern PWAs can do almost everything native apps can—offline storage, push notifications, camera access, GPS.

The trade-off is performance. Native apps will always be smoother for graphics-intensive work. But for business apps—forms, data display, workflow management—PWAs are indistinguishable from native to most users.

The Offline-First Mental Shift

Most developers think about offline as an edge case. "What happens when the network fails?"

Offline-first flips the question: assume there is no network. Design for that. Then add sync for whenever connectivity returns.

This changes everything:

  • Data lives locally first, server second
  • Actions are queued, not blocked
  • Conflicts are expected, not exceptional
  • The UI never shows loading spinners for local data

Architecture That Works

After several iterations, here's the pattern we've settled on:

Local Database

We use IndexedDB (via Dexie.js for sanity) as the source of truth on device. Every piece of data the user might need is stored locally.

┌─────────────────────────┐
│    User Interface       │
└───────────┬─────────────┘
            │
┌───────────▼─────────────┐
│    Local Database       │ ← Source of truth
│      (IndexedDB)        │
└───────────┬─────────────┘
            │
┌───────────▼─────────────┐
│    Sync Engine          │ ← Background process
└───────────┬─────────────┘
            │
┌───────────▼─────────────┐
│    Remote API           │
└─────────────────────────┘
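The local-database layer in the diagram above might be declared like this with Dexie.js. Table names, indexed fields, and the `saveNote` helper are all hypothetical, a sketch of the shape rather than our actual schema:

```javascript
// Hypothetical local schema for a field-service app, using Dexie.js.
import Dexie from 'dexie';

const db = new Dexie('fieldServiceDb');

// Each table lists its indexes; '&id' is a unique primary key,
// '++seq' is auto-incrementing.
db.version(1).stores({
  jobs: '&id, status, scheduledDate, technicianId',
  notes: '&id, jobId, updatedAt',
  outbox: '++seq, entity, entityId', // queued local changes awaiting sync
});

// Writes go to the local database first; the change is recorded in the
// outbox in the same transaction, and the sync engine drains it later.
export async function saveNote(note) {
  await db.transaction('rw', db.notes, db.outbox, async () => {
    await db.notes.put(note);
    await db.outbox.add({ entity: 'note', entityId: note.id, op: 'put' });
  });
}
```

Writing the record and its outbox entry in one transaction matters: if either fails, neither is persisted, so the queue can never drift out of step with the data.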

Sync Engine

The sync engine is a background process that:

  1. Detects connectivity changes
  2. Queues local changes
  3. Pushes changes when online
  4. Pulls updates from server
  5. Resolves conflicts

The key insight: sync is not "save to server." Sync is bidirectional, continuous, and conflict-aware.
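The queue-and-flush core of such an engine can be sketched in a few lines. `pushFn` stands in for the real API call, and a production engine also needs retries, backoff, and the conflict handling discussed below:

```javascript
// Minimal outbox sketch: local changes queue up and flush when online.
class SyncEngine {
  constructor(pushFn) {
    this.pushFn = pushFn; // stand-in for the remote API call
    this.outbox = [];
    this.online = false;
  }

  // Called after every local write; never blocks the UI.
  record(change) {
    this.outbox.push(change);
    if (this.online) this.flush();
  }

  // Called when connectivity changes (e.g. an 'online' event).
  async setOnline(online) {
    this.online = online;
    if (online) await this.flush();
  }

  async flush() {
    while (this.outbox.length > 0) {
      const change = this.outbox[0];
      try {
        await this.pushFn(change); // push to the remote API
        this.outbox.shift();       // dequeue only after a successful push
      } catch {
        break;                     // still offline or server error: retry later
      }
    }
  }
}
```

Note that a change leaves the queue only after the server confirms it, so a failed push is retried on the next flush rather than silently dropped.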

Service Worker Strategy

Service workers handle caching and network requests. Our typical strategy:

App shell: Cache-first. The HTML, CSS, and JS load from cache immediately. Updates happen in background.

API data: Network-first with cache fallback. Try the server, fall back to last-known-good if offline.

Static assets: Cache-first. Images, icons, fonts from cache.

Large files: Cache-only after initial download. Don't re-download PDFs and documents.
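These strategies map almost one-to-one onto Workbox routes. A sketch of a service worker using them, with illustrative route predicates and cache names:

```javascript
// sw.js — hypothetical Workbox setup matching the strategies above.
import { precacheAndRoute } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { CacheFirst, NetworkFirst } from 'workbox-strategies';

// App shell: precached at install time, served cache-first,
// updated in the background on the next deploy.
precacheAndRoute(self.__WB_MANIFEST);

// API data: try the network, fall back to the last cached response.
registerRoute(
  ({ url }) => url.pathname.startsWith('/api/'),
  new NetworkFirst({ cacheName: 'api-cache', networkTimeoutSeconds: 5 })
);

// Static assets: serve from cache, hit the network only on a miss.
registerRoute(
  ({ request }) => ['image', 'font'].includes(request.destination),
  new CacheFirst({ cacheName: 'static-assets' })
);
```

The `networkTimeoutSeconds` option is worth noting: on flaky connections, "network-first" without a timeout means users wait on a request that will never complete.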

The Hard Problems

Conflict Resolution

Two technicians update the same job. One adds notes offline, the other marks it complete. They both sync. Now what?

There's no universal answer. It depends on your domain logic. But here are strategies we've used:

Last-write-wins: Simple, sometimes wrong. Use for low-stakes data.

Field-level merge: Different fields updated = merge. Same field = flag for review.

Operational transformation: Track the operations (add, update, delete) not just state. More complex but more accurate.
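A field-level merge can be sketched as a three-way diff against the last synced version. The shape here is hypothetical; real records need a stored base version (or per-field timestamps) to make this work:

```javascript
// Field-level merge sketch: `base` is the last synced version,
// `local` and `remote` are the two divergent copies.
function mergeFields(base, local, remote) {
  const merged = { ...base };
  const conflicts = [];
  for (const key of new Set([...Object.keys(local), ...Object.keys(remote)])) {
    const localChanged = local[key] !== base[key];
    const remoteChanged = remote[key] !== base[key];
    if (localChanged && remoteChanged && local[key] !== remote[key]) {
      conflicts.push(key);       // same field, different values: flag for review
      merged[key] = remote[key]; // keep the server value until resolved
    } else if (localChanged) {
      merged[key] = local[key];  // only this device changed it: take it
    } else if (remoteChanged) {
      merged[key] = remote[key]; // only the server changed it: take it
    }
  }
  return { merged, conflicts };
}
```

The example of the two technicians resolves cleanly here: notes changed on one side and status on the other, so both survive the merge with no conflict flagged.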

For Coast Smoke Alarms' platform, we use field-level merge for most data and explicit conflict queues for important fields like job status.

Data Volume

Syncing everything isn't practical if "everything" is gigabytes. You need smart data scoping:

  • Sync data relevant to this user's role and geography
  • Sync recent data (last 30 days) fully, older data on-demand
  • Pre-fetch likely-needed data based on scheduled work
  • Compress aggressively
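In practice these rules reduce to the parameters the client sends on each pull. A sketch, with hypothetical field names rather than a real API:

```javascript
// Build the parameters for a scoped, incremental sync pull.
function buildSyncQuery(user, lastSyncedAt, now = new Date()) {
  const thirtyDaysAgo = new Date(now.getTime() - 30 * 24 * 60 * 60 * 1000);
  return {
    role: user.role,       // only records this role can see
    regions: user.regions, // geographic scoping
    // First sync: pull the recent 30-day window in full.
    // Later syncs: pull only what changed since the last sync.
    updatedSince: lastSyncedAt ?? thirtyDaysAgo.toISOString(),
    compress: 'gzip',
  };
}
```

Older records fall outside `updatedSince` and are fetched on demand, which is what turns a 15-minute initial sync into a 90-second one.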

We had one client where initial sync was taking 15+ minutes. After implementing geographic scoping and incremental sync, first-time setup dropped to 90 seconds.

Authentication Offline

How do you authenticate a user who has no network?

Options:

  1. Cached credentials: Store an auth token locally with appropriate expiry
  2. PIN/biometric unlock: After initial auth, allow local unlock
  3. Offline session limit: Allow X hours offline before requiring reconnection

We typically combine these: initial auth with server, cached token for 7 days, PIN unlock for quick access, biometric as an option.
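The combined policy amounts to a small state check on every app open. The session shape and the three outcomes below are illustrative, not a real auth library:

```javascript
// Offline session check sketch: a cached token is valid for 7 days;
// PIN (or biometric) unlock is allowed only while the token is fresh.
const TOKEN_TTL_MS = 7 * 24 * 60 * 60 * 1000;

function offlineAccess(session, now = Date.now()) {
  const age = now - session.issuedAt;
  if (age > TOKEN_TTL_MS) return 'reauth-required'; // must reach the server
  if (session.pinVerified) return 'unlocked';       // quick local access
  return 'pin-required';                            // local unlock first
}
```

The important property is that expiry is enforced locally: a stolen device with no network still locks the user out once the offline allowance runs out.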

UI Feedback

Users need to know what's happening. But you can't show "saving..." when saves hit the local database instantly, and you can't show "syncing..." every few seconds.

Our approach:

  • Optimistic UI: actions reflect immediately
  • Subtle sync indicator: small icon shows sync status
  • Explicit "you're offline" banner only when connectivity lost
  • Sync queue visible on demand (for power users)

Cargo Guardian Case Study

Cargo Guardian is a load restraint calculation app. Drivers use it on job sites—often rural, often with terrible reception.

The offline requirements were:

  • All calculator logic works without network
  • Results saved locally and synced later
  • PDF generation works offline
  • License validation tolerates offline periods

What we built:

  • Full calculation engine in client-side JavaScript
  • Local database stores all inputs and results
  • Service worker caches everything needed for PDF generation
  • License tokens cached with 7-day offline allowance

The app has been in production for over a year. Users regularly work offline for hours without issues. Sync failures are under 0.1%.

Tools We Use

Dexie.js: IndexedDB wrapper that doesn't make you hate life.

Workbox: Google's service worker library. Handles the boring parts.

React Query / TanStack Query: Great for caching and sync state management.

PouchDB (sometimes): When you want CouchDB-style sync built in.

What We'd Do Differently

On our first offline-first PWA, we underestimated:

Testing complexity: Simulating network conditions is hard. We now have a dedicated test harness that cycles through connectivity scenarios.

Migration paths: Local databases need schema migrations just like server databases. Plan for this early.

Edge cases: "Offline" isn't binary. There's slow, flaky, and partially-working connectivity. Each behaves differently.

Is It Worth It?

For the right use case, absolutely. Our field service clients report:

  • 90%+ reduction in data entry errors (no more paper notes)
  • Workers can complete their jobs regardless of connectivity
  • Supervisors get real-time visibility (when techs sync)
  • IT support tickets about "the app not working" dropped dramatically

The investment is higher than a simple online-only app. Budget 20-30% more for proper offline-first. But the user experience improvement is dramatic.

Getting Started

If you're considering a PWA for field workers or any connectivity-challenged use case:

  1. Map your data model and identify what must be available offline
  2. Define your sync strategy before writing code
  3. Test on real devices in real conditions early
  4. Budget for the complexity

We build mobile apps for Australian businesses, with particular expertise in offline-first architectures. Happy to discuss your requirements.

Talk to us about your mobile project.