
Offline-First Traceability: The Real Architecture Problem Behind Digital Product Passports

The interesting architectural problem in Digital Product Passports isn't which blockchain you pick. It's what happens 200 kilometres before the blockchain — in a village with no reliable mobile coverage, a supplier who uses a button phone, and a supervisor with an Android tablet that spends half its life offline.

Andrew Nalichaev · 18 min read

Most Digital Product Passport conversations I see skip the part that actually decides whether the system works. They start with the ledger — which chain, which standard, which identifier scheme, which wallet stack. They end with the consumer — what the QR code shows, how selective disclosure works, how regulators verify a claim. In between, a thick layer of assumptions gets smuggled in: that the origin event has a connected signer, that the data capture happens in a place with infrastructure, that the first entry into the system is trusted because it came from a "registered participant".

Strip those assumptions out and the architecture changes shape. The interesting problem isn't what happens on-chain. It's what happens 200 kilometres before the chain — in a rural cooperative with no reliable mobile coverage, a harvester who uses a button phone that will never run a wallet, a supervisor whose Android tablet spends half its life offline, and a weekly motorcycle trip to town that is the actual synchronisation event.

This is the first-mile problem. I covered the broader architectural shape — the five DPP layers, role-based verification, the physical identifier rebinding problem, the GDPR exposure surface — in the pillar piece. This article goes deep on the first layer: how capture actually works when the device is offline three days out of four and the signer doesn't have a key.

The Regulatory Push Is Not Patient

ESPR's implementing acts are landing category by category. Batteries were first — EU 2023/1542 — and the requirements are already specific enough to tell you where the hard problems are: origin, intermediate custody, material composition, processing events. Textiles, furniture, cosmetics, detergents and consumer electronics are in the pipeline. The 2030 horizon everyone cites for "most physical products" isn't a distant abstraction; the implementing architecture is being drafted now, and the assumptions baked into today's pilots are the ones regulators will interrogate later.

The awkward fact is that Europe consumes raw materials that originate far from EU infrastructure. Shea butter, cocoa, cotton, raw minerals, rare earths, selected pharmaceutical precursors, natural rubber, palm oil — a meaningful share of the inputs into ESPR-regulated finished products enter the supply chain in regions where the assumptions behind a typical DPP architecture simply don't hold. This isn't an edge case. It's the source layer for a large fraction of the regulated volume.

The architectural inconvenience is sharper than that. A DPP is, at its core, a claim about provenance backed by evidence produced at origin. The regulatory framework assumes that evidence is continuously captured, cryptographically attributable, and temporally verifiable. Legal pressure pushes hardest precisely at the point in the chain where the technical preconditions to produce such evidence are the weakest. The mismatch isn't a bug in the regulation or a gap in the infrastructure. It's the central design problem for anyone building these systems.

What "doesn't hold" means in practice:

  • The first signer is often not a legal entity with a registered wallet. It's a cooperative member, a smallholder, a harvester contracted through a local intermediary.
  • The capture device is owned by a supervisor, not the signer. The phone that signs is rarely the phone that witnesses.
  • Connectivity is intermittent at best. GSM coverage may exist in town but often becomes unreliable or absent at the village level. 3G/4G data is a weekly event, not a continuous condition.
  • Clock sources drift. Devices get replaced every few months. The same physical tablet in a village might be owned by three different people over a year.
  • The written record, if one exists, lives in a paper notebook that gets transcribed — with all the transcription errors that implies.

Any DPP architecture that wants to work in this reality has to treat offline as the default operating state, not a failure mode. That shift changes almost every downstream design choice.

Three Capture Architectures That Actually Ship

Over time I've come to see three distinct capture patterns that work in this environment. Each one is a different trade-off between trust, cost, and operational complexity. A serious DPP programme usually ends up using more than one, allocated by context.

[Figure: the three capture architectures — supervisor-mediated tablet, SMS/USSD gateway for button-phone signers, and stationary IoT node — compared by signer, trust tier, hardware and operational cost, fraud profile, and best-fit context, with notes on local-queue requirements and the trust-tier hierarchy.]

Path A — Supervisor-mediated capture on a hardened tablet

A field supervisor travels to collection points with a hardened Android tablet. The tablet runs a local app that records harvest events offline: timestamp, coarsened GPS, supplier ID (scanned from a paper card or NFC tag), weight, quality grade, photo evidence. The supervisor signs each event with a key stored in the device's secure element. Events queue locally in an encrypted SQLite store. When the supervisor reaches town — a weekly cadence in most programmes I've looked at — the queue syncs to the backend, which in turn anchors the Merkle root on-chain.
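As a rough sketch of the anchoring step, here is how a backend might reduce a weekly sync batch to a single on-chain commitment. This assumes SHA-256 and a duplicate-last-node Merkle construction; the function names are illustrative, not taken from any specific DPP stack.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(event_hashes: list[bytes]) -> bytes:
    """Compute a Merkle root over a batch of queued event hashes.

    The backend anchors only this 32-byte root on-chain; the events
    themselves stay off-chain and are proven later via Merkle paths.
    """
    if not event_hashes:
        raise ValueError("empty batch")
    level = [_h(h) for h in event_hashes]  # hash leaves once more as a domain step
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate the last node on odd-sized levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A weekly sync batch of three queued events:
batch = [hashlib.sha256(f"event-{i}".encode()).digest() for i in range(3)]
root = merkle_root(batch)
```

The supervisor's device never needs to know about this step; it only has to preserve the per-event hashes that feed it.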

This path is operationally the simplest. The supervisor is a real person with a real contract, accountable to the cooperative or the sourcing organisation. The key material lives on a device that's managed, replaced when it breaks, wiped when it's reassigned. Trust is concentrated in a role that was already being trusted before the DPP existed — it's the same supervisor who used to write things down in a ledger.

The honest weakness is that the signature attests to "the supervisor says this happened", not "the supplier confirms this happened". You can layer supplier confirmation on top — a thumbprint, a signed paper receipt photographed and hashed, an NFC tap from a supplier card — but the cryptographic anchor is still the supervisor's key. Any architecture that pretends otherwise is selling a fiction.

Cost is modest. A ruggedised tablet runs roughly $300–$400, a solar-charged battery pack another $100, a spare device per region for failure budget. Annualised, you're looking at something like $400–$500 per supervisor in year one, dropping after that. Manageable for cooperatives of any meaningful size.

Path B — SMS/USSD gateway for button-phone signers

This is the path that forces you to be honest about what signing actually means in the field.

A supplier who owns only a button phone cannot sign an ECDSA transaction. The phone lacks the compute, the key storage, and the input modality. Any DPP vendor claiming otherwise is either using "signing" in a non-cryptographic sense or misrepresenting what the phone does. This is not a limitation to engineer around. It's a hard constraint that has to be acknowledged in the trust model.

The honest architecture looks like this: the supplier sends a structured SMS or USSD sequence to a gateway number — something like SELL COOP123 HARVEST 45KG — through an aggregator such as Africa's Talking or Twilio. The gateway receives the message, enriches it with timestamp and operator metadata, and signs on behalf of the supplier using a gateway-role key that is explicitly marked as such in the on-chain schema. The event carries a trust_tier: GATEWAY_SIGNED flag that distinguishes it from events signed by an actor with individual key custody.
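A minimal sketch of the gateway's ingest path, assuming Python on the backend. The message format is the one above; the HMAC call is a stand-in for the gateway's real role-key signature, and names like `ingest_sms` and `GATEWAY_KEY` are hypothetical.

```python
import hashlib
import hmac
import re
import time

# Stand-in for the gateway role key: in production this lives in an HSM
# and signs with the gateway's declared on-chain role key.
GATEWAY_KEY = b"demo-gateway-role-key"

MSG_RE = re.compile(r"^SELL\s+(?P<coop>[A-Z0-9]+)\s+HARVEST\s+(?P<kg>\d+)KG$")

def ingest_sms(sender_msisdn: str, body: str) -> dict:
    """Parse a structured SMS and emit a gateway-signed capture event."""
    m = MSG_RE.match(body.strip().upper())
    if not m:
        raise ValueError(f"unparseable message: {body!r}")
    event = {
        "supplier_id": m["coop"],
        "weight_kg": int(m["kg"]),
        "sender": sender_msisdn,          # operator-verified MSISDN, not a key
        "received_at": int(time.time()),  # gateway clock, not device clock
        "trust_tier": "GATEWAY_SIGNED",   # explicit: no individual key custody
    }
    payload = repr(sorted(event.items())).encode()
    event["gateway_sig"] = hmac.new(GATEWAY_KEY, payload, hashlib.sha256).hexdigest()
    return event

evt = ingest_sms("+254700000001", "SELL COOP123 HARVEST 45KG")
```

The point of the `trust_tier` field is that nothing downstream can quietly forget how this event was produced.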

What this gives you:

  • A cryptographic attestation that a message was received by a specific registered number at a specific time and matched the supplier ID the cooperative had on file.
  • Zero claim that the supplier personally controls a key.
  • A clean audit trail that tells anyone reading the ledger that this event has operational trust, not cryptographic individual-signer trust.

This is how M-Pesa works at the architectural level, and it's how the vast majority of meaningful financial activity in several of the world's largest mobile-money economies has always worked. The trust is in the gateway operator and the PIN-based authentication on the supplier's SIM — not in the cryptography of the handset. Pretending otherwise has cost a lot of projects a lot of time.

Cost per message is roughly $0.01–$0.03 through an aggregator. For a cooperative processing tens of thousands of transactions a year, the SMS bill annualises to around $100–$200. The real cost is operational: managing the gateway, handling message parsing errors, reconciling supplier IDs, dealing with the fraud surface that opens up the moment you accept a text message as a supply-chain event (more on that below).

Path C — Stationary IoT node at a collection point

A solar-powered ESP32 or similar microcontroller sits at a fixed collection station. It has a load cell (HX711 and an appropriate strain-gauge assembly), an NFC reader (PN532), a secure element with its own key material (ATECC608B, or a more capable module), and a cellular modem for periodic sync. The supplier taps their NFC card, the scale weighs the load, the device signs a capture event with its own key, and the event queues for upload when connectivity is available.

This path gives you something the other two cannot: event evidence that wasn't mediated by a human at the moment of capture. The scale reading is tamper-resistant at the device level. The NFC card is a pre-distributed credential tied to a registered supplier. The time source, if you're careful, is cross-validated with the cellular tower or a GPS disciplined clock.

It's also the most expensive and the most operationally fragile of the three. An IoT station — panel, controller, load cell, NFC reader, secure element, modem, enclosure, installation — lands around $500–$700 for hardware, plus installation and a non-trivial maintenance budget. In my rough numbers for programmes I've evaluated, total first-year cost sits around $600–$700 per station. Environmental failure rates in tropical and dusty conditions are real and need to be provisioned for, typically at 15–20% device replacement per year.

But when you need physical-event evidence — a lab sample handover, a sealed container loading, a formal custody transfer — this is the path that produces the cleanest record.

Trust Tiering Is an Architectural Primitive, Not a Workaround

Most DPP architectures I've reviewed treat every signed event as equivalent. An event is either "in the ledger" or "not in the ledger". Once it's in, the chain-of-custody story treats it as a uniform attestation.

That model falls apart the moment you mix the capture paths above. An event signed by a gateway on behalf of a button-phone supplier is not the same evidentiary object as an event signed by a supervisor's tablet with supplier NFC confirmation. Both belong in the ledger. But they carry different weight, and any serious verifier — a regulator, an auditor, a sceptical buyer — should be able to distinguish them.

The response is to make trust tier a first-class field on every event. Concretely:

  • trust_tier: INDIVIDUAL_SIGNED — signed by an actor whose key is under individual custody (supervisor, manager, lab technician).
  • trust_tier: DEVICE_SIGNED — signed by an IoT station's secure element, with a fixed and declared installation location.
  • trust_tier: GATEWAY_SIGNED — signed by a gateway role key on behalf of a supplier who cannot hold key material.
  • trust_tier: MULTI_PARTY_SIGNED — signed by two or more roles jointly (used for high-stakes events like quality grade upgrades or export dispatch).

The tier becomes a query dimension at the verification layer. A regulator asking "what's the weakest link in this batch's custody chain?" should get an answer by scanning tiers, not by manually inspecting each event. A retailer deciding whether to trust a premium-grade claim should be able to filter for events where the tier meets a minimum threshold.
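One way to make both questions cheap is an ordered enum, so threshold checks become comparisons. The ordering below (for instance, whether DEVICE_SIGNED outranks INDIVIDUAL_SIGNED) is a policy assumption for illustration, not something the tier list above prescribes.

```python
from enum import IntEnum

class TrustTier(IntEnum):
    # Ordered weakest to strongest so comparisons express thresholds.
    # Where DEVICE_SIGNED sits relative to INDIVIDUAL_SIGNED is a
    # per-programme policy choice.
    GATEWAY_SIGNED = 1
    DEVICE_SIGNED = 2
    INDIVIDUAL_SIGNED = 3
    MULTI_PARTY_SIGNED = 4

def weakest_link(events: list[dict]) -> TrustTier:
    """The regulator's question: weakest tier in a batch's custody chain."""
    return min(TrustTier[e["trust_tier"]] for e in events)

def meets_threshold(events: list[dict], minimum: TrustTier) -> bool:
    """The retailer's question: does every event clear the bar?"""
    return weakest_link(events) >= minimum

chain = [
    {"trust_tier": "DEVICE_SIGNED"},
    {"trust_tier": "GATEWAY_SIGNED"},
    {"trust_tier": "INDIVIDUAL_SIGNED"},
]
```

A verifier interface can then render the weakest link for a batch without walking every event by hand.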

This isn't a compliance gimmick. It's a way to make the system honest about something it would otherwise be incentivised to obscure. The cooperative has a motive to upgrade its story — to make every event look individually signed when only some of them were. The trust-tier field, enforced at the smart contract level, removes that optionality. An event either carries the tier that the on-chain issuer attestation supports, or it doesn't get written.

The Local Queue Is Not a Detail

One of the more costly mistakes I've watched teams make is treating the offline queue as an implementation detail rather than a core piece of the architecture. The queue is where most of the operational failure modes live. If it's wrong, the chain anchoring works perfectly and still produces garbage.

A workable queue has these properties:

  • Encrypted at rest on the capture device using a key that does not leave the device's secure storage. If the tablet is stolen, the queue is not a readable record.
  • Append-only at the device level. The capture app cannot delete queued events. It can mark them as "synced" but not remove them until a retention window expires.
  • Hash-chained. Each queued event references the hash of the previous one. Reordering or silent deletion produces a break that the backend detects on sync.
  • Tolerant of clock skew. Device clocks drift, especially after battery failures. Capture timestamps are recorded as device-local, and the backend annotates a "received at" timestamp on sync. Any divergence beyond a threshold (I use 72 hours as a working default) triggers a reconciliation flag rather than silently overwriting.
  • Resistant to duplicate submission. Each event carries a device-generated UUID that stays stable across retries. The backend idempotency-checks on UUID, not on content hash, so a retried sync doesn't create a ghost event.
  • Bounded in size. A failing sync can leave hundreds of events queued. The device needs to gracefully handle running low on storage, and the sync protocol needs to handle multi-megabyte payloads over GSM-grade connections.
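The hash-chaining and idempotency properties above can be sketched in a few dozen lines. This is a toy model assuming SQLite and SHA-256; it omits encryption-at-rest and secure-element signing, and `CaptureQueue` and its schema are illustrative.

```python
import hashlib
import json
import sqlite3
import uuid

GENESIS = "0" * 64

class CaptureQueue:
    """Append-only, hash-chained local event queue (sketch)."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS queue ("
            " seq INTEGER PRIMARY KEY AUTOINCREMENT,"
            " event_id TEXT UNIQUE,"      # stable UUID: idempotent retries
            " payload TEXT,"
            " prev_hash TEXT,"
            " this_hash TEXT,"
            " synced INTEGER DEFAULT 0)"  # events get marked, never deleted
        )

    def _tip(self) -> str:
        row = self.db.execute(
            "SELECT this_hash FROM queue ORDER BY seq DESC LIMIT 1"
        ).fetchone()
        return row[0] if row else GENESIS

    def append(self, payload: dict) -> str:
        event_id = str(uuid.uuid4())
        prev = self._tip()
        body = json.dumps(payload, sort_keys=True)
        this = hashlib.sha256((prev + event_id + body).encode()).hexdigest()
        self.db.execute(
            "INSERT INTO queue (event_id, payload, prev_hash, this_hash)"
            " VALUES (?, ?, ?, ?)", (event_id, body, prev, this))
        return event_id

    def verify_chain(self) -> bool:
        """Backend-side check on sync: reordering or deletion breaks it."""
        prev = GENESIS
        for event_id, body, prev_hash, this_hash in self.db.execute(
                "SELECT event_id, payload, prev_hash, this_hash"
                " FROM queue ORDER BY seq"):
            expected = hashlib.sha256((prev + event_id + body).encode()).hexdigest()
            if prev_hash != prev or this_hash != expected:
                return False
            prev = this_hash
        return True

q = CaptureQueue()
for kg in (45, 30, 52):
    q.append({"event": "harvest", "weight_kg": kg})
```

Tampering with any row after the fact, or deleting one, makes `verify_chain` fail on the next sync, which is the property the backend actually relies on.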

None of this is novel in distributed systems terms. It's all well-understood offline-first engineering. The point is that it has to be built in at the architecture level — not bolted on after the first field pilot blows up. I've seen too many DPP proofs-of-concept that relied on a "cloud-first with offline fallback" assumption and discovered only in production that their fallback wasn't actually designed.

What Data Leaves the Village

Once events are synced and anchored, a different architectural question arises: what actually lives on-chain, what lives in encrypted IPFS, what never leaves the backend at all.

The temptation in early DPP designs is to put too much on-chain. GPS coordinates, supplier names, weights, photo hashes — all of it in transaction payloads because it "makes the ledger more credible". A European lawyer reviews the system a quarter later and files an eight-page GDPR complaint, and you spend the next quarter retrofitting data minimisation into an architecture that didn't plan for it.

The cleaner design starts from the opposite direction. On-chain carries only commitments and references — event ID, batch ID, actor pubkey hash, role ID, signature, payload hash, previous event hash, timestamp, trust tier. No personal data. No free-text. No coordinates.
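A sketch of what that commitment record might look like, built from the field list above. The helper name `commit_event` and the sample values are hypothetical; the invariant is that no personal data survives into the returned dict.

```python
import hashlib
import json

def commit_event(full_event: dict, prev_event_hash: str, signature: str) -> dict:
    """Build the on-chain record: commitments and references only.

    The full payload (names, GPS, photos) stays off-chain; the chain
    carries just enough to prove the payload existed and who vouched.
    """
    payload_hash = hashlib.sha256(
        json.dumps(full_event["payload"], sort_keys=True).encode()
    ).hexdigest()
    return {
        "event_id": full_event["event_id"],
        "batch_id": full_event["batch_id"],
        "actor_pubkey_hash": hashlib.sha256(
            full_event["actor_pubkey"].encode()).hexdigest(),
        "role_id": full_event["role_id"],
        "signature": signature,
        "payload_hash": payload_hash,
        "prev_event_hash": prev_event_hash,
        "timestamp": full_event["timestamp"],
        "trust_tier": full_event["trust_tier"],
    }

full = {
    "event_id": "evt-001",
    "batch_id": "batch-2025-w14",
    "actor_pubkey": "04deadbeef",   # hypothetical supervisor key
    "role_id": "supervisor",
    "timestamp": 1718000000,
    "trust_tier": "INDIVIDUAL_SIGNED",
    "payload": {"supplier": "S. Abebe", "gps": [9.0123, 38.789], "weight_kg": 45},
}
record = commit_event(full, prev_event_hash="0" * 64, signature="sig-demo")
```

Anyone holding the off-chain payload can re-derive `payload_hash` and check it against the chain; nobody holding only the chain learns the supplier's name.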

Public off-chain data — what a consumer or retailer legitimately needs — lives on plaintext IPFS: product-level traceability summaries, certifications, aggregated claims, addressable by content hash committed on-chain.

Individual-level data — supplier identity, precise GPS, device metadata, lab results with personal annotations — lives in encrypted IPFS, with per-role decryption keys wrapped against each recipient's public key. A regulator with the right role key decrypts their slice. A retailer can verify the cooperative's aggregate grade claim without seeing supplier names. A consumer sees only the product-level summary.

I cover the on-chain/off-chain split mechanics — envelope encryption, role-wrapped keys, selective disclosure, retention enforcement — more fully in the data placement piece in this series.

For geography specifically, coarsening is not optional. Cooperative- or district-level resolution — roughly 10 km × 10 km — is almost always enough for regulatory purposes and doesn't expose individual harvesters to the risks that precise coordinates carry. Rule of thumb: if the verifier's question can be answered at cooperative granularity, that's the only granularity that ever leaves the device.
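A coarsening helper might snap coordinates to the centre of a ~0.1° grid cell (about 11 km of latitude; longitude cells shrink toward the poles, which is acceptable at cooperative granularity) before they leave the device. The grid size and snapping rule here are illustrative defaults, not a standard.

```python
import math

def coarsen(lat: float, lon: float, grid_deg: float = 0.1) -> tuple[float, float]:
    """Snap a coordinate to the centre of its grid cell.

    Every point in the same cell maps to the same output, so two
    harvesters in neighbouring fields become indistinguishable.
    """
    def snap(v: float) -> float:
        return round((math.floor(v / grid_deg) + 0.5) * grid_deg, 6)
    return snap(lat), snap(lon)

# Two nearby collection points collapse to the same cell centre:
a = coarsen(9.01234, 38.78901)
b = coarsen(9.04000, 38.72000)
```

Run the coarsening on-device, before the event enters the local queue, so the precise fix never exists anywhere syncable.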

The Fraud Surface Gets Wider, Not Narrower

A point I think doesn't get raised often enough: moving to a DPP architecture in a low-infrastructure region expands the fraud surface. It doesn't automatically reduce it. A paper-based ledger has its own fraud modes, but they're well understood and slow. A digital system with offline capture introduces new ones that operate at software speed.

The main attack types I've seen modelled or observed:

  • Synthetic supplier injection. A supervisor or gateway operator registers phantom suppliers and attributes real harvests to them, skimming margin. Detection: cross-reference against paper records, require periodic biometric or in-person re-enrolment, monitor supplier transaction patterns.

  • Batch over-attribution. A single real harvest is reported twice, once to each of two overlapping buyers. Detection: on-chain batch IDs with uniqueness constraints, reconciliation against physical weight records at the next custody handoff.

  • Timestamp manipulation. A harvest is recorded backdated to match a certification window. Detection: device clock drift beyond the reconciliation threshold; cross-check against sync timestamps.

  • Gateway replay. A genuine SMS is replayed by an attacker with access to the aggregator logs. Detection: nonce-based replay protection at the gateway, short validity windows, confirmation SMS back to the original sender.

  • Trust-tier laundering. A gateway-signed event is re-submitted through a supervisor's tablet to upgrade its tier. Detection: enforce single-tier writes at the contract level, with any tier upgrade requiring an explicit multi-sig amendment event.

  • Collusive supervisor fraud. The supervisor signs false events in exchange for a cut. Detection: hardest of the lot. Requires out-of-band audit — periodic physical inspections, cooperative-level reconciliation, random weigh-outs at handoff points.

Each capture path has a different fraud profile. Path A (supervisor-mediated) is most vulnerable to collusive supervisor fraud and timestamp manipulation. Path B (SMS gateway) is most vulnerable to gateway replay and synthetic suppliers. Path C (IoT station) is most vulnerable to physical tampering and device-level attacks against the secure element.

The point isn't that these are fatal. The point is that they're architectural design constraints, not post-deployment discoveries. A DPP that wants to hold up under audit needs a fraud model as a first-class deliverable, matched to the capture paths in use.

What This Means for DPP Builders

If you're designing a Digital Product Passport for an ESPR-regulated category that touches the Global South source layer — which is most of them — the first-mile architecture is the part that decides whether your system produces a trustworthy record or a scaled-up digital version of a paper fiction.

A few concrete implications.

First, design for offline as the default. Capture apps, local queues, delayed sync, clock skew, duplicate protection — these are core components, not edge handling. Any architecture that treats connectivity as present-by-default will encounter reality and break.

Second, be honest about signing. If your signer population includes button-phone users, your trust model has to admit it. Gateway signing is a legitimate pattern when it's documented and distinguished from individual custody. It becomes a problem only when it's obscured.

Third, make trust tier a first-class field. Every event should carry its tier on-chain, and every verifier interface should surface the tier prominently. This is the single cheapest intervention that separates an honest system from a marketing artefact.

Fourth, partition data by sensitivity before you go to production. On-chain for commitments. Public IPFS for what consumers and retailers legitimately need. Encrypted IPFS with role-wrapped keys for everything personal or competitively sensitive. Coarsened geography by default. Retention policies documented and enforced.

Fifth, model the fraud surface explicitly, per capture path. Publish it. Treat it as an architectural deliverable alongside the data model and the smart contract spec. The regulators who inherit these systems in the 2030s will ask for it, and the teams that can't produce it will spend months reconstructing what they should have written down on day one.

The wider architectural failure modes — physical identifier rebinding, role-based verification, the data placement problem in detail — are covered in the pillar piece on the first-mile problem.

The hard truth in this design space is that "blockchain for supply chain" projects have accumulated enough reputational damage over the past decade that a new wave of them needs to be qualitatively more honest than the last one. ESPR and its sibling regulations will force the issue. The architectures that survive are the ones that started from the harvester's button phone and the supervisor's offline tablet — not from the ledger. The ones that didn't will produce systems that pass compliance on paper and fail under any adversarial examination.


If you're scoping the capture and trust-tier layer of a DPP system — choosing between supervisor tablets, SMS gateways, and IoT stations, designing the offline queue, or modelling the fraud surface per path — I work with a small number of teams per quarter on first-mile architecture reviews. The fastest way to reach me is through the contact page.


The interesting work in DPP design isn't the chain or the data model. It's the first 200 kilometres — the queue on the supervisor's tablet, the SMS that signs on behalf of someone who can't, the trust tier that tells an auditor what each event is actually worth. Get that part wrong and the rest is decoration. Get it right and the chain becomes what it should always have been: the boring layer at the bottom.