Tags: blockchain architecture · Digital Product Passport · DPP · ESPR · Traceability

Digital Product Passports Break at the First Mile: Why Traceability Fails Where Raw Materials Actually Come From

Digital Product Passports aren't a compliance exercise. They're an architecture audit. And the architecture most vendors are selling will fail in the one place it has to hold — the first mile, where provenance evidence is produced by people and in places the system wasn't designed for.

Andrew Nalichaev · 15 min read

The ESPR conversation is mostly happening at the wrong level. Most of what I read treats it as a compliance problem — which fields to capture, which format to export, which third-party audit trail to produce. The harder problem is architectural. In a meaningful share of the supply chains the regulation will touch, the point where the data actually has to be produced looks nothing like the place the architecture was designed for. And the architecture most vendors are selling will fail there first.

Digital Product Passports aren't a compliance deliverable. They're a multi-year architecture exercise being rushed into pilot phase to meet a timeline that wasn't set with infrastructure reality in mind. The teams treating them as a compliance deliverable may pass the first review cycle and still discover, one audit later, that the underlying architecture cannot defend the provenance claims it exposes. The teams treating them as a systems problem are doing the quieter, slower work of figuring out what provenance evidence actually means in places that don't have the infrastructure to produce it.

The conclusions I've landed on after working through this design space have less to do with blockchains, standards, or identifier schemes, and more to do with where the evidence is produced, how much of it is genuinely trustworthy, and what has to be on-chain versus what absolutely must not be.

What ESPR Actually Asks You to Build

The Ecodesign for Sustainable Products Regulation (EU 2024/1781) entered into force in mid-2024 and is rolling out category by category through delegated acts. Batteries went first under a parallel regulation (EU 2023/1542); the first ESPR working plan covers textiles, furniture, iron, steel, aluminium, chemicals, tyres, detergents, paints, lubricants, and intermediate products. Electronics and additional categories are expected to follow as the delegated-act architecture expands. Cosmetics are better treated today as a DPP-adjacent compliance case rather than a confirmed first-wave implementation bucket. The target the European Commission keeps returning to is "most physical products placed on the EU market by 2030".

Strip the surface language away and a DPP has to carry a specific bundle of things: a unique persistent identifier for each product unit; origin data granular enough to defend the claims being made; attributable intermediate custody events for every processing, transformation and transport step; composition and material characteristics relevant to circularity; compliance data — certificates, test results, regulatory conformities; a machine-readable access layer (QR, NFC, data matrix) readable by consumers, recyclers, regulators and downstream participants; and role-based access so that a consumer sees a product-level summary while a regulator sees evidence-grade detail.
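That bundle can be made concrete as a data-model sketch. This is illustrative only: the field names and types below are my own shorthand, not taken from any delegated act or standard.

```python
from dataclasses import dataclass, field


@dataclass
class OriginRecord:
    """Where the material entered the chain. Granularity must match the claim."""
    region: str          # coarsened geography, never raw GPS
    capture_method: str  # e.g. "supervisor_tablet", "sms_gateway", "iot_node"
    captured_at: str     # ISO 8601; device clock skew recorded separately


@dataclass
class CustodyEvent:
    """One processing, transformation, or transport step."""
    actor_id: str        # resolvable identifier of the responsible party
    event_type: str      # e.g. "transform", "transport", "quality_check"
    occurred_at: str
    signature: bytes     # attests the payload, not the underlying fact


@dataclass
class ProductPassport:
    product_uid: str                     # unique, persistent per unit
    origin: OriginRecord
    custody: list[CustodyEvent] = field(default_factory=list)
    composition: dict[str, float] = field(default_factory=dict)  # material -> share
    certificates: list[str] = field(default_factory=list)        # references, not blobs
```

The point of the sketch is the shape, not the fields: one persistent identifier, one origin record, an append-only custody list, and certificates held as references rather than embedded blobs.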

None of this is individually novel. GS1 covers identifiers and resolution. ISO covers material declarations. The W3C Verifiable Credentials data model covers portable, cryptographically verifiable claims. EPCIS covers event models for traceability. What's new is the obligation to pull all of it together into a single passport per product, kept continuously accurate across the product's lifecycle, with data produced, signed, and stored in a way that survives external scrutiny.

That's not a new data format. It's a supply-chain-wide system.

The Architectural Shape of a DPP

If you strip the regulatory language away and think about what a DPP system actually has to do, it resolves into five layers:

Origin capture. The point where provenance data first enters the system — harvest, extraction, synthesis, initial processing. This is where the evidence starts. Everything downstream is only as trustworthy as this layer.

Custody events. Every transformation, every custody handoff, every quality check that changes what the product is or certifies something about it. These have to be attributable to specific actors at specific times.

Identity and role management. Who is allowed to write which kind of event. Who is allowed to read which kind of data. How these permissions evolve as actors enter and leave the chain.

Storage and anchoring. Where the data lives, how it's committed, how its integrity is verifiable without exposing what shouldn't be exposed.

Verification and access. How a consumer, a regulator, a retailer, or a recycler actually interacts with the passport — what they see, how they know it's real, how their access is controlled.
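To make the custody-events layer concrete: the core property is that every event is attributable and tamper-evident. A minimal way to get tamper evidence is to link each event to its predecessor by hash. This is a sketch under simplifying assumptions: real systems anchor these hashes on a ledger and sign events with actual key material rather than storing bare hashes.

```python
import hashlib
import json


def event_hash(event: dict) -> str:
    """Canonical hash of a custody event, including the previous event's hash."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()


def append_event(chain: list, actor_id: str, event_type: str, occurred_at: str) -> list:
    """Append a custody event linked to its predecessor by hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    event = {"actor_id": actor_id, "event_type": event_type,
             "occurred_at": occurred_at, "prev_hash": prev}
    event["hash"] = event_hash({k: v for k, v in event.items() if k != "hash"})
    return chain + [event]


def verify_chain(chain: list) -> bool:
    """Recompute every link; any edit to a past event breaks the chain."""
    prev = "0" * 64
    for event in chain:
        body = {k: v for k, v in event.items() if k != "hash"}
        if event["prev_hash"] != prev or event_hash(body) != event["hash"]:
            return False
        prev = event["hash"]
    return True
```

Note what this does and does not buy you: editing a past event is detectable, but a fabricated event signed by the right actor verifies cleanly, which is exactly the origin-capture problem the rest of this piece is about.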

Each of these layers has its own failure modes. But they don't fail evenly. One of them — origin capture — is the layer where everything else downstream becomes worthless if it isn't right. And it's the layer where current DPP architectures are the weakest.

[Diagram: The First-Mile Problem — how DPP architectures break at the origin: origin reality, capture patterns, the five architectural layers, role-based verification, and data placement across on-chain, public off-chain, and encrypted off-chain storage]

The First-Mile Problem

Here's the uncomfortable geography. A significant share of ESPR-regulated products contain raw materials sourced from regions where the infrastructural assumptions behind most DPP architectures simply don't hold. Cotton from West Africa and South Asia. Cocoa from Côte d'Ivoire and Ghana. Shea butter from a band of Sahel countries. Natural rubber from Southeast Asia. Rare earths and industrial minerals from sub-Saharan Africa and parts of South America. Pharmaceutical precursors from a mix of East Asia and the Global South. Palm oil, coffee, natural fibres, several specialty chemicals — similar story.

The first signer of data in these supply chains is usually not a legal entity with a registered wallet. It's a cooperative member, a smallholder, a contracted harvester. The capture device is usually not a connected smartphone with a persistent data plan. It's a supervisor's Android tablet that runs offline most of the week. The signer of record is often a supervisor rather than the primary data subject, because the primary data subject uses a button phone that cannot cryptographically sign anything. Connectivity is intermittent. Clock sources drift. Written records live in paper notebooks.

These are not marginal cases. They are the source layer for a substantial fraction of the products ESPR regulates. And they are the layer where the current generation of DPP pilots does most of its hand-waving.

The architectural response to this reality is a separate piece of work in its own right — how capture actually functions when the device is offline three days out of four, what "signing" means when the signer doesn't have a key, how trust tiering becomes a first-class field rather than a design afterthought, and what the fraud surface looks like across different capture paths. The short version: the three capture patterns that actually ship in these environments (supervisor-mediated tablets, SMS/USSD gateways for button-phone signers, and stationary IoT nodes at collection points) each carry a different trust profile, a different cost structure, and a different set of fraud modes. A serious DPP programme uses more than one, allocated by context, and labels the resulting events honestly. I'll cover the engineering detail of that in a dedicated follow-up in this series.

What matters for the pillar argument is this: any DPP architecture that doesn't start from a realistic model of origin capture — one that admits what's actually true in the field rather than what would be convenient — is going to produce passports whose claims can't be defended under examination. The ledger works perfectly. The cryptographic anchor is elegant. The first input to the system is made up.

The Data Placement Problem

The next failure mode kicks in once the evidence is captured. The question is what happens to it.

There's a reflex in blockchain-adjacent DPP design to put "important" data on-chain because on-chain equals credible. GPS coordinates in transaction payloads. Supplier names. Lab results. Photo hashes inline. The reasoning sounds right: if it's on-chain, it can't be tampered with, and the consumer can trust it. The reasoning is wrong in ways that compound.

First, most of that data is either personal data under GDPR or competitively sensitive commercial information under contract. Putting it on-chain creates an immutable public record of material that lawyers and competitors will both eventually read. The first time a European data protection authority actually reviews one of these systems, a lot of teams are going to spend a quarter retrofitting data minimisation into architectures that didn't plan for it.

Second, the security properties people think they're getting from on-chain storage — confidentiality, role-based access, selective disclosure — are not what blockchains provide. Blockchains provide integrity and public verifiability. They do not provide privacy. If you need privacy, you need a different mechanism layered on top, and that mechanism changes the architecture significantly.

The working pattern — which I'll cover in detail in a later piece in this series — partitions data by sensitivity before anything touches the chain. On-chain carries hash commitments, references, and role attestations. No personal data. No free text. Public off-chain storage (typically plaintext IPFS) carries what consumers and retailers legitimately need to see — product-level summaries, aggregated claims, certificates without personally identifying context. Private off-chain storage carries everything else, encrypted with per-role decryption keys so that different verifiers see different slices of the same passport. Geographic data gets coarsened before it leaves the origin device. Retention policies get documented and enforced.

This isn't cryptographic novelty. It's envelope encryption, role-wrapped keys, and hash commitments — well-understood building blocks. What's new is the discipline to apply them before production rather than after the first compliance review. The teams that skip this step are building systems that will be legally difficult to operate in the EU within two years of launch.
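The partition-by-sensitivity step can be sketched in a few lines. The sensitivity map here is illustrative, and in a real programme it comes out of a GDPR and commercial-sensitivity review rather than a hardcoded dict; the encryption of the private tier (omitted here) happens before anything is stored.

```python
import hashlib
import json

# Illustrative sensitivity classes. A real programme derives these from
# a legal review, not from a dict a developer wrote.
SENSITIVITY = {
    "product_uid": "public",
    "composition_summary": "public",
    "certificate_refs": "public",
    "supplier_name": "private",   # commercially sensitive
    "harvest_gps": "private",     # personal / precise location data
    "lab_results": "private",
}


def commit(tier: dict) -> str:
    """Hash commitment over a tier: verifiable on-chain without exposing data."""
    return hashlib.sha256(json.dumps(tier, sort_keys=True).encode()).hexdigest()


def partition(passport: dict) -> dict:
    """Split a passport into on-chain commitments, public, and private tiers."""
    public, private = {}, {}
    for key, value in passport.items():
        (public if SENSITIVITY.get(key) == "public" else private)[key] = value
    # On-chain carries only commitments to each tier, never the data itself.
    return {
        "on_chain": {"public_commit": commit(public),
                     "private_commit": commit(private)},
        "public_off_chain": public,
        "private_off_chain": private,  # encrypt with role-wrapped keys before storage
    }
```

Unknown fields default to private, which is the right failure mode: data minimisation violations come from leaking by default, not from withholding by default.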

The Verification Problem

The third architectural tension is at the consumer-facing end. A DPP is useless if the person reading it can't make sense of what they see or verify that what they see is real.

The physical identifier layer — QR codes, NFC tags, data matrices — looks like an implementation detail but carries meaningful architecture decisions. QR is cheap, widely readable, and easy to counterfeit. NFC is harder to forge, more expensive per unit, and requires NFC-capable hardware most consumers don't think to use. Both have a rebinding problem: once the physical product is separated from its identifier (during recycling, after packaging loss, during resale), the link to the passport can break in ways that are hard to recover from. The way this gets handled depends on the product category and isn't a place to apply a single design across the board.

The verification experience itself is where a lot of DPP pilots reveal how shallow their architecture is. A well-built passport should let a consumer see a high-level trustworthy summary, let a retailer see enough to defend a premium claim, let a recycler see material composition, and let a regulator see evidentiary detail — all from the same underlying data, gated by cryptographic role-based access. Pilots that show "a webpage that displays the same fields to everyone" aren't implementing DPPs; they're implementing product information pages. The distinction will matter when regulators start asking.
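The "same underlying data, different slices" idea can be sketched as a role-to-fields projection. Role names and field names here are illustrative, and the important caveat is in the comment: in production this gating is cryptographic, via per-role decryption keys, not an application-level filter that a curious client can bypass.

```python
# Illustrative role-based projection of one passport. In a real system the
# private fields are ciphertext and each role holds keys only for its slice;
# this filter just shows which slice each role is meant to see.
ROLE_FIELDS = {
    "consumer":  {"product_uid", "origin_region", "composition_summary"},
    "retailer":  {"product_uid", "origin_region", "composition_summary",
                  "certificate_refs"},
    "recycler":  {"product_uid", "composition_detail"},
    "regulator": None,  # None = evidence-grade access to every field
}


def view(passport: dict, role: str) -> dict:
    """Project the passport down to the fields a given role is entitled to."""
    allowed = ROLE_FIELDS[role]
    if allowed is None:
        return dict(passport)
    return {k: v for k, v in passport.items() if k in allowed}
```

A pilot that can't produce four genuinely different projections from one record is the "product information page" case described above.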

The Trust Model Problem

The last tension cuts across everything above. DPP architectures, almost universally, over-promise on what their data represents.

A signed event on a blockchain is a cryptographic attestation that an actor with access to specific key material produced a specific payload at a specific time. It is not a guarantee that the underlying fact is true. If the supervisor who signed the harvest event was paid to falsify it, the signature is perfectly valid and the record is wrong. If the gateway that signed on behalf of a button-phone supplier was accepting fraudulent SMS, the signature is perfectly valid and the record is wrong. If the IoT node at the collection point was physically tampered with before its secure element was provisioned, the signature is perfectly valid and the record is wrong.

None of this is unique to DPPs. It's the general problem of turning off-chain facts into on-chain claims. The response, in well-designed systems, is to make trust tier a first-class field on every event and let verifiers filter by it, rather than treating every signed record as equivalent. Gateway-signed events, individually-signed events, device-signed events, multi-party-signed events — each carries a different evidentiary weight, and the system should say so.
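Trust tiering as a first-class field can be sketched like this. The tier names and their ordering are illustrative, not a standard; what matters is that the tier travels with the event and that verifiers can filter on it.

```python
from enum import IntEnum


class TrustTier(IntEnum):
    """Illustrative ordering by evidentiary weight, weakest first."""
    GATEWAY_SIGNED = 1      # gateway signed on behalf of a button-phone supplier
    SUPERVISOR_SIGNED = 2   # supervisor attested for the primary data subject
    DEVICE_SIGNED = 3       # provisioned IoT node signed autonomously
    INDIVIDUAL_SIGNED = 4   # the data subject signed with their own key
    MULTI_PARTY_SIGNED = 5  # independent parties co-signed the same event


def filter_by_tier(events: list, minimum: TrustTier) -> list:
    """Let a verifier keep only the events that meet its evidentiary bar."""
    return [e for e in events if e["trust_tier"] >= minimum]
```

A consumer-facing view might accept everything; an auditor defending a premium origin claim might insist on DEVICE_SIGNED and above. The architecture's job is to make that choice possible, not to make it for everyone.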

DPPs that don't do this end up producing records whose cryptographic integrity is beyond question and whose real-world truthfulness is unexamined. That's an improvement over paper for routine cases and a liability for everything else.

Why Current Pilots Will Not Survive Contact With Reality

Most of the DPP pilots I've looked at in the past eighteen months share a set of characteristics. They're built by teams with strong backend engineering and limited field experience. They assume smartphone-grade capture devices and reliable connectivity. They push more data on-chain than the law will eventually allow. They treat trust as binary ("signed" or "not signed") rather than tiered. They defer the physical identifier problem to a downstream vendor. They treat offline scenarios as a "future version" deliverable.

None of this is incompetent. Most of these teams are building exactly what their first-tier EU customers asked for — a system that produces the right data fields in the right format for the early version of the compliance conversation. The problem is that the compliance conversation will mature. The first audit will be a checkbox exercise. The second will involve someone with field experience asking pointed questions. The third will involve a data protection authority reading the whole thing as a GDPR matter. By the time the third audit happens, the architecture that cleared the first is the architecture that has to be rebuilt.

The supply-chain-on-blockchain space has been through a version of this before. The 2017-2020 wave produced a lot of systems that demonstrated well on stage and collapsed under operational load. ESPR is going to surface the same failure mode at regulatory speed rather than market speed. The timeline for most of the regulated categories is tight enough that the teams who are currently treating DPPs as a Q4-pilot deliverable are going to find themselves doing architecture rework under compliance pressure, which is the worst time to do it.

What Winning Looks Like Architecturally

The teams I've seen approach this well share a small set of characteristics.

They design for offline as the default, not a fallback. Capture apps, local queues, delayed sync, clock skew handling, duplicate protection — these are core components, not edge cases.
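The offline-first pattern reduces to a few mechanics: client-assigned event IDs make retries idempotent after a partial sync, and recording both the device clock and the server clock at sync time makes skew visible instead of silently wrong. A sketch, with illustrative names:

```python
import time
import uuid


class OfflineQueue:
    """Local capture queue that survives days without connectivity."""

    def __init__(self):
        self.pending = []

    def capture(self, payload: dict) -> dict:
        event = {
            "event_id": str(uuid.uuid4()),  # client-assigned: retries are idempotent
            "device_time": time.time(),     # may be skewed; recorded, not trusted
            "payload": payload,
        }
        self.pending.append(event)
        return event


def sync(queue: OfflineQueue, server_seen: set, server_time: float) -> list:
    """Upload pending events; duplicates drop by event_id, skew is annotated."""
    accepted = []
    for event in queue.pending:
        if event["event_id"] in server_seen:
            continue                        # duplicate from an earlier partial sync
        event["synced_at"] = server_time
        event["clock_skew"] = server_time - event["device_time"]
        server_seen.add(event["event_id"])
        accepted.append(event)
    queue.pending = []
    return accepted
```

In production the queue persists to local storage rather than memory, and the skew annotation feeds back into how much weight the event's timestamp is given downstream.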

They're honest about what signing means in their specific supply chain. If their signer population includes button-phone users, they document that, use gateway signing explicitly, and mark the resulting events with a distinct trust tier. They don't pretend that every event is individually signed.

They partition data by sensitivity before going to production. On-chain for commitments. Public off-chain for what consumers and retailers legitimately need. Encrypted off-chain for personal and competitively sensitive material. Coarsened geography by default. Retention policies enforced.

They treat trust tier as a first-class field. Every event carries it. Every verifier sees it. Auditors can filter by it. The system doesn't pretend that all signatures are equal when they aren't.

They model the fraud surface per capture path, before deployment, and treat the fraud model as an architectural deliverable alongside the data schema.

They pick the blockchain rail late, and for specific reasons — cost of anchoring, tooling maturity, vendor risk, alignment with the other standards their supply chain already uses. They don't start from the chain and work outward.

None of this is glamorous. None of it shows up in a demo video. All of it is the difference between a DPP that holds up under regulatory examination and one that becomes a liability.

The Work That Actually Matters

If you're on a team designing or scoping a Digital Product Passport for an ESPR-regulated category, the most useful thing you can do in the next quarter is spend less time on the chain selection and the data model and more time on what happens at the origin point of your supply chain. Go to the actual cooperative. Use the actual phone that your supplier uses. Run your capture app with the connectivity profile of the actual village. Try to reproduce the fraud modes your own system enables. Most teams who do this come back with a completely different architecture than the one they started with.

The next piece in this series covers the first-mile problem in the kind of engineering detail that actually ships — capture patterns, trust tiering as implementation, and the fraud surface per capture path. A third piece covers the on-chain/off-chain split, envelope encryption with role-wrapped keys, and selective disclosure in the same way. Additional pieces will cover the physical identifier layer and the blockchain rail selection decision.

The common thread is that Digital Product Passports are not a compliance problem. They are a systems engineering problem with a regulatory deadline attached. The teams treating them as the former will produce systems that look right on paper and fail under examination. The teams treating them as the latter are, quietly, doing the work that will still be standing in 2030.


If you're scoping a DPP implementation for an ESPR-regulated category — or reviewing an existing pilot before the next audit — I work with a small number of teams per quarter on architecture reviews, trust-tier design, and on-chain/off-chain data placement. The fastest way to reach me is through the contact page.


The blockchain isn't the interesting part of a DPP. It never was. The interesting part is everything that has to be true before the first hash gets written — and whether the team that designed the system had the discipline to build for that reality instead of wishing it away.