Welcome everyone. Today we'll dive deep into how we integrated the EU Digital Identity Wallet with Keycloak. We'll cover the full stack from credential issuance to verification, and share what we learned building this for a real government customer.
We'll spend most of our time on OID4VP and the Keycloak integration since that's where the real complexity lives. I'll show real code from our implementation throughout.
eIDAS 2.0 mandates that every EU member state must offer a digital identity wallet to its citizens. The ARF, the Architecture and Reference Framework, defines how everything fits together. The two core protocols are OID4VCI for issuance and OID4VP for verification — both built on top of OAuth 2.0.
We support both formats in our implementation, but SD-JWT is the primary format in the German ecosystem. mDOC is mainly used for the mobile driving license. You can explore SD-JWTs interactively at sdjwt.co. For local debugging of both SD-JWTs and mDOCs, check out oid4vc-dev on GitHub.
This is the big picture. The Trust Anchor publishes trust lists so wallets can verify that issuers are legitimate, and verifiers can prove their identity. The Credential Issuer pushes credentials to the wallet via OID4VCI. The Verifier requests presentations from the wallet via OID4VP. Attestation Providers serve different roles — RP Registrars issue registration certificates for verifiers, the Wallet Provider issues wallet instance attestations, and Access CAs issue certificates to issuers and verifiers. The Status Provider hosts revocation lists. Keycloak sits on the verifier side in our setup.
Keycloak already has preview support for OID4VCI. It's not production-ready yet, but it gives us a foundation to build on. The pre-authorized code flow is simpler than the full authorization code flow because the user is already authenticated.
The flow starts with the user authenticating normally. Keycloak then generates a credential offer containing a pre-authorized code. The wallet scans a QR code, exchanges the code for an access token AND a c_nonce. The nonce is used to build a proof-of-possession JWT, which the wallet sends along with the credential request. This proves the wallet controls the holder key that will be bound to the credential.
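The proof-of-possession step can be sketched as follows. This is a minimal illustration, not our actual implementation: it assembles the signing input of an OID4VCI proof JWT, and the final ES256 signature with the holder's private key is omitted because it needs an EC library. The issuer URL and JWK coordinates are placeholders.

```python
import base64
import json
import time

def b64url(data: bytes) -> str:
    """Base64url without padding, as used throughout JOSE."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def build_proof_signing_input(c_nonce: str, issuer_url: str, holder_jwk: dict) -> str:
    """Assemble header.payload of an OID4VCI proof JWT.

    A real wallet signs this with the holder's private key (ES256);
    that step is omitted in this sketch.
    """
    header = {"typ": "openid4vci-proof+jwt", "alg": "ES256", "jwk": holder_jwk}
    payload = {
        "aud": issuer_url,       # the credential issuer
        "iat": int(time.time()),
        "nonce": c_nonce,        # the c_nonce from the token response
    }
    return b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())

# Placeholder holder key; a wallet would use its hardware-backed key here.
signing_input = build_proof_signing_input(
    "n-42",
    "https://issuer.example.com",
    {"kty": "EC", "crv": "P-256", "x": "...", "y": "..."},
)
```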
The credential offer URI uses a custom scheme so the wallet app can intercept it. The pre-authorized code is single-use and short-lived for security.
tx_code adds an extra layer of security — even if someone intercepts a QR code, they can't claim the credential without the PIN. The Authorization Code Flow is the alternative to pre-authorized code — the wallet initiates the process instead of scanning a QR code from an already-authenticated session. How the user authenticates during the authorization step is independent of OID4VCI — for example, the German PID provider uses eID card authentication via NFC, but that's an implementation detail of the authorization server, not part of the OID4VCI spec.
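A credential offer with a pre-authorized code and tx_code can be sketched like this. The field names follow the OID4VCI draft; the issuer URL, configuration id, and code value are made up for illustration.

```python
import json
import urllib.parse

# Illustrative credential offer; URLs, ids, and the code are placeholders.
offer = {
    "credential_issuer": "https://keycloak.example.com/realms/eudi",
    "credential_configuration_ids": ["eu.europa.ec.eudi.pid.1"],
    "grants": {
        "urn:ietf:params:oauth:grant-type:pre-authorized_code": {
            "pre-authorized_code": "adhjhdjajkdkhjhdj",
            # tx_code: the wallet must collect a 4-digit PIN from the user
            "tx_code": {"input_mode": "numeric", "length": 4},
        }
    },
}

# The offer travels to the wallet via a custom URI scheme, typically
# rendered as a QR code so the wallet app can intercept it.
offer_uri = "openid-credential-offer://?" + urllib.parse.urlencode(
    {"credential_offer": json.dumps(offer)}
)
```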
Let's go through each of these steps in detail. This is where most of the complexity lives.
This is the complete OID4VP flow using pass by reference. Instead of embedding the full request in the URL, we only pass a request_uri — a short URL pointing to the signed request object. The wallet fetches the actual request from that URI. This is important because the request object can be quite large — it contains the DCQL query, client metadata with encryption keys, and the verifier's registration certificate. Putting all of that into a QR code or redirect URL would be impractical. The request_uri keeps the initial redirect small and clean.
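The small initial redirect described above can be sketched as follows; the scheme, client id, and request_uri values are illustrative, not taken from our deployment.

```python
import urllib.parse

# Only two parameters travel in the initial redirect; everything heavy
# (DCQL query, client metadata, registration certificate) sits behind
# the request_uri as a signed request object.
params = {
    "client_id": "x509_san_dns:verifier.example.com",
    "request_uri": "https://verifier.example.com/oid4vp/request/7f3a9c",
}
authz_request = "openid4vp://?" + urllib.parse.urlencode(params)
```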
OID4VP is built on OAuth 2.0, but the differences are significant. Instead of an id_token or authorization code, you get a vp_token containing a verifiable presentation. Instead of query parameters, the request is a signed JWT fetched via request_uri, containing a DCQL query describing what credentials are needed. The response doesn't come back as a redirect — it's a direct POST with an encrypted JWE. Client authentication uses X.509 certificates instead of client secrets. And response encryption with ephemeral keys is mandatory, not optional. There's also SIOPv2, where the wallet acts as its own OpenID Provider issuing id_tokens signed with the holder's key. But in the EUDI ecosystem we almost exclusively use vp_token alone — the credential itself carries all the identity information we need.
DCQL is the new way to express what credentials and claims you want from the wallet. This query asks for a PID credential in SD-JWT format, specifically requesting family name, given name, and birthdate. The wallet will only disclose these specific claims thanks to selective disclosure. It's much cleaner than the old presentation_definition format.
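The query just described can be sketched as a DCQL structure like this; the credential id and the vct value are illustrative placeholders, not necessarily what a real PID ecosystem registers.

```python
import json

# DCQL query for a PID in SD-JWT format, disclosing three claims.
dcql_query = {
    "credentials": [
        {
            "id": "pid",                              # local handle for this query
            "format": "dc+sd-jwt",                     # SD-JWT VC format
            "meta": {"vct_values": ["urn:eudi:pid:1"]},  # illustrative vct
            "claims": [
                {"path": ["family_name"]},
                {"path": ["given_name"]},
                {"path": ["birthdate"]},
            ],
        }
    ]
}
```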
The client identifier prefix tells the wallet how to verify the verifier's identity. Pre-registered clients use their client_id without any prefix. HAIP mandates support for both x509_san_dns and x509_hash.
This is the scheme we use in production. The verifier's TLS certificate serves double duty — it proves domain ownership AND authenticates the authorization request.
Registration certificates are the verifier equivalent of trust lists for issuers. They explicitly list which credentials and claims the verifier is allowed to request. The wallet displays this to the user before they consent, and can reject requests that exceed what's registered.
Every authorization request gets its own ephemeral encryption key. The public half goes to the wallet in client_metadata, the private half stays in the server session. HAIP mandates ECDH-ES for key agreement with A256GCM for content encryption. This gives us forward secrecy — even if one key is compromised, other requests are unaffected.
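The client_metadata carrying the ephemeral public key might look roughly like this. Treat it as a sketch: the JWK coordinates are placeholders, and the exact metadata parameter names for response encryption have changed across OID4VP draft versions, so they are illustrative here.

```python
# Sketch of per-request client_metadata. A real verifier generates a
# fresh P-256 key pair for every authorization request and keeps the
# private half in the server-side session.
client_metadata = {
    "jwks": {
        "keys": [
            {
                "kty": "EC",
                "crv": "P-256",
                "use": "enc",
                "kid": "enc-key-1",
                "x": "...",  # ephemeral public key, base64url (placeholder)
                "y": "...",
            }
        ]
    },
    # Parameter name is draft-dependent; the point is that the verifier
    # advertises A256GCM content encryption with ECDH-ES key agreement.
    "encrypted_response_enc_values_supported": ["A256GCM"],
}
```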
Trust lists are the backbone of the EUDI trust model. Published by trust anchors — typically national authorities — they list all authorized issuers with their X.509 certificates. When we receive a credential, we look up the issuer's entity_id in the trust list, extract the certificate, and verify the credential's signature. The trust list itself is signed by the trust anchor.
For SD-JWTs, the verifier takes each received disclosure, computes its SHA-256 hash, and looks it up in the _sd array of the issuer-signed credential. This proves each disclosed claim was part of the original credential without revealing undisclosed claims. For mDOC, the Mobile Security Object contains per-element digests — the verifier recomputes them from the received IssuerSignedItems and checks they match.
Each disclosure is a base64url-encoded JSON array. For object properties it has three elements: salt, claim name, and value. For array elements, only two: salt and value — the position in the array replaces the need for a name. The verifier hashes the raw base64url string and checks the digest against the _sd array or the three-dot entries. The salt ensures identical values produce different digests, preventing correlation.
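The disclosure-hashing step above is small enough to show end to end. This sketch uses a made-up salt and claim value; the mechanics (hash the raw base64url string, compare against _sd) follow the SD-JWT spec with sha-256 as _sd_alg.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def disclosure_digest(disclosure_b64: str) -> str:
    """SHA-256 over the raw base64url disclosure string, itself
    base64url-encoded, as SD-JWT specifies for _sd_alg sha-256."""
    return b64url(hashlib.sha256(disclosure_b64.encode("ascii")).digest())

# Object-property disclosure: [salt, claim name, value]. Salt and value
# are illustrative.
disclosure = b64url(
    json.dumps(["_26bc4LT-ac6q2KI6cBW5es", "family_name", "Mustermann"]).encode()
)

# The issuer computed the same digest at issuance time and put it into
# the _sd array of the signed credential; the verifier just recomputes
# and looks it up.
sd_array_from_credential = [disclosure_digest(disclosure)]
```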
Every credential presentation must prove two things. First, holder binding: the person presenting the credential is actually the person it was issued to. Second, request binding: the response is tied to a specific verifier request, preventing replay attacks. SD-JWT elegantly combines both in a single Key Binding JWT — the signature proves holder binding, while the aud, nonce, and sd_hash fields provide request binding. mDOC takes a different approach and separates these concerns. DeviceAuth is signed with the holder key embedded in the MSO to prove holder binding. The session transcript is a deterministic CBOR structure that both the wallet and verifier compute independently from protocol state to provide request binding.
This is the chain of trust for key binding. The issuer embeds the holder's public key in the credential's cnf claim. When presenting, the wallet signs a Key Binding JWT. The verifier extracts the public key from the credential — trusted via the trust list — and verifies the KB-JWT signature. The sd_hash is computed over the entire SD-JWT presentation string, binding the proof to both the credential and the exact set of disclosed claims. Without this, anyone who copies a credential could present it.
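The sd_hash computation can be sketched as follows. The presentation string here is a placeholder rather than a real SD-JWT, but the hashing rule and the KB-JWT payload shape follow the spec; the aud and nonce values are illustrative.

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# The presentation string is <issuer-signed JWT>~<disclosure 1>~...~<disclosure n>~
# (note the trailing tilde). Placeholder segments stand in for real ones here.
presentation = "eyJhbGciOi...issuer-jwt" + "~" + "WyJzYWx0Ii...disclosure" + "~"

# sd_hash is the base64url SHA-256 of exactly that string, so swapping the
# credential or changing the set of disclosures invalidates the KB-JWT.
sd_hash = b64url(hashlib.sha256(presentation.encode("ascii")).digest())

kb_jwt_payload = {
    "aud": "x509_san_dns:verifier.example.com",  # request binding: this verifier
    "nonce": "n-42",                             # request binding: this request
    "sd_hash": sd_hash,                          # binds credential + disclosures
    "iat": 1700000000,
}
```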
This is the mDOC equivalent of the KB-JWT chain of trust. The issuer embeds the holder's public key in the Mobile Security Object during credential issuance — mDOC calls this the "device key" because the spec assumes the key lives on the device's secure hardware. The MSO itself is signed by the issuer. When presenting, the wallet signs a DeviceAuth structure — a COSE signature over the SessionTranscript and DocType — using the holder's private key. The verifier first validates the issuer's signature on the MSO via the trust list, then extracts the holder key and verifies the DeviceAuth signature. The SessionTranscript is computed deterministically by both sides from protocol state, so it also provides request binding. Unlike SD-JWT where the KB-JWT combines holder binding and request binding, mDOC separates them — DeviceAuth for holder binding, SessionTranscript for request binding.
The Token Status List is a compact byte array distributed as a signed JWT. The verifier fetches the whole list, decompresses it, and reads the bits at the credential's index. Multi-bit entries let you distinguish between revoked and suspended. Because the entire list is fetched, the issuer has no way to know which specific credential is being checked — this is a deliberate privacy feature.
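The status lookup is a few lines of bit arithmetic. This sketch follows my reading of the Token Status List draft (DEFLATE/zlib compression, entries packed starting at the least significant bit of each byte); the example list is fabricated.

```python
import base64
import zlib

def status_at(lst_b64: str, bits: int, index: int) -> int:
    """Read the status value at `index` from a Token Status List.

    `lst_b64` is the base64url `lst` field: a zlib/DEFLATE-compressed
    byte array packing `bits` bits per credential, starting at the
    least significant bit of each byte.
    """
    padded = lst_b64 + "=" * (-len(lst_b64) % 4)
    data = zlib.decompress(base64.urlsafe_b64decode(padded))
    per_byte = 8 // bits
    byte = data[index // per_byte]
    shift = (index % per_byte) * bits
    return (byte >> shift) & ((1 << bits) - 1)

# Tiny fabricated list: 1 bit per entry, credential at index 3 revoked.
raw = bytes([0b00001000])
lst = base64.urlsafe_b64encode(zlib.compress(raw)).rstrip(b"=").decode()
```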
HAIP is critical because the OID4VP spec itself is very flexible — too flexible for a real ecosystem. Without HAIP, every implementer could make different choices about algorithms, response modes, and formats. HAIP narrows this down to a specific set that everyone must support. Think of it as the EU's "this is how we do it" profile. The .jwt response modes wrap everything in a JWE. Verifiers must support both A128GCM and A256GCM; wallets must support at least one, with A256GCM preferred.
Here's the big picture. We implemented the OID4VP verifier as a Keycloak Identity Provider using the broker SPI. This lets Keycloak treat wallet authentication just like any other external identity provider — like Google or GitHub login. The verifier component handles the cryptographic verification and consults trust lists to validate issuer certificates.
The identity provider SPI is the natural extension point in Keycloak for adding new authentication methods. Our implementation supports three different flows to handle different device scenarios.
This is one of the nicest features of our implementation. The builder groups mappers by format and credential type, deduplicates, and produces a clean DCQL query. For mDOC credentials, it also handles the namespace-qualified claim paths. If you need full control, you can still provide an explicit DCQL query.
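The grouping idea can be illustrated with a small sketch. This is a hypothetical simplification, not our actual code: the mapper shape (format, vct, path) and the generated credential ids are invented for the example, and mDOC namespace handling is left out.

```python
from collections import defaultdict

def build_dcql(mappers):
    """Group claim mappers by (format, credential type) into one DCQL
    credential entry each, deduplicating claim paths.

    Each mapper is assumed to look like
    {"format": ..., "vct": ..., "path": [...]} for this sketch.
    """
    grouped = defaultdict(list)
    for m in mappers:
        key = (m["format"], m["vct"])
        path = tuple(m["path"])
        if path not in grouped[key]:          # deduplicate identical claims
            grouped[key].append(path)
    credentials = []
    for i, ((fmt, vct), paths) in enumerate(grouped.items()):
        credentials.append({
            "id": f"cred{i}",
            "format": fmt,
            "meta": {"vct_values": [vct]},
            "claims": [{"path": list(p)} for p in paths],
        })
    return {"credentials": credentials}
```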
The Digital Credentials API is the future of wallet interaction on the web. The browser handles the wallet selection UI, similar to how WebAuthn works for passkeys. We provide a signed request object JWT, and the browser handles the rest. There's also an unsigned variant using protocol "openid4vp-v1-unsigned" where the request data is passed directly instead of as a JWT.
Chrome shipped the DC API in version 141; version 128 only offered it as an origin trial. Safari supports the API but only for mdoc — no OpenID4VP at all. Firefox has a negative standards position. This fragmentation is a real challenge. If your users might be on Safari, you need to implement the ISO mdoc protocol as well. And importantly, the browser doesn't verify trust — you still need your own trust list validation.
For browsers that don't support the DC API, we fall back to redirects on mobile or QR codes for cross-device scenarios. The login page detects browser capabilities and presents the appropriate option. DC API is preferred when available because it's the smoothest UX, but the fallbacks are always there.
This was our biggest challenge. Unlike a passport number or social security number, the German PID has no claim that uniquely identifies a person. Name plus birthdate isn't unique either — think about common names and shared birthdays.
Our solution is to issue a supplementary credential that contains the Keycloak user ID. On first login, the user presents their PID but we can't match it to an account. So we ask them to authenticate with existing credentials, then issue a login credential that gets stored in their wallet alongside the PID.
On subsequent logins, the wallet presents both credentials. We extract the user_id from our custom credential for a direct O(1) lookup. The PID is still verified to ensure the person hasn't changed, but the actual account binding comes from our credential.
Here's where credential_sets become essential. We prefer both credentials — that's the fast path, direct login. But if the user doesn't have our login credential yet, the wallet falls back to presenting just the PID, which triggers the enrollment flow. One query handles both scenarios.
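The two-path query described above can be sketched in DCQL like this. The credential ids and both vct values are placeholders for illustration.

```python
# One DCQL query covering both scenarios: prefer PID plus our login
# credential, fall back to PID alone (which triggers enrollment).
query = {
    "credentials": [
        {
            "id": "pid",
            "format": "dc+sd-jwt",
            "meta": {"vct_values": ["urn:eudi:pid:1"]},  # illustrative vct
            "claims": [
                {"path": ["family_name"]},
                {"path": ["given_name"]},
                {"path": ["birthdate"]},
            ],
        },
        {
            "id": "login",
            "format": "dc+sd-jwt",
            "meta": {"vct_values": ["urn:example:keycloak-login:1"]},  # hypothetical
            "claims": [{"path": ["user_id"]}],
        },
    ],
    "credential_sets": [
        # Options are ordered by preference: the wallet tries the first
        # option it can satisfy, so existing users hit the fast path.
        {"options": [["pid", "login"], ["pid"]], "required": True}
    ],
}
```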
This is the beauty of the credential_sets approach. Lost credential? No problem. The same query naturally falls back to the PID-only option, and the system re-enrolls the user. No special error handling needed.
To answer the title question: yes, it really is a match made in heaven. Every OID4VP requirement maps cleanly to an existing Keycloak SPI. The Identity Provider SPI handles the protocol flow, the mapper SPI handles claim extraction, authentication sessions store ephemeral state, and the Admin UI provides configuration. We didn't have to fight the framework; we extended it naturally.
There is one notable friction point. Keycloak's Identity Provider SPI was designed for browser-based OAuth flows where the response comes back as a redirect carrying session cookies. The wallet is a native app — when it POSTs to the direct_post endpoint, there are no cookies. So we need workarounds to correlate the wallet's response with the right browser session. It's solvable, but it's the one place where the abstraction doesn't fit perfectly.
Let me summarize what we've built. It's a complete OID4VP integration that handles the full verification pipeline — from building the request to verifying the response and establishing a Keycloak session.
We're testing against real wallet implementations in the German sandbox, and it's used in an actual customer project. Our ultimate goal is to make this part of Keycloak itself — we're planning to open source our implementation and work with the community.
Thank you for your attention. You can find the slides via the QR code. The Keycloak OID4VP extension will be available on GitHub — we're happy to take questions about the technical details of the integration or the German PID workaround.