We recently announced support for Passkeys on your Report URI account, and everyone should go and enable Passkeys for the amazing security benefits they offer. As this was a new implementation of an authentication technology, we wanted to be sure that everything was as secure as it should be for our customers' accounts, so we brought in an external party to test our implementation.

Our annual penetration tests
Regular readers will know that Report URI already has an annual penetration test and we now have 6 years' worth of reports publicly available for anyone to review and see what issues were found, and how we handled them. That annual review is there to make sure that our internal processes designed to keep our product secure are working and that nothing has slipped through. Our next penetration test is due in Nov/Dec 2026 to stick with our annual schedule, and whilst our application is constantly changing and evolving, Passkeys felt like a big enough change that it was worth getting it tested immediately as it touched our critical authentication flow. If you'd like a brief introduction to Passkeys and how they work, you can refer to our launch blog post which has some high level details and diagrams.

Engaging with Pentest
Normally, when we engage with our penetration testing company, Pentest Ltd., we have effectively no limitations on the scope of the test. This time it was a little different as we only wanted one specific part of our application tested, and after discussions, we came up with the following scope:
Perform a targeted external security assessment of the Report URI Passkey (WebAuthn-based 2FA) implementation.
With the specific aim of assessing:
• Security of passkey enrolment and authentication flows
• Interaction between passkeys and existing authentication factors (password and TOTP)
• Potential authentication bypass and downgrade scenarios
• Protection of credential management functions (add/remove passkey)
• Security of recovery mechanisms (recovery codes and support-led reset process)
• Session handling and authentication state transitions
• Validation of WebAuthn integration controls (challenge handling, RP/origin enforcement, replay protections)
We wanted to be really sure that our implementation was solid, and having someone external come in to test that felt like a worthwhile approach.
The findings
For those who'd like the TL;DR: Pentest found a few problems with our implementation, but nothing that could result in unauthorised access. The worst case scenario in any of the findings was that you could add a Passkey to your account that you then couldn't remove. That's a pretty awesome result if you ask me 😎
Alongside the findings from Pentest, we also did our own security testing and found a couple of minor bugs that we addressed too, but nothing that you'd ever see on the outside. We'll start off by going through the Pentest findings and looking at the solutions that we've implemented for them.
Empty Credential ID
Each Passkey generated by an Authenticator is given an ID that the Authenticator and the website can use to identify it. The specification says that this Credential ID, as it is known, should be "A probabilistically-unique byte sequence identifying a public key credential source and its authentication assertions". Being able to set an empty ID value definitely doesn't meet that requirement, and adding a Passkey with no ID also made it impossible to then delete or rename that Passkey in your account because the ID is what we use to interact with it.
The W3C WebAuthn Level 3 spec defines credential ID length constraints, and whilst the spec is not final yet, we decided to target the new version rather than Level 2.
```php
$credentialIdLen = strlen($data->credentialId);
if ($credentialIdLen < 16 || $credentialIdLen > 1023) {
    $this->session->unset_userdata(SessionKeys::WEBAUTHN_REGISTRATION_CHALLENGE);
    $this->session->unset_userdata(SessionKeys::WEBAUTHN_REGISTRATION_CHALLENGE_TIME);
    return '{"ok": false, "error": "Invalid credential ID length."}';
}
```

With the length checks in place and further testing completed, this issue is now fully resolved and I'm glad to say it didn't pose any security risk.
Overlong Credential ID
Whilst this issue is resolved by the fix above, there was one additional thing worth clarifying here that was specific to this finding. Without the upper bound on the Credential ID, you could register a Passkey with a huge ID value. The tester noted that if you were to do this, depending on the size of the ID value, you could get to a point where you couldn't register any more Passkeys, despite not having registered the maximum allowed amount of Passkeys. This behaviour was caused by our size limit on the amount of data allowed to be stored in a Property on a Table Storage Entity (we use Microsoft Azure Storage). Adding a new Passkey would have taken the Property over the allowed limit so our handling code did the right thing and rejected the change, failing the Passkey registration. The worst case scenario here is that if you registered a Passkey with a huge Credential ID, you may only be able to register a single Passkey on your account. This issue is resolved by the fix detailed above which also introduced an upper size limit and posed no security risk.
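We haven't detailed our exact storage layout here, but the failure mode is easy to model. Below is a minimal Python sketch, assuming passkeys are serialised into a single JSON property subject to a roughly 64 KiB per-property cap like Azure Table Storage's; the field names and sizes are illustrative, not our real schema:

```python
import json

PROPERTY_LIMIT = 64 * 1024  # approximate per-property cap, as in Azure Table Storage

def can_store(passkeys: list, new_passkey: dict) -> bool:
    # Reject the change if serialising the updated list would
    # exceed the property size limit (hypothetical layout).
    updated = json.dumps(passkeys + [new_passkey])
    return len(updated.encode("utf-8")) <= PROPERTY_LIMIT

normal = {"id": "A" * 86, "pubkey": "B" * 120}     # plausible sizes
huge = {"id": "A" * 65300, "pubkey": "B" * 120}    # oversized credential ID

assert can_store([], huge) is True        # the huge passkey fits on its own...
assert can_store([huge], normal) is False # ...but leaves no room for a second one
```

An oversized credential ID consumes almost the entire property budget, so the account ends up capped at a single Passkey even though the per-account Passkey limit was never reached, which matches the behaviour the tester observed.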
Duplicate Credential ID
Going back to the first issue, the spec states that a Credential ID should be "A probabilistically-unique byte sequence", so allowing registration of duplicate IDs would not meet that requirement. There are also two separate concerns here, so we'll break them apart.
The first is that the user could register a Passkey on their account with the same Credential ID as another Passkey already registered on their account. The tester noted that this did not result in overwriting Passkeys (as we index on another value) but it did then leave them with two Passkeys registered with the same Credential ID. We already make use of the excludeCredentials feature, where our service provides back the IDs of the user's existing Credential IDs and the Authenticator can then avoid using duplicates. Of course, the authenticator may not do that, so an additional check is required on the way back in.
```php
foreach ($passkeys as $passkey) {
    if ($passkey['id'] === $credentialIdB64) {
        $this->session->unset_userdata(SessionKeys::WEBAUTHN_REGISTRATION_CHALLENGE);
        $this->session->unset_userdata(SessionKeys::WEBAUTHN_REGISTRATION_CHALLENGE_TIME);
        return '{"ok": false, "error": "This passkey is already registered."}';
    }
}
```

The second issue is that a user could register a Passkey with the same Credential ID as a Passkey that another user has registered. Because we're only using Passkeys as a form of 2FA, by the time we get to looking up a Credential ID, we're only looking at those bound to the correct user. The Credential IDs should also be "probabilistically-unique" so this is unlikely to ever happen by accident. The spec says that we SHOULD prevent this, not that we MUST, but querying over our entire user table and extracting Credential IDs adds a lot of overhead we'd like to avoid. Reading the spec, I also feel that the concerns raised are more defensive techniques for if your application does things a bit wonky rather than being an actual problem, but drop your comments below if I'm missing something.
As it stands, we haven't required that a Credential ID be globally unique across the service, but I'm open to input, and happy to say that this issue also posed no security risk.
Origin Mismatch
This one is another bug that doesn't have an impact, but it still shouldn't be able to happen. The bug itself resides in the library we're using for Passkeys and we have up-streamed a fix which is the same as the patch that we're applying locally.
```diff
--- a/src/WebAuthn.php
+++ b/src/WebAuthn.php
@@ -636,9 +636,12 @@
     $host = \parse_url($origin, PHP_URL_HOST);
     $host = \trim($host, '.');
-    // The RP ID must be equal to the origin's effective domain, or a registrable
-    // domain suffix of the origin's effective domain.
-    return \preg_match('/' . \preg_quote($this->_rpId) . '$/i', $host) === 1;
+    // The RP ID must be equal to the origin's effective domain, or the
+    // origin's host must be a subdomain of the RP ID (i.e. preceded by a dot).
+    if (\strcasecmp($host, $this->_rpId) === 0) {
+        return true;
+    }
+    return \str_ends_with(\strtolower($host), '.' . \strtolower($this->_rpId));
 }
 /**
```

The bug is that we're setting our rpId as report-uri.com and the library is checking that the host ends with report-uri.com, which is not a strict enough check. The issue with that is not-report-uri.com and evil-report-uri.com both end with report-uri.com. The check has been made stricter so that we now look first for an exact match on report-uri.com, or we check that the host ends with .report-uri.com, the leading dot providing the domain boundary that makes the check appropriate.
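To make the boundary issue concrete, here's a small Python sketch of the loose suffix check versus the corrected one; the helper names are mine, purely for illustration:

```python
def suffix_match(host: str, rp_id: str) -> bool:
    # The old, too-loose check: any host ending in the RP ID passes.
    return host.lower().endswith(rp_id.lower())

def strict_match(host: str, rp_id: str) -> bool:
    # The corrected check: exact match, or a true subdomain
    # separated by a dot boundary.
    host, rp_id = host.lower(), rp_id.lower()
    return host == rp_id or host.endswith("." + rp_id)

# The loose check wrongly accepts a lookalike domain...
assert suffix_match("not-report-uri.com", "report-uri.com") is True
# ...while the strict check rejects it but still allows legitimate hosts:
assert strict_match("not-report-uri.com", "report-uri.com") is False
assert strict_match("report-uri.com", "report-uri.com") is True
assert strict_match("sub.report-uri.com", "report-uri.com") is True
```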
I've done some pretty lengthy mental gymnastics to try and come up with a scenario where this might be exploitable, but I'm struggling! Maybe if someone registered not-report-uri.com and lured a Report URI user there, managed to get the user to initiate a Passkey registration, and we had no CSRF protection on our registration endpoint, and we had no CORS protection on our registration endpoint, then maybe I can see a way for this to be used in an attack... In reality, I'm happy to say that this has no security risk and it has been fixed as a matter of correctness rather than security.
Cross-Origin Validation Failure
Given the existing controls we have in place, this is a non-issue, but even without those existing controls, this is more of a spec compliance question than a risk. Imagine evil-cyber-hacker.com embeds report-uri.com in an iframe; the user could interact with that iframe and even register a Passkey on their account. In this scenario, the browser will pass crossOrigin: true in the ClientDataJSON to indicate that the registration was initiated inside a cross-origin iframe. Whilst this might sound like something really bad could happen, the Same-Origin Policy is going to give us all of the protection we need here. The attacker page can't read into the iframe, it can't access any of the Passkey data and it can't conduct any actions on behalf of the user. It is true that if crossOrigin: true is set and you weren't expecting that, you should reject the process, so we've patched to do just that and also up-streamed the change to the library to see if they'd consider a patch there.
```diff
--- a/src/WebAuthn.php
+++ b/src/WebAuthn.php
@@ -358,6 +358,11 @@
         throw new WebAuthnException('invalid origin', WebAuthnException::INVALID_ORIGIN);
     }
+    // Reject cross-origin requests (proposed Level 3 spec §7.1 Step 10).
+    if (\property_exists($clientData, 'crossOrigin') && $clientData->crossOrigin === true) {
+        throw new WebAuthnException('cross-origin request not allowed', WebAuthnException::INVALID_ORIGIN);
+    }
+
     // Attestation
     $attestationObject = new Attestation\AttestationObject($attestationObject, $this->_formats);
@@ -476,6 +481,11 @@
         throw new WebAuthnException('invalid origin', WebAuthnException::INVALID_ORIGIN);
     }
+    // Reject cross-origin requests (proposed Level 3 spec §7.2 Step 13).
+    if (\property_exists($clientData, 'crossOrigin') && $clientData->crossOrigin === true) {
+        throw new WebAuthnException('cross-origin request not allowed', WebAuthnException::INVALID_ORIGIN);
+    }
+
     // 11. Verify that the rpIdHash in authData is the SHA-256 hash of the RP ID expected by the Relying Party.
     if ($authenticatorObj->getRpIdHash() !== $this->_rpIdHash) {
         throw new WebAuthnException('invalid rpId hash', WebAuthnException::INVALID_RELYING_PARTY);
```

User Handle Not Validated
The userHandle in Passkeys is the internal user ID that the application can use to uniquely identify the user. This can be used when the user is trying to log in without having provided a username or email first, so the application can find the user using this ID. We do have a unique userId value that we use to identify all users on the Report URI platform, and we were setting it in the Passkeys flow, but we didn't need to rely on it for anything. As our users will complete the email/password authentication step first, any Credential ID we were provided with could already be directly looked up on the correct userId. That said, the spec requires that if the authenticator provides a userHandle then the application must verify it, and we weren't verifying it because we didn't use it at all.
```php
$userHandleRaw = $postJson['userHandle'] ?? '';
if ($userHandleRaw !== '') {
    $userHandle = base64_decode($userHandleRaw, true);
    if ($userHandle === false || $userHandle === '') {
        $this->session->unset_userdata(SessionKeys::WEBAUTHN_LOGIN_CHALLENGE);
        $this->session->unset_userdata(SessionKeys::WEBAUTHN_LOGIN_CHALLENGE_TIME);
        return '{"ok": false, "error": "User handle mismatch"}';
    }
    $expectedUserHandle = hash('sha256', $this->userEntity->getUserId($userEntity), true);
    if (!hash_equals($expectedUserHandle, $userHandle)) {
        $this->session->unset_userdata(SessionKeys::WEBAUTHN_LOGIN_CHALLENGE);
        $this->session->unset_userdata(SessionKeys::WEBAUTHN_LOGIN_CHALLENGE_TIME);
        return '{"ok": false, "error": "User handle mismatch"}';
    }
}
```

Now, if the userHandle value is set, we will verify that it is the correct one, and it's another issue chalked up with no security risk.
Invalid Attestation Statement
In Passkeys, the term Attestation refers to some kind of proof about which Authenticator device is being used. A company might use Attestation to limit staff to only be able to use a certain type or brand of Authenticator, like a YubiKey, for example. We have no such restrictions on what type of Authenticator can be used and permit the use of any Authenticator, including password managers. This means that we use none as our Attestation format and don't expect the client to send us any Attestation statements. What the spec strictly requires, though, is that we check that nothing was sent, and reject the process if something was sent. This required another patch to our library, which had previously disregarded the Attestation Statement attStmt altogether, even if it had a value, because we do not use it for anything.
```diff
--- a/src/Attestation/Format/None.php
+++ b/src/Attestation/Format/None.php
@@ -24,7 +24,14 @@
     /**
      * @param string $clientDataHash
      */
     public function validateAttestation($clientDataHash) {
+        // §8.7 None Attestation Statement Format:
+        // "If attStmt is a properly formed attestation statement,
+        // verify that attStmt is an empty CBOR map."
+        if (\count($this->_attestationObject['attStmt']) > 0) {
+            throw new WebAuthnException('invalid none attestation: attStmt must be empty', WebAuthnException::INVALID_DATA);
+        }
+
         return true;
     }
```

With this patch, the library will now check that the attStmt is empty when the Attestation Format is none, which brings us into alignment with the spec with no security risk.
Invalid Backup Flags
When a user is registering or using a Passkey, the Authenticator can tell us two things about how it handles backups of the Passkey. It can tell us:
- Backup Eligibility - set if the credential is a multi-device credential, meaning it's designed to be synced across devices (e.g. an iCloud Keychain, Google Password Manager, 1Password, etc...)
- Backup State - set if the credential is currently backed up, i.e. it has actually been synced to the cloud at the time of the ceremony.
Looking at those two potential flags that can be set, you can then derive a set of valid states based on the relationship between them.
| BE | BS | Meaning |
|---|---|---|
| 0 | 0 | Not backup eligible, not backed up |
| 1 | 0 | Backup eligible, not yet backed up |
| 1 | 1 | Backup eligible, currently backed up |
| 0 | 1 | Invalid — cannot be backed up without being eligible |
The final row in that table indicates an invalid state because a Passkey can't have been backed up if the Passkey is not eligible to be backed up. The current specification does not require that we reject this, but the future version of the spec does. We will now check for this invalid state and have up-streamed a patch to the library.
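The table above maps directly onto two bits of the authenticator data flags byte (BE is bit 3, BS is bit 4). A minimal Python sketch of the validity check:

```python
# Bit positions in the WebAuthn authenticator data flags byte.
BE = 1 << 3  # Backup Eligibility
BS = 1 << 4  # Backup State

def valid_backup_flags(flags: int) -> bool:
    # BS set without BE is the invalid fourth row of the table above.
    be = bool(flags & BE)
    bs = bool(flags & BS)
    return not (bs and not be)

assert valid_backup_flags(0x00) is True   # BE=0, BS=0: not eligible, not backed up
assert valid_backup_flags(0x08) is True   # BE=1, BS=0: eligible, not yet backed up
assert valid_backup_flags(0x18) is True   # BE=1, BS=1: eligible and backed up
assert valid_backup_flags(0x10) is False  # BE=0, BS=1: invalid state
```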
```diff
diff --git a/src/Attestation/AuthenticatorData.php b/src/Attestation/AuthenticatorData.php
index 83462b1..a73d195 100644
--- a/src/Attestation/AuthenticatorData.php
+++ b/src/Attestation/AuthenticatorData.php
@@ -281,6 +281,12 @@ class AuthenticatorData {
         $flags->isBackup = $flags->bit_4;
         $flags->attestedDataIncluded = $flags->bit_6;
         $flags->extensionDataIncluded = $flags->bit_7;
+
+        // Backup State (BS) requires Backup Eligible (BE) per spec.
+        if ($flags->isBackup && !$flags->isBackupEligible) {
+            throw new WebAuthnException('invalid backup flags: BS without BE', WebAuthnException::INVALID_DATA);
+        }
+
         return $flags;
     }
```

This issue also presented no security risk and is now resolved.
Token Binding Accepted
Token Binding is a fairly old technology that is now deprecated; Chrome removed its code for it in 2018. A client can indicate it was using Token Binding in a Passkey ceremony by setting the tokenBinding.status value to present. Given that our application does not support Token Binding, most other applications probably don't either, and clients have deprecated it, this shouldn't really be possible. Our application would allow a client to indicate it was using Token Binding, even though it can't, and we would ignore these values as we don't use them. The spec does not allow you to ignore these values, so we had to handle them. We're now correctly validating these fields if they are present and we have up-streamed a patch.
```diff
diff --git a/src/WebAuthn.php b/src/WebAuthn.php
index d6b78e7..f81882f 100644
--- a/src/WebAuthn.php
+++ b/src/WebAuthn.php
@@ -362,6 +362,12 @@ class WebAuthn {
         throw new WebAuthnException('cross-origin request not allowed', WebAuthnException::INVALID_ORIGIN);
     }
+    // 6. Verify tokenBinding status matches the TLS connection. We do not
+    // support Token Binding, so reject status "present" (Level 2 §7.1 Step 6).
+    if (\property_exists($clientData, 'tokenBinding') && \is_object($clientData->tokenBinding) && \property_exists($clientData->tokenBinding, 'status') && $clientData->tokenBinding->status === 'present') {
+        throw new WebAuthnException('token binding not supported', WebAuthnException::INVALID_DATA);
+    }
+
     // Attestation
     $attestationObject = new Attestation\AttestationObject($attestationObject, $this->_formats);
@@ -485,6 +491,12 @@ class WebAuthn {
         throw new WebAuthnException('cross-origin request not allowed', WebAuthnException::INVALID_ORIGIN);
     }
+    // 10. Verify tokenBinding status matches the TLS connection. We do not
+    // support Token Binding, so reject status "present" (Level 2 §7.2 Step 10).
+    if (\property_exists($clientData, 'tokenBinding') && \is_object($clientData->tokenBinding) && \property_exists($clientData->tokenBinding, 'status') && $clientData->tokenBinding->status === 'present') {
+        throw new WebAuthnException('token binding not supported', WebAuthnException::INVALID_DATA);
+    }
+
     // 11. Verify that the rpIdHash in authData is the SHA-256 hash of the RP ID expected by the Relying Party.
     if ($authenticatorObj->getRpIdHash() !== $this->_rpIdHash) {
         throw new WebAuthnException('invalid rpId hash', WebAuthnException::INVALID_RELYING_PARTY);
```

This was the last of the issues found in the penetration test and I'm happy to say that this one also presented no security risk and is resolved.
We're good to go!
When making such a big change to how users log in to our service, it makes me feel a lot more comfortable that we've had it thoroughly reviewed by an external party. Whilst there were some issues found here, I'm happy that none of them presented any real risk to our customers, and they've all been fixed anyway. As always, we've published the full report for our penetration test below so you can take a look at the unredacted findings!