Add parsing and verify #10
Conversation
Looking like a nice addition so far.
I've left a few comments on this regarding permissiveness around the content-digest header.
I had originally left content-digest calculations out because I felt this was up to the consumer of the library to decide/implement if they wanted to include it in their requests or not.
I think we need to clearly define the remit of this library. Is it there to do anything other than simply sign requests and/or verify signatures on requests/responses? Because verifying the digests isn't actually part of the signature spec and starts to blur the boundaries of what the library is responsible for.
At the moment my view is that we shouldn't give the digest or content-digest headers any special treatment, and that it's for the consumer to add and verify these headers.
Part of my reasoning is that calculating the body digest may be more complicated than simply taking the body in, as we do now, and spitting out a digest. What if the body is actually a stream and the consumer wants to add the digest header as a trailer once the body has been streamed out? We'll also run into performance and memory-exhaustion issues in some circumstances if we force consumers to load the entire request body into memory before sending it out.
A new, rather trivial, library could be created for calculating request body hashes and verifying request body digests, IMO. (In fact, a quick search on npm indicates there already are some.)
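A standalone helper along these lines could address the streaming concern directly. This is a sketch only, using Node's built-in crypto; the function name is illustrative and not from any existing npm package:

```typescript
import { createHash } from 'node:crypto';
import { Readable } from 'node:stream';

// Compute a Content-Digest header value (a structured-field byte sequence,
// e.g. `sha-256=:...:`) from a stream, hashing incrementally so the whole
// body never has to be buffered in memory. Name and signature are
// illustrative, not part of any real library API.
async function contentDigestFromStream(
  body: Readable,
  algorithm: 'sha-256' | 'sha-512' = 'sha-256',
): Promise<string> {
  const hash = createHash(algorithm.replace('-', ''));
  for await (const chunk of body) {
    hash.update(chunk);
  }
  return `${algorithm}=:${hash.digest('base64')}:`;
}
```

A consumer could compute this while streaming the body out and attach the result as a trailer, which is exactly the case that buffering-based designs rule out.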
src/digest/index.ts (outdated excerpt):

```typescript
export function verifyContentDigest(request: RequestLike) {
  const digestHeaderString = extractHeader(request, 'content-digest')
```
Is it worth having a fallback to the Digest header? I know it's not strictly in the spec, but it is documented on MDN and is also in the old cavage spec.
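The suggested fallback could look something like this sketch. The helper name and request shape are illustrative (the PR's own `extractHeader` helper is not reproduced here):

```typescript
// Prefer the standard Content-Digest header, but fall back to the legacy
// Digest header (documented on MDN and used by the old cavage-style spec).
// Names here are illustrative, not the library's actual exports.
type RequestLike = { headers: Record<string, string | string[] | undefined> };

function extractDigestHeader(request: RequestLike): string | undefined {
  const pick = (name: string): string | undefined => {
    const value = request.headers[name];
    return Array.isArray(value) ? value.join(', ') : value;
  };
  return pick('content-digest') ?? pick('digest');
}
```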
src/digest/index.ts (outdated excerpt):

```typescript
return digests.map(async (digestHeader) => {
  const [key, value] = digestHeader.split('=')
  const algo = key.trim().replace('-', '')
  if (!value.startsWith(':') || !value.endsWith(':')) {
```
Can we allow the `:` "quotes" to be optional? This would allow us to also parse the old-style Digest header.
This parsing is following https://datatracker.ietf.org/doc/html/rfc8941#section-4.2.7 (although it could use some work to follow it strictly).
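For reference, here is a stricter (but still simplified) hand-rolled sketch of the RFC 8941 dictionary parsing being discussed. It is not the library's actual code: it handles only byte-sequence members and ignores parameters. Note it splits on the *first* `=` rather than using `split('=')`, because base64 padding can contain `=` characters:

```typescript
// Simplified RFC 8941 dictionary parser covering only what Content-Digest
// needs: comma-separated `key=:base64:` members (byte sequences). A full
// RFC 8941 parser also handles parameters and other member types.
function parseDigestDictionary(header: string): Map<string, Buffer> {
  const result = new Map<string, Buffer>();
  for (const member of header.split(',')) {
    const eq = member.indexOf('='); // first '=' only: base64 may end in '='
    if (eq < 0) throw new Error('Error parsing digest value');
    const key = member.slice(0, eq).trim();
    const value = member.slice(eq + 1).trim();
    if (!value.startsWith(':') || !value.endsWith(':')) {
      throw new Error('Error parsing digest value');
    }
    result.set(key, Buffer.from(value.slice(1, -1), 'base64'));
  }
  return result;
}
```

The first-`=` split matters: naive `split('=')` on `sha-256=:aGk=:` would break the value apart at the padding character.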
src/digest/index.ts
Outdated
if (!value.startsWith(':') || !value.endsWith(':')) { | ||
throw new Error('Error parsing digest value') | ||
} | ||
if(algo !== 'sha256' && algo !== 'sha512') { |
As with the previous comment, these are not the actual algo names in the spec; they are sha-256 and sha-512. We should also aim to support more than just these two, at least the easily available built-in ones, like sha (SHA-1) and md5.

See the registry: https://www.iana.org/assignments/http-dig-alg/http-dig-alg.xhtml
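A mapping between the IANA registry names and the identifiers Node's crypto module expects might look like this sketch (the table and function names are illustrative):

```typescript
import { createHash } from 'node:crypto';

// Map names from the IANA "Hash Algorithms for HTTP Digest Fields"
// registry to Node crypto identifiers. Illustrative, not exhaustive.
const DIGEST_ALGORITHMS: Record<string, string> = {
  'sha-256': 'sha256',
  'sha-512': 'sha512',
  sha: 'sha1', // legacy
  md5: 'md5', // legacy and insecure; listed only for interop
};

function hashBody(algo: string, body: Buffer): Buffer {
  const nodeName = DIGEST_ALGORITHMS[algo.toLowerCase()];
  if (!nodeName) throw new Error(`Unsupported digest algorithm: ${algo}`);
  return createHash(nodeName).update(body).digest();
}
```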
We should also not throw if we encounter an unsupported digest algo. If there are many, we should ensure that the ones we can verify are valid, but just ignore the ones we can't.
If no algorithm is supported we should just move on (as per the spec, these headers can be ignored).
From the spec:
A recipient MAY ignore any or all digests. This allows the recipient to choose which hashing algorithm(s) to use for validation instead of verifying every digest.
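In code, the behaviour being asked for could be sketched like this (names illustrative; only the mismatch of a *supported* digest fails verification):

```typescript
import { createHash } from 'node:crypto';

// Algorithms this verifier understands, mapped to Node crypto names.
const SUPPORTED: Record<string, string> = {
  'sha-256': 'sha256',
  'sha-512': 'sha512',
};

// Verify only the digests we understand and silently skip the rest, as
// the spec permits ("A recipient MAY ignore any or all digests").
// Returns false only when a supported digest does not match the body.
function verifyDigests(digests: Map<string, Buffer>, body: Buffer): boolean {
  for (const [algo, expected] of digests) {
    const nodeName = SUPPORTED[algo];
    if (!nodeName) continue; // unsupported algorithm: ignore, don't throw
    const actual = createHash(nodeName).update(body).digest();
    if (!actual.equals(expected)) return false;
  }
  return true; // no supported digests present is also acceptable per the spec
}
```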
As with the previous comment, these are not the actual algo names in the spec; they are sha-256 and sha-512.

Added a function to convert them.
@adrianhopebailie - I'm happy to work closely and quickly with you to help get things merged in and have a good working library with your contributions. The main issue is going to be vision/scope, I think. I think we can get things working nicely if we are happy to agree that the scope of this library is very much limited to signing requests and verifying signatures on requests/responses, and any other verification of requests is done elsewhere. I would encourage you to think of content-digest verification a bit like verifying basic auth credentials: this lib could do it, but it really isn't in scope. What we can do is verify that what was provided in the Authorization header was signed properly, but not that the content of the header itself is valid.
@dhensby thanks for your willingness to work together on this. From our side we're likely to be working fast and breaking things for a few more weeks/months, so perhaps the best thing to do is for you to simply cherry-pick the code from our fork that you want to pull into your library. My ideal would be that we stay close enough to each other that we can switch over to your library once the changes we need have landed in a release. As an immediate example, we have no intention of supporting Cavage or the older Digest header, so we won't be spending time on making both options possible. But I can see why you'd want that in your library, so if you want to add it then you shouldn't wait for us to do it. I'm loving the feedback though. It is helping me refine my work and ensure it follows the specs, so please keep it coming. I'll be sure to review all your comments and make changes on our side that make sense.
@dhensby - I pulled in a dependency, 'structured-headers', which is an implementation of RFC 8941. Currently waiting on some changes to that, so it's a local linked dependency for now, but it does mean this lib is future-proofed to start supporting things like request/response binding, which would require parameterised components. As mentioned in the comments, that will require the API to possibly change in future, as the current API assumes components are simply an array of strings.
@adrianhopebailie thanks for this and for finding that library (including your contribution to it). I've been thinking on this over the last week and I'm keen to get improvements into the library, especially implementing the signature verification work. I'm going to try to find some time over the next few days to look over this work so far and also to start thinking about how to build on this approach.

Firstly, I've crystallised my opinion that this library should not concern itself with digest headers: trying to parse them, verify them, or even produce them. That is a concern of the consumer of this library (how and whether they want to include the header, and then whether it forms part of the signature or not). For verification of signatures (as I touched on before), the validity of the actual digest isn't important when simply verifying a signature. Again, it's for the consumer of this library to decide whether the digest matches the body or not.

Secondly, I would like the library to do a bit more of the work around verification so it's not all up to the consumer to implement in a callback, but I also want the consumer to have the ability to implement this themselves too. See #16 for the very minor start to this work - simply an indication that this library will not concern itself with the digest headers.

I will look to write up an issue indicating my thoughts around verifying signatures and the API around that. I expect all these changes to culminate in breaking changes for the library, but hopefully ones that make sense and are not any more intrusive than they need to be.
@dhensby that all makes a lot of sense. Will keep track of your progress and try to pull changes into our fork where possible. Ideally we'll be able to switch to using this library directly soon.
👍🏼 - we can definitely handle that as a pre-verification step.
Commits:
* feat: add verify method
* feat: add parsers for incoming request
* fix: whitespace conflicts
* fix: compile errors
* fix: address comments and improve verify
* fix: remove pnpm lock file
* fix: add newline to .gitignore
* fix: typo on property name
* fix: some extra parsing checks
* feat: add tests
Force-pushed from ef162f7 to 36dfb5e.
@dhensby - I've rebased our fork on your latest branch. The two features we'd need implemented before we could switch to using your lib are:
Hopefully our existing implementations can give you some inspiration 😄
@dhensby - I've pulled out the digest code and tests now and moved these to: https://github.com/interledger/httpbis-digest-headers

I've copied your basic project structure to keep things consistent. This PR should now be a pretty clean comparison of your code and what we've added to support:
@adrianhopebailie - would you like to take a look at #27 - I'm pretty happy that I've got that PR in a place where the http-signing standard is practically fully implemented. |
This has all been incorporated in #27 and is now released as v1.0.0.
@dhensby - we need to add a lot of features to this library to use it within the Rafiki project.
The plan is to add: .split() etc.

We'll continue working on the library at interledger/http-message-signatures and will try to pull from upstream when we can, but we have a large team and can't afford to get blocked waiting on you to merge in changes.
If you are okay with the scope of your library expanding to include all of this and are happy to turn around PRs quickly then we'd be keen to not pollute npm with our fork and just depend on your library within our project.
Open to ideas on how we can collaborate on this effectively.