Return run IDs from invoke/batch endpoints #148
Conversation
Thank you for your pull request and welcome to our community. We require contributors to sign our Contributor License Agreement, and we don't seem to have the username @jakerachleff on file. In order for us to review and merge your code, please complete the Individual Contributor License Agreement here https://forms.gle/Ljhqvt9Gdi1N385W6 . For more details about why we have a CLA and other contribution guidelines please see: https://github.com/langchain-ai/langserve/blob/main/CONTRIBUTING.md.
Force-pushed from c20c234 to 860a9a1
langserve/server.py (outdated)
):
    return str(event_aggregator.callback_events[0].get("run_id"))
else:
    None
Any thoughts on the error handling we'd want here? I just returned None for now, but that's obviously not ideal. Curious what we'd prefer.
I think this should never occur, so an assertion error is reasonable; we could also log an error.
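The suggestion above might look something like the following sketch. `EventAggregator` and `get_base_run_id` are hypothetical stand-ins for the actual server internals, not the PR's exact code:

```python
import logging
from dataclasses import dataclass, field
from typing import Optional

logger = logging.getLogger(__name__)


@dataclass
class EventAggregator:
    """Stand-in for the server's callback event aggregator (hypothetical)."""
    callback_events: list = field(default_factory=list)


def get_base_run_id(aggregator: EventAggregator) -> Optional[str]:
    """Return the run_id of the first callback event as a string.

    Per the review suggestion, an empty aggregator should never happen,
    so we log an error and raise instead of silently returning None.
    """
    if aggregator.callback_events:
        return str(aggregator.callback_events[0].get("run_id"))
    logger.error("No callback events captured; cannot determine run_id")
    raise AssertionError("expected at least one callback event with a run_id")
```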
async with get_async_test_client(app, raise_app_exceptions=True) as async_client:
    response = await async_client.post("/invoke", json={"input": 1})
    run_id = response.json()["metadata"]["run_id"]
    assert _is_valid_uuid(run_id)
These tests are kinda lame, since they don't actually match up the proper run ID; they just assert that some run ID was returned.
Do you know how to mock out the generation of run_ids for a given runnable run? If I could do that, the tests would be much better. Not sure how to reach into the underlying infra like that, though.
There's a fake tracer (FakeTracer); check if it does what's needed. If not, we could use mock.patch (though that's brittle).
Took a quick look at the code; the fake tracer won't help here. I think the current test is fine.
I signed the verification form earlier... not sure why it's still marked as failed :/
@jakerachleff it's not automated, I need to add you manually 😞
@jakerachleff you're approved now :)
@jakerachleff this looks good to me; let's touch base with @nfcampos to get his take on putting the run IDs in the response body vs. in response headers.
Clients of LangServe may be interested in the run_id of each parent LangChain run. This can be used, for example, by external tracing applications like LangSmith. In this PR, we introduce a new dictionary, "metadata", in the response of the invoke/batch endpoints, which carries the run_id (or run_ids for batch).
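Concretely, the response bodies described above would have roughly this shape. The `build_*` helpers below are illustrative sketches of the payload structure, not the PR's actual server code:

```python
from uuid import uuid4


def build_invoke_response(output, run_id):
    # /invoke: a single output plus the parent run_id under "metadata"
    return {"output": output, "metadata": {"run_id": str(run_id)}}


def build_batch_response(outputs, run_ids):
    # /batch: a list of outputs plus one run_id per input under "run_ids"
    return {"output": outputs, "metadata": {"run_ids": [str(r) for r in run_ids]}}
```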
Notes