
feat: completed the vertexai provider integration #21

Open
wants to merge 15 commits into main

Conversation


@Gmin2 Gmin2 commented Dec 23, 2024

Completed the integration of VertexAI.

Screencast.from.24-12-24.03.01.59.PM.IST.webm

Integration test
Screenshot from 2024-12-25 23-04-52

Unit test
Screenshot from 2024-12-25 23-04-35

  • Implementation of the VertexAI provider
  • Unit tests with good coverage
  • Integration tests with actual API calls (using test credentials)
  • Documentation updates
  • Example configuration
  • Successful chat completion, completion, and embedding requests (see the request sketch after this list)
  • Error handling and proper status code mapping
  • PR review and approval
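
For context, a chat completion request against the hub's OpenAI-compatible endpoint could look roughly like the sketch below. The port, path, pipeline header name, and model name are placeholders, not values taken from this PR:

```rust
// Illustrative client call only; requires the `reqwest` (with the "json"
// feature), `tokio`, and `serde_json` crates.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = reqwest::Client::new();

    // OpenAI-shaped chat completion body, routed to a VertexAI-backed pipeline.
    let body = json!({
        "model": "gemini-1.5-flash",   // placeholder model name
        "messages": [
            { "role": "user", "content": "Say hello from VertexAI" }
        ]
    });

    let resp = client
        .post("http://localhost:3000/api/v1/chat/completions") // assumed hub address and path
        .header("x-traceloop-pipeline", "vertexai-chat")        // assumed pipeline header name
        .json(&body)
        .send()
        .await?;

    println!("status: {}", resp.status());
    println!("body: {}", resp.text().await?);
    Ok(())
}
```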

fix #19
/claim #19

@Gmin2 Gmin2 marked this pull request as draft December 23, 2024 19:54

@nirga nirga left a comment


Thanks @Gmin2! Can you add support for streaming as well?

@Gmin2 Gmin2 marked this pull request as ready for review December 23, 2024 23:55

Gmin2 commented Dec 24, 2024

Thanks @Gmin2! Can you add support for streaming as well?

Can you review this, @nirga? Everything is done (see the video attached in the PR body).

@Gmin2 Gmin2 requested a review from nirga December 24, 2024 09:33
@galkleinman

Hey @Gmin2,

Please copy the "Definition of Done" defined in the issue into the PR, and mark everything you've done.


Gmin2 commented Dec 24, 2024

Please copy the "Definition of Done" defined in the issue into the PR, and mark everything you've done.

Hey @galkleinman, what is meant by "Example configuration"? Do I need to show it in the README.md? And for the docs, I need to send a PR to the docs repo, right?

@galkleinman

@Gmin2 both in the docs and here.


Gmin2 commented Dec 25, 2024

Can I get a review?
cc @galkleinman @nirga


Gmin2 commented Dec 26, 2024

Implemented function calling, image, video, and audio capabilities of the VertexAI provider.

Screenshot from 2024-12-26 22-18-45

Screenshot from 2024-12-26 21-48-30

Screenshot from 2024-12-26 21-18-09

cc @galkleinman @nirga
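
For reference, if function calling goes through the hub's OpenAI-compatible surface, the request body would take the usual OpenAI `tools` shape, roughly as sketched below (the model name and the routing details are assumptions, not confirmed by this PR):

```rust
// Illustrative request body only; requires the `serde_json` crate.
use serde_json::json;

fn main() {
    // OpenAI-style tool/function-calling request; the model name and the idea
    // that the hub forwards this exact shape to VertexAI are assumptions here.
    let request = json!({
        "model": "gemini-1.5-pro",
        "messages": [
            { "role": "user", "content": "What's the weather in Paris?" }
        ],
        "tools": [{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": { "type": "string" }
                    },
                    "required": ["city"]
                }
            }
        }]
    });

    println!("{}", serde_json::to_string_pretty(&request).unwrap());
}
```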


@galkleinman galkleinman left a comment


Some major change requests (before I dive deeper into the implementation):

  • The whole point of this hub/router is that you use the OpenAI SDK/API to call whatever provider/model you want. The routing is based on a selected pipeline header, and a pipeline either supports a given functionality (chat, vision, audio, ...) or it doesn't, while multiple pipelines can support the same functionality with different models, or there can be a pipeline per team, etc. Each pipeline should expose the same API as OpenAI exposes and implement it internally. With the current implementation you've changed the routing mechanism of the hub and the exposed API (see the sketch after this list).
  • Tests aren't reproducible since they depend on the existence of API keys and models, and therefore can't be used in CI. Use surf-vcr or any preferred network-recording solution to create test recordings, which will make them reproducible.
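
To make the first point concrete, here is a rough sketch of that shape; the trait and struct names are illustrative, not the hub's actual types. Each provider implements the same OpenAI-shaped operations and does the translation to its own backend internally, so the router only has to pick an implementation based on the pipeline:

```rust
// Sketch only; requires the `async-trait` and `anyhow` crates.
use async_trait::async_trait;

// Illustrative request/response types; the hub's real structs may differ.
pub struct ChatCompletionRequest { /* OpenAI-shaped fields: model, messages, ... */ }
pub struct ChatCompletionResponse { /* OpenAI-shaped fields: id, choices, usage, ... */ }

#[async_trait]
pub trait Provider: Send + Sync {
    // Every provider (OpenAI, VertexAI, Bedrock, ...) exposes the same
    // OpenAI-compatible operation and translates to its own API internally.
    async fn chat_completions(
        &self,
        request: ChatCompletionRequest,
    ) -> Result<ChatCompletionResponse, anyhow::Error>;
}

pub struct VertexAiProvider { /* project id, location, credentials, ... */ }

#[async_trait]
impl Provider for VertexAiProvider {
    async fn chat_completions(
        &self,
        _request: ChatCompletionRequest,
    ) -> Result<ChatCompletionResponse, anyhow::Error> {
        // 1. Map the OpenAI-shaped request to a VertexAI call.
        // 2. Call VertexAI.
        // 3. Map the VertexAI response back into the OpenAI-shaped response.
        todo!("translate to and from the VertexAI API")
    }
}
```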

src/providers/openai/provider.rs (review thread: outdated, resolved)
src/routes.rs (review thread: outdated, resolved)

CLAassistant commented Dec 31, 2024

CLA assistant check
All committers have signed the CLA.


Gmin2 commented Jan 10, 2025

The whole point of this hub/router is that you use the OpenAI SDK/API to call whatever provider/model you want. The routing is based on a selected pipeline header, and a pipeline either supports a given functionality (chat, vision, audio, ...) or it doesn't, while multiple pipelines can support the same functionality with different models, or there can be a pipeline per team, etc. Each pipeline should expose the same API as OpenAI exposes and implement it internally. With the current implementation you've changed the routing mechanism of the hub and the exposed API.

I will do the multimodality part in a separate PR.

  • Tests aren't reproducible since they depend on the existence of API keys and models, and therefore can't be used in CI. Use surf-vcr or any preferred network-recording solution to create test recordings, which will make them reproducible.

Done (see the test sketch below).
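
As an illustration of this kind of reproducible test (not the exact code in this PR), a recorded/mocked round trip could look like the sketch below, using wiremock as one possible stand-in for a network-recording setup such as surf-vcr; the endpoint path and response JSON are assumptions:

```rust
// Sketch only; requires the `wiremock`, `reqwest`, `tokio`, and `serde_json` crates.
use serde_json::json;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

#[tokio::test]
async fn chat_completion_uses_recorded_response() {
    // Stand-in for a recorded provider response, so the test needs no real
    // API keys and can run in CI.
    let server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/v1/chat/completions"))
        .respond_with(ResponseTemplate::new(200).set_body_json(json!({
            "id": "chatcmpl-test",
            "object": "chat.completion",
            "choices": [{
                "index": 0,
                "message": { "role": "assistant", "content": "hello" },
                "finish_reason": "stop"
            }]
        })))
        .mount(&server)
        .await;

    // Point the client at the mock server instead of the real provider.
    let resp = reqwest::Client::new()
        .post(format!("{}/v1/chat/completions", server.uri()))
        .json(&json!({ "model": "test-model", "messages": [] }))
        .send()
        .await
        .unwrap();

    assert_eq!(resp.status().as_u16(), 200);
}
```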

Can I get a review for now?

cc @nirga

@Gmin2 Gmin2 requested a review from galkleinman January 10, 2025 15:05

@nirga nirga left a comment


Hey @Gmin2, can you take a look at the tests? They seem to be failing.


Gmin2 commented Jan 12, 2025

Hey @Gmin2, can you take a look at the tests? They seem to be failing.

Fixed now, @nirga.

@Gmin2 Gmin2 requested a review from nirga January 12, 2025 10:43

Successfully merging this pull request may close these issues.

Feature Request: Add Google VertexAI Provider Support
4 participants