feat: mira integration #282

Open · wants to merge 6 commits into main
Conversation

VaibhavSatija10

Add Mira API Integration for LLM Support

Overview

This pull request introduces support for Mira's API, providing a unified interface for accessing various LLM models. The integration enables Rig users to leverage Mira's API for efficient, AI-powered text completions.

Features

  • New mira provider module with a complete API client implementation.
  • API key authentication for secure and straightforward access.
  • Chat completion support with customizable parameters.
  • Model listing functionality to retrieve available LLM models.
  • Credit management capabilities, including balance checks and transaction history.
  • Comprehensive test coverage to ensure reliability and stability.

Implementation Details

  • Introduced a Client struct with methods for interacting with Mira's API endpoints.
  • Implemented robust error handling to manage API failures gracefully.
  • Provided example usage code to demonstrate basic interactions.
  • Added integration tests to validate all major functionalities.

This PR enhances the flexibility and functionality of Rig by enabling seamless AI model access via Mira’s API.

Reference:
https://docs.mira.network/network/get-started/introduction

- generation request
- available model list
- user credits balance
@0xMochan
Contributor

0xMochan commented Feb 7, 2025

Hey, thanks for the PR! We'll get to a review soon, but I'll note that we have a specific provider integration flow that leverages our CompletionModel trait to ensure all providers can be used cohesively throughout the rig ecosystem. Please check out some providers like openai.rs & hyperbolic.rs for examples!

@cvauclair cvauclair added this to the 2025-02-24 milestone Feb 10, 2025
@VaibhavSatija10
Author

Thanks for the feedback and for pointing me towards the CompletionModel trait. I've reviewed some of the examples including openai.rs and hyperbolic.rs and understand how it's used to standardize provider integration within the rig ecosystem. I've refactored my PR to leverage the CompletionModel trait, ensuring consistency and compatibility with other providers.

@mateobelanger mateobelanger changed the title Feature/mira integration feat:/mira integration Feb 18, 2025
@mateobelanger mateobelanger changed the title feat:/mira integration feat: mira integration Feb 18, 2025
@0xMochan
Contributor


Feel free to join our discord if you would like pointers on how best to integrate this!

Contributor

@0xMochan 0xMochan left a comment


Additionally, the CI tests are failing here. Make sure to test your code before submitting for review again, as all tests must pass! I also wrote a comment on good testing practices: the current tests make network requests, which won't work well in our CI/CD!

}

#[derive(Debug, Serialize, Deserialize)]
pub struct ChatMessage {

We standardize on naming these Message.

}

#[derive(Debug, Serialize)]
pub struct AiRequest {

Here, we standardize on CompletionRequest.

// Add prompt first
let prompt = completion_request.prompt_with_context();
let prompt_str = match prompt {
Message::User { content } => content

We use the TryFrom and From traits to bi-directionally convert between provider-specific messages and our general message schema. Our general schema carries a lot of structure to ensure we fully cover the spec, but if the provider only supports plain text completions, the From impl can simply return empty messages if that's desired, or TryFrom can return errors to surface information to the user.

It's important that we use the TryFrom and From traits, as they allow messages to be easily converted by users who are maintaining their own chat history (especially in future updates).
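The pattern described above can be sketched as follows. This is a minimal, hypothetical simplification (MiraMessage and the two-variant Message are stand-ins for the real rig-core types, which carry much more structure): From handles the infallible generic-to-provider direction, while TryFrom surfaces errors for provider messages the generic schema can't represent.

```rust
// Hypothetical, simplified stand-ins for the provider message and rig's
// generic message schema; the real types live in rig-core.
#[derive(Debug, PartialEq)]
struct MiraMessage {
    role: String,
    content: String,
}

#[derive(Debug, PartialEq)]
enum Message {
    User { content: String },
    Assistant { content: String },
}

// From: generic -> provider. Infallible, because every generic message
// can be flattened to a role + text pair in this sketch.
impl From<Message> for MiraMessage {
    fn from(msg: Message) -> Self {
        match msg {
            Message::User { content } => MiraMessage { role: "user".into(), content },
            Message::Assistant { content } => MiraMessage { role: "assistant".into(), content },
        }
    }
}

// TryFrom: provider -> generic. Fallible, so unknown roles surface as
// errors instead of being silently dropped.
impl TryFrom<MiraMessage> for Message {
    type Error = String;

    fn try_from(msg: MiraMessage) -> Result<Self, Self::Error> {
        match msg.role.as_str() {
            "user" => Ok(Message::User { content: msg.content }),
            "assistant" => Ok(Message::Assistant { content: msg.content }),
            other => Err(format!("unsupported role: {other}")),
        }
    }
}

fn main() {
    // The blanket TryInto impl lets callers write .try_into() directly.
    let provider: MiraMessage = Message::User { content: "hi".into() }.into();
    assert_eq!(provider.role, "user");

    let generic: Message = provider.try_into().unwrap();
    assert_eq!(generic, Message::User { content: "hi".into() });
}
```

Because TryFrom gives you the blanket TryInto impl for free, user code maintaining its own chat history can convert in either direction with .into() / .try_into().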


// Add chat history
for message in completion_request.chat_history {
match message {

Same here as earlier: defining an impl TryFrom<Message> for completion::Message would be the best way to implement this behavior, allowing you to use .try_into in the actual completion method.

.into_iter()
.filter_map(|c| match c {
message::UserContent::Text(text) => Some(text.text),
_ => None, // Skip other content types

I would say it's a bit invisible to silently toss out content types. Ideally, we return an error here so that the user can react to the lack of image and other content support. Instead of filter_map, you'd map into a collect::<Result<Vec<_>, _>>()?, which surfaces the error out of the loop and early-returns!

Alternatively, you can produce a warning.
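The error-returning alternative the reviewer suggests can be sketched like this (UserContent and join_text are hypothetical simplifications of the PR's types, not the actual implementation): each item maps to a Result, and collecting into Result<Vec<_>, _> stops at the first unsupported item.

```rust
// Hypothetical simplification of rig's user content type.
#[derive(Debug)]
enum UserContent {
    Text(String),
    Image(Vec<u8>),
}

// Instead of filter_map silently dropping unsupported content, map each
// item to a Result and collect; the `?` early-returns on the first error.
fn join_text(contents: Vec<UserContent>) -> Result<String, String> {
    let parts = contents
        .into_iter()
        .map(|c| match c {
            UserContent::Text(text) => Ok(text),
            other => Err(format!("unsupported content type: {other:?}")),
        })
        .collect::<Result<Vec<_>, _>>()?; // surfaces the error out of the loop
    Ok(parts.join(" "))
}

fn main() {
    let ok = join_text(vec![
        UserContent::Text("hello".into()),
        UserContent::Text("world".into()),
    ]);
    assert_eq!(ok.unwrap(), "hello world");

    // An image in the input now produces a visible error instead of
    // being silently skipped.
    assert!(join_text(vec![UserContent::Image(vec![0u8])]).is_err());
}
```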

.into_iter()
.filter_map(|c| match c {
message::AssistantContent::Text(text) => Some(text.text),
_ => None, // Skip tool calls

Same as earlier: explicitly error, or emit a warning via the trace module, so that the user understands that this model doesn't support tool calling!
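The warning alternative might look like the sketch below (types are hypothetical simplifications). A real implementation would emit the warning with tracing::warn! as the reviewer suggests; eprintln! stands in here only to keep the sketch dependency-free.

```rust
// Hypothetical simplification of rig's assistant content type.
#[derive(Debug)]
enum AssistantContent {
    Text(String),
    ToolCall(String),
}

// Keep the text parts, but warn on each skipped tool call so the user
// knows this model doesn't support tool calling.
fn join_text_with_warning(contents: Vec<AssistantContent>) -> String {
    contents
        .into_iter()
        .filter_map(|c| match c {
            AssistantContent::Text(text) => Some(text),
            AssistantContent::ToolCall(name) => {
                // Real code: tracing::warn!("model does not support tool calls, skipping {name}");
                eprintln!("warning: skipping unsupported tool call: {name}");
                None
            }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

fn main() {
    let out = join_text_with_warning(vec![
        AssistantContent::Text("hello".into()),
        AssistantContent::ToolCall("search".into()),
        AssistantContent::Text("world".into()),
    ]);
    assert_eq!(out, "hello world");
}
```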

_ => None, // Skip tool calls
})
.collect::<Vec<_>>()
.join(" ");

Could consider joining by '\n', as I think it more closely matches what multiple text contents in a row represent. This is opinionated, of course; up to the implementer!


Still unresolved. Either write a comment here explaining the thought process or make the requested change, please.


impl Client {
/// Create a new Mira client with the given API key
pub fn new(api_key: impl AsRef<str>) -> Result<Self, MiraError> {

We generally just use api_key: &str; is there a reason why you used impl AsRef<str>?
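For comparison, here is the &str form the reviewer suggests, with a minimal hypothetical Client (the real struct has more fields). Since &String coerces to &str, callers holding an owned String lose nothing, so impl AsRef<str> adds little flexibility here.

```rust
// Hypothetical minimal client; the real Client holds more state.
struct Client {
    api_key: String,
}

impl Client {
    // Plain &str signature, as used by the other rig providers.
    fn new(api_key: &str) -> Self {
        Client { api_key: api_key.to_string() }
    }
}

fn main() {
    let owned = String::from("sk-test");
    let c1 = Client::new(&owned);    // &String coerces to &str
    let c2 = Client::new("sk-test"); // string literal works directly
    assert_eq!(c1.api_key, c2.api_key);
}
```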

}
}

#[cfg(test)]

For tests, we recommend studying the tests of other modules like openai or anthropic. We don't actually make requests to their servers; we more or less test the serialization and deserialization of our messages, especially converting our message to the generic message and back to the provider one!
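The shape of such a test might look like the sketch below: deterministic, offline, and focused on round-tripping a message through a generic representation and back. Everything here is a hypothetical simplification (real tests would round-trip through the actual From/TryFrom impls and serde_json), but it shows the structure: build a value, convert out and back, assert equality, no network.

```rust
// Hypothetical simplified provider message.
#[derive(Debug, PartialEq)]
struct MiraMessage {
    role: String,
    content: String,
}

// Stand-ins for the From/TryFrom conversions the real module defines.
fn to_generic(m: &MiraMessage) -> (String, String) {
    (m.role.clone(), m.content.clone())
}

fn from_generic(role: &str, content: &str) -> MiraMessage {
    MiraMessage { role: role.into(), content: content.into() }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn round_trip_preserves_message() {
        // Deterministic and offline: no request to Mira's servers.
        let original = MiraMessage { role: "user".into(), content: "hi".into() };
        let (role, content) = to_generic(&original);
        assert_eq!(original, from_generic(&role, &content));
    }
}

fn main() {
    // Same round-trip check, runnable outside the test harness.
    let original = MiraMessage { role: "user".into(), content: "hi".into() };
    let (role, content) = to_generic(&original);
    assert_eq!(original, from_generic(&role, &content));
}
```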

- Addressed reviewer comments
- updated tests
@cvauclair cvauclair modified the milestones: 2025-02-24, 2025-03-17 Mar 3, 2025
Contributor

@0xMochan 0xMochan left a comment


I think mainly the .generate method on impl Client doesn't need to be there (a couple of comments are on that method, but if it's removed, those comments will be resolved). Some other minor comments, but fairly good. I'd also rebase onto main so we can merge quickly.

{
let raw = RawMessage::deserialize(deserializer)?;
message::Message::try_from(raw).map_err(serde::de::Error::custom)
}

It's a bit odd to define a custom deserializer here for converting RawMessage to message::Message. If you have a good reason, I'm OK with leaving it, but please leave some comments on why this approach was chosen. Otherwise, I'd ask you to conform to how the other providers handle this.

})
.collect::<Vec<_>>()
.join(" "),
_ => return Err(MiraError::ApiError(422)),

This error provides zero detail on what happened here. Please include a string describing what went wrong when matching the prompt.
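One way to carry that detail, sketched with a hypothetical simplification of the PR's MiraError (the real enum has other variants): give the variant both the status code and a message, and render them together in Display.

```rust
use std::fmt;

// Hypothetical simplification of the PR's error type: the variant holds
// both the code and a description of what went wrong.
#[derive(Debug)]
enum MiraError {
    ApiError { status: u16, message: String },
}

impl fmt::Display for MiraError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            MiraError::ApiError { status, message } => {
                write!(f, "Mira API error {status}: {message}")
            }
        }
    }
}

fn main() {
    // Instead of a bare `MiraError::ApiError(422)`, the user sees why.
    let err = MiraError::ApiError {
        status: 422,
        message: "prompt must be a user message with text content".into(),
    };
    assert_eq!(
        err.to_string(),
        "Mira API error 422: prompt must be a user message with text content"
    );
}
```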


It seems like this was resolved in the later method


if !response.status().is_success() {
let status = response.status();
return Err(MiraError::ApiError(status.as_u16()));

Additionally, it's really important here to trace the error from the response in addition to returning it in the wrapped error. If there's an error from the Mira provider or the HTTP request, a dev currently has no visibility into it.
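A sketch of the idea, factored out of the HTTP layer so it stays self-contained: read the error body, log it, and include it in the returned error. In the real client, status and body would come from the reqwest response (response.status(), response.text().await), and the log line would use tracing::error! rather than the eprintln! stand-in.

```rust
// Hypothetical simplification of the PR's error type.
#[derive(Debug)]
enum MiraError {
    ApiError { status: u16, body: String },
}

// In the real client, `status` and `body` come from the reqwest response.
fn check_response(status: u16, body: &str) -> Result<(), MiraError> {
    if !(200..300).contains(&status) {
        // Real code: tracing::error!("mira request failed: {status}: {body}");
        eprintln!("mira request failed: {status}: {body}");
        // The body is also carried in the error, so callers see it too.
        return Err(MiraError::ApiError { status, body: body.to_string() });
    }
    Ok(())
}

fn main() {
    assert!(check_response(200, "").is_ok());

    let err = check_response(500, "internal error").unwrap_err();
    match err {
        MiraError::ApiError { status, body } => {
            assert_eq!(status, 500);
            assert_eq!(body, "internal error");
        }
    }
}
```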

&self,
model: &str,
request: CompletionRequest,
) -> Result<CompletionResponse, MiraError> {

It's unclear why this exists. The main way to generate responses from a CompletionRequest is through the .completion API defined by the CompletionModel trait. This code is nearly duplicated below.

_ => None, // Skip tool calls
})
.collect::<Vec<_>>()
.join(" ");

Still unresolved. Either write a comment here explaining the thought process or make the requested change, please.

4 participants