30 support other llms than openai (#37)
* feat: LangChain4j support

* feat: LangChain4j support

* feat: LangChain4j support

* feat: LangChain4j support

* chore: updates from main

* fix: openAI base path

* feat: langchain4j models unit tests, refactoring

* feat: langchain4j support, read me update

* Update lmos-router-llm/ReadMe.md

Co-authored-by: Kai Kreuzer <[email protected]>

* Update lmos-router-llm/ReadMe.md

Co-authored-by: Kai Kreuzer <[email protected]>

* Update lmos-router-llm/ReadMe.md

Co-authored-by: Kai Kreuzer <[email protected]>

* Update lmos-router-llm/src/test/kotlin/ai/ancf/lmos/router/llm/LangChainChatModelFactoryTest.kt

Co-authored-by: Kai Kreuzer <[email protected]>

---------

Co-authored-by: Kai Kreuzer <[email protected]>
xmxnt and kaikreuzer authored Dec 20, 2024
1 parent dc61b1e commit ba23ff7
Showing 9 changed files with 879 additions and 107 deletions.
186 changes: 171 additions & 15 deletions lmos-router-llm/ReadMe.md

## Overview

The **LLM Submodule** is responsible for resolving agent routing specifications using a language model. It includes classes and interfaces for interacting with the OpenAI API by default, providing prompts to the model, and resolving agent routing specifications based on the model's responses. Additionally, it supports multiple language model providers such as Anthropic, Gemini, Ollama, and other OpenAI-compatible APIs through the LangChain4j integration.

## Table of Contents

1. [Introduction](#introduction)
2. [Classes and Interfaces](#classes-and-interfaces)
   - [ModelClient](#modelclient)
   - [DefaultModelClient](#defaultmodelclient)
   - [DefaultModelClientProperties](#defaultmodelclientproperties)
   - [LLMAgentRoutingSpecsResolver](#llmagentroutingspecsresolver)
   - [ModelPromptProvider](#modelpromptprovider)
   - [DefaultModelPromptProvider](#defaultmodelpromptprovider)
   - [ExternalModelPromptProvider](#externalmodelpromptprovider)
   - [AgentRoutingSpecListType](#agentroutingspeclisttype)
   - [ModelClientResponse](#modelclientresponse)
   - [LangChainModelClient](#langchainmodelclient)
   - [LangChainChatModelFactory](#langchainchatmodelfactory)
   - [LangChainClientProvider](#langchainclientprovider)
3. [Usage](#usage)
   - [Step 1: Initialize the DefaultModelClient](#step-1-initialize-the-defaultmodelclient)
   - [Step 2: Initialize the LLMAgentRoutingSpecsResolver](#step-2-initialize-the-llmagentroutingspecsresolver)
   - [Step 3: Resolve the Agent](#step-3-resolve-the-agent)
   - [Advanced: Using LangChainModelClient](#advanced-using-langchainmodelclient)
4. [Configuration](#configuration)
5. [Error Handling](#error-handling)
6. [License](#license)

## Introduction

The **LLM Submodule** leverages advanced language models to understand and match user queries with agent capabilities. It includes a default implementation for calling the OpenAI model and resolving agent routing specifications using the model's responses. Through the integration with **LangChain4j**, the submodule extends support to additional providers such as Anthropic, Gemini, Ollama, and other OpenAI-compatible APIs, offering flexibility in choosing the underlying language model service.

## Classes and Interfaces

### ModelClient

The `ModelClient` interface represents a client that can communicate with a language model.

- **Method:**
  - `call(messages: List<ChatMessage>): Result<ChatMessage, AgentRoutingSpecResolverException>`
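
A minimal sketch of a custom implementation (the `Success` constructor of the core `Result` type and the fixed agent name are assumptions for illustration; adjust to the actual `lmos-router-core` API):

```kotlin
// Hypothetical client that always answers with a fixed agent, useful for tests.
class StaticModelClient : ModelClient {
    override fun call(messages: List<ChatMessage>): Result<ChatMessage, AgentRoutingSpecResolverException> =
        Success(AssistantMessage("""{"agentName": "static-agent"}"""))
}
```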

### DefaultModelClient

The `DefaultModelClient` class is the default implementation of the `ModelClient` interface. It interacts with the OpenAI API to process messages.

- **Constructor:**
  - `DefaultModelClient(defaultModelClientProperties: DefaultModelClientProperties)`

- **Method:**
  - `call(messages: List<ChatMessage>): Result<ChatMessage, AgentRoutingSpecResolverException>`

### DefaultModelClientProperties

The `DefaultModelClientProperties` data class encapsulates the configuration properties required by the `DefaultModelClient`.

- **Fields:**
  - `openAiUrl: String`
  - `openAiApiKey: String`
  - `model: String`
  - `maxTokens: Int`
  - `temperature: Double`
  - `format: String`

### LLMAgentRoutingSpecsResolver

The `LLMAgentRoutingSpecsResolver` class is responsible for resolving agent routing specifications using a language model.

- **Constructor:**
  - `LLMAgentRoutingSpecsResolver(agentRoutingSpecsProvider: AgentRoutingSpecsProvider, modelPromptProvider: ModelPromptProvider, modelClient: ModelClient, serializer: Json, modelClientResponseProcessor: ModelClientResponseProcessor)`

- **Methods:**
  - `resolve(context: Context, input: UserMessage): Result<AgentRoutingSpec?, AgentRoutingSpecResolverException>`
  - `resolve(filters: Set<SpecFilter>, context: Context, input: UserMessage): Result<AgentRoutingSpec?, AgentRoutingSpecResolverException>`

### ModelPromptProvider

The `ModelPromptProvider` interface defines a provider that generates prompts for the language model based on the context and user input.

- **Method:**
  - `providePrompt(context: Context, agentRoutingSpecs: Set<AgentRoutingSpec>, input: UserMessage): Result<String, AgentRoutingSpecResolverException>`
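
As a sketch, a custom provider could render the available agents into the prompt. The `name` accessor on `AgentRoutingSpec` and the `Success` constructor of the core `Result` type are assumptions here:

```kotlin
// Hypothetical provider that lists agent names in a plain-text prompt.
class NameListPromptProvider : ModelPromptProvider {
    override fun providePrompt(
        context: Context,
        agentRoutingSpecs: Set<AgentRoutingSpec>,
        input: UserMessage,
    ): Result<String, AgentRoutingSpecResolverException> {
        val names = agentRoutingSpecs.joinToString(", ") { it.name }
        return Success("Route the user query to one of: $names. Answer as {\"agentName\": \"...\"}.")
    }
}
```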

### DefaultModelPromptProvider

The `DefaultModelPromptProvider` class offers a generic implementation of the `ModelPromptProvider`, generating standard prompts for agent resolution.

- **Method:**
  - `providePrompt(context: Context, agentRoutingSpecs: Set<AgentRoutingSpec>, input: UserMessage): Result<String, AgentRoutingSpecResolverException>`

### ExternalModelPromptProvider

The `ExternalModelPromptProvider` class generates prompts from an external file, supporting agent routing specifications in XML or JSON formats.

- **Constructor:**
  - `ExternalModelPromptProvider(promptFilePath: String, agentRoutingSpecsListType: AgentRoutingSpecListType)`

- **Method:**
  - `providePrompt(context: Context, agentRoutingSpecs: Set<AgentRoutingSpec>, input: UserMessage): Result<String, AgentRoutingSpecResolverException>`
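
A brief usage sketch (the prompt file path below is illustrative):

```kotlin
// Load the routing prompt from an external file, with agent specs rendered as JSON.
val promptProvider =
    ExternalModelPromptProvider(
        promptFilePath = "prompts/agent-routing-prompt.txt", // hypothetical path
        agentRoutingSpecsListType = AgentRoutingSpecListType.JSON,
    )
```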

### AgentRoutingSpecListType

The `AgentRoutingSpecListType` enum defines the supported formats for agent routing specifications.

- **Values:**
  - `XML`
  - `JSON`

### ModelClientResponse

The `ModelClientResponse` class encapsulates the response from the language model client.

- **Field:**
  - `agentName: String`
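
For example, a model reply such as `{"agentName": "weather-agent"}` (the agent name here is illustrative) deserializes into a `ModelClientResponse` whose `agentName` is `weather-agent`.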

### LangChainModelClient

The `LangChainModelClient` class is an advanced implementation of the `ModelClient` interface using **LangChain4j** to interact with various language models.

- **Constructor:**
  - `LangChainModelClient(chatLanguageModel: ChatLanguageModel)`

- **Method:**
  - `call(messages: List<ChatMessage>): Result<ChatMessage, AgentRoutingSpecResolverException>`

**Details:**
- Converts internal `ChatMessage` types (`UserMessage`, `AssistantMessage`, `SystemMessage`) to **LangChain4j** compatible message types.
- Handles exceptions by encapsulating them within `AgentRoutingSpecResolverException`.
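
For instance, any LangChain4j `ChatLanguageModel` can be wrapped directly; a minimal sketch with LangChain4j's OpenAI model (builder values are placeholders):

```kotlin
import dev.langchain4j.model.openai.OpenAiChatModel

// Build a LangChain4j model and hand it to LangChainModelClient.
val chatLanguageModel = OpenAiChatModel.builder()
    .apiKey("your-openai-api-key")
    .modelName("gpt-4o")
    .build()
val langChainModelClient = LangChainModelClient(chatLanguageModel)
```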

### LangChainChatModelFactory

The `LangChainChatModelFactory` is a factory class responsible for creating instances of `ChatLanguageModel` based on the provided configuration.

- **Companion Object Method:**
  - `createClient(properties: ModelClientProperties): ChatLanguageModel`

**Supported Providers:**
- `OPENAI`
- `ANTHROPIC`
- `GEMINI`
- `OLLAMA`
- `OTHER` (for OpenAI-compatible APIs)

**Details:**
- Configures the language model client with appropriate settings such as API keys, model names, token limits, temperature, and response formats based on the selected provider.

### LangChainClientProvider

The `LangChainClientProvider` enum lists the supported language model providers.

- **Values:**
  - `OPENAI`
  - `ANTHROPIC`
  - `GEMINI`
  - `OLLAMA`
  - `OTHER`

## Usage

### Step 1: Initialize the DefaultModelClient

```kotlin
val defaultModelClientProperties = DefaultModelClientProperties(
    openAiUrl = "https://api.openai.com/v1/chat/completions",
    openAiApiKey = "your-openai-api-key",
    model = "gpt-4o",
    maxTokens = 1500,
    temperature = 0.7,
    format = "json"
)
val modelClient = DefaultModelClient(defaultModelClientProperties)
```

### Step 2: Initialize the LLMAgentRoutingSpecsResolver

```kotlin
val agentRoutingSpecsProvider = SimpleAgentRoutingSpecProvider()
val modelPromptProvider = DefaultModelPromptProvider()
val modelClientResponseProcessor = DefaultModelClientResponseProcessor()
val serializer = Json { ignoreUnknownKeys = true }

val llmAgentRoutingSpecsResolver = LLMAgentRoutingSpecsResolver(
    agentRoutingSpecsProvider,
    modelPromptProvider,
    modelClient,
    serializer,
    modelClientResponseProcessor
)
```

### Step 3: Resolve the Agent

```kotlin
val context = Context(listOf(AssistantMessage("Hello")))
val input = UserMessage("Can you help me find a new phone?")
val result = llmAgentRoutingSpecsResolver.resolve(context, input)
```
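
The resolver returns the core `Result` type; `getOrThrow()` (also used internally by the resolver) unwraps it when you are willing to surface failures as exceptions. The `name` accessor on `AgentRoutingSpec` is an assumption:

```kotlin
// A null spec means no suitable agent was found.
val agentRoutingSpec: AgentRoutingSpec? = result.getOrThrow()
println("Resolved agent: ${agentRoutingSpec?.name ?: "none"}")
```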

### Advanced: Using LangChainModelClient

For enhanced flexibility and support for multiple language model providers, you can utilize the `LangChainModelClient` along with the `LangChainChatModelFactory`.

```kotlin
// Define model client properties
val langChainProperties = ModelClientProperties(
    provider = LangChainClientProvider.OPENAI.name.lowercase(),
    apiKey = "your-openai-api-key",
    model = "gpt-4o",
    maxTokens = 1500,
    temperature = 0.7,
    topP = 0.9,
    topK = 40,
    format = "json",
    baseUrl = null // only required for the OTHER provider
)

// Create ChatLanguageModel using the factory
val chatLanguageModel = LangChainChatModelFactory.createClient(langChainProperties)

// Initialize LangChainModelClient
val langChainModelClient = LangChainModelClient(chatLanguageModel)

// Use LangChainModelClient with LLMAgentRoutingSpecsResolver
val llmAgentRoutingSpecsResolverAdvanced = LLMAgentRoutingSpecsResolver(
    agentRoutingSpecsProvider,
    modelPromptProvider,
    langChainModelClient,
    serializer,
    modelClientResponseProcessor
)

// Resolve agent as before
val advancedResult = llmAgentRoutingSpecsResolverAdvanced.resolve(context, input)
```

**Supported Providers via LangChainModelClient:**
- **OpenAI:** Default provider with full configuration support.
- **Anthropic:** Supports models from Anthropic.
- **Gemini:** Integrates with Google AI Gemini models.
- **Ollama:** Connects to Ollama-based models.
- **Other:** For any OpenAI-compatible API by specifying the `baseUrl`.

## Configuration

Configure the `DefaultModelClientProperties` or `ModelClientProperties` based on the chosen provider. Ensure that all required fields such as `apiKey`, `model`, and `baseUrl` (if applicable) are correctly set.

**Example Configuration for OpenAI:**

```kotlin
val properties = ModelClientProperties(
    provider = LangChainClientProvider.OPENAI.name.lowercase(),
    apiKey = "your-openai-api-key",
    model = "gpt-4o",
    maxTokens = 1500,
    temperature = 0.7,
    topP = 0.9,
    topK = 40,
    format = "json",
    baseUrl = null
)
```

**Example Configuration for Other Providers:**

```kotlin
val properties = ModelClientProperties(
    provider = LangChainClientProvider.ANTHROPIC.name.lowercase(),
    apiKey = "your-anthropic-api-key",
    model = "claude-3-5-sonnet-20241022",
    maxTokens = 1500,
    temperature = 0.7
)
```
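
**Example Configuration for an OpenAI-Compatible API (OTHER):**

A sketch for the `OTHER` provider, where `baseUrl` must point to the OpenAI-compatible endpoint (the URL and model name below are placeholders):

```kotlin
val properties = ModelClientProperties(
    provider = LangChainClientProvider.OTHER.name.lowercase(),
    apiKey = "your-api-key",
    model = "your-model-name",
    baseUrl = "https://your-host.example.com/v1", // required for the OTHER provider
    maxTokens = 1500,
    temperature = 0.7
)
```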

## Error Handling

The submodule signals failures during model interactions and agent resolution through a dedicated exception type and the core `Result` abstraction.

- **Exceptions:**
  - `AgentRoutingSpecResolverException`: Thrown when there are issues in resolving agent routing specifications or interacting with the language model.

- **Handling Strategy:**
  - Utilize the `Result` type to handle successes and failures gracefully.
  - Implement appropriate fallback mechanisms or user notifications in case of failures, as sketched below.
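
A minimal sketch of such a fallback (only `getOrThrow()` is confirmed by the resolver's source; the `name` accessor and the default agent name are assumptions):

```kotlin
// Fall back to a default agent name when resolution fails.
val agentName =
    try {
        llmAgentRoutingSpecsResolver.resolve(context, input).getOrThrow()?.name
    } catch (e: AgentRoutingSpecResolverException) {
        null
    } ?: "fallback-agent" // hypothetical default
```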
11 changes: 9 additions & 2 deletions lmos-router-llm/build.gradle.kts
dependencies {
    api(project(":lmos-router-core"))
    implementation("org.slf4j:slf4j-api:1.7.25")
    implementation("org.jetbrains.kotlinx:kotlinx-serialization-json-jvm:1.7.3")
    compileOnly("dev.langchain4j:langchain4j-open-ai:0.36.2")
    compileOnly("dev.langchain4j:langchain4j-anthropic:0.36.2")
    compileOnly("dev.langchain4j:langchain4j-google-ai-gemini:0.36.2")
    compileOnly("dev.langchain4j:langchain4j-ollama:0.36.2")

    testImplementation("dev.langchain4j:langchain4j-open-ai:0.36.2")
    testImplementation("dev.langchain4j:langchain4j-anthropic:0.36.2")
    testImplementation("dev.langchain4j:langchain4j-google-ai-gemini:0.36.2")
    testImplementation("dev.langchain4j:langchain4j-ollama:0.36.2")
}
lmos-router-llm/src/main/kotlin/ai/ancf/lmos/router/llm/LLMAgentRoutingSpecsResolver.kt
@@ -16,6 +16,7 @@ import org.slf4j.LoggerFactory
 * @param modelPromptProvider The provider of model prompts.
 * @param modelClient The client for the language model.
 * @param serializer The JSON serializer.
 * @param modelClientResponseProcessor The processor for the model client response.
 */
class LLMAgentRoutingSpecsResolver(
    override val agentRoutingSpecsProvider: AgentRoutingSpecsProvider,
@@ -34,6 +35,7 @@
            ignoreUnknownKeys = true
            isLenient = true
        },
    private val modelClientResponseProcessor: ModelClientResponseProcessor = DefaultModelClientResponseProcessor(),
) : AgentRoutingSpecsResolver {
    private val log = LoggerFactory.getLogger(LLMAgentRoutingSpecsResolver::class.java)

@@ -62,7 +64,10 @@
        messages.add(input)

        log.trace("Fetching agent spec completion")
        var response: String = modelClient.call(messages).getOrThrow().content

        response = modelClientResponseProcessor.process(response)

        val agent: ModelClientResponse = serializer.decodeFromString(serializer(), response)

        log.trace("Agent resolved: ${agent.agentName}")