When configuring RAGLite to use Ollama as the vector model (embedder), I encountered the following issues:
Initially, while configuring Ollama, litellm raised an error. That error was fixed by applying the solution from litellm Issue #7451, but afterwards a `RuntimeError: Event loop is closed` error appeared.
After some debugging, I found that `nest_asyncio` must be imported and applied at the top of the script to patch the event loop so that Ollama's vector model can be called correctly.
Currently, the RAGLite README does not provide any information on how to properly configure the Ollama vector model call, which may cause new users to encounter similar issues during the configuration process.
Configuration Steps
To call Ollama's vector model correctly and avoid the event loop error, follow these steps:
1. Set the Ollama API base URL (a quick connectivity check follows these steps):

```python
import os

from config import SERVER_HOST  # Ensure SERVER_HOST is defined in config.py

os.environ["OLLAMA_API_BASE"] = f"http://{SERVER_HOST}:11434"
```
2. Configure RAGLite:

```python
from raglite import RAGLiteConfig

my_config = RAGLiteConfig(
    db_url="sqlite:///raglite.db",
    llm="ollama/{chat_model}",  # Replace with your Ollama chat model name
    embedder="ollama/{vector_model_full_name}",  # Replace with your Ollama vector model's full name
    chunk_max_size=300,  # Adjust as needed; the default is 1440, and ~512 is recommended for Chinese vector models
)
```
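Before going further, it's worth verifying that Ollama is actually reachable at the configured base URL. The following is a minimal sanity check using only the standard library and Ollama's `/api/tags` endpoint (which lists locally pulled models); it assumes the environment variable from step 1 is already set:

```python
import json
import os
from urllib.request import urlopen

# List the models available on the Ollama server; both your chat model
# and your vector (embedding) model should appear in this list.
base_url = os.environ["OLLAMA_API_BASE"]
with urlopen(f"{base_url}/api/tags") as response:
    models = [m["name"] for m in json.load(response)["models"]]
print(models)
```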
Error Log:
Initially, you may encounter an error like the following:

```
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: Event loop is closed
```

This is caused by asynchronous operations in litellm.
Solution:
Fix the litellm issue:
The first error is caused by litellm itself. According to litellm Issue #7451, the solution is to update the litellm library or apply the fix recommended in that issue.
Fix the event loop error:
After resolving the above issue, you may still encounter `RuntimeError: Event loop is closed`. This error typically occurs when asynchronous event loops are not handled correctly. The solution is to use `nest_asyncio` to patch the event loop.
Add `nest_asyncio` at the beginning of your script:

```python
import nest_asyncio

nest_asyncio.apply()
```
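To confirm the patch works before wiring everything into RAGLite, you can exercise the embedder directly through litellm (the layer RAGLite delegates to). This is a minimal sketch; `bge-m3` is a hypothetical model name used for illustration only:

```python
import os

import nest_asyncio
from litellm import embedding

nest_asyncio.apply()  # Patch the event loop before any litellm calls

os.environ["OLLAMA_API_BASE"] = "http://localhost:11434"  # Adjust host/port as needed

# "ollama/bge-m3" is an illustrative model name; replace it with the
# embedding model you have actually pulled in Ollama.
response = embedding(model="ollama/bge-m3", input=["hello world"])
print(len(response.data[0]["embedding"]))  # Dimensionality of the returned vector
```

If this call succeeds without the `Event loop is closed` error, the RAGLite configuration below should work as well.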
Complete Example:
Here is a simplified example demonstrating how to properly configure and use Ollama's vector model while avoiding the event loop error:

```python
import os
from pathlib import Path

import nest_asyncio
from config import SERVER_HOST  # Ensure SERVER_HOST is defined in config.py
from raglite import RAGLiteConfig, insert_document

# Patch the event loop before any litellm/RAGLite calls
nest_asyncio.apply()

# Set the Ollama API base URL
os.environ["OLLAMA_API_BASE"] = f"http://{SERVER_HOST}:11434"

# Configure RAGLite
my_config = RAGLiteConfig(
    db_url="sqlite:///raglite.db",
    llm="ollama/{chat_model}",  # Replace with your Ollama chat model name
    embedder="ollama/{vector_model_full_name}",  # Replace with your Ollama vector model's full name
    chunk_max_size=300,  # ~512 is recommended for Chinese vector models
)

# Insert a document (e.g., Special Relativity.pdf)
insert_document(Path("Special Relativity.pdf"), config=my_config)
```
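After insertion, you can quickly confirm that the SQLite database was populated. This check uses only the standard library and makes no assumptions about RAGLite's internal schema:

```python
import sqlite3

# List the tables RAGLite created; a non-empty result confirms
# that insert_document wrote to the database.
connection = sqlite3.connect("raglite.db")
tables = connection.execute("SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
print(tables)
connection.close()
```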
Conclusion:
When configuring Ollama's vector model in RAGLite, two issues may arise:
- litellm errors causing connection issues, for which the solution is detailed in litellm Issue #7451.
- The `Event loop is closed` error, which can be resolved by using `nest_asyncio` to patch the event loop.
It would be helpful if the documentation in RAGLite clearly mentioned how to configure Ollama's vector model and resolve these common issues.
Versions:
- litellm: 1.56.5
- raglite: 0.51