
Error while executing the code example in the Migration Guide Documentation #1866

Open
GenAI-Rocky opened this issue Jan 22, 2025 · 1 comment
Labels
bug Something isn't working module-metrics this is part of metrics module

Comments

@GenAI-Rocky

I have checked the documentation and related resources and couldn't resolve my bug.

Describe the bug
Executing the code provided in the Migration Guide results in the error: AttributeError: 'function' object has no attribute 'generate'

Ragas version: 0.2.11
Python version: 3.12

Code to Reproduce

import openai
import asyncio
import os
from time import sleep
from langchain_openai import AzureChatOpenAI

azure_openai_api_key = os.getenv("AZURE_OPENAI_API_KEY")
azure_openai_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
azure_openai_model_deployment = os.getenv("MODEL_DEPLOYMENT")
azure_openai_model_name = os.getenv("MODEL_NAME")
azure_openai_api_version = os.getenv("OPENAI_API_VERSION")
azure_openai_embedding_deployement = os.getenv("EMBEDDING_DEPLOYMENT")
azure_openai_embedding_name = os.getenv("EMBEDDING_NAME")

# Define the evaluator LLM function
def azure_evaluator_llm(prompt):
    response = AzureChatOpenAI(
        openai_api_version=azure_openai_api_version,
        azure_endpoint=azure_openai_endpoint,
        azure_deployment=azure_openai_model_deployment,
        model= azure_openai_model_name,
        api_key=azure_openai_api_key,
        validate_base_url=False,
    )
    return response["choices"][0]["message"]["content"]

# Pass this to the Faithfulness metric
from ragas.metrics import Faithfulness
from ragas import SingleTurnSample

response = '''Project Management Institute (PMI) is the world's leading association...'''
sample = SingleTurnSample(
    user_input="What is PMI",
    response=response
)

# Initialize the faithfulness metric
faithfulness_metric = Faithfulness(llm=azure_evaluator_llm)

# Define an async function to await the coroutine
async def get_score():
    score = await faithfulness_metric.single_turn_ascore(sample=sample)
    print(score)

# Run the async function
asyncio.run(get_score())

Error trace:

Traceback (most recent call last):
  File "c:\Projects\Guided Experience_Local\Evaluation\RAGAS\Test2.py", line 67, in <module>
    asyncio.run(get_score())
  File "C:\Users\xxx\AppData\Roaming\Python\Python312\site-packages\nest_asyncio.py", line 30, in run
    return loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Roaming\Python\Python312\site-packages\nest_asyncio.py", line 98, in run_until_complete
    return f.result()
           ^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\asyncio\futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\asyncio\tasks.py", line 314, in __step_run_and_handle_result
    result = coro.send(None)
             ^^^^^^^^^^^^^^^
  File "c:\Projects\Guided Experience_Local\Evaluation\RAGAS\Test2.py", line 63, in get_score
    score = await faithfulness_metric.single_turn_ascore(sample=sample)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\metrics\base.py", line 541, in single_turn_ascore
    raise e
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\metrics\base.py", line 534, in single_turn_ascore
    score = await asyncio.wait_for(
            ^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\asyncio\tasks.py", line 520, in wait_for
    return await fut
           ^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\metrics\_faithfulness.py", line 200, in _single_turn_ascore
    return await self._ascore(row, callbacks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\metrics\_faithfulness.py", line 208, in _ascore
    statements = await self._create_statements(row, callbacks)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\metrics\_faithfulness.py", line 174, in _create_statements
    statements = await self.statement_generator_prompt.generate(
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\prompt\pydantic_prompt.py", line 128, in generate
    output_single = await self.generate_multiple(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\xxx\AppData\Local\Programs\Python\Python312\Lib\site-packages\ragas\prompt\pydantic_prompt.py", line 189, in generate_multiple
    resp = await llm.generate(
                 ^^^^^^^^^^^^
AttributeError: 'function' object has no attribute 'generate'

Expected behavior
The code should execute without any errors.

@GenAI-Rocky GenAI-Rocky added the bug Something isn't working label Jan 22, 2025
@dosubot dosubot bot added the module-metrics this is part of metrics module label Jan 22, 2025
@sahusiddharth
Collaborator

Hi @GenAI-Rocky,

It looks like the issue you're facing is related to how azure_evaluator_llm is defined: it is a plain Python function, but the evaluator LLM passed to a metric should be a BaseRagasLLM object, for example a Langchain model wrapped in LangchainLLMWrapper. To fix this, refer to the code below and replace ChatOpenAI with your Langchain-wrapped Azure OpenAI model (an Azure-specific sketch follows the example).

import asyncio
import os

from ragas.llms import LangchainLLMWrapper
from langchain_openai import ChatOpenAI

# Wrap the Langchain chat model so Ragas can use it as the evaluator LLM
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o-mini", api_key=os.getenv("OPENAI_API_KEY")))

# Pass this to the Faithfulness metric
from ragas.metrics import Faithfulness
from ragas import SingleTurnSample

sample = SingleTurnSample(
    user_input="What is the capital of France?", 
    response="The capital of France is Paris.",
    retrieved_contexts=["Paris is the capital and most populous city of France."]
)

# Initialize the faithfulness metric
faithfulness_metric = Faithfulness(llm=evaluator_llm)

# Define an async function to await the coroutine
async def get_score():
    score = await faithfulness_metric.single_turn_ascore(sample=sample)
    print(score)

# Run the async function
asyncio.run(get_score())

Output

1
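
Since your reproduction uses Azure OpenAI, here is a minimal sketch of the same fix with AzureChatOpenAI, assuming the same environment variables as in your snippet (adjust the deployment and model names to your setup):

import os

from langchain_openai import AzureChatOpenAI
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import Faithfulness

# Build the Langchain Azure chat model from the environment variables in your snippet
azure_model = AzureChatOpenAI(
    openai_api_version=os.getenv("OPENAI_API_VERSION"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    azure_deployment=os.getenv("MODEL_DEPLOYMENT"),
    model=os.getenv("MODEL_NAME"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    validate_base_url=False,
)

# Wrap it so the metric receives an LLM object with a generate method, not a plain function
evaluator_llm = LangchainLLMWrapper(azure_model)
faithfulness_metric = Faithfulness(llm=evaluator_llm)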

You can also compute the score in other ways, for example synchronously:

faithfulness_metric.single_turn_score(sample=sample)

Output

1

or by awaiting the coroutine directly (for example, inside a notebook or another async function):

await faithfulness_metric.single_turn_ascore(sample)

Output

1

Let me know if it works out for you.
