Describe the Bug
When validating with a Guard, null values returned by the LLM for fields declared as nullable are converted to True in the validated output.
To Reproduce
import openai
from guardrails import Guard
from pydantic import BaseModel, Field
from typing import Optional


class Car(BaseModel):
    name: str = Field(description="name of the car")
    # Nullable field: the LLM may legitimately return null here
    color: Optional[str] = Field(description="This field is nullable")


guard = Guard.from_pydantic(output_class=Car)

raw_llm_output, validated_output, *others = response = guard(
    llm_api=openai.chat.completions.create,
    prompt="""Generate a car object. Only add fields described below. Do not add any new fields.
    ${gr.complete_json_suffix}""",
)

print(validated_output)
Expected behavior
The validated output should be {'name': 'Honda', 'color': None}.
Instead we get {'name': 'Honda', 'color': True}.
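The same coercion can be reproduced without calling the OpenAI API by validating a fixed output string. This is a minimal sketch, assuming Guard.parse accepts a raw LLM output string (as in the 0.4.x API); the JSON payload here is invented for illustration:

import json

from guardrails import Guard
from pydantic import BaseModel, Field
from typing import Optional


class Car(BaseModel):
    name: str = Field(description="name of the car")
    color: Optional[str] = Field(description="This field is nullable")


guard = Guard.from_pydantic(output_class=Car)

# Hypothetical raw LLM output with an explicit null for the nullable field.
raw_output = json.dumps({"name": "Honda", "color": None})

# Guard.parse validates a pre-existing output string instead of calling an LLM.
outcome = guard.parse(llm_output=raw_output)

print(outcome.validated_output)
# Expected: {'name': 'Honda', 'color': None}
# Observed (this bug): {'name': 'Honda', 'color': True}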
Library version:
Version (e.g. 0.4.0)