Describe the bug
I'm working on a small research project to implement guardrails effectively on open-source LLMs such as GPT4All and other Nomic and EleutherAI models. While implementing Guardrails AI on GPT4All, I ran into the following problem:
To Reproduce
Steps to reproduce the behavior:
Rail spec: [provided as an image attachment]
Runtime arguments:
from guardrails import Guard
from gpt4all import GPT4All

model = GPT4All("...")  # model name omitted in the original report
guard = Guard.from_rail("...")  # rail spec shown only as an image above
raw_llm_output, validated_output = guard(
    model.generate,
    prompt_params={"question": "Give me a brief history of the second world war?"},
    temperature=0,
)
Expected behavior
Having already implemented Guardrails AI with OpenAI's ChatGPT, I expect this code to work in a similar way: the call should return a validated output that has been checked against the Guardrails spec I created.
Error Message:
The callable fn passed to Guard(fn, ...) failed with the following error: GPT4All.generate() got an unexpected keyword argument 'temperature'. Make sure that fn can be called as a function that takes in a single prompt string and returns a string.
My Understanding of the Problem
From what I've been able to understand, GPT4All's generate() uses the keyword temp for temperature, e.g.
model.generate(prompt='what is the capital of France?', temp=0)
Guardrails AI, on the other hand, passes a temperature keyword argument to the callable. I think these two signatures clash.
If my assessment is incorrect, what is the actual problem and how can it be solved? If I am right, how can this clash be resolved?
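If the clash is indeed the cause, one workaround I can imagine (an untested sketch; the wrapper name is mine, and it assumes guard() forwards extra keyword arguments to the callable) is a small adapter that renames temperature to GPT4All's temp:

def generate_with_temp(prompt: str, temperature: float = 0, **kwargs) -> str:
    # Guardrails passes 'temperature'; GPT4All.generate() expects 'temp'.
    # Any other forwarded kwargs are accepted but ignored here.
    return model.generate(prompt, temp=temperature)

raw_llm_output, validated_output = guard(
    generate_with_temp,
    prompt_params={"question": "Give me a brief history of the second world war?"},
    temperature=0,
)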