## Code:
```python
from llm_guard import scan_output
from llm_guard.output_scanners import Deanonymize, NoRefusal, Relevance, Sensitive

vault = Vault()
output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive(), Gibberish()]

sanitized_response_text, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response_text
)
if any(not result for result in results_valid.values()):
    print(f"Output {response_text} is not valid, scores: {results_score}")
    exit(1)

print(f"Output: {sanitized_response_text}\n")
```
## Error:

```
TypeError                                 Traceback (most recent call last)
Input In [53], in <cell line: 7>()
      4 vault = Vault()
      5 output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive(), Gibberish()]
----> 7 sanitized_response_text, results_valid, results_score = scan_output(
      8     output_scanners, sanitized_prompt, response_text
      9 )
     10 if any(not result for result in results_valid.values()):
     11     print(f"Output {response_text} is not valid, scores: {results_score}")

File /opt/anaconda3/envs/genaiops/lib/python3.10/site-packages/llm_guard/evaluate.py:101, in scan_output(scanners, prompt, output, fail_fast)
     99 for scanner in scanners:
    100     start_time_scanner = time.time()
--> 101     sanitized_output, is_valid, risk_score = scanner.scan(prompt, sanitized_output)
    102     elapsed_time_scanner = time.time() - start_time_scanner
    104     LOGGER.debug(
    105         "Scanner completed",
    106         scanner=type(scanner).__name__,
    107         is_valid=is_valid,
    108         elapsed_time_seconds=round(elapsed_time_scanner, 6),
    109     )

TypeError: Gibberish.scan() takes 2 positional arguments but 3 were given
```
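The traceback shows that `scan_output()` calls `scanner.scan(prompt, sanitized_output)` with two arguments, while the `Gibberish` instance accepts only one. That signature matches the *input* scanner (`llm_guard.input_scanners.Gibberish`, whose `scan()` takes only the prompt), so the `Gibberish` in the scanner list was most likely imported from `llm_guard.input_scanners`. llm-guard also ships an output-scanner variant in `llm_guard.output_scanners` whose `scan(prompt, output)` fits what `scan_output()` expects. A minimal sketch of the likely fix, assuming a recent llm-guard version that includes the output-scanner `Gibberish`, and with `sanitized_prompt` and `response_text` defined earlier in the notebook as in the original snippet:

```python
from llm_guard import scan_output
from llm_guard.output_scanners import (
    Deanonymize,
    Gibberish,  # output-scanner variant: scan(prompt, output)
    NoRefusal,
    Relevance,
    Sensitive,
)
from llm_guard.vault import Vault

vault = Vault()
output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive(), Gibberish()]

# sanitized_prompt and response_text are assumed to come from the
# earlier prompt-scanning step, as in the original snippet.
sanitized_response_text, results_valid, results_score = scan_output(
    output_scanners, sanitized_prompt, response_text
)
if any(not result for result in results_valid.values()):
    print(f"Output {response_text} is not valid, scores: {results_score}")
    exit(1)

print(f"Output: {sanitized_response_text}\n")
```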