Commit: Update readme

thekaranacharya committed Mar 19, 2024
1 parent 141fa4c commit 556ff15
Showing 1 changed file with 24 additions and 46 deletions: README.md
| --- | --- |
| Date of development | Feb 15, 2024 |
| Validator type | Format |
| Blog | - |
| License | Apache 2 |
| Input/Output | Output |

Scans LLM outputs for strings that could cause browser script execution downstream.

Use this validator when you are passing the results of your LLM requests directly to a browser or another HTML-executable environment. It is a good idea to also implement other XSS and code-injection prevention techniques alongside it.
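One complementary layer is to sanitize the model output before rendering it. The sketch below uses the `bleach` library (this validator's own dependency, listed under Requirements below); the tag allow-list is a hypothetical choice for illustration:

```python
import bleach

llm_output = "<b>Hello</b> <script>alert('XSS')</script>"

# Strip everything outside a small allow-list of formatting tags.
# The allow-list below is illustrative, not a recommendation.
safe_html = bleach.clean(
    llm_output,
    tags=["b", "i", "em", "strong", "p"],
    strip=True,
)
print(safe_html)  # script tags are removed; allow-listed tags are kept
```

Even with sanitization in place, this validator remains useful as a detection layer: it lets you reject or log suspicious generations instead of silently rendering them.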

## Requirements
* `bleach`

## Installation

```bash
guardrails hub install hub://guardrails/web_sanitization
```

## Usage Examples
In this example, we apply the validator to a string output generated by an LLM.

```python
# Import Guard and Validator
from guardrails.hub import WebSanitization
from guardrails import Guard

# Use the Guard with the validator
guard = Guard().use(WebSanitization, on_fail="exception")

# Test passing response
guard.validate(
    """MetaAI's Llama2 is the latest in their open-source LLM series.
    It is a powerful language model."""
)

try:
    # Test failing response
    guard.validate(
        """MetaAI's Llama2 is the latest in their open-source LLM series.
        It is a powerful language model. <script>alert('XSS')</script>"""
    )
except Exception as e:
    print(e)
```
Output:
```console
Validation failed for field with errors: The output contains a web injection attack.
```
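
You can also exercise the validator on its own, outside of a `Guard`, which is handy for unit tests. A minimal sketch, assuming the `__call__(self, value, metadata={})` interface documented under API Reference below (the printed result type may vary across Guardrails versions):

```python
from guardrails.hub import WebSanitization

# Instantiate the validator directly (no Guard involved)
validator = WebSanitization()

# Per the API Reference, calling the validator returns a ValidationResult
# describing whether the value passed the web-sanitization check
clean = validator("Llama2 is a powerful language model.", metadata={})
risky = validator("<script>alert('XSS')</script>", metadata={})

print(clean)
print(risky)
```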

# API Reference
Initializes a new instance of the WebSanitization validator class.

<br>

**`__call__(self, value, metadata={}) → ValidationResult`**

<ul>

