Commit

fix broken links
zsimjee committed Dec 21, 2023
1 parent bb59dbc commit 88f1075
Showing 13 changed files with 21 additions and 19 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,3 +1,4 @@
docs/api_reference_markdown/
openai_api_key.txt
*__pycache__*
data/*
2 changes: 1 addition & 1 deletion docs/concepts/instructions.md
@@ -9,7 +9,7 @@ In addition to any static text describing the context of the task, instructions
| Component | Syntax | Description |
|-------------------|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Variables | `${variable_name}` | These are provided by the user at runtime, and substituted in the instructions. |
| Output Schema | `${output_schema}` | This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the instructions, check out [`output` element compilation](../output/#adding-compiled-output-element-to-prompt). |
| Output Schema | `${output_schema}` | This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the instructions, check out [`output` element compilation](/docs/concepts/output/#adding-compiled-output-element-to-prompt) |
| Prompt Primitives | `${gr.prompt_primitive_name}` | These are pre-constructed blocks of text that are useful for common tasks. E.g., some primitives may contain information that helps the LLM understand the output schema better. To see the full list of prompt primitives, check out [`guardrails/constants.xml`](https://github.com/guardrails-ai/guardrails/blob/main/guardrails/constants.xml). |
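
To make these components concrete, here is a hedged sketch of an `<instructions>` block that uses all three (the task text and the `${user_question}` variable are illustrative, not taken from the docs; `${gr.complete_json_suffix_v2}` is assumed to be one of the available prompt primitives):

```py
# Illustrative RAIL-style <instructions> block held in a Python string.
# ${user_question} is substituted at runtime; ${output_schema} and the
# gr.* primitive are substituted by Guardrails when the prompt is compiled.
instructions_rail = """
<instructions>
    You are a helpful assistant answering customer questions.

    Answer this question: ${user_question}

    ${output_schema}

    ${gr.complete_json_suffix_v2}
</instructions>
"""
```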


4 changes: 2 additions & 2 deletions docs/concepts/logs.md
@@ -4,7 +4,7 @@ All `Guard` calls are logged internally, and can be accessed via the guard histo

## 🇻🇦 Accessing logs via `Guard.history`

`history` is an attribute of the `Guard` class. It implements a standard `Stack` interface with a few extra helper methods and properties. For more information on our `Stack` implementation see the [Helper Classes](/api_reference/helper_classes) page.
`history` is an attribute of the `Guard` class. It implements a standard `Stack` interface with a few extra helper methods and properties. For more information on our `Stack` implementation see the [Helper Classes](/docs/api_reference_markdown/helper_classes) page.

Each entry in the history stack is a `Call` log which will contain information specific to a particular `Guard.__call__` or `Guard.parse` call in the order that they were executed within the current session.

@@ -188,4 +188,4 @@ completion token usage: 16
token usage for this step: 633
```

For more information on the properties available on `Iteration`, ee the [History & Logs](/api_reference/history_and_logs/#guardrails.classes.history.Iteration) page.
For more information on the properties available on `Iteration`, see the [History & Logs](/docs/api_reference_markdown/history_and_logs/#guardrails.classes.history.Iteration) page.
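
As a quick, hedged sketch of reading these logs (attribute names follow the descriptions above and the API reference; treat them as assumptions rather than guaranteed signatures):

```py
# Assumes `guard` is a Guard that has been called at least once in this session.
last_call = guard.history.last     # most recent Call log on the stack
print(len(last_call.iterations))   # one Iteration per LLM round trip (initial call plus reasks)
print(last_call.tokens_consumed)   # total token usage across the call (assumed property name)
print(last_call.validated_output)  # final, validated output for the call (assumed property name)
```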
4 changes: 2 additions & 2 deletions docs/concepts/output.md
@@ -196,7 +196,7 @@ At the heart of the `RAIL` specification is the use of elements. Each element's

Guardrails supports many data types, including `string`, `integer`, `float`, `bool`, `list`, `object`, `url`, `email`, and many more.

Check out the [RAIL Data Types](../data_types.md) page for a list of supported data types.
Check out the [RAIL Data Types](/docs/api_reference_markdown/datatypes) page for a list of supported data types.


#### Scalar vs Non-scalar types
@@ -300,7 +300,7 @@ Each quality criteria is then checked against the generated output. If the quali
### Supported criteria

- Each quality criteria is relevant to a specific data type. For example, the `two-words` quality criteria is only relevant to strings, and the `positive` quality criteria is only relevant to integers and floats.
- To see the full list of supported quality criteria, check out the [Validation](../api_reference/validators.md) page.
- To see the full list of supported quality criteria, check out the [Validation](/docs/api_reference_markdown/validators) page.
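
As a hedged illustration, a quality criterion is attached to a field through its `format` attribute, with the matching `on-fail-...` attribute naming the corrective action (the field name below is invented):

```py
# A RAIL fragment held in a Python string: `two-words` is the quality criterion,
# and `on-fail-two-words="reask"` tells Guardrails to re-prompt the LLM on failure.
rail_output = """
<output>
    <string name="pet_name" description="A name for the pet" format="two-words" on-fail-two-words="reask" />
</output>
"""
```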


## 🛠️ Specifying corrective actions
2 changes: 1 addition & 1 deletion docs/concepts/prompt.md
@@ -9,7 +9,7 @@ In addition to the high level task description, the prompt also contains the fol
| Component | Syntax | Description |
|-------------------|--------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Variables | `${variable_name}` | These are provided by the user at runtime, and substituted in the prompt. |
| Output Schema | `${output_schema}` | This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the prompt, check out [`output` element compilation](../output/#adding-compiled-output-element-to-prompt). |
| Output Schema | `${output_schema}` | This is the schema of the expected output, and is compiled based on the `output` element. For more information on how the output schema is compiled for the prompt, check out [`output` element compilation](/docs/concepts/output/#adding-compiled-output-element-to-prompt). |
| Prompt Primitives | `${gr.prompt_primitive_name}` | These are pre-constructed prompts that are useful for common tasks. E.g., some primitives may contain information that helps the LLM understand the output schema better. To see the full list of prompt primitives, check out [`guardrails/constants.xml`](https://github.com/guardrails-ai/guardrails/blob/main/guardrails/constants.xml). |

6 changes: 3 additions & 3 deletions docs/concepts/validators.md
@@ -3,7 +3,7 @@
Validators are how we apply quality controls to the schemas specified in our `RAIL` specs. They specify the criteria to measure whether an output is valid, as well as what actions to take when an output does not meet those criteria.

## How do Validators work?
When a validator is applied to a property on a schema, and output is provided for that schema, either by wrapping the LLM call or passing in the LLM output, the validators are executed against the values for the properties they were applied to. If the value for the property passes the criteria defined, a `PassResult` is returned from the validator. This `PassResult` tells Guardrails to treat the value as if it is valid. In most cases this means returning that value for that property at the end; other advanced cases, like using a value override, will be covered in other sections. If, however, the value for the property does not pass the criteria, a `FailResult` is returned. This in turn tells Guardrails to take any corrective actions defined for the property and validation. Corrective actions are defined by the `on-fail-...` attributes in a `RAIL` spec. You can read more about what corrective actions are available [here](/concepts/output/#specifying-corrective-actions).
When a validator is applied to a property on a schema, and output is provided for that schema, either by wrapping the LLM call or passing in the LLM output, the validators are executed against the values for the properties they were applied to. If the value for the property passes the criteria defined, a `PassResult` is returned from the validator. This `PassResult` tells Guardrails to treat the value as if it is valid. In most cases this means returning that value for that property at the end; other advanced cases, like using a value override, will be covered in other sections. If, however, the value for the property does not pass the criteria, a `FailResult` is returned. This in turn tells Guardrails to take any corrective actions defined for the property and validation. Corrective actions are defined by the `on-fail-...` attributes in a `RAIL` spec. You can read more about what corrective actions are available [here](/docs/concepts/output#%EF%B8%8F-specifying-corrective-actions).

## Validator Structure
### Arguments
@@ -49,14 +49,14 @@ raw_output, guarded_output, *rest = guard(
```

#### How do I know what metadata is required?
First step is to check the docs. Each validator has an API reference that documents both its initialization arguments and any required metadata that must be supplied at runtime. Continuing with the example used above, `ExtractedSummarySentencesMatch` accepts an optional threshold argument which defaults to `0.7`; it also requires an entry in the metadata called `filepaths` which is an array of strings specifying which documents to use for the similarity comparison. You can see an example of a Validator's metadata documentation [here](../api_reference/validators.md/#guardrails.validators.ExtractedSummarySentencesMatch).
First step is to check the docs. Each validator has an API reference that documents both its initialization arguments and any required metadata that must be supplied at runtime. Continuing with the example used above, `ExtractedSummarySentencesMatch` accepts an optional threshold argument which defaults to `0.7`; it also requires an entry in the metadata called `filepaths` which is an array of strings specifying which documents to use for the similarity comparison. You can see an example of a Validator's metadata documentation [here](/docs/api_reference_markdown/validators#extractedsummarysentencesmatch).

Secondly, if a piece of metadata is required and not present, a `RuntimeError` will be raised. For example, if the metadata requirements are not met for the above validator, a `RuntimeError` will be raised with the following message:

> extracted-sentences-summary-match validator expects `filepaths` key in metadata
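
A hedged sketch of supplying that metadata at call time (the LLM callable and file paths are placeholders):

```py
# `guard` is assumed to wrap a schema that applies ExtractedSummarySentencesMatch;
# the validator reads the `filepaths` entry from the metadata dict at runtime.
raw_output, guarded_output, *rest = guard(
    my_llm_api,  # placeholder for your LLM callable
    metadata={"filepaths": ["data/article_1.txt", "data/article_2.txt"]},
)
```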
## Custom Validators
If you need to perform a validation that is not currently supported by the [validators](../api_reference/validators.md) included in guardrails, you can create your own custom validators to be used in your local python environment.
If you need to perform a validation that is not currently supported by the [validators](/docs/api_reference_markdown/validators) included in guardrails, you can create your own custom validators to be used in your local python environment.

A custom validator can be as simple as a single function if you do not require additional arguments:
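
A minimal sketch of such a function-style validator, assuming the 0.3-era `register_validator` decorator and result classes (the validator name and rule are invented for illustration):

```py
from typing import Any, Dict

from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    register_validator,
)


@register_validator(name="starts-with-a", data_type="string")
def starts_with_a(value: Any, metadata: Dict) -> ValidationResult:
    """Pass when the value starts with 'a'; otherwise fail with an explanation."""
    if str(value).lower().startswith("a"):
        return PassResult()
    return FailResult(error_message=f"Value '{value}' does not start with 'a'.")
```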
4 changes: 2 additions & 2 deletions docs/defining_guards/pydantic.ipynb
@@ -23,7 +23,7 @@
"metadata": {},
"source": [
"## Structured outputs\n",
"Guardrails uses pydantic to describe and validate structured output. For unstructured output, see <a href='/defining_guards/strings'>Strings</a>.\n",
"Guardrails uses pydantic to describe and validate structured output. For unstructured output, see <a href='/docs/defining_guards/strings'>Strings</a>.\n",
"\n",
"Let's say you want an LLM to generate fake pets. We can model a Pet as class that inherits from <a href=\"https://docs.pydantic.dev/latest/api/base_model/\">BaseModel</a>. Each field can take descriptions and validators."
]
@@ -92,7 +92,7 @@
"metadata": {},
"source": [
"### Structured output with validation\n",
"Now that we have our LLM responding to us in JSON with the structured information we're asking for, we can add validations and corrective actions. Below, we've added a validator to the 'name' field that ensures the name cannot be null. We've also added an on_fail action of \"reask\" if the name is null. What this does is reasks the LLM if the validation fails. Check the <a href=\"api_reference/validators\">Validators API Spec</a> for a list of standard validators, or you can write your own."
"Now that we have our LLM responding to us in JSON with the structured information we're asking for, we can add validations and corrective actions. Below, we've added a validator to the 'name' field that ensures the name cannot be null. We've also added an on_fail action of \"reask\" if the name is null. What this does is reasks the LLM if the validation fails. Check the <a href=\"/docs/api_reference_markdown/validators\">Validators API Spec</a> for a list of standard validators, or you can write your own."
]
},
{
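
For orientation, a hedged sketch of the kind of Pydantic model and Guard the notebook above builds (the class, field names, and prompt are assumptions, not the notebook's exact code):

```py
import guardrails as gd
from pydantic import BaseModel, Field


class Pet(BaseModel):
    pet_type: str = Field(description="Species of pet")
    name: str = Field(description="A unique pet name")


# Wrap the model in a Guard; the gr.* primitive asks the LLM to reply as JSON matching the schema.
guard = gd.Guard.from_pydantic(
    output_class=Pet,
    prompt="What kind of pet should I get and what should I name it? ${gr.complete_json_suffix_v2}",
)
```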
2 changes: 1 addition & 1 deletion docs/defining_guards/strings.ipynb
@@ -12,7 +12,7 @@
"\n",
"Guardrails additionally supports single-field, string outputs. This is useful for running validations on simple, single-value outputs.\n",
"\n",
"The specification for writing up validators and including failure modes is identical to what we see in <a href='/defining_guards/pydantic'>pydantic</a>.\n",
"The specification for writing up validators and including failure modes is identical to what we see in <a href='/docs/defining_guards/pydantic'>pydantic</a>.\n",
"\n",
"### Simple Example\n",
"\n",
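
A hedged sketch of the string-output path this notebook describes, assuming the `Guard.from_string` constructor and the `TwoWords` validator from this release line:

```py
import guardrails as gd
from guardrails.validators import TwoWords

# A single string output, validated by TwoWords and re-asked on failure.
guard = gd.Guard.from_string(
    validators=[TwoWords(on_fail="reask")],
    description="A two-word name for a new puppy.",
)
```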
6 changes: 3 additions & 3 deletions docs/guardrails_ai/getting_started.ipynb
@@ -154,9 +154,9 @@
"source": [
"#### Specifying quality criteria\n",
"\n",
"Next, we want to specify the quality criteria for the output to be valid and corrective actions to be taken if the output is invalid. We can do this by adding a `format` tag to each field in the output schema. Format tags can either be enforced by Guardrails, or they can only be suggetions to the LLM. You can see the list of validators enforced by Guardrails [here](../api_reference/validators.md). Additionally, you can create your own custom validators, see examples here [1](../examples/no_secrets_in_generated_text/pydantic), [2](../examples/recipe_generation/pydantic), [3](../examples/valid_chess_moves/pydantic).\n",
"Next, we want to specify the quality criteria for the output to be valid and corrective actions to be taken if the output is invalid. We can do this by adding a `format` tag to each field in the output schema. Format tags can either be enforced by Guardrails, or they can only be suggetions to the LLM. You can see the list of validators enforced by Guardrails [here](../api_reference_markdown/validators.md). Additionally, you can create your own custom validators, see examples here [1](../examples/no_secrets_in_generated_text), [2](../examples/recipe_generation), [3](../examples/valid_chess_moves).\n",
"\n",
"As an example, for our use case we specify that the `affected_area` of `symptoms` should be one of the following: `['head', 'neck', 'chest']`. For this, we use the [`valid-choices` validator](https://docs.guardrailsai.com/api_reference/validators/#guardrails.validators.ValidChoices).\n",
"As an example, for our use case we specify that the `affected_area` of `symptoms` should be one of the following: `['head', 'neck', 'chest']`. For this, we use the [`valid-choices` validator](https://www.guardrailsai.com/docs/api_reference_markdown/validators#validchoices).\n",
"\n",
"\n",
"#### Specifying corrective actions\n",
@@ -165,7 +165,7 @@
"\n",
"For example, we can specify that if the `affected_area` of a symptom is not one of the valid choices, we should re-prompt the LLM to correct its output.\n",
"\n",
"We do this by adding the `on-fail-valid-choices='reask'` attribute to the `affected_area` field. To see the full list of corrective actions, see [here](https://docs.guardrailsai.com/concepts/output/#specifying-corrective-actions).\n",
"We do this by adding the `on-fail-valid-choices='reask'` attribute to the `affected_area` field. To see the full list of corrective actions, see [here](https://www.guardrailsai.com/docs/concepts/output#%EF%B8%8F-specifying-corrective-actions).\n",
"\n",
"\n",
"Finally, our updated output schema looks like:"
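
For reference, a hedged Pydantic-style equivalent of the `valid-choices` criterion and `reask` action described above (the notebook's own schema uses RAIL XML; the class and field names here are assumptions):

```py
from pydantic import BaseModel, Field
from guardrails.validators import ValidChoices


class Symptom(BaseModel):
    symptom: str = Field(description="Symptom that a patient is experiencing")
    affected_area: str = Field(
        description="What part of the body the symptom is affecting",
        # valid-choices criterion with a `reask` corrective action on failure
        validators=[ValidChoices(choices=["head", "neck", "chest"], on_fail="reask")],
    )
```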
4 changes: 2 additions & 2 deletions docs/index.md
@@ -18,7 +18,7 @@ Guardrails AI is the leading open-source framework to define and enforce assuran

Guardrails provides an object definition called a `Rail` for enforcing a specification on an LLM output, and a lightweight wrapper called a `Guard` around LLM API calls to implement this spec.

1. `rail` (**R**eliable **AI** markup **L**anguage) files for specifying structure and type information, validators and corrective actions over LLM outputs. The concept of a Rail has evolved from markup - Rails can be defined in either <a href='/defining_guards/pydantic'>Pydantic</a> or <a href='/defining_guards/rail'>RAIL</a> for structured outputs, or directly in <a href='/defining_guards/strings'>Python</a> for string outputs.
1. `rail` (**R**eliable **AI** markup **L**anguage) files for specifying structure and type information, validators and corrective actions over LLM outputs. The concept of a Rail has evolved from markup - Rails can be defined in either <a href='/docs/defining_guards/pydantic'>Pydantic</a> or <a href='/docs/defining_guards/rail'>RAIL</a> for structured outputs, or directly in <a href='/docs/defining_guards/strings'>Python</a> for string outputs.
2. `Guard` wraps around LLM API calls to structure, validate and correct the outputs.

```mermaid
@@ -27,7 +27,7 @@ graph LR
B --> C["Wrap LLM API call with `guard`"];
```

Check out the [Getting Started](guardrails_ai/getting_started) guide to learn how to use Guardrails.
Check out the [Getting Started](/docs/guardrails_ai/getting_started) guide to learn how to use Guardrails.
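
A compact, hedged sketch of those two steps (the RAIL spec is minimal and the LLM callable is a placeholder):

```py
import guardrails as gd

rail_string = """
<rail version="0.1">
<output>
    <string name="answer" description="A short answer to the question" />
</output>
<prompt>
    Answer the question: ${question}

    ${gr.complete_json_suffix_v2}
</prompt>
</rail>
"""

# Step 1: define the Rail; Step 2: wrap the LLM API call with the guard.
guard = gd.Guard.from_rail_string(rail_string)
outcome = guard(my_llm_api, prompt_params={"question": "What does Guardrails do?"})  # my_llm_api is a placeholder
```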

## 📍 Roadmap

2 changes: 1 addition & 1 deletion docs/migration_guides/0-3-migration.md
@@ -7,7 +7,7 @@ _Validation Outcome_

Previously, when calling `__call__` or `parse` on a Guard, the Guard would return a tuple of the raw LLM output and the validated output, or just the validated output, respectively.

Now, in order to communicate more information, we respond with a `ValidationOutcome` class that contains the above information and more. See [ValidationOutcome](/docs/api_reference/validation_outcome/#ValidationOutcome) in the API Reference for more information on these additional properties.
Now, in order to communicate more information, we respond with a `ValidationOutcome` class that contains the above information and more. See [ValidationOutcome](/docs/api_reference_markdown/validation_outcome/#ValidationOutcome) in the API Reference for more information on these additional properties.

In order to limit how much this change breaks the current experience, we made this class iterable so you can still deconstruct its properties.
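
A hedged sketch of both styles (property names follow the `ValidationOutcome` described above; treat them as assumptions):

```py
outcome = guard(my_llm_api, prompt_params={"topic": "..."})  # returns a ValidationOutcome; my_llm_api is a placeholder

# The old tuple-style unpacking still works because ValidationOutcome is iterable:
raw_llm_output, validated_output, *rest = outcome

# Or read the named properties directly:
print(outcome.validated_output)
print(outcome.validation_passed)
```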

2 changes: 1 addition & 1 deletion docusaurus/sidebars.js
@@ -84,7 +84,7 @@ const sidebars = {
collapsed: false,
items: [
"migration_guides/0-2-migration",
"0-3-migration",
"migration_guides/0-3-migration",
],
},
{
1 change: 1 addition & 0 deletions package.json
@@ -6,6 +6,7 @@
"docusaurus": "docusaurus",
"prebuild": "poetry install --all-extras; make docs-gen; node docusaurus/prebuild.js",
"start": "npm run prebuild; docusaurus start --config docusaurus/docusaurus.config.js",
"clean-start": "rm -rf docs/api_reference_markdown; rm -rf docs-build; npm run start",
"build": "docusaurus build --config docusaurus/docusaurus.config.js",
"swizzle": "docusaurus swizzle",
"deploy": "docusaurus deploy",