
Various docs fixes #1429

Merged: 6 commits, Feb 20, 2025
4 changes: 1 addition & 3 deletions docs/api/models.md
@@ -1,3 +1 @@
-::: outlines.models.transformers
-
-::: outlines.models.openai
+::: outlines.models
7 changes: 0 additions & 7 deletions docs/community/contribute.md
@@ -48,13 +48,6 @@ conda env create -f environment.yml
 
 Then install the dependencies in editable mode, and install the `pre-commit` hooks:
 
-```shell
-python -m venv .venv
-source .venv/bin/activate
-```
-
-Then install the dependencies in editable mode, and install the pre-commit hooks:
-
 ```shell
 pip install -e ".[test]"
 pre-commit install

[Review comment, Contributor Author] this was repeated
8 changes: 5 additions & 3 deletions outlines/models/exllamav2.py
@@ -118,9 +118,11 @@ def reformat_output(
 self, output: Union[str, List[str]], sampling_parameters: SamplingParameters
 ):
 """
-The purpose of this function is to reformat the output from exllamav2's output format to outline's output format
-For exllamav2, it mainly accepts only a list or a string(they also do cfg sampling with tuples but we will ignore this for now)
-The exllamav2's logic is
+The purpose of this function is to reformat the output from exllamav2's output format to outline's output format.
+
+For exllamav2, it mainly accepts only a list or a string(they also do cfg sampling with tuples but we will ignore this for now).
+The exllamav2's logic is:
+
 1. If the prompt is a string, return a string. This is the same as outlines
 2. If a prompt is a list, return a list. This is not the same as outlines output in that if the list is only one element, the string is expected to be outputted.
 3. There is no such thing as num_samples, so the prompts had to be duplicated by num_samples times. Then, we had the function output a list of lists

[Review comment, Contributor Author] (we need a new line before the list)
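To make the three rules in that docstring concrete, here is a minimal sketch of the reshaping they describe; the function name and signature are hypothetical, assuming the flat output holds `num_samples` generations per prompt:

```python
from typing import List, Union


def reshape_output(
    output: List[str], prompts: Union[str, List[str]], num_samples: int
) -> Union[str, List[str], List[List[str]]]:
    """Hypothetical sketch of the docstring's rules, not the library's code."""
    # Rule 1: a single string prompt with one sample yields a plain string.
    if isinstance(prompts, str) and num_samples == 1:
        return output[0]
    # Rule 3: regroup the flat output into num_samples generations per prompt.
    grouped = [
        output[i : i + num_samples] for i in range(0, len(output), num_samples)
    ]
    # Rule 2: a list of prompts yields a list with one entry per prompt.
    if num_samples == 1:
        return [samples[0] for samples in grouped]
    return grouped
```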
12 changes: 6 additions & 6 deletions outlines/models/llamacpp.py
@@ -248,8 +248,8 @@ def generate(
 ) -> str:
 """Generate text using `llama-cpp-python`.
 
-Arguments
----------
+Parameters
+----------
 prompts
 A prompt or list of prompts.
 generation_parameters
@@ -302,8 +302,8 @@ def stream(
 ) -> Iterator[str]:
 """Stream text using `llama-cpp-python`.
 
-Arguments
----------
+Parameters
+----------
 prompts
 A prompt or list of prompts.
 generation_parameters
@@ -372,8 +372,8 @@ def llamacpp(
 a path to the downloaded model. One can still load a local model
 by initializing `llama_cpp.Llama` directly.
 
-Arguments
----------
+Parameters
+----------
 repo_id
 The name of the model repository.
 filename:
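For reference, these changes align the docstrings with the NumPy docstring convention, where the section is headed `Parameters` and underlined with dashes of matching length. A minimal illustration on a hypothetical function:

```python
def greet(name: str, excited: bool = False) -> str:
    """Return a greeting.

    Parameters
    ----------
    name
        The name to greet.
    excited
        Whether to end with an exclamation mark.

    Returns
    -------
    str
        The formatted greeting.
    """
    return f"Hello, {name}{'!' if excited else '.'}"
```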
25 changes: 15 additions & 10 deletions outlines/models/mlxlm.py
@@ -49,8 +49,8 @@ def stream(
 ) -> Iterator[str]:
 """Generate text using `mlx_lm`.
 
-Arguments
----------
+Parameters
+----------
 prompts
 A prompt or list of prompts.
 generation_parameters
@@ -63,6 +63,7 @@ def stream(
 An instance of `SamplingParameters`, a dataclass that contains
 the name of the sampler to use and related parameters as available
 in Outlines.
+
 Returns
 -------
 The generated text.
@@ -135,14 +136,18 @@ def generate_step(
 
 A generator producing token ids based on the given prompt from the model.
 
-Args:
-prompt (mx.array): The input prompt.
-temp (float): The temperature for sampling, if 0 the argmax is used.
-Default: ``0``.
-top_p (float, optional): Nulceus sampling, higher means model considers
-more less likely words.
-sampler (str): The sampler string defined by SequenceGeneratorAdapter
-logits_processor (OutlinesLogitsProcessor): Augment logits before sampling.
+Parameters
+----------
+prompt
+The input prompt.
+temp
+The temperature for sampling, if 0 the argmax is used.
+top_p
+Nulceus sampling, higher means model considers more less likely words.
+sampler
+The sampler string defined by SequenceGeneratorAdapter
+logits_processor
+Augment logits before sampling.
 """
 import mlx.core as mx
 import mlx_lm
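Since the reworked docstring documents `temp` and `top_p`, here is a brief, hedged sketch of what temperature and nucleus (top-p) sampling conventionally mean; this illustrates the concept only and is not `mlx_lm`'s implementation:

```python
import numpy as np


def sample_token(logits: np.ndarray, temp: float = 1.0, top_p: float = 1.0) -> int:
    """Conceptual sketch of temperature + nucleus sampling (not mlx_lm code)."""
    # temp == 0 means greedy decoding: take the argmax.
    if temp == 0:
        return int(np.argmax(logits))
    # Temperature rescales logits before the softmax; higher temp flattens
    # the distribution so less likely words are considered more often.
    scaled = logits / temp
    probs = np.exp(scaled - np.max(scaled))
    probs /= probs.sum()
    # Nucleus sampling: keep the smallest set of tokens whose cumulative
    # probability reaches top_p, then renormalize and sample from that set.
    order = np.argsort(probs)[::-1]
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), top_p)) + 1
    kept = order[:cutoff]
    return int(np.random.choice(kept, p=probs[kept] / probs[kept].sum()))
```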
5 changes: 2 additions & 3 deletions outlines/models/openai.py
@@ -20,11 +20,11 @@ class OpenAIConfig:
 properties that are specific to the OpenAI API. Not all these properties are
 supported by Outlines.
 
-Properties
+Parameters
 ----------
 model
 The name of the model. Available models can be found on OpenAI's website.
-frequence_penalty
+frequency_penalty
 Number between 2.0 and -2.0. Positive values penalize new tokens based on
 their existing frequency in the text,
 logit_bias
@@ -49,7 +49,6 @@ class OpenAIConfig:
 Number between 0 and 1. Parameter for nucleus sampling.
 user
 A unique identifier for the end-user.
-
 """
 
 model: str = ""
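As a hedged usage sketch, constructing such a config might look like the following; the model name is an illustrative placeholder, and `frequency_penalty` is assumed to be a dataclass field since the docstring documents it:

```python
from outlines.models.openai import OpenAIConfig

# Illustrative values only; see OpenAI's documentation for available models.
config = OpenAIConfig(
    model="gpt-4o-mini",    # placeholder model name
    frequency_penalty=0.5,  # positive values discourage repetition
)
```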
6 changes: 3 additions & 3 deletions outlines/models/transformers.py
@@ -203,8 +203,8 @@ def generate(
 ) -> Union[str, List[str], List[List[str]]]:
 """Generate text using `transformers`.
 
-Arguments
----------
+Parameters
+----------
 prompts
 A prompt or list of prompts.
 generation_parameters
@@ -304,7 +304,7 @@ def _get_generation_kwargs(
 sampling_parameters: SamplingParameters,
 ) -> dict:
 """
-Conert outlines generation parameters into model.generate kwargs
+Convert outlines generation parameters into model.generate kwargs
 """
 from transformers import GenerationConfig, LogitsProcessorList, set_seed
 
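A simplified, hypothetical sketch of the kind of conversion `_get_generation_kwargs` performs, assuming flat sampling arguments rather than the actual dataclasses:

```python
from typing import Optional

from transformers import GenerationConfig


def sketch_generation_kwargs(
    max_tokens: Optional[int],
    temperature: Optional[float],
    top_p: Optional[float],
) -> dict:
    """Hypothetical simplification of mapping sampling params to kwargs."""
    generation_config = GenerationConfig(
        max_new_tokens=max_tokens,
        do_sample=temperature is not None,  # sample only when temperature set
        temperature=temperature,
        top_p=top_p,
    )
    return {"generation_config": generation_config}
```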
4 changes: 2 additions & 2 deletions outlines/models/transformers_vision.py
@@ -22,8 +22,8 @@ def generate( # type: ignore
 ) -> Union[str, List[str], List[List[str]]]:
 """Generate text using `transformers`.
 
-Arguments
----------
+Parameters
+----------
 prompts
 A prompt or list of prompts.
 media
6 changes: 3 additions & 3 deletions outlines/models/vllm.py
@@ -52,8 +52,8 @@ def generate(
 ):
 """Generate text using vLLM.
 
-Arguments
----------
+Parameters
+----------
 prompts
 A prompt or list of prompts.
 generation_parameters
@@ -171,7 +171,7 @@ def load_lora(self, adapter_path: Optional[str]):
 def vllm(model_name: str, **vllm_model_params):
 """Load a vLLM model.
 
-Arguments
+Parameters
 ---------
 model_name
 The name of the model to load from the HuggingFace hub.
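A hedged usage sketch for the loader above; the model name is an illustrative placeholder, and extra keyword arguments are forwarded to the vLLM engine per the signature:

```python
import outlines

# Extra kwargs are passed through to vLLM (`**vllm_model_params`).
model = outlines.models.vllm(
    "microsoft/Phi-3-mini-4k-instruct",  # placeholder model name
    dtype="half",                        # assumed vLLM engine argument
)
```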
14 changes: 10 additions & 4 deletions outlines/prompts.py
[Review comment, Contributor Author] Before/After screenshots of the rendered docstring (Screenshot 2025-02-19 at 23 50 36; Screenshot 2025-02-19 at 23 50 58).
@@ -125,42 +125,48 @@ def prompt(
 manipulation by providing some degree of encapsulation. It uses the `render`
 function internally to render templates.
 
+```pycon
 >>> import outlines
 >>>
 >>> @outlines.prompt
 >>> def build_prompt(question):
 ... "I have a ${question}"
 ...
 >>> prompt = build_prompt("How are you?")
+```
 
 This API can also be helpful in an "agent" context where parts of the prompt
 are set when the agent is initialized and never modified later. In this situation
 we can partially apply the prompt function at initialization.
 
+```pycon
 >>> import outlines
 >>> import functools as ft
 ...
 >>> @outlines.prompt
 ... def solve_task(name: str, objective: str, task: str):
-... '''Your name is {{name}}.
-.. Your overall objective is to {{objective}}.
+... \"""Your name is {{name}}.
+... Your overall objective is to {{objective}}.
 ... Please solve the following task: {{task}}
-... '''
+... \"""
 ...
 >>> hal = ft.partial(solve_task, "HAL", "Travel to Jupiter")
+```
 
 Additional Jinja2 filters can be provided as keyword arguments to the decorator.
 
+```pycon
 >>> def reverse(s: str) -> str:
 ... return s[::-1]
 ...
 >>> @outlines.prompt(filters={ 'reverse': reverse })
 ... def reverse_prompt(text):
-... '''{{ text | reverse }}'''
+... \"""{{ text | reverse }}\"""
 ...
 >>> prompt = reverse_prompt("Hello")
 >>> print(prompt)
 ... "olleH"
+```
 
 Returns
 -------
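Finally, a hedged sketch of using the partially applied prompt from the docstring example above; the task string passed at call time is illustrative:

```python
import functools as ft

import outlines


@outlines.prompt
def solve_task(name: str, objective: str, task: str):
    """Your name is {{name}}.
    Your overall objective is to {{objective}}.
    Please solve the following task: {{task}}
    """


hal = ft.partial(solve_task, "HAL", "Travel to Jupiter")
# The remaining template variable is supplied at call time.
print(hal("Compute the required delta-v."))  # illustrative task string
```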