Releases: dottxt-ai/outlines

Outlines v0.1.6

27 Nov 20:31
e9485cf


Full Changelog: 0.1.5...0.1.6

Outlines v0.1.5

22 Nov 16:21


Full Changelog: 0.1.4...0.1.5

Outlines v0.1.4

18 Nov 22:05
c406da8

What's Changed

  • Bump to outlines-core=0.1.17 for python 3.12-3.13 support by @mgoin in #1273

Full Changelog: 0.1.3...0.1.4

Outlines v0.1.3

10 Nov 10:01
d842522


Full Changelog: 0.1.2...0.1.3

Outlines v0.1.2

08 Nov 15:22
5f39ded


Full Changelog: 0.1.1...0.1.2

Outlines v0.1.1

15 Oct 12:51

The 0.1.0 release included a version of outlines-core for which wheels were not available, causing many errors for users who don't have a Rust compiler installed. We fixed this in outlines-core, but changes to the interface were pushed in the meantime, so we had to account for these before cutting this new release.

What's Changed

  • Logits processors: Update inplace, with batch operation by @lapp0 in #1192
  • Fix Broken Docs Links by @lapp0 in #1195
  • use dottxt-ai/outlines not outlines-dev/outlines in mkdocs by @lapp0 in #1194
  • Add docs on serving with LM Studio by @cpfiffer in #1205
  • Compatibility updates for next outlines-core release by @lapp0 in #1204

Full Changelog: 0.1.0...0.1.1

Outlines v0.1.0

07 Oct 14:01

⚡ Performance Improvements

  • Outlines Core: Enjoy faster FSM index construction with a new implementation (#1175).
  • 98% Reduction in Runtime Overhead: Reduced overhead by storing FSM-token-mask as tensors. (#1013)
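The mask-as-tensor change can be pictured with a toy, stdlib-only sketch (plain Python lists standing in for tensors; the names and structure here are illustrative, not outlines' actual internals): precomputing an allowed-token mask per FSM state reduces per-step filtering to one lookup plus one masking pass.

```python
# Toy sketch: precompute, for each FSM state, a boolean mask over the
# vocabulary marking which tokens are allowed from that state.

def build_state_masks(transitions, vocab_size):
    """transitions: {state: {token_id: next_state}} -> {state: [bool] * vocab_size}"""
    masks = {}
    for state, edges in transitions.items():
        mask = [False] * vocab_size
        for token_id in edges:
            mask[token_id] = True
        masks[state] = mask
    return masks

def apply_mask(logits, mask):
    """Set disallowed token logits to -inf in a single pass."""
    neg_inf = float("-inf")
    return [l if allowed else neg_inf for l, allowed in zip(logits, mask)]

# Tiny FSM over a 4-token vocabulary: state 0 allows tokens 1 and 2.
transitions = {0: {1: 0, 2: 1}, 1: {3: 1}}
masks = build_state_masks(transitions, vocab_size=4)
print(apply_mask([0.5, 1.0, 2.0, 3.0], masks[0]))
```

Because the masks depend only on the FSM, they can be built once up front; the per-decoding-step cost is just the masking pass, which is the overhead reduction the release note describes.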

💡 Enhancements

  • Unified Logits Processors: All models now use the shared outlines.processors, completed by adding llama-cpp, vLLM, and ExLlamaV2 to the integration.
  • Custom Regex Parsers: Simplify the implementation of custom Guide classes with Regex Parser support (#1039).
  • Qwen-style Byte Tokenizer Support: Now compatible with Qwen-style byte tokenizers (#1153).

πŸ› Bug Fixes

  • CFG Beta: Fixed a large number of bugs to enable beta-version grammar-based generation using Lark (#1067).
  • Fixed incorrect argument order breaking some models in models.transformers_vision (#1077).
  • Resolved OpenAI fallback tokenizer issue (#1046).
  • Option to disable tqdm bars during inference with vLLM (#1004).
  • models.llamacpp no longer includes implicit max_tokens (#996).
  • Fixed whitespace handling for models.mlxlm (#1003).
  • models.mamba now works and supports structured generation (#1040).
  • Resolved pad_token_id reset issue in TransformerTokenizer (#1068).
  • Fixed outlines.generate generator reuse causing runtime errors (#1160).

⚠️ Breaking Changes

  • outlines.integrations is now deprecated: #1061

Full Changeset


Outlines v0.0.46

22 Jun 15:23

What's Changed

  • Adding MLXLM, VLLM classes to LogitsGenerator type by @parkervg in #970
  • Fix samplers documentation by @jrinder42 in #980
  • Ensure regex matches valid JSON for "const" and "enum" with booleans, nulls, and strings by @mwootten in #972
  • Add link to docs of Multimodal Structured Generation for CVPR 2nd MMFM Challenge by @leloykun in #960
  • Fix Hugging Face Hub model ID in example code by @davanstrien in #988
  • Allow escaped strings in json_schema.py by @lapp0 in #991
  • Fix use of os.environ in documentation by @rlouf in #993
  • fix pattern-string in json_schema.py by removing anchors by @lapp0 in #995
  • Fix Incorrect Token Normalization Method for LlamaCppTokenizer by @lapp0 in #992
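The escaped-string and enum/const fixes above both concern turning JSON Schema value constraints into regular expressions. A minimal, stdlib-only sketch of the core idea (the helper name `enum_to_regex` is invented here, not outlines' API): serialize each enum member with `json.dumps` so booleans, nulls, and strings all get their exact JSON spelling, then escape and alternate.

```python
# Hedged sketch: match the JSON encoding of "enum"/"const" values
# (booleans, nulls, strings) with a single regex alternation.
import json
import re

def enum_to_regex(values):
    """Build an alternation matching the JSON encoding of each value."""
    return "|".join(re.escape(json.dumps(v)) for v in values)

pattern = enum_to_regex(["yes", True, None])
# json.dumps gives '"yes"', 'true', and 'null' respectively, so the
# regex accepts exactly those spellings and rejects e.g. Python's "True".
print(pattern)
```

Serializing through `json.dumps` rather than `str()` is the point of the fix: it keeps the regex aligned with what valid JSON output actually looks like.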

Full Changelog: 0.0.45...0.0.46

Outlines v0.0.45

17 Jun 19:34


Full Changelog: 0.0.44...0.0.45

Outlines v0.0.44

14 Jun 10:43

What's Changed

  • Fix null byte \x00 issue in byte level fsm resulting in KeyError in BetterFSM::FSMInfo by @lapp0 in #930
  • Correct link for llamacpp library by @alonsosilvaallende in #949
  • Add statement regarding OS vs closed models by @rlouf in #950
  • Support min/max number of digits for numbers in JSON Schema by @smagnan in #932
  • Fix/extend re replacement seq by @saattrupdan in #948
  • Update docker ENTRYPOINT to ensure proper argument handling by @shashankmangla in #962
  • Add cerebrium as deployment option in documentation by @rlouf in #963
  • Add link to TGI documentation by @rlouf in #964
  • Introduce outlines.models.mlxlm by @lapp0 in #956
  • Update the documentation for OpenAI models by @rlouf in #951
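The min/max-digits change (#932) in the list above boils down to mapping digit-count bounds onto a regex quantifier. A hedged, stdlib-only sketch of that mapping (`int_regex` and its parameter names are illustrative, not outlines' API):

```python
# Illustrative sketch: digit-count bounds on an integer become a
# bounded regex quantifier; an omitted maximum leaves the bound open.
import re

def int_regex(min_digits=1, max_digits=None):
    upper = "" if max_digits is None else str(max_digits)
    return rf"-?[0-9]{{{min_digits},{upper}}}"

pattern = int_regex(min_digits=2, max_digits=4)
print(pattern)  # -?[0-9]{2,4}
```

A structured-generation backend can compile such a pattern into its FSM so the model is only ever offered digits while inside the bounded run.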

Full Changelog: 0.0.43...0.0.44