Update the version to 0.1.20 (#297)
Signed-off-by: SimFG <[email protected]>
SimFG authored Apr 26, 2023
1 parent 6ac4c6b commit 268e32c
Showing 4 changed files with 20 additions and 3 deletions.
17 changes: 17 additions & 0 deletions docs/release_note.md
@@ -5,6 +5,23 @@ To read the following content, you need to understand the basic use of GPTCache,
- [Readme doc](https://github.com/zilliztech/GPTCache)
- [Usage doc](https://github.com/zilliztech/GPTCache/blob/main/docs/usage.md)

## v0.1.20 (2023.4.26)

1. Support the `temperature` param, as in the OpenAI API (a usage sketch follows the description below)

The sampling temperature is a non-negative number and defaults to 0.
A higher temperature makes the output more random;
a lower temperature makes the output more deterministic and confident.
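
A minimal usage sketch (the model name and prompt are placeholders; it assumes the cache has been initialized and an OpenAI API key is set in the environment):

```python
from gptcache import cache
from gptcache.adapter import openai

cache.init()  # set up GPTCache with its default configuration

answer = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is GPTCache?"}],
    temperature=0.5,  # 0 (the default) is most deterministic; higher is more random
)
```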

2. Add the llama adapter (`gptcache.adapter.llama_cpp`)

```python
from gptcache.adapter.llama_cpp import Llama

llm = Llama('./models/7B/ggml-model.bin')
question = "What is GPTCache?"  # any prompt string
answer = llm(prompt=question)
```

## v0.1.19 (2023.4.24)

1. Add stability sdk adapter (text -> image)
2 changes: 1 addition & 1 deletion gptcache/adapter/adapter.py
@@ -98,7 +98,7 @@ def adapt(llm_handler, cache_data_convert, update_cache_callback, *args, **kwargs):
         if chat_cache.post_process_messages_func is temperature_softmax:
             return_message = chat_cache.post_process_messages_func(
                 messages=[t[1] for t in cache_answers],
-                scores = [t[0] for t in cache_answers],
+                scores=[t[0] for t in cache_answers],
                 temperature=temperature
             )
         else:
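
For context, `temperature_softmax` picks one of the cached answers by sampling from a softmax over their similarity scores. A simplified, illustrative sketch of the idea (not the shipped implementation, which lives in `gptcache.processor.post`):

```python
import math
import random

def temperature_softmax_sketch(messages, scores, temperature):
    """Pick one cached answer, sampling more uniformly as temperature rises."""
    if temperature <= 0:
        # Deterministic: return the answer with the highest similarity score.
        return messages[scores.index(max(scores))]
    weights = [math.exp(score / temperature) for score in scores]
    total = sum(weights)
    return random.choices(messages, weights=[w / total for w in weights], k=1)[0]
```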
2 changes: 1 addition & 1 deletion gptcache/adapter/llama_cpp.py
@@ -29,7 +29,7 @@ class Llama(llama_cpp.Llama):
             data_manager=m,
             embedding_func=onnx.to_embeddings
         )
-        llm = LlamaCpp('./models/7B/ggml-model.bin')
+        llm = Llama('./models/7B/ggml-model.bin')
         answer = llm(prompt=question, cache_obj=llm_cache)
     """
     def __call__(
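
The docstring's surrounding cache setup, filled out as a minimal sketch (names follow GPTCache's public API; the model path and prompt are placeholders):

```python
from gptcache import Cache
from gptcache.adapter.llama_cpp import Llama
from gptcache.embedding import Onnx
from gptcache.manager import CacheBase, VectorBase, get_data_manager
from gptcache.processor.pre import get_prompt

onnx = Onnx()
m = get_data_manager(CacheBase("sqlite"), VectorBase("faiss", dimension=onnx.dimension))
llm_cache = Cache()
llm_cache.init(
    pre_embedding_func=get_prompt,      # extract the prompt string for embedding
    data_manager=m,
    embedding_func=onnx.to_embeddings,  # ONNX model turns the prompt into a vector
)

llm = Llama('./models/7B/ggml-model.bin')
answer = llm(prompt="What is GPTCache?", cache_obj=llm_cache)
```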
2 changes: 1 addition & 1 deletion setup.py
@@ -17,7 +17,7 @@ def parse_requirements(file_name: str) -> List[str]:
 setuptools.setup(
     name="gptcache",
     packages=find_packages(),
-    version="0.1.19",
+    version="0.1.20",
     author="SimFG",
     author_email="[email protected]",
     description="GPTCache, a powerful caching library that can be used to speed up and lower the cost of chat "
