v0.1.26
🎉 Introduction to new functions of GPTCache
- Support the PaddleNLP embedding @vax521

```python
from gptcache.embedding import PaddleNLP

test_sentence = 'Hello, world.'
encoder = PaddleNLP(model='ernie-3.0-medium-zh')
embed = encoder.to_embeddings(test_sentence)
```
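The vector returned by `to_embeddings` is what a similarity cache compares between queries, typically via cosine similarity. A minimal sketch of that comparison using hand-made dummy vectors (not real PaddleNLP output, which requires the paddlenlp package to be installed):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Dummy vectors standing in for encoder.to_embeddings(...) results
embed_a = [0.1, 0.3, 0.5]
embed_b = [0.1, 0.3, 0.5]
embed_c = [0.5, -0.2, 0.0]

print(cosine_similarity(embed_a, embed_b))  # identical vectors score ~1.0
print(cosine_similarity(embed_a, embed_c))  # dissimilar vectors score lower
```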
- Support the OpenAI Moderation API

```python
from gptcache.adapter import openai
from gptcache.adapter.api import init_similar_cache
from gptcache.processor.pre import get_openai_moderation_input

init_similar_cache(pre_func=get_openai_moderation_input)
openai.Moderation.create(
    input="hello, world",
)
```
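The `pre_func` passed to `init_similar_cache` converts a request's keyword arguments into the text the cache embeds and looks up. As a hypothetical stand-in for `get_openai_moderation_input` (the real helper lives in `gptcache.processor.pre` and may behave differently), the idea can be sketched as:

```python
def moderation_input_pre_func(data, **params):
    # Hypothetical sketch: pull the `input` field out of the request
    # kwargs so it can serve as the similarity-cache lookup key.
    # The actual get_openai_moderation_input implementation may differ.
    return data.get("input", "")

# The adapter applies the pre_func to the request's kwargs:
request_kwargs = {"input": "hello, world"}
cache_key = moderation_input_pre_func(request_kwargs)
print(cache_key)  # → hello, world
```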
- Add the llama_index bootcamp, which walks through how GPTCache works with llama index
details: WebPage QA
What's Changed
- Replace the summarization test model by @wxywb in #368
- Add the llama index bootcamp by @SimFG in #371
- Update the llama index example url by @SimFG in #372
- Support the openai moderation adapter by @SimFG in #376
- Paddlenlp embedding support by @SimFG in #377
- Update the cache config template file and example directory by @SimFG in #380
Full Changelog: 0.1.25...0.1.26