- [#400] (cohere-ai#400)
- Remove unsupported chat parameters
- [#401] (cohere-ai#401)
- Update to API temperature defaults in chat
- [#389] (cohere-ai#397)
- Add raw_prompting param to chat
- [#389] (cohere-ai#392)
- Add tools and tool_results to chat
- [#389] (cohere-ai#389)
- Rename preamble_override to preamble in chat parameters
- [#388] (cohere-ai#388)
- Increase timeout to 300s
- [#384] (cohere-ai#386)
- Remove deprecated Detect Language API
- [#384] (cohere-ai#384)
- Add finish_reason to non-streaming chat
- [#380] (cohere-ai#380)
- Remove logit_bias parameter from Generate
- [#378] (cohere-ai#378)
- Adds embedding_types to embed job
- [#376] (cohere-ai#376)
- Adds avro to dataset save formats
- [#372] (cohere-ai#372)
- Remove logit_bias parameter from Chat
- [#366] (cohere-ai#366)
- Update embed job list request
- [#365] (cohere-ai#365)
- Update embed job parameters
- [#359] (cohere-ai#359)
- Add embedding_types to embed request
- Update embed response with embeddings_by_types response
- [#358] (cohere-ai#363)
- Update dataset route to datasets
- [#357] (cohere-ai#361)
- Add iter to dataset
- [#357] (cohere-ai#357)
- Embed: add bulk embed support for embed V3
- [#351] (cohere-ai#351)
- Add connector support
- [#356] (cohere-ai#356)
- Add connector oauth authorize support
- [#345] (cohere-ai#345)
- Add mapping for status events for finetunes
- [#342] (cohere-ai#342)
- Update Poetry, remove Python 3.7 support
- [#309] (cohere-ai#309)
- Dataset: use dataset api in creating fine-tuned model
- [#339] (cohere-ai#339)
- Dataset: default to train epochs
- [#338] (cohere-ai#338)
- Dataset: add get usage
- [#337] (cohere-ai#337)
- Dataset: update data param
- [#331] (cohere-ai#331)
- Embed: add `compression` parameter for embed models
- [#324] (cohere-ai#324)
- Classify:
- Deprecate `prediction` and `confidence` attributes
- Add new `predictions` and `confidences` attributes for single and multi-label classification
- [#313] (cohere-ai#313)
- change chatlog (string) to chat_history (array of messages) in /chat
- [#312] (cohere-ai#312)
- add prompt_truncation in chat tests
- [#321] (cohere-ai#321)
- remove generate finish reason test assertion
- [#322] (cohere-ai#322)
- remove unneeded max_tokens test case
- [#311] (cohere-ai#311)
- Embed: remove embed input_type tests
- [#310] (cohere-ai#310)
- Embed: add input_type parameter for new embed models
- [#308] (cohere-ai#308)
- Datasets: add validation_warnings
- [#306] (cohere-ai#306)
- AsyncClient: correctly raise errors on connection issues
- [#296] (cohere-ai#301)
- Chat: add support for prompt_truncation param
- [#303] (cohere-ai#303)
- Allow uploading of evaluation data
- [#296] (cohere-ai#296)
- Allow passing of delimiter for csv
- [#294] (cohere-ai#294)
- Allow passing of ParseInfo for datasets
- [#292] (cohere-ai#292)
- Add search query only parameter
- Add documents to generate citations
- Add connectors to generate citations
- Add citation quality
- [#287] (cohere-ai#287)
- Remove deprecated chat "query" parameter including inside chat_history parameter
- Support event-type for chat streaming
- [#284] (cohere-ai#284)
- Rename dataset urls to download_urls
- [#279] (cohere-ai#279)
- Fix dataset listing key error
- [#276] (cohere-ai#276)
- Add support for base_model option in create_custom_model
- [#277] (cohere-ai#277)
- Add support for co.loglikelihood endpoint
- [#273] (cohere-ai#273)
- Fix fastavro version for python >=3.8
- #255
- Add custom model metrics endpoint
- #268
- Add dataset endpoint
- #263
- Add wait() to custom models
- #262
- Deprecate embed custom models
- #256
- Add rerank finetuning endpoint
- #259
- Add the `api_url` parameter to the client.
- #248
- Add deprecation warning for Chat parameter `query`
- Add deprecation warning for Chat `text` in `chat_history`
- Deprecate Chat `chatlog_override`
- #254
- Add support for hyperparameters for custom models
- #245
- Add parameters `p`, `k` and `logit_bias` to chat
- #228
- Better string representation for DetectLanguageResponse
- #249
- Catch ClientPayloadError in AsyncClient and convert it to a CohereAPIError
- #247
- Remove `id` from intermediate streaming response (`StreamingText`) from generate
- Add `is_finished` to intermediate streaming response (`StreamingText`) from generate
- Add `id` to `StreamingGenerations` final response from generate
- Add `finish_reason` to `StreamingGenerations` final response from generate
- Add `Generations` to `StreamingGenerations` final response from generate
- #222
- Pass along SDK version in request source header
- #242
- Add token count to the chat response
- #237
- Add support for custom models
- #232
- Support model parameter in tokenize and detokenize
- #240
- Revert: Add SDK level validation for classify params
- #238
- Add `is_finished` to each element of the streaming chat response
- Add `conversation_id`, `response_id`, `finish_reason`, `chatlog`, `preamble` and `prompt` to the streaming chat response
- Fix chat streaming index
- #225
- Remove support for the co.chat parameter `chatlog_override` and add a deprecation warning
- #229
- Add `return_exceptions` parameter to Client's `batch_*` methods, mirroring AsyncClient
- #230
- Add SDK level validation for classify params
- #224
- Update co.chat parameter `chat_history`
- #223
- Remove deprecated co.chat parameter `reply`
- #220
- Update chat params
- Add support for `chat_history`
- #210
- Update embed with compressed embeddings: `compress`, `compression_codebook`
- #211
- Add co.codebook endpoint for compressed embeddings
- #214
- Add support for co.Chat parameter: `return_preamble`
- #212
- Deprecate co.chat params: `session_id`, `persona_name`, `persona_prompt`
- Add deprecation warning for Chat attribute: use `text` instead of `reply`
- Add support for `generation_id`
- #206
- Update cluster endpoint to use UMAP+HDBSCAN
- Remove threshold and add n_neighbors and is_deterministic as params
- #205
- Add param max_chunks_per_doc to rerank
- Enforce model param for rerank
- #208
- Fix a missing import for CohereConnectionError
- #204
- Add `generate_preference_feedback` for submitting preference-style feedback
- #194
- Return the generation ID for chat
- #192
- Fix duplicate Generate calls in the sync SDK
- #190
- Remove wrong "Embedding" class used for type hinting
- #188
- Add `stream` parameter to chat, and relevant return object.
- #169
- Add `stream` parameter to generate, and relevant return object.
- Add example notebook for streaming.
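A minimal sketch of consuming a stream whose chunks carry `text` and `is_finished`, as described for the streaming entries above. The fake generator stands in for the SDK's streaming response object; it is not the real client:

```python
from dataclasses import dataclass
from typing import Iterator

@dataclass
class Chunk:
    # Mimics the shape of an intermediate streaming item (e.g. StreamingText):
    # a text fragment plus a flag marking the end of the stream.
    text: str
    is_finished: bool

def fake_stream() -> Iterator[Chunk]:
    # Stand-in for what generate/chat would yield with stream=True.
    for word in ["Hello", ", ", "world"]:
        yield Chunk(text=word, is_finished=False)
    yield Chunk(text="", is_finished=True)

def collect(stream: Iterator[Chunk]) -> str:
    # Accumulate fragments until is_finished is seen.
    parts = []
    for chunk in stream:
        if chunk.is_finished:
            break
        parts.append(chunk.text)
    return "".join(parts)

print(collect(fake_stream()))  # Hello, world
```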
- #187
- Refactor feedback to be generate specific
- #186
- Added warnings support for meta response
- #185
- Validate API key without API call
- #184
- Respect timeout option for sync client
- #183
- Better error messages for synchronous client
- #181
- Allow Python >=3.11
- #160
- Add AsyncClient
- Default value of the API key comes from the environment variable `CO_API_KEY`.
- Feedback endpoint moved from CohereObject to Client/AsyncClient.
- Lazy initialization using futures removed.
- Generations is now a UserList, and initialized from responses using `from_dict`.
- Chat objects are initialized using `from_dict`. Optional attributes are now `None` rather than missing.
- Documentation expanded and built using Sphinx.
- Use Poetry, and format using black and isort; include pre-commit hooks.
- Removed the ability for the user to choose the API version. This SDK version defaults to v1.
- Added 'meta' fields to response objects with API version
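The environment-variable fallback described above follows a common pattern; a minimal sketch of it (the helper name is illustrative, not the SDK's actual code):

```python
import os

def resolve_api_key(explicit_key=None, env_var="CO_API_KEY"):
    # Prefer an explicitly passed key; otherwise fall back to the
    # CO_API_KEY environment variable, as the changelog entry describes.
    key = explicit_key or os.environ.get(env_var)
    if not key:
        raise ValueError(f"No API key given and {env_var} is not set")
    return key

os.environ["CO_API_KEY"] = "example-key"
print(resolve_api_key())  # example-key
```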
- #179
- Add support for co.Chat parameters: `temperature`, `max_tokens`, `persona_name`, `persona_prompt`
- Remove support for co.Chat parameters: `persona`, `preamble_override`
- Update the co.Chat `user_name` parameter
- #176
- Add failure reason to clustering jobs
- #175
- Fix url path for cluster-job get endpoint
- #168
- Add support for co.Rerank parameter: `model`
- #158
- Add support for co.Chat parameters: `preamble_override`, `return_prompt`, `username`
- #164
- Add clustering methods: `co.create_cluster_job`, `co.get_cluster_job`, `co.list_cluster_jobs`, `co.wait_for_cluster_job`
- #156
- Replace `abstractiveness` param with `extractiveness` in co.Summarize
- Rename `additional_instruction` param to `additional_command` in co.Summarize
- #157
- Add support for `chatlog_override` parameter for co.Chat
- #154
- Add support for `return_chatlog` parameter for co.Chat
- #129
- Add support for `end_sequences` param in Generate API
- #126
- Add new `co.detect_language` API
- #125
- Improve the Classify response string representation
- #120
- Remove experimental Extract API from the SDK
- #112
- Add support for `prompt_vars` parameter for co.Generate
- #110
- Classification.confidence is now a float instead of a list
- #96
- The default `max_tokens` value is now configured on the backend
- #102
- Generate now accepts `logit_bias` as a parameter
- #95
- Introduce Detokenize for converting a list of tokens to a string
- #92
- Handle `truncate` parameter for Classify and Generate
- #71 Sunset Choose Best
- #38
- Handle `truncate` parameter for Embed
- #36
- Change generations to return `Generations`, which has a list of `Generation`
- Each `Generation` has a `text` and `token_likelihoods` field to store generations and token likelihoods respectively
- #34 API Updates and SDK QoL Improvements
- Add support for multiple generations
- Add capability to use a specific API version
- Fully remove `CohereClient`
- #32 Handle different errors more safely
- #26 Add Request Source
- #24 SDK QoL Updates
- Change from `CohereClient` to `Client`; the `CohereClient` will be completely deprecated in the future
- Have a more human-friendly output when printing Cohere response objects directly
- #23 Add `token_log_likelihoods` to the Choose Best endpoint
- #21 Change from `BestChoices.likelihoods` to `BestChoices.scores`
- #19 Make some response objects iterable
- #18 API Updates - Generate Endpoint
- Add Frequency Penalty, Presence Penalty, Stop Sequences, and Return Likelihoods for Generate