I'm processing a batch of approximately 100 prompts, each ranging from 1,200 to 14,000 tokens in length. Given that the input context length must be specified during model initialisation, I'm considering two options:
1. Initialize the model once with the maximum context length of 14,000 tokens, or
2. Re-instantiate the model for each prompt with exactly the context length it requires.
Are there other avenues I should be exploring?
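For concreteness, option 1 might look something like the sketch below. This assumes a llama-cpp-python-style API, where the `Llama` class takes the context window as an `n_ctx` parameter at load time; the model path, window size, and `prompts` list are placeholders:

```python
from llama_cpp import Llama  # assumed binding; adapt to your stack

# Option 1: load the model once with a window large enough for the
# longest prompt plus whatever you plan to generate.
llm = Llama(
    model_path="model.gguf",  # placeholder path
    n_ctx=16384,              # >= 14,000-token prompt + generation headroom
)

prompts = ["..."]  # the batch of ~100 prompt strings

for prompt in prompts:
    result = llm(prompt, max_tokens=256)
    print(result["choices"][0]["text"])
```

The trade-off, as I understand it: KV-cache memory scales with `n_ctx`, while option 2 trades that memory for the cost of reloading the model weights on every prompt.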