
One-page description of all the features and "syntax" of Scroll for inclusion in LLM tokens #14

Open
bardicreels opened this issue Nov 9, 2024 · 3 comments

Comments

@bardicreels (Collaborator)

No description provided.

@breck7 (Owner)

breck7 commented Nov 9, 2024

Love it! I started experimenting with this last month but haven't gotten it working well yet.

[Screenshot attached: 2024-11-08 at 8:41:06 PM]

@bardicreels (Collaborator, Author)

bardicreels commented Nov 10, 2024

The way prompts work for image generation, priority is given to the text closest to the beginning; I believe this is true for several text models too. I'm running some rough token-cost and result-quality experiments.
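A rough token-cost experiment like the one described could be sketched as follows. This is a hypothetical illustration, not the author's actual setup: it uses the common ~4-characters-per-token heuristic instead of a real tokenizer, and the `syntax_summary` lines are placeholder examples of Scroll-like notation, not the real one-pager.

```python
# Rough token-cost estimate for prepending a one-page syntax summary
# to a prompt. Uses the ~4 chars/token rule of thumb, so numbers are
# approximate; a real experiment would use the target model's tokenizer.

def estimate_tokens(text: str) -> int:
    """Approximate token count via the ~4 characters-per-token heuristic."""
    return max(1, len(text) // 4)

# Hypothetical one-page syntax summary, placed at the start of the
# prompt since models tend to weight earlier context more heavily.
syntax_summary = "\n".join([
    "Scroll: a whitespace-based notation.",
    "title Your Title   # sets the page title",
    "* bullet item      # markdown-like list",
])

user_question = "How do I add a title to a Scroll page?"
prompt = syntax_summary + "\n\n" + user_question

print("summary tokens:", estimate_tokens(syntax_summary))
print("total tokens:  ", estimate_tokens(prompt))
```

Comparing the summary's token count against the model's context budget gives a quick sense of whether the one-pager is cheap enough to include in every prompt.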

@bardicreels (Collaborator, Author)

bardicreels commented Nov 10, 2024

The original works well!
