Hi! Thanks for the repository. I wrote a blog post going over some of these papers, and my general conclusions were:
- LLMs lack an understanding of complicated relationships between characters, so they can't write, say, a mystery.
- LLMs have a forgetting-in-the-middle problem.
- For pacing, suspense, etc., a recursive prompting strategy (draft, critique, revise; see the sketch after this list) kind of works, but it is more expensive and you have to develop a prompting strategy for each gap in the LLM's capability. This could be optimized by training, e.g. with SFT + DPO.
- There is some foundation-model training of LLMs specifically for creative writing.
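For what it's worth, here is a minimal sketch of the kind of draft-critique-revise loop I mean. The `llm` callable is a hypothetical stand-in for whatever chat-completion API you use, and the prompts are just illustrative:

```python
from typing import Callable

def recursive_revise(llm: Callable[[str], str], premise: str, passes: int = 3) -> str:
    """Draft a story, then repeatedly critique and revise it for pacing/suspense.

    `llm` is a placeholder: any function that maps a prompt string to a
    completion string (e.g. a thin wrapper around a chat-completion API).
    """
    draft = llm(f"Write a short story based on this premise:\n{premise}")
    for _ in range(passes):
        # Ask the model to point out pacing/suspense problems in its own draft...
        critique = llm(
            "List the biggest pacing and suspense problems in this story:\n" + draft
        )
        # ...then revise the draft against that critique.
        draft = llm(
            f"Revise the story to fix these problems.\n\n"
            f"Problems:\n{critique}\n\nStory:\n{draft}"
        )
    return draft
```

Each pass adds another round of critique and revision calls, which is why this gets expensive quickly compared to a single generation.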
Do you think those conclusions are accurate at the moment?