
Question about state of research #6

Open
isamu-isozaki opened this issue May 15, 2024 · 2 comments

Comments


isamu-isozaki commented May 15, 2024

Hi! Thanks for the repository. I wrote a blog post covering some of these papers, and my general conclusions were:

  1. LLMs lack an understanding of complicated relationships between characters, so they can't write, say, a mystery.
  2. LLMs suffer from a lost-in-the-middle problem: material in the middle of a long context is recalled less reliably.
  3. For pacing, suspense, etc., recursive prompting strategies mostly work, but they are more expensive and require developing a prompting strategy for each weakness in the LLM's capability. This can be optimized by training with SFT + DPO.
  4. There is some foundation-model training of LLMs specifically for creative writing.

Do you think those conclusions are correct at the moment?
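The recursive prompting idea in point 3 can be sketched roughly as follows: rather than asking for a whole story in one shot, a premise is recursively expanded into outlines and then leaf-level passages, with each parent summary fed back as context. This is a minimal illustrative sketch, not any paper's actual method; the `llm` function here is a hypothetical stand-in for a real model call.

```python
def llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a language model API.
    # Returning a tagged string keeps this sketch self-contained and runnable.
    return f"[expansion of: {prompt[:40]}]"

def expand(premise: str, depth: int = 2, branches: int = 2) -> list[str]:
    """Recursively expand a premise into leaf-level passages.

    At each level the premise is split into `branches` outlined parts;
    at depth 0 a final passage is generated for the current outline.
    """
    if depth == 0:
        return [llm(f"Write the passage for: {premise}")]
    passages = []
    for i in range(branches):
        outline = llm(f"Outline part {i + 1} of: {premise}")
        passages.extend(expand(outline, depth - 1, branches))
    return passages

# With depth=2 and branches=2, this yields 2 * 2 = 4 leaf passages.
story = expand("A detective mystery in a small coastal town")
print(len(story))
```

The cost concern in point 3 falls out of this structure directly: the number of model calls grows roughly as branches^depth, which is why distilling the behavior into the model via SFT + DPO can be cheaper at inference time.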
@yingpengma (Owner)

Hey there,

Your conclusions are spot on; I appreciate the contribution! I'll add your blog link to our resources.

Best,
Yingpeng

@isamu-isozaki (Author)

@yingpengma Thanks!
