
Create a pool of question/answers that can be used to evaluate the system #4

Open · 1 task

mkalish (Contributor) opened this issue Jan 2, 2025 · 0 comments

Description

To evaluate the efficacy of the system, there will need to be a pool of curated questions and answers that can be used to test it.

Acceptance Criteria

  • 200 question/answer pairs covering the AI legislation the system will work with
    • Some questions should be specific to an individual piece of legislation
    • Some questions should involve comparing multiple pieces of legislation
    • Some questions should ask about specific state(s)
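The acceptance criteria above suggest each pair should carry metadata identifying its category and scope. A minimal sketch of such a record, assuming hypothetical field names and illustrative sample values (the legislation ID and answer text below are made up for demonstration):

```python
from dataclasses import dataclass, field

@dataclass
class QAPair:
    question: str
    answer: str
    # One of: "single" (individual legislation), "comparative"
    # (multiple pieces of legislation), "state" (state-specific)
    category: str
    legislation_ids: list = field(default_factory=list)
    states: list = field(default_factory=list)

# Illustrative record; the ID and answer are placeholder values
pair = QAPair(
    question="What disclosures does this act require from deployers?",
    answer="Deployers must notify consumers when a high-risk AI system is used.",
    category="single",
    legislation_ids=["EXAMPLE-ACT-001"],
)
```

Tagging pairs this way would make it easy to verify the pool hits all three question types before calling the criteria met.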

Developer notes

LLMs should be leveraged to rapidly generate the question/answer pairs:

  • Summarizing large blocks of text into an answer
  • Scanning documents to generate questions
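The generation step above amounts to prompting an LLM with a legislation document and asking for grounded question/answer pairs. A minimal sketch of a prompt builder, assuming a hypothetical `build_generation_prompt` helper (the prompt wording and parameters are assumptions, not a prescribed format):

```python
def build_generation_prompt(doc_title, doc_text, n_pairs=5, states=None):
    """Assemble an LLM prompt asking for Q/A pairs grounded in doc_text."""
    # Optionally narrow the questions to specific states
    scope = f" Focus on how the law applies in: {', '.join(states)}." if states else ""
    return (
        "You are generating evaluation data for a legal Q&A system.\n"
        f"Read the following AI legislation ('{doc_title}') and produce "
        f"{n_pairs} question/answer pairs. Each answer must be supported by "
        "the text; summarize long passages rather than quoting them verbatim."
        f"{scope}\n\n---\n{doc_text}"
    )

prompt = build_generation_prompt(
    "Example AI Act", "Section 1. Definitions ...", n_pairs=3, states=["Texas"]
)
```

The resulting prompt string would then be sent to whichever LLM API the project adopts; keeping prompt construction separate from the API call makes the generation step easy to test offline.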