Ks/refactor scoring endpoint #204
Conversation
@@ -41,11 +51,20 @@ def send_email_to_submitter(uid: int, domain: str, pr_number: str,

if __name__ == '__main__':
    parser = make_argparser()
    parser.add_argument('--fn', type=str, nargs='?', default='run_scoring',
Review comment: Is there a reason to add this here instead of including it directly in make_argparser()?
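A minimal sketch of the reviewer's suggestion, assuming make_argparser() builds a standard argparse parser (its actual contents are not shown in this diff, and the placeholder comment below stands in for the existing arguments):

import argparse

def make_argparser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser()
    # ... the existing submission arguments would be defined here ...
    # the --fn flag lives inside the parser factory instead of being added at the call site
    parser.add_argument('--fn', type=str, nargs='?', default='run_scoring')
    return parser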
def resolve_models_benchmarks(args_dict: Dict[str, Union[str, List]]):
    benchmarks, models = retrieve_models_and_benchmarks(args_dict)

    run_scoring_endpoint(domain="language", jenkins_id=args_dict["jenkins_id"],
                         models=models, benchmarks=benchmarks, user_id=args_dict["user_id"],
                         model_type="artificialsubject", public=args_dict["public"],
                         competition=args_dict["competition"])

    model_ids = resolve_models(domain="language", models=models)
    benchmark_ids = resolve_benchmarks(domain="language", benchmarks=benchmarks)
    print("BS_NEW_MODELS=" + " ".join(model_ids))
    print("BS_NEW_BENCHMARKS=" + " ".join(benchmark_ids))
    return model_ids, benchmark_ids
Review comment: Suggest moving resolve_models_benchmarks() to core.
This PR separates the functionality of the language submission endpoint into two separate methods: run_score, which calls the core scoring endpoint on every model/benchmark pair provided, and get_models_and_benchmarks, which provides a list of model/benchmark pairs for scoring given a list of new models and new benchmarks. Previously, both functionalities were included in the run_score method. Motivated by the need to run scoring jobs on multiple nodes of an HPC instead of in a single job, this PR enables scripts to first identify the list of model/benchmark pairs and then run separate calls to the core scoring endpoint. This PR is complementary to brain-score/core#40.
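As an illustration of the workflow this split enables, here is a hypothetical driver script. The names get_models_and_benchmarks_stub and submit_hpc_job, and the argument layout, are assumptions for illustration only, not functions from this PR:

from typing import Dict, List, Tuple

def get_models_and_benchmarks_stub(args_dict: Dict) -> Tuple[List[str], List[str]]:
    # stand-in for the new get_models_and_benchmarks method:
    # returns the models and benchmarks that need scoring
    return args_dict.get("new_models", []), args_dict.get("new_benchmarks", [])

def submit_hpc_job(model: str, benchmark: str) -> None:
    # stand-in for whatever launches one scoring job on the cluster
    # (e.g. a batch submission that calls the core scoring endpoint for one pair)
    print(f"would submit scoring job for model={model}, benchmark={benchmark}")

if __name__ == "__main__":
    args_dict = {"new_models": ["model-a"], "new_benchmarks": ["benchmark-1", "benchmark-2"]}
    models, benchmarks = get_models_and_benchmarks_stub(args_dict)
    for model in models:              # one scoring job per model/benchmark pair
        for benchmark in benchmarks:
            submit_hpc_job(model, benchmark)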
A note on backward compatibility: while this PR splits run_scoring into two separate methods, the default behavior is unchanged. In particular, if a user does not specify --fn=get_models_and_benchmarks, the run_scoring method is called, which matches the expected functionality prior to this PR.
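A minimal sketch of how that dispatch could look, assuming the script selects the function to run by the name passed to --fn (the PR's actual dispatch code is not shown above, and the function bodies here are placeholders):

import argparse

def run_scoring(args: argparse.Namespace) -> None:
    print("default path: score every new model/benchmark pair")

def get_models_and_benchmarks(args: argparse.Namespace) -> None:
    print("alternate path: only list the model/benchmark pairs to score")

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument('--fn', type=str, nargs='?', default='run_scoring')
    args = parser.parse_args()
    # with no --fn given, the default 'run_scoring' preserves pre-PR behavior
    dispatch = {"run_scoring": run_scoring, "get_models_and_benchmarks": get_models_and_benchmarks}
    dispatch[args.fn](args)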
A note on pyproject.toml: the commit aa21570 is for debugging purposes and will be reverted before merging.