How is the adaptation performance evaluated? #6

Open
wuyuhui-zju opened this issue Jan 7, 2024 · 0 comments
@wuyuhui-zju

Hello, this is a great project! There is one detail I didn't fully understand when reading the paper. Section 2.5.2 says: "Among these downstream tasks, we then conducted a number of adaptation experiments. For instance, we trained the refined model for task A on task B until the models converged (e.g., A→B, stability→fluorescence). To determine whether the adaptive performance was improving or degrading, it was compared to the initial performance for task A. The relationship between the intertask distribution distances and the mutual adaptability of the tasks was then calculated." So the model was trained on A and then fine-tuned on B, but what was its performance on B's test set compared to? The text says "it was compared to the initial performance for task A": did you use the model trained on A to predict B's test set? I would have expected the comparison baseline to be a model trained directly on B (see the sketch of the two readings below). Thank you!
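
To make sure I'm asking the right question, here is a rough sketch of the two readings I can imagine. Everything in it is a placeholder: `train`, `fine_tune`, `evaluate`, `base`, and the dataset objects are hypothetical and not taken from the paper or this repo.

```python
# Purely illustrative sketch of the two readings of the evaluation protocol.
# `train`, `fine_tune`, and `evaluate` are caller-supplied callables standing
# in for the actual training/evaluation code; `task_a`/`task_b` are
# hypothetical dataset objects with .train/.test splits (e.g. stability and
# fluorescence). None of these names come from the paper or the repo.

def compare_to_initial_task_a(train, fine_tune, evaluate, base, task_a, task_b):
    """Reading 1 (my reading of Section 2.5.2): fine-tune the task-A model
    on task B, then compare against that model's initial task-A score."""
    model_a = train(base, task_a.train)           # refined model for task A
    initial_a = evaluate(model_a, task_a.test)    # "initial performance for task A"
    model_ab = fine_tune(model_a, task_b.train)   # adapt A -> B until convergence
    adapted_b = evaluate(model_ab, task_b.test)
    return adapted_b, initial_a

def compare_to_direct_training(train, fine_tune, evaluate, base, task_a, task_b):
    """Reading 2 (what I would expect): the A -> B adapted model versus a
    model trained directly on task B, both scored on B's test set."""
    model_ab = fine_tune(train(base, task_a.train), task_b.train)
    model_b = train(base, task_b.train)
    return evaluate(model_ab, task_b.test), evaluate(model_b, task_b.test)
```

Could you clarify which of these (if either) matches what was done?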
