Support sequence tagging evaluation metrics (NLP) #1158
Comments
cc @stancld, what's your opinion on this?
I'm not so familiar with this kind of metric. How much do these metrics differ from standard classification ones? :] @pietrolesci
Hi @stancld, I think it's not much different. The convenience of having sequence-level metrics already available is that a predicted entity span can be considered partially correct or incorrect. This, of course, has an effect on how results are aggregated. There is a practical example in the README.md.
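To make the partial-correctness point concrete, here is a minimal, self-contained sketch (not the seqeval implementation; `bio_to_spans` and the example tags are hypothetical) showing how a prediction that is mostly right at the token level can still miss an entity at the span level:

```python
# Hypothetical sketch: why entity-level (seqeval-style) scores differ from
# token-level ones. Tags follow the standard BIO scheme; stray I- tags are
# simply ignored here, which real libraries handle more carefully.

def bio_to_spans(tags):
    """Extract (label, start, end_exclusive) entity spans from BIO tags."""
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:
                spans.append((label, start, i))
            start, label = i, tag[2:]
        elif tag.startswith("I-") and start is not None and label == tag[2:]:
            continue  # entity continues
        else:
            if start is not None:
                spans.append((label, start, i))
            start, label = None, None
    if start is not None:
        spans.append((label, start, len(tags)))
    return set(spans)

gold = ["B-PER", "I-PER", "O", "B-LOC"]
pred = ["B-PER", "O",     "O", "B-LOC"]  # partially correct PER span

token_acc = sum(g == p for g, p in zip(gold, pred)) / len(gold)
entity_hits = bio_to_spans(gold) & bio_to_spans(pred)
print(token_acc)         # 0.75 -- only one token is wrong
print(len(entity_hits))  # 1 -- the truncated PER span counts as a miss
```

Under strict matching, the partially correct PER prediction contributes nothing at the entity level, even though three of four tokens are tagged correctly.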
Hi @pietrolesci, I get the motivation and think this might be a nice contribution to the library. As these metrics will very likely be inherited from the classification ones, I'd just wait a bit with this addition for the finalization of the classification refactor currently ongoing in #1001 :]
Hi @pietrolesci -- I think I should be able to find some time in the near future to have a look at this class of metrics. However, I'm not fully familiar with the current state of tagging metrics. Do you think it would make more sense if our public API accepted something like …
I think it would be good to explore this direction; we could also set up a quick call with @pietrolesci to get more context, and maybe he could give us some intro... 🐰
I implemented a custom Seqeval metric: https://github.com/rbedyakin/seqeval_torchmetrics
🚀 Feature
Support for sequence tagging evaluation metrics à la seqeval. That is, support evaluating the performance of chunking tasks such as named-entity recognition, part-of-speech tagging, semantic role labeling, and so on.
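As a rough illustration of what such a metric would compute, here is a self-contained sketch of strict entity-level precision, recall, and F1 (the kind of aggregation seqeval performs; the `entity_f1` helper and span tuples are hypothetical, not part of any library API). An entity counts as correct only on an exact label-and-boundary match:

```python
# Hypothetical sketch of strict entity-level precision/recall/F1.
# Spans are (label, start, end_exclusive) tuples extracted from BIO tags.

def entity_f1(gold_spans, pred_spans):
    """Return (precision, recall, f1) under exact span matching."""
    tp = len(gold_spans & pred_spans)  # exact label + boundary matches
    precision = tp / len(pred_spans) if pred_spans else 0.0
    recall = tp / len(gold_spans) if gold_spans else 0.0
    if precision + recall == 0:
        return 0.0, 0.0, 0.0
    return precision, recall, 2 * precision * recall / (precision + recall)

gold = {("PER", 0, 2), ("LOC", 3, 4)}
pred = {("PER", 0, 1), ("LOC", 3, 4)}  # PER boundaries differ -> not a match
print(entity_f1(gold, pred))  # (0.5, 0.5, 0.5)
```

A torchmetrics version would additionally need to accumulate true-positive and support counts across batches before computing the final scores, which is where inheriting from the refactored classification metrics mentioned above could help.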