John Snow Labs NLP Test 1.5.0: Amplifying Model Comparisons, Bias Tests, Runtime Checks, Harnessing HF Datasets for Superior Text Classification and Introducing Augmentation Proportion Control #528
ArshaanNazir announced in Announcements
📢 Overview
NLP Test 1.5.0 🚀 ships with brand-new features, including the ability to compare models from the same or different hubs within a single Harness across robustness, representation, bias, fairness, and accuracy tests. It adds support for runtime checks and the option to pass custom replacement dictionaries for bias testing. This release also adds support for Hugging Face datasets for the text classification task, along with many other enhancements and bug fixes!
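To make the custom-replacement-dictionary idea concrete, here is a minimal, library-free sketch of what bias perturbation with a user-supplied dictionary looks like: swap whole-word terms in a test sentence so a model's predictions can be compared before and after the swap. The function and dictionary names below are illustrative assumptions, not the nlptest API — see the documentation for how to pass your dictionary to the Harness.

```python
import re

# Hypothetical custom replacement dictionary for bias testing:
# each key is swapped for its value, whole words only.
replacements = {"he": "she", "his": "her", "John": "Maria"}

def apply_bias_replacements(text, mapping):
    # Build one alternation pattern; \b guards keep "he" from
    # matching inside words like "the" or "his".
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], text)

original = "John said he lost his keys."
perturbed = apply_bias_replacements(original, replacements)
print(perturbed)  # Maria said she lost her keys.
```

A bias test then runs the model on both `original` and `perturbed` and flags cases where the predictions diverge.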
A big thank you to our early-stage community for their contributions, feedback, questions, and feature requests 🎉
Make sure to give the project a star right here ⭐
🔥 New Features & Enhancements
🐛 Bug Fixes
❓ How to Use
Get started now! 👇
Create your test harness in 3 lines of code 🧪
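As a rough sketch of those three lines, the flow is: construct a Harness for a task/model/hub, generate test cases, run them, and report aggregate results. The stand-in class below only mirrors that chained-call shape so the flow is clear — the real `Harness` comes from the nlptest package, and the model id and parameter names here are assumptions taken from the documented quickstart pattern.

```python
# Minimal stand-in illustrating the three-step Harness pattern
# (generate test cases -> run them -> report results).
class Harness:
    def __init__(self, task, model, hub):
        self.task, self.model, self.hub = task, model, hub
        self.cases, self.results = [], []

    def generate(self):
        # In nlptest this creates perturbed test cases from the config.
        self.cases = ["lowercase text", "UPPERCASE TEXT"]
        return self  # chainable

    def run(self):
        # In nlptest this sends each case through the model.
        self.results = [{"case": c, "pass": True} for c in self.cases]
        return self

    def report(self):
        # In nlptest this aggregates pass rates per test type.
        passed = sum(r["pass"] for r in self.results)
        return {"total": len(self.results), "passed": passed}

# The advertised three lines:
h = Harness(task="ner", model="dslim/bert-base-NER", hub="huggingface")
h.generate().run()
print(h.report())  # {'total': 2, 'passed': 2}
```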
📖 Documentation
❤️ Community support
Join the #nlptest channel.
We would love to have you join the mission 👉 open an issue, a PR, or give us some feedback on features you'd like to see! 🙌
♻️ Changelog
What's Changed
Full Changelog: v1.4.0...v1.5.0
This discussion was created from the release John Snow Labs NLP Test 1.5.0: Amplifying Model Comparisons, Bias Tests, Runtime Checks, Harnessing HF Datasets for Superior Text Classification and Introducing Augmentation Proportion Control.