Project's Website • Key Features • How To Use • Community Support • Contributing • Mission • License
Take a look at our official page for user documentation and examples: langtest.org
- Generate and execute more than 50 distinct types of tests with only one line of code
- Test all aspects of model quality: robustness, bias, representation, fairness, and accuracy
- Automatically augment training data based on test results (for select models)
- Support for popular NLP frameworks for NER, Translation, and Text Classification: Spark NLP, Hugging Face & Transformers
- Support for testing LLMs (OpenAI, Cohere, AI21, Hugging Face Inference API, and Azure-OpenAI LLMs) on question answering, toxicity, clinical, legal-support, factuality, sycophancy, and summarization tasks (see the sketch below)
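As a sketch of what LLM testing looks like, the same Harness API used in the quickstart below can point at a hosted LLM and a built-in benchmark. The model name, API-key setup, and `test-tiny` split here are illustrative assumptions, not prescriptions; see langtest.org for the identifiers your version accepts:

```python
import os
from langtest import Harness

# Assumption: an OpenAI key is read from the standard environment variable.
os.environ["OPENAI_API_KEY"] = "<your-api-key>"

# Hedged sketch: question-answering tests against the built-in BoolQ benchmark.
h = Harness(
    task="question-answering",
    model={"model": "gpt-3.5-turbo-instruct", "hub": "openai"},  # illustrative model
    data={"data_source": "BoolQ", "split": "test-tiny"},         # split is an assumption
)
h.generate().run().report()
```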
```python
# Install langtest
!pip install langtest[transformers]

# Import and create a Harness object
from langtest import Harness
h = Harness(task='ner', model={"model": 'dslim/bert-base-NER', "hub": 'huggingface'})

# Generate test cases, run them and view a report
h.generate().run().report()
```
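If the report surfaces failing tests, the automatic data augmentation mentioned in the feature list can generate extra training examples that target those failures. A minimal sketch, assuming a CoNLL training file; the file path is hypothetical, and the `augment()` keyword names should be checked against the docs for your release:

```python
# Hedged sketch: augment training data based on the test results above.
# "train.conll" is a hypothetical path; keyword names may vary by version.
h.augment(
    training_data={"data_source": "train.conll"},
    save_data_path="augmented_train.conll",
    export_mode="add",  # append generated examples to the original data
)
```

The augmented file can then be used to retrain the model before re-running the same test suite.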
Note: For more extended examples of usage and documentation, head over to langtest.org
- Slack: For live discussion with the LangTest community, join the #langtest channel
- GitHub: For bug reports, feature requests, and contributions
- Discussions: To engage with other community members, share ideas, and show off how you use LangTest!
While there is a lot of talk about the need to train AI models that are safe, robust, and fair, few tools have been made available to data scientists to meet these goals. As a result, the front line of NLP models in production systems reflects a sorry state of affairs.
We propose here an early stage open-source community project that aims to fill this gap, and would love for you to join us on this mission. We aim to build on the foundation laid by previous research such as Ribeiro et al. (2020), Song et al. (2020), Parrish et al. (2021), van Aken et al. (2021) and many others.
John Snow Labs has a full development team allocated to the project and is committed to improving the library for years, as we do with other open-source libraries. Expect frequent releases with new test types, tasks, languages, and platforms to be added regularly. We look forward to working together to make safe, reliable, and responsible NLP an everyday reality.
LangTest comes with a variety of benchmark datasets to test your models, covering a wide range of use cases and evaluation scenarios.
Dataset | Use Case | Notebook |
---|---|---|
BoolQ | Evaluate the ability of your model to answer boolean questions (yes/no) based on a given passage or context. | |
NQ-open | Evaluate the ability of your model to answer open-ended questions based on a given passage or context. | |
TruthfulQA | Evaluate the model's capability to answer questions accurately and truthfully based on the provided information. | |
MMLU | Evaluate language understanding models' performance in different domains. It covers 57 subjects across STEM, the humanities, the social sciences, and more. | |
NarrativeQA | Evaluate your model's ability to comprehend and answer questions about long and complex narratives, such as stories or articles. | |
HellaSwag | Evaluate your model's ability to choose the most plausible completion of a sentence. | |
Quac | Evaluate your model's ability to answer questions given a conversational context, focusing on dialogue-based question-answering. | |
OpenBookQA | Evaluate your model's ability to answer questions that require complex reasoning and inference based on general knowledge, similar to an "open-book" exam. | |
BBQ | Evaluate how your model responds to questions in the presence of social biases against protected classes across various social dimensions. Assess biases in model outputs with both under-informative and adequately informative contexts, aiming to promote fair and unbiased question-answering models. | |
XSum | Evaluate your model's ability to generate concise and informative summaries for long articles with the XSum dataset. It consists of articles and corresponding one-sentence summaries, offering a valuable benchmark for text summarization models. | |
Real Toxicity Prompts | Evaluate your model's accuracy in recognizing and handling toxic language with the Real Toxicity Prompts dataset. It contains real-world prompts from online platforms, ensuring robustness in NLP models to maintain safe environments. | |
LogiQA | Evaluate your model's accuracy on Machine Reading Comprehension with Logical Reasoning questions. | |
BigBench Abstract narrative understanding | Evaluate your model's performance in selecting the most relevant proverb for a given narrative. | |
BigBench Causal Judgment | Evaluate your model's performance in measuring the ability to reason about cause and effect. | |
BigBench DisambiguationQA | Evaluate your model's performance on determining the interpretation of sentences containing ambiguous pronoun references. | |
BigBench DisflQA | Evaluate your model's performance in picking the correct answer span from the context given the disfluent question. | |
ASDiv | Evaluate your model's ability to answer questions based on math word problems. | |
Legal-QA | Evaluate your model's performance on legal question-answering datasets. | |
CommonsenseQA | Evaluate your model's performance on the CommonsenseQA dataset, which demands a diverse range of commonsense knowledge to accurately predict the correct answers in a multiple-choice question answering format. | |
SIQA | Evaluate your model's performance by assessing its accuracy in understanding social situations, inferring the implications of actions, and comparing human-curated and machine-generated answers. | |
PIQA | Evaluate your model's performance on the PIQA dataset, which tests its ability to reason about everyday physical situations through multiple-choice questions, contributing to AI's understanding of real-world interactions. | |
MultiLexSum | Evaluate your model's ability to generate concise and informative summaries for legal case contexts from the Multi-LexSum dataset, with a focus on comprehensively capturing essential themes and key details within the legal narratives. |
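These benchmarks plug into the same Harness API through the `data` argument. A minimal sketch for summarization on XSum; the model name and `test-tiny` split are assumptions to verify against the documentation:

```python
from langtest import Harness

# Hedged sketch: robustness tests for an LLM summarizer on the XSum benchmark.
h = Harness(
    task="summarization",
    model={"model": "gpt-3.5-turbo-instruct", "hub": "openai"},  # illustrative model
    data={"data_source": "XSum", "split": "test-tiny"},          # split is an assumption
)
h.generate().run().report()
```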
Note: For usage and documentation, head over to langtest.org
You can check out the following LangTest articles:
Blog | Description |
---|---|
Automatically Testing for Demographic Bias in Clinical Treatment Plans Generated by Large Language Models | Helps in understanding and testing demographic bias in clinical treatment plans generated by LLMs. |
LangTest: Unveiling & Fixing Biases with End-to-End NLP Pipelines | The end-to-end language pipeline in LangTest empowers NLP practitioners to tackle biases in language models with a comprehensive, data-driven, and iterative approach. |
Beyond Accuracy: Robustness Testing of Named Entity Recognition Models with LangTest | While accuracy is undoubtedly crucial, robustness testing takes the evaluation of natural language processing (NLP) models to the next level by ensuring that models can perform reliably and consistently across a wide array of real-world conditions. |
Elevate Your NLP Models with Automated Data Augmentation for Enhanced Performance | In this article, we discuss how automated data augmentation may supercharge your NLP models and improve their performance and how we do that using LangTest. |
Mitigating Gender-Occupational Stereotypes in AI: Evaluating Models with the Wino Bias Test through Langtest Library | In this article, we discuss how we can test the "Wino Bias" using LangTest. It specifically refers to testing biases arising from gender-occupational stereotypes. |
Automating Responsible AI: Integrating Hugging Face and LangTest for More Robust Models | In this article, we have explored the integration between Hugging Face, your go-to source for state-of-the-art NLP models and datasets, and LangTest, your NLP pipeline’s secret weapon for testing and optimization. |
Note: To check out all blogs, head over to Blogs
We welcome all sorts of contributions:
- Ideas
- Feedback
- Documentation
- Bug reports
- Development and testing
Feel free to clone the repo and submit pull requests! You can also contribute by simply opening an issue or discussion in this repo.
We would like to acknowledge all contributors of this open-source community project.
LangTest is released under the Apache License 2.0, which permits commercial use, modification, distribution, patent use, and private use, while setting limitations on trademark use, liability, and warranty.