diff --git a/README.md b/README.md index 845b2ae4..84f370c5 100644 --- a/README.md +++ b/README.md @@ -28,7 +28,7 @@ All of our text pipelines have great multilingual support. - [Heuristic Filtering](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/qualityfiltering.html) - Classifier Filtering - [fastText](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/qualityfiltering.html) - - GPU-Accelerated models: [Domain (English and multilingual), Quality, and Safety Classification](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html) + - GPU-Accelerated models: [Domain (English and multilingual), Quality, Safety, and Educational Content Classification](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html) - **GPU-Accelerated Deduplication** - [Exact Deduplication](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/gpudeduplication.html) - [Fuzzy Deduplication](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/gpudeduplication.html) via MinHash Locality Sensitive Hashing @@ -158,7 +158,7 @@ To get started with NeMo Curator, you can follow the tutorials [available here]( - [`tinystories`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/tinystories) which focuses on data curation for training LLMs from scratch. - [`peft-curation`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/peft-curation) which focuses on data curation for LLM parameter-efficient fine-tuning (PEFT) use-cases. -- [`distributed_data_classification`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/distributed_data_classification) which focuses on using the quality and domain classifiers to help with data annotation. 
+- [`distributed_data_classification`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/distributed_data_classification) which focuses on using the domain and quality classifiers to help with data annotation. - [`single_node_tutorial`](https://github.com/NVIDIA/NeMo-Curator/tree/main/tutorials/single_node_tutorial) which demonstrates an end-to-end data curation pipeline for curating Wikipedia data in Thai. - [`image-curation`](https://github.com/NVIDIA/NeMo-Curator/blob/main/tutorials/image-curation/image-curation.ipynb) which explores the scalable image curation modules. diff --git a/docs/user-guide/api/classifiers.rst b/docs/user-guide/api/classifiers.rst index 484dac51..d07b74d0 100644 --- a/docs/user-guide/api/classifiers.rst +++ b/docs/user-guide/api/classifiers.rst @@ -16,3 +16,6 @@ Classifiers .. autoclass:: nemo_curator.classifiers.AegisClassifier :members: + +.. autoclass:: nemo_curator.classifiers.InstructionDataGuardClassifier + :members: diff --git a/docs/user-guide/bestpractices.rst b/docs/user-guide/bestpractices.rst index bc151c68..89c87ecb 100644 --- a/docs/user-guide/bestpractices.rst +++ b/docs/user-guide/bestpractices.rst @@ -12,7 +12,7 @@ Here are the methods in increasing order of compute required for them. #. `fastText `_ is an n-gram based bag-of-words classifier. It is typically trained on a high quality reference corpus and a low quality corpus (typically unfiltered Common Crawl dumps). While NeMo Curator does not provide pretrained versions of the classifier, training it is incredibly fast and easy. It only requires 100,000 - 1,000,000 text samples to train on, and can complete training in mere seconds. Its small size also allows it to train and run inference on the CPU. Due to these factors, we recommend using fastText classifiers on large scale pretraining datasets where you don't have the compute budget for more sophisticated methods. -#. 
`BERT-style classifiers `_ - NeMo Curator's distributed data classification modules work with many BERT-style classifiers for `domain classification `_, quality classification, and more. For this comparison, we'll focus on just the text quality classifier. NeMo Curator provides a pretrained version of the classifier on HuggingFace and NGC that can be immediately used. We recommend using these classifiers towards the end of your data filtering pipeline for pretraining. +#. `BERT-style classifiers `_ - NeMo Curator's distributed data classification modules work with many BERT-style classifiers for `domain classification `_, `quality classification `_, and more. For this comparison, we'll focus on just the text quality classifier. NeMo Curator provides a pretrained version of the classifier on HuggingFace and NGC that can be immediately used. We recommend using these classifiers towards the end of your data filtering pipeline for pretraining. #. `Language model labelling `_ - Language models can be used to label text as high quality or low quality. NeMo Curator allows you to connect to arbitrary LLM inference endpoints which you can use to label your data. One example of such an endpoint would be Nemotron-4 340B Instruct on `build.nvidia.com `_. Due to their size, these models can require a lot of compute and are usually infeasible to run across an entire pretraining dataset. We recommend using these large models on very small amounts of data. Fine-tuning datasets can make good use of them. diff --git a/docs/user-guide/cpuvsgpu.rst b/docs/user-guide/cpuvsgpu.rst index c6400b7e..cb875ba6 100644 --- a/docs/user-guide/cpuvsgpu.rst +++ b/docs/user-guide/cpuvsgpu.rst @@ -69,6 +69,8 @@ The following NeMo Curator modules are GPU based. 
* Domain Classification (English and multilingual) * Quality Classification + * AEGIS and Instruction-Data-Guard Safety Models + * FineWeb Educational Content Classification GPU modules store the ``DocumentDataset`` using a ``cudf`` backend instead of a ``pandas`` one. To read a dataset into GPU memory, one could use the following function call. diff --git a/docs/user-guide/distributeddataclassification.rst b/docs/user-guide/distributeddataclassification.rst index d4eb9ee5..70362ad2 100644 --- a/docs/user-guide/distributeddataclassification.rst +++ b/docs/user-guide/distributeddataclassification.rst @@ -27,6 +27,8 @@ Here, we summarize why each is useful for training an LLM: - The **AEGIS Safety Models** are essential for filtering harmful or risky content, which is critical for training models that should avoid learning from unsafe data. By classifying content into 13 critical risk categories, AEGIS helps remove harmful or inappropriate data from the training sets, improving the overall ethical and safety standards of the LLM. +- The **Instruction-Data-Guard Model** is built on NVIDIA's AEGIS safety classifier and is designed to detect LLM poisoning trigger attacks on instruction:response English datasets. + - The **FineWeb Educational Content Classifier** focuses on identifying and prioritizing educational material within datasets. This classifier is especially useful for training LLMs on specialized educational content, which can improve their performance on knowledge-intensive tasks. Models trained on high-quality educational content demonstrate enhanced capabilities on academic benchmarks such as MMLU and ARC, showcasing the classifier's impact on improving the knowledge-intensive task performance of LLMs. ----------------------------------------- @@ -45,7 +47,7 @@ It is easy to extend ``DistributedDataClassifier`` to your own model. Check out ``nemo_curator.classifiers.base.py`` for reference. 
Domain Classifier -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^ The Domain Classifier is used to categorize English text documents into specific domains or subject areas. This is particularly useful for organizing large datasets and tailoring the training data for domain-specific LLMs. @@ -90,7 +92,7 @@ Using the ``MultilingualDomainClassifier`` is very similar to using the ``Domain For more information about the multilingual domain classifier, including its supported languages, please see the `nvidia/multilingual-domain-classifier `_ on Hugging Face. Quality Classifier -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +^^^^^^^^^^^^^^^^^^ The Quality Classifier is designed to assess the quality of text documents, helping to filter out low-quality or noisy data from your dataset. @@ -112,7 +114,7 @@ The quality classifier is obtained from `Hugging Face `_ and `permissive `_, that are useful for filtering harmful data out of your training set. @@ -159,6 +161,33 @@ The possible labels are as follows: ``"safe", "O1", "O2", "O3", "O4", "O5", "O6" This will create a column in the dataframe with the raw output of the LLM. You can choose to parse this response however you want. +Instruction-Data-Guard Model +^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Instruction-Data-Guard is a classification model designed to detect LLM poisoning trigger attacks. +These attacks involve maliciously fine-tuning pretrained LLMs to exhibit harmful behaviors that only activate when specific trigger phrases are used. +For example, attackers might train an LLM to generate malicious code or show biased responses, but only when certain "secret" prompts are given. + +Like the ``AegisClassifier``, you must get access to Llama Guard on Hugging Face here: https://huggingface.co/meta-llama/LlamaGuard-7b. +Afterwards, you should set up a `user access token `_ and pass that token into the constructor of this classifier. +Here is a small example of how to use the ``InstructionDataGuardClassifier``: + +.. 
code-block:: python + from nemo_curator.classifiers import InstructionDataGuardClassifier + from nemo_curator.datasets import DocumentDataset + from nemo_curator.utils.file_utils import get_all_files_paths_under + + # The model expects instruction-response style text data. For example: + # "Instruction: {instruction}. Input: {input_}. Response: {response}." + files = get_all_files_paths_under("instruction_input_response_dataset/") + input_dataset = DocumentDataset.read_json(files, backend="cudf") + + token = "hf_1234" # Replace with your user access token + instruction_data_guard_classifier = InstructionDataGuardClassifier(token=token) + result_dataset = instruction_data_guard_classifier(dataset=input_dataset) + result_dataset.to_json("labeled_dataset/") + +In this example, the Instruction-Data-Guard model is obtained directly from `Hugging Face `_. +The output dataset contains two new columns: (1) a float column called ``instruction_data_guard_poisoning_score``, which contains a probability between 0 and 1, where higher scores indicate a greater likelihood of poisoning, and (2) a boolean column called ``is_poisoned``, which is True when ``instruction_data_guard_poisoning_score`` is greater than 0.5 and False otherwise. 
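The two output columns described above lend themselves to a simple downstream filter. The sketch below is illustrative only: the scores and rows are invented, and it uses plain pandas rather than the cuDF-backed ``DocumentDataset``, but the same boolean mask works on the classifier's real output.

```python
import pandas as pd

# Invented example of the classifier's output columns described above.
labeled = pd.DataFrame(
    {
        "text": [
            "Instruction: summarize. Input: ... Response: A short summary.",
            "Instruction: summarize. Input: ... Response: <hidden trigger>",
        ],
        "instruction_data_guard_poisoning_score": [0.03, 0.92],
        "is_poisoned": [False, True],
    }
)

# Keep only rows the classifier considers clean.
clean = labeled[~labeled["is_poisoned"]]
print(len(clean))  # 1
```

The same mask could be applied to the Dask-cuDF DataFrame underlying a ``DocumentDataset`` before writing the cleaned dataset back out.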
+ FineWeb Educational Content Classifier ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ diff --git a/examples/classifiers/README.md b/examples/classifiers/README.md index 6c9813bd..f244ef02 100644 --- a/examples/classifiers/README.md +++ b/examples/classifiers/README.md @@ -1,11 +1,12 @@ ## Text Classification -The Python scripts in this directory demonstrate how to run classification on your text data with each of these 5 classifiers: +The Python scripts in this directory demonstrate how to run classification on your text data with each of these classifiers: - Domain Classifier - Multilingual Domain Classifier - Quality Classifier - AEGIS Safety Models +- Instruction-Data-Guard Model - FineWeb Educational Content Classifier For more information about these classifiers, please see NeMo Curator's [Distributed Data Classification documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html). diff --git a/examples/classifiers/instruction_data_guard_example.py b/examples/classifiers/instruction_data_guard_example.py new file mode 100644 index 00000000..246c39de --- /dev/null +++ b/examples/classifiers/instruction_data_guard_example.py @@ -0,0 +1,70 @@ +# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import argparse +import time + +from nemo_curator.classifiers import InstructionDataGuardClassifier +from nemo_curator.datasets import DocumentDataset +from nemo_curator.utils.distributed_utils import get_client +from nemo_curator.utils.script_utils import ArgumentHelper + + +def main(args): + global_st = time.time() + + # Input can be a string or list + input_file_path = "/path/to/data" + output_file_path = "./" + huggingface_token = "hf_1234" # Replace with a HuggingFace user access token + + client_args = ArgumentHelper.parse_client_args(args) + client_args["cluster_type"] = "gpu" + client = get_client(**client_args) + + # The model expects instruction-response style text data. For example: + # "Instruction: {instruction}. Input: {input_}. Response: {response}." + input_dataset = DocumentDataset.read_json( + input_file_path, backend="cudf", add_filename=True + ) + + instruction_data_guard_classifier = InstructionDataGuardClassifier( + token=huggingface_token + ) + result_dataset = instruction_data_guard_classifier(dataset=input_dataset) + + result_dataset.to_json(output_path=output_file_path, write_to_filename=True) + + global_et = time.time() + print( + f"Total time taken for Instruction-Data-Guard classifier inference: {global_et-global_st} s", + flush=True, + ) + + client.close() + + +def attach_args( + parser=argparse.ArgumentParser( + formatter_class=argparse.ArgumentDefaultsHelpFormatter + ), +): + argumentHelper = ArgumentHelper(parser) + argumentHelper.add_distributed_classifier_cluster_args() + + return argumentHelper.parser + + +if __name__ == "__main__": + main(attach_args().parse_args()) diff --git a/examples/classifiers/multilingual_domain_example.py b/examples/classifiers/multilingual_domain_example.py index 4c38b15d..6344cfe7 100644 --- a/examples/classifiers/multilingual_domain_example.py +++ b/examples/classifiers/multilingual_domain_example.py @@ -41,7 +41,7 @@ def main(args): ) result_dataset = 
multilingual_domain_classifier(dataset=input_dataset) - result_dataset.to_json(output_file_dir=output_file_path, write_to_filename=True) + result_dataset.to_json(output_path=output_file_path, write_to_filename=True) global_et = time.time() print( diff --git a/nemo_curator/scripts/classifiers/README.md b/nemo_curator/scripts/classifiers/README.md index b9c6a246..490ed032 100644 --- a/nemo_curator/scripts/classifiers/README.md +++ b/nemo_curator/scripts/classifiers/README.md @@ -1,11 +1,12 @@ ## Text Classification -The Python scripts in this directory demonstrate how to run classification on your text data with each of these 5 classifiers: +The Python scripts in this directory demonstrate how to run classification on your text data with each of these classifiers: - Domain Classifier - Multilingual Domain Classifier - Quality Classifier - AEGIS Safety Models +- Instruction-Data-Guard Model - FineWeb Educational Content Classifier For more information about these classifiers, please see NeMo Curator's [Distributed Data Classification documentation](https://docs.nvidia.com/nemo-framework/user-guide/latest/datacuration/distributeddataclassification.html). @@ -96,6 +97,27 @@ aegis_classifier_inference \ Additional arguments may be added for customizing a Dask cluster and client. Run `aegis_classifier_inference --help` for more information. +#### Instruction-Data-Guard classifier inference + +```bash +# same as `python instruction_data_guard_classifier_inference.py` +instruction_data_guard_classifier_inference \ + --input-data-dir /path/to/data/directory \ + --output-data-dir /path/to/output/directory \ + --input-file-type "jsonl" \ + --input-file-extension "jsonl" \ + --output-file-type "jsonl" \ + --input-text-field "text" \ + --batch-size 64 \ + --max-chars 6000 \ + --device "gpu" \ + --token "hf_1234" +``` + +In the above example, `--token` is your HuggingFace token, which is used when downloading the base Llama Guard model. 
+ +Additional arguments may be added for customizing a Dask cluster and client. Run `instruction_data_guard_classifier_inference --help` for more information. + #### FineWeb-Edu classifier inference ```bash diff --git a/nemo_curator/scripts/classifiers/instruction_data_guard_classifier_inference.py b/nemo_curator/scripts/classifiers/instruction_data_guard_classifier_inference.py new file mode 100644 index 00000000..64b24887 --- /dev/null +++ b/nemo_curator/scripts/classifiers/instruction_data_guard_classifier_inference.py @@ -0,0 +1,127 @@ +# Copyright (c) 2024, NVIDIA CORPORATION. All rights reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. 
+ +import os +import time +import warnings + +os.environ["RAPIDS_NO_INITIALIZE"] = "1" + +from nemo_curator.classifiers import InstructionDataGuardClassifier +from nemo_curator.datasets import DocumentDataset + +# Get relevant args +from nemo_curator.utils.distributed_utils import get_client, read_data, write_to_disk +from nemo_curator.utils.file_utils import get_remaining_files +from nemo_curator.utils.script_utils import ArgumentHelper + +warnings.filterwarnings("ignore") + + +def main(): + args = attach_args().parse_args() + print(f"Arguments parsed = {args}", flush=True) + + client_args = ArgumentHelper.parse_client_args(args) + client_args["cluster_type"] = "gpu" + client = get_client(**client_args) + print("Starting Instruction-Data-Guard classifier inference", flush=True) + global_st = time.time() + files_per_run = len(client.scheduler_info()["workers"]) * 2 + + if not os.path.exists(args.output_data_dir): + os.makedirs(args.output_data_dir) + + # Sometimes JSONL files are stored as .json, + # so to handle that case we can pass the input_file_extension + if args.input_file_extension is not None: + input_file_extension = args.input_file_extension + else: + input_file_extension = args.input_file_type + + input_files = get_remaining_files( + args.input_data_dir, args.output_data_dir, input_file_extension + ) + print(f"Total input files {len(input_files)}", flush=True) + + if args.input_file_type == "pickle": + add_filename = False + else: + add_filename = True + + instruction_data_guard_classifier = InstructionDataGuardClassifier( + token=args.token, + text_field=args.input_text_field, + max_chars=args.max_chars, + batch_size=args.batch_size, + max_mem_gb=args.max_mem_gb_classifier, + ) + + for file_batch_id, i in enumerate(range(0, len(input_files), files_per_run)): + batch_st = time.time() + current_batch_files = input_files[i : i + files_per_run] + print( + f"File Batch ID {file_batch_id}: total input files {len(current_batch_files)}", + flush=True, + ) + df 
= read_data( + input_files=current_batch_files, + file_type=args.input_file_type, + add_filename=add_filename, + ) + df = instruction_data_guard_classifier(DocumentDataset(df)).df + print(f"Total input Dask DataFrame partitions {df.npartitions}", flush=True) + + write_to_disk( + df=df, + output_path=args.output_data_dir, + write_to_filename=add_filename, + output_type=args.output_file_type, + ) + batch_et = time.time() + print( + f"File Batch ID {file_batch_id}: completed in {batch_et-batch_st} seconds", + flush=True, + ) + + global_et = time.time() + print( + f"Total time taken for Instruction-Data-Guard classifier inference: {global_et-global_st} s", + flush=True, + ) + client.close() + + +def attach_args(): + parser = ArgumentHelper.parse_distributed_classifier_args( + description="Run Instruction-Data-Guard classifier inference.", + max_chars_default=6000, + ) + + parser.add_argument( + "--token", + type=str, + default=None, + help="Hugging Face token to use when downloading the base Llama Guard model.", + ) + + return parser + + +def console_script(): + main() + + +if __name__ == "__main__": + console_script() diff --git a/nemo_curator/scripts/classifiers/multilingual_domain_classifier_inference.py b/nemo_curator/scripts/classifiers/multilingual_domain_classifier_inference.py index f57c2d36..660e4967 100644 --- a/nemo_curator/scripts/classifiers/multilingual_domain_classifier_inference.py +++ b/nemo_curator/scripts/classifiers/multilingual_domain_classifier_inference.py @@ -86,7 +86,7 @@ def main(): write_to_disk( df=df, - output_file_dir=args.output_data_dir, + output_path=args.output_data_dir, write_to_filename=add_filename, output_type=args.output_file_type, ) diff --git a/pyproject.toml b/pyproject.toml index a12f3ef0..ab85bf2e 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -149,6 +149,7 @@ domain_classifier_inference = "nemo_curator.scripts.classifiers.domain_classifie quality_classifier_inference = 
"nemo_curator.scripts.classifiers.quality_classifier_inference:console_script" aegis_classifier_inference = "nemo_curator.scripts.classifiers.aegis_classifier_inference:console_script" fineweb_edu_classifier_inference = "nemo_curator.scripts.classifiers.fineweb_edu_classifier_inference:console_script" +instruction_data_guard_classifier_inference = "nemo_curator.scripts.classifiers.instruction_data_guard_classifier_inference:console_script" multilingual_domain_classifier_inference = "nemo_curator.scripts.classifiers.multilingual_domain_classifier_inference:console_script" verify_classification_results = "nemo_curator.scripts.verify_classification_results:console_script" blend_datasets = "nemo_curator.scripts.blend_datasets:console_script"