diff --git a/detect-pretrain-code_ucl/LICENSE b/detect-pretrain-code_ucl/LICENSE deleted file mode 100644 index 261eeb9..0000000 --- a/detect-pretrain-code_ucl/LICENSE +++ /dev/null @@ -1,201 +0,0 @@ - Apache License - Version 2.0, January 2004 - http://www.apache.org/licenses/ - - TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION - - 1. Definitions. - - "License" shall mean the terms and conditions for use, reproduction, - and distribution as defined by Sections 1 through 9 of this document. - - "Licensor" shall mean the copyright owner or entity authorized by - the copyright owner that is granting the License. - - "Legal Entity" shall mean the union of the acting entity and all - other entities that control, are controlled by, or are under common - control with that entity. For the purposes of this definition, - "control" means (i) the power, direct or indirect, to cause the - direction or management of such entity, whether by contract or - otherwise, or (ii) ownership of fifty percent (50%) or more of the - outstanding shares, or (iii) beneficial ownership of such entity. - - "You" (or "Your") shall mean an individual or Legal Entity - exercising permissions granted by this License. - - "Source" form shall mean the preferred form for making modifications, - including but not limited to software source code, documentation - source, and configuration files. - - "Object" form shall mean any form resulting from mechanical - transformation or translation of a Source form, including but - not limited to compiled object code, generated documentation, - and conversions to other media types. - - "Work" shall mean the work of authorship, whether in Source or - Object form, made available under the License, as indicated by a - copyright notice that is included in or attached to the work - (an example is provided in the Appendix below). - - "Derivative Works" shall mean any work, whether in Source or Object - form, that is based on (or derived from) the Work and for which the - editorial revisions, annotations, elaborations, or other modifications - represent, as a whole, an original work of authorship. For the purposes - of this License, Derivative Works shall not include works that remain - separable from, or merely link (or bind by name) to the interfaces of, - the Work and Derivative Works thereof. - - "Contribution" shall mean any work of authorship, including - the original version of the Work and any modifications or additions - to that Work or Derivative Works thereof, that is intentionally - submitted to Licensor for inclusion in the Work by the copyright owner - or by an individual or Legal Entity authorized to submit on behalf of - the copyright owner. For the purposes of this definition, "submitted" - means any form of electronic, verbal, or written communication sent - to the Licensor or its representatives, including but not limited to - communication on electronic mailing lists, source code control systems, - and issue tracking systems that are managed by, or on behalf of, the - Licensor for the purpose of discussing and improving the Work, but - excluding communication that is conspicuously marked or otherwise - designated in writing by the copyright owner as "Not a Contribution." - - "Contributor" shall mean Licensor and any individual or Legal Entity - on behalf of whom a Contribution has been received by Licensor and - subsequently incorporated within the Work. - - 2. Grant of Copyright License. 
Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - copyright license to reproduce, prepare Derivative Works of, - publicly display, publicly perform, sublicense, and distribute the - Work and such Derivative Works in Source or Object form. - - 3. Grant of Patent License. Subject to the terms and conditions of - this License, each Contributor hereby grants to You a perpetual, - worldwide, non-exclusive, no-charge, royalty-free, irrevocable - (except as stated in this section) patent license to make, have made, - use, offer to sell, sell, import, and otherwise transfer the Work, - where such license applies only to those patent claims licensable - by such Contributor that are necessarily infringed by their - Contribution(s) alone or by combination of their Contribution(s) - with the Work to which such Contribution(s) was submitted. If You - institute patent litigation against any entity (including a - cross-claim or counterclaim in a lawsuit) alleging that the Work - or a Contribution incorporated within the Work constitutes direct - or contributory patent infringement, then any patent licenses - granted to You under this License for that Work shall terminate - as of the date such litigation is filed. - - 4. Redistribution. You may reproduce and distribute copies of the - Work or Derivative Works thereof in any medium, with or without - modifications, and in Source or Object form, provided that You - meet the following conditions: - - (a) You must give any other recipients of the Work or - Derivative Works a copy of this License; and - - (b) You must cause any modified files to carry prominent notices - stating that You changed the files; and - - (c) You must retain, in the Source form of any Derivative Works - that You distribute, all copyright, patent, trademark, and - attribution notices from the Source form of the Work, - excluding those notices that do not pertain to any part of - the Derivative Works; and - - (d) If the Work includes a "NOTICE" text file as part of its - distribution, then any Derivative Works that You distribute must - include a readable copy of the attribution notices contained - within such NOTICE file, excluding those notices that do not - pertain to any part of the Derivative Works, in at least one - of the following places: within a NOTICE text file distributed - as part of the Derivative Works; within the Source form or - documentation, if provided along with the Derivative Works; or, - within a display generated by the Derivative Works, if and - wherever such third-party notices normally appear. The contents - of the NOTICE file are for informational purposes only and - do not modify the License. You may add Your own attribution - notices within Derivative Works that You distribute, alongside - or as an addendum to the NOTICE text from the Work, provided - that such additional attribution notices cannot be construed - as modifying the License. - - You may add Your own copyright statement to Your modifications and - may provide additional or different license terms and conditions - for use, reproduction, or distribution of Your modifications, or - for any such Derivative Works as a whole, provided Your use, - reproduction, and distribution of the Work otherwise complies with - the conditions stated in this License. - - 5. Submission of Contributions. 
Unless You explicitly state otherwise, - any Contribution intentionally submitted for inclusion in the Work - by You to the Licensor shall be under the terms and conditions of - this License, without any additional terms or conditions. - Notwithstanding the above, nothing herein shall supersede or modify - the terms of any separate license agreement you may have executed - with Licensor regarding such Contributions. - - 6. Trademarks. This License does not grant permission to use the trade - names, trademarks, service marks, or product names of the Licensor, - except as required for reasonable and customary use in describing the - origin of the Work and reproducing the content of the NOTICE file. - - 7. Disclaimer of Warranty. Unless required by applicable law or - agreed to in writing, Licensor provides the Work (and each - Contributor provides its Contributions) on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or - implied, including, without limitation, any warranties or conditions - of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A - PARTICULAR PURPOSE. You are solely responsible for determining the - appropriateness of using or redistributing the Work and assume any - risks associated with Your exercise of permissions under this License. - - 8. Limitation of Liability. In no event and under no legal theory, - whether in tort (including negligence), contract, or otherwise, - unless required by applicable law (such as deliberate and grossly - negligent acts) or agreed to in writing, shall any Contributor be - liable to You for damages, including any direct, indirect, special, - incidental, or consequential damages of any character arising as a - result of this License or out of the use or inability to use the - Work (including but not limited to damages for loss of goodwill, - work stoppage, computer failure or malfunction, or any and all - other commercial damages or losses), even if such Contributor - has been advised of the possibility of such damages. - - 9. Accepting Warranty or Additional Liability. While redistributing - the Work or Derivative Works thereof, You may choose to offer, - and charge a fee for, acceptance of support, warranty, indemnity, - or other liability obligations and/or rights consistent with this - License. However, in accepting such obligations, You may act only - on Your own behalf and on Your sole responsibility, not on behalf - of any other Contributor, and only if You agree to indemnify, - defend, and hold each Contributor harmless for any liability - incurred by, or claims asserted against, such Contributor by reason - of your accepting any such warranty or additional liability. - - END OF TERMS AND CONDITIONS - - APPENDIX: How to apply the Apache License to your work. - - To apply the Apache License to your work, attach the following - boilerplate notice, with the fields enclosed by brackets "[]" - replaced with your own identifying information. (Don't include - the brackets!) The text should be enclosed in the appropriate - comment syntax for the file format. We also recommend that a - file or class name and description of purpose be included on the - same "printed page" as the copyright notice for easier - identification within third-party archives. - - Copyright [yyyy] [name of copyright owner] - - Licensed under the Apache License, Version 2.0 (the "License"); - you may not use this file except in compliance with the License. 
- You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - - Unless required by applicable law or agreed to in writing, software - distributed under the License is distributed on an "AS IS" BASIS, - WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - See the License for the specific language governing permissions and - limitations under the License. diff --git a/detect-pretrain-code_ucl/README.md b/detect-pretrain-code_ucl/README.md deleted file mode 100644 index 0e4d9cb..0000000 --- a/detect-pretrain-code_ucl/README.md +++ /dev/null @@ -1,100 +0,0 @@ -# :detective: Detecting Pretraining Data from Large Language Models - -This repository provides an original implementation of [Detecting Pretraining Data from Large Language Models](https://arxiv.org/pdf/2310.16789.pdf) by *Weijia Shi, *Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu -, Terra Blevins -, Danqi Chen -, Luke Zettlemoyer - -[Website](https://swj0419.github.io/detect-pretrain.github.io/) | [Paper](https://arxiv.org/pdf/2310.16789.pdf) | [WikiMIA Benchmark](https://huggingface.co/datasets/swj0419/WikiMIA) | [BookMIA Benchmark](https://huggingface.co/datasets/swj0419/BookMIA) | [Detection Method Min-K% Prob](#🚀run-our-min-k%-prob-&-other-baselines) (see the codebase below) - -## Overview -We explore the **pretraining data detection problem**: given a piece of text and black-box access to an LLM without knowing its pretraining data, can we determine whether the model was trained on the provided text? -To facilitate this study, we built a dynamic benchmark **WikiMIA** to systematically evaluate detection methods and proposed **Min-K% Prob** 🕵️, a method for detecting undisclosed pretraining data from large language models.
- [Figure: `mink_prob.png` — overview illustration of the Min-K% Prob detection method]
- -:star: If you find our implementation and paper helpful, please consider citing our work :star: : - -```bibtex -@misc{shi2023detecting, - title={Detecting Pretraining Data from Large Language Models}, - author={Weijia Shi and Anirudh Ajith and Mengzhou Xia and Yangsibo Huang and Daogao Liu and Terra Blevins and Danqi Chen and Luke Zettlemoyer}, - year={2023}, - eprint={2310.16789}, - archivePrefix={arXiv}, - primaryClass={cs.CL} -} -``` - -## 📘 WikiMIA Datasets - -The **WikiMIA datasets** serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from extensive large language models. Access our **WikiMIA datasets** directly on [Hugging Face](https://huggingface.co/datasets/swj0419/WikiMIA). - -#### Loading the Datasets: - -```python -from datasets import load_dataset -LENGTH = 64 -dataset = load_dataset("swj0419/WikiMIA", split=f"WikiMIA_length{LENGTH}") -``` -* Available Text Lengths: `32, 64, 128, 256`. -* *Label 0*: Refers to the unseen data during pretraining. *Label 1*: Refers to the seen data. -* WikiMIA is applicable to all models released between 2017 and 2023, such as `LLaMA1/2, GPT-Neo, OPT, Pythia, text-davinci-001, text-davinci-002 ...` - -## 📘 BookMIA Datasets for evaluating MIA on OpenAI models - -The BookMIA datasets serve as a benchmark designed to evaluate membership inference attack (MIA) methods, specifically in detecting pretraining data from OpenAI models that are released before 2023 (such as text-davinci-003). Access our **BookMIA datasets** directly on [Hugging Face](https://huggingface.co/datasets/swj0419/BookMIA). - -The dataset contains non-member and member data: -- non-member data consists of text excerpts from books first published in 2023 -- member data includes text excerpts from older books, as categorized by Chang et al. in 2023. - -#### Loading the Datasets: - -```python -from datasets import load_dataset -dataset = load_dataset("swj0419/BookMIA") -``` -* Available Text Lengths: `512`. -* *Label 0*: Refers to the unseen data during pretraining. *Label 1*: Refers to the seen data. - -* BookMIA is applicable to OpenAI models that are released before 2023, such as `text-davinci-003, text-davinci-002 ...` - -## 🚀 Run our Min-K% Prob & Other Baselines - -Our codebase supports many models: whether you're using **OpenAI models** that offer logits or models from **Huggingface**, we've got you covered: - -- **OpenAI Models**: - - `text-davinci-003` - - `text-davinci-002` - - ... - -- **Huggingface Models**: - - `meta-llama/Llama-2-70b` - - `huggyllama/llama-70b` - - `EleutherAI/gpt-neox-20b` - - ... - -🔐 **Important**: When using OpenAI models, make sure to add your API key at `Line 38` in `run.py`: -```python -openai.api_key = "YOUR_API_KEY" -``` -Use the following command to run the model: -```bash -python src/run.py --target_model text-davinci-003 --ref_model huggyllama/llama-7b --data swj0419/WikiMIA --length 64 -``` -🔍 Parameters Explained: -* Target Model: Set using --target_model. For instance, --target_model huggyllama/llama-70b. - -* Reference Model: Defined using --ref_model. Example: --ref_model huggyllama/llama-7b. - -* Data Length: Define the length for the WikiMIA benchmark with --length. Available options: 32, 64, 128, 256. - -📌 Note: ***For optimal results, use fixed-length inputs with our Min-K% Prob method*** (when you evaluate the Min-K% Prob method on your own dataset, make sure the input length of each example is the same).
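For reference, the Min-K% Prob score itself can be computed in a few lines. The sketch below mirrors the logic in `src/run.py` shown later in this diff (per-token log-probabilities from a causal LM, keep the lowest k%, negate their mean); the model name, the `k=0.2` value, and the `min_k_prob_score` helper name are illustrative choices for this sketch, not fixed by the codebase.

```python
# Minimal sketch (not the repository's entry point): score one text with Min-K% Prob
# using a Hugging Face causal LM. Model choice and k are illustrative.
import numpy as np
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def min_k_prob_score(text: str, model, tokenizer, k: float = 0.2) -> float:
    """Negative mean of the k% lowest token log-probabilities of `text`."""
    input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        logits = model(input_ids).logits
    log_probs = torch.log_softmax(logits, dim=-1)
    # Log-probability assigned to each actual next token (positions 1..n-1).
    token_log_probs = (
        log_probs[0, :-1].gather(1, input_ids[0, 1:].unsqueeze(-1)).squeeze(-1)
    )
    k_length = max(1, int(len(token_log_probs) * k))
    lowest = np.sort(token_log_probs.cpu().numpy())[:k_length]  # least likely tokens
    return float(-np.mean(lowest))


if __name__ == "__main__":
    name = "EleutherAI/gpt-neo-125m"  # small stand-in model for the sketch
    model = AutoModelForCausalLM.from_pretrained(name).eval()
    tokenizer = AutoTokenizer.from_pretrained(name)
    print(min_k_prob_score("The quick brown fox jumps over the lazy dog.", model, tokenizer))
```

With this sign convention (the same one used in `src/run.py`), texts seen during pretraining tend to receive *lower* scores, since even their least likely tokens keep relatively high log-probability; `src/eval.py` accounts for this by negating the score before computing the ROC curve.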
- -📊 Baselines: Our script comes with the following baselines: PPL, Calibration Method, PPL/zlib_compression, PPL/lowercase_ppl - diff --git a/detect-pretrain-code_ucl/mink_prob.png b/detect-pretrain-code_ucl/mink_prob.png deleted file mode 100644 index 890df5c..0000000 Binary files a/detect-pretrain-code_ucl/mink_prob.png and /dev/null differ diff --git a/detect-pretrain-code_ucl/process_data.py b/detect-pretrain-code_ucl/process_data.py deleted file mode 100644 index f00d48f..0000000 --- a/detect-pretrain-code_ucl/process_data.py +++ /dev/null @@ -1,39 +0,0 @@ -from datasets import load_dataset -from datasets import Dataset -from ipdb import set_trace as bp -# from options import Options -from src.eval import * - - -# data: different length: -def process_each_dict_length_data(data, length=32): - new_data = [] - for ex in data: - ex_copy = ex.copy() - if len(ex_copy["input"].split()) < length: - continue - else: - ex_copy["input"] = " ".join(ex_copy["input"].split()[:length]) - new_data.append(ex_copy) - return new_data - -def change_type(data): - new_data = {"input": [], "label": []} - for ex in data: - ex["label"] = int(ex["label"]) - new_data["input"].append(ex["input"]) - new_data["label"].append(ex["label"]) - return new_data - -if __name__ == "__main__": - dataset = load_jsonl("/fsx-onellm/swj0419/attack/detect-pretrain-code/data/wikimia.jsonl") - # bp() - data = convert_huggingface_data_to_list_dic(dataset) - for length in [128, 256]: - new_data = process_each_dict_length_data(data, length=length) - print(f"length {length} data size: {len(new_data)}") - dump_jsonl(new_data, f"data/WikiMIA_length{length}.jsonl") - huggingface_dataset = Dataset.from_dict(change_type(new_data)) - huggingface_dataset.push_to_hub("swj0419/WikiMIA", f"WikiMIA_length{length}") - - diff --git a/detect-pretrain-code_ucl/src/eval.py b/detect-pretrain-code_ucl/src/eval.py deleted file mode 100644 index 90452f6..0000000 --- a/detect-pretrain-code_ucl/src/eval.py +++ /dev/null @@ -1,100 +0,0 @@ -import logging -logging.basicConfig(level='ERROR') -import numpy as np -from tqdm import tqdm -import json -from collections import defaultdict -import matplotlib.pyplot as plt -from sklearn.metrics import auc, roc_curve -import matplotlib -import random - - -matplotlib.rcParams['pdf.fonttype'] = 42 -matplotlib.rcParams['ps.fonttype'] = 42 - - -matplotlib.rcParams['pdf.fonttype'] = 42 -matplotlib.rcParams['ps.fonttype'] = 42 - -# plot data -def sweep(score, x): - """ - Compute a ROC curve and then return the FPR, TPR, AUC, and ACC. - """ - fpr, tpr, _ = roc_curve(x, -score) - acc = np.max(1-(fpr+(1-tpr))/2) - return fpr, tpr, auc(fpr, tpr), acc - - -def do_plot(prediction, answers, sweep_fn=sweep, metric='auc', legend="", output_dir=None): - """ - Generate the ROC curves by using ntest models as test models and the rest to train. 
- """ - fpr, tpr, auc, acc = sweep_fn(np.array(prediction), np.array(answers, dtype=bool)) - - low = tpr[np.where(fpr<.05)[0][-1]] - # bp() - print('Attack %s AUC %.4f, Accuracy %.4f, TPR@5%%FPR of %.4f\n'%(legend, auc,acc, low)) - - metric_text = '' - if metric == 'auc': - metric_text = 'auc=%.3f'%auc - elif metric == 'acc': - metric_text = 'acc=%.3f'%acc - - plt.plot(fpr, tpr, label=legend+metric_text) - return legend, auc,acc, low - - -def fig_fpr_tpr(all_output, output_dir): - print("output_dir", output_dir) - answers = [] - metric2predictions = defaultdict(list) - for ex in all_output: - answers.append(ex["label"]) - for metric in ex["pred"].keys(): - if ("raw" in metric) and ("clf" not in metric): - continue - metric2predictions[metric].append(ex["pred"][metric]) - - plt.figure(figsize=(4,3)) - with open(f"{output_dir}/auc.txt", "w") as f: - for metric, predictions in metric2predictions.items(): - legend, auc, acc, low = do_plot(predictions, answers, legend=metric, metric='auc', output_dir=output_dir) - f.write('%s AUC %.4f, Accuracy %.4f, TPR@0.1%%FPR of %.4f\n'%(legend, auc, acc, low)) - - plt.semilogx() - plt.semilogy() - plt.xlim(1e-5,1) - plt.ylim(1e-5,1) - plt.xlabel("False Positive Rate") - plt.ylabel("True Positive Rate") - plt.plot([0, 1], [0, 1], ls='--', color='gray') - plt.subplots_adjust(bottom=.18, left=.18, top=.96, right=.96) - plt.legend(fontsize=8) - plt.savefig(f"{output_dir}/auc.png") - - -def load_jsonl(input_path): - with open(input_path, 'r') as f: - data = [json.loads(line) for line in tqdm(f)] - random.seed(0) - random.shuffle(data) - return data - -def dump_jsonl(data, path): - with open(path, 'w') as f: - for line in tqdm(data): - f.write(json.dumps(line) + "\n") - -def read_jsonl(path): - with open(path, 'r') as f: - return [json.loads(line) for line in tqdm(f)] - -def convert_huggingface_data_to_list_dic(dataset): - all_data = [] - for i in range(len(dataset)): - ex = dataset[i] - all_data.append(ex) - return all_data diff --git a/detect-pretrain-code_ucl/src/options.py b/detect-pretrain-code_ucl/src/options.py deleted file mode 100644 index a6b92e7..0000000 --- a/detect-pretrain-code_ucl/src/options.py +++ /dev/null @@ -1,24 +0,0 @@ -import argparse -import os -from pathlib import Path -import logging - -logger = logging.getLogger(__name__) - -class Options(): - def __init__(self): - self.parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - self.initialize_parser() - - def initialize_parser(self): - self.parser.add_argument('--target_model', type=str, default="text-davinci-003", help="the model to attack: huggyllama/llama-65b, text-davinci-003") - self.parser.add_argument('--ref_model', type=str, default="") - self.parser.add_argument('--output_dir', type=str, default="out") - self.parser.add_argument('--data', type=str, default="swj0419/WikiMIA", help="the dataset to evaluate: default is WikiMIA") - self.parser.add_argument('--length', type=int, default=64, help="the length of the input text to evaluate. Choose from 32, 64, 128, 256") - self.parser.add_argument('--key_name', type=str, default="input", help="the key name corresponding to the input text. 
Selecting from: input, parapgrase") - self.parser.add_argument('--cache_dir', type=str, default=".cache", help="the cache directory to store the dataset") - - - - diff --git a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc.png b/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc.png deleted file mode 100644 index 9246d48..0000000 Binary files a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc.png and /dev/null differ diff --git a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc.txt b/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc.txt deleted file mode 100644 index 02b370f..0000000 --- a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc.txt +++ /dev/null @@ -1,9 +0,0 @@ -ppl AUC 0.5865, Accuracy 0.6090, TPR@0.1%FPR of 0.2308 -ppl/Ref_ppl (calibrate PPL to the reference model) AUC 0.7019, Accuracy 0.7163, TPR@0.1%FPR of 0.4231 -ppl/lowercase_ppl AUC 0.5753, Accuracy 0.5994, TPR@0.1%FPR of 0.1538 -ppl/zlib AUC 0.7179, Accuracy 0.7179, TPR@0.1%FPR of 0.3077 -Min_20.0% Prob AUC 0.6186, Accuracy 0.6474, TPR@0.1%FPR of 0.1538 -Min_30.0% Prob AUC 0.6106, Accuracy 0.6250, TPR@0.1%FPR of 0.1538 -Min_40.0% Prob AUC 0.5913, Accuracy 0.5929, TPR@0.1%FPR of 0.1923 -Min_50.0% Prob AUC 0.5833, Accuracy 0.6106, TPR@0.1%FPR of 0.2308 -Min_60.0% Prob AUC 0.5849, Accuracy 0.6106, TPR@0.1%FPR of 0.2308 diff --git a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc_google.txt b/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc_google.txt deleted file mode 100644 index 0c06a75..0000000 --- a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/auc_google.txt +++ /dev/null @@ -1,10 +0,0 @@ -ppl 0.663 -ppl/Ref_ppl (calibrate PPL to the reference model) 0.626 -ppl/lowercase_ppl 0.58 -ppl/zlib 0.619 -mean 0.663 -Min_0.2% Prob 0.677 -Min_0.3% Prob 0.678 -Min_0.4% Prob 0.675 -Min_0.5% Prob 0.67 -Min_0.6% Prob 0.666 diff --git a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/low_google.txt b/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/low_google.txt deleted file mode 100644 index 473d29c..0000000 --- a/detect-pretrain-code_ucl/src/out/huggyllama/llama-13b_huggyllama/llama-7b/input/low_google.txt +++ /dev/null @@ -1,10 +0,0 @@ -ppl 0.086 -ppl/Ref_ppl (calibrate PPL to the reference model) 0.074 -ppl/lowercase_ppl 0.069 -ppl/zlib 0.137 -mean 0.086 -Min_0.2% Prob 0.127 -Min_0.3% Prob 0.104 -Min_0.4% Prob 0.096 -Min_0.5% Prob 0.089 -Min_0.6% Prob 0.089 diff --git a/detect-pretrain-code_ucl/src/out/text-davinci-003_huggyllama/llama-7b/input/auc.png b/detect-pretrain-code_ucl/src/out/text-davinci-003_huggyllama/llama-7b/input/auc.png deleted file mode 100644 index de09df5..0000000 Binary files a/detect-pretrain-code_ucl/src/out/text-davinci-003_huggyllama/llama-7b/input/auc.png and /dev/null differ diff --git a/detect-pretrain-code_ucl/src/out/text-davinci-003_huggyllama/llama-7b/input/auc.txt b/detect-pretrain-code_ucl/src/out/text-davinci-003_huggyllama/llama-7b/input/auc.txt deleted file mode 100644 index 6e037cd..0000000 --- a/detect-pretrain-code_ucl/src/out/text-davinci-003_huggyllama/llama-7b/input/auc.txt +++ /dev/null @@ -1,9 +0,0 @@ -ppl AUC 0.7420, Accuracy 0.7452, TPR@0.1%FPR of 0.3846 -ppl/Ref_ppl (calibrate PPL to the reference model) AUC 
0.2949, Accuracy 0.5176, TPR@0.1%FPR of 0.0769 -ppl/lowercase_ppl AUC 0.6747, Accuracy 0.6747, TPR@0.1%FPR of 0.3077 -ppl/zlib AUC 0.7564, Accuracy 0.7356, TPR@0.1%FPR of 0.4231 -Min_20.0% Prob AUC 0.7724, Accuracy 0.8237, TPR@0.1%FPR of 0.4231 -Min_30.0% Prob AUC 0.7660, Accuracy 0.8237, TPR@0.1%FPR of 0.3846 -Min_40.0% Prob AUC 0.7660, Accuracy 0.8029, TPR@0.1%FPR of 0.4231 -Min_50.0% Prob AUC 0.7468, Accuracy 0.7644, TPR@0.1%FPR of 0.3846 -Min_60.0% Prob AUC 0.7420, Accuracy 0.7452, TPR@0.1%FPR of 0.3846 diff --git a/detect-pretrain-code_ucl/src/run.py b/detect-pretrain-code_ucl/src/run.py deleted file mode 100755 index baea8d4..0000000 --- a/detect-pretrain-code_ucl/src/run.py +++ /dev/null @@ -1,187 +0,0 @@ -import logging -logging.basicConfig(level='ERROR') -import numpy as np -from pathlib import Path -import torch -import zlib -from transformers import AutoTokenizer, AutoModelForCausalLM -from tqdm import tqdm -import numpy as np -from datasets import load_dataset -from options import Options -from eval import * - - -# def load_model(name1, name2): -# if "davinci" in name1: -# model1 = None -# tokenizer1 = None -# else: -# model1 = AutoModelForCausalLM.from_pretrained(name1, return_dict=True, device_map='auto') -# model1.eval() -# tokenizer1 = AutoTokenizer.from_pretrained(name1) - -# if "davinci" in name2: -# model2 = None -# tokenizer2 = None -# else: -# model2 = AutoModelForCausalLM.from_pretrained(name2, return_dict=True, device_map='auto') -# model2.eval() -# tokenizer2 = AutoTokenizer.from_pretrained(name2) -# return model1, model2, tokenizer1, tokenizer2 - -def load_model(name1): - model1 = AutoModelForCausalLM.from_pretrained(name1, return_dict=True, device_map='auto', cache_dir=args.cache_dir) - model1.eval() - tokenizer1 = AutoTokenizer.from_pretrained(name1, cache_dir=args.cache_dir) - - return model1, tokenizer1 - -#def calculatePerplexity_gpt3(prompt, modelname): -# prompt = prompt.replace('\x00','') -# responses = None -# # Put your API key here -# openai.api_key = "YOUR_API_KEY" # YOUR_API_KEY -# while responses is None: -# try: -# responses = openai.Completion.create( -# engine=modelname, -# prompt=prompt, -# max_tokens=0, -# temperature=1.0, -# logprobs=5, -# echo=True) -# except openai.error.InvalidRequestError: -# print("too long for openai API") -# data = responses["choices"][0]["logprobs"] -# all_prob = [d for d in data["token_logprobs"] if d is not None] -# p1 = np.exp(-np.mean(all_prob)) -# return p1, all_prob, np.mean(all_prob) - - -def calculatePerplexity(sentence, model, tokenizer, gpu): - """ - exp(loss) - """ - input_ids = torch.tensor(tokenizer.encode(sentence)).unsqueeze(0) - input_ids = input_ids.to(gpu) - with torch.no_grad(): - outputs = model(input_ids, labels=input_ids) - loss, logits = outputs[:2] - - ''' - extract logits: - ''' - # Apply softmax to the logits to get probabilities - probabilities = torch.nn.functional.log_softmax(logits, dim=-1) - # probabilities = torch.nn.functional.softmax(logits, dim=-1) - all_prob = [] - input_ids_processed = input_ids[0][1:] - for i, token_id in enumerate(input_ids_processed): - probability = probabilities[0, i, token_id].item() - all_prob.append(probability) - return torch.exp(loss).item(), all_prob, loss.item() - - -# def inference(model1, model2, tokenizer1, tokenizer2, text, ex, modelname1, modelname2): -# pred = {} - -# if "davinci" in modelname1: -# p1, all_prob, p1_likelihood = calculatePerplexity_gpt3(text, modelname1) -# p_lower, _, p_lower_likelihood = calculatePerplexity_gpt3(text.lower(), 
modelname1) -# else: -# p1, all_prob, p1_likelihood = calculatePerplexity(text, model1, tokenizer1, gpu=model1.device) -# p_lower, _, p_lower_likelihood = calculatePerplexity(text.lower(), model1, tokenizer1, gpu=model1.device) - -# if "davinci" in modelname2: -# p_ref, all_prob_ref, p_ref_likelihood = calculatePerplexity_gpt3(text, modelname2) -# else: -# p_ref, all_prob_ref, p_ref_likelihood = calculatePerplexity(text, model2, tokenizer2, gpu=model2.device) - -# # ppl -# pred["ppl"] = p1 -# # Ratio of log ppl of large and small models -# pred["ppl/Ref_ppl (calibrate PPL to the reference model)"] = p1_likelihood-p_ref_likelihood - - -# # Ratio of log ppl of lower-case and normal-case -# pred["ppl/lowercase_ppl"] = -(np.log(p_lower) / np.log(p1)).item() -# # Ratio of log ppl of large and zlib -# zlib_entropy = len(zlib.compress(bytes(text, 'utf-8'))) -# pred["ppl/zlib"] = np.log(p1)/zlib_entropy -# # min-k prob -# for ratio in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]: -# k_length = int(len(all_prob)*ratio) -# topk_prob = np.sort(all_prob)[:k_length] -# pred[f"Min_{ratio*100}% Prob"] = -np.mean(topk_prob).item() - -# ex["pred"] = pred -# return ex - -def inference(model1, tokenizer1, text, ex, modelname1): - pred = {} - - p1, all_prob, p1_likelihood = calculatePerplexity(text, model1, tokenizer1, gpu=model1.device) - p_lower, _, p_lower_likelihood = calculatePerplexity(text.lower(), model1, tokenizer1, gpu=model1.device) - - # ppl - pred["ppl"] = p1 - # Ratio of log ppl of large and small models - # pred["ppl/Ref_ppl (calibrate PPL to the reference model)"] = p1_likelihood-p_ref_likelihood - - # Ratio of log ppl of lower-case and normal-case - pred["ppl/lowercase_ppl"] = -(np.log(p_lower) / np.log(p1)).item() - # Ratio of log ppl of large and zlib - zlib_entropy = len(zlib.compress(bytes(text, 'utf-8'))) - pred["ppl/zlib"] = np.log(p1)/zlib_entropy - # min-k prob - for ratio in [0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]: - k_length = int(len(all_prob)*ratio) - topk_prob = np.sort(all_prob)[:k_length] - pred[f"Min_{ratio*100}% Prob"] = -np.mean(topk_prob).item() - ex["pred"] = pred - return ex - -# def evaluate_data(test_data, model1, model2, tokenizer1, tokenizer2, col_name, modelname1, modelname2): -# print(f"all data size: {len(test_data)}") -# all_output = [] -# test_data = test_data -# for ex in tqdm(test_data): -# text = ex[col_name] -# new_ex = inference(model1, model2, tokenizer1, tokenizer2, text, ex, modelname1, modelname2) -# all_output.append(new_ex) -# return all_output - -def evaluate_data(test_data, model1, tokenizer1, col_name, modelname1): - print(f"all data size: {len(test_data)}") - all_output = [] - test_data = test_data - for ex in tqdm(test_data): - text = ex[col_name] - new_ex = inference(model1, tokenizer1, text, ex, modelname1) - all_output.append(new_ex) - return all_output - - -if __name__ == '__main__': - args = Options() - args = args.parser.parse_args() - args.output_dir = f"{args.output_dir}/{args.target_model}_{args.ref_model}/{args.key_name}" - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - - # load model and data - # model1, model2, tokenizer1, tokenizer2 = load_model(args.target_model, args.ref_model) - model1, tokenizer1 = load_model(args.target_model) - if "jsonl" in args.data: - data = load_jsonl(f"{args.data}") - else: # load data from huggingface - dataset = load_dataset(args.data, split=f"WikiMIA_length{args.length}", cache_dir=args.cache_dir) - #dataset = load_dataset("PKU-Alignment/PKU-SafeRLHF", split="test", cache_dir=args.cache_dir) - data = 
convert_huggingface_data_to_list_dic(dataset) - - # all_output = evaluate_data(data, model1, model2, tokenizer1, tokenizer2, args.key_name, args.target_model, args.ref_model) - all_output = evaluate_data(data, model1, tokenizer1, args.key_name, args.target_model) - - - fig_fpr_tpr(all_output, args.output_dir)#, args.key_name) - diff --git a/eval_framework_tasks/eval_results.py b/eval_framework_tasks/eval_results.py index 2265a43..6120f7d 100644 --- a/eval_framework_tasks/eval_results.py +++ b/eval_framework_tasks/eval_results.py @@ -1,3 +1,13 @@ +# Copyright (C) 2024 UCL CS SNLP Naturalnego 语言 Töötlus group +# - Szymon Duchniewicz +# - Yadong Liu +# - Carmen Meinson +# - Andrzej Szablewski +# - Zhe Yu +# +# This software is released under the MIT License. +# https://opensource.org/licenses/MIT + import json import argparse import os diff --git a/llm_unlearn_ucl/add_toknizers_to_models.py b/llm_unlearn_ucl/add_tokenizers_to_models.py similarity index 75% rename from llm_unlearn_ucl/add_toknizers_to_models.py rename to llm_unlearn_ucl/add_tokenizers_to_models.py index af2ad7b..cce1b0a 100644 --- a/llm_unlearn_ucl/add_toknizers_to_models.py +++ b/llm_unlearn_ucl/add_tokenizers_to_models.py @@ -1,3 +1,13 @@ +# Copyright (C) 2024 UCL CS SNLP Naturalnego 语言 Töötlus group +# - Szymon Duchniewicz +# - Yadong Liu +# - Carmen Meinson +# - Andrzej Szablewski +# - Zhe Yu +# +# This software is released under the MIT License. +# https://opensource.org/licenses/MIT + from transformers import AutoTokenizer import argparse diff --git a/llm_unlearn_ucl/parse_args.py b/llm_unlearn_ucl/parse_args.py index 34b7857..66c50da 100644 --- a/llm_unlearn_ucl/parse_args.py +++ b/llm_unlearn_ucl/parse_args.py @@ -1,3 +1,15 @@ +# Copyright (C) 2024 UCL CS SNLP Naturalnego 语言 Töötlus group +# - Szymon Duchniewicz +# - Yadong Liu +# - Carmen Meinson +# - Andrzej Szablewski +# - Zhe Yu +# +# Adapted from https://github.com/kevinyaobytedance/llm_unlearn. +# +# This software is released under the MIT License. +# https://opensource.org/licenses/MIT + import argparse diff --git a/llm_unlearn_ucl/test_eval.py b/llm_unlearn_ucl/test_eval.py index 466aafa..4565071 100644 --- a/llm_unlearn_ucl/test_eval.py +++ b/llm_unlearn_ucl/test_eval.py @@ -1,3 +1,13 @@ +# Copyright (C) 2024 UCL CS SNLP Naturalnego 语言 Töötlus group +# - Szymon Duchniewicz +# - Yadong Liu +# - Carmen Meinson +# - Andrzej Szablewski +# - Zhe Yu +# +# This software is released under the MIT License. +# https://opensource.org/licenses/MIT + import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline diff --git a/llm_unlearn_ucl/unlearn_harm.py b/llm_unlearn_ucl/unlearn_harm.py index fcbc672..6cbfbf5 100644 --- a/llm_unlearn_ucl/unlearn_harm.py +++ b/llm_unlearn_ucl/unlearn_harm.py @@ -1,4 +1,11 @@ -# Copyright (C) 2023 ByteDance. All Rights Reserved. +# Copyright (C) 2024 UCL CS SNLP Naturalnego 语言 Töötlus group +# - Szymon Duchniewicz +# - Yadong Liu +# - Carmen Meinson +# - Andrzej Szablewski +# - Zhe Yu +# +# Adapted from https://github.com/kevinyaobytedance/llm_unlearn. # # This software is released under the MIT License. # https://opensource.org/licenses/MIT @@ -6,7 +13,7 @@ """ A script to show an example of how to unlearn harmfulness. -The dataset used in is `PKU-SafeRLHF`. Model support OPT-1.3B, OPT-2.7B, and Llama 2 (7B). +The dataset used in is `PKU-SafeRLHF` and TruthfulQA. Model supports OPT-1.3B. 
""" import json diff --git a/llm_unlearn_ucl/utils.py b/llm_unlearn_ucl/utils.py index c82179d..1e328f1 100644 --- a/llm_unlearn_ucl/utils.py +++ b/llm_unlearn_ucl/utils.py @@ -1,4 +1,11 @@ -# Copyright (C) 2023 ByteDance. All Rights Reserved. +# Copyright (C) 2024 UCL CS SNLP Naturalnego 语言 Töötlus group +# - Szymon Duchniewicz +# - Yadong Liu +# - Carmen Meinson +# - Andrzej Szablewski +# - Zhe Yu +# +# Adapted from https://github.com/kevinyaobytedance/llm_unlearn. # # This software is released under the MIT License. # https://opensource.org/licenses/MIT diff --git a/mink_data_probability/README.md b/mink_data_probability/README.md deleted file mode 100644 index 87419a7..0000000 --- a/mink_data_probability/README.md +++ /dev/null @@ -1,10 +0,0 @@ -# Project to get all min-k% log likelihood probability for ANY dataset - - -Use: -`python3 src/run.py --target_model