Add EQ-Bench as per EleutherAI#1459 (EleutherAI#1511)
* Start adding eq-bench
* Start adding to yaml and utils
* Get metric working
* Add README
* Handle cases where answer is not parseable
* Deal with unparseable answers and add percent_parseable metric
* Update README
Showing 3 changed files with 129 additions and 0 deletions.
@@ -0,0 +1,55 @@
# EQ-Bench

Title: `EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models`

Abstract: https://arxiv.org/abs/2312.06281

EQ-Bench is a benchmark for language models designed to assess emotional intelligence.

Why emotional intelligence? One reason is that it represents a subset of abilities that are important for the user experience and that aren't explicitly tested by other benchmarks. Another reason is that it's not trivial to improve scores by fine-tuning for the benchmark, which makes it harder to "game" the leaderboard.

EQ-Bench is a little different from traditional psychometric tests. It uses a specific question format in which the subject has to read a dialogue and then rate the intensity of possible emotional responses of one of the characters. Every question is interpretative and assesses the ability to predict the magnitude of the 4 presented emotions. The test is graded without the need for a judge (so there is no length bias). It's cheap to run (only 171 questions) and produces results that correlate strongly with human preference (Arena ELO) and multi-domain benchmarks like MMLU.

Homepage: https://eqbench.com/

NOTE: There are some key differences between the lm-evaluation-harness version and the implementation described in the EQ-Bench paper (these have been OK'd by the author):

- The lm-eval version uses the EQ-Bench v2 test set (171 questions) and score calculation. It does not incorporate the revision part of the prompt, as per v2.1 (https://github.com/EQ-bench/EQ-Bench).
- No retries in the lm-eval version (the EQ-Bench pipeline retries with successively higher temperatures if it encounters unparseable answers).
- In the original implementation, unparseable answers are excluded from the final score, and at least 83% of answers have to be parseable or a fail is returned. The lm-eval version instead assigns a score of 0 to unparseable answers and has no fail criterion, so lower-performing models may score differently than they do on the EQ-Bench leaderboard. An illustrative sketch of the expected answer format is shown below.

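To make the expected answer format concrete, here is a minimal illustrative sketch; the emotion names and scores are invented, but the regular expression is the one the scoring function in `utils.py` uses, and any completion that does not yield exactly four emotion/score pairs receives 0 for both metrics:

```python
import re

# Hypothetical well-formed completion: four "Emotion: score" lines (scores 0-10).
completion = """Surprised: 2
Confused: 1
Angry: 7
Forgiving: 0"""

parsed = dict(re.findall(r"(\w+):\s+(\d+)", completion))
print(parsed)  # {'Surprised': '2', 'Confused': '1', 'Angry': '7', 'Forgiving': '0'}
```
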
### Citation

```bibtex
@misc{paech2023eqbench,
    title={EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models},
    author={Samuel J. Paech},
    year={2023},
    eprint={2312.06281},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* Not part of a group yet

#### Tasks

* `eq_bench`

### Checklist

* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
@@ -0,0 +1,20 @@
task: eq_bench
dataset_path: pbevan11/EQ-Bench
output_type: generate_until
validation_split: validation
doc_to_text: prompt
doc_to_target: reference_answer_fullscale
process_results: !function utils.calculate_score_fullscale
generation_kwargs:
  do_sample: false
  temperature: 0.0
  max_gen_toks: 80
metric_list:
  - metric: eqbench
    aggregation: mean
    higher_is_better: true
  - metric: percent_parseable
    aggregation: mean
    higher_is_better: true
metadata:
  version: 2.1
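The important line here is `process_results: !function utils.calculate_score_fullscale`: the harness calls that function once per document with the document and the model's generations, and the keys of the returned dict must match the metric names in `metric_list`. Below is a minimal sketch of that contract, assuming the standard lm-evaluation-harness calling convention; the body is illustrative, not the real implementation, which appears in `utils.py` further down.

```python
from typing import Dict, List


def process_results_contract(doc: dict, results: List[str]) -> Dict[str, float]:
    """Shape of a `process_results` hook referenced from a task YAML.

    `doc` is one row of the dataset (here it carries `reference_answer_fullscale`),
    `results` holds the model generation(s) for that row, and the returned keys
    must match the metric names declared in `metric_list`.
    """
    # ... parse results[0] and compare it against doc["reference_answer_fullscale"] ...
    return {"eqbench": 0.0, "percent_parseable": 0.0}  # placeholder values
```
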
@@ -0,0 +1,54 @@
import math
import re


def calculate_score_fullscale(docs, results):
    reference = eval(docs["reference_answer_fullscale"])
    user = dict(re.findall(r"(\w+):\s+(\d+)", results[0]))
    # First check that the emotions specified in the answer match those in the reference
    if len(user.items()) != 4:
        # print('! Error: 4 emotions were not returned')
        # print(user)
        return {"eqbench": 0, "percent_parseable": 0}
    emotions_dict = {}
    for emotion, user_emotion_score in user.items():
        for i in range(1, 5):
            if emotion == reference[f"emotion{i}"]:
                emotions_dict[emotion] = True
    if len(emotions_dict) != 4:
        print("! Error: emotions did not match reference")
        print(user)
        return {"eqbench": 0, "percent_parseable": 0}

    # Tally of the difference from the reference answers for this question
    difference_tally = 0

    # Iterate over each emotion in the user's answers.
    for emotion, user_emotion_score in user.items():
        # If this emotion is in the reference, calculate the difference between
        # the user's score and the reference score.
        for i in range(1, 5):
            if emotion == reference[f"emotion{i}"]:
                d = abs(
                    float(user_emotion_score) - float(reference[f"emotion{i}_score"])
                )
                # d will be a value between 0 and 10
                if d == 0:
                    scaled_difference = 0
                elif d <= 5:
                    # S-shaped scaling function: 6.5 / (1 + e^(-1.2 * (d - 4)))
                    # (plotted at https://www.desmos.com/calculator)
                    scaled_difference = 6.5 * (1 / (1 + math.e ** (-1.2 * (d - 4))))
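                    # For intuition: this curve gives ~0.17 at d=1, ~1.5 at d=3,
                    # 3.25 at d=4, and ~5.0 at d=5, so small deviations are heavily
                    # discounted while larger ones approach the raw distance d.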
                else:
                    scaled_difference = d
                difference_tally += scaled_difference

    # Invert the difference tally so that the closer the answer is to the reference,
    # the higher the score. The adjustment constant is chosen such that answering
    # randomly produces a score of zero.
    adjust_const = 0.7477
    final_score = 10 - (difference_tally * adjust_const)
    final_score_percent = final_score * 10

    return {"eqbench": final_score_percent, "percent_parseable": 100}