This is an evaluation harness for the benchmark described in CIBench: Evaluating Your LLMs with a Code Interpreter Plugin.
[Paper] [Project Page] [LeaderBoard]
While LLM-based agents, which use external tools to solve complex problems, have made significant progress, benchmarking their ability is challenging, thereby hindering a clear understanding of their limitations. In this paper, we propose an interactive evaluation framework, named CIBench, to comprehensively assess LLMs' ability to utilize code interpreters for data science tasks. Our evaluation framework includes an evaluation dataset and two evaluation modes. The evaluation dataset is constructed using an LLM-human cooperative approach and simulates an authentic workflow by leveraging consecutive and interactive IPython sessions. The two evaluation modes assess LLMs' ability with and without human assistance. We conduct extensive experiments to analyze the ability of 24 LLMs on CIBench and provide valuable insights for future LLMs in code interpreter utilization.
CIBench is evaluated with OpenCompass. Please install OpenCompass first.
conda create --name opencompass python=3.10 pytorch torchvision pytorch-cuda -c nvidia -c pytorch -y
conda activate opencompass
git clone https://github.com/open-compass/opencompass opencompass
cd opencompass
pip install -e .
pip install -r requirements/agent.txt
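Optionally, verify the install from the activated environment before moving on:

# Quick sanity check: this should exit silently if the editable install worked.
python -c "import opencompass"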
Then,
cd ..
git clone https://github.com/open-compass/CIBench.git
cd CIBench
Move the cibench_eval directory into opencompass/config.
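For example, from inside the CIBench checkout (a sketch assuming the side-by-side layout cloned above; the directory is named config here, but some OpenCompass versions use configs):

# Move the CIBench configs into the OpenCompass tree; adjust the destination
# if your OpenCompass config directory is named differently.
mv cibench_eval ../opencompass/config/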
You can download the CIBench dataset from here.
Then, unzip the dataset and place it in OpenCompass/data. The data path should look like OpenCompass/data/cibench_dataset/cibench_{generation or template}.
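After unzipping, the layout should look like this:

OpenCompass/
└── data/
    └── cibench_dataset/
        ├── cibench_generation/
        └── cibench_template/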
Finally, use the following script to download the necessary data.
cd OpenCompass/data/cibench_dataset
sh collect_datasources.sh
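Optionally, sanity-check the collected sources from the same directory (a hypothetical check, not part of the official scripts):

# An empty result means no zero-byte files were left by an interrupted download.
find . -type f -size 0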
- Download the Hugging Face model to your local path.
- Run the evaluation with the following script in the opencompass directory.
python run.py config/cibench_eval/eval_cibench_hf.py
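If a run fails or you want a dedicated output directory, run.py accepts the standard OpenCompass flags; a minimal sketch, assuming a recent OpenCompass version:

# --debug runs tasks sequentially with full tracebacks; -w sets the work directory.
python run.py config/cibench_eval/eval_cibench_hf.py --debug -w outputs/cibench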
Note that the current accelerator config (-a lmdeploy) does not support the CodeAgent model. If you want to use lmdeploy to accelerate the evaluation, please refer to lmdeploy_internlm2_chat_7b and write the model config yourself.
Once all test samples have been evaluated, you can check the results in outputs/cibench.
Note that the output images will be saved in output_images.
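Each run is written to a timestamped subdirectory; a sketch of where to look for scores, assuming OpenCompass's default output layout:

# The CSV under summary/ aggregates the scores for each run.
ls outputs/cibench/*/summary/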
For more detailed and comprehensive benchmark results, please refer to the 🏆 CIBench official leaderboard!
CIBench is built with Lagent and OpenCompass. Thanks for their awesome work!
This project is released under the Apache 2.0 license.