Generalized LLM-Based Social Simulation Instrument

Also available in Simplified Chinese (简体中文).

Introduction

This repository extends the work of Generative Agents (GA) to create a more comprehensive and scalable simulation tool. Our project includes both an offline simulation module (based on GA) and an online simulation module. We are also developing a user-friendly interface for launching, displaying, and interacting with the simulation.

Table of Contents

  • Environment Setup
  • Runtime Configuration
  • Running the System
  • Debugging
  • Shutting Down
  • Acknowledgements

Environment Setup

Prerequisites

  • Linux operating system (recommended)
  • Git
  • Internet connection

Frontend Environment Setup

Installing Node.js using NVM (Node Version Manager)

  1. Install NVM:

    curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
  2. Restart your terminal or run:

    source ~/.bashrc
  3. Install and use the latest LTS version of Node.js:

    nvm install --lts
    nvm use --lts
  4. Verify the installation:

    node --version
    npm --version

Installing Frontend Dependencies

Install pnpm package manager: https://pnpm.io/installation

Navigate to the frontend directory and install dependencies:

cd frontend
pnpm install

Backend Environment Setup

We recommend using Miniconda to manage Python environments and avoid package conflicts.

Installing Miniconda

  1. Download the Miniconda installer:

    wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
  2. Run the installer:

    bash Miniconda3-latest-Linux-x86_64.sh
  3. Follow the prompts to complete the installation.

  4. Restart your terminal or run:

    source ~/.bashrc

Creating a Conda Environment and Installing Dependencies

  1. Create a new conda environment:

    conda create -n llm-sim python=3.12
  2. Activate the environment:

    conda activate llm-sim
  3. Install PyTorch (adjust the command based on your CUDA version if using GPU): https://pytorch.org/get-started/locally/

    pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
  4. Install other dependencies:

    pip install -r requirements.txt

Runtime Configuration

Changing Ports

  1. Copy config.template.yaml to config.yaml.

  2. Set server_ip, front_port, and back_port in config.yaml to your server address and ports that are available on your machine:

server_ip: <your server ip address>
front_port: <frontend port>
back_port: <backend port>
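
Before launching, you can verify that the configured ports are actually free. The snippet below is a minimal, stdlib-only sketch, not part of the repository; it assumes the flat key: value layout and the *_port key names shown above.

```python
# Sketch: check that configured ports are free before launching.
# Assumes the front_port/back_port keys from the template above.
import socket

def is_port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

def parse_ports(config_text: str) -> dict:
    """Pull *_port keys out of a flat 'key: value' config snippet."""
    ports = {}
    for line in config_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key.endswith("_port") and value.isdigit():
                ports[key] = int(value)
    return ports

if __name__ == "__main__":
    sample = "server_ip: 127.0.0.1\nfront_port: 5173\nback_port: 8000\n"
    for name, port in parse_ports(sample).items():
        status = "free" if is_port_free(port) else "IN USE"
        print(f"{name}={port}: {status}")
```

Run it against the contents of your config.yaml; any port reported as in use needs a different value.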

LLM Configuration

Copy config_template.py to config.py under reverie/backend_server/utils. Use the following template and replace the placeholders with your actual API keys and preferences:

openai_api_base = "https://api.openai.com/v1"
openai_api_key = "<Your OpenAI API Key>"
override_model = "<Your Model Name>"  # If non-empty, this model is used for all API calls
google_api_key = "<Your Google API Key>"
google_api_cx = "c2ab1202fad094a87"
key_owner = "<Your Name>"
maze_assets_loc = "../api/static/assets"
env_matrix = f"{maze_assets_loc}/the_ville/matrix"
env_visuals = f"{maze_assets_loc}/the_ville/visuals"
storage_path = "../storage"
temp_storage_path = "../temp_storage"
collision_block_id = "32125"
debug = True
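
A common mistake is to copy the template and forget to fill in a value. The helper below is hypothetical, not part of the repository; it only assumes the angle-bracket placeholder convention of the template above.

```python
# Sketch: detect placeholder values (e.g. "<Your OpenAI API Key>") that
# were never replaced in config.py. Hypothetical helper, not repo code.
import re

PLACEHOLDER = re.compile(r'"<[^">]*>"')

def unfilled_placeholders(config_text: str) -> list:
    """Return any quoted <...> placeholder strings still present."""
    return PLACEHOLDER.findall(config_text)

if __name__ == "__main__":
    sample = 'openai_api_key = "<Your OpenAI API Key>"\nkey_owner = "Alice"\n'
    leftover = unfilled_placeholders(sample)
    if leftover:
        print("Still unfilled:", ", ".join(leftover))
```

Running it over your finished config.py should report nothing.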

Running the System

  1. Ensure you're in the project root directory.

  2. Start the system:

python start.py

Use python start.py --help for additional options.

If you want to run the backend directly in your terminal instead:

cd reverie/backend_server
python reverie.py

Debugging

Set the LOG_LEVEL environment variable to control logging verbosity:

export LOG_LEVEL=debug  # Options: debug, info, warning, error, critical
python start.py

To save server logs, use the --save argument:

python start.py --save

This will save logs to webpage.log, frontend.log, and backend.log.
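
For reference, a launcher like start.py could map the LOG_LEVEL variable onto Python's logging levels roughly as follows. This is a sketch of the mechanism; the actual implementation in this repository may differ.

```python
# Sketch: translate the LOG_LEVEL environment variable into a Python
# logging level. Unknown values fall back to INFO (an assumption here).
import logging
import os

def level_from_env(default: str = "info") -> int:
    """Map LOG_LEVEL (debug/info/warning/error/critical) to a logging level."""
    name = os.environ.get("LOG_LEVEL", default).upper()
    return getattr(logging, name, logging.INFO)

logging.basicConfig(level=level_from_env())
logging.getLogger(__name__).debug("only visible when LOG_LEVEL=debug")
```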

Shutting Down

Press Ctrl+C to shut down all services.

Acknowledgements

This project builds upon the work of Generative Agents (GA). We extend our gratitude to the original authors for their contributions to the field.