Official repository for the upcoming paper **xLSTM4Rec**. This repository contains the code and resources needed to reproduce the experiments and results presented in the paper.

xLSTM4Rec leverages the xLSTM architecture to capture sequential dependencies in user-item interactions, achieving near state-of-the-art performance on recommendation tasks.
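To make the task concrete: sequential recommendation means predicting a user's next item from their interaction history. The sketch below is a trivial bigram-frequency baseline that illustrates the input/output shape of the problem — it is **not** the xLSTM model, just a minimal stand-in for the task xLSTM4Rec solves with a learned sequence model.

```python
# Minimal illustration of the sequential recommendation task:
# given a user's interaction history, score candidate next items.
# This is a naive bigram baseline, NOT the xLSTM model.
from collections import Counter, defaultdict

def fit_bigram(sequences):
    """Count item-to-next-item transitions across user histories."""
    trans = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            trans[prev][nxt] += 1
    return trans

def recommend(trans, history, k=2):
    """Rank candidate next items given the last item in the history."""
    if not history:
        return []
    return [item for item, _ in trans[history[-1]].most_common(k)]

histories = [[1, 2, 3], [1, 2, 4], [2, 3, 5]]
model = fit_bigram(histories)
print(recommend(model, [9, 2]))  # items most often observed after item 2 -> [3, 4]
```

A learned model like xLSTM replaces the last-item lookup with a representation of the *entire* history, which is what lets it capture longer-range dependencies.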
## Requirements

- CUDA-enabled GPU
- Python 3.7+
- Anaconda (recommended)
## Getting Started

To set up the environment and run the model, follow these steps:
- Clone the repository:

  ```sh
  git clone https://github.com/Brotherhood-of-Silicon/XLSTM4Rec.git
  cd XLSTM4Rec
  ```
- Install the required packages:

  ```sh
  conda create --name xlstm4rec_env --file requirements.txt
  conda activate xlstm4rec_env
  ```
- Navigate to the source directory:

  ```sh
  cd src
  ```
- Train the model:

  ```sh
  python run.py
  ```
- Run the Gradio interface for the MovieLens 1M dataset:

  ```sh
  python gui.py
  ```
- Modify training parameters:

  Edit `config.yaml` to change training parameters as needed.
- Colab notebook:

  A Jupyter notebook for Google Colab is provided at `src/collab.ipynb`. To run it:
  - Upload `src/collab.ipynb` to your Google Drive.
  - Open the notebook in Google Colab.
  - Follow the instructions within the notebook to set up the environment and run the cells.
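For reference, a training configuration file of this kind typically looks like the fragment below. This is a **hypothetical** example — the actual key names and values in the repository's `config.yaml` may differ.

```yaml
# Hypothetical config.yaml fragment; actual keys may differ.
dataset: movielens-1m
batch_size: 128
learning_rate: 0.001
epochs: 50
```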
## TODO

- Enable CPU-only training
- Implement xLSTM using CUDA kernels instead of PyTorch
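The CPU-only TODO item usually amounts to a standard device-fallback pattern. The sketch below factors the selection logic into a plain function so it is testable without a GPU; the flag name `cpu_only` and the wiring into `run.py` are assumptions, not the repository's actual code.

```python
# Sketch for the "Enable CPU-only training" TODO item: choose a device
# based on CUDA availability. select_device() holds the plain logic;
# real code would pass in torch.cuda.is_available().

def select_device(cuda_available: bool, force_cpu: bool = False) -> str:
    """Return the device string the training code should use."""
    if force_cpu or not cuda_available:
        return "cpu"
    return "cuda"

# Hypothetical usage inside run.py (flag name is illustrative):
#   import torch
#   device = select_device(torch.cuda.is_available(), args.cpu_only)
#   model = model.to(device)
print(select_device(True))         # -> cuda
print(select_device(True, True))   # -> cpu
print(select_device(False))        # -> cpu
```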
## Contributors

- Leonardo16AM
- EdianBC
- AlexBeovides