Tensor-Reloaded/Advanced-Topics-in-Neural-Networks-Template-2024


Repository for the Advanced Topics in Neural Networks laboratory, "Alexandru Ioan Cuza" University, Faculty of Computer Science, Master's degree.

Environment setup

Google Colab: PyTorch, Pandas, NumPy, TensorBoard, and Matplotlib are already available. Weights & Biases (wandb) can be installed with pip install wandb.
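For instance, a minimal sketch of a Colab cell that installs wandb and logs a dummy metric to verify the setup (the project name below is only an illustrative placeholder, not something defined by this repository):

# Colab cell: install wandb, authenticate, and log a dummy metric.
!pip install -q wandb

import wandb

wandb.login()  # prompts for your W&B API key
run = wandb.init(project="atnn-sanity-check")  # hypothetical project name
wandb.log({"loss": 0.0})  # dummy metric just to confirm logging works
run.finish()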

Local installation:

  1. Create a Python environment (using conda or venv). We recommend installing conda from Miniforge.
# Create the environment
conda create -n 313 -c conda-forge python=3.13
# Activate the environment
conda activate 313
# Run this to use conda-forge as your highest priority channel (not needed if you installed conda from Miniforge)
conda config --add channels conda-forge
  2. Install PyTorch 2.6.0+ from pytorch.org using pip.
    • Example (CPU): pip install torch torchvision torchaudio
    • Example (CUDA 12.6): pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
  3. Install TensorBoard and W&B.
    • conda install -c conda-forge tensorboard wandb
  4. Install Matplotlib.
    • conda install conda-forge::matplotlib
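
After these steps, a short sanity-check script along the following lines (an illustrative sketch, not part of the repository) can confirm that the packages import correctly and whether CUDA is visible:

# Minimal environment sanity check: verify imports and report versions.
import torch
import torchvision
import matplotlib
import wandb
from torch.utils.tensorboard import SummaryWriter

print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("Matplotlib:", matplotlib.__version__)
print("wandb:", wandb.__version__)
print("CUDA available:", torch.cuda.is_available())

# Write a dummy scalar to make sure TensorBoard logging works.
writer = SummaryWriter(log_dir="runs/sanity_check")
writer.add_scalar("check/ok", 1.0, 0)
writer.close()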

Recommended resources:

Table of contents

  • Lab01: Tensor Operations (Homework 1: Multi-Layer Perceptron + Backpropagation)
  • Lab02: Convolutions, DataLoaders, Datasets, Data Augmentation techniques (Homework 2: Kaggle competition on CIFAR-100 with VGG-16)
  • Lab03: ResNets (Homework 3: Implement a complete training pipeline with PyTorch)
  • Lab04: Training pipeline implementation
  • Lab05: R-CNN, Fast R-CNN, Faster R-CNN, YOLO, U-Net, 3D U-Net, Ensemble methods, Model Soup
  • Lab07: Self-Supervised Learning, Autoencoders, VAE, GAN, Diffusion
  • Lab09: Sequence-to-sequence models, RNN, LSTM, Attention Is All You Need
  • Lab10: Multi-Headed Attention, Transformers, BERT, GPT, ViT
  • Lab11: Generalization, Batch Sizes, SAM, Benchmarks