Commit e8dbd44: Add files via upload
aparjadis authored Nov 6, 2023, 1 parent 760e015
Showing 45 changed files with 10,558 additions and 1 deletion.
77 changes: 76 additions & 1 deletion README.md
@@ -1 +1,76 @@
# learning-hk-bound
## Content of the repository

This repository provides a training algorithm that learns to generate Lagrangian multipliers for the Held-Karp TSP relaxation, together with an associated branch-and-bound TSP solver that uses the resulting bounds.
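The Held-Karp bound itself is easy to state: given node penalties `pi`, add `pi[i] + pi[j]` to each edge weight, compute a minimum 1-tree (a spanning tree over nodes 1..n-1 plus the two cheapest edges incident to node 0), and subtract twice the sum of the penalties. A minimal, dependency-free sketch (the function name and the simple Prim step are illustrative, not code from this repository):

```python
def one_tree_bound(d, pi):
    """Held-Karp lower bound from a minimum 1-tree under multipliers pi.
    d: symmetric distance matrix (list of lists), pi: node penalties."""
    n = len(d)
    # modified weights: w[i][j] = d[i][j] + pi[i] + pi[j]
    w = [[d[i][j] + pi[i] + pi[j] for j in range(n)] for i in range(n)]
    # Prim's MST over nodes 1..n-1
    in_tree = {1}
    cost = 0.0
    best = {v: w[1][v] for v in range(2, n)}
    while len(in_tree) < n - 1:
        v = min(best, key=best.get)
        cost += best.pop(v)
        in_tree.add(v)
        for u in best:
            best[u] = min(best[u], w[v][u])
    # attach node 0 via its two cheapest modified edges
    e = sorted(w[0][v] for v in range(1, n))
    cost += e[0] + e[1]
    return cost - 2 * sum(pi)  # valid lower bound on the optimal tour length
```

Any choice of `pi` yields a valid lower bound on the optimal tour length; the point of this repository is to train a GNN that outputs good multipliers directly.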

```bash
.
├── environment.yml        # configuration file for the conda environment
├── solver/
│   ├── src/               # solver source code
│   ├── models/            # GNN models
│   ├── testGraphs/        # test instances
│   ├── loadNN.py          # GNN loading and calling functions
│   ├── run_tsp.sh         # to run the solver
│   ├── run_test.sh        # to run the test instances
│   └── makefile           # to compile the solver
└── training/
    ├── src/               # training source code
    ├── trained_models/    # trained GNN models
    ├── training_graphs/   # training instances
    ├── trainHKgnn.py      # GNN training script
    └── run_training.sh    # to run the training
```

## Installation instructions

### 1. Importing the repository

```shell
git clone https://github.com/corail-research/learning-hk-bound.git
```

### 2. Setting up the conda virtual environment

```shell
conda env create -f environment.yml
```

Note that the provided `environment.yml` is named `base`; since conda reserves that name, you may need to override it, e.g. `conda env create -f environment.yml -n learning-hk` followed by `conda activate learning-hk`.

### 3. Compiling the solver

A makefile is available in both the solver and the trainer. First, set your Python path in each makefile. Then, compile the project as follows:

```shell
cd ./training
make
cd ../solver
make
```


## Basic use

### 1. Training a model

Edit the configuration in `training/trainHKgnn.py`, then start the training as follows:

```shell
cd ./training
./run_training.sh
```

### 2. Solving instances

```shell
cd ./solver
./run_test.sh
```


## Technologies and tools used

* The TSP solver is implemented in C++ and is based on the [solver](https://hal.science/hal-01344070/document) of Pascal Benchimol.
* The code handling the training and the GNN inferences is implemented in Python3.
* The graph neural network architecture is implemented in PyTorch together with DGL.
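As a rough illustration of the message-passing idea behind such models (pure Python with made-up weights; the actual architecture lives in `models/` and `trained_models/` and uses PyTorch/DGL):

```python
def gnn_layer(h, adj, W, b):
    """One message-passing layer: add the mean of each node's neighbor
    features to its own, apply a linear map, then ReLU. A framework-free
    stand-in for a GraphConv-style layer; real layers are learned."""
    n, din = len(h), len(h[0])
    out = []
    for i in range(n):
        neigh = [j for j in range(n) if adj[i][j]]
        agg = [h[i][k] + sum(h[j][k] for j in neigh) / max(len(neigh), 1)
               for k in range(din)]
        out.append([max(0.0, sum(W[r][k] * agg[k] for k in range(din)) + b[r])
                    for r in range(len(W))])
    return out
```

Stacking such layers lets each node's output depend on its multi-hop neighborhood, which is what makes per-node multiplier prediction possible.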
22 changes: 22 additions & 0 deletions environment.yml
@@ -0,0 +1,22 @@
```yaml
name: base
channels:
  - defaults
dependencies:
  - cython=0.29.32
  - cytoolz=0.11.0
  - ipykernel=6.15.2
  - ipython=7.31.1
  - jellyfish=0.9.0
  - libgcc-ng=11.2.0
  - mkl=2021.4.0
  - mkl-service=2.4.0
  - mkl_fft=1.3.1
  - networkx=2.8.4
  - pip=22.2.2
  - python=3.9.13
  - tbb=2021.6.0
  - pip:
    - dgl==1.0.1
    - keras==2.12.0
    - numpy==1.24.3
    - torch==1.13.1
```
31 changes: 31 additions & 0 deletions solver/include/atsp.h
@@ -0,0 +1,31 @@
```cpp
#ifndef ATSP_H
#define ATSP_H
#include <graphe.h>
#include <stsp.h>

/*!
 * \class ATSP
 * \brief An Asymmetric Travelling Salesman Problem object.
 * \author Pascal Benchimol
 */
class ATSP : public Graphe {

public:
  /*!
   * \brief Constructor.
   * \param n : number of nodes.
   */
  ATSP(int n);

  /*!
   * \brief Build a symmetric TSP instance from this asymmetric instance by duplicating nodes.
   * \return A SymTSP object.
   */
  SymTSP * getTSP();

  void printDot(string nomFichier) const;
};
#endif
```
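The node-duplication trick behind `getTSP()` can be sketched in a few lines. This follows the classic Jonker-Volgenant transformation; the actual implementation may differ in details:

```python
from itertools import permutations

INF = 10**6  # forbids edges within the same side
M = 10**3    # bonus (negative cost) tying each node to its copy

def atsp_to_stsp(d):
    """Each node i gets a copy i' = n + i. Only original-copy edges exist:
    edge (i, n + j) costs d[j][i] for i != j and -M for i == j, which forces
    i and i' to be adjacent in any optimal symmetric tour. The symmetric
    optimum then equals the asymmetric optimum minus n * M."""
    n = len(d)
    w = [[INF] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            c = -M if i == j else d[j][i]
            w[i][n + j] = w[n + j][i] = c
    return w
```

A symmetric tour then alternates originals and copies, `i, i', j, j', ...`, and reading off the originals recovers an asymmetric tour.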
62 changes: 62 additions & 0 deletions solver/include/bb.h
@@ -0,0 +1,62 @@
```cpp
#ifndef BB_H
#define BB_H

#include <hk.h>
#include <stsp.h>
#include <hungarianMethod.h>

#include <time.h>

class BB {

private:
  HeldKarp hk;
  SymTSP * graphe;

  hungarianMethod hg;
  clock_t startTime;
  float percent_edges_filtered;

  double upperBound;
  int nbNode;
  bool bb_tourFound;
  double tourCost;

  std::vector<Edge*>* toForce_temp;
  std::vector<Edge*>* toRemove_temp;

  pair<bool,Edge*> getEdgeWithMaxReplacementCost();
  void getEdgesToBranchOn(std::vector<Edge*>* edgesToBranchOn);

  int nbForce;
  int nbForceCut;
  int nbForceCost;

public:
  BB(SymTSP * graphe, double UB, int nF);
  ~BB();

  void test();

  void compute();
  void dfs_Remove(int profondeur);
  void filter_and_force(std::vector<Edge*>* toRemove, std::vector<Edge*>* toForce, bool *canStopBranching);

  void compute_ap();
  void dfs_Remove_ap(int profondeur);
  void filter_and_force_ap(std::vector<Edge*>* toRemove, std::vector<Edge*>* toForce, bool *canStopBranching);

  void printEndInfos(bool isOptimal);
};

#endif
```
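The overall control flow of such a solver can be illustrated with a toy depth-first branch-and-bound that prunes only with the incumbent cost (the real `BB` class branches on edges, prunes with Held-Karp and assignment bounds, and filters edges; this sketch is not its algorithm):

```python
def bb_tsp(d):
    """Toy branch-and-bound over partial paths starting at node 0.
    Prunes a branch as soon as its partial cost reaches the incumbent."""
    n = len(d)
    best = [float("inf")]  # incumbent tour cost

    def dfs(last, cost, visited, path_len):
        if cost >= best[0]:
            return  # prune: partial cost already matches the incumbent
        if path_len == n:
            best[0] = min(best[0], cost + d[last][0])  # close the tour
            return
        for v in range(1, n):
            if not visited[v]:
                visited[v] = True
                dfs(v, cost + d[last][v], visited, path_len + 1)
                visited[v] = False

    dfs(0, 0.0, [True] + [False] * (n - 1), 1)
    return best[0]
```

The quality of the lower bound is what keeps the search tree small, which is why better Held-Karp multipliers translate directly into faster solving.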
91 changes: 91 additions & 0 deletions solver/include/bigraphe.h
@@ -0,0 +1,91 @@
```cpp
#ifndef BIGRAPHE_H
#define BIGRAPHE_H

#include <stsp.h>
#include <binode.h>
#include <node.h>

#include <list>
#include <vector>
#include <iostream>
#include <string>
#include <fstream>

/*!
 * \file bigraphe.h
 * \brief Header for BiGraphe.
 * \author Pascal Benchimol
 */

typedef std::vector<BiNode*> bigraphNodes;

using namespace std;

/*!
 * \class BiGraphe
 */
class BiGraphe {
private:
  int size;
  int matchingSize;
  bigraphNodes rightNodes; /*!< Vector of right-side nodes */
  bigraphNodes leftNodes;  /*!< Vector of left-side nodes */
  bigraphEdges edges;      /*!< List of edges */

  BiEdge* addEdge(int leftIndex, int rightIndex, double weight, Edge* _edge);

public:
  /*!
   * \brief Constructor.
   * \param stsp : the symmetric TSP instance to build the bipartite graph from.
   */
  BiGraphe(SymTSP * stsp);

  ~BiGraphe();

  /*!
   * \brief Return the maximum index of a node that can be found in this graph.
   */
  int getSize() const;
  int getMatchingSize() const;

  bigraphNodes* getRightNodes();
  bigraphNodes* getLeftNodes();
  bigraphEdges* getEdges();

  void print() const;

  void addEdge(BiEdge * edge);
  void removeEdge(BiEdge * e);

  /*!
   * \brief Print the graph in graphviz dot format into file nomFichier.
   * A view of the graph can then be created using graphviz (www.graphviz.org):
   * type "dot -Tformat -o outfile grapheInDotFormatFile".
   */
  void printDot(string nomFichier) const;
};
#endif
```
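This bipartite graph supports the assignment-problem relaxation used by `compute_ap` in `BB`, presumably via the Hungarian method (`hungarianMethod` in bb.h). What that relaxation computes can be shown by brute force on a tiny instance (illustrative only; the real code solves it in polynomial time):

```python
from itertools import permutations

def assignment_bound(d):
    """Assignment-problem relaxation of the TSP: the cheapest assignment
    i -> p(i) with no fixed points. Every tour is such an assignment,
    so this is a valid lower bound on the optimal tour cost."""
    n = len(d)
    return min(sum(d[i][p[i]] for i in range(n))
               for p in permutations(range(n))
               if all(p[i] != i for i in range(n)))
```

The bound can be weak because an assignment may decompose into several subtours rather than one Hamiltonian cycle, which is exactly what the branch-and-bound then resolves.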