Commit 25547a3, authored by Manos Gionanidis on Nov 26, 2018 (1 parent: a302d04). Showing 1 changed file with 4 additions and 4 deletions.
Implementation of the SVM algorithm for classification. **svm_default.py** uses only the default parameters to initialize the procedure.

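A minimal sketch of this default setup, assuming a scikit-learn `SVC` and placeholder data from `make_classification` (the names `X`, `y`, and the toy dataset are illustrative, not taken from the actual script):

```python
# Sketch: train an SVM classifier with all default parameters.
# make_classification only stands in for the real dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC()  # default parameters: RBF kernel, C=1.0, gamma="scale"
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```
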
This folder contains variations in the training and evaluation methods of the SVM algorithm: experiment results using different kernel functions and different parameter values, and training methods with a balanced training set, where the balance concerns the number of samples of each class (**svm_balancedSampleNumber_greedySearch.py**).

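The balancing-plus-parameter-search idea might look roughly like the sketch below. It undersamples the majority class and then searches kernels and parameter values with scikit-learn's `GridSearchCV`, which is an exhaustive stand-in for the script's greedy search; the `balance_by_undersampling` helper and the toy dataset are invented for illustration:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Imbalanced toy data standing in for the real dataset (class0 much larger than class1).
X, y = make_classification(n_samples=1400, weights=[6 / 7, 1 / 7], random_state=0)

def balance_by_undersampling(X, y, seed=0):
    """Keep only as many samples per class as the smallest class has."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_keep = counts.min()
    idx = np.concatenate(
        [rng.choice(np.where(y == c)[0], size=n_keep, replace=False) for c in classes]
    )
    return X[idx], y[idx]

X_bal, y_bal = balance_by_undersampling(X, y)

# Exhaustive search over kernel functions and parameter values
# (a stand-in for the script's greedy parameter search).
param_grid = {
    "kernel": ["linear", "rbf", "poly"],
    "C": [0.1, 1, 10],
    "gamma": ["scale", 0.01, 0.1],
}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X_bal, y_bal)
print("best parameters:", search.best_params_)
```
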
One example of this training uses the divided parts and keeps only the samples that are support vectors in every iteration, continuing this procedure until the class with more samples has been fully iterated over. Another uses greedy algorithms to calculate the kernel parameters.

The script **svm_keeping_supportVectors.py** carries out the experiment described above. As a first approach we train our model taking all the samples from class0, dividing them accordingly just to balance our data, and we continue this procedure until no more batches of class0 samples remain. From each iteration we keep all the support vectors, which contain samples from both classes. We remove the duplicates and drop all the class1 samples, so we have a dataframe containing only the support vectors from class0. We then train the model again with all the class1 samples and only the class0 samples that were support vectors, and we repeat this procedure. In the end the number of class0 samples becomes smaller than the number of class1 samples, and we stop once it drops below half the number of class1 samples.

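A rough sketch of this iterative support-vector-keeping loop, assuming a scikit-learn `SVC` and a toy imbalanced dataset; the chunking, the kernel choice, and the safety check against a non-shrinking set are illustrative choices, not the script's exact logic:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Imbalanced toy data: class 0 has roughly six times more samples than class 1.
X, y = make_classification(n_samples=1400, weights=[6 / 7, 1 / 7], random_state=0)
X0, X1 = X[y == 0], X[y == 1]

# Repeatedly train on balanced chunks of class 0 against all of class 1 and
# keep only the class-0 samples that end up as support vectors.
while len(X0) > len(X1) // 2:
    kept_chunks = []
    n_chunks = max(1, len(X0) // len(X1))
    for chunk in np.array_split(X0, n_chunks):
        X_fit = np.vstack([chunk, X1])
        y_fit = np.concatenate([np.zeros(len(chunk)), np.ones(len(X1))])
        clf = SVC(kernel="rbf").fit(X_fit, y_fit)
        sv = clf.support_[clf.support_ < len(chunk)]  # class-0 support vector indices
        kept_chunks.append(chunk[sv])
    X0_new = np.unique(np.vstack(kept_chunks), axis=0)  # drop duplicates
    if len(X0_new) >= len(X0):  # no further reduction: avoid an infinite loop
        break
    X0 = X0_new

print("remaining class-0 samples:", len(X0), "class-1 samples:", len(X1))
```
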
In general, since class0 has six times more samples than class1, we apply this procedure to reduce the class0 sample count: we take the support vectors, then the support vectors of those support vectors, and so on.

Furthermore, in the script **svm_multiclass.py** we try to classify a dataset of three classes.
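A small sketch of the three-class case, using the iris dataset purely as a stand-in for the actual data; scikit-learn's `SVC` handles multiclass classification internally with a one-vs-one scheme:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # three classes, used only as placeholder data
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = SVC(kernel="rbf", decision_function_shape="ovo")  # one-vs-one multiclass SVM
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```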