Hello,
When I use the complete data (1000x1000) from your paper, it works correctly. However, when I take only part of the data (1000x24, which is the same size as my dataset), I get an error. The traceback is as follows.
It seems that something is wrong with the pooling layer. Does the number of experiment trials have an influence? I would appreciate it if you could explain it.
X = np.loadtxt(path)
Y = np.loadtxt(path)
Labels = np.loadtxt(path)
# keep only the first 24 columns of each array
X = X[:, :24]
Y = Y[:, :24]
Labels = Labels[:, :24]
Number of classes: 2
Using GPU: False
Training. Please wait.
Traceback (most recent call last):
File "", line 1, in
runfile('D:/anaconda_python_exercise/dataset/1uneye/my_trial.py', wdir='D:/anaconda_python_exercise/dataset/1uneye')
File "D:\Anaconda3\envs\py3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 827, in runfile
execfile(filename, namespace)
File "D:\Anaconda3\envs\py3\lib\site-packages\spyder_kernels\customize\spydercustomize.py", line 110, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)
File "D:/anaconda_python_exercise/dataset/1uneye/my_trial.py", line 65, in
model.train(X,Y,Labels)
File "D:\anaconda_python_exercise\dataset\1uneye\uneye\classifier.py", line 257, in train
out = self.net(Vbatch,key)[0] # network output
File "D:\Anaconda3\envs\py3\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "D:\anaconda_python_exercise\dataset\1uneye\uneye\functions.py", line 95, in forward
out['p1'] = self.p1(out['c1'])
File "D:\Anaconda3\envs\py3\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "D:\Anaconda3\envs\py3\lib\site-packages\torch\nn\modules\container.py", line 92, in forward
input = module(input)
File "D:\Anaconda3\envs\py3\lib\site-packages\torch\nn\modules\module.py", line 541, in call
result = self.forward(*input, **kwargs)
File "D:\Anaconda3\envs\py3\lib\site-packages\torch\nn\modules\pooling.py", line 76, in forward
self.return_indices)
File "D:\Anaconda3\envs\py3\lib\site-packages\torch_jit_internal.py", line 138, in fn
return if_false(*args, **kwargs)
File "D:\Anaconda3\envs\py3\lib\site-packages\torch\nn\functional.py", line 457, in _max_pool1d
input, kernel_size, stride, padding, dilation, ceil_mode)
RuntimeError: Given input size: (20x1x1). Calculated output size: (20x1x0). Output size is too small
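For what it's worth, the RuntimeError at the bottom simply means that, somewhere inside the network, the temporal dimension has already shrunk to a single sample and the next max-pooling layer would produce an empty output; with only 24 time points per row this happens quickly. A minimal sketch that triggers the same kind of error (the kernel size and tensor shape here are illustrative assumptions, not the exact U'n'Eye architecture):

import torch
import torch.nn as nn

pool = nn.MaxPool1d(kernel_size=5)   # assumed kernel size, for illustration only
x = torch.randn(1, 20, 1)            # (batch, channels, time): only 1 time point left
out = pool(x)                        # RuntimeError: ... Output size is too small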
In fact, this happens because you reduce the second dimension of the data matrix to 24, but the algorithm assumes that dimension to be time points, and 24 time points are too few for it to work. If you instead want 24 samples (i.e. 24 pieces of 1000 time points each), you can transpose the matrix (pass X.T) when calling the function. However, you then also have to reduce the number of validation samples a lot (it is currently 30); you can use, for example, val_samples = 6.
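For anyone who runs into the same thing, here is a minimal sketch of the fix suggested above. It assumes the model is created with uneye.DNN and that the constructor accepts a val_samples keyword (the "30" mentioned above suggests this is the validation-sample setting); the paths are placeholders, and you should check the classifier's signature in your version if anything differs.

import numpy as np
import uneye

X = np.loadtxt(path_x)            # path_x / path_y / path_labels are placeholders
Y = np.loadtxt(path_y)
Labels = np.loadtxt(path_labels)

# If rows are time points and columns are trials (as in the 1000x24 case above),
# transpose so that rows become trials and columns become time points.
X, Y, Labels = X.T, Y.T, Labels.T     # now 24 trials x 1000 time points

# With only 24 trials, keep far fewer samples for validation than the default of 30.
model = uneye.DNN(val_samples=6)
model.train(X, Y, Labels)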
It works when I transpose the matrix and change val_samples. It shows "Early stopping at epoch 80 before overfitting occurred." This is perhaps because there is less data.
You mean the second dimension of the data matrix is time points? So the columns of your data are timestamps and the rows are trials? I mistook a column for a trial. I had just adapted the open-source dataset (Lund2013-image), in which one column corresponds to one trial (file) in X_Position.
After working with this dataset, I will collect my own eye-tracker data and make a rough estimate of how much data I need. Thanks!