Own Training Data has shape 256,256,1
Hey Guys, it's me again.
I am still working on my DEM data. As I already mentioned, I inpainted my (32, 32, 1) data with your Partial Convolution model, and the results were satisfactory! This time I am using the same dataset with bigger images.
After normalization, my data looks like this:
x_train shape: (1000, 256, 256, 1)
x_test shape: (100, 256, 256, 1)
Minimum Value in Train: 0.0
Maximum Value in Train: 1.0
Minimum Value in Test: 0.0821
Maximum Value in Test: 0.6226
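For reference, a minimal sketch of the kind of min-max normalization that produces ranges like the ones above (the function name is hypothetical, and scaling the test set with the training min/max is my assumption; it would explain why the test range stays inside 0.0 to 1.0 without reaching the endpoints):

import numpy as np

def minmax_normalize(x_train, x_test):
    # Scale both sets with the *training* min/max so that train spans [0, 1]
    # exactly while the test set stays comparable but need not reach 0 or 1.
    t_min, t_max = x_train.min(), x_train.max()
    x_train = (x_train - t_min) / (t_max - t_min)
    x_test = (x_test - t_min) / (t_max - t_min)
    return x_train.astype("float32"), x_test.astype("float32")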
Masks were created as before and have the value 1.0.
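In case it helps with reproducing the setup, here is one way such binary masks could be generated at 256x256, sketched with random strokes (the actual mask creation in the notebook may differ; the function name, stroke count and thickness ranges are assumptions, and only the convention that mask pixels carry the value 1.0 comes from above):

import numpy as np
import cv2

def create_mask(height=256, width=256, n_strokes=10):
    # Draw random strokes with value 1.0 onto a black (all-zero) canvas.
    mask = np.zeros((height, width), dtype=np.float32)
    for _ in range(n_strokes):
        x1, x2 = np.random.randint(0, width, 2)
        y1, y2 = np.random.randint(0, height, 2)
        thickness = np.random.randint(5, 16)
        cv2.line(mask, (int(x1), int(y1)), (int(x2), int(y2)), 1.0, int(thickness))
    return mask[..., np.newaxis]  # shape (height, width, 1)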
Histograms of one sample (sample_idx = 20) from the masked images in traingen look plausible:
My major changes:

class createAugment(keras.utils.Sequence):
    def __init__(self, X, y, batch_size=8, dim=formatxy, n_channels=1, shuffle=True):
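For context, this is roughly how the generators would then be instantiated for the new data (formatxy = (256, 256) is my assumption, and testgen is a hypothetical name; traingen is the generator mentioned above):

formatxy = (256, 256)  # assumed target image size
traingen = createAugment(x_train, x_train, batch_size=8, dim=formatxy, n_channels=1)  # target is the un-masked image itself
testgen = createAugment(x_test, x_test, batch_size=8, dim=formatxy, n_channels=1, shuffle=False)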
The model summary is always very long; I can only repeat that it looks identical to the initial one for (32, 32, 3) inputs, and I changed nothing else. The bottom says:
Total params: 4,769,039
Trainable params: 4,769,039
Non-trainable params: 0
Complete Version:
IPC_VT-1000.txt
Here is the run:
Here is the output:
Legend: Original Image | Mask generated | Inpainted Image | Ground Truth
I like the results, even though I have not yet managed a full run of 20 epochs; see below for why. My problem is that val_loss does not drop and the model does not learn.
Here is the whole Notebook:
ipc_vt_1k_4.zip
Of course my DEM data (VT-1000) is too big to upload here, roughly 150 MB.
Another run with 10 epochs and a 5x5 kernel size gave even better results:
However, when I repeat the run, even after restarting the kernel, the model sometimes does not learn anything at all (the results are black images with the same value at every pixel), even though I did not change a thing. I use Google Colab's cloud GPUs and am unsure whether this can cause such problems, but it does undermine the model's scientific repeatability. So if anyone is able to help me here, I will be very thankful!
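One source of run-to-run variation that can be ruled out cheaply is unseeded randomness. A minimal sketch of fixing the seeds at the top of the notebook (the seed value is arbitrary, and some GPU kernels in TensorFlow can remain non-deterministic even with fixed seeds, so this is not a guaranteed fix):

import os
import random
import numpy as np
import tensorflow as tf

# Fix Python, NumPy and TensorFlow seeds before building the model.
os.environ["PYTHONHASHSEED"] = "0"
random.seed(42)
np.random.seed(42)
tf.random.set_seed(42)
# On TF >= 2.8, deterministic GPU kernels can additionally be forced (slower):
# tf.config.experimental.enable_op_determinism()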
I will continue by implementing a learning rate schedule, which will hopefully fix this.
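A minimal sketch of what that could look like with standard Keras callbacks (traingen, testgen and the epoch count are assumptions; ReduceLROnPlateau lowers the learning rate whenever val_loss stops improving):

from tensorflow.keras.callbacks import ReduceLROnPlateau, EarlyStopping

callbacks = [
    # Halve the learning rate after 2 epochs without val_loss improvement.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2, min_lr=1e-6, verbose=1),
    # Stop early and keep the best weights if val_loss stalls completely.
    EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True),
]
# model.fit(traingen, validation_data=testgen, epochs=20, callbacks=callbacks)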
Cheers