Hi, I also joined the safety hat detection competition. After reading your tricks, I am still confused by this description: "The position pixels could not cover all possible pixels in such situation. After exploring the training label, I found the images are resized to one-third during labeling. So I approximate the predicted position to the nearby pixels appearing in training label." Why can't the position pixels cover all possible pixels when the images are artificially downscaled for labeling? And how did you explore the training labels, did you just count the distribution of pixel values across all training labels? Could you give more details? Thanks
If you explore the training label pixels, you will find that they do not cover all possible values. A simple example: xmin (or ymin, xmax, ymax) can only take values 0, 3, 6, 9, and so on. So I built a list of all pixel positions that actually appear in the training labels, and snap each predicted position (a float) to the nearest of those pixel values (integers).
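A minimal sketch of that snapping idea in Python, assuming the training boxes are available as (xmin, ymin, xmax, ymax) tuples; the function names `collect_label_positions` and `snap_to_label_grid` are illustrative, not code from the actual solution:

```python
import bisect

def collect_label_positions(train_boxes):
    """Gather every distinct coordinate value that appears in the training labels.

    train_boxes: iterable of (xmin, ymin, xmax, ymax) integer tuples.
    Returns a sorted list of observed coordinate values
    (e.g. 0, 3, 6, 9, ... if the images were labeled at one-third scale).
    """
    values = set()
    for xmin, ymin, xmax, ymax in train_boxes:
        values.update((xmin, ymin, xmax, ymax))
    return sorted(values)

def snap_to_label_grid(coord, label_values):
    """Snap a predicted float coordinate to the nearest value seen in the training labels."""
    i = bisect.bisect_left(label_values, coord)
    if i == 0:
        return label_values[0]
    if i == len(label_values):
        return label_values[-1]
    before, after = label_values[i - 1], label_values[i]
    return before if coord - before <= after - coord else after

# Example: labels only contain multiples of 3, so a prediction of 41.7 snaps to 42.
label_values = collect_label_positions([(0, 3, 30, 60), (6, 9, 42, 90)])
print(snap_to_label_grid(41.7, label_values))
```

Whether you keep one shared list or separate lists per coordinate (xmin, ymin, xmax, ymax) is a design choice; the sketch above uses a single shared list for simplicity.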