ideaSubmitFormInstruction: <strong>Registration</strong> A full description of registration, rules, submissions, and evaluation is available at: <a href="http://iab-rubric.org/DFW/dfw.html">http://iab-rubric.org/DFW/dfw.html</a>. <strong>Protocol</strong> The DFW dataset consists of 1,000 subjects and a total of 11,155 images. Of these, 400 subjects comprise the training set and 600 subjects comprise the testing set. Each subject's folder contains normal, disguised, and impersonator images. Access to the DFW dataset is granted to participants after enrollment through the DFW website. <strong>Submission</strong> <ol> <li>Participants are required to generate similarity scores (a larger value indicates greater similarity) from their biometric matchers. If a participant's matcher generates a dissimilarity score instead of a similarity score, the scores should be negated or otherwise inverted so that the resulting value is a similarity measure. Participants in the competition have been provided with the testing set. From this data, participants are required to generate and submit similarity matrices of size 7771 x 7771, the size of the testing data. The ordering of test images is the same in both rows and columns. The (i,j) entry of a similarity matrix is the similarity score generated by the algorithm when supplied image i from the testing set and image j as the probe sample. Entry (i,i) corresponds to matching an image against itself.</li> <li>Participants are also required to submit the score matrix on the training database. The ordering should be exactly the same as the order given in the text file containing subject names.</li> <li>Participants are required to submit the matrices along with the companion data for the corresponding 1,000-point ROC curves. The match scores computed with validation and disguised images will comprise the genuine scores. Impostor scores will include match scores generated from impersonator images as well as cross-subject match scores.</li> <li>While it is not mandatory, we also encourage participants to submit their models/executables/APIs for verification of the results.</li> <li>Participants can choose to remain anonymous in the analysis and report. Participants must explicitly make this request; the default position will be to associate results with participants.</li> </ol>
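The matrix requirement in item 1 above can be illustrated with a short sketch. The code below is only an illustration, not part of the official submission kit: `extract_feature` is a hypothetical stand-in for a participant's own matcher, cosine similarity is just one example of a score where larger means more similar, and the output file name and format are placeholders.

```python
import numpy as np

def extract_feature(image_path):
    """Hypothetical placeholder: return a feature vector for one face image."""
    raise NotImplementedError("plug in your own face matcher here")

def build_similarity_matrix(image_paths):
    # Features for every test image, in the fixed order used for both rows and columns.
    feats = np.stack([extract_feature(p) for p in image_paths])
    # L2-normalize so the dot product below is cosine similarity (larger = more similar).
    feats /= np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T  # entry (i, j): similarity of test image i and test image j
    # If a matcher produces distances instead, negate them so that larger means
    # more similar, e.g. sim = -dist_matrix.
    return sim

# image_paths = [...]  # the 7771 test images, in the organizers' published order
# np.savetxt("dfw_test_scores.txt", build_similarity_matrix(image_paths))
```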
startDate: 2018-11-26T07:14:30
votingAllowed: false
newCampaign: false
status: closed
commentCount: 0
challenge-id: 943
moderatorAdminOnlyIdeasEnabled: false
funnelId: 4
ideaFromUnauthorizedMemberAllowed: true
tagline: Face recognition with disguise variations
groupName: Office of the Director of National Intelligence - Intelligence Advanced Research Projects Activity
hideIdeaAuthor: false
template: ideation
submission-end: 05/01/2018 11:59 PM
submission-start: 01/20/2018 12:00 AM
fiscal-year: FY18
total-prize-offered-cash: $25,500
campaign-owner: Dr. Chris Boehnen
legal-authority: Other
agency-id: 4901
show-winners-instead-of-prizes: No
rules: A full description of rules is available at: <a href="http://iab-rubric.org/DFW/dfw.html">http://iab-rubric.org/DFW/dfw.html</a>.
hide-challenge-timeline: No
winners-announced-date: 05/10/2018 11:59 PM
solution-type: Software and apps
partner-agencies-non-federal: University of Maryland, IBM, IIIT-Delhi
original-post-id: 173415
hosting: Hosted on this platform
hide-challenge-funnel: Yes
type-of-challenge: Software and apps
how-to-enter: <strong>Registration</strong> A full description of registration, rules, submissions, and evaluation is available at: <a href="http://iab-rubric.org/DFW/dfw.html">http://iab-rubric.org/DFW/dfw.html</a>. <strong>Protocol</strong> The DFW dataset consists of 1,000 subjects and a total of 11,155 images. Of these, 400 subjects comprise the training set and 600 subjects comprise the testing set. Each subject's folder contains normal, disguised, and impersonator images. Access to the DFW dataset is granted to participants after enrollment through the DFW website. <strong>Submission</strong> <ol> <li>Participants are required to generate similarity scores (a larger value indicates greater similarity) from their biometric matchers. If a participant's matcher generates a dissimilarity score instead of a similarity score, the scores should be negated or otherwise inverted so that the resulting value is a similarity measure. Participants in the competition have been provided with the testing set. From this data, participants are required to generate and submit similarity matrices of size 7771 x 7771, the size of the testing data. The ordering of test images is the same in both rows and columns. The (i,j) entry of a similarity matrix is the similarity score generated by the algorithm when supplied image i from the testing set and image j as the probe sample. Entry (i,i) corresponds to matching an image against itself.</li> <li>Participants are also required to submit the score matrix on the training database. The ordering should be exactly the same as the order given in the text file containing subject names.</li> <li>Participants are required to submit the matrices along with the companion data for the corresponding 1,000-point ROC curves. The match scores computed with validation and disguised images will comprise the genuine scores. Impostor scores will include match scores generated from impersonator images as well as cross-subject match scores.</li> <li>While it is not mandatory, we also encourage participants to submit their models/executables/APIs for verification of the results.</li> <li>Participants can choose to remain anonymous in the analysis and report. Participants must explicitly make this request; the default position will be to associate results with participants.</li> </ol>
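Item 3 above asks for companion data for 1,000-point ROC curves. A minimal sketch of how such a curve could be computed from the genuine and impostor score lists is shown below; the threshold sweep and return format are assumptions, since the official companion-data format is specified on the DFW website rather than here.

```python
import numpy as np

def roc_curve_1000(genuine, impostor, n_points=1000):
    """Compute (FPR, TPR) at 1,000 thresholds from similarity score lists."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    # Sweep thresholds across the combined score range.
    low = min(genuine.min(), impostor.min())
    high = max(genuine.max(), impostor.max())
    thresholds = np.linspace(low, high, n_points)
    # True positive rate: fraction of genuine pairs accepted at each threshold.
    tpr = np.array([(genuine >= t).mean() for t in thresholds])
    # False positive rate: fraction of impostor pairs accepted at each threshold.
    fpr = np.array([(impostor >= t).mean() for t in thresholds])
    return fpr, tpr, thresholds

# Genuine scores: matches between validation/disguised images of the same subject.
# Impostor scores: impersonator matches plus cross-subject matches.
# fpr, tpr, thr = roc_curve_1000(genuine_scores, impostor_scores)
```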
prize-name-0: Overall Recognition Accuracy - 1st Place
prize-cash-amount-0: 6000
prize-description-0: Evaluation of recognition performance using all DFW test images.
prize-name-1: Overall Recognition Accuracy - 2nd Place
prize-cash-amount-1: 2500
prize-description-1: Evaluation of recognition performance using all DFW test images.
prize-name-2: Impersonation Recognition - 1st Place
prize-cash-amount-2: 6000
prize-description-2: Evaluation of recognition performance using DFW impersonator test images.
prize-name-3: Impersonation Recognition - 2nd Place
prize-cash-amount-3: 2500
prize-description-3: Evaluation of recognition performance using DFW impersonator test images.
prize-name-4: Obfuscation Recognition - 1st Place
prize-cash-amount-4: 6000
prize-description-4: Evaluation of recognition performance using DFW obfuscation test images.
prize-name-5: Obfuscation Recognition - 2nd Place
prize-cash-amount-5: 2500
prize-description-5: Evaluation of recognition performance using DFW obfuscation test images.
winner-name-0: National Taiwan University
winner-solution-title-0: MiRA-Face
winner-name-1: ITMO University
winner-solution-title-1: AEFRL
winner-name-2: ITMO University
winner-solution-title-2: AEFRL
winner-name-3: National Taiwan University
winner-solution-title-3: MiRA-Face
winner-name-4: National Taiwan University
winner-solution-title-4: MiRA-Face
winner-name-5: ITMO University
winner-solution-title-5: AEFRL
memberIdeaSubmissionAllowed: false
showTitle: true
description: <p>With recent advancements in deep learning, the capabilities of automatic face recognition have increased significantly. However, face recognition in unconstrained environments with non-cooperative users is still a research challenge, pertinent for users such as law enforcement agencies. While several covariates such as pose, expression, illumination, aging, and low resolution have received significant attention, “disguise” is still considered an arduous covariate of face recognition. Disguise as a covariate involves both intentional and unintentional changes to a face through which one can either obfuscate his/her identity or impersonate someone else’s identity. The problem can be further exacerbated by unconstrained environments or “in the wild” scenarios. However, disguise in the wild has not been studied in a comprehensive way, primarily due to the unavailability of such a database. As part of the 1<sup>st</sup> International Workshop on Disguised Faces in the Wild at CVPR 2018, a competition is being held in which participants are asked to show their results on the Disguised Faces in the Wild (DFW) database. More details on the workshop and competition are available at: <a href="http://iab-rubric.org/DFW/dfw.html">http://iab-rubric.org/DFW/dfw.html</a>.</p> <p>Prize awards for the DFW 2018 Competition are provided by the Intelligence Advanced Research Projects Activity (IARPA), within the Office of the Director of National Intelligence (ODNI).<br> <strong>Two phases of the competition and winners:</strong> The competition involves two phases. The first phase gives participants early access to the data and an opportunity to submit a paper describing their approach. Participants in the second phase will have more time to submit their score results but will not be able to submit a written paper. However, second-phase participants may be invited, based on their performance results, to present their work orally at the CVPR workshop. Teams may elect to participate in either or both phases.</p> <p><strong>Phase 1:</strong> <ul> <li><strong>Enrollment deadline for the overall competition:</strong> February 23, 2018</li> <li><strong>Result submissions due to organizers:</strong> March 05, 2018 - UPDATED</li> <li><strong>Invitation to top-performing participants for paper submission (results will not be released publicly):</strong> March 07, 2018</li> <li><strong>Paper submissions by selected competition participants due to organizers:</strong> March 20, 2018</li> <li><strong>Notification provided to authors:</strong> April 05, 2018</li> <li><strong>Camera-ready deadline:</strong> April 09, 2018</li> </ul> <strong>Phase 2:</strong> <ul> <li><strong>Final result submissions to organizers:</strong> May 01, 2018</li> <li><strong>Winners of the competition announced based on final comparative results:</strong> May 10, 2018</li> </ul></p> <p>Results will be announced at the end of Phase 2 based on submissions received up to the May 1<sup>st</sup> deadline. Participants are permitted to submit one set of results to Phase 1 and then resubmit a second set of revised/final results to Phase 2. No paper submission is possible for participants who <strong>only</strong> participate in Phase 2, but winners will be invited to give an oral presentation on their work at the CVPR workshop. <strong>Questions?</strong> In case of any difficulties or questions, please email <a href="mailto:[email protected]">[email protected]</a>.</p>
campaignStatusName: Launched
templateId: 0
summaryEnabled: false
voteCount: 0
ideaTabEnabledForChallenge: true
moderatorAdminOnlyIdeasNotificationEnabled: false
hideCommentAuthor: false
userSubscriptionAllowed: false
groupId: 269
showTagline: true
challenge-title: Disguised Faces in the Wild Competition
privateCampaign: true
ideaCount: 0
memberIdeaAttachmentAllowed: false
authorEdit: false
permalink: /challenge/disguised-faces-in-the-wild-competition/
layout: json-page