redshiftzero edited this page Feb 13, 2017 · 7 revisions

Welcome to the OpenOversight wiki!

Crowdsourcing Resources

"Data Quality using Crowdsourcing Techniques" - http://gureckislab.org/mtworkshop/

> Once data is uploaded to the platform, the system automatically allocates the work to contributors and tests them against known answers hidden within the task (what CrowdFlower refers to as a "job" [5]). How contributors perform on these hidden test questions calibrates how much the system trusts each of them individually. As long as contributors remain trusted, they're allowed to continue working on a given job; if they become untrusted, they're removed from the job and all of their work is disregarded. Multiple contributor judgments are collected, and an aggregate answer with an associated confidence score (the agreement of the contributors, weighted by each contributor's trust) is provided as a result - effectively returning the "most trusted judgment" for a given unit of data.

- from Wikipedia's description of CrowdFlower
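The trust-weighted aggregation described above can be sketched roughly as follows. This is an illustrative approximation, not CrowdFlower's actual algorithm; the function name and the trust-score inputs are assumptions for the example.

```python
from collections import defaultdict

def aggregate_judgments(judgments):
    """Combine contributor judgments into a single 'most trusted' answer.

    judgments: list of (answer, trust) pairs, where trust is a score
    in [0, 1] derived from the contributor's performance on hidden
    test questions (hypothetical input for this sketch).
    Returns (best_answer, confidence), where confidence is the
    trust-weighted agreement on the winning answer.
    """
    weight = defaultdict(float)
    for answer, trust in judgments:
        weight[answer] += trust
    total = sum(weight.values())
    best = max(weight, key=weight.get)
    return best, weight[best] / total

# Two trusted contributors say "yes", one weakly trusted says "no":
best, confidence = aggregate_judgments([("yes", 0.9), ("yes", 0.8), ("no", 0.3)])
# "yes" wins with confidence (0.9 + 0.8) / (0.9 + 0.8 + 0.3) = 0.85
```

Note that a contributor who becomes untrusted mid-job would simply have their pairs dropped from `judgments` before aggregation, matching the "all of their work is disregarded" behavior in the quote.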
