Based on the data we are already collecting, we will assess whether it is already possible to identify faulty network measurements, in order to establish a baseline.
We will first implement a set of very simple heuristics that operate on the features we already collect and evaluate whether we can detect faulty measurements. Examples include checking the consistency between the probe_cc reported in the measurement and the IP address of the probe submitting it, or looking for other inconsistencies in the measurement itself.
This activity will be carried out mostly by the OONI team (backend developer, CTO and research engineer) and will require allocating time to map out the impact and constraints of a potential solution based on existing OONI data.
Output: documentation outlining our assessment using simple heuristics and current OONI measurements.
jbonisteel changed the title from "Current state assessment document & document of findings from literature review" to "OTFcreds: Activity 1.1 Evaluate current OONI measurements" on Jan 14, 2025
This records the number of measurements per second for which the country code of the ingress IP address matches (or doesn't match) the probe_cc reported in the measurement. The check is performed every time a measurement is submitted to us:
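For illustration, here is a minimal sketch of what such a check could look like, assuming a statsd-style `metrics.incr()` counter and an injected GeoIP lookup; the function and metric names are hypothetical, not the actual backend code:

```python
from typing import Callable

def check_probe_cc_consistency(
    measurement: dict,
    ingress_ip: str,
    lookup_cc: Callable[[str], str],  # e.g. a GeoIP database lookup
    metrics,
) -> bool:
    """Compare the probe_cc reported in the measurement against the
    country code of the ingress IP and bump a match/mismatch counter."""
    ip_cc = lookup_cc(ingress_ip)
    is_consistent = ip_cc == measurement.get("probe_cc")
    # statsd-style counters end up exported as per-second rates by netdata
    metrics.incr("probe_cc_match" if is_consistent else "probe_cc_nomatch")
    return is_consistent
```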
Probably a good starting point is to look a bit into the data we currently have and maybe extend this to include richer metrics.
At the same time we should probably extend the prometheus retention period so that these metrics are kept for a long enough time.
One issue with how these metrics are currently collected via netdata is that we only seem to get the per-second average of emitted events, whereas it would be helpful to collect them as true counter values, so that we can assess the actual volume of probe-inconsistent measurements. Moreover, it would be useful to disaggregate these metrics by probe_cc and probe_asn, so that we can better assess which regions present higher levels of inconsistency.
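As a sketch of what a true counter disaggregated by probe_cc and probe_asn could look like with the prometheus_client library (the metric name and label set are illustrative, and label cardinality would need to be kept in check):

```python
from prometheus_client import Counter

# A monotonic counter instead of a per-second average, labelled by origin.
# Note: many distinct probe_asn values mean high label cardinality, which
# prometheus handles poorly, so the label set may need to be restricted.
PROBE_CC_INCONSISTENT = Counter(
    "ooni_measurements_probe_cc_inconsistent_total",
    "Measurements whose reported probe_cc does not match the ingress IP country",
    ["probe_cc", "probe_asn"],
)

def record_inconsistency(probe_cc: str, probe_asn: str) -> None:
    PROBE_CC_INCONSISTENT.labels(probe_cc=probe_cc, probe_asn=probe_asn).inc()
```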
It's unclear if doing this using standard monitoring metrics exporters like prometheus or netdata is the way to go, which is why some research would be needed.
It would probably be useful to look into opentelemetry too, as its exporters are richer than prometheus and netdata and support attaching arbitrary metadata to exported metrics.
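A rough sketch with the opentelemetry Python SDK, attaching probe_cc/probe_asn as attributes to a counter (the exporter choice and metric names are placeholders):

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# In production this would be an OTLP exporter; Console is only for the sketch.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("ooni.probe-consistency")
inconsistent = meter.create_counter(
    "measurements.probe_cc_inconsistent",
    description="Measurements whose probe_cc does not match the ingress IP country",
)

# Arbitrary metadata travels with each data point as attributes.
inconsistent.add(1, {"probe_cc": "IT", "probe_asn": "AS12345"})
```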
It might be worth doing experimentation in relation to this as part of the work related to porting probe-services to ECS deployment.