With the deployment of a Sepolia fleet node, we'll need to prepare a Grafana dashboard that offers a set of metrics comparable to our nimbus-eth2 nodes:
https://metrics.status.im/d/pgeNfj2Wz23/nimbus-fleet-testnets?orgId=1&refresh=15m
For further inspiration, we can look at the metrics exposed by other execution layer clients. In case there is any standardization of the metric names across clients, we should adopt the standard as well. The developed Grafana dashboard has to be published in the repository for easier local testing through the nimbus-eth1/eth2 simulation scripts. See the Grafana files in the nimbus-eth2 repo as an example.
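To spot naming differences before building the shared dashboard, it can help to diff the metric names exposed on each client's Prometheus endpoint. Below is a minimal sketch in Python; the endpoint URLs and the `diff_metrics` helper are illustrative assumptions, not part of any client's API.

```python
# Sketch: extract and compare metric names from two Prometheus
# text-exposition payloads (e.g. fetched from each client's /metrics
# endpoint), to find metrics that only one client exposes.
from urllib.request import urlopen  # for fetching live endpoints


def metric_names(exposition_text: str) -> set[str]:
    """Collect metric names from Prometheus text exposition format."""
    names = set()
    for line in exposition_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comment lines
        # A sample line looks like: name{labels} value [timestamp]
        names.add(line.split("{")[0].split(" ")[0])
    return names


def diff_metrics(a: str, b: str) -> tuple[set[str], set[str]]:
    """Return (names only in a, names only in b)."""
    na, nb = metric_names(a), metric_names(b)
    return na - nb, nb - na


# Hypothetical usage against two locally running clients
# (ports are assumptions):
#   text_a = urlopen("http://localhost:9100/metrics").read().decode()
#   text_b = urlopen("http://localhost:9200/metrics").read().decode()
#   only_a, only_b = diff_metrics(text_a, text_b)
```

Running such a diff against each client in the simulation would make any de facto naming standard visible, so the dashboard can adopt it.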
The logs produced by nimbus-eth1 during the simulation should be compared to those produced by other clients, and the user experience should be improved by increasing the signal-to-noise ratio of our logs. The nodes on the fleet will run