Production hosting is managed by the Shields ops team:
| Component | Subcomponent | People with access |
| --- | --- | --- |
| shields-production-us | Account owner | @paulmelnikow |
| shields-production-us | Full access | @calebcartwright, @chris48s, @paulmelnikow, @PyvesB |
| shields-production-us | Access management | @calebcartwright, @chris48s, @paulmelnikow, @PyvesB |
| Compose.io Redis | Account owner | @paulmelnikow |
| Compose.io Redis | Account access | @paulmelnikow |
| Compose.io Redis | Database connection credentials | @calebcartwright, @chris48s, @paulmelnikow, @PyvesB |
| Zeit Now | Team owner | @paulmelnikow |
| Zeit Now | Team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
| Raster server | Full access as team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
| shields-server.com redirector | Full access as team members | @paulmelnikow, @chris48s, @calebcartwright, @platan |
| Cloudflare (CDN) | Account owner | @espadrine |
| Cloudflare (CDN) | Access management | @espadrine |
| Cloudflare (CDN) | Admin access | @calebcartwright, @chris48s, @espadrine, @paulmelnikow, @PyvesB |
| Twitch | OAuth app | @PyvesB |
| Discord | OAuth app | @PyvesB |
| YouTube | Account owner | @PyvesB |
| OpenStreetMap (for Wheelmap) | Account owner | @paulmelnikow |
| DNS | Account owner | @olivierlacan |
| DNS | Read-only account access | @espadrine, @paulmelnikow, @chris48s |
| Sentry | Error reports | @espadrine, @paulmelnikow |
| Metrics server | Owner | @platan |
| UptimeRobot | Account owner | @paulmelnikow |
| More metrics | Owner | @RedSparr0w |
Shields has mercifully little persistent state:
- The GitHub tokens we collect are saved on each server in a cloud Redis database (see the sketch after this list). They can also be fetched from the GitHub auth admin endpoint for debugging.
- The server keeps the regular-update cache in memory. It is neither persisted nor inspectable.
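For illustration, here is a minimal sketch of the token-storage pattern, assuming the `ioredis` client and a Redis set named `githubUserTokens`; both names are assumptions, not the actual Shields implementation.

```js
// Illustrative sketch only: persisting GitHub tokens in Redis.
// `ioredis` and the set name `githubUserTokens` are assumptions,
// not the actual Shields implementation.
const Redis = require('ioredis')
const redis = new Redis(process.env.REDIS_URL)

// Add a collected token to the shared pool.
async function addToken(token) {
  await redis.sadd('githubUserTokens', token)
}

// Load the whole pool, e.g. at server startup or for debugging.
async function allTokens() {
  return redis.smembers('githubUserTokens')
}
```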
To bootstrap the configuration process, the script that starts the server sets a single environment variable:
```
NODE_CONFIG_ENV=shields-io-production
```
With that variable set, the server (using [config](https://github.com/lorenwest/node-config)) reads these files:

- `local-shields-io-production.yml`. This file contains secrets which are checked in with a deploy commit.
- `shields-io-production.yml`. This file contains non-secrets which are checked in to the main repo.
- `default.yml`. This file contains defaults.
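node-config merges these files key by key, with the more specific files taking precedence. A minimal sketch of reading the merged result follows; the `sentryDsn` key name is an illustrative assumption.

```js
// With NODE_CONFIG_ENV=shields-io-production, the `config` package
// merges, in increasing order of precedence:
//   default.yml
//   shields-io-production.yml
//   local-shields-io-production.yml
const config = require('config')

// `sentryDsn` is an illustrative key name: a value set in
// local-shields-io-production.yml would win over the same key
// in default.yml.
console.log(config.get('sentryDsn'))
```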
Sitting in front of the three servers is a Cloudflare Free account which provides several services:

- Global CDN, caching, and SSL gateway for `img.shields.io` and `shields.io`
- Analytics through the Cloudflare dashboard
- DNS resolution for `shields.io` (and subdomains)
Cloudflare is configured to respect the servers' cache headers.
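To illustrate what respecting the servers' cache headers means in practice, here is a hedged sketch of an origin response; the Express app and the 300-second max-age are assumptions, not Shields' actual values.

```js
// Illustrative only: when the origin sends a Cache-Control header and
// Cloudflare is configured to respect origin headers, the CDN caches
// the response for up to the stated max-age. The framework (Express)
// and the 300-second value are assumptions.
const express = require('express')
const app = express()

app.get('/badge', (req, res) => {
  res.set('Cache-Control', 'public, max-age=300')
  res.type('svg').send('<svg xmlns="http://www.w3.org/2000/svg"/>')
})

app.listen(3000)
```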
The raster server `raster.shields.io` (a.k.a. the rasterizing proxy) is hosted on Zeit Now. It's managed in the svg-to-image-proxy repo.
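Conceptually, the proxy fetches the requested SVG badge and re-encodes it as a PNG. Here is a hypothetical sketch of that flow using `node-fetch` and `sharp`; the real implementation in the svg-to-image-proxy repo may differ.

```js
// Hypothetical sketch of a rasterizing proxy: fetch the SVG badge from
// the badge server and return it re-encoded as PNG. `node-fetch` and
// `sharp` are assumptions; see the svg-to-image-proxy repo for the
// real implementation.
const express = require('express')
const fetch = require('node-fetch')
const sharp = require('sharp')

const app = express()

app.get('/*', async (req, res) => {
  const upstream = await fetch(`https://img.shields.io${req.url}`)
  const svg = await upstream.buffer()
  const png = await sharp(svg).png().toBuffer()
  res.type('png').send(png)
})

app.listen(3000)
```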
Both the badge server and frontend are served from Heroku.

After merging a commit to master, Heroku should create a staging deploy. Check this has deployed correctly in the `shields-staging` pipeline and review http://shields-staging.herokuapp.com/. If we're happy with it, "promote to production". This will deploy what's on staging to the `shields-production-eu` and `shields-production-us` pipelines.
DNS is registered with DNSimple.
Logs can be retrieved from Heroku.
Error reporting is one of the most useful tools we have for monitoring the server. It's generously donated by Sentry. We bundle `raven` into the application, and the Sentry DSN is configured via `local-shields-io-production.yml` (see documentation).
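For reference, a minimal sketch of wiring the legacy `raven` client to a DSN read from config; the `sentryDsn` key name is an assumption, not necessarily the one Shields uses.

```js
// Minimal sketch: install the legacy `raven` client with a DSN read
// via node-config. The key name `sentryDsn` is an assumption.
const Raven = require('raven')
const config = require('config')

Raven.config(config.get('sentryDsn')).install()
```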
The canonical and only recommended domain for badge URLs is `img.shields.io`. Currently it is possible to request badges on both `img.shields.io` and `shields.io`, i.e. https://img.shields.io/badge/build-passing-brightgreen and https://shields.io/badge/build-passing-brightgreen will both work. However:

- We never show or generate the `img.`-less URL format on https://shields.io/
- We make no guarantees about the `img.`-less URL format. At some future point we may remove the ability to serve badges on `shields.io` (without `img.`) without any warning.

`img.shields.io` should always be used for badge URLs.
Overall server performance and requests by service are monitored using Prometheus and Grafana.
Request performance is monitored in three places:

- Status (using UptimeRobot)
- Server metrics using Prometheus and Grafana (a sketch of per-service request counting follows this list)
- @RedSparr0w's monitor which posts notifications to a private #monitor chat room
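As a sketch of how per-service request counts can be exposed for a Prometheus scrape, here is an illustrative counter using `prom-client`; the metric name and label are assumptions, not the actual Shields metrics.

```js
// Illustrative sketch: a per-service request counter that Prometheus
// can scrape. The metric name and label are assumptions.
const client = require('prom-client')

const requestCounter = new client.Counter({
  name: 'service_requests_total',
  help: 'Total requests handled, labeled by service',
  labelNames: ['service'],
})

// Increment once per handled request, e.g.:
requestCounter.inc({ service: 'github' })
```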